Cancer is a leading cause of death globally, and early detection is crucial for better outcomes. This research aims to improve Region of Interest (ROI) segmentation and feature extraction in medical image analysis using radiomics techniques with 3D Slicer, PyRadiomics, and Python. Dimension-reduction methods, including PCA, K-means, t-SNE, ISOMAP, and Hierarchical Clustering, were applied to high-dimensional features to enhance interpretability and efficiency. The study assessed the ability of the reduced feature set to predict T-staging, an essential component of the TNM system for cancer diagnosis. Multinomial logistic regression models were developed and evaluated using MSE, AIC, BIC, and the deviance test. The dataset consisted of CT and PET-CT DICOM images from 131 lung cancer patients. Results showed that PCA identified 14 features, Hierarchical Clustering 17, t-SNE 58, and ISOMAP 40, with texture-based features being the most critical. This study highlights the potential of integrating radiomics and unsupervised learning techniques to enhance cancer prediction from medical images.
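A minimal sketch of the kind of pipeline described here, assuming paired image/mask files readable by PyRadiomics; the file names are hypothetical and this is illustrative, not the authors' code:

```python
# Sketch: radiomic feature extraction followed by PCA-based dimension reduction.
import pandas as pd
from radiomics import featureextractor
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

extractor = featureextractor.RadiomicsFeatureExtractor()

cases = [("pat001_ct.nii.gz", "pat001_roi.nii.gz"),
         ("pat002_ct.nii.gz", "pat002_roi.nii.gz")]  # hypothetical (image, mask) pairs

rows = []
for image_path, mask_path in cases:
    result = extractor.execute(image_path, mask_path)
    # Keep numeric feature values, drop diagnostic metadata entries.
    rows.append({k: v for k, v in result.items() if not k.startswith("diagnostics")})

features = pd.DataFrame(rows).apply(pd.to_numeric, errors="coerce")

# Standardize, then keep enough components to explain 95% of the variance.
X = StandardScaler().fit_transform(features.fillna(0))
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(pca.n_components_, "components retained")
```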
An Augmentation in the Diagnostic Potency of Breast Cancer through A Deep Learning Cloud-Based AI Framework to Compute Tumor Malignancy & Risk
Agarwal, O
International Research Journal of Innovations in Engineering and Technology (IRJIET), 2019. Journal Article, cited 0 times
CBIS-DDSM
This research project focuses on developing a web-based multi-platform solution for augmenting prognostic strategies to diagnose breast cancer (BC) from a variety of tests, including histology, mammography, cytopathology, and fine-needle aspiration cytology, all in an automated fashion. The application utilizes tensor-based data representations and deep learning architectures to produce optimized models for the prediction of novel instances against each of these medical tests. The system has been designed so that all of its computation can be integrated seamlessly into a clinical setting, without posing any disruption to a clinician’s productivity or workflow, but rather enhancing their capabilities. This software can make the diagnostic process automated, standardized, faster, and even more accurate than current benchmarks achieved by both pathologists and radiologists, which makes it invaluable from a clinical standpoint for making well-informed diagnostic decisions with nominal resources.
Adaptive Multi-Column Deep Neural Networks with Application to Robust Image Denoising
Agostinelli, Forest
Anderson, Michael R
Lee, Honglak
2013. Conference Proceedings, cited 118 times
Website
Head-Neck Cetuximab
Algorithm Development
Image denoising
Machine Learning
Deep Learning
Stacked sparse denoising auto-encoders (SSDAs) have recently been shown to be successful at removing noise from corrupted images. However, like most denoising techniques, the SSDA is not robust to variation in noise types beyond what it has seen during training. We present the multi-column stacked sparse denoising autoencoder, a novel technique that combines multiple SSDAs into a multi-column SSDA (MC-SSDA) by merging the outputs of each SSDA. We eliminate the need to determine the type of noise, let alone its statistics, at test time. We show that good denoising performance can be achieved with a single system on a variety of different noise types, including ones not seen in the training set. Additionally, we experimentally demonstrate the efficacy of MC-SSDA denoising by achieving MNIST digit error rates on denoised images close to those of the uncorrupted images.
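A toy PyTorch sketch of the multi-column idea: several small denoising autoencoders, each trained on a different noise type, combined at test time. For simplicity the columns are merged by averaging, whereas the paper learns the combination; all sizes and loops here are illustrative:

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_pixels=784, n_hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_pixels, n_hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_pixels), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def add_gaussian(x, std):
    return (x + std * torch.randn_like(x)).clamp(0, 1)

def add_salt_pepper(x, p):
    mask = torch.rand_like(x)
    return torch.where(mask < p / 2, torch.zeros_like(x),
                       torch.where(mask > 1 - p / 2, torch.ones_like(x), x))

noises = [lambda x: add_gaussian(x, 0.3), lambda x: add_salt_pepper(x, 0.2)]
columns = [DenoisingAE() for _ in noises]
clean = torch.rand(64, 784)  # stand-in for a batch of flattened images

# Each column is trained to undo one specific corruption.
for column, noise in zip(columns, noises):
    opt = torch.optim.Adam(column.parameters(), lr=1e-3)
    for _ in range(100):  # toy training loop
        opt.zero_grad()
        loss = nn.functional.mse_loss(column(noise(clean)), clean)
        loss.backward()
        opt.step()

# At test time, no knowledge of the corrupting noise type is needed.
with torch.no_grad():
    test = add_gaussian(torch.rand(8, 784), 0.3)
    denoised = torch.stack([c(test) for c in columns]).mean(dim=0)
```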
FEATURE EXTRACTION OF LUNG CANCER USING IMAGE ANALYSIS TECHNIQUES
ALAYUE, L.T.
GOSHU, B.S.
TAJU, ENDRIS
Romanian Journal of Biophysics, 2022. Journal Article, cited 0 times
Website
TCGA-LUSC
Computed Tomography (CT)
Lung Cancer
Computer Aided Detection (CADe)
MATLAB
Lung cancer is one of the most life-threatening diseases. It is a medical problem that needs accurate diagnosis and timely treatment by healthcare professionals. Although CT is preferred over other imaging modalities, visual interpretation of CT scan images may be subject to error and can cause a delay in lung cancer detection. Therefore, image processing techniques are widely used for early-stage detection of lung tumors. This study was conducted to perform pre-processing, segmentation, and feature extraction of lung CT images using image processing techniques. We used the MATLAB programming language to devise a stepwise approach that included image acquisition, pre-processing, segmentation, and feature extraction. A total of 14 lung CT scan images from patients in the age group of 55–75 years were downloaded from an open-access repository. The analyzed images were grayscale, 8-bit, with resolutions ranging from 151 × 213 to 721 × 900 pixels, in Digital Imaging and Communications in Medicine (DICOM) format. In the pre-processing stage, a median filter was used to remove noise from the original image, since it preserves the edges of the image, whereas segmentation was done through edge detection and threshold analysis. The results show that solid tumors were detected in three CT images corresponding to patients aged between 71 and 75 years old. Our study indicates that image processing plays a significant role in lung cancer recognition and early-stage treatment. Health professionals need to work closely with medical physicists to improve the accuracy of diagnosis.
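An analogous pipeline can be sketched in Python (the study itself used MATLAB); the random array below stands in for a real CT slice:

```python
# Sketch: median-filter denoising, Otsu thresholding, edge detection, and
# region feature extraction on a lung CT slice.
import numpy as np
from scipy.ndimage import median_filter
from skimage import feature, filters, measure

ct_slice = np.random.rand(512, 512)  # stand-in for one grayscale CT slice

# Pre-processing: a median filter removes speckle noise while preserving edges.
denoised = median_filter(ct_slice, size=3)

# Segmentation: a global Otsu threshold separates dense tissue from background.
binary = denoised > filters.threshold_otsu(denoised)

# Edge detection, as in the edge-based segmentation step.
edges = feature.canny(denoised, sigma=2.0)

# Feature extraction: area, perimeter, eccentricity of each candidate region.
for region in measure.regionprops(measure.label(binary)):
    if region.area > 100:
        print(region.area, region.perimeter, region.eccentricity)
```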
Self-organizing Approach to Learn a Level-set Function for Object Segmentation in Complex Background Environments
Boundary extraction for object region segmentation is one of the most challenging tasks in image processing and computer vision. The large variation in the appearance of the object and the background in a typical image degrades the performance of existing segmentation algorithms. One of the goals of computer vision studies is to produce algorithms that segment object regions and produce accurate object boundaries that can be utilized in feature extraction and classification.

This dissertation research considers the incorporation of prior knowledge of the intensity/color of objects of interest within a segmentation framework to enhance the extraction of object regions and boundaries of targets in unconstrained environments. The information about the intensity/color of the object of interest is taken from small patches used as seeds to train a neural network. The main challenge is accounting for the projection transformation between the limited amount of prior information and the appearance of the real object of interest in the testing data. We address this problem with a Self-Organizing Map (SOM), an unsupervised learning neural network. Segmentation is achieved by constructing a locally fitted image level-set cost function in which the dynamic variable is a Best Matching Unit (BMU) coming from the SOM map.

The proposed method is demonstrated on the challenging PASCAL 2011 dataset, in which images contain objects with variations in illumination, shadows, occlusion, and clutter. In addition, our method is tested on different types of imagery, including thermal, hyperspectral, and medical imagery. Metrics illustrate the effectiveness and accuracy of the proposed algorithm in improving the efficiency of boundary extraction and object region detection.

In order to reduce computational time, a lattice Boltzmann method (LBM) convergence criterion is used along with the proposed self-organized active contour model for faster and effective segmentation. The lattice Boltzmann method is utilized to evolve the level-set function rapidly and terminate the evolution of the curve at the most optimal region. Experiments performed on our test datasets show promising results in terms of time and quality of the segmentation when compared to other state-of-the-art learning-based active contour models. Our method is more than 53% faster than other state-of-the-art methods. Research is in progress to employ a Time Adaptive Self-Organizing Map (TASOM) for improved segmentation and to utilize the parallelization property of the LBM to achieve real-time segmentation.
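A toy NumPy sketch of the SOM/BMU mechanism underlying the method, with illustrative grid size and training schedule (not the dissertation's implementation):

```python
# Learn intensity/color prototypes from seed patches, then use the best
# matching unit (BMU) distance as an object/background cue for a pixel.
import numpy as np

rng = np.random.default_rng(0)
seeds = rng.random((200, 3))      # RGB values sampled from object seed patches
weights = rng.random((5, 5, 3))   # 5x5 SOM grid of color prototypes

def bmu(grid, x):
    d = np.linalg.norm(grid - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

sigma0, lr0, n_iter = 2.0, 0.5, 2000
for t in range(n_iter):
    x = seeds[rng.integers(len(seeds))]
    bi, bj = bmu(weights, x)
    sigma = sigma0 * np.exp(-t / n_iter)   # shrinking neighborhood
    lr = lr0 * np.exp(-t / n_iter)         # decaying learning rate
    ii, jj = np.meshgrid(np.arange(5), np.arange(5), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

# Per-pixel cue: distance of the pixel's color to its BMU prototype.
pixel = rng.random(3)
bi, bj = bmu(weights, pixel)
print("BMU distance:", np.linalg.norm(weights[bi, bj] - pixel))
```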
Multi-modal Multi-temporal Brain Tumor Segmentation, Growth Analysis and Texture-based Classification
Brain tumor analysis is an active field of research, which has received a lot of attention from both the medical and the technical communities in the past decades. The purpose of this thesis is to investigate brain tumor segmentation, growth analysis, and tumor classification based on multi-modal magnetic resonance (MR) image datasets of low- and high-grade glioma, making use of computer vision and machine learning methodologies. Brain tumor segmentation involves the delineation of tumorous structures, such as edema, active tumor, and necrotic tumor core, and healthy brain tissues, often categorized into gray matter, white matter, and cerebrospinal fluid. Deep learning frameworks have proven to be among the most accurate brain tumor segmentation techniques, performing particularly well when large, accurately annotated image datasets are available. A first project is designed to build a more flexible model, which allows for intuitive semi-automated user interaction, is less dependent on training data, and can handle missing MR modalities. The framework is based on a Bayesian network with hidden variables optimized by the expectation-maximization algorithm, and is tailored to handle non-Gaussian multivariate distributions using the concept of Gaussian copulas. To generate reliable priors for the generative probabilistic model and to spatially regularize the segmentation results, it is extended with an initialization and a post-processing module, both based on supervoxels classified by random forests. Brain tumor segmentation allows one to assess tumor volumetry over time, which is important to identify disease progression (tumor regrowth) after therapy. In a second project, a dataset of temporal MR sequences is analyzed. To that end, brain tumor segmentation and brain tumor growth assessment are unified within a single framework using a conditional random field (CRF). The CRF extends over the temporal patient datasets and includes directed links with infinite weight in order to incorporate growth or shrinkage constraints. The model is shown to obtain temporally coherent tumor segmentations and aids in estimating the likelihood of disease progression after therapy. Recent studies classify brain tumors based on their genotypic parameters, which are reported to have an important impact on the prognosis and the therapy of patients. A third project investigates whether the genetic profile of glioma can be predicted based on the MR images only, which would eliminate the need to take biopsies. A multi-modal medical image classification framework is built, classifying glioma into three genetic classes based on DNA methylation status. The framework makes use of short local image descriptors as well as deep-learned features acquired by denoising auto-encoders to generate meaningful image features. The framework is successfully validated and shown to obtain high accuracies, even though the same image-based classification task is hardly possible for medical experts.
Application of Fuzzy c-means and Neural networks to categorize tumor affected breast MR Images
Anand, Shruthi
Vinod, Viji
Rampure, Anand
International Journal of Applied Engineering Research, 2015. Journal Article, cited 4 times
Website
The image semantic segmentation challenge consists of classifying each pixel of an image (or just a subset of them) into an instance, where each instance (or category) corresponds to an object. This task is part of the concept of scene understanding, or better explaining the global context of an image. In the medical image analysis domain, image segmentation can be used for image-guided interventions, radiotherapy, or improved radiological diagnostics. Following a comprehensive review of state-of-the-art deep learning-based medical and non-medical image segmentation solutions, we make the following contributions. A typical deep learning-based (medical) image segmentation pipeline includes designing layers (A), designing an architecture (B), and defining a loss function (C). A clean, modified (D), or adversarially perturbed (E) image is fed into a model (consisting of layers and a loss function) to predict a segmentation mask for scene understanding and related tasks. In cases where the number of segmentation annotations is limited, weakly supervised approaches (F) are leveraged. For applications where further analysis is needed, e.g., predicting volumes and object burden, the segmentation mask is fed into a post-processing step (G). In this thesis, we tackle each of the steps (A-G). I) As for steps (A and E), we studied the effect of adversarial perturbation on image segmentation models and proposed a method that improves segmentation performance via a non-linear radial basis convolutional feature mapping, learning a Mahalanobis-like distance function on both adversarially perturbed and unperturbed images. Our method maps the convolutional features onto a linearly well-separated manifold, which prevents small adversarial perturbations from forcing a sample to cross the decision boundary. II) As for step (B), we propose light, learnable skip connections which learn to first select the most discriminative channels and then aggregate the selected ones as a single channel attending to the most discriminative regions of the input. Compared to heavy classical skip connections, our method reduces computation cost and memory usage while improving segmentation performance. III) As for step (C), we examined the critical choice of a loss function in order to handle the notorious imbalance problem that plagues both the input and output of a learning model. In order to tackle both types of imbalance during training and inference, we introduce a new curriculum learning-based loss function. Specifically, we leverage the Dice similarity coefficient to deter model parameters from being held at bad local minima while gradually learning better model parameters by penalizing false positives/negatives using a cross-entropy term. IV) As for step (D), we propose a new segmentation performance-boosting paradigm that relies on optimally modifying the network's input instead of the network itself. In particular, we leverage the gradients of a trained segmentation network with respect to the input to transfer the input into a space where the segmentation accuracy improves. V) As for step (F), we propose a weakly supervised image segmentation model with a learned spatial masking mechanism to filter out irrelevant background signals from attention maps. The proposed method minimizes mutual information between a masked variational representation and the input while maximizing the information between the masked representation and class labels. VI) Although many semi-automatic segmentation-based methods have been developed, as for step (G), we introduce a method that completely eliminates the segmentation step and directly estimates the volume and activity of the lesions from positron emission tomography scans.
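A hedged PyTorch sketch of the idea behind step (C): a loss mixing cross-entropy with a soft Dice term, where the mixing weight can be scheduled over training as a simple curriculum. The exact form and schedule are illustrative, not the thesis's formulation:

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, ce_weight=0.5, eps=1e-6):
    """logits: (N, C, H, W); target: (N, H, W) integer class labels."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, logits.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(0, 2, 3))
    denom = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
    dice = 1 - ((2 * inter + eps) / (denom + eps)).mean()
    return ce_weight * ce + (1 - ce_weight) * dice

logits = torch.randn(2, 3, 64, 64, requires_grad=True)
target = torch.randint(0, 3, (2, 64, 64))
# e.g. anneal ce_weight from 1.0 toward 0.0 as training progresses
loss = dice_ce_loss(logits, target, ce_weight=0.8)
loss.backward()
```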
Computer Aided Detection Scheme To Improve The Prognosis Assessment Of Early Stage Lung Cancer Patients
Athira, KV
Nithin, SS
Computer, 2018. Journal Article, cited 0 times
Website
Radiomics
non-small cell lung cancer
Machine learning
We developed a computer-aided detection scheme to predict the recurrence risk of stage 1 non-small cell lung cancer (NSCLC) in patients after surgery. Using chest computed tomography (CT) images taken before surgery, this system automatically segments the tumor seen on CT images and extracts tumor-related morphological and texture-based image features. We trained a naïve Bayesian network classifier using six image features and an ANN classifier using two genomic biomarkers, namely protein expression of the excision repair cross-complementing 1 gene (ERCC1) and of a regulatory subunit of ribonucleotide reductase (RRM1), to predict the cancer recurrence risk, respectively. We developed a new approach with high potential to assist doctors in more effectively managing stage 1 NSCLC patients to reduce the cancer recurrence risk.
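A minimal scikit-learn sketch of the image-feature classifier stage, using GaussianNB as a stand-in for the study's naïve Bayesian network, with random placeholders for the six features:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.random((120, 6))        # six morphological/texture features per patient
y = rng.integers(0, 2, 120)     # 1 = recurrence, 0 = no recurrence

model = GaussianNB()
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```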
BIOMEDICAL IMAGE RETRIEVAL USING LBWP
Babu, Joyce Sarah
Mathew, Soumya
Simon, Rini
International Research Journal of Engineering and Technology, 2017. Journal Article, cited 0 times
Website
Glioblastoma (GBM) is an aggressive cancer with an average 5-year survival rate of about 5%. Following treatment with surgery, radiation, and chemotherapy, diagnosing tumor recurrence requires serial magnetic resonance imaging (MRI) scans. Infiltrative tumor cells beyond gadolinium enhancement on T1-weighted MRI are difficult to detect. This study therefore aims to improve tumor detection beyond traditional tumor margins. To accomplish this, a neural network model was trained to classify tissue samples as ‘tumor’ or ‘not tumor’. This model was then used to classify thousands of tiles from histology samples acquired at autopsy with known MRI locations on the patient’s final clinical MRI scan. This combined radiological-pathological (rad-path) dataset was then treated as ground truth to train a second model for predicting tumor presence from MRI alone. Predictive maps were created for seven patients left out of the training steps, and tissue samples were tested to determine the model’s accuracy. The final model produced a receiver operating characteristic (ROC) area under the curve (AUC) of 0.70. This study demonstrates a new method for detecting infiltrative tumor beyond conventional radiologist-defined margins based on neural networks applied to rad-path datasets in glioblastoma.
Machine Intelligence for Advanced Medical Data Analysis: Manifold Learning Approach
In the current work, linear and non-linear manifold learning techniques, specifically Principal Component Analysis (PCA) and Laplacian Eigenmaps, are studied in detail, and their applications in medical image and shape analysis are investigated.

In the first contribution, a manifold learning-based multi-modal image registration technique is developed, which results in a unified intensity system through intensity transformation between the reference and sensed images. The transformation eliminates intensity variations in multi-modal medical scans and hence facilitates employing well-studied monomodal registration techniques. The method can be used for registering multi-modal images with full and partial data.

Next, a manifold learning-based scale-invariant global shape descriptor is introduced. The proposed descriptor benefits from the capability of Laplacian Eigenmaps to deal with high-dimensional data by introducing an exponential weighting scheme. It eliminates the limitations tied to the well-known cotangent weighting scheme, namely dependency on triangular mesh representation and high intra-class quality of 3D models.

Finally, a novel descriptive model for diagnostic classification of pulmonary nodules is presented. The descriptive model benefits from structural differences between benign and malignant nodules for automatic and accurate prediction of a candidate nodule. It automatically extracts concise and discriminative features from the 3D surface structure of a nodule using the spectral features studied in the previous work combined with a point cloud-based deep learning network.

Extensive experiments have been conducted and have shown that the proposed algorithms based on manifold learning outperform several state-of-the-art methods. Advanced computational techniques combining manifold learning and deep networks can play a vital role in effective healthcare delivery by providing a framework for several fundamental tasks in image and shape processing, namely registration, classification, and detection of features of interest.
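A short scikit-learn sketch contrasting the two manifold learning techniques studied, PCA and Laplacian Eigenmaps (via SpectralEmbedding), on synthetic non-linear data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
# Synthetic "swiss roll"-like data: non-linear structure embedded in 3-D.
t = 3 * np.pi * (1 + 2 * rng.random(500)) / 2
X = np.column_stack([t * np.cos(t), 20 * rng.random(500), t * np.sin(t)])

linear = PCA(n_components=2).fit_transform(X)                       # linear
nonlinear = SpectralEmbedding(n_components=2,
                              n_neighbors=12).fit_transform(X)      # Laplacian Eigenmaps
print(linear.shape, nonlinear.shape)
```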
Towards High Performing and Reliable Deep Convolutional Neural Network Models for Typically Limited Medical Imaging Datasets
Artificial Intelligence (AI) is “The science and engineering of making intelligent machines, especially intelligent computer programs” [93]. Artificial Intelligence has been applied in a wide range of fields including automobiles, space, robotics, and healthcare. According to recent reports, AI will have a huge impact on increasing the world economy by 2030, and it is expected that the greatest impact will be in the field of healthcare. The global market size of AI in healthcare was estimated at USD 10.4 billion in 2021 and is expected to grow at a high rate from 2022 to 2030 (CAGR of 38.4%) [124]. Applications of AI in healthcare include robot-assisted surgery, disease detection, health monitoring, and automatic medical image analysis. Healthcare organizations are becoming increasingly interested in how artificial intelligence can support better patient care while reducing costs and improving efficiencies.

Deep learning is a subset of AI that is becoming transformative for healthcare. Deep learning offers fast and accurate data analysis and is based on the concept of artificial neural networks to solve complex problems.

In this dissertation, we propose deep learning-based solutions to the problems of limited medical imaging in two clinical contexts: brain tumor prognosis and COVID-19 diagnosis. For brain tumor prognosis, we suggest novel systems for overall survival prediction of glioblastoma patients from small magnetic resonance imaging (MRI) datasets based on ensembles of convolutional neural networks (CNNs). For COVID-19 diagnosis, we reveal one critical problem with CNN-based approaches for predicting COVID-19 from chest X-ray (CXR) imaging: shortcut learning. We then experimentally suggest methods to mitigate this problem to build fair, reliable, robust, and transparent deep learning-based clinical decision support systems. We discovered this problem with CNNs using chest X-ray imaging; however, the issue and solutions generally apply to other imaging modalities and recognition problems.
Detection of Motion Artifacts in Thoracic CT Scans
The analysis of a lung CT scan can be complicated by the presence of image artifacts such as cardiac motion, respiratory motion, beam hardening artifacts, and so on. In this project, we built a deep learning-based model for the detection of these motion artifacts. Using biomedical image segmentation models, we trained the model on lung CT scans from the LIDC dataset. The developed model is able to identify the regions in the scan which are affected by motion by segmenting the image. Further, it is able to separate normal (or easy-to-analyze) CT scans from CT scans that may yield incorrect quantitative analysis, even when examples of image artifacts or low-quality scans are scarce. In addition, the model evaluates a quality score for the scan based on the amount of artifacts detected, which could compromise its reliability for further diagnosis of disease or disease progression. We used two main approaches during experimentation - 2D slice-based approaches and 2D patch-based approaches - of which the patch-based approaches yielded the final model. The final model gave an AUC of 0.814 in the ROC analysis of the evaluation study conducted. Discussions of the approaches and findings of the final model are provided, and future directions are proposed.
COMPARISON OF A PATIENT-SPECIFIC COMPUTED TOMOGRAPHY ORGAN DOSE SOFTWARE WITH COMMERCIAL PHANTOM-BASED TOOLS
Computed tomography imaging is an important diagnostic tool but carries some risk due to the radiation dose used to form the image. Currently, CT scanners report a measure of radiation dose for each scan that reflects the radiation emitted by the scanner, not the radiation dose absorbed by the patient. The radiation dose absorbed by organs, known as organ dose, is a more relevant metric that is important for risk assessment and CT protocol optimization. Tools for rapid organ-dose estimation are available but are limited to using general patient models. These publicly available tools are unable to model patient-specific anatomy and positioning within the scanner. To address these limitations, the Personalized Rapid Estimator of Dose in Computed Tomography (PREDICT) dosimetry tool was recently developed. This study validated the organ doses estimated by PREDICT against ground truth values. The patient-specific PREDICT performance was also compared to two publicly available phantom-based methods: VirtualDose and NCICT. The PREDICT tool demonstrated lower organ dose errors compared to the phantom-based methods, demonstrating the benefit of patient-specific modeling. This study also developed a method to extract the walls of cavity organs, such as the bladder and the intestines, and quantified the effect of organ wall extraction on organ dose. The study found that exogenous material within a cavity organ can affect the organ dose estimate, demonstrating the importance of boundary wall extraction in dosimetry tools such as PREDICT.
High Capacity and Reversible Fragile Watermarking Method for Medical Image Authentication and Patient Data Hiding
Bouarroudj, Riadh
Bellala, Fatma Zohra
Souami, Feryel
Journal of Medical Systems, 2024. Journal Article, cited 0 times
Website
TCGA-LUAD
Selección de un algoritmo para la clasificación de Nódulos Pulmonares Solitarios [Selection of an algorithm for the classification of solitary pulmonary nodules]
Castro, Arelys Rivero
Correa, Luis Manuel Cruz
Lezcano, Jeffrey Artiles
Revista Cubana de Informática Médica, 2016. Journal Article, cited 0 times
Website
LIDC-IDRI
Optimizations for Deep Learning-Based CT Image Enhancement
Computed tomography (CT) combined with deep learning (DL) has recently shown great potential in biomedical imaging. Complex DL models with varying architectures inspired by the human brain are improving imaging software and aiding diagnosis. However, the accuracy of these DL models heavily relies on the datasets used for training, which often contain low-quality CT images from low-dose CT (LDCT) scans. Moreover, in contrast to the neural architecture of the human brain, today's DL models are dense and complex, resulting in a significant computational footprint. Therefore, in this work, we propose sparse optimizations to minimize the complexity of DL models and leverage architecture-aware optimizations to reduce their total training time. To that end, we leverage a DL model called DenseNet and Deconvolution Network (DDNet), which enhances LDCT chest images into high-quality (HQ) ones but requires many hours to train. To further improve the quality of the final HQ images, we first modified DDNet's architecture with a more robust multi-level VGG (ML-VGG) loss function to achieve state-of-the-art CT image enhancement. However, improving the loss function results in increased computational cost. Hence, we introduce sparse optimizations to reduce the complexity of the improved DL model and then propose architecture-aware optimizations to efficiently utilize the underlying computing hardware and reduce the overall training time. Finally, we evaluate our techniques for performance and accuracy using state-of-the-art hardware resources.
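A hedged PyTorch sketch of a multi-level VGG perceptual loss of the kind described: feature maps of the enhanced and reference images are compared at several VGG-16 depths. The layer taps and equal weighting are illustrative, not the paper's exact ML-VGG configuration (grayscale CT slices would be repeated across the three input channels):

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

vgg = vgg16(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

TAPS = {3, 8, 15, 22}  # relu1_2, relu2_2, relu3_3, relu4_3

def ml_vgg_loss(pred, ref):
    """pred, ref: (N, 3, H, W) images in [0, 1]."""
    loss, x, y = 0.0, pred, ref
    for i, layer in enumerate(vgg):
        x, y = layer(x), layer(y)
        if i in TAPS:                     # compare features at several depths
            loss = loss + F.mse_loss(x, y)
        if i >= max(TAPS):
            break
    return loss

pred = torch.rand(1, 3, 128, 128, requires_grad=True)
ref = torch.rand(1, 3, 128, 128)
ml_vgg_loss(pred, ref).backward()
```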
MRI prostate cancer radiomics: Assessment of effectiveness and perspectives
Concussion is a traumatic brain injury usually caused by a direct or indirect blow to the head that affects brain function. The maximum mechanical impedance of the brain tissue occurs at 450±50 Hz and may be affected by the skull resonant frequencies. After an impact to the head, vibration resonance of the skull damages the underlying cortex. The skull deforms and vibrates, like a bell, for 3 to 5 milliseconds, bruising the cortex. Furthermore, the deceleration forces the frontal and temporal cortex against the skull, eliminating a layer of cerebrospinal fluid. When the skull vibrates, the force spreads directly to the cortex, with no layer of cerebrospinal fluid to reflect the wave or cushion its force. To date, there has been little research investigating the effect of transient vibration of the skull. Therefore, the overall goal of the proposed research is to gain a better understanding of the role of transient vibration of the skull in concussion. This goal will be achieved by addressing three research objectives. First, an automatic MRI skull and brain segmentation technique is developed. Due to bone's weak magnetic resonance signal, MRI scans struggle to differentiate bone tissue from other structures. One of the most important components for a successful segmentation is high-quality ground truth labels. Therefore, we introduce a deep learning framework for skull segmentation in which the ground truth labels are created from CT imaging using the standard tessellation language (STL). Furthermore, as the brain region will be important for future work, we explore a new initialization concept for the convolutional neural network (CNN) based on orthogonal moments to improve brain segmentation in MRI. Second, a novel 2D and 3D automatic method to align the facial skeleton is introduced. An important aspect of further impact analysis is the ability to precisely simulate the same point of impact on multiple bone models. To perform this task, the skull must be precisely aligned in all anatomical planes. Therefore, we introduce a 2D/3D technique to align the facial skeleton that was initially developed for automatically calculating the craniofacial symmetry midline. In the 2D version, the entire concept of using cephalometric landmarks and manual image grid alignment to construct the training dataset was introduced. This concept was then extended to a 3D version where coronal and transverse planes are aligned using a CNN approach. As the alignment in the sagittal plane is still undefined, a new alignment based on these techniques will be created to align the sagittal plane using the Frankfort plane as a framework. Finally, the resonant frequencies of multiple skulls are assessed to determine how skull resonant frequency vibrations propagate into the brain tissue. After applying material properties and a mesh to the skull, modal analysis is performed to assess the skull's natural frequencies. Finally, theories will be raised regarding the relation between skull geometry, such as shape and thickness, and vibration-related brain tissue injury, which may result in concussive injury.

Summary for Lay Audience: A concussion is a traumatic brain injury usually caused by a direct or indirect blow to the head that affects brain function. As the maximum mechanical impedance of the brain tissue occurs at 450±50 Hz, skull resonant frequencies may play an important role in the propagation of this vibration into the brain tissue. The overall goal of the proposed research is to gain a better understanding of the role of transient vibration of the skull in concussion. This goal will be achieved by addressing three research objectives: I) develop an automatic method to segment/extract the skull and brain from magnetic resonance imaging (MRI), II) create a novel 2D and 3D automatic method to align the facial skeleton, and III) identify the skull resonant frequencies and raise the theory of how these vibrations may propagate into brain tissue. For objective 1, 58 MRI scans and their respective computed tomography (CT) scans were used to create a convolutional neural network framework for skull and brain segmentation in MRI. Moreover, an invariant moment kernel was introduced to improve brain segmentation accuracy in MRI. For objective 2, a 2D and 3D technique for automatically calculating the craniofacial symmetry midline from head CT scans using deep learning techniques was used to precisely align the facial skeleton for future impact analysis. In objective 3, several segmented skulls were tested to identify their natural resonant frequencies. Those with a resonant frequency of 450±50 Hz were selected to improve understanding of how their shape and thickness may help the vibration propagate deeper into the brain tissue. The results from this study will improve our understanding of the role of transient vibration of the skull in concussion.
Feature Extraction In Medical Images by Using Deep Learning Approach
Dara, S
Tumma, P
Eluri, NR
Kancharla, GR
International Journal of Pure and Applied Mathematics, 2018. Journal Article, cited 0 times
Website
TCGA-LUAD
Machine Learning
Deep Learning
Feature Extraction
Impact of GAN-based Lesion-Focused Medical Image Super-Resolution on Radiomic Feature Robustness
Robust machine learning models based on radiomic features might allow for accurate diagnosis, prognosis, and medical decision-making. Unfortunately, the lack of standardized radiomic feature extraction has hampered their clinical use. Since radiomic features tend to be affected by low voxel statistics in regions of interest, increasing the sample size would improve their robustness in clinical studies. Therefore, we propose a Generative Adversarial Network (GAN)-based lesion-focused framework for Computed Tomography (CT) image Super-Resolution (SR); for the lesion (i.e., cancer) patch-focused training, we incorporate Spatial Pyramid Pooling (SPP) into GAN-Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE). At 2× SR, the proposed model achieved better perceptual quality with less blurring than the other considered state-of-the-art SR methods, while producing comparable results at 4× SR. We also evaluated the robustness of our model's radiomic features with respect to quantization on a different lung cancer CT dataset using Principal Component Analysis (PCA). Intriguingly, the most important radiomic features in our PCA-based analysis were the most robust features extracted from the GAN-super-resolved images. These achievements pave the way for the application of GAN-based image super-resolution techniques in radiomics studies for robust biomarker discovery.
COMPUTATIONAL IMAGING AND MULTIOMIC BIOMARKERS FOR PRECISION MEDICINE: CHARACTERIZING HETEROGENEITY IN LUNG CANCER
Lung cancer is the leading cause of cancer deaths and is the third most diagnosed cancer in both men and women in the United States. Non-small cell lung cancer (NSCLC) accounts for 84% of all lung cancer cases. The inherent intra-tumor and inter-tumor heterogeneity in lung tumors has been linked with adverse clinical outcomes. A well-rounded characterization of tumor heterogeneity by personalized biomarkers is needed to develop precision medicine treatment strategies for cancer. Large-scale genome-based characterization poses the disadvantages of high cost and technical complexity. Further, a histopathological sample from a tumor biopsy may not fully represent the structural and functional properties of the entire tumor. Medical imaging is now emerging as a key player in the field of personalized medicine, due to its ability to non-invasively characterize the anatomical and physiological properties of the tumor regions. The studies included in this thesis introduce analytical tools developed through machine learning and bioinformatics, using information from diagnostic images and other “omic” sources to develop computational imaging and multiomic biomarkers that characterize intratumor heterogeneity. A novel radiomic biomarker integrates with PDL1 expression, ECOG status, BMI, and smoking status to enhance the ability to predict progression-free survival in a preliminary cohort of patients with stage 4 NSCLC treated with the first-line anti-PD1/PDL1 checkpoint inhibitor pembrolizumab. This study also showed that mitigation of the heterogeneity introduced by voxel spacing and image acquisition parameters improves the prognostic performance of the radiomic phenotypes. We further performed a detailed investigation of the effects of heterogeneity in image parameters on the reproducibility of the prognostic performance of models built using radiomic biomarkers. The results of this second study indicated that accounting for heterogeneity in image parameters is important to obtain more reproducible prognostic scores, irrespective of image site or modality. In the third study, we developed novel multiomic phenotypes in a larger cohort of patients with stage 4 NSCLC treated with pembrolizumab. These multiomic phenotypes, formed by integrating radiomic, radiological, and pathological information of the patients, enhanced precision in progression-free survival prediction when combined with prognostic clinical variables. To our knowledge, our study is the first to construct a “multiomic” signature for the prognosis of NSCLC patient response to immunotherapy, in contrast to prior radiogenomic approaches that leverage a radiomics signature to identify patient categories based on a genomic biomarker-based classification. In the exploratory fourth study, we evaluated the performance of radiomics analyses of part-solid lung nodules to detect nodule invasiveness using several approaches: radiomics analysis of the presurgical CT scan, delta radiomics over three time points leading up to surgical resection, and nodule volumetry. The best performing model for the prediction of nodule invasiveness was built using a combination of immediate pre-surgical radiomics, delta radiomics, delta volumes, and clinical assessment. The study showed that the combined utilization of clinical, volumetric, and radiomic features may facilitate complex decision making in the management of subsolid lung nodules.
To summarize, the studies included in this thesis demonstrate the value of computational radiomic and multiomic biomarkers in the characterization of lung tumor heterogeneity and have the potential to be utilized in the advancement of precision medicine in oncology.
An introduction to Topological Object Data Analysis
Summary and analysis are important foundations in statistics, but typical methods may prove ineffective at providing thorough summaries of complex object data. Topological data analysis (TDA) (also called topological object data analysis (TODA) when applied to object data) provides additional topological summaries, such as the persistence diagram and persistence landscape, that can be useful in distinguishing distributions underlying data sets. The main tool is persistent homology, which tracks the births and deaths of various homology classes as one steps through a filtered simplicial complex that covers the sample. The persistence diagrams and landscapes can also be used to provide confidence sets for “significant” features and two-sample tests between groups. An example application is given by analyzing mammogram images for patients with benign and malignant masses.
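A small sketch of the persistent homology workflow on a point cloud, assuming the third-party ripser package; the noisy circle is a standard toy example whose diagram shows one long-lived H1 (loop) feature:

```python
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
theta = 2 * np.pi * rng.random(200)
# Noisy circle: H1 should contain one long-lived homology class.
X = (np.column_stack([np.cos(theta), np.sin(theta)])
     + 0.05 * rng.standard_normal((200, 2)))

dgms = ripser(X)["dgms"]           # one persistence diagram per homology degree
h1 = dgms[1]                       # (birth, death) pairs for loops
persistence = h1[:, 1] - h1[:, 0]  # lifetimes; large values = significant loops
print("most persistent H1 feature lives for", persistence.max())
```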
Collaborative learning of joint medical image segmentation tasks from heterogeneous and weakly-annotated data
Convolutional Neural Networks (CNNs) have become the state of the art for most image segmentation tasks, and one would therefore expect them to be able to learn joint tasks, such as brain structure and pathology segmentation. However, the annotated databases required to train CNNs are usually dedicated to a single task, leading to partial annotations (e.g. brain structure or pathology delineation but not both for joint tasks). Moreover, the information required for these tasks may come from distinct magnetic resonance (MR) sequences to emphasise different types of tissue contrast, leading to datasets with different sets of image modalities. Similarly, the scans may have been acquired at different centres, with different MR parameters, leading to differences in resolution and visual appearance among databases (domain shift). Given the large amount of resources, time and expertise required to carefully annotate medical images, it is unlikely that large and fully-annotated databases will become readily available for every joint problem. For this reason, there is a need to develop collaborative approaches that exploit existing heterogeneous and task-specific datasets, as well as weak annotations instead of time-consuming pixel-wise annotations.

In this thesis, I present methods to learn joint medical segmentation tasks from task-specific, domain-shifted, hetero-modal and weakly-annotated datasets. The problem lies at the intersection of several branches of Machine Learning: Multi-Task Learning, Hetero-Modal Learning, Domain Adaptation and Weakly Supervised Learning. First, I introduce a mathematical formulation of a joint segmentation problem under the constraint of missing modalities and partial annotations, into which Domain Adaptation techniques can be directly integrated, and a procedure to optimise it. Secondly, I propose a principled approach to handle missing modalities based on Hetero-Modal Variational Auto-Encoders. Thirdly, I focus on Weakly Supervised Learning techniques and present a novel approach to train deep image segmentation networks using particularly weak train-time annotations: only 4 (2D) or 6 (3D) extreme clicks at the boundary of the objects of interest. The proposed framework connects the extreme points using a new formulation of geodesics that integrates the network outputs, and uses the generated paths for supervision. Fourthly, I introduce a new weakly-supervised Domain Adaptation technique using scribbles on the target domain and formulate it as a cross-domain CRF optimisation problem. Finally, I led the organisation of the first medical segmentation challenge for unsupervised cross-modality domain adaptation (crossMoDA). The benchmark reported in this thesis provides a comprehensive characterisation of cross-modality domain adaptation techniques.

Experiments are performed on brain MR images from patients with different types of brain diseases: gliomas, white matter lesions and vestibular schwannoma. The results demonstrate the broad applicability of the presented frameworks to learn joint segmentation tasks, with the potential to improve brain disease diagnosis and patient management in clinical practice.
3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks
Training deep convolutional neural networks requires a large amount of data to obtain good performance and generalisable results. Transfer learning from datasets such as ImageNet has become important for increasing accuracy and lowering the number of training samples required. However, as of now, there has not been a popular dataset for training on 3D volumetric medical images, mainly due to the time and expert knowledge required to accurately annotate medical images. In this study, we present a method for extracting labels from DICOM metadata that contain information on the appearance of the scans, in order to train a medical-domain 3D convolutional neural network. The labels include imaging modality and sequence, patient orientation and view, presence of contrast agent, scan target and coverage, and slice spacing. We applied our method to extract labels from a large number of cancer imaging datasets from TCIA and trained a medical-domain 3D deep convolutional neural network. We evaluated the effectiveness of our proposed network by transfer learning on a liver segmentation task and found that it achieved superior segmentation performance (DICE=90.0) compared to training from scratch (DICE=41.8). Our proposed network shows promising results for use as a backbone network for transfer learning to other tasks. Our approach, together with the resulting network, can potentially be used to extract features from large-scale unlabelled DICOM datasets.
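A minimal pydicom sketch of the metadata-to-label idea: several of the listed labels can be read directly from DICOM headers. The tag names are standard DICOM keywords; the directory is hypothetical, and real series need per-tag fallbacks, hence the defaults:

```python
from pathlib import Path
import pydicom

def extract_labels(dicom_path):
    # Header-only read: skip pixel data for speed.
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    return {
        "modality": getattr(ds, "Modality", None),            # e.g. "CT", "MR"
        "orientation": getattr(ds, "PatientPosition", None),  # e.g. "HFS"
        "contrast": getattr(ds, "ContrastBolusAgent", None),  # None if absent
        "body_part": getattr(ds, "BodyPartExamined", None),
        "slice_spacing": getattr(ds, "SpacingBetweenSlices",
                                 getattr(ds, "SliceThickness", None)),
    }

labels = [extract_labels(p) for p in Path("series_dir").glob("*.dcm")]  # hypothetical dir
```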
A Content-Based-Image-Retrieval Approach for Medical Image Repositories
Despite recent advances in life sciences and technology, the amount of time and money spent in the drug development process remains drastically inflated. Thus, there is a need to rapidly recognize characteristics that will help identify novel therapies.

First, we address the increased need for drug repurposing, the approach of identifying new indications for approved or investigational drugs. We present a novel drug repurposing method called Creating A Translational Network for Indication Prediction (CATNIP), which relies solely on biological and chemical drug characteristics to identify disease areas for specific drugs and drug classes. This drug-focused approach allows our method to be used for both FDA-approved drugs and investigational drugs. Our method, trained with 2,576 diverse small molecules, is built using easily interpretable features, such as chemical structure and targets, allowing probable drug-disease mechanisms to be discovered from the predictions made. The strength of this approach is demonstrated through a repurposing network that can be utilized to identify drug class candidate opportunities. To treat many of these conditions, a drug compound is orally ingested by a patient. One of the major absorption sites for drugs is the small intestine, and drug properties such as permeability are proven important to maximize treatment efforts. Poor absorption of drug candidates is likely to lead to failure in the drug development process, so we propose an innovative approach to predict the permeability of a drug. The Caco-2 cell model is a standard surrogate for predicting in vitro intestinal permeability. We collected one of the largest experimentally based datasets of Caco-2 values to create a computational model. Using an approach called graph convolutional networks that treats molecules as graphs, we are able to take in a line-notation molecular structure and successfully make predictions about a drug compound's permeability.

Altogether, this work demonstrates how the integration of diverse datasets can aid in addressing the multitude of challenging problems in the field of drug discovery. Computational approaches such as these, which prioritize applicability and interpretability, have strong potential to transform and improve the drug development pipeline.
A COMPUTER AIDED DIAGNOSIS SYSTEM FOR LUNG CANCER DETECTION USING SVM
Computer-aided diagnosis (CAD) is starting to be implemented broadly in the diagnosis and detection of many varieties of abnormalities acquired during various imaging procedures. The main aim of CAD systems is to increase the accuracy and decrease the time of diagnosis, while the general goals of CAD systems are to find the location of nodules and to determine their characteristic features. As lung cancer is one of the most fatal and prevalent cancer types, there have been many studies on the use of CAD systems to detect lung cancer. Yet CAD systems still need considerable development in order to identify the different shapes of nodules, segment the lung, and achieve higher levels of sensitivity, specificity, and accuracy. This challenge is the motivation of this study in implementing a CAD system for lung cancer detection. The study uses the LIDC database, which comprises an image set of documented thoracic CT scans of lung cancer. The presented CAD system consists of CT image reading, image pre-processing, segmentation, feature extraction, and classification steps. To avoid losing important features, the CT images were read in raw DICOM file format. Then, filtration and enhancement techniques were applied as image pre-processing. Otsu's algorithm, edge detection, and morphological operations were applied for segmentation, followed by the feature extraction step. Finally, a support vector machine with a Gaussian RBF kernel, a widely used supervised classifier, was utilized for the classification step.
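A minimal scikit-learn sketch of the final classification step, an SVM with a Gaussian RBF kernel over nodule features; the data here are placeholders for features extracted from segmented candidates:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((150, 10))       # shape/texture features per candidate nodule
y = rng.integers(0, 2, 150)     # 1 = nodule, 0 = non-nodule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```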
Attention P-Net for Segmentation of Post-operative Glioblastoma in MRI
Segmentation of post-operative glioblastoma is important for follow-up treatment. In this thesis, Fully Convolutional Networks (FCNs) are utilised together with attention modules for segmentation of post-operative glioblastoma in MR images. Attention-based modules help the FCN focus on relevant features to improve segmentation results. Channel and spatial attention combine both the spatial context and the semantic information in MR images. P-Net is used as a backbone for creating an architecture with existing bottleneck attention modules, named attention P-Net. The proposed network and competing techniques were evaluated on an Uppsala University database containing T1-weighted MR images of the brain from 12 subjects. The proposed framework shows substantial improvement over the existing techniques.
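An illustrative PyTorch sketch of a bottleneck-style block combining channel and spatial attention, in the spirit of the modules added to P-Net; the layer sizes follow no particular paper:

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(          # squeeze-and-excite style gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1))
        self.spatial = nn.Conv2d(channels, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Broadcast-add the (N,C,1,1) channel gate and (N,1,H,W) spatial map,
        # squash to [0,1], and re-weight the feature map.
        attn = torch.sigmoid(self.channel(x) + self.spatial(x))
        return x * attn

features = torch.randn(2, 64, 32, 32)          # an FCN feature map
print(ChannelSpatialAttention(64)(features).shape)
```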
Sparse View Deep Differentiated Backprojection for Circular Trajectories in CBCT
In this paper, we present a method for removing streak artifacts from reconstructions of sparse cone beam CT (CBCT) projections along circular trajectories. The differentiated backprojection on 2-D planes is combined with convolutional neural networks for both artifact reduction and the ill-posed inversion of the Hilbert transform. Undersampling errors occur at different stages of the algorithm, so the influence of applying the neural networks at these stages is investigated. Spectral blending is used to combine coronal and sagittal planes to a full 3-D reconstruction. Experimental results show that using a neural network to reconstruct a plane-of-interest from the differentiated backprojection of few projections works best by additionally providing FDK reconstructed planes to the network. This approach reduces streaking and cone beam artifacts compared to the direct FDK reconstruction and is also superior to post-processing CNNs.
A study of machine learning and deep learning models for solving medical imaging problems
The application of machine learning and deep learning methods to medical imaging aims to create systems that can help in the diagnosis of disease and automate the analysis of medical images in order to facilitate treatment planning. Deep learning methods do well in image recognition, but medical images present unique challenges. The lack of large amounts of data, the image size, and the high class imbalance in most datasets make training a machine learning model to recognize a particular pattern, typically present only in case images, a formidable task.

Experiments are conducted to classify breast cancer images as healthy or non-healthy, and to detect lesions in damaged brain MRI (Magnetic Resonance Imaging) scans. Random Forest, Logistic Regression, and Support Vector Machine perform competitively in the classification experiments, but in general, deep neural networks beat all conventional methods. Gaussian Naïve Bayes (GNB) and the Lesion Identification with Neighborhood Data Analysis (LINDA) methods produce better lesion detection results than single-path neural networks, but a multi-modal, multi-path deep neural network beats all other methods. The importance of pre-processing training data is also highlighted and demonstrated, especially for medical images, which require extensive preparation to improve classifier and detector performance. Only a more complex and deeper neural network combined with properly pre-processed data can produce the desired accuracy levels that can rival and maybe exceed those of human experts.
LCD-OpenPACS: sistema integrado de telerradiologia com auxílio ao diagnóstico de nódulos pulmonares em exames de tomografia computadorizada [LCD-OpenPACS: an integrated teleradiology system with computer-aided diagnosis of pulmonary nodules in computed tomography exams]
Machine Learning Methods for Image Analysis in Medical Applications From Alzheimer’s Disease, Brain Tumors, to Assisted Living
Chenjie Ge
2020. Thesis, cited 0 times
Thesis
Dissertation
Machine learning
Supervised
Convolutional Neural Network (CNN)
BraTS
Classification
Generative Adversarial Network (GAN)
ADNI
Healthcare has progressed greatly nowadays owing to technological advances, where machine learning plays an important role in processing and analyzing large amounts of medical data. This thesis investigates four healthcare-related issues (Alzheimer’s disease detection, glioma classification, human fall detection, and obstacle avoidance in prosthetic vision), where the underlying methodologies are associated with machine learning and computer vision. For Alzheimer’s disease (AD) diagnosis, apart from the symptoms of patients, Magnetic Resonance Images (MRIs) also play an important role. Inspired by the success of deep learning, a new multi-stream multi-scale Convolutional Neural Network (CNN) architecture is proposed for AD detection from MRIs, where AD features are characterized at both the tissue level and the scale level for improved feature learning. Good classification performance is obtained for AD/NC (normal control) classification, with a test accuracy of 94.74%. In glioma subtype classification, biopsies are usually needed to determine different molecular-based glioma subtypes. We investigate non-invasive glioma subtype prediction from MRIs using deep learning. A 2D multi-stream CNN architecture is used to learn the features of gliomas from multi-modal MRIs, where the training dataset is enlarged with synthetic brain MRIs generated by pairwise Generative Adversarial Networks (GANs). A test accuracy of 88.82% has been achieved for IDH mutation (a molecular-based subtype) prediction. A new deep semi-supervised learning method is also proposed to tackle the problem of missing molecular-related labels in training datasets and improve the performance of glioma classification. In the other two applications, we address video-based human fall detection using co-saliency-enhanced Recurrent Convolutional Networks (RCNs), as well as obstacle avoidance in prosthetic vision by characterizing obstacle-related video features using a Spiking Neural Network (SNN). These investigations can benefit future research, where artificial intelligence/deep learning may open a new way for real medical applications.
Brain tumor detection from MRI image: An approach
Ghosh, Debjyoti
Bandyopadhyay, Samir Kumar
International Journal of Applied Research, 2017. Journal Article, cited 0 times
Website
Algorithm Development
REMBRANDT
BRAIN
Magnetic Resonance Imaging (MRI)
Segmentation
Computer Aided Detection (CADe)
A brain tumor is an abnormal growth of cells within the brain, which can be cancerous or noncancerous (benign). This paper detects different types of tumors and cancerous growths within the brain and associated areas by using computerized methods on MRI images of a patient. It is also possible to track the growth patterns of such tumors.
When the machine does not know measuring uncertainty in deep learning models of medical images
Recently, deep learning (DL), which involves powerful black-box predictors, has outperformed human experts in several medical diagnostic problems. However, these methods focus exclusively on improving the accuracy of point predictions without assessing their output quality, and they ignore the asymmetric costs involved in different types of misclassification errors. Neural networks also do not deliver confidence in predictions and suffer from over- and under-confidence, i.e., they are not well calibrated. Knowing how much confidence there is in a prediction is essential for gaining clinicians' trust in the technology. Calibrated uncertainty quantification is a challenging problem, as no ground truth is available. To address this, we make two observations: (i) cost-sensitive deep neural networks with DropWeights better quantify calibrated predictive uncertainty, and (ii) estimated uncertainty with point predictions in deep ensembles of Bayesian neural networks with DropWeights can lead to more informed decisions and improve prediction quality. This dissertation focuses on quantifying uncertainty using concepts from cost-sensitive neural networks, calibration of confidence, and the DropWeights ensemble method. First, we show how to improve predictive uncertainty with deep ensembles of neural networks with DropWeights, learning an approximate distribution over their weights, in medical image segmentation and its application in active learning. Second, we use the jackknife resampling technique to correct bias in quantified uncertainty in image classification and propose metrics to measure uncertainty performance. The third part of the thesis is motivated by the discrepancy between the model predictive error and the objective in quantified uncertainty when costs for misclassification errors are asymmetric or datasets are unbalanced. We develop cost-sensitive modifications of neural networks for disease detection and propose metrics to measure the quality of quantified uncertainty. Finally, we leverage an adaptive binning strategy to measure uncertainty calibration error that directly corresponds to estimated uncertainty performance, addressing problematic evaluation methods. We evaluate the effectiveness of the tools on nuclei image segmentation, multi-class brain MRI image classification, multi-level cell-type-specific protein expression prediction in ImmunoHistoChemistry (IHC) images, and cost-sensitive classification for COVID-19 detection from X-ray and CT image datasets. Our approach is thoroughly validated by measuring the quality of uncertainty. It produces equally good or better results and paves the way for future work that addresses practical problems at the intersection of deep learning and Bayesian decision theory. In conclusion, our study highlights the opportunities and challenges of applying estimated uncertainty in deep learning models of medical images, representing the confidence of the model's prediction; the uncertainty quality metrics show a significant improvement when using deep ensembles of Bayesian neural networks with DropWeights.
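A hedged PyTorch sketch of one ingredient discussed here: Monte Carlo sampling of a network whose stochastic units stay active at test time, so that the spread of predictions estimates uncertainty. Standard dropout is used as a stand-in for the thesis's DropWeights ensembles:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 2))

x = torch.randn(1, 32)
model.train()  # keep dropout stochastic at inference time
with torch.no_grad():
    samples = torch.stack([model(x).softmax(dim=-1) for _ in range(50)])

mean_prob = samples.mean(dim=0)   # point prediction
uncertainty = samples.std(dim=0)  # spread across samples = predictive uncertainty
print(mean_prob, uncertainty)
```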
Deep Learning Architecture to Improve Edge Accuracy of Auto-Contouring for Head and Neck Radiotherapy
The manual delineation of the gross tumor volume (GTV) for Head and Neck Cancer (HNC) patients is an essential step in the radiotherapy treatment process. Methods to automate this process have the potential to decrease the amount of time it takes for a clinician to complete a plan, while also decreasing the inter-observer variability between clinicians. Deep learning (DL) methods have shown great promise in auto-segmentation problems. For HNC, we show that DL methods systematically fail at the axial edges of the GTV, where the segmentation depends both on information from the center of the tumor and on nearby slices. These failures may decrease trust in, and usage of, proposed auto-contouring systems if not accounted for. In this paper we propose a modified version of the U-Net, a fully convolutional network for image segmentation, which can more accurately process dependence between slices to create a more robust GTV contour. We also show that it can outperform the currently proposed methods that capture slice dependencies by leveraging 3D convolutions. Our method uses Convolutional Recurrent Neural Networks throughout the decoder section of the U-Net to capture both spatial and adjacent-slice information when considering a contour. To account for shifts in anatomical structures through adjacent CT slices, we allow an affine transformation of the adjacent feature space using Spatial Transformer Networks. Our proposed model increases accuracy at the edges by 12% inferiorly and 26% superiorly over a baseline 2D U-Net, which has no inherent way to capture information between adjacent slices.
Targeted Design Choices in Machine Learning Architectures Can Both Improve Model Performance and Support Joint Activity
Opaque models do not support Joint Activity and create brittle systems that fail rapidly when the model reaches the edges of its operating conditions. Instead, we should use models which are observable, directable, and predictable, qualities better supported by transparent or ‘explainable’ models. However, using explainable models has traditionally been seen as a trade-off in machine performance, ignoring the potential benefits to the performance of human-machine teams. While the cost to model performance is negligible when weighed against the cost to the human-machine team, there is still a benefit to machine learning with increased accuracy or capability when it is designed appropriately to deal with failure. Increased accuracy can indicate better alignment with the world, and increased capability allows generalization across a broader variety of cases. Increased capability does not always have to come at the cost of explainability, and this dissertation will discuss approaches to make traditionally opaque models more usable in human-machine teaming architectures.
Real-Time Computed Tomography-based Medical Diagnosis Using Deep Learning
Computed tomography (CT) has been widely used in medical diagnosis to generate accurate images of the body's internal organs. However, cancer risk is associated with high X-ray dose CT scans, limiting their applicability in medical diagnosis and telemedicine applications. CT scans acquired at low X-ray dose generate low-quality images with noise and streaking artifacts. Therefore, we develop a deep learning-based CT image enhancement algorithm for improving the quality of low-dose CT images. Our algorithm uses a convolutional neural network called DenseNet and Deconvolution network (DDnet) to remove noise and artifacts from the input image. To evaluate its advantages in medical diagnosis, we use DDnet to enhance chest CT scans of COVID-19 patients. We show that image enhancement can improve the accuracy of COVID-19 diagnosis (~5% improvement), using a framework consisting of AI-based tools. For training and inference of the image enhancement AI model, we use a heterogeneous computing platform to accelerate execution and decrease turnaround time. Specifically, we use multiple GPUs in a distributed setup to exploit batch-level parallelism during training. We achieve approximately 7x speedup with 8 GPUs running in parallel compared to training DDnet on a single GPU. For inference, we implement DDnet using OpenCL and evaluate its performance on multi-core CPU, many-core GPU, and FPGA. Our OpenCL implementation is at least 2x faster than the analogous PyTorch implementation on each platform, and achieves comparable performance between CPU and FPGA, while the FPGA operates at a much lower frequency.
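As a hedged illustration of the batch-level multi-GPU parallelism mentioned above (not the authors' DDnet or OpenCL code), a minimal PyTorch sketch; `TinyDenoiser` and the tensor shapes are stand-ins:

```python
# Sketch: batch-level data parallelism for a denoising CNN. "TinyDenoiser" is
# a placeholder architecture, not DDnet; shapes and hyperparameters are dummies.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyDenoiser().to(device)
if torch.cuda.device_count() > 1:
    # Each batch is split across GPUs and the outputs are gathered back,
    # which is the batch-level parallelism exploited during training.
    model = nn.DataParallel(model)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
low_dose = torch.randn(8, 1, 64, 64, device=device)     # noisy input slices
normal_dose = torch.randn(8, 1, 64, 64, device=device)  # clean targets
loss = nn.MSELoss()(model(low_dose), normal_dose)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```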
Pulmonary nodule segmentation in computed tomography with deep learning
Early detection of lung cancer is essential for treating the disease. Lung nodule segmentation systems can be used together with Computer-Aided Detection (CAD) systems to help doctors diagnose and manage lung cancer. In this work, we create a lung nodule segmentation system based on deep learning. Deep learning is a sub-field of machine learning responsible for state-of-the-art results on several segmentation benchmarks, such as PASCAL VOC 2012. Our model is a modified 3D U-Net, trained on the LIDC-IDRI dataset, using the intersection over union (IoU) loss function. We show our model works for multiple types of lung nodules. Our model achieves state-of-the-art performance on the LIDC test set, using nodules annotated by at least 3 radiologists and with a consensus truth of 50%.
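For reference, one common differentiable ("soft") formulation of the IoU objective named above, sketched in PyTorch for binary volumetric masks; this is a generic formulation, not necessarily the thesis's exact loss:

```python
# Soft IoU (Jaccard) loss for binary 3D segmentation; a generic sketch.
import torch

def soft_iou_loss(logits, target, eps=1e-6):
    """1 - IoU on sigmoid probabilities; target is a 0/1 tensor of the same shape."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(1, 2, 3, 4))
    union = (probs + target - probs * target).sum(dim=(1, 2, 3, 4))
    return (1.0 - (inter + eps) / (union + eps)).mean()

logits = torch.randn(2, 1, 16, 32, 32, requires_grad=True)   # B, C, D, H, W
target = (torch.rand(2, 1, 16, 32, 32) > 0.5).float()
print(soft_iou_loss(logits, target))  # differentiable, so usable for training
```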
Privacy-Preserving Dashboard for F.A.I.R Head and Neck Cancer data supporting multi-centered collaborations
Research in modern healthcare requires vast volumes of data from various healthcare centers across the globe. It is not always feasible to centralize clinical data without compromising privacy, so a tool addressing these issues and facilitating the reuse of clinical data is urgently needed. The Federated Learning approach, governed by a set of agreements such as the Personal Health Train (PHT), manages to tackle these concerns by distributing models to the data centers instead of the traditional approach of centralizing datasets. One of the prerequisites of PHT is using semantically interoperable datasets so that the models are able to find them. FAIR (Findable, Accessible, Interoperable, Reusable) principles help in building interoperable and reusable data by adding knowledge representation and providing descriptive metadata. However, the process of making data FAIR is not always easy and straightforward. Our main objective is to disentangle this process by using domain and technical expertise and get data prepared for federated learning. This paper introduces applications that are easily deployable as Docker containers, which automate parts of the aforementioned process and significantly simplify the task of creating FAIR clinical data. Our method bypasses the need for clinical researchers to have a high degree of technical skill. We demonstrate the FAIR-ification process by applying it to five Head and Neck cancer datasets (four public and one private). The PHT paradigm is explored by building a distributed visualization dashboard from the aggregated summaries of the FAIR-ified datasets. Using the PHT infrastructure for exchanging only statistical summaries or model coefficients allows researchers to explore data from multiple centers without breaching privacy.
Interoperable encoding and 3D printing of anatomical structures resulting from manual or automated delineation
Gregoir, Thibault
2023Thesis, cited 0 times
Thesis
Pancreatic-CT-CBCT-SEG
Segmentation
3D printing
ChatGPT
Computed Tomography (CT)
RTSTRUCT
Surface reconstruction
Interoperable encoding
Manual or automated delineation
The understanding and visualization of the human body have been instrumental in the progress of medical science. Over time, the shift from cumbersome and invasive methods to modern scanners highlights the significance of expertise in retrieving, utilizing, and comprehending the resulting data. 3D rendering and printing of organic structures offer promising applications such as surgical planning and medical education. However, challenges arise as technological advancements generate increasingly vast amounts of data, necessitating seamless manipulation and transfer within the medical field. The goal of this master's thesis is to explore interoperability in encoding 3D models and the ability to print models resulting from 3D reconstruction of medical input data. This exploration is done for models originally segmented by manual delineation or in an automated way. Different parts of this topic have already been explored individually, such as surface reconstruction or automatic segmentation. The idea here is to combine the different aspects of this thesis into a single tool available and usable by everyone.
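As a minimal sketch of one such reconstruction-to-print step (an assumption about tooling, not the thesis's implementation), a binary delineation mask can be turned into an STL surface with marching cubes; the mask, voxel spacing, and file name below are placeholders:

```python
# Sketch: binary delineation mask -> printable STL surface via marching cubes.
import numpy as np
from skimage import measure
import trimesh

mask = np.zeros((64, 64, 64), dtype=np.uint8)  # stand-in for a segmented organ
mask[20:44, 20:44, 20:44] = 1

# level=0.5 extracts the boundary of the 0/1 mask; spacing is the voxel size in mm.
verts, faces, normals, _ = measure.marching_cubes(
    mask.astype(float), level=0.5, spacing=(1.0, 1.0, 1.0))

surface = trimesh.Trimesh(vertices=verts, faces=faces)
surface.export("organ_model.stl")  # interoperable mesh, ready for a slicer
```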
Using Deep Learning for Pulmonary Nodule Detection & Diagnosis
Gruetzemacher, Richard
Gupta, Ashish
2016Conference Paper, cited 0 times
LIDC-IDRI
Generative Models and Feature Extraction on Patient Images and Structure Data in Radiation Therapy
The aim of this thesis was to examine and enhance the scientific groundwork for translating deep learning (DL) algorithms for brain tumour segmentation into clinical decision support tools. Paper II describes a scoping review conducted to map the field of automatic brain lesion segmentation on magnetic resonance (MR) images according to a predefined and peer-reviewed study protocol (Paper I). Insufficient preprocessing description was identified as one factor hindering clinical implementation of the reviewed algorithms. A reproducibility and replicability analysis of two algorithms was described in Paper III. The two algorithms and their validation studies were previously assessed as reproducible. In this experimental investigation, the original validation results were reproduced and replicated for one algorithm. Analysing the reasons for failure to reproduce validation of the second algorithm led to a suggested update to a commonly-used reproducibility checklist; the importance of a thorough description of preprocessing was highlighted. In Paper IV, radiologists' perception of DL-generated brain tumour labels in tumour volume growth assessment was examined. Ten radiologists participated in a reading/questionnaire session of 20 MR examination cases. The readers were confident that the label-derived volume change is more accurate than their visual assessment, even when the inter-rater agreement on the label quality was poor. In Paper V, the broad theme of trust in artificial intelligence (AI) in radiology was explored. A semi-structured interview study with twenty-six AI implementation stakeholders was conducted. Four requirements of the implemented tools and procedures were identified that promote trust in AI: reliability, quality control, transparency, and inter-organisational compatibility. The findings indicate that current strategies to validate DL algorithms do not suffice to assess their accuracy in a clinical setting. Despite the recognition from radiologists that DL algorithms can improve the accuracy of tumour volume assessment, implementation strategies require more work and the involvement of multiple stakeholders.
User-centered design and evaluation of interactive segmentation methods for medical images
Segmentation of medical images is a challenging task that aims to identify a particular structure present in the image. Among the existing methods involving the user at different levels, from fully manual to fully automated, interactive segmentation methods provide assistance to the user during the task to reduce the variability in the results and allow occasional corrections of segmentation failures. They therefore offer a compromise between segmentation efficiency and the accuracy of the results. It is the user who judges whether the results are satisfactory and how to correct them during the segmentation, making the process subject to human factors. Despite the strong influence of the user on the outcomes of a segmentation task, the impact of such factors has received little attention, with the literature focusing the assessment of segmentation processes on computational performance. Yet, involving the user's performance in the analysis is more representative of a realistic scenario. Our goal is to explore user behaviour in order to improve the efficiency of interactive image segmentation processes. This is achieved through three contributions. First, we developed a method based on a new user interaction mechanism to provide hints as to where to concentrate the computations. This significantly improves computational efficiency without sacrificing the quality of the segmentation. The benefits of using such hints are twofold: (i) because our contribution is based on user interaction, it generalizes to a wide range of segmentation methods, and (ii) it gives comprehensive indications about where to focus the segmentation search. The latter advantage is used to achieve the second contribution. We developed an automated method based on a multi-scale strategy to (i) reduce the user's workload and (ii) improve the computational time up to tenfold, allowing real-time segmentation feedback. Third, we investigated the effects of such improvements in computation on the user's performance. We report an experiment that manipulates the delay induced by the computation time while performing an interactive segmentation task. Results reveal that the influence of this delay can be significantly reduced with an appropriate interaction mechanism design. In conclusion, this project provides an effective image segmentation solution that has been developed in compliance with user performance requirements. We validated our approach through multiple user studies that provided a step forward in understanding user behaviour during interactive image segmentation.
Efficient Transfer Learning using Pre-trained Models on CT/MRI
The medical imaging field has unique obstacles to face when performing computer vision classification tasks. The retrieval of the data, be it CT scans or MRI, is not only expensive but also limited due to the lack of publicly available labeled data. In spite of this, clinicians often need this medical imaging data to perform diagnosis and recommendations for treatment. This motivates the use of efficient transfer learning techniques, not only to condense the complexity of the data, which is often volumetric, but also to achieve better results faster through established machine learning techniques like transfer learning, fine-tuning, and shallow deep learning. In this paper, we introduce a three-step process to perform classification using CT scans and MRI data. The process makes use of fine-tuning to align the pretrained model with the target class, feature extraction to preserve learned information for downstream classification tasks, and shallow deep learning to perform subsequent training. Experiments are done to compare the performance of the proposed methodology as well as the time-cost trade-offs of using our technique compared to other baseline methods. Through these experiments we find that our proposed method outperforms all other baselines while achieving a substantial speed-up in overall training time.
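A hedged sketch of the three-step recipe described above, with torchvision's ResNet-18 standing in for the pretrained backbone (the paper does not specify this model; data and labels below are dummies):

```python
# Sketch of fine-tune -> feature extraction -> shallow classifier.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

# Step 1: fine-tune a pretrained backbone on the target classes.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # e.g. benign vs malignant
# ... a short fine-tuning loop on labeled slices would go here ...

# Step 2: freeze the network and reuse the penultimate layer as a feature extractor.
backbone.fc = nn.Identity()
backbone.eval()
with torch.no_grad():
    feats = backbone(torch.randn(16, 3, 224, 224)).numpy()  # 16 dummy slices

# Step 3: train a shallow model on the preserved features.
labels = np.arange(16) % 2  # dummy labels for illustration
clf = LogisticRegression(max_iter=1000).fit(feats, labels)
print(clf.score(feats, labels))
```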
Brain Tumor Detection using Curvelet Transform and Support Vector Machine
Gupta, Bhawna
Tiwari, Shamik
International Journal of Computer Science and Mobile Computing2014Journal Article, cited 8 times
Website
Artificial Intelligence for Detection of Lung and Airway Nodules in Clinical Chest CT scans
Segmentation of the prostate and its internal anatomical zones in magnetic resonance images is an important step in many diagnostic applications. This task can be time consuming and is therefore a good candidate for an automated method. The aim of this thesis has been to train a three-dimensional Convolutional Neural Network (CNN) that segments the prostate and its four anatomical zones, according to the global PI-RADS standard, for use as decision support in the delineation process. This was performed on a publicly available data set that included images for training (n=78) and validation (n=20). For the evaluation, an internal data set from the University Hospital of Umeå, consisting of forty patients, was used to test the generalization capability of the model. Prior to training, the delineations of the anterior fibromuscular stroma (AFS), the peripheral (PZ), central (CZ) and transitional (TZ) zones, as well as the prostatic urethra, were validated in collaboration with an experienced radiologist. On the test dataset, the Dice score for the segmentation of the prostate was 0.88, and for the internal zones: PZ: 0.72, CZ: 0.40, TZ: 0.72, U: 0.05, and AFS: 0.34. Accurate segmentation of the urethra was challenging due to structural differences between the data sets, so these results can easily be discarded and viewed as less relevant when reviewing the structures. In conclusion, the trained CNN can be used as decision support for prostate zone delineation.
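The Dice scores reported above are the standard overlap measure between predicted and reference masks; a minimal sketch with dummy volumes:

```python
# Dice coefficient between two binary masks (dummy data, illustrative only).
import numpy as np

def dice(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

pred = np.random.rand(32, 128, 128) > 0.5   # predicted zone mask
ref = np.random.rand(32, 128, 128) > 0.5    # reference delineation
print(f"Dice: {dice(pred, ref):.3f}")       # reported per structure, as above
```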
Renal Cancer Cell Nuclei Detection from Cytological Images Using Convolutional Neural Network for Estimating Proliferation Rate
Hossain, Shamim
Jalab, Hamid A.
Zulfiqar, Fariha
Pervin, Mahfuza
Journal of Telecommunication, Electronic and Computer Engineering2019Journal Article, cited 0 times
Website
TCGA-KIRC
Kidney
Convolutional Neural Network (CNN)
Machine Learning
Cytological images play an essential role in monitoring the progress of cancer cell mutation, and the proliferation rate of the cancer cells is a prerequisite for cancer treatment. It is hard to identify the nuclei of abnormal cells accurately and quickly, and to find the correct proliferation rate, since this requires in-depth manual examination, observation, and cell counting, which are very tedious and time-consuming. The proposed method starts with segmentation to separate the background and object regions with K-means clustering. Small candidate regions containing cell regions are detected automatically based on the value of a support vector machine. The sets of cell regions are marked with selective search according to the local distance between the nucleus and the cell boundary, whether they are overlapping or non-overlapping cell regions. After that, the selected segmented cell features are used to learn normal and abnormal cell nuclei separately with a regional convolutional neural network. Finally, the proliferation rate in the invasive cancer area is calculated based on the number of abnormal cells. A set of renal cancer cell cytological images was taken from the National Cancer Institute, USA, and this data set is available for research. Quantitative evaluation of this method is performed by comparing its accuracy with that of other state-of-the-art cancer cell nuclei detection methods; qualitative assessment is based on human observation. The proposed method is able to detect renal cancer cell nuclei accurately and provides an automatic proliferation rate.
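A hedged sketch of the first stage described above, K-means separating background from candidate cell regions; the image, cluster count, and foreground heuristic are placeholders rather than the paper's parameters:

```python
# Sketch: K-means clustering of pixel colors to isolate candidate cell regions.
import numpy as np
from sklearn.cluster import KMeans

image = np.random.rand(256, 256, 3)        # stand-in for an RGB cytological image
pixels = image.reshape(-1, 3)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
labels = km.labels_.reshape(image.shape[:2])

# Assume the darker cluster holds the stained nuclei/cell regions (heuristic).
fg = int(np.argmin(km.cluster_centers_.sum(axis=1)))
cell_mask = labels == fg
print("candidate-region pixels:", int(cell_mask.sum()))
```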
The Study on Data Hiding in Medical Images
Huang, Li-Chin
Tseng, Lin-Yu
Hwang, Min-Shiang
International Journal of Network Security2012Journal Article, cited 25 times
Website
Algorithm Development
Image analysis
Reversible data hiding plays an important role in medical image systems. Many hospitals have already applied electronic medical information in healthcare systems, and reversible data hiding is one of the feasible methodologies to protect individual privacy and confidential information. With the application of several high-quality medical devices, the detection and treatment of diseases have improved at the early stage, and demand has been rising for recognizing complicated anatomical structures in high-quality images. However, most data hiding methods are still applied to 8-bit-depth medical images with 255 intensity levels. This paper summarizes the existing reversible data hiding algorithms and introduces basic knowledge of medical images.
Radiomics of NSCLC: Quantitative CT Image Feature Characterization and Tumor Shrinkage Prediction
Application of Magnetic Resonance Radiomics Platform (MRP) for Machine Learning Based Features Extraction from Brain Tumor Images
Idowu, B.A.
Dada, O. M.
Awojoyogbe, O.B.
Journal of Science, Technology, Mathematics and Education (JOSTMED)2021Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
BRAIN
Magnetic Resonance Imaging (MRI)
Machine Learning
Radiomic features
NIfTI
This study investigated the implementation of the magnetic resonance radiomics platform (MRP) for machine learning based feature extraction from brain tumor images. Magnetic resonance imaging data publicly available in The Cancer Imaging Archive (TCIA) were downloaded and used to perform image co-registration, multi-modality processing, image interpolation, morphology operations, and extraction of radiomic features with MRP tools. Radiomics analyses were then applied to the data (containing AX-T1-POST, diffusion weighted, AX-T2-FSE and AX-T2-FLAIR sequences) using wavelet decomposition principles. The results employing different configurations of low-pass and high-pass filters were exported to Microsoft Excel data sheets. The exported data were visualized using MATLAB's classification learner tool. These exported data and the visualizations provide a new way of deep assessment of image data as well as easier interpretation of image scans. Findings from this study revealed that the radiomics platform is valuable for characterizing and visualizing brain tumors and provides adequate information about them.
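Comparable wavelet-filtered radiomic features can be extracted with PyRadiomics, used here as a stand-in for the platform's wavelet step (an assumption; the study used MRP). The synthetic image and mask are placeholders for a TCIA image/mask pair:

```python
# Sketch: wavelet-decomposed radiomic features with PyRadiomics (stand-in tool).
import numpy as np
import SimpleITK as sitk
from radiomics import featureextractor

arr = np.random.randint(0, 255, size=(32, 64, 64)).astype(np.float32)
mask = np.zeros(arr.shape, dtype=np.uint8)
mask[10:20, 20:40, 20:40] = 1                       # dummy tumor label
image, label = sitk.GetImageFromArray(arr), sitk.GetImageFromArray(mask)

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllImageTypes()
extractor.enableImageTypeByName("Wavelet")          # low-/high-pass combinations
result = extractor.execute(image, label)
wavelet_feats = {k: v for k, v in result.items() if k.startswith("wavelet")}
print(len(wavelet_feats), "wavelet-filtered features")  # e.g. export to Excel
```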
X-ray CT scatter correction by a physics-motivated deep neural network
A fundamental problem in X-ray Computed Tomography (CT) is the scatter that occurs due to the interaction of photons with the imaged object. Unless it is corrected, this phenomenon manifests itself as degradation of the reconstructions in the form of various artifacts, which makes scatter correction a critical step in obtaining the desired reconstruction quality. Scatter correction methods fall into two groups: hardware-based and software-based. Despite success in specific settings, hardware-based methods require modifications to the hardware or an increase in scan time or dose, which makes software-based methods attractive. In this context, Monte-Carlo based scatter estimation, analytical-numerical, and kernel-based methods were developed, and the capacity of data-driven approaches to tackle this problem was recently demonstrated. In this thesis, two novel physics-motivated deep-learning-based methods are proposed. The methods estimate and correct for the scatter in the obtained projection measurements. They incorporate both an initial reconstruction of the object of interest and the scatter-corrupted measurements related to it, and use a common, problem-specific deep neural network architecture with a cost function adapted to the problem. Numerical experiments with data obtained by Monte-Carlo simulations of phantom imaging reveal noticeable improvement over a recent projection-domain deep neural network correction method.
Lung Cancer Detection and Classification using Machine Learning Algorithm
Ismail, Meraj Begum Shaikh
Turkish Journal of Computer and Mathematics Education (TURCOMAT)2021Journal Article, cited 0 times
Website
LungCT-Diagnosis
Machine Learning
Segmentation
LUNG
co-occurrence matrix
The main objective of this research paper is to detect lung cancer at an early stage and explore the accuracy levels of various machine learning algorithms. After a systematic literature study, we found that some classifiers have low accuracy and some have higher accuracy, but it remains difficult to approach 100%; low accuracy and high implementation cost often result from improper handling of DICOM images. For medical image processing many different types of images are used, but Computed Tomography (CT) scans are generally preferred because of lower noise. Deep learning has proven to be an effective method for medical image processing, lung nodule detection and classification, feature extraction, and lung cancer stage prediction. In the first stage, this system uses image processing techniques to extract lung regions. The segmentation is done using K-means. The features are extracted from the segmented images and the classification is done using various machine learning algorithms. The performance of the proposed approaches is evaluated based on their accuracy, sensitivity, specificity and classification time.
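The evaluation criteria listed above follow directly from a confusion matrix; a short illustrative computation with dummy labels:

```python
# Accuracy, sensitivity, and specificity from a binary confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # dummy ground truth (1 = cancer)
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])   # dummy classifier output
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                   # recall on the cancer class
specificity = tn / (tn + fp)
print(accuracy, sensitivity, specificity)
```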
Retina U-Net: Embarrassingly Simple Exploitation of Segmentation Supervision for Medical Object Detection
The task of localizing and categorizing objects in medical images often remains formulated as a semantic segmentation problem. This approach, however, only indirectly solves the coarse localization task by predicting pixel-level scores, requiring ad-hoc heuristics when mapping back to object-level scores. State-of-the-art object detectors, on the other hand, allow for individual object scoring in an end-to-end fashion, while ironically trading in the ability to exploit the full pixel-wise supervision signal. This can be particularly disadvantageous in the setting of medical image analysis, where data sets are notoriously small. In this paper, we propose Retina U-Net, a simple architecture which naturally fuses the Retina Net one-stage detector with the U-Net architecture widely used for semantic segmentation in medical images. The proposed architecture recaptures discarded supervision signals by complementing object detection with an auxiliary task in the form of semantic segmentation, without introducing the additional complexity of previously proposed two-stage detectors. We evaluate the importance of full segmentation supervision on two medical data sets, provide an in-depth analysis on a series of toy experiments, and show how the corresponding performance gain grows in the limit of small data sets. Retina U-Net yields strong detection performance only reached by its more complex two-stage counterparts. Our framework, including all methods implemented for operation on 2D and 3D images, is available at github.com/pfjaeger/medicaldetectiontoolkit.
Quantitative imaging in radiation oncology: An emerging science and clinical service
We present an AI-assisted approach for classifying the malignancy of lung nodules in CT scans, for explainable AI-assisted lung cancer screening. We evaluate this explainable classification for estimating lung nodule malignancy against the LIDC-IDRI dataset. The LIDC-IDRI dataset includes biomarkers from radiologists' annotations, thereby providing a training dataset for nodule malignancy suspicion and other findings. The algorithm employs a 3D Convolutional Neural Network (CNN) to predict both the malignancy suspicion level and the biomarker attributes. Some biomarkers, such as malignancy and subtlety, are ordinal in nature, while others, such as internal structure and calcification, are categorical. Our approach is uniquely able to predict a multitude of fields, estimating not only malignancy but also many other correlated biomarker variables. We evaluate the malignancy classification algorithm in several ways, including presenting the accuracy of malignancy screening as well as comparable metrics for the biomarker fields.
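One common way to realize such mixed ordinal/categorical outputs is a shared backbone embedding with separate prediction heads; the PyTorch sketch below is an assumption about how that could look, not the authors' network:

```python
# Sketch: multi-task head mixing an ordinal target (regressed malignancy score)
# with a categorical one (e.g. calcification class); dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHead(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.malignancy = nn.Linear(feat_dim, 1)     # ordinal: regress a 1-5 score
        self.calcification = nn.Linear(feat_dim, 6)  # categorical: 6 classes

    def forward(self, feats):
        return self.malignancy(feats), self.calcification(feats)

feats = torch.randn(4, 256)                          # dummy 3D-CNN embeddings
mal, calc = MultiTaskHead()(feats)
loss = F.mse_loss(mal, torch.rand(4, 1)) \
     + F.cross_entropy(calc, torch.randint(0, 6, (4,)))
print(float(loss))
```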
A First Step Towards an Algorithm for Breast Cancer Reoperation Prediction Using Machine Learning and Mammographic Images
Cancer is the second leading cause of death worldwide, and 30% of all cancer cases among women are breast cancer. A popular treatment is breast-conserving surgery, where only a part of the breast is surgically removed. Surgery is expensive and has a significant impact on the body, and for some women a reoperation is needed. The aim of this thesis was to see if there is a possibility to predict whether a person will need a reoperation, with the help of whole mammographic images and deep learning. The data used in this thesis were collected from two different open sources: (1) the Chinese Mammography Database (CMMD), from which 1052 benign images and 1090 malignant images were used, and (2) the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), from which 182 benign images and 145 malignant images were used. With those images, both a simple convolutional neural network (CNN) and a transfer learning network using the pre-trained model MobileNet were trained to classify the images as benign or malignant. All the networks were evaluated using learning curves, confusion matrix, accuracy, sensitivity, specificity, AUC and a ROC curve. The highest result obtained belonged to a transfer learning network that used the pre-trained model MobileNet and trained on the CMMD data set; it achieved an AUC value of 0.599.
Radiogenomic correlation for prognosis in patients with glioblastoma multiforme
Training of deep convolutional neural nets to extract radiomic signatures of tumors
Kim, J.
Seo, S.
Ashrafinia, S.
Rahmim, A.
Sossi, V.
Klyuzhin, I.
Journal of Nuclear Medicine2019Journal Article, cited 0 times
Head-Neck-PET-CT
Radiomics
Objectives: Radiomics-based analysis of FDG PET images has been shown to improve the assessment and prediction of tumor growth rate, response to treatment, and other patient outcomes [1]. An alternative new approach to image analysis involves the use of convolutional neural networks (CNNs), wherein relevant image features are learned implicitly and automatically in the process of network training [2]; this is in contrast to radiomics analyses, where the features are “hand-crafted” and explicitly computed (EC). Although CNNs represent a more general approach, it is not clear whether the implicitly learned features may, or have the ability to, include radiomics features (RFs) as a subset. If this is the case, CNN-based approaches may eventually obviate the use of EC RFs. Further, the use of CNNs instead of RFs may completely eliminate the need for feature selection and tumor delineation, enabling high-throughput data analyses. Thus, our objective was to test whether CNNs can learn to act similarly to several commonly used RFs. Using a set of simulated and real FDG PET images of tumors, we train the CNNs to estimate the values of RFs from the images without explicit computation. We then compare the values of the CNN-estimated and EC features. Methods: Using a stochastic volumetric model for tumor growth, 2000 FDG images of tumors confined to a bounding box (BB) were simulated (40x40x40 voxels, voxel size 2.0 mm), and 10 RFs (3 x morphology, 4 x intensity histogram, 3 x texture features) were computed for each image using the SERA library [3] (compliant with the Image Biomarker Standardization Initiative, IBSI [4]). A 3D CNN with 4 convolutional layers and a total of 164 filters was implemented in Python using the Keras library with TensorFlow backend (https://www.keras.io). The mean absolute error was the optimized loss function. The CNN was trained to automatically estimate the values of each of the 10 RFs for each image; 1900 images were used for training and 100 for testing, to compare the CNN-estimated values to the EC feature values. We also used a secondary test set comprising 133 real tumor images, obtained from the head and neck PET/CT imaging study [5] publicly available at The Cancer Imaging Archive. The tumors were cropped to a BB, and the images were resampled to yield an image size similar to that of the simulated image set. Results: After the training procedure, on the simulated test set the CNN was able to estimate the values of most EC RFs with 10-20% error (relative to the range). In the morphology group, the errors were 3.8% for volume, 12.0% for compactness, and 15.7% for flatness. In the intensity group, the errors were 13.7% for the mean, 15.4% for variance, 12.3% for skewness, and 13.1% for kurtosis. In the texture group, the error was 10.6% for GLCM contrast, 13.4% for cluster tendency, and 21.7% for angular momentum. For all features, the differences between the CNN-estimated and EC feature values were statistically insignificant (two-sample t-test), and the correlation between the feature values was highly significant (p<0.01). On the real image test set, we observed higher error rates, on the order of 20-30%; however, for all but one feature (angular momentum), there was a significant correlation between the CNN-estimated and EC features (p<0.01). Conclusions: Our results suggest that CNNs can be trained to act similarly to several widely used RFs.
While the accuracy of CNN-based estimates varied between the features, in general the CNN showed a good propensity for learning. Thus, it is likely that with more complex network architectures and more training data, the features can be estimated more accurately. While a greater number of RFs needs to be similarly tested in the future, these initial experiments provide first evidence that, given sufficient quality and quantity of training data, CNNs indeed represent a more general approach to feature extraction and may potentially replace radiomics-based analyses without compromising descriptive thoroughness.
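For orientation, a minimal Keras sketch in the spirit of the described setup: a small 3D CNN regressing the radiomic feature values with a mean-absolute-error loss. The layer count and filter sizes here are illustrative, not the study's exact 4-layer, 164-filter design:

```python
# Sketch: 3D CNN regressing 10 radiomic feature values with an MAE loss.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(40, 40, 40, 1)),   # tumor in a 40^3-voxel bounding box
    layers.Conv3D(16, 3, activation="relu"),
    layers.MaxPooling3D(2),
    layers.Conv3D(32, 3, activation="relu"),
    layers.GlobalAveragePooling3D(),
    layers.Dense(10),                      # one output per radiomic feature
])
model.compile(optimizer="adam", loss="mean_absolute_error")

x = np.random.rand(8, 40, 40, 40, 1).astype("float32")  # dummy PET volumes
y = np.random.rand(8, 10).astype("float32")             # dummy feature targets
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1], verbose=0).shape)            # (1, 10) estimated RFs
```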
A Study on the Geometrical Limits and Modern Approaches to External Beam Radiotherapy
Radiation therapy is integral to treating cancer and improving survival probability, so improving treatment methods and modalities can have a significant impact on the quality of life of cancer patients. One such method is stereotactic radiotherapy, a form of External Beam Radiotherapy (EBRT). It delivers a highly conformal dose of radiation to a target from beams arranged at many different angles. The goal of any radiotherapy treatment is to deliver radiation only to the cancerous cells while maximally sparing other tissues. However, such a perfect treatment outcome is difficult to achieve due to the physical limitations of EBRT. The quality of treatment depends on the characteristics of these beams and the number of angles from which radiation is delivered. However, as technology and techniques have improved, the dependence on beam quality and beam coverage may have become less critical. This thesis investigates different geometric aspects of stereotactic radiotherapy and their impacts on treatment quality. The specific aims are: (1) to explore the treatment outcome of a virtual stereotactic delivery where no geometric limit exists in the sense of physical collisions, which allows the full solid-angle treatment space to be investigated and explores whether a large solid-angle space is necessary to improve treatment; (2) to evaluate the effect of a reduced solid angle with a specific radiotherapy device using real clinical cases; (3) to investigate how the quality of a single beam influences treatment outcome when multiple overlapping beams are in use; and (4) to study the feasibility of using the novel treatment method of lattice radiotherapy with an existing stereotactic device for treating breast cancer. All these aims were investigated with the use of inverse planning optimization and Monte-Carlo based particle transport simulations.
An Enhanced Convolutional Neural Architecture with Residual Module for Mri Brain Image Classification System
Kumar, S Mohan
Yadav, K.P.
Turkish Journal of Physiotherapy and Rehabilitation2021Journal Article, cited 0 times
Website
Deep Learning
Classification
REMBRANDT
Computer Aided Diagnosis (CADx)
Deep Neural Networks (DNNs) have played an important role in the analysis of images and signals, owing to their ability to abstract features at great depth. In the field of medical image processing, DNNs provide a recognition method for classifying abnormalities in medical images. In this paper, a DNN-based Magnetic Resonance Imaging (MRI) brain image classification system with a modified residual module, named Pyramid Design of Residual (PDR), is developed. The conventional residual modules are arranged in a pyramid-like architecture. MRI image classification tests performed on the REpository of Molecular BRAin Neoplasia DaTa (REMBRANDT) database demonstrate that the DNN-PDR system can improve accuracy. The classification test results also show notable improvement in terms of accuracy (99.5%), specificity (100%) and sensitivity (99%). A comparison between the DNN-PDR system and existing systems is also given.
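For context, the generic residual module that such pyramid designs rearrange looks roughly like the PyTorch sketch below; the pyramid arrangement itself is not reproduced, and channel sizes are illustrative:

```python
# Sketch: a standard residual block with an identity shortcut (generic, not PDR).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))   # shortcut eases gradient flow

x = torch.randn(1, 32, 64, 64)              # e.g. feature maps from an MRI slice
print(ResidualBlock(32)(x).shape)
```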
Textural Analysis of Tumour Imaging: A Radiomics Approach
Conventionally, tumour characteristics are assessed by performing a biopsy. These biopsies are invasive and subject to the problem of tumour heterogeneity. However, analysis of imaging data may render the need for such biopsies obsolete. This master's dissertation describes how images of tumour masses can be post-processed to classify the tumours into a variety of clinical response classes. Tumour images obtained using both computed tomography and magnetic resonance imaging are analysed. The analysis of these images is done using a radiomics approach, which converts the imaging data into a high-dimensional mineable feature space. The features considered are first-order statistics, texture features, wavelet-based features and shape parameters. Post-processing techniques applied to this feature space include k-means clustering, assessment of stability and prognostic performance, and machine learning techniques, including both random forests and neural networks. Results from these analyses show that the radiomics features can be correlated with different clinical response classes and can serve as input data to create predictive models, with correct prediction rates up to 63.9% in CT and 66.0% in MRI. Furthermore, a radiomics signature can be created that consists of four features and is capable of predicting clinical response factors with almost the same accuracy as obtained using the entire data space. Keywords: Radiomics, texture analysis, lung tumour, CT, brain tumour, MRI, clustering, random forest, neural network, machine learning, radiomics signature, biopsy, tumour heterogeneity.
Conditional random fields improve the CNN-based prostate cancer classification performance
Prostate cancer is a condition with life-threatening implications but without clearly identified causes. Several diagnostic procedures can be used, ranging from human-dependent and very invasive to state-of-the-art non-invasive medical imaging. With recent academic and industry focus on the deep learning field, novel research has been performed on how to improve prostate cancer diagnosis using Convolutional Neural Networks to interpret Magnetic Resonance images. Conditional Random Fields have achieved outstanding results in the image segmentation task by promoting homogeneous classification at the pixel level. A new implementation, CRF-RNN, defines Conditional Random Fields by means of convolutional layers, allowing end-to-end training of the feature extractor and classifier models. This work tries to repurpose CRFs for the image classification task, a more traditional sub-field of image analysis, in a way that, to the best of the author's knowledge, has not been implemented before. To achieve this, a purpose-built architecture was refitted, adding a CRF layer as a feature extraction step. To serve as the implementation's benchmark, a multi-parametric Magnetic Resonance Imaging dataset was used, initially provided for the PROSTATEx Challenge 2017 and collected by Radboud University. The results are very promising, showing an increase in the network's classification quality.
Automatic Prostate Cancer Segmentation Using Kinetic Analysis in Dynamic Contrast-Enhanced MRI
Lavasani, S Navaei
Mostaar, A
Ashtiyani, M
Journal of Biomedical Physics & Engineering2018Journal Article, cited 0 times
Website
QIN PROSTATE
DCE-MRI
Prostate Cancer
Semi-quantitative Feature
Wavelet Kinetic Feature
Segmentation
Quantitative neuroimaging with handcrafted and deep radiomics in neurological diseases
Lavrova, Elizaveta
2024Thesis, cited 0 times
Dissertation
Thesis
Radiomics
LGG-1p19qDeletion
TCGA-LGG
neuroimaging
medical image analysis
clinical decision support
Magnetic Resonance Imaging (MRI)
Deep learning
The motivation behind this thesis is to explore the potential of "radiomics" in the field of neurology, where early diagnosis and accurate treatment selection are crucial for improving patient outcomes. Neurological diseases are a major cause of disability and death globally, and there is a pressing need for reliable imaging biomarkers to aid in disease detection and monitoring. While radiomics has shown promising results in oncology, its application in neurology remains relatively unexplored. Therefore, this work aims to investigate the feasibility and challenges of implementing radiomics in the neurological context, addressing various limitations and proposing potential solutions. The thesis begins with a demonstration of the predictive power of radiomics for identifying important diagnostic biomarkers in neuro-oncology. Building on this foundation, the research then delves into radiomics in non-oncological neurology, providing an overview of the pipeline steps, potential clinical applications, and existing challenges. Despite promising results in proof-of-concept studies, the field faces limitations, mostly data-related, such as small sample sizes, retrospective nature, and lack of external validation. To explore the predictive power of radiomics in non-oncological tasks, a radiomics approach was implemented to distinguish between multiple sclerosis patients and normal controls. Notably, radiomic features extracted from normal-appearing white matter were found to contain distinctive information for multiple sclerosis detection, confirming the hypothesis of the thesis. To overcome the data harmonization challenge, in this work quantitative mapping of the brain was used. Unlike traditional imaging methods, quantitative mapping involves measuring the physical properties of brain tissues, providing a more standardized and consistent data representation. By reconstructing the physical properties of each voxel based on multi-echo MRI acquisition, quantitative mapping produces data that is less susceptible to domain-specific biases and scanner variability. Additionally, the insights gained from quantitative mapping are building the bridge toward the physical and biological properties of brain tissues, providing a deeper understanding of the underlying pathology. Another crucial challenge in radiomics is robust and fast data labeling, particularly segmentation. A deep learning method was proposed to perform automated carotid artery segmentation in stroke at-risk patients, surpassing current state-of-the-art approaches. This novel method showcases the potential of automated segmentation to enhance radiomics pipeline implementation. In addition to addressing specific challenges, the thesis also proposes a community-driven open-source toolbox for radiomics, aimed at enhancing pipeline standardization and transparency. This software package would facilitate data curation and exploratory analysis, fostering collaboration and reproducibility in radiomics research. Through an in-depth exploration of radiomics in neuroimaging, this thesis demonstrates its potential to enhance neurological disease diagnosis and monitoring. By uncovering valuable information from seemingly normal brain tissues, radiomics holds promise for early disease detection. Furthermore, the development of innovative tools and methods, including deep learning and quantitative mapping, has the potential to address data labeling and harmonization challenges. 
Looking to the future, embracing larger, diverse datasets and longitudinal studies will further enhance the generalizability and predictive power of radiomics in neurology. By addressing the challenges identified in this thesis and fostering collaboration within the research community, radiomics can advance toward clinical implementation, revolutionizing precision medicine in neurology.
Machine Learning Models on Prognostic Outcome Prediction for Cancer Images with Multiple Modalities
Machine learning algorithms have been applied to predict different prognostic outcomes for many diseases by directly using medical images. However, the higher resolution of various medical imaging modalities and new imaging feature extraction frameworks bring new challenges for predicting prognostic outcomes. Compared to traditional radiology practice, which is based only on visual interpretation and simple quantitative measurements, medical imaging features can dig deeper within medical images and potentially provide further objective support for clinical decisions. In this dissertation, we cover three projects that apply or design machine learning models to predict prognostic outcomes using various types of medical images.
Deep learning for magnetic resonance imaging-genomic mapping of invasive breast carcinoma
To identify MRI-based radiomic features that could be obtained automatically by a deep learning (DL) model and could predict the clinical characteristics of breast cancer (BC), and to explain the potential underlying genomic mechanisms of the predictive radiomic features. A denoising autoencoder (DA) was developed to retrospectively extract 4,096 phenotypes from the MRI of 110 BC patients collected by The Cancer Imaging Archive (TCIA). The associations of these phenotypes with genomic features (commercialized gene signatures, expression of risk genes, and biological pathway activities extracted from the same patients' mRNA expression collected by The Cancer Genome Atlas (TCGA)) were tested with linear mixed effect (LME) models. A least absolute shrinkage and selection operator (LASSO) model was used to identify the most predictive MRI phenotypes for each clinical phenotype: tumor size (T), lymph node metastasis (N), and status of the estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2). More than 1,000 of the 4,096 MRI phenotypes were associated with the activities of risk genes, gene signatures, and biological pathways (adjusted P-value < 0.05). High performance was obtained in the prediction of the status of T, N, ER, PR, and HER2 (AUC > 0.9). The identified MRI phenotypes also show significant power to stratify BC tumors. DL-based automatic MRI features performed very well in predicting clinical characteristics of BC, and these phenotypes were identified to have genomic significance.
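A hedged sketch of the L1-penalized selection step described above, with random stand-in data and an L1-regularized logistic regression playing the role of the LASSO-style selector for a binary clinical label:

```python
# Sketch: selecting predictive MRI phenotypes via L1 regularization (dummy data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(110, 4096))     # 110 patients x 4096 autoencoder phenotypes
y = rng.integers(0, 2, size=110)     # dummy binary label, e.g. ER status

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])  # phenotypes with nonzero weights
print(f"{selected.size} phenotypes retained by the L1 penalty")
```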
Evaluating the Interference of Noise when Performing MRI Segmentation
Lung cancer is diagnosed through the detection and interpretation of (pulmonary) lung nodules, small masses of tissue in a patient's lung. In order to determine a patient's risk of lung cancer, radiologists assess each of these nodules' malignancy risk based on characteristics such as location, size and shape. The task of lung nodule malignancy classification has been shown to be successfully solved by deep learning models, but these models are still susceptible to over-confident or wrong predictions, and it is difficult to understand the reasoning behind their predictions because of the models' black-box nature. As a result, medical experts lack trust in these models, which hinders their adoption in practice. This lack of trust can be addressed through the fields of explainable AI (XAI) and visual analytics (VA). Explainable AI addresses the reasoning behind the decisions of machine learning models through several explainability techniques. Visual analytics, on the other hand, focuses on the transparent communication of the model's predictions as well as on solving complex analysis tasks. We propose LungVISX, a system designed to explain lung nodule malignancy classification by implementing explainability techniques in a visual analytics tool, enabling experts to explore and analyze the predictions of a nodule malignancy classification model. We address explainability through a model that incorporates the nodule characteristics in its decisions. Moreover, ensembles, which provide the uncertainty of predictions, and attribution methods, which provide location-based information for these predictions, are used to explain the model's decisions. The visual analytics tool allows for complex analysis of the explanations of the models. A nodule can be compared to its cohort in terms of characteristics and malignancy, both for the prediction score and its uncertainty. Moreover, detection and analysis of important and uncertain areas of a nodule, related to characteristic and malignancy predictions, can be performed. To our knowledge, no tool has been proposed that provides such an exploration of explainable methods in the context of lung nodule malignancy classification. The value of the proposed system has been assessed through use cases, model performance and a user study with three radiologists. The use cases explore and illustrate the capabilities of the visual tool. Model performance and model interpretability face a trade-off, as incorporating characteristic predictions in the model led to lower performance. However, the radiologists evaluated the final system as interpretable and effective, highlighting the potential of the tool for explaining the reasoning of a lung cancer malignancy classification model.
Quantitative cone-beam computed tomography reconstruction for radiotherapy planning
Radiotherapy planning involves the calculation of dose deposition throughout the patient, based upon quantitative electron density images from computed tomography (CT) scans taken before treatment. Cone beam CT (CBCT), consisting of a point source and flat panel detector, is often built onto radiotherapy delivery machines and used during a treatment session to ensure alignment of the patient to the plan. If the plan could be recalculated throughout the course of treatment, then margins of uncertainty and toxicity to healthy tissues could be reduced. However, CBCT reconstructions are normally too poor to be used as the basis of planning, due to insufficient sampling, beam hardening and a high level of scatter. In this work, we investigate reconstruction techniques to enable dose calculation from CBCT. Firstly, we develop an iterative method for directly inferring electron density from the raw X-ray measurements, which is robust to both low doses and polyenergetic artefacts from hard bone and metallic implants. Secondly, we supplement this with a fast integrated scatter model, also able to take into account the polyenergetic nature of the diagnostic X-ray source. Finally, we demonstrate the ability to provide accurate dose calculation using our methodology in numerical and physical experiments. Not only does this unlock the capability to perform CBCT radiotherapy planning, offering more targeted and less toxic treatment, but the developed techniques are also applicable and beneficial for many other CT applications.
“One Stop Shop” for Prostate Cancer Staging using Imaging Biomarkers and Spatially Registered Multi-Parametric MRI
Mayer, Rulon
2020Patent, cited 0 times
Prostate
Biomarker
Multi-parametric MRI
patent
EQUIPMENT TO ADDRESS INFRASTRUCTURE AND HUMAN RESOURCE CHALLENGES FOR RADIOTHERAPY IN LOW-RESOURCE SETTINGS
Millions of people in low- and middle-income countries (LMICs) are without access to radiation therapy, and as population growth rates in these regions increase and lifestyle risk factors associated with cancer become more prevalent, the cancer burden will only rise. There are a multitude of reasons for this lack of access, but two recurring themes are the lack of affordable and reliable teletherapy units and insufficient properly trained staff to deliver high quality care. The purpose of this work was to investigate two proposed efforts to improve access to radiotherapy in low-resource areas: an upright radiotherapy chair (to facilitate low-cost treatment devices) and a fully automated treatment planning strategy. A fixed-beam patient treatment device would allow for reduced upfront and ongoing costs of teletherapy machines. The enabling technology for such a device is the immobilization chair. A rotating seated patient not only allows for a low-cost fixed treatment machine but also has dosimetric and comfort advantages. We examined the inter- and intra-fraction setup reproducibility and showed that both are less than 3 mm, similar to reports for the supine position. The head-and-neck treatment site, one of the most challenging sites for treatment planning, greatly benefits from the use of advanced treatment planning strategies. These strategies, however, require time-consuming normal tissue and target contouring and complex plan optimization. An automated treatment planning approach could reduce the additional number of medical physicists (the primary treatment planners) needed in LMICs by up to half. We used in-house algorithms, including multi-atlas contouring and quality assurance checks, combined with tools in the Eclipse Treatment Planning System®, to automate every step of the treatment planning process for head-and-neck cancers. Requiring only the patient CT scan, patient details including dose and fractionation, and contours of the gross tumor volume, high quality treatment plans can be created in less than 40 minutes.
A Neural Network Approach to Deformable Image Registration
Deformable image registration (DIR) is an important component of a patient's radiation therapy treatment. During the planning stage, it combines complementary information from different imaging modalities and time points. During treatment, it aligns the patient to a reproducible position for accurate dose delivery. As the treatment progresses, it can inform clinicians of important changes in anatomy which trigger plan adjustment. And finally, after the treatment is complete, registering images at subsequent time points can help to monitor the patient's health. The body's natural non-rigid motion makes DIR a complex challenge. Recently, neural networks have shown impressive improvements in image processing and have been leveraged for DIR tasks. This thesis is a compilation of neural network-based approaches addressing lingering issues in medical DIR, namely 1) multi-modality registration, 2) registration with different scan extents, and 3) modeling large motion in registration. For the first task we employed a cycle-consistent generative adversarial network to translate images in the MRI domain to the CT domain, such that the moving and target images were in a common domain. DIR could then proceed as a synthetically bridged mono-modality registration. The second task used advances in network-based inpainting to artificially extend images beyond their scan extent. The third task leveraged axial self-attention networks' ability to learn long-range interactions to predict the deformation in the presence of large motion. For all these studies we used images from the head and neck, which exhibit all of these challenges, although the results can be generalized to other parts of the anatomy. The results of our experiments yielded networks that showed significant improvements in multi-modal DIR relative to traditional methods. We also produced a network which can successfully predict missing tissue, and demonstrated a DIR workflow that is independent of scan length. Finally, we trained a network whose accuracy is a balance between large and small motion prediction, and which opens the door to non-convolution-based DIR. By leveraging the power of artificial intelligence, we demonstrate a new paradigm in deformable image registration. Neural networks learn new patterns and connections in imaging data which go beyond the hand-crafted features of traditional image processing. This thesis shows how each step of registration, from the image pre-processing to the registration itself, can benefit from this exciting and cutting-edge approach.
Detection of Lung Cancer Nodule on CT scan Images by using Region Growing Method
Mhetre, Rajani R
Sache, Rukhsana G
International Journal of Current Trends in Engineering & Research2016Journal Article, cited 0 times
Website
LIDC-IDRI
Radiomics
Predicting survival status of lung cancer patients using machine learning
The 5-year survival rate of patients with metastasized non-small cell lung cancer (NSCLC) who received chemotherapy was less than 5% (Kathryn C. Arbour, 2019). The ability to predict a patient's survival status, i.e. alive or deceased at a given time in the future, is important from at least two standpoints: a) from a clinical standpoint, it enables clinicians to provide optimal delivery of healthcare, and b) from a personal standpoint, it provides the patient's family with opportunities to plan their life ahead and potentially cope with the emotional aspect of the loss of life. In this thesis, we investigate different approaches for predicting the survival status of patients suffering from non-small cell lung cancer. In Chapter 2, we review the background of machine learning and related work in cancer prediction, followed by the steps to take before applying machine learning classifiers to a training dataset. In Chapter 3, we present the different classifiers on which our analysis will be performed, and later in the chapter we list evaluation metrics for measuring performance. In Chapter 4, the dataset and the results from the different tests performed on the training data are discussed. In the last chapter, we conclude our findings and present suggestions for future work.
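As a generic sketch of the kind of survival-status classification pipeline such a thesis describes (the specific classifiers, features and data are not given here, so everything below is a stand-in), one might write:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical feature matrix X (clinical/imaging features) and binary
# survival labels y (0 = deceased, 1 = alive) for NSCLC patients.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```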
In this work, we present a novel method to segment brain tumors using deep learning. An accurate brain tumor segmentation is key for a patient to get the right treatment and for the doctor who must perform surgery, and the genetic differences that exist between patients, even for the same kind of tumor, make accurate segmentation all the more crucial. To improve on state-of-the-art methods, we turn to deep learning, a branch of machine learning that attempts to model high-level abstractions in data and that has provided major breakthroughs in many different areas, including segmentation. We use Convolutional Neural Networks (CNNs), and we evaluate our results by comparing our method against the best results obtained in the Brain Tumor Segmentation Challenge (BRATS).
Towards Explainable Deep Learning in Oncology: Integrating EfficientNet-B7 with XAI techniques for Acute Lymphoblastic Leukaemia
Acute Lymphoblastic Leukaemia (ALL) presents a serious risk to human health due to its rapid progression and impact on the body's blood-producing system. An accurate diagnosis derived through investigations plays a crucial role in formulating effective treatment plans that can influence the likelihood of patient recovery. In the pursuit of improving diagnostic accuracy, diverse Machine Learning (ML) and Deep Learning (DL) approaches have been employed, demonstrating significant improvement in analyzing intricate biomedical data for identifying ALL. However, the complex nature of these algorithms often makes them difficult to comprehend, posing challenges for patients, medical professionals, and the wider community. To address this issue, it is essential to clarify the functioning of these ML/DL models, strengthening trust and providing users with a clearer understanding of diagnostic outcomes. This paper introduces an innovative framework for ALL diagnosis by incorporating the EfficientNet-B7 architecture with Explainable Artificial Intelligence (XAI) methods. First, the proposed model accurately classified ALL utilizing the C-NMC-19 and Taleqani Hospital datasets. The efficacy of the proposed model was rigorously validated using established evaluation metrics, notably AUC, mAP, accuracy, precision, recall, and F1-score. Second, the XAI approaches, namely Grad-CAM, LIME and Integrated Gradients (IG), were applied to explain the proposed model's decisions. Our contribution, pioneering the explanation of EfficientNet-B7 decisions using XAI for the diagnosis of ALL, sets a new benchmark for trust and transparency in the medical field.
Lung Nodule Segmentation for Explainable AI-Based Cancer Screening
We present a novel approach for the segmentation and identification of lung nodules in CT scans, for the purpose of explainable-AI-assisted screening. Our segmentation approach combines the U-Net segmentation architecture with a graph-based connected component analysis for false positive nodule identification; CADe systems with a high true nodule detection rate and few false positive nodules are desired. We also develop a 3D nodule dataset that can be used to build explainable classification models for nodule malignancy and biomarker estimation. We train and evaluate the segmentation model based on the percentage of true nodules it identifies within the LIDC dataset, which contains 1018 CT scans and nodule annotations marked by four board-certified radiologists. We further present results of the segmentation and nodule filtering algorithm and a description of the generated 3D nodule dataset.
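The abstract pairs U-Net segmentation with connected-component-based false positive filtering. A minimal sketch of the filtering step, assuming a binary 3D output mask and a simple size criterion (the paper's graph-based analysis is more involved):

```python
import numpy as np
from scipy import ndimage

def filter_nodule_candidates(mask: np.ndarray, min_voxels: int = 30):
    """Label connected components in a binary segmentation mask and
    discard components too small to be plausible nodules."""
    labeled, n = ndimage.label(mask)                  # 3D connected components
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    keep = np.zeros_like(mask)
    for comp_id, size in enumerate(sizes, start=1):
        if size >= min_voxels:                        # size-based FP rejection
            keep[labeled == comp_id] = 1
    return keep

# Usage: pass the thresholded U-Net output; only components of at least
# min_voxels voxels survive as nodule candidates.
```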
Automated Brain Lesion Detection and Segmentation Using Magnetic Resonance Images
A Proposal for the Use of Scientific Workflows to Define Pipelines for Content-Based Medical Image Retrieval in a Distributed Environment
A Neuro-Fuzzy Based System for the Classification of Cells as Cancerous or Non-Cancerous
Omotosho, Adebayo
Oluwatobi, Asani Emmanuel
Oluwaseun, Ogundokun Roseline
Chukwuka, Ananti Emmanuel
Adekanmi, Adegun
International Journal of Medical Research & Health Sciences2018Journal Article, cited 0 times
Website
Algorithm Development
lung cancer
neuro-fuzzy
Differential diagnosis of low- and high-grade gliomas using radiomics and deep learning fusion signatures based on multiple magnetic resonance imaging sequences
Cancer is hard to cure, and radiation therapy is one of the most popular treatment modalities. Even though the benefits of radiation therapy are undeniable, it still has possible side effects. To avoid severe side effects, delivering optimal radiation doses to patients, supported by clinical evidence, is crucial. Intensity-modulated radiation therapy (IMRT) is an advanced radiation therapy technique and is the focus of this thesis. One important step when creating an IMRT treatment plan is radiation beam geometry generation, which means choosing the number of radiation beams and their directions. The primary goal of this thesis was to find good gantry angles for IMRT plans by combining computer graphics and machine learning. To aid the plan generation process, a new method called reverse beam was introduced in this work. The solution consists of two stages: angle discovery and angle selection. In the first stage, an algorithm based on the ray casting technique finds all potential beam angles. In the second stage, with a predefined beam number, the K-means clustering algorithm selects the gantry angles based on the resulting clusters. The proposed method was tested against a non-small cell lung cancer dataset from The Cancer Imaging Archive. Using IMRT plans with seven equidistant fields and 45° collimator rotations, generated by the Ethos therapy system from Varian Medical Systems, as a baseline for comparison, the plans generated by the reverse beam method demonstrated good performance, avoiding organs while targeting tumors.
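A minimal sketch of the angle-selection stage, assuming the candidate angles produced by ray casting are clustered on the unit circle with K-means (the thesis may embed and select angles differently):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical candidate gantry angles (degrees) surviving the ray-casting
# stage, i.e. directions with an unobstructed view of the target.
candidate_angles = np.array([10, 14, 55, 60, 63, 120, 125, 200, 204, 300, 310.0])

# Embed angles on the unit circle so clustering respects their periodicity
# (0 degrees and 359 degrees should land near each other).
theta = np.deg2rad(candidate_angles)
points = np.column_stack([np.cos(theta), np.sin(theta)])

k = 5  # predefined number of beams
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)

# Convert cluster centers back to gantry angles.
beam_angles = np.rad2deg(np.arctan2(km.cluster_centers_[:, 1],
                                    km.cluster_centers_[:, 0])) % 360
print(np.sort(beam_angles))
```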
A Reversible and Imperceptible Watermarking Approach for Ensuring the Integrity and Authenticity of Brain MR Images
Qasim, Asaad Flayyih
2019Thesis, cited 0 times
Thesis
Dissertation
BRAIN
Magnetic Resonance Imaging (MRI)
The digital medical workflow has many circumstances in which image data can be manipulated, both within secured Hospital Information Systems (HIS) and outside, as images are viewed, extracted and exchanged. This raises ethical and legal concerns regarding the modification of image details that are crucial in medical examinations. Digital watermarking is recognised as a robust technique for enhancing trust within medical imaging by detecting alterations applied to medical images. Despite its efficiency, digital watermarking has not been widely used in medical imaging, and existing watermarking approaches often lack validation of their appropriateness for medical domains. In particular, several research gaps have been identified: (i) essential requirements for the watermarking of medical images are not well defined; (ii) no standard approach can be found in the literature to evaluate the imperceptibility of watermarked images; and (iii) no study has been conducted before to test digital watermarking in a medical imaging workflow. This research investigates the design, analysis and application of digital watermarking for medical images to confirm that manipulations can be detected and tracked. In addressing these gaps, a number of original contributions are presented. A new reversible and imperceptible watermarking approach is presented to detect manipulations of brain Magnetic Resonance (MR) images, based on the Difference Expansion (DE) technique. Experimental results show that the proposed method, whilst fully reversible, can also realise a watermarked image with low degradation for a reasonable and controllable embedding capacity. This is fulfilled by encoding the data into smooth regions (blocks that have the least differences between their pixel values) inside the Region of Interest (ROI) of the medical images, and by eliminating the large location map (the locations of pixels used for encoding the data) normally required at extraction to retrieve the encoded data. This compares favourably to outcomes reported for current state-of-the-art techniques in terms of the visual quality of watermarked images. The approach was also evaluated through a novel visual assessment based on relative Visual Grading Analysis (relative VGA) to define a perceptual threshold at which modifications become noticeable to radiographers. The proposed approach is then integrated into medical systems to verify its validity and applicability in a real application scenario of medical imaging, where medical images are generated, exchanged and archived. This enhanced security measure therefore enables the detection of image manipulations by an imperceptible and reversible watermarking approach, which may establish increased trust in the digital medical imaging workflow.
Detection, quantification, malignancy prediction and growth forecasting of pulmonary nodules using deep learning in follow-up CT scans
Nowadays, lung cancer assessment is a complex and tedious task mainly performed by radiological visual inspection of suspicious pulmonary nodules, using computed tomography (CT) scan images taken of patients over time. Several computational tools relying on conventional artificial intelligence and computer vision algorithms have been proposed for supporting lung cancer detection and classification. These solutions mostly rely on the analysis of individual lung CT images of patients and on the use of hand-crafted image descriptors. Unfortunately, this makes them unable to cope with the complexity and variability of the problem. Recently, the advent of deep learning has led to a major breakthrough in the medical image domain, outperforming conventional approaches. Despite recent promising achievements in nodule detection, segmentation, and lung cancer classification, radiologists are still reluctant to use these solutions in their day-to-day clinical practice. One of the main reasons is that current solutions do not provide support for automatic analysis of the temporal evolution of lung tumours. The difficulty of collecting and annotating longitudinal lung CT cases to train models may partially explain the lack of deep learning studies that address this issue. In this dissertation, we investigate how to automatically provide lung cancer assessment through deep learning algorithms and computer vision pipelines, especially taking into consideration the temporal evolution of pulmonary nodules. To this end, our first goal consisted of obtaining accurate methods for lung cancer assessment (diagnostic ground truth) based on individual lung CT images. Since these types of labels are expensive and difficult to collect (e.g. usually after biopsy), we proposed to train different deep learning models, based on 3D convolutional neural networks (CNN), to predict nodule malignancy based on radiologists' visual inspection annotations (which are reasonable to obtain). These classifiers were built on ground truth consisting of the malignancy, position and size of the nodules to classify. Next, we evaluated different ways of synthesizing the knowledge embedded in the nodule malignancy neural network into an end-to-end pipeline aimed to detect pulmonary nodules and predict lung cancer at the patient level, given a lung CT image. The positive results confirmed the convenience of using CNNs for modelling nodule malignancy, according to radiologists, for the automatic prediction of lung cancer. Next, we focused on the analysis of lung CT image series. We first faced the problem of automatically re-identifying pulmonary nodules across different lung CT scans of the same patient. To do this, we present a novel method based on a Siamese neural network (SNN) to rank similarity between nodules, bypassing the need for image registration. This change of paradigm avoided introducing potentially erroneous image deformations and provided computationally faster results. Different configurations of the SNN were examined, including the application of transfer learning, the use of different loss functions, and the combination of several feature maps from different network levels. This method obtained state-of-the-art performance for nodule matching, both in isolation and embedded in an end-to-end nodule growth detection pipeline. Afterwards, we moved to the core problem of supporting radiologists in the longitudinal management of lung cancer. For this purpose, we created a novel end-to-end deep learning pipeline, composed of four stages, that fully automates lung cancer assessment from the detection of nodules to the classification of cancer, through the detection of growth in the nodules. In addition, the pipeline integrated a novel approach for nodule growth detection, which relies on a recent hierarchical probabilistic segmentation network adapted to report uncertainty estimates. A second novel method was also introduced for lung cancer nodule classification, integrating into a two-stream 3D-CNN the estimated nodule malignancy probabilities derived from a pre-trained nodule malignancy network. The pipeline was evaluated in a longitudinal cohort and the reported outcomes (i.e. nodule detection, re-identification, growth quantification, and malignancy prediction) were comparable with state-of-the-art work focused on solving one or a few of the functionalities of our pipeline. Thereafter, we also investigated how to help clinicians prescribe more accurate tumour treatments and surgical planning. We created a novel method to forecast nodule growth given a single image of the nodule. In particular, the method relies on a hierarchical, probabilistic and generative deep neural network able to produce multiple consistent future segmentations of the nodule at a given time. To do this, the network learned to model the multimodal posterior distribution of future lung tumour segmentations by using variational inference and injecting the posterior latent features. Finally, by applying Monte-Carlo sampling to the outputs of the trained network, we estimated the expected tumour growth mean and the uncertainty associated with the prediction. Although further evaluation in a larger cohort would be highly recommended, the proposed methods reported accurate results to adequately support the radiological workflow of pulmonary nodule follow-up. Beyond this specific application, the outlined innovations, such as the methods for integrating CNNs into computer vision pipelines, the re-identification of suspicious regions over time based on SNNs without the need to warp the inherent image structure, and the proposed deep generative and probabilistic network for modelling tumour growth considering ambiguous images and label uncertainty, could be easily applicable to other types of cancer (e.g. pancreas), clinical diseases (e.g. Covid-19) or medical applications (e.g. therapy follow-up).
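The Monte-Carlo step at the end can be sketched generically: draw repeated segmentation samples from a probabilistic network and summarize the resulting volumes. Everything below, including the stand-in sampler, is hypothetical:

```python
import numpy as np

def mc_growth_estimate(sample_segmentation, n_samples=100):
    """Monte-Carlo summary of a generative segmentation network.

    sample_segmentation: callable returning one sampled future binary
    segmentation (3D array) per call; stands in for a trained
    probabilistic network. Returns expected tumour volume (in voxels)
    and its uncertainty.
    """
    volumes = np.array([sample_segmentation().sum() for _ in range(n_samples)])
    return volumes.mean(), volumes.std()

# Hypothetical stand-in sampler: random spheres of varying radius.
def fake_sampler(shape=(32, 32, 32)):
    z, y, x = np.indices(shape)
    r = np.random.uniform(5, 9)
    return ((z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 <= r ** 2).astype(np.uint8)

mean_vol, vol_sigma = mc_growth_estimate(fake_sampler)
```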
REPRESENTATION LEARNING FOR BREAST CANCER LESION DETECTION
Raimundo, João Nuno Centeno
2022Thesis, cited 0 times
Thesis
Duke-Breast-Cancer-MRI
Computer Aided Detection (CADe)
BREAST
Machine Learning
Convolutional Neural Network (CNN)
Magnetic Resonance Imaging (MRI)
Graphics Processing Units (GPU)
Breast Cancer (BC) is the type of cancer with the second highest incidence in women and is responsible for the deaths of hundreds of thousands of women every year. However, when detected in the early stages of the disease, treatment methods have proven to be very effective in increasing life expectancy and, in many cases, patients fully recover. Several medical image modalities, such as mammography (MG, X-ray based), ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and tomosynthesis, have been explored to support radiologists/physicians in clinical decision-making workflows for the detection and diagnosis of BC. MG is the most used imaging modality worldwide; however, recent research has demonstrated that breast MRI is more sensitive than mammography at finding pathological lesions and is not limited by breast density issues. It is therefore currently a trend to introduce MRI-based breast assessment into clinical workflows (screening and diagnosis), but compared to MG the workload of radiologists/physicians increases: MRI assessment is a more time-consuming task, and its effectiveness is affected not only by the variety of morphological characteristics of each specific tumor phenotype and its origin, but also by human fatigue. Computer-Aided Detection (CADe) methods have been widely explored, primarily in mammography screening tasks, but detection remains an unsolved problem in breast MRI settings. This work aims to explore and validate BC detection models using machine (deep) learning algorithms. As the main contribution, we have developed and validated an innovative method that improves the breast MRI preprocessing phase to select the patient's image slices and bounding boxes representing pathological lesions. With this, it is possible to build a more robust training dataset to feed the deep learning models, reducing computation time and the dimension of the dataset, and, more importantly, to identify with high accuracy the specific regions (bounding boxes) of each patient image in which a possible pathological lesion (tumor) has been identified. In experimental settings using a fully annotated, publicly released dataset comprising a total of 922 MRI-based BC patient cases, we achieved, with the most accurate trained model, an accuracy rate of 97.83%, and subsequently, applying a ten-fold cross-validation method, a mean accuracy across the trained models of 94.46% with an associated standard deviation of 2.43%.
Intelligent texture feature extraction and indexing for MRI image retrieval using curvelet and PCA with HTF
Rajakumar, K
Muttan, S
Deepa, G
Revathy, S
Priya, B Shanmuga
Advances in Natural and Applied Sciences2015Journal Article, cited 0 times
Website
Radiomics
Content based image retrieval (CBIR)
Magnetic Resonance Imaging (MRI)
BRAIN
BREAST
PROSTATE
PHANTOM
MATLAB
With the development of multimedia network technology and the rapid increase in image applications, Content Based Image Retrieval (CBIR) has become the most active area in image retrieval research, and its fields of application are becoming ever wider. Most traditional image retrieval systems use color, texture, shape and spatial relationships. At present, texture features play a very important role in computer vision and pattern recognition, especially in describing the content of images, yet most texture-based image retrieval systems provide results with insufficient retrieval accuracy. We address this problem with an image retrieval system based on curvelets and PCA with Haralick Texture Features (HTF), proposed in this paper. The combined approach of curvelets and PCA using HTF produced better results than other proposed techniques.
Improving semi-supervised deep learning under distribution mismatch for medical image analysis applications
Deep learning methodologies have shown outstanding success in different image analysis applications. They rely on an abundance of labelled observations to build the model. However, it is frequently expensive to gather labelled observations, making the usage of deep learning models imprudent. Different practical examples of this challenge can be found in the analysis of medical images. For instance, labelling images to solve medical imaging problems requires expensive labelling efforts, as experts (i.e., radiologists) are needed to produce reliable labels. Semi-supervised learning is an increasingly popular alternative approach to deal with small labelled datasets and increase model test accuracy by leveraging unlabelled data. However, in real-world usage settings, an unlabelled dataset might present a different distribution than the labelled dataset (i.e., the labelled dataset was sampled from a target clinic and the unlabelled dataset from a source clinic). There are different causes of a distribution mismatch between the labelled and the unlabelled dataset: a prior probability shift, a set of observations from classes unseen in the labelled dataset, and a covariate shift of the features. In this work, we assess the impact of these phenomena on the state-of-the-art semi-supervised model known as MixMatch. We evaluate the impact of both label and feature distribution mismatch on MixMatch in a real-world application: the classification of chest X-ray images for COVID-19 detection. We also test the performance gain of using MixMatch for malignant cancer detection using mammograms. For both study cases we built new datasets from a private clinic in Costa Rica. We propose different approaches to address the different causes of a distribution mismatch between the labelled and unlabelled datasets. First, regarding the prior probability shift, a simple model-oriented approach is proposed. According to our experiments, the proposed method yielded statistically significant accuracy gains of up to 14%. As for the more challenging distribution mismatch settings caused by a covariate shift in the feature space and by sampling unseen classes in the unlabelled dataset, we propose a data-oriented approach. As an assessment tool, we propose a set of dataset dissimilarity metrics designed to measure how much performance benefit a semi-supervised training regime can get from using one unlabelled dataset over another. In addition, two techniques are proposed to score each unlabelled observation according to how much accuracy its inclusion in the unlabelled training dataset might bring. These scores can be used to discard harmful unlabelled observations. The novel methods use a generic feature extractor to build a feature space in which the metrics and scores are computed. The dataset dissimilarity metrics yielded a linear correlation of up to 90% with the performance of the state-of-the-art MixMatch semi-supervised training algorithm, suggesting that such metrics can be used to assess the quality of an unlabelled dataset. As for the scoring methods for unlabelled data, in our tests, using them to discard harmful unlabelled data increased the performance of MixMatch by around 20%, all in the context of medical image analysis applications.
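As an illustrative toy version of a dataset dissimilarity metric computed in a generic feature space (the thesis' actual metrics are not specified here, so this is an assumption-laden sketch):

```python
import numpy as np

def dataset_dissimilarity(feats_labelled: np.ndarray,
                          feats_unlabelled: np.ndarray) -> float:
    """Toy dissimilarity between two datasets in a shared feature space.

    Both inputs are (n_samples, n_features) arrays produced by a generic
    feature extractor (e.g. a pretrained CNN's penultimate layer). Here
    we use the distance between feature means plus the difference in
    average within-set spread; the thesis' actual metrics may differ.
    """
    mu_gap = np.linalg.norm(feats_labelled.mean(0) - feats_unlabelled.mean(0))
    spread_gap = abs(feats_labelled.std(0).mean() - feats_unlabelled.std(0).mean())
    return mu_gap + spread_gap

# A high score would suggest the unlabelled set is mismatched and may
# hurt semi-supervised training; low-scoring sets (or observations)
# would be the ones worth keeping.
```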
Accelerating Machine Learning with Training Data Management
One of the biggest bottlenecks in developing machine learning applications today is the need for large hand-labeled training datasets. Even at the world's most sophisticated technology companies, and especially at other organizations across science, medicine, industry, and government, the time and monetary cost of labeling and managing large training datasets is often the blocking factor in using machine learning. In this thesis, we describe work on training data management systems that enable users to programmatically build and manage training datasets, rather than labeling and managing them by hand, and present algorithms and supporting theory for automatically modeling this noisier process of training set specification in order to improve the resulting training set quality. We then describe extensive empirical results and real-world deployments demonstrating that programmatically building, managing, and modeling training sets in this way can lead to radically faster, more flexible, and more accessible ways of developing machine learning applications. We start by describing data programming, a paradigm for labeling training datasets programmatically rather than by hand, and Snorkel, an open source training data management system built around data programming that has been used by major technology companies, academic labs, and government agencies to build machine learning applications in days or weeks rather than months or years. In Snorkel, rather than hand-labeling training data, users write programmatic operators called labeling functions, which label data using various heuristic or weak supervision strategies such as pattern matching, distant supervision, and other models. These labeling functions can have noisy, conflicting, and correlated outputs, which Snorkel models and combines into clean training labels without requiring any ground truth using theoretically consistent modeling approaches we develop. We then report on extensive empirical validations, user studies, and real-world applications of Snorkel in industrial, scientific, medical, and other use cases ranging from knowledge base construction from text data to medical monitoring over image and video data. Next, we will describe two other approaches for enabling users to programmatically build and manage training datasets, both currently integrated into the Snorkel open source framework: Snorkel MeTaL, an extension of data programming and Snorkel to the setting where users have multiple related classification tasks, in particular focusing on multi-task learning; and TANDA, a system for optimizing and managing strategies for data augmentation, a critical training dataset management technique wherein a labeled dataset is artificially expanded by transforming data points. Finally, we will conclude by outlining future research directions for further accelerating and democratizing machine learning workflows, such as higher-level programmatic interfaces and massively multi-task frameworks.
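A minimal example of the labeling-function idea using the open-source snorkel package's labeling API on toy data (the heuristics, labels and data below are invented for illustration):

```python
# Requires: pip install snorkel
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

@labeling_function()
def lf_malignant(x):
    # Heuristic: mention of "malignant" suggests a positive example.
    return POSITIVE if "malignant" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_benign(x):
    return NEGATIVE if "benign" in x.text.lower() else ABSTAIN

df = pd.DataFrame({"text": ["malignant mass found", "benign cyst", "unclear"]})
L = PandasLFApplier([lf_malignant, lf_benign]).apply(df)

# Combine the noisy, conflicting votes into probabilistic training
# labels without any ground truth.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L)
probs = label_model.predict_proba(L)
```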
Segmentation of candidates for pulmonary nodules based on computed tomography
Rocha, Maura G. R. da
Saraiva, Willyams M.
Drumond, Patrícia M. L de L.
Carvalho Filho, Antonio O. de
de Sousa, Alcilene D.
2016Conference Paper, cited 0 times
LIDC-IDRI
Computed Tomography (CT)
Image Processing
Segmentation
Automated computer aided diagnosis
Automatic detection
This work presents a methodology for the automatic segmentation of solitary pulmonary nodule candidates using a cellular automaton. Early detection of solitary pulmonary nodules that may become cancerous is essential for patient survival. To assist experts in the identification of these nodules, computer-aided systems are being developed that aim to automate the work of detection and classification. The segmentation stage plays a key role in the automatic detection of lung nodules, as it separates the image into regions that share the same property or characteristic. The methodology presented in this article includes image acquisition, noise elimination, segmentation of the pulmonary parenchyma, and segmentation of solitary pulmonary nodule candidates. The tests were conducted using a set of images from the LIDC-IDRI database containing 739 nodules. The test results show a sensitivity of 95.66%.
High Level Mammographic Information Fusion For Real World Ontology Population
Salem, Yosra Ben
Idodi, Rihab
Ettabaa, Karim Saheb
Hamrouni, Kamel
Solaiman, Basel
Journal of Digital Information Management2017Journal Article, cited 1 times
Website
Ontology
BREAST
Imaging features
Mammography
Magnetic Resonance Imaging (MRI)
In this paper, we propose a novel approach for ontology instantiation from real data in the mammographic domain. In our study, we are interested in handling two breast imaging modalities: mammography and breast MRI. First, we propose to model the content of both image types in ontological representations, since ontologies allow the description of objects from a common perspective. To overcome the ambiguity inherent in representing image entities, we take advantage of possibility theory applied to the ontological representation. Second, the two locally generated ontologies are merged into a unique formal representation using two similarity measures: a syntactic measure and a possibilistic measure. The candidate instances are finally used to populate the global domain ontology in order to enrich the mammographic knowledge base. The approach was validated on a real-world domain and the results were evaluated in terms of precision and recall by an expert.
Towards Generation, Management, and Exploration of Combined Radiomics and Pathomics Datasets for Cancer Research
Saltz, Joel
Almeida, Jonas
Gao, Yi
Sharma, Ashish
Bremer, Erich
DiPrima, Tammy
Saltz, Mary
Kalpathy-Cramer, Jayashree
Kurc, Tahsin
AMIA Summits on Translational Science Proceedings2017Journal Article, cited 4 times
Website
Radiomics
Pathomics
Glioblastoma Multiforme (GBM)
TCGA-LUSC
TCGA-GBM
Non Small Cell Lung Cancer (NSCLC)
Cancer is a complex multifactorial disease state and the ability to anticipate and steer treatment results will require information synthesis across multiple scales from the host to the molecular level. Radiomics and Pathomics, where image features are extracted from routine diagnostic Radiology and Pathology studies, are also evolving as valuable diagnostic and prognostic indicators in cancer. This information explosion provides new opportunities for integrated, multi-scale investigation of cancer, but also mandates a need to build systematic and integrated approaches to manage, query and mine combined Radiomics and Pathomics data. In this paper, we describe a suite of tools and web-based applications towards building a comprehensive framework to support the generation, management and interrogation of large volumes of Radiomics and Pathomics feature sets and the investigation of correlations between image features, molecular data, and clinical outcome.
Classification of Lung CT Images using BRISK Features
Sambasivarao, B.
Prathiba, G.
International Journal of Engineering and Advanced Technology (IJEAT)2019Journal Article, cited 0 times
Website
Lung cancer is a major cause of death in humans. To increase the survival rate, early detection of cancer is required. Lung cancer that starts in the cells of the lung is mainly of two types, i.e., cancerous (malignant) and non-cancerous (benign). In this paper, work is done on lung images obtained from the Society of Photo-Optical Instrumentation Engineers (SPIE) database, which contains normal, benign and malignant images. In this work, 300 images from the database are used, of which 150 are benign and 150 are malignant. Feature points of lung tumor images are extracted using Binary Robust Invariant Scalable Keypoints (BRISK). BRISK attains matching quality comparable to state-of-the-art algorithms at much lower computation time. BRISK divides the pairs of pixels surrounding a keypoint into two subsets: short-distance and long-distance pairs. The orientation of the feature point is calculated from the local intensity gradients of the long-distance pairs, and the short-distance pairs are rotated according to this orientation. These BRISK features are used by a classifier to classify the lung tumors as either benign or malignant. Performance is evaluated by calculating the accuracy.
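A hedged sketch of the described pipeline using OpenCV's BRISK implementation and an SVM; pooling the variable-length descriptor set into one fixed vector by averaging is one simple choice, not necessarily the paper's:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def brisk_descriptor_vector(image: np.ndarray, vec_len: int = 64) -> np.ndarray:
    """Detect BRISK keypoints in a grayscale uint8 image and pool their
    descriptors into one fixed-length vector for a standard classifier."""
    brisk = cv2.BRISK_create()
    _, desc = brisk.detectAndCompute(image, None)
    if desc is None:                        # no keypoints found
        return np.zeros(vec_len)
    return desc.mean(axis=0)[:vec_len]      # simple mean pooling

def train_classifier(images, labels):
    """images: list of grayscale lung ROI arrays; labels: 0 = benign,
    1 = malignant (hypothetical data layout)."""
    X = np.array([brisk_descriptor_vector(im) for im in images])
    return SVC(kernel="rbf").fit(X, labels)
```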
Lung Cancer Detection on CT Scan Images Using Artificial Neural Network
Image processing techniques are widely utilized in several clinical areas to improve images in the early detection and treatment stages, where the time factor is critical for finding abnormalities in target images, particularly in various malignant tumors such as lung cancer. Image quality and accuracy are the core elements of this research: image quality assessment, as well as improvement, depends on the enhancement stage, where low-level pre-processing techniques based on filters are used. Following the segmentation stage, an improved region of the object of interest is obtained and used as the foundation for feature extraction. Based on general features, a normality analysis is made. In this work, the main feature identified for accurate image analysis is the pixel percentage, which helps to recognize the malignant nodules present in the CT scan images and to distinguish whether an image containing a nodule shows a benign or malignant case.
Resolving the molecular complexity of brain tumors through machine learning approaches for precision medicine
Glioblastoma (GBM) tumors are highly aggressive malignant brain tumors and are resistant to conventional therapies. The Cancer Genome Atlas (TCGA) efforts distinguished histologically similar GBM tumors into unique molecular subtypes. The World Health Organization (WHO) has also since incorporated key molecular indicators such as IDH mutations and 1p/19q co-deletions in the clinical classification scheme. The National Neuroscience Institute (NNI) Brain Tumor Resource distinguishes itself as the exclusive collection of patient tumors with corresponding live cells capable of re-creating the full spectrum of the original patient tumor molecular heterogeneity. These cells are thus important to re-create “mouse-patient tumor replicas” that can be prospectively tested with novel compounds, yet have retrospective clinical history, transcriptomic data and tissue paraffin blocks for data mining. My thesis aims to establish a computational framework for the molecular subtyping of brain tumors using machine learning approaches. The applicability of the empirical Bayes model has been demonstrated in the integration of various transcriptomic databases. We utilize predictive algorithms such as template-based, centroid-based, connectivity map (CMAP) and recursive feature elimination combined with random forest approaches to stratify primary tumors and GBM cells. These subtyping approaches serve as key factors for the development of predictive models and eventually, improving precision medicine strategies. We validate the robustness and clinical relevance of our Brain Tumor Resource by evaluating two critical pathways for GBM maintenance. We identify a sialyltransferase enzyme (ST3Gal1) transcriptomic program contributing to tumorigenicity and tumor cell invasiveness. Further, we generate a STAT3 functionally-tuned signature and demonstrate its pivotal role in patient prognosis and chemoresistance. We show that IGF1-R mediates resistance in non-responders to STAT3 inhibitors. Taken together, our studies demonstrate the application of machine learning approaches in revealing molecular insights into brain tumors and subsequently, the translation of these integrative analyses into more effective targeted therapies in the clinics.
BRAIN CANCER DETECTION FROM MRI: A MACHINE LEARNING APPROACH (TENSORFLOW)
COMPUTER AIDED DETECTION OF LUNG CYSTS USING CONVOLUTIONAL NEURAL NETWORK (CNN)
Kishore Sebastian
S. Devi
Turkish Journal of Physiotherapy and Rehabilitation2021Journal Article, cited 0 times
Website
LIDC-IDRI
LUNG
Algorithm Development
Support Vector Machine (SVM)
Lung cancer is one of the most lethal diseases. The survival rate is low if the diagnosis and treatment of a lung tumour are delayed, but survival can be enhanced with timely diagnosis and prompt treatment. The seriousness of the disease calls for a highly efficient system that can identify cancerous growth with a high level of accuracy. Computed Tomography (CT) scans are used to obtain detailed pictures of different body parts; however, it is difficult to scrutinize the presence and extent of cancerous cells in the lungs using these scans, even for professionals. So a new model based on the Mumford-Shah model using convolutional neural network (CNN) classification is proposed in this paper. The proposed model provides output with higher efficiency and accuracy in less time. The metrics used for assessment in this system are classification accuracy, sensitivity, AUC, F-measure, specificity, precision, Brier score and MCC. Finally, the results obtained using SVM are compared, in terms of these metrics, with the results obtained using Decision Tree, KNN, CNN and Adaptive Boosting algorithms, which clearly shows the higher accuracy of the proposed system over the existing systems.
Deep Learning Architectures for Automated Image Segmentation
Image segmentation is widely used in a variety of computer vision tasks, such as object localization and recognition, boundary detection, and medical imaging. This thesis proposes deep learning architectures to improve automatic object localization and boundary delineation for salient object segmentation in natural images and for 2D medical image segmentation. First, we propose and evaluate a novel dilated dense encoder-decoder architecture with a custom dilated spatial pyramid pooling block to accurately localize and delineate boundaries for salient object segmentation. The dilation offers better spatial understanding and the dense connectivity preserves features learned at shallower levels of the network for better localization. Tested on three publicly available datasets, our architecture outperforms the state-of-the-art on one and is very competitive on the other two. Second, we propose and evaluate a custom 2D dilated dense UNet architecture for accurate lesion localization and segmentation in medical images. This architecture can be utilized as a standalone segmentation framework or used as a rich feature-extracting backbone to aid other models in medical image segmentation. Our architecture outperforms all baseline models for accurate lesion localization and segmentation on a new dataset. We furthermore explore the main considerations that should be taken into account for 3D medical image segmentation, among them preprocessing techniques and specialized loss functions.
Analysis and Application of clustering and visualization methods of computed tomography radiomic features to contribute to the characterization of patients with non-metastatic Non-small-cell lung cancer.
Serra, Maria Mercedes
2022Thesis, cited 0 times
Thesis
NSCLC-Radiomics
Radiomic feature
Visualization
Non-Small Cell Lung Cancer (NSCLC)
Background: The lung is the most common site for cancer and has the highest worldwide cancer-related mortality. The routine workup of patients with lung cancer usually includes at least one computed tomography (CT) study prior to the histopathological diagnosis. In the last decade, tools that extract quantitative measures from medical imaging, known as radiomic features, have become increasingly relevant in this domain, including mathematically extracted measures of volume, shape, and texture. Radiomics can quantify tumor phenotypic characteristics non-invasively and could potentially contribute objective elements to support the diagnosis, management and prognosis of these patients in routine clinical practice. Methodology: The LUNG1 dataset from the University of Maastricht, publicly available in The Cancer Imaging Archive, was obtained. Radiomic feature extraction was performed with the pyRadiomics package v3.0.1 using CT scans from 422 non-small cell lung cancer (NSCLC) patients, including manual segmentations of the gross tumor volume. A single data frame was constructed including clinical data, radiomic feature output, CT manufacturer and study acquisition date information. Exploratory data analysis, curation, feature selection, modeling and visualization were performed using R software. Model-based clustering was performed using the VarSelLCM library, both with and without wrapper feature selection. Results: During exploratory data analysis, a lack of independence was found between histology and age and overall stage, and between survival curves and scanner manufacturer model. Features related to the manufacturer model were excluded from further analysis. Additional feature filtering was performed using the MRMR algorithm. In the clustering analysis, both models, with and without variable selection, showed a significant association between the generated partitions and survival curves; the significance of this association was greater for the model with wrapper variable selection, which selected only radiomic variables. The original_shape_VoxelVolume feature showed the highest discriminative power for both models, along with log.sigma.5.0.mm.3D_glszm_LargeAreaLowGrayLevelEmphasis and wavelet_LHL_glszm_LargeAreaHighGrayLevelEmphasis. Clusters with significantly lower median survival were also related to higher clinical T stages, greater mean values of original_shape_VoxelVolume, log.sigma.5.0.mm.3D_glszm_LargeAreaLowGrayLevelEmphasis and wavelet_LHL_glszm_LargeAreaHighGrayLevelEmphasis, and lower mean wavelet.HHL_glcm_ClusterProminence. A weaker relationship was found between histology and the selected clusters. Conclusions: Potential sources of bias, given the relationships between different variables of interest and technical sources, should be taken into account when analyzing this dataset. Aside from the original_shape_VoxelVolume feature, texture features computed on images with LoG and wavelet filters were found to be most significantly associated with different clinical characteristics in the present analysis. Value: This work highlights the relevance of analyzing clinical data and technical sources when performing radiomic analysis. It also goes through the different steps needed to extract, analyze and visualize a high-dimensional dataset of radiomic features, and describes associations between radiomic features and clinical variables, establishing a base for future work.
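For readers unfamiliar with pyRadiomics, a minimal extraction sketch along the lines described above (file paths are placeholders; the filter settings are assumptions, not the thesis' exact configuration):

```python
# Requires: pip install pyradiomics SimpleITK
from radiomics import featureextractor

# Default settings extract shape, first-order and texture classes; the
# LoG- and wavelet-filtered features used above are enabled explicitly.
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableImageTypeByName("LoG", customArgs={"sigma": [5.0]})
extractor.enableImageTypeByName("Wavelet")

# Placeholders for one LUNG1 CT volume and its GTV segmentation mask.
features = extractor.execute("patient_ct.nrrd", "gtv_mask.nrrd")
volume = features["original_shape_VoxelVolume"]
```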
Topological Data Analysis for Medical Imaging and RNA Data Analysis on Tree Spaces
Ideas from the algebraic topology of object data are used to introduce a framework for using persistence landscapes to vectorize objects. These methods are applied to analyze data from The Cancer Imaging Archive (TCIA), using a technique developed earlier for regular digital images. Our study aims at tumor differentiation from medical images, including brain images from CPTAC glioblastoma patients. The results show that persistence landscapes capturing topological features distinguish, on average, between tumor and normal brains. Besides topological object data analysis, asymptotics of sample means on stratified spaces are also introduced and developed in this dissertation. A stratified space is a metric space that admits a filtration by closed subspaces, such that the difference between the d-th indexed subspace and the (d − 1)-th indexed subspace is empty or is a d-dimensional manifold, called the d-th stratum. Examples of stratified sample spaces which are not themselves manifolds include similarity shape spaces, affine shape spaces, projective shape spaces, phylogenetic tree spaces, and graphs. The behavior of the Fréchet sample means differs around singular Fréchet mean points in some stratified spaces, such as open books. The asymptotic results for the Fréchet sample mean are extended from data on spiders, which are open books, to a more general class of stratified spaces that are not open books. Phylogenetic tree spaces are typically stratified spaces encoding genetic information from nucleotide data, such as DNA and RNA. Coronavirus disease 2019 (Covid-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and the raw RNA sequences from SARS-CoV-2 are studied here. The ideas from phylogenetic trees and statistical analysis on stratified spaces are applied to study distributions on phylogenetic tree spaces. A framework is also presented for computing means and applying the Central Limit Theorem (CLT) to provide statistical inference on the data. We apply these methods to analyze RNA sequences of SARS-CoV-2 from multiple sources. By building sample trees and applying the ensuing statistical analysis, we can compare evolutionary results for SARS-CoV-2 vs other coronaviruses.
Combination of fuzzy c-means clustering and texture pattern matrix for brain MRI segmentation
Shijin Kumar, P.S.
Dharun, V.S.
Biomedical Research2017Journal Article, cited 0 times
RIDER NEURO MRI
MRI
BRAIN
Radiomic feature
The process of image segmentation can be defined as splitting an image into different regions, and it is an important step in medical image analysis. We introduce a hybrid tumor tracking and segmentation algorithm for Magnetic Resonance Images (MRI), based on the Fuzzy C-means (FCM) clustering algorithm and a Texture Pattern Matrix (TPM). The key idea is to use texture features along with intensity while performing segmentation. The performance parameters can be improved by using the TPM. FCM is capable of predicting tumor cells with high accuracy: in FCM, homogeneous regions of an image are obtained based on intensity, while the TPM provides details about the spatial distribution of pixels in the image. Experimental results obtained by applying the proposed segmentation method to tumor tracking are presented. Various performance parameters are evaluated by comparing the outputs of the proposed method and the standard Fuzzy C-means algorithm. Computational complexity and computation time are both reduced by this hybrid segmentation method.
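A compact NumPy implementation of standard fuzzy c-means, which could serve as the intensity-clustering component described above (the texture pattern matrix integration is not shown):

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means on a (n_samples, n_features) array X.
    Returns cluster centers and the fuzzy membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                     # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Distances of every sample to every center (small eps avoids /0).
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-10
        U = 1.0 / (d ** (2 / (m - 1)))     # standard FCM membership update
        U /= U.sum(axis=0)
    return centers, U

# For MRI segmentation, X could hold per-pixel intensity (optionally
# augmented with texture values); each pixel is then assigned to the
# cluster with the highest membership.
```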
A Novel Imaging-Genomic Approach to Predict Outcomes of Radiation Therapy
Singh, Apurva
Goyal, Sharad
Rao, Yuan James
Loew, Murray
2019Thesis, cited 0 times
Thesis
Radiogenomics
Radiomics
HNSCC
Head-Neck-PET-CT
TCGA-HNSC
TCGA-LUSC
TCGA-LUAD
TCGA-CESC
K Nearest Neighbor (KNN)
Support Vector Machine (SVM)
Introduction: Tumor regions are populated by various cellular species. Intra-tumor radiogenomic heterogeneity can be attributed to factors including variations in blood flow to different parts of the tumor and variations in gene mutation frequencies. This heterogeneity is further propagated by cancer cells, which adopt an "evolutionarily enlightened" growth approach. This growth, which focuses on progressively developing an adaptive mechanism of strong resistance to therapy, follows a unique pattern in each patient. This makes the development of a uniform treatment technique very challenging and makes the concept of "precision medicine", developed using information unique to each patient, crucial to the development of effective cancer treatment methods. Our study aims to determine whether the information present in the heterogeneity of tumor regions in pre-treatment PET scans and in gene mutation status can measure the efficacy of radiation therapy. We wish to develop a scheme which could predict the effectiveness of therapy at the pre-treatment stage, reduce unnecessary exposure of patients to radiation that would ultimately not be curative, and thus help in choosing alternative cancer therapy measures for the patients under consideration. Materials and methods: Our radiomics analysis was developed using PET scans of 20 patients from the HNSCC database in TCIA (The Cancer Imaging Archive). Clinical data were used to divide the patients into two categories based on the recurrence status of the tumor. Radiation structures are overlain on the PET scans for tumor delineation. Texture features extracted from the tumor regions are reduced using a correlation matrix-based technique and are classified by methods including weighted KNN, linear SVM and bagged trees. Slice-wise classification results are computed, treating each slice as a 2D image and the collection of slices as a 3D volume. Patient-wise results are computed by a voting scheme which assigns to each patient the class label possessed by more than half of its slices. After the voting is complete, the assigned labels are compared to the actual labels to compute the patient-wise classification accuracies. This workflow was tested on a group of 53 patients from the Head-Neck-PET-CT database. We further developed a radiogenomic workflow by combining gene expression features with tumor texture features for a group of 11 patients from our third database, TCGA-HNSC. We developed a geometric transform-based database augmentation method and used it to generate PET scans from images in the existing dataset. To evaluate our analysis, we tested our workflow on patients with tumors at different sites, using scans of different modalities. We included PET scans of 24 lung cancer patients (15 from the TCGA-LUSC (Lung Squamous Cell Carcinoma) and 9 from the TCGA-LUAD (Lung Adenocarcinoma) databases). We used wavelet features along with the existing group of texture features to improve the classification scores, and non-rigid transform-based techniques for database augmentation. We also included MR scans of 54 cervical cancer patients (from the TCGA-CESC (Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma) database) and employed Fisher-based selection for reduction of the high-dimensional feature space. Results: The classification accuracy obtained by the 2D and 3D texture analysis is about 70% for slice-wise classification and 80% for patient-wise classification for the head and neck cancer patients (HNSCC and Head-Neck-PET-CT databases). The overall classification accuracies obtained from the transformed tumor slices are comparable to those from the original tumor slices; thus, geometric transformation is an effective method for database augmentation. The addition of binary genomic features to the texture features (TCGA-HNSC patients) increases the classification accuracies (from 80% to 100% for 2D and from 60% to 100% for 3D patient-wise classification). The classification accuracies increase from 58% to 84% (2D slice-wise) and from 58% to 70% (2D patient-wise) for the lung cancer patients with the inclusion of wavelet features in the existing texture feature group and by augmenting the database (non-rigid transformation) to include equal numbers of patients and slices in the recurrent and non-recurrent categories. The accuracies are about 64% for 2D slice-wise and patient-wise classification for the cervical cancer patients when using correlation matrix-based feature selection, increasing to about 72% using the Fisher-based selection criterion. Conclusion: Our study has introduced the novel approach of fusing the information present in The Cancer Imaging Archive (TCIA) and TCGA to develop a combined imaging phenotype and genotype expression for therapy personalization. Texture measures provide a measure of tumor heterogeneity which can be used to predict recurrence status. Information from the gene expression patterns of the patients, when combined with texture measures, provides a unique radiogenomic feature which substantially improves therapy response prediction scores.
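The patient-wise voting scheme is straightforward to express in code; a small sketch with hypothetical slice predictions (the helper name and data are illustrative):

```python
import numpy as np
from collections import defaultdict

def patient_vote(slice_ids, slice_preds):
    """Aggregate slice-wise labels into patient-wise labels: a patient
    is called recurrent (1) when more than half of their slices are."""
    by_patient = defaultdict(list)
    for pid, pred in zip(slice_ids, slice_preds):
        by_patient[pid].append(pred)
    return {pid: int(np.mean(p) > 0.5) for pid, p in by_patient.items()}

# Hypothetical: three patients, slice-level labels from the classifier.
votes = patient_vote(["a", "a", "a", "b", "b", "c"], [1, 1, 0, 0, 0, 1])
# {'a': 1, 'b': 0, 'c': 1}
```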
Brain Tumor Segmentation Using Deep Learning Technique
Evaluating anatomical variations in structures like the nasal passage and sinuses is challenging because their complexity can often make it difficult to differentiate normal and abnormal anatomy. By statistically modeling these variations and estimating individual patient anatomy using these models, quantitative estimates of similarity or dissimilarity between the patient and the sample population can be made. In order to do this, a spatial alignment, or registration, between patient anatomy and the statistical model must first be computed. In this dissertation, a deformable most likely point paradigm is introduced that incorporates statistical variations into probabilistic feature-based registration algorithms. This paradigm is a variant of the most likely point paradigm, which incorporates feature uncertainty into the registration process. The deformable registration algorithms optimize the probability of feature alignment as well as the probability of model deformation, allowing statistical models of anatomy to estimate, for instance, structures seen in endoscopic video without the need for patient-specific computed tomography (CT) scans. The probabilistic framework also enables the algorithms to assess the quality of registrations produced, allowing users to know when an alignment can be trusted. This dissertation covers three algorithms built within this paradigm and evaluated in simulation and in-vivo experiments.
Simultaneous segmentation and correspondence improvement using statistical modes
Lung nodule detection using fuzzy clustering and support vector machines
Sivakumar, S
Chandrasekar, C
International Journal of Engineering and Technology2013Journal Article, cited 43 times
Website
Algorithm Development
Computer Aided Detection (CADe)
Computed Tomography (CT)
LUNG
Machine Learning
Lung cancer is the primary cause of tumor deaths for both sexes in most countries. Lung nodules, abnormalities which can lead to lung cancer, are detected by various medical imaging techniques such as X-ray and computerized tomography (CT). Detection of lung nodules is a challenging task since the nodules are commonly attached to blood vessels. Many studies have shown that early diagnosis is the most efficient way to cure this disease. This paper aims to develop an efficient lung nodule detection scheme by performing nodule segmentation through fuzzy-based clustering models and classification using a machine learning technique called the Support Vector Machine (SVM). The methodology uses three different types of kernels, among which the RBF kernel gives the best classification performance.
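The kernel comparison described above can be sketched with scikit-learn as follows; the feature matrix here is synthetic stand-in data rather than the paper's texture features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for nodule texture features (nodule vs. non-nodule)
X, y = make_classification(n_samples=200, n_features=12, random_state=0)

for kernel in ("linear", "poly", "rbf"):
    scores = cross_val_score(SVC(kernel=kernel, gamma="scale"), X, y, cv=5)
    print(f"{kernel:>6}: mean CV accuracy = {scores.mean():.3f}")
```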
A STUDY ON IMAGE DENOISING FOR LUNG CT SCAN IMAGES
Sivakumar, S
Chandrasekar, C
International Journal of Emerging Technologies in Computational and Applied Sciences2014Journal Article, cited 1 times
Website
LIDC-IDRI
Image denoising
Computed Tomography (CT)
Medical imaging is the technique and process used to create images of the human body for clinical purposes and diagnosis. Medical imaging is often perceived to designate the set of techniques that non-invasively produce images of the internal aspect of the body. The x-ray computed tomographic (CT) scanner has made it possible to detect the presence of lesions of very low contrast. The noise in reconstructed CT images is significantly reduced through the use of efficient x-ray detectors and electronic processing. The CT reconstruction technique almost completely eliminates the superposition of anatomic structures, leading to a reduction of "structural" noise. It is the random noise in a CT image that ultimately limits the ability of the radiologist to discriminate between two regions of different density. Because of its unpredictable nature, such noise cannot be completely eliminated from the image and will always lead to some uncertainty in the interpretation of the image. The noise present in the images may appear as additive or multiplicative components, and the main purpose of denoising is to remove these noisy components while preserving the important signal as much as possible. In this paper we analyze denoising filters such as the mean, median, midpoint and Wiener filters, along with three modified filter approaches, on lung CT scan images to remove the noise present in the images, and compare them using image quality parameters.
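A rough sketch of the filter comparison on a toy image is given below; the phantom, noise level, and PSNR as the quality parameter are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from scipy import ndimage
from scipy.signal import wiener
from skimage.metrics import peak_signal_noise_ratio

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[40:90, 40:90] = 1.0                      # toy phantom, not a real CT slice
noisy = clean + rng.normal(0.0, 0.2, clean.shape)

filtered = {
    "mean":     ndimage.uniform_filter(noisy, size=3),
    "median":   ndimage.median_filter(noisy, size=3),
    "midpoint": 0.5 * (ndimage.maximum_filter(noisy, size=3)
                       + ndimage.minimum_filter(noisy, size=3)),
    "wiener":   wiener(noisy, mysize=3),
}
for name, out in filtered.items():
    psnr = peak_signal_noise_ratio(clean, out, data_range=1.0)
    print(f"{name:>8}: PSNR = {psnr:.2f} dB")
```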
A Novel Noise Removal Method for Lung CT SCAN Images Using Statistical Filtering Techniques
Sivakumar, S
Chandrasekar, C
International Journal of Algorithms Design and Analysis2015Journal Article, cited 0 times
LIDC-IDRI
Automatic detection and segmentation of malignant lesions from [18F]FDG PET/CT images using machine learning techniques: application in lymphomas
New studies have arisen that try to automatically perform clinical tasks such as the detection and segmentation of medical images. Manual and, sometimes, semi-automatic methods are very time-consuming and prone to inter-observer variability. This is especially significant when the lesions spread throughout the entire body, as happens with lymphomas. The main goal was to develop fully automatic deep learning-based models (U-Net and ResU-Net) for detecting and segmenting lymphoma lesions in [18F]FDG PET images. A secondary goal was to study the impact the training data has on the final performance, namely the impact of the patient's primary tumour type, the acquisition scanner, the number of images, and the use of transfer learning. The Dice similarity coefficient (DSC) and the lesion detection index (LDI) were used to study the models' performance. The training dataset contains 491 [18F]FDG PET images from the MICCAI AutoPET 2022 Challenge and 87 [18F]FDG PET images from the Champalimaud Clinical Centre (CCC). Primary tumours are lymphoma, melanoma, and lung cancer, among others. The test set contains 39 [18F]FDG PET images from lymphoma patients from the CCC. Regarding the results, using data from the lymphoma patients during training positively impacts the performance of both models on lymphoma lesion segmentation. The results also showed that when the training dataset increases in size and has images acquired on the same equipment as the images used in the test dataset, both DSC and LDI increase. The best model using a U-Net achieved a DSC of 0.593 and an LDI of 0.186. When using a ResU-Net, the best model had a DSC of 0.524 and an LDI of 0.200. In conclusion, this study confirms the adequacy of the U-Net and ResU-Net architectures for lesion segmentation in [18F]FDG PET/CT images of patients with lymphoma. Moreover, it pointed out some clues for future training strategies.
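The Dice similarity coefficient used above has a compact implementation; a minimal version for binary masks is sketched below (only DSC is shown, since the abstract does not spell out the LDI definition):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

a = np.zeros((16, 16), dtype=bool); a[4:12, 4:12] = True
b = np.zeros((16, 16), dtype=bool); b[6:12, 4:12] = True
print(round(dice(a, b), 3))   # overlap 48, sizes 64 and 48 -> 2*48/112 = 0.857
```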
Information Fusion of Magnetic Resonance Images and Mammographic Scans for Improved Diagnostic Management of Breast Cancer
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
mini-MIAS
InBreast
Computer Aided Detection (CADe)
Medical imaging is critical to non-invasive diagnosis and treatment of a wide spectrum of medical conditions. However, different modalities of medical imaging employ different contrast mechanisms and, consequently, provide different depictions of bodily anatomy. As a result, there is a frequent problem where the same pathology can be detected by one type of medical imaging while being missed by others. This problem brings forward the importance of the development of image processing tools for integrating the information provided by different imaging modalities via the process of information fusion. One particularly important example of a clinical application of such tools is in the diagnostic management of breast cancer, which is a prevailing cause of cancer-related mortality in women. Currently, the diagnosis of breast cancer relies mainly on X-ray mammography and Magnetic Resonance Imaging (MRI), which are both important throughout different stages of detection, localization, and treatment of the disease. The sensitivity of mammography, however, is known to be limited in the case of relatively dense breasts, while contrast enhanced MRI tends to yield frequent 'false alarms' due to its high sensitivity. Given this situation, it is critical to find reliable ways of fusing the mammography and MRI scans in order to improve the sensitivity of the former while boosting the specificity of the latter. Unfortunately, fusing the above types of medical images is known to be a difficult computational problem. Indeed, while MRI scans are usually volumetric (i.e., 3-D), digital mammograms are always planar (2-D). Moreover, mammograms are invariably acquired under the force of compression paddles, thus making the breast anatomy undergo sizeable deformations. In the case of MRI, on the other hand, the breast is rarely constrained and is imaged in a pendulous state. Finally, X-ray mammography and MRI exploit two completely different physical mechanisms, which produce distinct diagnostic contrasts that are related in a non-trivial way. Under such conditions, the success of information fusion depends on one's ability to establish spatial correspondences between mammograms and their related MRI volumes in a cross-modal cross-dimensional (CMCD) setting in the presence of spatial deformations (+SD). Solving the problem of information fusion in the CMCD+SD setting is a very challenging analytical/computational problem, still in need of efficient solutions. In the literature, there is a lack of a generic and consistent solution to the problem of fusing mammograms and breast MRIs and using their complementary information. Most of the existing MRI-to-mammogram registration techniques are based on a biomechanical approach which builds a specific model for each patient to simulate the effect of mammographic compression. The biomechanical model is not optimal, as it ignores the common characteristics of breast deformation across different cases. Breast deformation is essentially the planarization of a 3-D volume between two paddles, which is common to all patients. Regardless of the size, shape, or internal configuration of the breast tissue, one can predict the major part of the deformation only by considering the geometry of the breast tissue. In contrast with complex standard methods relying on patient-specific biomechanical modeling, we developed a new and relatively simple approach to estimate the deformation and find the correspondences.
We consider the total deformation to consist of two components: a large-magnitude global deformation due to mammographic compression and a residual deformation of relatively smaller amplitude. We propose a much simpler way of predicting the global deformation which compares favorably to FEM in terms of accuracy. The residual deformation, on the other hand, is recovered in a variational framework using an elastic transformation model. The proposed algorithm provides a computational pipeline that takes breast MRIs and mammograms as inputs and returns the spatial transformation which establishes the correspondences between them. This spatial transformation can be applied in different applications, e.g., producing 'MRI-enhanced' mammograms (which can improve the quality of surgical care) and correlating between different types of mammograms. We investigate the performance of our proposed pipeline on the application of enhancing mammograms by means of MRIs and show improvements over the state of the art.
Dynamic Co-occurrence of Local Anisotropic Gradient Orientations (DyCoLIAGe) Descriptors from Pre-treatment Perfusion DSC-MRI to Predict Overall Survival in Glioblastoma
A significant clinical challenge in glioblastoma is to risk-stratify patients for clinical trials, preferably using MRI scans. Radiomics involves mining of sub-visual features that could serve as surrogate markers of tumor heterogeneity from routine imaging. Previously our group had developed a new gradient-based radiomic descriptor, Co-occurrence of Local Anisotropic Gradient Orientations (CoLIAGe), to capture tumor heterogeneity on structural MRI. I present an extension of CoLIAGe to perfusion MRI, termed dynamic CoLIAGe (DyCoLIAGe), and demonstrate its application in predicting overall survival in glioblastoma. Following manual segmentation, 52 CoLIAGe features were extracted from edema and enhancing tumor at different time phases during contrast administration of perfusion MRI. Each feature was separately plotted across the different time points, and a 3rd-order polynomial was fit to each feature curve. The corresponding polynomial coefficients were evaluated in terms of their prognostic performance. My results suggest that DyCoLIAGe may be prognostic of overall survival in glioblastoma.
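The per-feature polynomial fitting step lends itself to a short sketch; the time axis, curve shape, and noise below are fabricated for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 40)          # normalized perfusion time phases
# One CoLIAGe feature tracked over the contrast bolus (fabricated curve)
feature_curve = 0.3 - 1.2*t + 2.0*t**2 - 0.9*t**3 + rng.normal(0, 0.01, t.size)

coeffs = np.polyfit(t, feature_curve, deg=3)   # 3rd-order fit, as in the abstract
print(coeffs)   # these four coefficients become the DyCoLIAGe entries per feature
```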
Self-supervised pre-training of an attention-based model for 3D medical image segmentation
Sund Aillet, Albert
2023Thesis, cited 0 times
Thesis
TCGA-OV
TCGA-UCEC
CPTAC-UCEC
CPTAC-PDA
KiTS
CPTAC-LSCC
HNSCC
LCTSC
Computer vision
Deep learning
Segmentation
Algorithm Development
Self-supervised
Accurate segmentation of anatomical structures is crucial for radiation therapy in cancer treatment. Deep learning methods have been demonstrated effective for segmentation of 3D medical images, establishing the current standard. However, they require large amounts of labelled data and suffer from reduced performance under domain shift. A possible solution to these challenges is self-supervised learning, which uses unlabelled data to learn representations and could possibly reduce the need for labelled data and produce more robust segmentation models. This thesis investigates the impact of self-supervised pre-training on an attention-based model for 3D medical image segmentation, specifically focusing on single-organ semantic segmentation and exploring whether self-supervised pre-training enhances segmentation performance on CT scans with and without domain shift. The Swin UNETR is chosen as the deep learning model since it has been shown to be a successful attention-based architecture for semantic segmentation. During the pre-training stage, the contracting path is trained for three self-supervised pretext tasks using a large dataset of 5,465 unlabelled CT scans. The model is then fine-tuned using labelled datasets with 97, 142 and 288 segmentations of the stomach, the sternum and the pancreas. The results indicate that a substantial performance gain from self-supervised pre-training is not evident. Parameter freezing of the contracting path suggests that the representational power of the contracting path is not as critical for model performance as expected. Decreasing the amount of supervised training data shows that while pre-training improves model performance when the amount of training data is restricted, the improvements strongly decrease when more supervised training data is used.
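The parameter-freezing experiment mentioned above can be sketched in PyTorch; the toy encoder-decoder below is a stand-in for the actual Swin UNETR, not its architecture:

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy encoder-decoder; a stand-in for Swin UNETR for illustration."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv3d(8, 2, 1)
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
for p in model.encoder.parameters():      # freeze the pretrained contracting path
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)  # fine-tuning now updates only the decoder
```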
Classification of Benign and Malignant Tumors of Lung Using Bag of Features
Suzan, A Melody
Prathibha, G
Journal of Scientific & Engineering Research2017Journal Article, cited 0 times
Website
This paper presents a novel approach for feature extraction and classification of lung cancer, i.e., benign or malignant. Classification of lung cancer is based on a codebook generated using the bag-of-features algorithm. In this paper, 300 regions of interest (ROIs) from lung cancer images from The Cancer Imaging Archive (TCIA), sponsored by SPIE, are used. In this approach, the Scale-Invariant Feature Transform (SIFT) is used for feature extraction, and these coefficients are quantized using a bag of features into a predefined codebook. This codebook is given as input to a KNN classifier. The overall performance of the system in classifying lung tumors is evaluated using the receiver operating characteristic (ROC) curve. The area under the curve (AUC) is Az=0.95.
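A bag-of-features pipeline of the kind described (codebook by clustering, histogram encoding, KNN) can be sketched as follows; random vectors stand in for SIFT descriptors, and the codebook size is an assumption:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Random vectors stand in for 128-D SIFT descriptors of each ROI
rois = [rng.normal(size=(50, 128)) for _ in range(60)]
labels = rng.integers(0, 2, 60)            # benign / malignant

codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(np.vstack(rois))

def bof_histogram(descriptors):
    words = codebook.predict(descriptors)  # visual-word assignment
    return np.bincount(words, minlength=32) / len(words)

X = np.array([bof_histogram(d) for d in rois])
knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
print(knn.score(X, labels))                # training accuracy on toy data
```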
Five Classifications of Mammography Images Based on Deep Cooperation Convolutional Neural Network
Tang, Chun-ming
Cui, Xiao-Mei
Yu, Xiang
Yang, Fan
American Scientific Research Journal for Engineering, Technology, and Sciences (ASRJETS)2019Journal Article, cited 0 times
CBIS-DDSM
Convolutional Neural Network (CNN)
Mammography is currently the preferred imaging method for breast cancer screening. Masses and calcification are the main positive signs of mammography. Due to the variable appearance of masses and calcification, a significant number of breast cancer cases are missed or misdiagnosed if detection depends only on the radiologists' subjective judgement. At present, most studies are based on classical Convolutional Neural Networks (CNN), using transfer learning to classify the benign and malignant masses in mammography images. However, the CNN is designed for natural images, which are substantially different from medical images. Therefore, we propose a Deep Cooperation CNN (DCCNN) to classify mammography images of a data set into five categories including benign calcification, benign mass, malignant calcification, malignant mass and normal breast. The data set consists of 695 normal cases from DDSM, and 753 calcification cases and 891 mass cases from CBIS-DDSM. Finally, DCCNN achieves 91% accuracy and 0.98 AUC on the test set, whose performance is superior to the VGG16, GoogLeNet and InceptionV3 models. Therefore, DCCNN can aid radiologists to make more accurate judgments, greatly reducing the rate of missed diagnosis and misdiagnosis.
Automated Detection of Early Pulmonary Nodule in Computed Tomography Images
Tariq, Ahmed Usama
2019Thesis, cited 0 times
Thesis
LIDC-IDRI
LUNA16 Challenge
Classification
Classification of lung cancer in CT scans has two major steps: detecting all suspicious lesions, also known as pulmonary nodules, and calculating their malignancy. Currently, many studies address nodule detection, but fewer address the evaluation of nodule malignancy. Since the presence of a nodule does not unquestionably indicate lung cancer, and the morphology of a nodule has a complex association with malignancy, the diagnosis of lung cancer requires careful examination of each suspicious nodule and integration of information from every nodule. We propose a 3D CNN CAD system to solve this problem. The system consists of two modules: a 3D CNN for nodule detection, which outputs all suspicious nodules for a subject, and a second module that trains an XGBoost classifier on selected data to obtain the probability of lung malignancy for the subject.
DESIGNING AND TESTING A MOLECULARLY TARGETED GLIOBLASTOMA THERANOSTIC: EXPERIMENTAL AND COMPUTATIONAL STUDIES
With an extremely poor patient prognosis, glioblastoma multiforme (GBM) is one of the most aggressive forms of brain tumor, with a median patient survival of less than 15 months. While new diagnostic and therapeutic approaches continue to emerge, the progress to reduce the mortality associated with the disease is insufficient. Thus, developing new methods having the potential to overcome problems that limit effective imaging and therapeutic efficacy in GBM is still a critical need. The overall goal of this research was therefore to develop targeted glioblastoma theranostics capable of imaging disease progression and simultaneously killing cancer cells. To achieve this, the state of the art of liposome-based cancer theranostics is reviewed in detail and potential glioblastoma biomarkers for theranostic delivery are identified by querying different databases and by reviewing the literature. Then tumor-targeting liposomes loaded with Gd3N@C80 and doxorubicin (DXR) are developed and tested in vitro. Finally, the stability of these formulations in different physiological salt solutions is evaluated using computational techniques including area per lipid, lipid interdigitation, carbon-deuterium order parameter, and radial distribution of ions, as well as steered molecular dynamics simulations. In conclusion, the experimental and computational studies of this dissertation demonstrated that DXR- and Gd3N@C80-OH-loaded, lactoferrin & transferrin dual-tagged, PEGylated liposomes might be potential drug and imaging agent delivery systems for GBM treatment.
Lung Nodule Detection and Classification using Machine Learning Techniques
Tekade, Ruchita
ASIAN JOURNAL FOR CONVERGENCE IN TECHNOLOGY (AJCT)-UGC LISTED2018Journal Article, cited 0 times
Website
LIDC-IDRI
Machine learning
Computer Aided Detection (CADe)
As lung cancer is the second leading cause of death, early detection of lung cancer has become necessary in many computer-aided diagnosis (CAD) systems. Recently, many CAD systems that use computed tomography (CT) scan images have been implemented to detect lung nodules [2]. In this paper, some image pre-processing methods such as thresholding, clearing borders, and morphological operations (viz., erosion, closing, opening) are discussed to detect lung nodule regions, i.e., regions of interest (ROIs), in patient lung CT scan images. Also, machine learning techniques such as the Support Vector Machine (SVM) and Convolutional Neural Network (CNN) are discussed for classifying nodule and non-nodule objects in patient lung CT scan images using the sets of lung nodule regions. In this study, the Lung Image Database Consortium image collection (LIDC-IDRI) dataset of patient CT scan images has been used to detect and classify lung nodules. The lung nodule classification accuracy of the SVM is 90% and that of the CNN is 91.66%.
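The preprocessing chain named above (thresholding, clearing borders, morphological operations) maps directly onto scikit-image; the threshold value and structuring-element sizes below are illustrative assumptions:

```python
import numpy as np
from skimage import measure, morphology
from skimage.segmentation import clear_border

# Toy stand-in for one CT slice in Hounsfield units
ct_slice = np.random.default_rng(0).normal(-700.0, 150.0, (128, 128))

binary = ct_slice < -400                       # illustrative lung threshold
binary = clear_border(binary)                  # drop regions touching the border
binary = morphology.binary_opening(binary, morphology.disk(2))   # remove specks
binary = morphology.binary_closing(binary, morphology.disk(4))   # fill small gaps
candidates = measure.label(binary)             # connected components = candidate ROIs
print(candidates.max(), "candidate regions")
```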
Improving radiomic model reliability and generalizability using perturbations in head and neck carcinoma
Teng, Xinzhi
2023Thesis, cited 0 times
Dissertation
RIDER Lung CT
Head-Neck-PET-CT
OPC-Radiomics
ACRIN 6698
I-SPY 2
Medical Radiology
Algorithm Development
Classification
Risk assessment
Background: Radiomic models for clinical applications need to be reliable. However, model reliability is conventionally established in prospective settings, requiring the proposal and special design of a separate study. As prospective studies are rare, the reliability of most proposed models is unknown. Facilitating the assessment of radiomic model reliability during development would help to identify the most promising models for prospective studies.; Purpose: This thesis aims to propose a framework for building reliable radiomic models using a perturbation method. The aim was divided into three studies: 1) develop a perturbation-based assessment method to quantitatively evaluate the reliability of radiomic models, 2) evaluate the perturbation-based method against the test-retest method for developing reliable radiomic models, and 3) evaluate radiomic model reliability and generalizability after removing low-reliability radiomics features.; Methods and Materials: Four publicly available head-and-neck carcinoma (HNC) datasets and one breast cancer dataset, totaling 1,641 patients, were retrospectively recruited from The Cancer Imaging Archive (TCIA). The computed tomography (CT) images, their gross tumor volume (GTV) segmentations, and distant metastasis (DM) and local-/regional-recurrence (LR) status after definitive treatment were collected from the HNC datasets. Multi-parametric diffusion-weighted images (DWI), test-retest DWI scans, and pathological complete response (pCR) were collected from the breast cancer dataset. For the development of the reliability assessment method for radiomic models, one dataset with DM outcome as the clinical task was used to build the survival model. Sixty perturbed datasets were simulated by randomly translating, rotating, and adding noise to the original image and randomizing the GTV segmentation. The perturbed features were subsequently extracted from the perturbed datasets. The radiomic survival model was developed for DM risk prediction, and its reliability was quantified with the intraclass correlation coefficient (ICC) to evaluate the model's prediction consistency on perturbed features. In addition, a sensitivity analysis was performed to verify the relation between input feature reliability and output prediction reliability. Then, a new radiomic model to predict pCR with the DWI-derived apparent diffusion coefficient (ADC) map was developed, and its reliability was quantified with the ICC to measure the model's prediction consistency on perturbed image features and test-retest image features, respectively. Following the establishment of the perturbation-based model reliability assessment (ICC), the model reliability and generalizability after removing low-reliability features (ICC thresholds of 0, 0.75 and 0.95) were evaluated under repeated stratified cross-validation with the HNC datasets. Model reliability is evaluated with the perturbation-based ICC, and model generalizability is evaluated by the average train-test area under the receiver operating characteristic curve (AUC) difference in cross-validation. The experiment was conducted on all four HNC datasets, two clinical outcomes and five classification algorithms.; Results: In the development of the model reliability assessment method, the reliability index ICC was used to quantify the model output consistency on features extracted from the perturbed images and segmentations. In a six-feature radiomic model, the concordance indexes (C-indexes) of the survival model were 0.742 and 0.769 for the training and testing cohorts, respectively.
For the perturbed training and testing datasets, the respective mean C-indexes were 0.686 and 0.678. This yielded ICC values of 0.565 (0.518–0.615) and 0.596 (0.527–0.670) for the perturbed training and testing datasets, respectively. When only highly reliable features were used for radiomic modeling, the model's ICC increased to 0.782 (0.759–0.815) and 0.825 (0.782–0.867) and its C-index decreased to 0.712 and 0.642 for the training and testing data, respectively. This shows that our assessment method is sensitive to the reliability of the input. In the comparison between the perturbation-based and test-retest methods, the perturbation method achieved a radiomic model with reliability (ICC: 0.90 vs. 0.91, P-value > 0.05) and classification performance (AUC: 0.76 vs. 0.77, P-value > 0.05) comparable to the test-retest method. In the evaluation of model reliability and generalizability after removing low-reliability features, the average model reliability ICC showed significant improvements from 0.65 to 0.78 (ICC threshold 0 vs. 0.75, P-value < 0.01) and 0.91 (ICC threshold 0 vs. 0.95, P-value < 0.01) under increasing reliability thresholds. Additionally, model generalizability increased substantially, as the mean train-test AUC difference was reduced from 0.21 to 0.18 (P-value < 0.01) and 0.12 (P-value < 0.01), and the testing AUCs were maintained at the same level (P-value > 0.05).; Conclusions: We proposed a perturbation-based framework to evaluate radiomic model reliability and to develop more reliable and generalizable radiomic models. The perturbation-based method is a practical alternative to test-retest scans in assessing radiomic model reliability. Our results also suggest that pre-screening of low-reliability radiomics features prior to modeling is a necessary step to improve final model reliability and generalizability to unseen datasets.
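A sketch of quantifying prediction consistency over perturbed datasets with an ICC is given below; the one-way random-effects ICC(1,1) form is an assumption, as the thesis may use a different ICC variant:

```python
import numpy as np

def icc_1_1(Y):
    """One-way random-effects ICC(1,1).

    Y: (n_patients, k_perturbations) array of model outputs, e.g. predicted
    risk for each patient under k perturbed copies of the dataset.
    """
    n, k = Y.shape
    grand = Y.mean()
    msb = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between patients
    msw = ((Y - Y.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
true_risk = rng.normal(size=30)                            # per-patient signal
preds = true_risk[:, None] + rng.normal(0, 0.3, (30, 60))  # 60 perturbations
print(round(icc_1_1(preds), 3))                            # high ICC -> stable model
```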
Extraction of Tumor in Brain MRI using Support Vector Machine and Performance Evaluation
Tunga, Prakash
Visvesvaraya Technological University Journal of Engineering Sciences and Management2019Journal Article, cited 0 times
Website
BraTS
Segmentation
Support Vector Machine (SVM)
BRAIN
In this article, we discuss the extraction of tumors in brain MRI (Magnetic Resonance Imaging) images based on the Support Vector Machine (SVM) technique. The work performs computer-assisted demarcation of tumors from brain MRI and aims to become part of a routine that would otherwise be performed manually by specialists. Here we focus on one of the common types of brain tumors, the gliomas. These tumors have proved to be life-threatening in advanced stages. MRI, being a non-invasive procedure, can provide very good soft-tissue contrast and so forms a suitable imaging method for processing that leads to brain tumor detection and description. At first, we preprocess the given MRI image using the anisotropic diffusion method, and then the SVM technique is applied, which classifies the image into tumorous and non-tumorous regions. Next, we extract the tumor, referred to as the region of interest (ROI), and describe it by calculating its size and position in the image. The remaining part, i.e., the brain region with no tumor present, is referred to as the non-region of interest (NROI). Separation of the ROI and NROI parts aids further processing, such as ROI-based compression. We also calculate parameters that reflect the performance of the approach.
Cancer Risk Assessment Using Quantitative Imaging Features from Solid Tumors and Surrounding Structures
Medical imaging is a powerful tool for clinical practice allowing in-vivo insight into a patient's disease state. Many modalities exist, allowing for the collection of diverse information about the underlying tissue structure and/or function. Traditionally, medical professionals use visual assessment of scans to search for disease, assess relevant disease predictors and propose clinical intervention steps. However, the imaging data contain potentially useful information beyond visual assessment by a trained professional. To better use the full depth of information contained in the image sets, quantitative imaging characteristics (QICs) can be extracted using mathematical and statistical operations on regions or volumes of interest. The process of using QICs is a pipeline typically involving image acquisition, segmentation, feature extraction, set qualification and analysis of informatics. These descriptors can be integrated into classification methods focused on differentiating between disease states. Lung cancer, a leading cause of death worldwide, is a clear application for advanced in-vivo imaging-based classification methods. We hypothesize that QICs extracted from spatially-linked and size-standardized regions of surrounding lung tissue can improve risk assessment quality over features extracted from only the lung tumor, or nodule, regions. We require a robust and flexible pipeline for the extraction and selection of disease QICs in computed tomography (CT). This includes creating an optimized method for feature extraction, reduction, selection, and predictive analysis which could be applied to a multitude of disease imaging problems. This thesis expanded a developmental pipeline for machine learning using a large multicenter controlled CT dataset of lung nodules to extract CT QICs from the nodule, surrounding parenchyma, and greater lung volume and to explore CT feature interconnectivity. Furthermore, it created a validated pipeline that is more computationally and time efficient, with stable performance. The modularity of the optimized pipeline facilitates broader application of the tool beyond CT-identified pulmonary nodules. We have developed a flexible and robust pipeline for the extraction and selection of Quantitative Imaging Characteristics for Risk Assessment from the Tumor and its Environment (QIC-RATE). The results presented in this thesis support our hypothesis, showing that classification of lung and breast tumors is improved through inclusion of peritumoral signal. Optimal performance in the lung application was achieved with the QIC-RATE tool incorporating 75% of the nodule diameter equivalent in perinodular parenchyma, with a development performance of 100% accuracy. The stability of performance was reflected in the maintained high accuracy (98%) in the independent validation dataset of 100 CT from a separate institution. In the breast QIC-RATE application, optimal performance was achieved using 25% of the tumor diameter in breast tissue, with 90% accuracy in development and 82% in validation. We address the need for more complex assessments of medically imaged tumors through the QIC-RATE pipeline: a modular, scalable, transferrable pipeline for extracting, reducing and selecting, and training a classification tool based on QICs. Altogether, this research has resulted in a risk assessment methodology that is validated, stable, high performing, adaptable, and transparent.
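The idea of a peritumoral band sized as a fraction of the tumor's equivalent diameter can be sketched as follows; the sphere-equivalent diameter and isotropic spacing are simplifying assumptions, not the QIC-RATE implementation:

```python
import numpy as np
from scipy import ndimage

def peritumoral_band(mask, fraction=0.75, spacing_mm=1.0):
    """Band of parenchyma around a nodule mask, with thickness equal to
    `fraction` of the nodule's sphere-equivalent diameter (a sketch of the
    idea only -- the thesis's exact construction may differ)."""
    volume = mask.sum() * spacing_mm ** 3
    eq_diam = (6.0 * volume / np.pi) ** (1.0 / 3.0)          # mm
    n_iter = max(1, int(round(fraction * eq_diam / spacing_mm)))
    dilated = ndimage.binary_dilation(mask, iterations=n_iter)
    return dilated & ~mask.astype(bool)

# Toy spherical nodule
z, y, x = np.ogrid[-24:24, -24:24, -24:24]
nodule = (x**2 + y**2 + z**2) < 8**2
band = peritumoral_band(nodule)
print(nodule.sum(), band.sum())
```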
Implementación de algoritmos de reconstrucción tomográfica mediante programación paralela (CUDA)
Medical image reconstruction is key to a wide range of technologies. For classical computed tomography systems, the number of signals measured per second has increased exponentially over the last four decades, while the computational complexity of most of the algorithms used has not changed significantly. Providing optimal image quality with the lowest possible radiation dose to the patient is of great interest and a major challenge. One solution, and an active field of research toward that goal, is iterative medical image reconstruction. Its complexity is many times that of the classical analytical methods used in almost all commercially available systems. This thesis investigates the use of graphics cards in the field of iterative medical image reconstruction. The different approaches to GPU (Graphics Processing Unit)-accelerated image reconstruction algorithms are presented and evaluated.
Brain Tumor Classification using Support Vector Machine
Vani, N
Sowmya, A
Jayamma, N
International Research Journal of Engineering and Technology2017Journal Article, cited 0 times
Website
BRAIN
Classification
MATLAB
Computer Aided Detection (CADe)
image processing
Radiomics
Support Vector Machine (SVM)
Classification of benign and malignant lung nodules using image processing techniques
Vas, Moffy Crispin
Dessai, Amita
International Research Journal of Engineering and Technology2017Journal Article, cited 0 times
Website
LUNG
Computed Tomography (CT)
Segmentation
Haralick feature
Artificial Neural Network (ANN)
Cancer is the second leading cause of death worldwide after heart disease, and lung cancer is the leading cause of death among all cancer types. Hence, lung cancer is an issue of global concern, and this work deals with the detection of malignant lung nodules, distinguishing them from benign nodules by processing computed tomography (CT) images with the help of Haar wavelet decomposition and Haralick feature extraction, followed by artificial neural networks (ANN).
Una metodología para el análisis y selección de características extraídas mediante Deep Learning de imágenes de Tomografía Computerizada de pulmón.
Vega Gonzalo, María
2018Thesis, cited 0 times
Thesis
Dissertation
LUNG
Deep Learning
Computed Tomography (CT)
Segmentation
Radiomics
Classification
Algorithm Development
This project is part of the European research project IASIS, in which the Medical Data Analysis Laboratory (MEDAL) of the Centre for Biomedical Technology of the UPM participates. The IASIS project aims to structure medical information related to lung cancer and Alzheimer's disease, with the goal of analyzing it and, from the knowledge extracted, improving the diagnosis and treatment of these diseases. The objective of this TFG is to establish a methodology for reducing the dimensionality of features extracted by Deep Learning from computed axial tomography images. The motivation for reducing the number of variables is that the extracted data are intended to be used to classify the nodules present in the images with a classifier; however, the high dimensionality of the data can impair classification accuracy, in addition to imposing a high computational cost.
Using Radiomics to improve the 2-year survival of Non-Small Cell Lung Cancer Patients
This thesis both exploits and further contributes enhancements to the utilization of radiomics (extracted quantitative features of radiological imaging data) for improving cancer survival prediction. Several machine learning methods were compared in this analysis, including but not limited to support vector machines, convolutional neural networks and logistic regression. A technique is developed for analysing prognostic image characteristics of non-small cell lung cancer based on the edge regions, as well as the tissues immediately surrounding visible tumours. Regions external to and neighbouring a tumour were shown to also have prognostic value. By using the additional texture features, an increase in accuracy of 3% is shown over previous approaches for predicting two-year survival, determined by examining the outer rind tissue including the tumour compared to the volume without the rind. This indicates that while the centre of the tumour is currently the main clinical target for radiotherapy treatment, the tissue immediately around the tumour is also clinically important for survival analysis. Further, it was found that improved prediction resulted up to some 6 pixels outside the tumour volume, a distance of approximately 5 mm outside the original gross tumour volume (GTV), when applying a support vector machine, which achieved the highest accuracy of 71.18%. This research indicates the periphery of the tumour is highly predictive of survival. To our knowledge this is the first study that has concentrically expanded and analysed the NSCLC rind for radiomic analysis.
Classificação Multirrótulo na Anotação Automática de Nódulo Pulmonar Solitário
Villani, Leonardo
Prati, Ronaldo Cristiano
2012Conference Proceedings, cited 0 times
Multi-Modality Automatic Lung Tumor Segmentation Method Using Deep Learning and Radiomics
Wang, Siqiu
Radiation Oncology2022Thesis, cited 0 times
Website
Dissertation
NSCLC Radiogenomics
Thesis
Inter-observer variability
Radiotherapy
Segmentation
Delineation of the tumor volume is the initial and fundamental step in the radiotherapy planning process. The current clinical practice of manual delineation is time-consuming and suffers from observer variability. This work seeks to develop an effective automatic framework to produce clinically usable lung tumor segmentations. First, to facilitate the development and validation of our methodology, an expansive database of planning CTs, diagnostic PETs, and manual tumor segmentations was curated, and an image registration and preprocessing pipeline was established. Then a deep learning neural network was constructed and optimized to utilize dual-modality PET and CT images for lung tumor segmentation. The feasibility of incorporating radiomics and other mechanisms, such as a tumor volume-based stratification scheme for training/validation/testing, was investigated to improve the segmentation performance. The proposed methodology was evaluated both quantitatively with similarity metrics and clinically with physician reviews. In addition, external validation with an independent database was also conducted. Our work addressed some of the major limitations that restricted the clinical applicability of existing approaches and produced automatic segmentations that were consistent with the manually contoured ground truth and highly clinically acceptable according to both the quantitative and clinical evaluations. Both novel approaches, implementing a tumor volume-based training/validation/testing stratification strategy and incorporating voxel-wise radiomics feature images, were shown to improve the segmentation performance. The results showed that the proposed method was effective and robust, producing automatic lung tumor segmentations that could potentially improve both the quality and consistency of manual tumor delineation.
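Feeding registered PET and CT volumes as separate input channels, as described above, is straightforward in PyTorch; the shapes and layer below are illustrative only:

```python
import torch
import torch.nn as nn

# Registered PET and CT volumes on the same grid (toy shapes)
pet = torch.rand(1, 1, 64, 64, 64)
ct = torch.rand(1, 1, 64, 64, 64)
x = torch.cat([pet, ct], dim=1)            # (batch, channels=2, D, H, W)

# The segmentation network's first layer simply accepts two input channels
stem = nn.Conv3d(in_channels=2, out_channels=16, kernel_size=3, padding=1)
print(stem(x).shape)                       # torch.Size([1, 16, 64, 64, 64])
```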
A Gaussian Mixture Model based Level Set Method for Volume Segmentation in Medical Images
This thesis proposes a probabilistic level set method to be used in segmentation of tumors with heterogeneous intensities. It models the intensities of the tumor and surrounding tissue using Gaussian mixture models. Through a contour-based initialization procedure, samples are gathered to be used in expectation maximization of the mixture model parameters. The proposed method is compared against a threshold-based segmentation method using MRI images retrieved from The Cancer Imaging Archive. The cases are manually segmented, and an automated testing procedure is used to find optimal parameters for the proposed method, which is then tested against the threshold-based method. Segmentation times, Dice coefficients, and volume errors are compared. The evaluation reveals that the proposed method has a mean segmentation time comparable to the threshold-based method, and performs faster in cases where the volume error does not exceed 40%. The mean Dice coefficient and volume error are also improved, while achieving lower deviation.
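The region model described above can be sketched with scikit-learn's EM-fitted Gaussian mixtures; the intensity distributions below are fabricated, and the log-likelihood-ratio force is an assumed form of the level-set region term:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Intensity samples gathered from inside / outside an initial contour
inside = np.concatenate([rng.normal(180, 15, 300), rng.normal(240, 10, 200)])
outside = rng.normal(90, 25, 500)

gmm_in = GaussianMixture(n_components=2, random_state=0).fit(inside.reshape(-1, 1))
gmm_out = GaussianMixture(n_components=2, random_state=0).fit(outside.reshape(-1, 1))

# Assumed region force: log-likelihood ratio, positive where tumor is more likely
voxels = np.array([[100.0], [200.0]])
print(gmm_in.score_samples(voxels) - gmm_out.score_samples(voxels))
```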
Proton radiotherapy spot order optimization to maximize the FLASH effect
Widenfalk, Oscar
2023Thesis, cited 0 times
Thesis
NSCLC-Radiomics-Interobserver1
Radiotherapy
Optimization
PROSTATE
BRAIN
LUNG
Proton Radiation Therapy
Electron Radiation Therapy
Algorithm Development
Cancer is a group of deadly diseases, and radiotherapy is one of its treatment methods. Recent studies indicate advantages of delivering so-called FLASH treatments using ultra-high dose rates (> 40 Gy/s), which produce a normal-tissue-sparing FLASH effect. Delivering a high dose in a short time imposes requirements on both the treatment machine and the treatment plan. To see as much of the FLASH effect as possible, the delivery pattern should be optimized, which is the focus of this thesis. The optimization method was applied to 17 lung plans, and the results show that a local-search-based optimization achieves overall good results, achieving a mean FLASH coverage of 31.7% outside of the CTV after a mean optimization time of 8.75 s. This is faster than published results using a genetic algorithm.
Supervised Machine Learning Approach Utilizing Artificial Neural Networks for Automated Prostate Zone Segmentation in Abdominal MR images
Development of a method for automating effective patient diameter estimation for digital radiography
Worrall, Mark
2019Thesis, cited 0 times
Thesis
Dissertation
TCGA-SARC
RIDER_Lung CT
Algorithm Development
National patient dose audit of paediatric radiographic examinations is complicated by a lack of data containing a direct measurement of the patient diameter in the examination orientation, or of height and weight. This has meant that National Diagnostic Reference Levels (NDRLs) for paediatric radiographic examinations have not been updated in the UK since 2000, despite significant changes in imaging technology over that period. This work is the first step in the development of a computational model intended to automate an estimate of paediatric patient diameter. Whilst the application is intended for a paediatric population, its development within this thesis uses an adult cohort. The computational model uses the radiographic image, the examination exposure factors and a priori information relating to the x-ray system and the digital detector. The computational model uses the Beer-Lambert law. A hypothesis was developed that this would work for clinical exposures despite its single-energy photon basis. Values of initial air kerma are estimated from the examination exposure factors and measurements made on the x-ray system. Values of kerma at the image receptor are estimated from a measurement of pixel value made at the centre of the radiograph and the measured calibration between pixel value and kerma for the image receptor. Values of effective linear attenuation coefficient are estimated from Monte Carlo simulations. Monte Carlo simulations were created for two x-ray systems. The simulations were optimised and thoroughly validated to ensure that any result obtained is accurate. The validation process compared simulation results with measurements made on the x-ray units themselves, producing values for effective linear attenuation coefficient that were demonstrated to be accurate. Estimates of attenuator thickness can be made using the estimated values for each variable. The computational model was demonstrated to accurately estimate the thickness of single-composition attenuators across a range of thicknesses and exposure factors on three different x-ray systems. The computational model was used in a clinical validation study of 20 adult patients undergoing AP abdominal x-ray examinations. For 19 of these examinations, it estimated the true patient thickness to within ±9%. This work presents a feasible computational model that could be used to automate the estimation of paediatric patient thickness during radiographic examinations, allowing for automation of paediatric radiographic dose audit.
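The Beer-Lambert inversion at the heart of the model reduces to a one-line calculation; the numbers below are illustrative, not taken from the thesis:

```python
import numpy as np

def estimate_thickness(k0, k_detector, mu_eff):
    """Attenuator thickness t from the Beer-Lambert law K = K0 * exp(-mu * t).

    k0:         estimated incident air kerma (from exposure factors)
    k_detector: kerma at the image receptor (from the calibrated pixel value)
    mu_eff:     effective linear attenuation coefficient (from Monte Carlo)
    """
    return np.log(k0 / k_detector) / mu_eff

# Illustrative numbers only: ~19.6 cm if mu_eff is given per cm
print(estimate_thickness(k0=100.0, k_detector=2.0, mu_eff=0.2))
```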
VoCo: A simple-yet-effective volume contrastive learning framework for 3D medical image analysis
Wu, Linshan
Zhuang, Jiaxin
Chen, Hao
2024Conference Proceedings, cited 0 times
CT Images in COVID-19
Deep Learning
Deep Domain Adaptation Learning Framework for Associating Image Features to Tumour Gene Profile
Computational Models for Automated Histopathological Assessment of Colorectal Liver Metastasis Progression
XU, Xiaoyang
2019Thesis, cited 0 times
Thesis
Dissertation
Histopathology imaging features
COLON
Histopathology imaging is a type of microscopy imaging commonly used for the micro-level clinical examination of a patient's pathology. Due to the extremely large size of histopathology images, especially whole slide images (WSIs), it is difficult for pathologists to make a quantitative assessment by inspecting the details of a WSI. Hence, a computer-aided system is necessary to provide an objective and consistent assessment of the WSI for personalised treatment decisions. In this thesis, a deep learning framework for the automatic analysis of whole slide histopathology images is presented for the first time, which aims to address the challenging task of assessing and grading colorectal liver metastasis (CRLM). Quantitative evaluations of a patient's condition with CRLM are conducted through quantifying different tissue components in resected tumorous specimens. This study mimics the visual examination process of human experts by focusing on three levels of information, the tissue level, cell level and pixel level, to achieve step-by-step segmentation of histopathology images. At the tissue level, patches with category information are utilised to analyse the WSIs. Both classification-based approaches and segmentation-based approaches are investigated to locate the metastasis region and quantify different components of the WSI. For the classification-based method, different factors that might affect the classification accuracy are explored using state-of-the-art deep convolutional neural networks (DCNNs). Furthermore, a novel network is proposed to merge the information from different magnification levels to include contextual information to support the final decision. With the support of the segmentation-based method, edge information from the image is integrated with the proposed fully convolutional neural network to further enhance the segmentation results. At the cell level, nuclei-related information is examined to tackle the challenge of inadequate annotations. The problem is approached from two aspects: a weakly supervised nuclei detection and classification method is presented to model the nuclei in the CRLM by integrating a traditional image processing method and a variational auto-encoder (VAE), and a novel nuclei instance segmentation framework is proposed to boost the accuracy of nuclei detection and segmentation using the idea of transfer learning. Afterwards, a fusion framework is proposed to enhance the tissue-level segmentation results by leveraging the statistical and spatial properties of the cells. At the pixel level, the segmentation problem is tackled by introducing information from immunohistochemistry (IHC) stained images. Firstly, two data augmentation approaches, synthesis-based and transfer-based, are proposed to address the problem of insufficient pixel-level segmentation data. Afterwards, with the paired images and masks having been obtained, an end-to-end model is trained to achieve pixel-level segmentation. Secondly, another novel weakly supervised approach based on the generative adversarial network (GAN) is proposed to explore the feasibility of transforming unpaired haematoxylin and eosin (HE) images to IHC stained images. Extensive experiments reveal that the virtually stained images can also be used for pixel-level segmentation.
Accelerating Brain DTI and GYN MRI Studies Using Neural Network
There always exists a demand to accelerate the time-consuming MRI acquisition process. Many methods have been proposed to achieve this goal, including deep learning, which appears to be a robust tool compared to conventional methods. While much work has been done to evaluate the performance of neural networks on standard anatomical MR images, little attention has been paid to accelerating other, less conventional MR image acquisitions. This work aims to evaluate the feasibility of neural networks for accelerating brain DTI and gynecological brachytherapy MRI. Three neural networks, including U-net, Cascade-net and PD-net, were evaluated. Brain DTI data were acquired from the public database RIDER NEURO MRI, while cervical gynecological MRI data were acquired from Duke University Hospital clinical data. A 25% Cartesian undersampling strategy was applied to all the training and test data. Diffusion-weighted images and quantitative functional maps in brain DTI, and T1-spgr and T2 images in the GYN studies, were reconstructed. The performance of the neural networks was evaluated by quantitatively calculating the similarity between the reconstructed images and the reference images, using the metric Total Relative Error (TRE). Results showed that, with the architectures and parameters set in this work, all three neural networks could accelerate brain DTI and GYN T2 MR imaging. Generally, PD-net slightly outperformed Cascade-net, and both outperformed U-net with respect to image reconstruction performance. While this was also true for reconstruction of quantitative functional diffusion-weighted maps and GYN T1-spgr images, the overall performance of the three neural networks on these two tasks needs further improvement. In conclusion, PD-net is very promising for accelerating T2-weighted MR imaging. Future work can focus on adjusting the parameters and architectures of the neural networks to improve the performance on GYN T1-spgr MR imaging, and on adopting more robust undersampling strategies, such as radial undersampling, to further improve the overall acceleration performance.
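A sketch of 25% Cartesian undersampling and the TRE metric is given below; the relative-norm TRE formula is an assumed definition, since the thesis's exact formula is not quoted here:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(128, 128))        # stand-in for a fully sampled image
kspace = np.fft.fft2(image)

# 25% Cartesian undersampling: keep a quarter of the phase-encode lines
keep = rng.choice(128, size=32, replace=False)
mask = np.zeros(128, dtype=bool)
mask[keep] = True
recon = np.fft.ifft2(kspace * mask[:, None]).real   # zero-filled reconstruction

# Assumed TRE definition: relative l2 error against the reference image
tre = np.linalg.norm(recon - image) / np.linalg.norm(image)
print(f"TRE = {tre:.3f}")
```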
Non-invasive Profiling of Molecular Markers in Brain Gliomas using Deep Learning and Magnetic Resonance Images
Gliomas are the most common malignant primary brain tumors in both pediatric and adult populations. They arise from glial cells and are divided into low-grade and high-grade gliomas, with significant differences in patient survival. Patients with aggressive high-grade gliomas have life expectancies of less than 2 years. Glioblastoma (GBM) is an aggressive brain tumor classified by the World Health Organization (WHO) as grade IV brain cancer. The overall survival for GBM patients is poor and is in the range of 12 to 15 months. These tumors are typically treated by surgery, followed by radiotherapy and chemotherapy. Gliomas often consist of active tumor tissue, necrotic tissue, and surrounding edema. Magnetic Resonance Imaging (MRI) is the most commonly used modality to assess brain tumors because of its superior soft-tissue contrast. MRI tumor segmentation is used to identify the subcomponents as enhancing, necrotic or edematous tissue. Due to the heterogeneity and tissue relaxation differences in these subcomponents, multi-parametric (or multi-contrast) MRI is often used for accurate segmentation. Manual brain tumor segmentation is a challenging and tedious task for human experts due to the variability of tumor appearance, unclear borders of the tumor and the need to evaluate multiple MR images with different contrasts simultaneously. In addition, manual segmentation is often prone to significant intra- and inter-rater variability. To address these issues, Chapter 2 of my dissertation aims at designing and developing a highly accurate, 3D Dense-Unet Convolutional Neural Network (CNN) for segmenting brain tumors into subcomponents that can easily be incorporated into a clinical workflow. Primary brain tumors demonstrate broad variations in imaging features, response to therapy, and prognosis. It has become evident that this heterogeneity is associated with specific molecular and genetic profiles. For example, isocitrate dehydrogenase 1 and 2 (IDH 1/2) mutated gliomas demonstrate increased survival compared to wild-type gliomas of the same histologic grade. Identification of IDH mutation status as a marker for therapy and prognosis is considered one of the most important recent discoveries in brain glioma biology. Additionally, 1p/19q co-deletion and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation are associated with differences in response to specific chemoradiation regimens. Currently, the only reliable way of determining a molecular marker is by obtaining glioma tissue either via an invasive brain biopsy or following open surgical resection. Although the molecular profiling of gliomas is now a routine part of the evaluation of specimens obtained at biopsy or tumor resection, it would be helpful to have this information prior to surgery. In some cases, the information would aid in planning the extent of tumor resection. In others, for tumors in locations where resection is not possible and the risk of a biopsy is high, accurate delineation of the molecular and genetic profile of the tumor might be used to guide empiric treatment with radiation and/or chemotherapy. The ability to non-invasively profile these molecular markers using only T2w MRI has significant implications in determining therapy, predicting prognosis, and feasible clinical translation. Thus, Chapters 3, 4 and 5 of my dissertation focus on developing and evaluating deep learning algorithms for non-invasive profiling of molecular markers in brain gliomas using T2w MRI only.
This includes developing highly accurate, fully automated deep learning networks for (i) classification of IDH mutation status (Chapter 3), (ii) classification of 1p/19q co-deletion status (Chapter 4), and (iii) classification of MGMT promoter status in brain gliomas (Chapter 5). An important caveat of using MRI is the effect of image degradation, such as motion artifact, on the performance of deep learning-based algorithms. Motion artifacts are an especially pervasive source of MR image quality degradation and can be due to gross patient movements as well as cardiac and respiratory motion. In clinical practice, these artifacts can interfere with diagnostic interpretation, necessitating repeat imaging. The effect of motion artifacts on medical images and on deep learning-based molecular profiling algorithms has not been studied systematically. It is likely that motion corruption will also lead to reduced performance of deep learning algorithms in classifying brain tumor images. Deep learning-based brain tumor segmentation and molecular profiling algorithms generally perform well only on specific datasets. Clinical translation of such algorithms has the potential to reduce interobserver variability, improve planning for radiation therapy, and speed the assessment of response to therapy. Although these algorithms perform very well on several publicly available datasets, their generalization to clinical datasets or tasks has been poor, preventing easy clinical translation. Thus, Chapter 6 of my dissertation focuses on evaluating the performance of the molecular profiling algorithms on motion-corrupted, motion-corrected, and clinical T2w MRI. This includes (i) evaluating the effect of motion corruption on the molecular profiling algorithms, (ii) determining whether deep learning-based motion correction can recover the performance of these algorithms to levels similar to non-corrupted images, and (iii) evaluating the performance of these algorithms on clinical T2w MRI before and after motion correction. This chapter is an investigation of the effects of induced motion artifact on deep learning-based molecular classification, and of the relative importance of robust correction methods in recovering accuracy for potential clinical applicability. Deep learning studies typically require a very large amount of data to achieve good performance. The number of subjects available from the TCIA database is relatively small compared with the sample sizes typically required for deep learning. Despite this caveat, the data are representative of real-world clinical experience, with multiparametric MR images from multiple institutions, and represent one of the largest publicly available brain tumor databases. Additionally, the acquisition parameters and imaging vendor platforms are diverse across the imaging centers contributing data to TCIA. This study provides a framework for training, evaluating, and benchmarking any new artifact-correction architecture for potential insertion into a workflow. Although our results show promise for expeditious clinical translation, it will be essential to train and validate the algorithms using additional independent datasets. Thus, Chapter 7 of my dissertation discusses the limitations and possible future directions for this work.
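To make the segmentation objective in Chapter 2 concrete, here is a minimal soft Dice loss of the kind commonly used to train 3D U-Net variants against expert contours. The tensor shapes and smoothing constant are illustrative assumptions, not details taken from the dissertation.

```python
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary 3D segmentation.

    pred   -- network output after a sigmoid, shape (N, 1, D, H, W)
    target -- binary ground-truth mask with the same shape
    eps    -- smoothing constant to avoid division by zero (assumed value)
    """
    dims = (2, 3, 4)  # sum over the three spatial axes
    intersection = (pred * target).sum(dim=dims)
    denominator = pred.sum(dim=dims) + target.sum(dim=dims)
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice.mean()  # minimizing 1 - Dice maximizes overlap

# usage: loss = soft_dice_loss(torch.sigmoid(logits), mask)
```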
Co-Segmentation Methods for Improving Tumor Target Delineation in PET-CT Images
Renal cancer is the seventh most prevalent cancer among men and the tenth most frequent cancer among women, accounting for 5% and 3% of all adult malignancies, respectively. Kidney cancer is increasing dramatically in developing countries due to inadequate living conditions, and in developed countries due to poor lifestyles, smoking, obesity, and hypertension. For decades, radical nephrectomy (RN) was the standard method to address the high incidence of kidney cancer. However, the utilization of minimally invasive partial nephrectomy (PN) for the treatment of localized small renal masses has increased with the advent of laparoscopic and robotic-assisted procedures. In this framework, certain factors must be considered in the surgical planning and decision-making of partial nephrectomies, such as the morphology and location of the tumor. Advanced technologies such as automatic image segmentation, image and surface reconstruction, and 3D printing have been developed to assess the tumor anatomy before surgery and its relationship to surrounding structures, such as the arteriovenous system, with the aim of preventing damage. Overall, 3D printed anatomical kidney models are clearly useful to urologists, surgeons, and researchers as a reference point for preoperative planning and intraoperative visualization, supporting more efficient treatment and a high standard of care. Furthermore, they offer considerable advantages in education, in patient counseling, and in delivering therapeutic methods customized to the needs of each individual patient. In this context, the fundamental objective of this thesis is to provide an analytical and general pipeline for the generation of a renal 3D printed model from CT images. In addition, methods are proposed to enhance preoperative planning and help surgeons prepare the surgical procedure with increased accuracy so as to improve their performance.
Keywords: Medical Image, Computed Tomography (CT), Semantic Segmentation, Convolutional Neural Networks (CNNs), Surface Reconstruction, Mesh Processing, 3D Printing of Kidney, Operative assistance
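The segmentation-to-printing pipeline described above can be sketched in a few lines: a binary kidney mask is turned into a triangular surface mesh and exported as an STL file ready for slicing. The file names, smoothing settings, and choice of scikit-image plus trimesh are illustrative assumptions, not the thesis's actual toolchain.

```python
import numpy as np
import nibabel as nib          # for loading a segmented kidney mask (assumed format)
import trimesh
from skimage import measure

# Load a binary kidney segmentation; the path is a placeholder.
mask_img = nib.load("kidney_mask.nii.gz")
volume = mask_img.get_fdata() > 0.5
spacing = mask_img.header.get_zooms()[:3]        # voxel size in mm

# Extract a triangular surface mesh from the binary volume.
verts, faces, normals, _ = measure.marching_cubes(
    volume.astype(np.float32), level=0.5, spacing=spacing)

# Light Laplacian smoothing, then export to STL for 3D printing.
surface = trimesh.Trimesh(vertices=verts, faces=faces)
trimesh.smoothing.filter_laplacian(surface, iterations=10)
surface.export("kidney_model.stl")
```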
Deep Learning for Automated Medical Image Analysis
Medical imaging is an essential tool in many areas of medicine, used for both diagnosis and treatment. However, reading medical images and making diagnosis or treatment recommendations requires specially trained medical specialists. The current practice of reading medical images is labor-intensive, time-consuming, costly, and error-prone. It would be more desirable to have a computer-aided system that can automatically make diagnosis and treatment recommendations.
Recent advances in deep learning enable us to rethink the ways clinicians diagnose based on medical images. Early detection has proven to be critical to give patients the best chance of recovery and survival. Advanced computer-aided diagnosis systems are expected to have high sensitivity and low false positive rates. How to provide accurate diagnosis results and explore different types of clinical data is an important topic in current computer-aided diagnosis research.
In this thesis, we will introduce 1) mammograms for detecting breast cancers, the most frequently diagnosed solid cancer for U.S. women, 2) lung Computed Tomography (CT) images for detecting lung cancers, the most frequently diagnosed malignant cancer, and 3) head and neck CT images for automated delineation of organs at risk in radiotherapy. First, we will show how to employ the adversarial concept to generate hard examples that improve mammogram mass segmentation. Second, we will demonstrate how to use weakly labelled data for mammogram breast cancer diagnosis by efficiently designing deep learning for multi-instance learning. Third, the thesis will walk through the DeepLung system, which combines deep 3D ConvNets and a Gradient Boosting Machine (GBM) for automated lung nodule detection and classification. Fourth, we will show how to use weakly labelled data to improve an existing lung nodule detection system by integrating deep learning with a probabilistic graphical model. Lastly, we will demonstrate AnatomyNet, which is thousands of times faster and more accurate than previous methods for automated anatomy segmentation.
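As a concrete illustration of the two-stage DeepLung design mentioned above (deep 3D ConvNet features feeding a Gradient Boosting Machine), here is a minimal second-stage sketch. The 128-dimensional feature size and the synthetic data are placeholders, and the ConvNet feature extraction itself is omitted.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Stand-in for features pooled from a 3D ConvNet's penultimate layer
# (dimensions and labels are synthetic placeholders).
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 128))
labels = rng.integers(0, 2, size=500)  # 1 = malignant nodule

gbm = GradientBoostingClassifier(n_estimators=200, random_state=0)
gbm.fit(feats[:400], labels[:400])
print("held-out accuracy:", gbm.score(feats[400:], labels[400:]))
```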
New Diagnostics for Bipedality: The hominin ilium displays landmarks of a modified growth trajectory
Association of Peritumoral Radiomics With Tumor Biology and Pathologic Response to Preoperative Targeted Therapy for HER2 (ERBB2)-Positive Breast Cancer
Braman, Nathaniel
Prasanna, Prateek
Whitney, Jon
Singh, Salendra
Beig, Niha
Etesami, Maryam
Bates, David D. B.
Gallagher, Katherine
Bloch, B. Nicolas
Vulchi, Manasa
Turk, Paulette
Bera, Kaustav
Abraham, Jame
Sikov, William M.
Somlo, George
Harris, Lyndsay N.
Gilmore, Hannah
Plecha, Donna
Varadan, Vinay
Madabhushi, Anant
JAMA Netw Open2019Journal Article, cited 0 times
Website
Radiogenomics
TCGA-BRCA
Importance: There has been significant recent interest in understanding the utility of quantitative imaging to delineate breast cancer intrinsic biological factors and therapeutic response. No clinically accepted biomarkers are as yet available for estimation of response to human epidermal growth factor receptor 2 (currently known as ERBB2, but referred to as HER2 in this study)-targeted therapy in breast cancer.
Objective: To determine whether imaging signatures on clinical breast magnetic resonance imaging (MRI) could noninvasively characterize HER2-positive tumor biological factors and estimate response to HER2-targeted neoadjuvant therapy.
Design, Setting, and Participants: In a retrospective diagnostic study encompassing 209 patients with breast cancer, textural imaging features extracted within the tumor and annular peritumoral tissue regions on MRI were examined as a means to identify increasingly granular breast cancer subgroups relevant to therapeutic approach and response. First, among a cohort of 117 patients who received an MRI prior to neoadjuvant chemotherapy (NAC) at a single institution from April 27, 2012, through September 4, 2015, imaging features that distinguished HER2+ tumors from other receptor subtypes were identified. Next, among a cohort of 42 patients with HER2+ breast cancers with available MRI and RNA-seq data accumulated from a multicenter, preoperative clinical trial (BrUOG 211B), a signature of the response-associated HER2-enriched (HER2-E) molecular subtype within HER2+ tumors (n = 42) was identified. The association of this signature with pathologic complete response was explored in 2 patient cohorts from different institutions, where all patients received HER2-targeted NAC (n = 28, n = 50). Finally, the association between significant peritumoral features and lymphocyte distribution was explored in patients within the BrUOG 211B trial who had corresponding biopsy hematoxylin-eosin-stained slide images. Data analysis was conducted from January 15, 2017, to February 14, 2019.
Main Outcomes and Measures: Evaluation of imaging signatures by the area under the receiver operating characteristic curve (AUC) in identifying HER2+ molecular subtypes and distinguishing pathologic complete response (ypT0/is) to NAC with HER2-targeting.
Results: In the 209 patients included (mean [SD] age, 51.1 [11.7] years), features from the peritumoral regions better discriminated HER2-E tumors (maximum AUC, 0.85; 95% CI, 0.79-0.90; 9-12 mm from the tumor) compared with intratumoral features (AUC, 0.76; 95% CI, 0.69-0.84). A classifier combining peritumoral and intratumoral features identified the HER2-E subtype (AUC, 0.89; 95% CI, 0.84-0.93) and was significantly associated with response to HER2-targeted therapy in both validation cohorts (AUC, 0.80; 95% CI, 0.61-0.98 and AUC, 0.69; 95% CI, 0.53-0.84). Features from the 0- to 3-mm peritumoral region were significantly associated with the density of tumor-infiltrating lymphocytes (R2 = 0.57; 95% CI, 0.39-0.75; P = .002).
Conclusions and Relevance: A combination of peritumoral and intratumoral characteristics appears to identify intrinsic molecular subtypes of HER2+ breast cancers from imaging, offering insights into immune response within the peritumoral environment and suggesting potential benefit for treatment guidance.
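For readers who want to reproduce the annular peritumoral regions used above, a ring can be built by morphological dilation of the tumor mask. Isotropic pixels and the dilation-based construction are assumptions for illustration; the study's exact procedure may differ.

```python
import numpy as np
from scipy import ndimage

def dilate_mm(mask, dist_mm, pixel_mm):
    """Dilate a binary mask by approximately dist_mm (assumes isotropic pixels)."""
    it = int(round(dist_mm / pixel_mm))
    return ndimage.binary_dilation(mask, iterations=it) if it > 0 else mask

def peritumoral_ring(tumor_mask, inner_mm, outer_mm, pixel_mm=1.0):
    """Annulus between inner_mm and outer_mm from the tumor boundary."""
    outer = dilate_mm(tumor_mask, outer_mm, pixel_mm)
    inner = dilate_mm(tumor_mask, inner_mm, pixel_mm)
    return outer & ~inner

# e.g., the 9-12 mm annulus: peritumoral_ring(mask, 9, 12, pixel_mm=0.5)
```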
A Competition, Benchmark, Code, and Data for Using Artificial Intelligence to Detect Lesions in Digital Breast Tomosynthesis
Konz, N.
Buda, M.
Gu, H.
Saha, A.
Yang, J.
Chledowski, J.
Park, J.
Witowski, J.
Geras, K. J.
Shoshan, Y.
Gilboa-Solomon, F.
Khapun, D.
Ratner, V.
Barkan, E.
Ozery-Flato, M.
Marti, R.
Omigbodun, A.
Marasinou, C.
Nakhaei, N.
Hsu, W.
Sahu, P.
Hossain, M. B.
Lee, J.
Santos, C.
Przelaskowski, A.
Kalpathy-Cramer, J.
Bearce, B.
Cha, K.
Farahani, K.
Petrick, N.
Hadjiiski, L.
Drukker, K.
Armato, S. G., 3rd
Mazurowski, M. A.
JAMA Netw Open2023Journal Article, cited 0 times
Website
Breast-Cancer-Screening-DBT
Challenge
Humans
Computer Aided Detection (CADe)
Benchmarking
Mammography/methods
Algorithm Development
Radiographic Image Interpretation
Computer-Assisted/methods
*Breast Neoplasms/diagnostic imaging
IMPORTANCE: An accurate and robust artificial intelligence (AI) algorithm for detecting cancer in digital breast tomosynthesis (DBT) could significantly improve detection accuracy and reduce health care costs worldwide. OBJECTIVES: To make training and evaluation data for the development of AI algorithms for DBT analysis available, to develop well-defined benchmarks, and to create publicly available code for existing methods. DESIGN, SETTING, AND PARTICIPANTS: This diagnostic study is based on a multi-institutional international grand challenge in which research teams developed algorithms to detect lesions in DBT. A data set of 22 032 reconstructed DBT volumes was made available to research teams. Phase 1, in which teams were provided 700 scans from the training set, 120 from the validation set, and 180 from the test set, took place from December 2020 to January 2021, and phase 2, in which teams were given the full data set, took place from May to July 2021. MAIN OUTCOMES AND MEASURES: The overall performance was evaluated by mean sensitivity for biopsied lesions using only DBT volumes with biopsied lesions; ties were broken by including all DBT volumes. RESULTS: A total of 8 teams participated in the challenge. The team with the highest mean sensitivity for biopsied lesions was the NYU B-Team, with 0.957 (95% CI, 0.924-0.984), and the second-place team, ZeDuS, had a mean sensitivity of 0.926 (95% CI, 0.881-0.964). When the results were aggregated, the mean sensitivity for all submitted algorithms was 0.879; for only those who participated in phase 2, it was 0.926. CONCLUSIONS AND RELEVANCE: In this diagnostic study, an international competition produced algorithms with high sensitivity for using AI to detect lesions on DBT images. A standardized performance benchmark for the detection task using publicly available clinical imaging data was released, with detailed descriptions and analyses of submitted algorithms accompanied by a public release of their predictions and code for selected methods. These resources will serve as a foundation for future research on computer-assisted diagnosis methods for DBT, significantly lowering the barrier of entry for new researchers.
Development and Validation of an Automated Image-Based Deep Learning Platform for Sarcopenia Assessment in Head and Neck Cancer
Ye, Zezhong
Saraf, Anurag
Ravipati, Yashwanth
Hoebers, Frank
Catalano, Paul J.
Zha, Yining
Zapaishchykova, Anna
Likitlersuang, Jirapat
Guthier, Christian
Tishler, Roy B.
Schoenfeld, Jonathan D.
Margalit, Danielle N.
Haddad, Robert I.
Mak, Raymond H.
Naser, Mohamed
Wahid, Kareem A.
Sahlsten, Jaakko
Jaskari, Joel
Kaski, Kimmo
Mäkitie, Antti A.
Fuller, Clifton D.
Aerts, Hugo J. W. L.
Kann, Benjamin H.
JAMA Network Open2023Journal Article, cited 0 times
HNSCC
sarcopenia
Deep Learning
Sarcopenia is an established prognostic factor in patients with head and neck squamous cell carcinoma (HNSCC); the quantification of sarcopenia assessed by imaging is typically achieved through the skeletal muscle index (SMI), which can be derived from cervical skeletal muscle segmentation and cross-sectional area. However, manual muscle segmentation is labor intensive, prone to interobserver variability, and impractical for large-scale clinical use.To develop and externally validate a fully automated image-based deep learning platform for cervical vertebral muscle segmentation and SMI calculation and evaluate associations with survival and treatment toxicity outcomes.For this prognostic study, a model development data set was curated from publicly available and deidentified data from patients with HNSCC treated at MD Anderson Cancer Center between January 1, 2003, and December 31, 2013. A total of 899 patients undergoing primary radiation for HNSCC with abdominal computed tomography scans and complete clinical information were selected. An external validation data set was retrospectively collected from patients undergoing primary radiation therapy between January 1, 1996, and December 31, 2013, at Brigham and Women’s Hospital. The data analysis was performed between May 1, 2022, and March 31, 2023.C3 vertebral skeletal muscle segmentation during radiation therapy for HNSCC.Overall survival and treatment toxicity outcomes of HNSCC.The total patient cohort comprised 899 patients with HNSCC (median [range] age, 58 [24-90] years; 140 female [15.6%] and 755 male [84.0%]). Dice similarity coefficients for the validation set (n = 96) and internal test set (n = 48) were 0.90 (95% CI, 0.90-0.91) and 0.90 (95% CI, 0.89-0.91), respectively, with a mean 96.2% acceptable rate between 2 reviewers on external clinical testing (n = 377). Estimated cross-sectional area and SMI values were associated with manually annotated values (Pearson r = 0.99; P < .001) across data sets. On multivariable Cox proportional hazards regression, SMI-derived sarcopenia was associated with worse overall survival (hazard ratio, 2.05; 95% CI, 1.04-4.04; P = .04) and longer feeding tube duration (median [range], 162 [6-1477] vs 134 [15-1255] days; hazard ratio, 0.66; 95% CI, 0.48-0.89; P = .006) than no sarcopenia.This prognostic study’s findings show external validation of a fully automated deep learning pipeline to accurately measure sarcopenia in HNSCC and an association with important disease outcomes. The pipeline could enable the integration of sarcopenia assessment into clinical decision making for individuals with HNSCC.
Quantitative variations in texture analysis features dependent on MRI scanning parameters: A phantom model
Buch, Karen
Kuno, Hirofumi
Qureshi, Muhammad M
Li, Baojun
Sakai, Osamu
Journal of applied clinical medical physics2018Journal Article, cited 0 times
Website
RIDER
TCGA
texture analysis
MRI
Assessment of prostate cancer prognostic Gleason grade group using zonal-specific features extracted from biparametric MRI using a KNN classifier
Jensen, C.
Carl, J.
Boesen, L.
Langkilde, N. C.
Ostergaard, L. R.
J Appl Clin Med Phys2019Journal Article, cited 0 times
Website
SPIE-AAPM PROSTATEx Challenge
PROSTATE
K Nearest Neighbor (KNN)
Classification
PURPOSE: To automatically assess the aggressiveness of prostate cancer (PCa) lesions using zonal-specific image features extracted from diffusion weighted imaging (DWI) and T2W MRI. METHODS: Regions of interest were extracted from DWI (peripheral zone) and T2W MRI (transitional zone and anterior fibromuscular stroma) around the centers of 112 PCa lesions from 99 patients. Image histogram and texture features, 38 in total, were used together with a k-nearest neighbor classifier to classify lesions into their respective prognostic Grade Group (GG) (proposed by the International Society of Urological Pathology 2014 consensus conference). A semi-exhaustive feature search was performed (1-6 features in each feature set) and validated using threefold stratified cross-validation in a one-versus-rest classification setup. RESULTS: Classifying PCa lesions into GGs resulted in AUCs of 0.87, 0.88, 0.96, 0.98, and 0.91 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5 for the peripheral zone, respectively. The results for the transitional zone and anterior fibromuscular stroma were AUCs of 0.85, 0.89, 0.83, 0.94, and 0.86 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5, respectively. CONCLUSION: This study showed promising results with reasonable AUC values for classification of all GGs, indicating that zonal-specific imaging features from DWI and T2W MRI can be used to differentiate between PCa lesions of various aggressiveness.
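A minimal sketch of the classification setup described above: texture features, a k-nearest neighbor classifier, threefold stratified cross-validation, and a one-versus-rest AUC. The synthetic feature matrix, the choice of k = 5, and the scaling step are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

# Placeholder lesion features: 112 lesions x 38 features, as in the study;
# y marks membership of one grade group in a one-versus-rest setup.
rng = np.random.default_rng(0)
X = rng.normal(size=(112, 38))
y = rng.integers(0, 2, size=112)

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
proba = cross_val_predict(knn, X, y, cv=cv, method="predict_proba")[:, 1]
print("one-vs-rest AUC:", roc_auc_score(y, proba))
```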
Impact of image preprocessing methods on reproducibility of radiomic features in multimodal magnetic resonance imaging in glioblastoma
Moradmand, Hajar
Aghamiri, Seyed Mahmoud Reza
Ghaderi, Reza
J Appl Clin Med Phys2019Journal Article, cited 0 times
TCGA-GBM
GLISTR
Radiomics
Magnetic Resonance Imaging (MRI)
To investigate the effect of image preprocessing, with respect to intensity inhomogeneity correction and noise filtering, on the robustness and reproducibility of radiomics features extracted from the glioblastoma (GBM) tumor in multimodal MR images (mMRI). In this study, for each patient 1461 radiomics features were extracted from GBM subregions (i.e., edema, necrosis, enhancement, and tumor) of mMRI (i.e., FLAIR, T1, T1C, and T2) volumes for five preprocessing combinations (116 880 radiomics features in total). The robustness and reproducibility of the radiomics features were assessed under four comparisons: (a) baseline versus modified bias field; (b) baseline versus modified bias field followed by noise filtering; (c) baseline versus modified noise; and (d) baseline versus modified noise followed by bias field correction. The concordance correlation coefficient (CCC), dynamic range (DR), and interclass correlation coefficient (ICC) were used as metrics. Shape features and, subsequently, local binary pattern (LBP) filtered images were highly stable and reproducible against bias field correction and noise filtering in all measurements. In all MRI modalities, necrosis regions (NC: n ~ 449/1461, 30%) had the highest number of highly robust features (CCC and DR >= 0.9) compared with edema (ED: n ~ 296/1461, 20%), enhanced (EN: n ~ 281/1461, 19%), and active-tumor (TM: n ~ 254/1461, 17%) regions. Furthermore, our results identified that the percentage of highly reproducible features with ICC >= 0.9 was higher after bias field correction (23.2%) and after bias field correction followed by noise filtering (22.4%) than after noise smoothing or noise smoothing followed by bias correction. These preliminary findings imply that preprocessing sequences can have a significant impact on the robustness and reproducibility of mMRI-based radiomics features, and that identification of generalizable and consistent preprocessing algorithms is a pivotal step before bringing radiomics biomarkers into the clinic for GBM patients.
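The CCC used above as a robustness metric is straightforward to compute; a minimal numpy implementation of Lin's concordance correlation coefficient follows (population variances are used, a common convention).

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurements
    of the same radiomic feature (e.g., before/after preprocessing)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Features with CCC >= 0.9 across preprocessing variants count as robust here.
```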
Dynamic conformal arcs for lung stereotactic body radiation therapy: A comparison with volumetric-modulated arc therapy
Bokrantz, R.
Wedenberg, M.
Sandwall, P.
J Appl Clin Med Phys2020Journal Article, cited 1 times
Website
4D-Lung
Computed Tomography (CT)
This study constitutes a feasibility assessment of dynamic conformal arc (DCA) therapy as an alternative to volumetric-modulated arc therapy (VMAT) for stereotactic body radiation therapy (SBRT) of lung cancer. The rationale for DCA is lower geometric complexity and hence reduced risk for interplay errors induced by respiratory motion. Forward planned DCA and inverse planned DCA based on segment-weight optimization were compared to VMAT for single arc treatments of five lung patients. Analysis of dose-volume histograms and clinical goal fulfillment revealed that DCA can generate satisfactory and near equivalent dosimetric quality to VMAT, except for complex tumor geometries. Segment-weight optimized DCA provided spatial dose distributions qualitatively similar to those for VMAT. Our results show that DCA, and particularly segment-weight optimized DCA, may be an attractive alternative to VMAT for lung SBRT treatments if the patient anatomy is favorable.
A feasibility study to estimate optimal rigid-body registration using combinatorial rigid registration optimization (CORRO)
Yorke, A. A.
Solis, D., Jr.
Guerrero, T.
J Appl Clin Med Phys2020Journal Article, cited 0 times
PURPOSE: Clinical image pairs provide the most realistic test data for image registration evaluation. However, the optimal registration is unknown. Using combinatorial rigid registration optimization (CORRO), we demonstrate a method to estimate the optimal alignment for rigid registration of clinical image pairs. METHODS: Expert-selected landmark pairs were identified for each CT/CBCT image pair for six cases representing head and neck, thoracic, and pelvic anatomic regions. Combination subsets of k landmark pairs (k-combination sets) were generated without repetition to form a large collection of k-combination sets (k-set) for k = 4, 8, 12. The rigid transformation between the image pairs was calculated for each k-combination set. The mean and standard deviation of these transformations were used to derive the final registration for each k-set. RESULTS: The standard deviation of the registration output decreased as the k-size increased for all cases. The joint entropy evaluated for each k-set of each case was smaller than those from two commercially available registration programs, indicating a stronger correlation between the image pair after CORRO was used. A joint histogram plot of all three algorithms showed high correlation between them. As further proof of the efficacy of CORRO, the joint entropy of each member of 30 000 k-combination sets for k = 4 was calculated for one of the thoracic cases. The minimum joint entropy was found to occur at the estimated mean of the registrations, indicating that CORRO converges to the optimal rigid-registration result. CONCLUSIONS: We have developed a methodology called CORRO that allows us to estimate the optimal alignment for rigid registration of clinical image pairs using large sets of landmark points. The rigid-body registration results have been shown to be comparable to results from commercially available algorithms for all six cases. CORRO can serve as an excellent tool for testing and validating rigid registration algorithms.
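The core of CORRO can be sketched compactly: fit a rigid transform for each k-combination of landmark pairs, then aggregate. The Kabsch solver below is a standard least-squares rigid fit; random sampling of the combinations and the naive averaging of rotation matrices are simplifications for illustration, not the paper's exact procedure.

```python
import itertools
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q via the
    Kabsch algorithm; P and Q are (k, 3) arrays of corresponding landmarks."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

def corro_estimate(landmarks_ct, landmarks_cbct, k=4, n_sets=1000, seed=0):
    """Average rigid registrations over sampled k-combinations of landmark
    pairs, in the spirit of CORRO (sampling keeps the sketch tractable;
    enumerating every combination can be expensive for many landmarks)."""
    rng = np.random.default_rng(seed)
    combos = list(itertools.combinations(range(len(landmarks_ct)), k))
    rng.shuffle(combos)
    fits = [rigid_fit(landmarks_ct[list(c)], landmarks_cbct[list(c)])
            for c in combos[:n_sets]]
    t_mean = np.mean([t for _, t in fits], axis=0)
    R_mean = np.mean([R for R, _ in fits], axis=0)  # crude rotation average
    return R_mean, t_mean
```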
SBRT of ventricular tachycardia using 4pi optimized trajectories
Reis, C.
Little, B.
Lee MacDonald, R.
Syme, A.
Thomas, C. G.
Robar, J. L.
J Appl Clin Med Phys2021Journal Article, cited 0 times
Website
CT Lymph Nodes
Radiation Therapy
Segmentation
radiosurgery
ventricular tachycardia
HEART
PURPOSE: To investigate the possible advantages of using 4pi-optimized arc trajectories in stereotactic body radiation therapy of ventricular tachycardia (VT-SBRT) to minimize exposure of healthy tissues. METHODS AND MATERIALS: Thorax computed tomography (CT) data for 15 patients were used for contouring organs at risk (OARs) and defining realistic planning target volumes (PTVs). A conventional trajectory plan, defined as two full coplanar arcs was compared to an optimized-trajectory plan provided by a 4pi algorithm that penalizes geometric overlap of PTV and OARs in the beam's-eye-view. A single fraction of 25 Gy was prescribed to the PTV in both plans and a comparison of dose sparing to OARs was performed based on comparisons of maximum, mean, and median dose. RESULTS: A significant average reduction in maximum dose was observed for esophagus (18%), spinal cord (26%), and trachea (22%) when using 4pi-optimized trajectories. Mean doses were also found to decrease for esophagus (19%), spinal cord (33%), skin (18%), liver (59%), lungs (19%), trachea (43%), aorta (11%), inferior vena cava (25%), superior vena cava (33%), and pulmonary trunk (26%). A median dose reduction was observed for esophagus (40%), spinal cord (48%), skin (36%), liver (72%), lungs (41%), stomach (45%), trachea (53%), aorta (45%), superior vena cava (38%), pulmonary veins (32%), and pulmonary trunk (39%). No significant difference was observed for maximum dose (p = 0.650) and homogeneity index (p = 0.156) for the PTV. Average values of conformity number were 0.86 +/- 0.05 and 0.77 +/- 0.09 for the conventional and 4pi optimized plans respectively. CONCLUSIONS: 4pi optimized trajectories provided significant reduction to mean and median doses to cardiac structures close to the target but did not decrease maximum dose. Significant improvement in maximum, mean and median doses for noncardiac OARs makes 4pi optimized trajectories a suitable delivery technique for treating VT.
Attention-guided duplex adversarial U-net for pancreatic segmentation from computed tomography images
Li, M.
Lian, F.
Li, Y.
Guo, S.
J Appl Clin Med Phys2022Journal Article, cited 0 times
Website
Pancreas-CT
Machine Learning
Generative adversarial network
Segmentation
PURPOSE: Segmenting organs from computed tomography (CT) images is crucial to early diagnosis and treatment. Pancreas segmentation is especially challenging because the pancreas has a small volume and a large variation in shape. METHODS: To mitigate this issue, an attention-guided duplex adversarial U-Net (ADAU-Net) for pancreas segmentation is proposed in this work. First, two adversarial networks are integrated into the baseline U-Net to ensure the obtained prediction maps resemble the ground truths. Then, attention blocks are applied to preserve contextual information for segmentation. The implementation of the proposed ADAU-Net consists of two steps: 1) a backbone segmentor selection scheme is introduced to select an optimal backbone segmentor from three two-dimensional segmentation model variants based on a conventional U-Net, and 2) attention blocks are integrated into the backbone segmentor at several locations to enhance the interdependency among pixels for better segmentation performance, and the optimal structure is selected as the final version. RESULTS: The experimental results on the National Institutes of Health Pancreas-CT dataset show that our proposed ADAU-Net outperforms the baseline segmentation network by 6.39% in Dice similarity coefficient and obtains competitive performance compared with state-of-the-art methods for pancreas segmentation. CONCLUSION: The ADAU-Net achieves satisfactory segmentation results on the public pancreas dataset, indicating that the proposed model can segment pancreas outlines from CT images accurately.
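For illustration, a minimal additive attention block of the kind applied to U-Net skip connections; the 2D setting, channel sizes, and exact gating design are assumptions rather than ADAU-Net's actual blocks.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention on a U-Net skip connection: the coarser decoder
    feature gates the encoder feature so irrelevant context is suppressed."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        # gate must already be upsampled to skip's spatial size
        attn = torch.relu(self.w_skip(skip) + self.w_gate(gate))
        attn = torch.sigmoid(self.psi(attn))   # per-pixel weights in [0, 1]
        return skip * attn

# usage: gated = AttentionGate(64, 64, 32)(encoder_feat, upsampled_decoder_feat)
```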
Deep learning-based auto segmentation using generative adversarial network on magnetic resonance images obtained for head and neck cancer patients
Kawahara, D.
Tsuneda, M.
Ozawa, S.
Okamoto, H.
Nakamura, M.
Nishio, T.
Nagata, Y.
J Appl Clin Med Phys2022Journal Article, cited 0 times
Website
AAPM RT-MAC
*Deep Learning
*Head and Neck Neoplasms/diagnostic imaging/radiotherapy
Humans
Image Processing
Computer-Assisted/methods
Magnetic Resonance Imaging
Organs at Risk
Convolutional Neural Network (CNN)
Generative Adversarial Network (GAN)
deep learning
segmentation
PURPOSE: Adaptive radiotherapy requires auto-segmentation in patients with head and neck (HN) cancer. In the current study, we propose an auto-segmentation model using a generative adversarial network (GAN) on magnetic resonance (MR) images of HN cancer for MR-guided radiotherapy (MRgRT). MATERIAL AND METHODS: In the current study, we used a dataset from the American Association of Physicists in Medicine MRI Auto-Contouring (RT-MAC) Grand Challenge 2019. Specifically, eight structures in the MR images of the HN region, namely the submandibular glands, lymph node levels II and III, and the parotid glands, were segmented with deep learning models using a GAN and a fully convolutional network with a U-net. These results were compared with the clinically used atlas-based segmentation. RESULTS: The mean Dice similarity coefficient (DSC) of the U-net and GAN models was significantly higher than that of the atlas-based method for all the structures (p < 0.05), and the maximum Hausdorff distance (HD) was significantly lower than that of the atlas method (p < 0.05). Comparing the 2.5D and 3D U-nets, the 3D U-net was superior in segmenting the organs at risk (OAR) for HN patients. The DSC was highest, at 0.75-0.85, and the HD was lowest, within 5.4 mm, for the 2.5D GAN model in all the OARs. CONCLUSIONS: In the current study, we investigated the auto-segmentation of the OARs for HN patients using U-net and GAN models on MR images. Our proposed model is potentially valuable for improving the efficiency of HN RT treatment planning.
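The DSC and HD metrics reported above are simple to compute once masks are available. Below is a sketch of a symmetric Hausdorff distance on binary masks using SciPy; measuring over all foreground voxels rather than extracted surfaces is a simplification.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(mask_a, mask_b, spacing=(1.0, 1.0)):
    """Symmetric Hausdorff distance (in mm) between two binary 2D masks."""
    pts_a = np.argwhere(mask_a) * np.asarray(spacing)  # voxel coords -> mm
    pts_b = np.argwhere(mask_b) * np.asarray(spacing)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```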
Improving reproducibility and performance of radiomics in low‐dose CT using cycle GANs
Chen, Junhua
Wee, Leonard
Dekker, Andre
Bermejo, Inigo
Journal of applied clinical medical physics2022Journal Article, cited 0 times
LDCT-and-Projection-data
NSCLC-Radiomics
TCGA-LUAD
BACKGROUND: As a means to extract biomarkers from medical imaging, radiomics has attracted increased attention from researchers. However, reproducibility and performance of radiomics in low-dose CT scans are still poor, mostly due to noise. Deep learning generative models can be used to denoise these images and in turn improve radiomics' reproducibility and performance. However, most generative models are trained on paired data, which can be difficult or impossible to collect.
PURPOSE: In this article, we investigate the possibility of denoising low-dose CTs using cycle generative adversarial networks (GANs) to improve radiomics reproducibility and performance based on unpaired datasets.
METHODS AND MATERIALS: Two cycle GANs were trained: (1) from paired data, by simulating low-dose CTs (i.e., introducing noise) from high-dose CTs, and (2) from unpaired real low-dose CTs. To accelerate convergence, a slice-paired training strategy was introduced during GAN training. The trained GANs were applied to three scenarios: (1) improving radiomics reproducibility in simulated low-dose CT images, (2) improving radiomics reproducibility in same-day repeat low-dose CTs (RIDER dataset), and (3) improving radiomics performance in survival prediction. Cycle GAN results were compared with a conditional GAN (CGAN) and an encoder-decoder network (EDN) trained on simulated paired data.
RESULTS: The cycle GAN trained on simulated data improved the concordance correlation coefficients (CCC) of radiomic features from 0.87 (95% CI, [0.833, 0.901]) to 0.93 (95% CI, [0.916, 0.949]) on simulated noise CT and from 0.89 (95% CI, [0.881, 0.914]) to 0.92 (95% CI, [0.908, 0.937]) on the RIDER dataset, as well as improving the area under the receiver operating characteristic curve (AUC) of survival prediction from 0.52 (95% CI, [0.511, 0.538]) to 0.59 (95% CI, [0.578, 0.602]). The cycle GAN trained on real data increased the CCCs of features in RIDER to 0.95 (95% CI, [0.933, 0.961]) and the AUC of survival prediction to 0.58 (95% CI, [0.576, 0.596]).
CONCLUSION: The results show that cycle GANs trained on both simulated and real data can improve radiomics' reproducibility and performance in low-dose CT and achieve similar results compared to CGANs and EDNs.
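The heart of the unpaired training used above is the cycle-consistency constraint. A minimal sketch of the two cycle terms follows; the generator names, the L1 penalty, and the weight lam are assumptions, and the adversarial terms of the full objective are omitted.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_losses(G_ld2hd, G_hd2ld, low_dose, high_dose, lam=10.0):
    """Cycle-consistency terms for unpaired low-dose <-> high-dose CT
    translation (generator names and the lambda weight are assumptions)."""
    fake_hd = G_ld2hd(low_dose)                  # denoised low-dose image
    fake_ld = G_hd2ld(high_dose)                 # re-noised high-dose image
    cycle_ld = l1(G_hd2ld(fake_hd), low_dose)    # LD -> HD -> LD
    cycle_hd = l1(G_ld2hd(fake_ld), high_dose)   # HD -> LD -> HD
    return lam * (cycle_ld + cycle_hd)
```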
Self‐adaption and texture generation: A hybrid loss function for low‐dose CT denoising
Wang, Zhenchuan
Liu, Minghui
Cheng, Xuan
Zhu, Jinqi
Wang, Xiaomin
Gong, Haigang
Liu, Ming
Xu, Lifeng
Journal of applied clinical medical physics2023Journal Article, cited 0 times
LDCT-and-Projection-data
BACKGROUND: Deep learning has been successfully applied to low-dose CT (LDCT) denoising, but the training of such models depends heavily on an appropriate loss function. Existing denoising models often use per-pixel losses, including mean absolute error (MAE) and mean squared error (MSE). These ignore the difference in denoising difficulty between different regions of CT images and lead to the loss of texture information in the generated images.
PURPOSE: In this paper, we propose a new hybrid loss function that adapts to the noise in different regions of CT images to balance denoising difficulty and preserve texture details, thus producing CT images of high diagnostic quality from LDCT images and providing strong support for diagnosis.
METHODS: We propose a hybrid loss function consisting of a weighted patch loss (WPLoss) and a high-frequency information loss (HFLoss). To enhance the model's denoising of local areas that are difficult to denoise, we extend the MAE to obtain WPLoss: after the generated image and the target image are divided into several patches, the loss weight of each patch is adaptively and dynamically adjusted according to its loss ratio. In addition, considering that texture details are contained in the high-frequency information of the image, we use HFLoss to calculate the difference between CT images in the high-frequency component.
RESULTS: Our hybrid loss function improves the denoising performance of several models in the experiments, obtaining higher peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values. Moreover, visual inspection of the generated results of the comparison experiments shows that the proposed hybrid loss function effectively suppresses noise and retains image details.
CONCLUSIONS: We propose a hybrid loss function for LDCT image denoising that has good interpretability and can improve the denoising performance of existing models. Validation results across multiple models and datasets show that it generalizes well. Using this loss function, high-quality CT images are achieved at low radiation dose, avoiding radiation hazards while supporting disease diagnosis.
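A rough sketch of the two ingredients described above: a patch-reweighted MAE and a high-frequency FFT loss. The patch size, weighting rule, frequency cutoff, and mixing weight are all assumptions; the paper's exact formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def weighted_patch_mae(pred, target, patch=64):
    # pred/target: (N, 1, H, W) tensors; H and W assumed divisible by patch.
    # MAE per patch, re-weighted by each patch's share of the total loss so
    # that hard-to-denoise regions count more (weighting rule is an assumption).
    p = F.unfold(pred, kernel_size=patch, stride=patch)    # (N, patch*patch, L)
    t = F.unfold(target, kernel_size=patch, stride=patch)
    per_patch = (p - t).abs().mean(dim=1)                  # (N, L)
    w = per_patch / per_patch.sum(dim=1, keepdim=True).clamp_min(1e-12)
    return (w.detach() * per_patch).sum(dim=1).mean()

def high_frequency_loss(pred, target, radius=0.25):
    # Compare only high-frequency FFT components, where texture detail lives.
    diff = torch.fft.fft2(pred) - torch.fft.fft2(target)
    h, w = pred.shape[-2:]
    fy = torch.fft.fftfreq(h, device=pred.device).abs()
    fx = torch.fft.fftfreq(w, device=pred.device).abs()
    mask = (fy[:, None] ** 2 + fx[None, :] ** 2).sqrt() > radius
    return diff.abs()[..., mask].mean()

def hybrid_loss(pred, target, alpha=0.1):
    # alpha balances the two terms (assumed value).
    return weighted_patch_mae(pred, target) + alpha * high_frequency_loss(pred, target)
```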
Contrast-enhanced MRI synthesis using dense-dilated residual convolutions based 3D network toward elimination of gadolinium in neuro-oncology
Osman, A. F. I.
Tamam, N. M.
J Appl Clin Med Phys2023Journal Article, cited 0 times
Website
BraTS 2021
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Deep learning
dilated convolution
gadolinium-based contrast agents
glioma
medical image synthesis
neuro-oncology
residual connection
Recent studies have raised broad safety and health concerns about the use of gadolinium contrast agents during magnetic resonance imaging (MRI) to enhance identification of active tumors. In this paper, we developed a deep learning-based method for three-dimensional (3D) contrast-enhanced T1-weighted (T1) image synthesis from contrast-free image(s). The MR images of 1251 patients with glioma from the RSNA-ASNR-MICCAI BraTS Challenge 2021 dataset were used in this study. A 3D dense-dilated residual U-Net (DD-Res U-Net) was developed for contrast-enhanced T1 image synthesis from contrast-free image(s). The model was trained on a randomly split training set (n = 800) using a customized loss function and validated on a validation set (n = 200) to improve its generalizability. The generated images were quantitatively assessed against the ground truth on a test set (n = 251) using the mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), normalized mutual information (NMI), and Hausdorff distance (HDD) metrics. We also performed a qualitative visual similarity assessment between the synthetic and ground-truth images. The effectiveness of the proposed model was compared with a 3D U-Net baseline model and existing deep learning-based methods in the literature. Our proposed DD-Res U-Net model achieved promising performance for contrast-enhanced T1 synthesis in both quantitative metrics and perceptual evaluation on the test set (n = 251). Analysis of results on the whole brain region showed a PSNR (in dB) of 29.882 +/- 5.924, a SSIM of 0.901 +/- 0.071, a MAE of 0.018 +/- 0.013, a MSE of 0.002 +/- 0.002, a HDD of 2.329 +/- 9.623, and a NMI of 1.352 +/- 0.091 when using only T1 as input; and a PSNR (in dB) of 30.284 +/- 4.934, a SSIM of 0.915 +/- 0.063, a MAE of 0.017 +/- 0.013, a MSE of 0.001 +/- 0.002, a HDD of 1.323 +/- 3.551, and a NMI of 1.364 +/- 0.089 when combining T1 with other MRI sequences. Compared to the U-Net baseline model, our model revealed superior performance. Our model demonstrated excellent capability in generating synthetic contrast-enhanced T1 images from contrast-free MR image(s) of the whole brain region when using multiple contrast-free images as input. However, without incorporating tumor mask information during network training, its performance was inferior in the tumor regions compared to the whole brain, which requires further improvement before gadolinium administration can be replaced in neuro-oncology.
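Most of the quantitative metrics above are available off the shelf; a minimal evaluation helper using scikit-image follows (normalized input volumes are an assumption, and the NMI and HDD metrics are omitted for brevity).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def synthesis_metrics(truth, synth):
    """Image-quality metrics comparing a synthetic contrast-enhanced T1
    volume against the ground truth; both arrays are assumed to be
    co-registered 3D volumes normalized to a common intensity range."""
    data_range = float(truth.max() - truth.min())
    return {
        "MAE": float(np.abs(truth - synth).mean()),
        "MSE": float(((truth - synth) ** 2).mean()),
        "PSNR": peak_signal_noise_ratio(truth, synth, data_range=data_range),
        "SSIM": structural_similarity(truth, synth, data_range=data_range),
    }
```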
A deep learning approach to remove contrast from contrast-enhanced CT for proton dose calculation
Wang, X.
Hao, Y.
Duan, Y.
Yang, D.
J Appl Clin Med Phys2024Journal Article, cited 0 times
CPTAC-PDA
TCGA-STAD
Generative Adversarial Network (GAN)
Contrast enhancement
Computed Tomography (CT)
Deep learning
medical image processing
proton dose calculation
radiation therapy
Radiotherapy
PURPOSE: Non-Contrast Enhanced CT (NCECT) is normally required for proton dose calculation while Contrast Enhanced CT (CECT) is often scanned for tumor and organ delineation. Possible tissue motion between these two CTs raises dosimetry uncertainties, especially for moving tumors in the thorax and abdomen. Here we report a deep learning approach to generate NCECT directly from CECT. This method could be useful to avoid the NCECT scan, reduce CT simulation time and imaging dose, and decrease the uncertainties caused by tissue motion between otherwise two different CT scans. METHODS: A deep network was developed to convert CECT to NCECT. The network receives a 3D patch from the CECT images as input and generates a corresponding contrast-removed NCECT image patch. Abdominal CECT and NCECT image pairs of 20 patients were deformably registered, and 8000 image patch pairs extracted from the registered image pairs were utilized to train and test the model. CTs of clinical proton patients and their treatment plans were employed to evaluate the dosimetric impact of using the generated NCECT for proton dose calculation. RESULTS: Our approach achieved a cosine similarity score of 0.988 and an MSE value of 0.002. A quantitative comparison of clinical proton dose plans computed on the CECT and the generated NCECT for five proton patients revealed significant dose differences at the distal ends of the beam paths. V100% of PTV and GTV changed by 3.5% and 5.5%, respectively. The mean HU difference for all five patients between the generated and the scanned NCECTs was approximately 4.72, whereas the difference between CECT and the scanned NCECT was approximately 64.52, indicating an approximately 93% reduction in mean HU difference. CONCLUSIONS: A deep learning approach was developed to generate NCECTs from CECTs. This approach could be useful for proton dose calculation to reduce uncertainties caused by tissue motion between CECT and NCECT.
A deep learning-based framework (Co-ReTr) for auto-segmentation of non-small cell-lung cancer in computed tomography images
Kunkyab, T.
Bahrami, Z.
Zhang, H.
Liu, Z.
Hyde, D.
J Appl Clin Med Phys2024Journal Article, cited 0 times
Website
NSCLC-Radiomics
NSCLC Radiogenomics
Gross Tumor Volume (GTV)
Computed Tomography (CT)
Auto-segmentation
Model
Deep convolutional neural network (DCNN)
U-Net
encoder-decoder
Non-Small Cell Lung Cancer (NSCLC)
PURPOSE: Deep learning-based auto-segmentation algorithms can improve clinical workflow by defining accurate regions of interest while reducing manual labor. Over the past decade, convolutional neural networks (CNNs) have become prominent in medical image segmentation applications. However, CNNs have limitations in learning long-range spatial dependencies due to the locality of the convolutional layers. Transformers were introduced to address this challenge: in transformers with a self-attention mechanism, even the first layer of information processing makes connections between distant image locations. Our paper presents a novel framework that bridges these two unique techniques, CNNs and transformers, to segment the gross tumor volume (GTV) accurately and efficiently in computed tomography (CT) images of non-small cell lung cancer (NSCLC) patients. METHODS: Under this framework, multi-resolution image inputs were used with multi-depth backbones to retain the benefits of both high-resolution and low-resolution images in the deep learning architecture. Furthermore, a deformable transformer was utilized to learn long-range dependencies on the extracted features. To reduce computational complexity and to efficiently process multi-scale, multi-depth, high-resolution 3D images, this transformer pays attention to a small number of key positions, which are identified by a self-attention mechanism. We evaluated the performance of the proposed framework on an NSCLC dataset containing 563 training images and 113 test images. Our novel deep learning algorithm was benchmarked against five other similar deep learning models. RESULTS: The experimental results indicate that our proposed framework outperforms other CNN-based, transformer-based, and hybrid methods in terms of Dice score (0.92) and Hausdorff distance (1.33). Therefore, our proposed model could potentially improve the efficiency of auto-segmentation of early-stage NSCLC during the clinical workflow. This type of framework may potentially facilitate online adaptive radiotherapy, where an efficient auto-segmentation workflow is required. CONCLUSIONS: Our deep learning framework, based on CNNs and transformers, performs auto-segmentation efficiently and could potentially assist the clinical radiotherapy workflow.
Molecular profiles of tumor contrast enhancement: A radiogenomic analysis in anaplastic gliomas
Liu, Xing
Li, Yiming
Sun, Zhiyan
Li, Shaowu
Wang, Kai
Fan, Xing
Liu, Yuqing
Wang, Lei
Wang, Yinyan
Jiang, Tao
Cancer medicine2018Journal Article, cited 0 times
Website
glioma
radiogenomics
gene set enrichment analysis (GSEA)
Molecular Signatures Database v5.1 (MSigDB)
radiomic features
Multiregional radiomics profiling from multiparametric MRI: Identifying an imaging predictor of IDH1 mutation status in glioblastoma
Li, Zhi‐Cheng
Bai, Hongmin
Sun, Qiuchang
Zhao, Yuanshen
Lv, Yanchun
Zhou, Jian
Liang, Chaofeng
Chen, Yinsheng
Liang, Dong
Zheng, Hairong
Cancer medicine2018Journal Article, cited 0 times
Website
TCGA-GBM
Radiogenomics
Glioblastoma multiforme (GBM)
Magnetic Resonance Imaging (MRI)
ITK
Random forest
Isocitrate dehydrogenase (IDH) mutation
PURPOSE: Isocitrate dehydrogenase 1 (IDH1) has been proven to be a prognostic and predictive marker in glioblastoma (GBM) patients. The purpose was to preoperatively predict IDH1 mutation status in GBM using multiregional radiomics features from multiparametric magnetic resonance imaging (MRI). METHODS: In this retrospective multicenter study, 225 patients were included. A total of 1614 multiregional features were extracted from the enhancement area, non-enhancement area, necrosis, edema, tumor core, and whole tumor in multiparametric MRI. Three multiregional radiomics models were built from the tumor core, whole tumor, and all regions using an all-relevant feature selection and a random forest classification for predicting IDH1. Four single-region models and a model combining all-region features with clinical factors (age, sex, and Karnofsky performance status) were also built. All models were built from a training cohort (118 patients) and tested on an independent validation cohort (107 patients). RESULTS: Among the four single-region radiomics models, the edema model achieved the best accuracy of 96% and the best F1-score of 0.75, while the non-enhancement model achieved the best area under the receiver operating characteristic curve (AUC) of 0.88 in the validation cohort. The overall performance of the tumor-core model (accuracy 0.96, AUC 0.86, and F1-score 0.75) and the whole-tumor model (accuracy 0.96, AUC 0.88, and F1-score 0.75) was slightly better than that of the single-region models. The 8-feature all-region radiomics model achieved an improved overall performance of 96% accuracy, an AUC of 0.90, and an F1-score of 0.78. Among all models, the model combining all-region imaging features with age achieved the best performance: 97% accuracy, an AUC of 0.96, and an F1-score of 0.84. CONCLUSIONS: The radiomics model built with multiregional features from multiparametric MRI has the potential to preoperatively detect IDH1 mutation status in GBM patients. The multiregional model built with all-region features performed better than the single-region models, while combining age with all-region features achieved the best performance.
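A minimal sketch of the modeling recipe above, with importance-based selection standing in for the paper's all-relevant (Boruta-style) feature selection; the synthetic matrices mirror only the cohort sizes and feature count.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

# Placeholder data: 118 training / 107 validation patients, 1614 features,
# binary IDH1 mutation labels (all synthetic for illustration).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(118, 1614)), rng.integers(0, 2, 118)
X_test, y_test = rng.normal(size=(107, 1614)), rng.integers(0, 2, 107)

model = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=500, random_state=0)),
    RandomForestClassifier(n_estimators=500, random_state=0),
)
model.fit(X_train, y_train)
print("validation AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```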
Using computer‐extracted image phenotypes from tumors on breast magnetic resonance imaging to predict breast cancer pathologic stage
Biomechanical model for computing deformations for whole‐body image registration: A meshless approach
Li, Mao
Miller, Karol
Joldes, Grand Roman
Kikinis, Ron
Wittek, Adam
International Journal for Numerical Methods in Biomedical Engineering2016Journal Article, cited 13 times
Website
Algorithm Development
Fuzzy C-means clustering (FCM)
Segmentation
Computed Tomography (CT)
Machine Learning
Novel approaches for glioblastoma treatment: Focus on tumor heterogeneity, treatment resistance, and computational tools
Valdebenito, Silvana
D'Amico, Daniela
Eugenin, Eliseo
Cancer Reports2019Journal Article, cited 0 times
TCGA-GBM
Radiogenomics
Background: Glioblastoma (GBM) is a highly aggressive primary brain tumor. Currently, the suggested line of action is surgical resection followed by radiotherapy and treatment with the adjuvant temozolomide, a DNA alkylating agent. However, the ability of tumor cells to deeply infiltrate the surrounding tissue makes complete resection practically impossible; in consequence, the probability of tumor recurrence is high and the prognosis is poor. GBM is highly heterogeneous and adapts to treatment in most individuals, yet these mechanisms of adaptation are unknown.
Recent findings: In this review, we discuss recent discoveries in molecular and cellular heterogeneity, mechanisms of therapeutic resistance, and new technological approaches to identify new treatments for GBM. The combination of biology and computational resources allows the use of algorithms to apply artificial intelligence and machine learning approaches to identify potential therapeutic pathways and new drug candidates.
Conclusion: These new approaches will generate a better understanding of GBM pathogenesis and will result in novel treatments to reduce or block the devastating consequences of brain cancers.
Transferable HMM probability matrices in multi‐orientation geometric medical volumes segmentation
AlZu'bi, Shadi
AlQatawneh, Sokyna
ElBes, Mohammad
Alsmirat, Mohammad
Concurrency and Computation: Practice and Experience2019Journal Article, cited 0 times
Website
Head-Neck Cetuximab
Hidden Markov Model
Segmentation
machine learning
High error rates, low segmentation quality, and time complexity are the major problems in image segmentation that need to be addressed. A variety of acceleration techniques have been applied and achieve real-time results, but they remain limited in 3D. The hidden Markov model (HMM) is one of the best statistical techniques and has played a significant role recently. The problem associated with HMMs is time complexity, which has been addressed using different accelerators. In this research, we propose a methodology for transferring HMM probability matrices from one image to another, skipping the training time for the rest of the 3D volume. One trained HMM is generated and generalized to the whole volume. The concepts behind multi-orientation geometric segmentation are employed here to improve the quality of HMM segmentation: axial, sagittal, and coronal orientations have been considered individually and together to achieve accurate segmentation results in less processing time and with superior detection accuracy.
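The transfer idea can be sketched with hmmlearn: train a Gaussian HMM once on one slice, then reuse its learned probability matrices to label the remaining slices without retraining. Treating flattened voxel intensities as the observation sequence and using three tissue states are illustrative assumptions.

```python
import numpy as np
from hmmlearn import hmm

# Train one HMM on intensities from a single slice (synthetic placeholder data),
# then reuse its transition/emission matrices on the rest of the volume.
train_slice = np.random.rand(4096, 1)           # flattened voxel intensities
model = hmm.GaussianHMM(n_components=3, n_iter=50, random_state=0)
model.fit(train_slice)                          # the only training pass

for next_slice in (np.random.rand(4096, 1) for _ in range(3)):
    labels = model.predict(next_slice)          # no retraining: matrices transfer
    # labels partitions the slice into 3 tissue classes
```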
Binary differential evolution with self learning and deep neural network for breast cancer classification
Pullaiah, Nagaraja Rao Pamula
Venkatasekhar, Dorai
Venkatramana, Padarthi
Sudhakar, Balaraj
2022Journal Article, cited 0 times
BREAST-DIAGNOSIS
Early classification of breast cancer helps to treat the patient effectively and increases the survival rate. Existing methods apply feature selection and deep learning to improve the performance of breast cancer classification. In this research, a binary differential evolution with self-learning (BDE-SL) and deep neural network (DNN) method is proposed to improve the performance of breast cancer classification. The BDE-SL feature selection method selects the relevant features based on the measure of the probability difference for each feature and non-dominated sorting. The DNN has the advantage of effectively analyzing the non-linear relationship between the selected features and the output. The BI-RADS MRI breast cancer dataset was used to test the performance of the proposed method. Adaptive histogram equalization and region growing were applied to the input images for enhancement. The dual-tree complex wavelet transform, gray-level co-occurrence matrix, and local directional ternary pattern were the feature extraction methods used for the classification. The results show that BDE-SL with the DNN achieves an accuracy of 99.12%, compared with 98.33% for an existing convolutional neural network.
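For orientation, a plain binary differential evolution loop for feature selection is sketched below; the self-learning and non-dominated sorting refinements of BDE-SL are omitted, so this is only a baseline illustration of the idea, with a k-NN fitness function as an assumption.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def binary_de_select(X, y, pop=20, gens=30, F=0.8, cr=0.9, seed=0):
    """Baseline binary differential evolution for feature selection."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    P = rng.random((pop, n)) < 0.5                     # population of masks

    def fitness(mask):
        if not mask.any():
            return 0.0
        clf = KNeighborsClassifier(n_neighbors=5)      # assumed fitness model
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    fit = np.array([fitness(m) for m in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            # mutate in [0, 1] space, then threshold back to a binary mask
            mutant = P[a].astype(float) + F * (P[b].astype(float) - P[c].astype(float))
            trial = np.where(rng.random(n) < cr, mutant > 0.5, P[i])
            f = fitness(trial)
            if f > fit[i]:                              # greedy selection
                P[i], fit[i] = trial, f
    return P[np.argmax(fit)]                            # best feature mask
```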
Improving lung cancer detection using faster region‐based convolutional neural network aided with fuzzy butterfly optimization algorithm
Sinthia, P.
Malathi, M.
K, Anitha
Suresh Anand, M.
Concurrency and Computation: Practice and Experience2022Journal Article, cited 0 times
Website
LIDC-IDRI
Anti-PD-1_Lung
Convolutional Neural Network (CNN)
Lung cancer is the deadliest type of cancer and is caused by genetic variations in lung tissues. Other causes of lung cancer are alcohol, smoking, and hazardous gas exposure. The diagnosis of lung cancer is an intricate task, and early detection can help patients obtain the right treatment in time. Computer-aided diagnosis helps to predict lung cancer earlier; nonetheless, it does not always provide adequate accuracy. Feature overfitting and the high dimensionality of lung cancer data can prevent it from reaching maximum accuracy. Hence, we propose a novel faster region-based convolutional neural network (RCNN) with a fuzzy butterfly optimization algorithm (FBOA) to achieve better prediction accuracy and effectiveness. The proposed Faster RCNN localizes lung cancer swiftly and effectively, and the FBOA approach performs two-stage classification. The fuzzy rules used in the FBOA determine the severity of the lung cancer and differentiate benign from malignant stages effectively. The experimental analyses are performed in MATLAB, with image preprocessing carried out using MATLAB tools to format the images as required. The Cancer Imaging Archive (TCIA) dataset is utilized to analyze the performance of the proposed method, which is compared with various state-of-the-art works. The performance is evaluated using different metrics such as precision, recall, F-measure, and accuracy, attaining 99%, 98%, 99%, and 97%, respectively.
Special issue “The advance of solid tumor research in China”: Prognosis prediction for stage II colorectal cancer by fusing computed tomography radiomics and deep‐learning features of primary lesions and peripheral lymph nodes
Li, Menglei
Gong, Jing
Bao, Yichao
Huang, Dan
Peng, Junjie
Tong, Tong
2022Journal Article, cited 0 times
StageII-Colorectal-CT
Currently, the prognosis assessment of stage II colorectal cancer (CRC) remains a difficult clinical problem; therefore, more accurate prognostic predictors must be developed. In our study, we developed a prognostic prediction model for stage II CRC by fusing radiomics and deep-learning (DL) features of primary lesions and peripheral lymph nodes (LNs) in computed tomography (CT) scans. First, two CT radiomics models were built using primary lesion and LN image features. Subsequently, an information fusion method was used to build a fusion radiomics model by combining the tumor and LN image features. Furthermore, a transfer learning method was applied to build a deep convolutional neural network (CNN) model. Finally, the prediction scores generated by the radiomics and CNN models were fused to improve the prognosis prediction performance. The disease-free survival (DFS) and overall survival (OS) prediction areas under the curves (AUCs) generated by the fusion model improved to 0.76 ± 0.08 and 0.91 ± 0.05, respectively. These were significantly higher than the AUCs generated by the models using the individual CT radiomics and deep image features. Applying the survival analysis method, the DFS and OS fusion models yielded concordance index (C-index) values of 0.73 and 0.9, respectively. Hence, the combined model exhibited good predictive efficacy; therefore, it could be used for the accurate assessment of the prognosis of stage II CRC patients. Moreover, it could be used to screen out high-risk patients with poor prognoses, and assist in the formulation of clinical treatment decisions in a timely manner to achieve precision medicine.
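The late fusion step above can be illustrated with a simple probability average; equal weights and the synthetic scores are assumptions, as the abstract does not specify the fusion rule.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=100)                              # outcome labels (placeholder)
p_radiomics = np.clip(0.6 * y + 0.5 * rng.random(100), 0, 1)  # synthetic model scores
p_cnn = np.clip(0.5 * y + 0.6 * rng.random(100), 0, 1)

p_fused = 0.5 * p_radiomics + 0.5 * p_cnn                     # equal-weight late fusion
for name, p in [("radiomics", p_radiomics), ("CNN", p_cnn), ("fused", p_fused)]:
    print(name, "AUC:", round(roc_auc_score(y, p), 3))
```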
Improving the diagnosis of ductal carcinoma in situ with microinvasion without immunohistochemistry: An innovative method with H&E‐stained and multiphoton microscopy images
Han, Xiahui
Liu, Yulan
Zhang, Shichao
Li, Lianhuang
Zheng, Liqin
Qiu, Lida
Chen, Jianhua
Zhan, Zhenlin
Wang, Shu
Ma, Jianli
Kang, Deyong
Chen, Jianxin
2024Journal Article, cited 0 times
HE-vs-MPM
Immunohistochemistry
Breast
Ductal carcinoma in situ with microinvasion (DCISM) is a challenging subtype of breast cancer with controversial invasiveness and prognosis. Accurate diagnosis of DCISM from ductal carcinoma in situ (DCIS) is crucial for optimal treatment and improved clinical outcomes. However, there are often some suspicious small cancer nests in DCIS, and it is difficult to diagnose the presence of intact myoepithelium by conventional hematoxylin and eosin (H&E) stained images. Although a variety of biomarkers are available for immunohistochemical (IHC) staining of myoepithelial cells, no single biomarker is consistently sensitive to all tumor lesions. Here, we introduced a new diagnostic method that provides rapid and accurate diagnosis of DCISM using multiphoton microscopy (MPM). Suspicious foci in H&E-stained images were labeled as regions of interest (ROIs), and the nuclei within these ROIs were segmented using a deep learning model. MPM was used to capture images of the ROIs in H&E-stained sections. The intensity of two-photon excitation fluorescence (TPEF) in the myoepithelium was significantly different from that in tumor parenchyma and tumor stroma. Through the use of MPM, the myoepithelium and basement membrane can be easily observed via TPEF and second-harmonic generation (SHG), respectively. By fusing the nuclei in H&E-stained images with MPM images, DCISM can be differentiated from suspicious small cancer clusters in DCIS. The proposed method demonstrated good consistency with the cytokeratin 5/6 (CK5/6) myoepithelial staining method (kappa coefficient = 0.818).
Automated detection of glioblastoma tumor in brain magnetic imaging using ANFIS classifier
Thirumurugan, P
Ramkumar, D
Batri, K
Sundhara Raja, D
International Journal of Imaging Systems and Technology2016Journal Article, cited 3 times
Website
Algorithm Development
BRAIN
Classification
This article proposes a novel and efficient methodology for the detection of glioblastoma tumors in brain MRI images. The proposed method consists of the following stages: preprocessing, non-subsampled contourlet transform (NSCT) decomposition, feature extraction, and adaptive neuro-fuzzy inference system (ANFIS) classification. A Euclidean direction algorithm is used to remove impulse noise introduced during image acquisition. NSCT decomposes the denoised brain image into approximation bands and high-frequency bands. The mean, standard deviation, and energy are computed for the extracted coefficients and given as input to the classifier, which labels the brain MRI image as normal or glioblastoma based on the feature set. The proposed system achieves 99.8% sensitivity, 99.7% specificity, and 99.8% accuracy with respect to the ground truth images available in the dataset.
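A minimal sketch of the feature-extraction stage described above: mean, standard deviation, and energy computed per decomposition sub-band. The NSCT itself is not reproduced; the `subbands` list is a synthetic stand-in for its output.

```python
# Hedged sketch: per-sub-band statistics as classifier inputs.
import numpy as np

def subband_features(subbands):
    """Return [mean, std, energy] for each 2D sub-band array."""
    feats = []
    for band in subbands:
        band = np.asarray(band, dtype=float)
        feats.extend([band.mean(), band.std(), np.sum(band ** 2)])
    return np.array(feats)

# Toy stand-in for NSCT approximation and detail bands of one MRI slice.
rng = np.random.default_rng(1)
subbands = [rng.standard_normal((64, 64)) for _ in range(4)]
print(subband_features(subbands).shape)  # 4 bands x 3 features = (12,)
```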
Automated delineation of non‐small cell lung cancer: A step toward quantitative reasoning in medical decision science
Saad, Maliazurina
Lee, Ik Hyun
Choi, Tae‐Sun
International Journal of Imaging Systems and Technology2019Journal Article, cited 0 times
Website
NSCLC-Radiomics
Radiomics
Non Small Cell Lung Cancer (NSCLC)
Segmentation
U-Net
Convolutional Neural Network (CNN)
Algorithm Development
Quantitative reasoning in medical decision science relies on the delineation of pathological objects. For example, evidence-based clinical decisions regarding lung diseases require the segmentation of nodules, tumors, or cancers. Non-small cell lung cancer (NSCLC) tends to be large, irregularly shaped, and to grow against surrounding structures, which makes segmentation challenging even for expert clinicians. An automated delineation tool based on spatial analysis was developed and studied on 25 sets of computed tomography scans of NSCLC. Manual and automated delineations were compared, and the proposed method exhibited robustness with respect to tumor size (5.32–18.24 mm), shape (spherical or irregular), contouring (lobulated, spiculated, or cavitated), localization (solitary, pleural, mediastinal, endobronchial, or tagging), and laterality (left or right lobe), with accuracy between 80% and 99%. Small discrepancies observed between the manual and automated delineations may arise from variability in the practitioners' definitions of the region of interest or from imaging artifacts that reduced the tissue resolution.
Optimizing deep belief network parameters using grasshopper algorithm for liver disease classification
Renukadevi, Thangavel
Karunakaran, Saminathan
International Journal of Imaging Systems and Technology2019Journal Article, cited 0 times
TCGA-LIHC
Deep Learning
Algorithm Development
Computer Assisted Detection (CAD)
Image processing plays a vital role in areas such as healthcare, the military, science, and business owing to its wide variety of advantages and applications. Detecting liver disease in computed tomography (CT) scans is one of the difficult tasks in the medical field. Previous approaches relied on hand-crafted features and classifiers, but their classification results are not optimal. In this article, we propose a novel method utilizing a deep belief network (DBN) with the grasshopper optimization algorithm (GOA) for liver disease classification. Initially, image quality is enhanced by preprocessing techniques, and features such as texture, color, and shape are extracted. The extracted features are reduced by a dimensionality reduction method, principal component analysis (PCA). The DBN parameters are then optimized using GOA for recognizing liver disease. Experiments were performed on real-time and open-source CT image datasets, which include normal, cyst, hepatoma, cavernous hemangioma, fatty liver, metastasis, cirrhosis, and tumor samples. The proposed method yields 98% accuracy, 95.82% sensitivity, 97.52% specificity, 98.53% precision, and a 96.8% F1 score in simulation, outperforming other existing techniques.
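A minimal sketch of the PCA dimensionality-reduction step mentioned in the abstract. The feature matrix is synthetic, and keeping components for 95% explained variance is an illustrative choice rather than the paper's setting.

```python
# Hedged sketch: standardize extracted features, then reduce with PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.random((200, 128))                 # 200 CT images x 128 texture/color/shape features (toy)

X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95)               # retain enough components for 95% variance
X_reduced = pca.fit_transform(X_scaled)
print(X.shape, "->", X_reduced.shape)
```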
Volumetric medical image compression using inter‐slice correlation switched prediction approach
Sharma, Urvashi
Sood, Meenakshi
Puthooran, Emjee
International Journal of Imaging Systems and Technology2020Journal Article, cited 0 times
LungCT-Diagnosis
RIDER Breast MRI
RIDER NEURO MRI
With the advancement in medical data acquisition and telemedicine systems, image compression has become an important tool for image handling, as the tremendous amount of data generated in the medical field needs to be stored and transmitted effectively. Volumetric MRI and CT images comprise a set of image slices that are correlated to each other; predicting the pixels in a slice therefore depends not only on the spatial information of the slice but also on inter-slice information. This article proposes an inter-slice correlation switched predictor (ICSP) with block adaptive arithmetic encoding (BAAE) for 3D medical image data. The proposed ICSP exploits both inter-slice and intra-slice redundancies from the volumetric images efficiently. The novelty of the proposed technique lies in selecting the correlation coefficient threshold (Tγ) for switching of the ICSP. A resolution independent gradient edge detector (RIGED) at an optimal prediction threshold value is proposed for intra-slice prediction; being modality and resolution independent, RIGED brings improved performance for 3D prediction of volumetric images. BAAE is employed for encoding the prediction error image, resulting in higher compression efficiency. The proposed technique is also extended to higher bit depth (16-bit) volumetric medical images, presenting significant compression gain for 3D images. In terms of bits per pixel (BPP) for 8-bit depth, the proposed technique was found to be 31.21%, 27.55%, 21.89%, and 2.39% better than JPEG-2000, CALIC, JPEG-LS, M-CALIC, and 3D-CALIC respectively, and 11.86%, 8.56%, 7.97%, 6.80%, and 4.86% better than M-CALIC, 3D CALIC, JPEG-2000, JPEG-LS, and CALIC respectively for 16-bit depth image datasets. The average compression ratio for the 8-bit and 16-bit image datasets is 3.70 and 3.11, respectively.
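A minimal sketch of the correlation-based switching idea: if two adjacent slices are strongly correlated, the next slice is predicted from its neighbor (inter-slice); otherwise an intra-slice predictor is used. The threshold value and both predictors below are illustrative assumptions, not the paper's ICSP/RIGED.

```python
# Hedged sketch: switch between inter- and intra-slice prediction.
import numpy as np

def pearson(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def predict_slice(prev_slice, cur_slice, threshold=0.9):
    if pearson(prev_slice, cur_slice) >= threshold:
        return prev_slice.astype(float)          # inter-slice prediction
    # intra-slice stand-in: simple previous-pixel (horizontal) predictor
    pred = np.empty_like(cur_slice, dtype=float)
    pred[:, 0] = cur_slice[:, 0]
    pred[:, 1:] = cur_slice[:, :-1]
    return pred

rng = np.random.default_rng(3)
vol = rng.integers(0, 256, size=(3, 64, 64))     # toy 3-slice volume
residual = vol[2] - predict_slice(vol[1], vol[2])
print("residual energy:", np.sum(residual ** 2)) # smaller residual -> better compression
```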
A serialized classification method for pulmonary nodules based on lightweight cascaded convolutional neural network‐long short‐term memory
Ni, Zihao
Peng, Yanjun
International Journal of Imaging Systems and Technology2020Journal Article, cited 0 times
LIDC-IDRI
Computer Assisted Diagnosis (CAD) is an effective method to detect lung cancer from computed tomography (CT) scans, and the development of artificial neural networks has made CAD more accurate in detecting pathological changes. Due to the complexity of the lung environment, however, existing neural network training still requires large datasets and excessive time and memory. To meet this challenge, we analyze 3D volumes as serialized 2D slices and present a new lightweight convolutional neural network (CNN)-long short-term memory (LSTM) architecture for lung nodule classification. Our network contains two main components: (a) optimized lightweight CNN layers with a tiny parameter space for extracting visual features of serialized 2D images, and (b) an LSTM network for learning relevant information among the 2D images. In all experiments, we compared the training results of several models; our model achieved an accuracy of 91.78% for lung nodule classification with an AUC of 93%. We used fewer samples and less memory space to train the model and achieved faster convergence. Finally, we analyzed and discussed the feasibility of migrating this framework to mobile devices. The framework can also be applied to cope with small amounts of training data and the development of mobile health devices in the future.
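A minimal sketch of a CNN-LSTM over serialized 2D slices of the general kind described above: a small TimeDistributed CNN extracts per-slice features and an LSTM learns inter-slice context. All layer sizes and the slice count are illustrative, not the paper's architecture.

```python
# Hedged sketch: serialized-slice CNN feature extractor + LSTM classifier.
import tensorflow as tf
from tensorflow.keras import layers, models

n_slices, h, w = 16, 64, 64                      # a nodule volume as 16 serialized slices (toy)

model = models.Sequential([
    layers.Input(shape=(n_slices, h, w, 1)),
    layers.TimeDistributed(layers.Conv2D(8, 3, activation="relu", padding="same")),
    layers.TimeDistributed(layers.MaxPooling2D(2)),
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu", padding="same")),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),  # one feature vector per slice
    layers.LSTM(32),                                          # relevant info across slices
    layers.Dense(1, activation="sigmoid"),                    # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
model.summary()
```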
Deeply supervised U‐Net for mass segmentation in digital mammograms
Ravitha Rajalakshmi, N.
Vidhyapriya, R.
Elango, N.
Ramesh, Nikhil
International Journal of Imaging Systems and Technology2020Journal Article, cited 0 times
Website
CBIS-DDSM
BREAST
Computer Aided Detection (CADe)
Mass detection is a critical process in the examination of mammograms, and the shape and texture of the mass are key parameters used in the diagnosis of breast cancer. To recover the shape of the mass, semantic segmentation is more useful than mere object detection or localization. The main challenges in mass segmentation are (a) a low signal-to-noise ratio, (b) indiscernible mass boundaries, and (c) a high false-positive rate. These problems arise from the significant overlap in intensities between the normal parenchymal region and the mass region. To address these challenges, a deeply supervised U-Net model (DS U-Net) coupled with dense conditional random fields (CRFs) is proposed. Input images are preprocessed using CLAHE, and a modified encoder-decoder-based deep learning model is used for segmentation. In general, the encoder captures the textural information of various regions in an input image, whereas the decoder recovers the spatial location of the desired region of interest; encoder-decoder models nonetheless struggle to recover non-conspicuous and spiculated mass boundaries. In the proposed work, deep supervision is integrated with a popular encoder-decoder model (U-Net) to improve the attention of the network toward the boundary of the suspicious regions, and the final segmentation map is created as a linear combination of the intermediate feature maps and the output feature map. A dense CRF then fine-tunes the segmentation map to recover definite edges. DS U-Net with dense CRF is evaluated on two publicly available benchmark datasets, CBIS-DDSM and INBREAST, providing dice scores of 82.9% and 79%, respectively.
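A minimal sketch of deep supervision in Keras: an auxiliary segmentation head at an intermediate decoder depth is trained alongside the main output, with weighted losses. The tiny two-level network and the loss weights are illustrative assumptions, not the paper's DS U-Net.

```python
# Hedged sketch: main + auxiliary (deep-supervision) segmentation heads.
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(128, 128, 1))
x1 = layers.Conv2D(8, 3, activation="relu", padding="same")(inp)
x2 = layers.Conv2D(16, 3, activation="relu", padding="same")(layers.MaxPooling2D(2)(x1))
up = layers.UpSampling2D(2)(x2)

aux_out = layers.Conv2D(1, 1, activation="sigmoid", name="aux")(up)    # deep-supervision head
main = layers.Concatenate()([x1, up])
main_out = layers.Conv2D(1, 1, activation="sigmoid", name="main")(main)

model = Model(inp, [main_out, aux_out])
model.compile(
    optimizer="adam",
    loss={"main": "binary_crossentropy", "aux": "binary_crossentropy"},
    loss_weights={"main": 1.0, "aux": 0.4},  # auxiliary head nudges boundary attention
)
model.summary()
```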
Glioma grade detection using grasshopper optimization algorithm‐optimized machine learning methods: The Cancer Imaging Archive study
Hedyehzadeh, Mohammadreza
Maghooli, Keivan
MomenGharibvand, Mohammad
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Detection of a brain tumor's grade is an important task in treatment plan design that has traditionally relied on invasive methods such as pathological examination, which requires a resection procedure and can result in pain, hemorrhage, and infection. The aim of this study is to provide an automated, non-invasive method for estimating brain tumor grade from Magnetic Resonance Images (MRI). After preprocessing, the tumor region was extracted using Fuzzy C-Means (FCM) segmentation. In feature extraction, texture, Local Binary Pattern (LBP), and fractal-based features were extracted using Matlab software. Then, using the Grasshopper Optimization Algorithm (GOA), the parameters of three different classification methods, Random Forest (RF), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM), were optimized. Finally, the performance of the three classifiers before and after optimization was compared. The results showed that the random forest, with an accuracy of 99.09%, achieved better performance than the other classification methods.
Improved pulmonary lung nodules risk stratification in computed tomography images by fusing shape and texture features in a machine-learning paradigm
Sahu, Satya Prakash
Londhe, Narendra D.
Verma, Shrish
Singh, Bikesh K.
Banchhor, Sumit Kumar
International Journal of Imaging Systems and Technology2020Journal Article, cited 0 times
Website
LIDC-IDRI
Lung cancer
CAD
radiomic features
Lung cancer is one of the deadliest cancers in both men and women, so accurate and early diagnosis of pulmonary lung nodules is critical. This study presents an accurate computer-aided diagnosis (CADx) system for risk stratification of pulmonary nodules in computed tomography (CT) lung images by fusing shape- and texture-based features in a machine-learning (ML) paradigm. A database of 114 patients (28 high-risk) acquired from the Lung Image Database Consortium (LIDC) is used in this study. After nodule segmentation using K-means clustering, features based on shape and texture attributes are extracted, and seven different filter- and wrapper-based feature selection techniques are used for dominant feature selection. Lastly, nodules are classified by a support vector machine using six different kernel functions. The classification results are evaluated using 10-fold cross-validation and hold-out data division protocols, and performance is measured by accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). Using 30 dominant features from the pool of shape- and texture-based features, the proposed system achieves the highest classification accuracy and AUC of 89% and 0.92, respectively. The proposed ML-based system showed an improvement in risk stratification accuracy by fusing shape- and texture-based features.
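A minimal sketch of the fused-feature SVM pipeline described above, evaluated with 10-fold cross-validation over several kernels. The synthetic feature matrices, label split, and kernel list are illustrative assumptions.

```python
# Hedged sketch: fuse shape + texture features, classify with SVM kernels.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
shape_feats = rng.random((114, 12))               # toy shape features
texture_feats = rng.random((114, 18))             # toy texture features
X = np.hstack([shape_feats, texture_feats])       # feature-level fusion
y = np.r_[np.ones(28), np.zeros(86)].astype(int)  # 28 high-risk of 114 patients (toy labels)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"{kernel:8s} accuracy: {scores.mean():.3f}")
```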
Novel computer‐aided lung cancer detection based on convolutional neural network‐based and feature‐based classifiers using metaheuristics
Guo, Z. Q.
Xu, L. A.
Si, Y. J.
Razmjooy, N.
International Journal of Imaging Systems and Technology2021Journal Article, cited 1 times
Website
LungCT-Diagnosis
Computer Aided Diagnosis (CADx)
optimization
Classification
Algorithm Development
This study proposes a lung cancer diagnosis system based on computed tomography (CT) scan images. The proposed method uses a sequential approach built on two well-organized classifiers: a convolutional neural network (CNN) and a feature-based method. In the first step, the CNN classifier is optimized using a newly designed optimization method called the improved Harris hawk optimizer, applied to the dataset, and the classification begins. If the disease cannot be detected by this method, the results are conveyed to the second classifier, the feature-based method, which uses Haralick and LBP features on the dataset received from the CNN classifier. Finally, if the feature-based method also does not detect cancer, the case is classified as healthy; otherwise, it is classified as cancerous.
Detection of lung tumor using dual tree complex wavelet transform and co‐active adaptive neuro fuzzy inference system classification approach
Kailasam, Manoj Senthil
Thiagarajan, MeeraDevi
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
Website
LIDC-IDRI
Wavelet
Computed Tomography (CT)
Automatic segmentation
LUNG
The automatic detection and localization of tumor regions in lung images is important for providing timely medical treatment to patients. In this article, a machine learning-based lung tumor detection, classification, and segmentation algorithm is proposed. The tumor classification phase first smooths the source lung computed tomography image using an adaptive median filter, and then the dual-tree complex wavelet transform (DT-CWT) is applied to the smoothed image to decompose it into a number of sub-bands. Along with the decomposed sub-bands, DWT, pattern, and co-occurrence features are computed and classified using a co-active adaptive neuro-fuzzy inference system (CANFIS). The tumor segmentation phase applies morphological functions to the classified abnormal lung image to locate the tumor regions. Multiple evaluation parameters are used to assess the proposed method, which is compared with other state-of-the-art methods on the same lung images from an open-access dataset.
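A minimal sketch of gray-level co-occurrence feature extraction of the kind used alongside the wavelet sub-bands above, using scikit-image's GLCM utilities on a toy image; the distances, angles, and property list are illustrative, and the actual DT-CWT/CANFIS pipeline is not reproduced.

```python
# Hedged sketch: GLCM texture properties from a toy 8-bit image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
for prop in ["contrast", "homogeneity", "energy", "correlation"]:
    print(prop, graycoprops(glcm, prop).ravel())
```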
Accelerated brain tumor dynamic contrast-enhanced MRI using Adaptive Pharmaco-Kinetic Model Constrained method
Liu, Fan
Li, Dongxiao
Jin, Xinyu
Qiu, Wenyuan
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
Website
RIDER Neuro MRI
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
In brain tumor dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), spatiotemporally resolved high-quality reconstruction is required for quantitative analysis of physiological characteristics of brain tissue. By exploiting sparsity priors, compressed sensing methods can achieve high spatiotemporal DCE-MRI image reconstruction from undersampled k-space data. Recently, Pharmacokinetic (PK) models, as prior information about contrast agent (CA) concentration dynamics, have been explored for undersampled DCE-MRI reconstruction. This paper presents a novel dictionary learning-based reconstruction method with Adaptive Pharmaco-Kinetic Model Constraints (APKMC). In APKMC, prior knowledge about CA dynamics is incorporated into a novel dictionary consisting of PK model-based atoms and adaptive atoms: the PK atoms are constructed from the Patlak model and the K-SVD dimension reduction algorithm, while the adaptive atoms resolve PK model inconsistencies. To solve APKMC, an optimization algorithm based on variable splitting and alternating iterative optimization is presented. The proposed method has been validated on three brain tumor DCE-MRI datasets against two state-of-the-art methods. As demonstrated by quantitative and qualitative analysis of the results, APKMC achieved substantially better quality in the reconstruction of brain DCE-MRI images, as well as in the reconstruction of PK model parameter maps.
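A minimal sketch of building Patlak-model time-course atoms. The standard Patlak model gives C_t(t) = Ktrans · ∫₀ᵗ C_p(τ)dτ + v_p · C_p(t); a grid of (Ktrans, v_p) values generates candidate dynamics, and an SVD keeps a low-rank basis as a stand-in for the paper's K-SVD reduction. The toy arterial input function and parameter grid are illustrative assumptions.

```python
# Hedged sketch: Patlak time-course dictionary + low-rank basis.
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0, 300, 60)                            # seconds
cp = (t / 60.0) * np.exp(-t / 80.0)                    # toy arterial input function
cp_int = cumulative_trapezoid(cp, t, initial=0.0)      # running integral of Cp

atoms = []
for ktrans in np.linspace(0.01, 0.3, 15):              # toy Ktrans grid
    for vp in np.linspace(0.01, 0.1, 10):              # toy vp grid
        atoms.append(ktrans * cp_int + vp * cp)        # Patlak: Ktrans*∫Cp + vp*Cp
D = np.array(atoms).T                                  # time points x atoms

U, s, _ = np.linalg.svd(D, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99)) + 1
print("dictionary:", D.shape, "-> low-rank PK basis:", U[:, :k].shape)
```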
COLI‐Net: Deep learning‐assisted fully automated COVID‐19 lung and infection pneumonia lesion detection and segmentation from chest computed tomography images
Shiri, Isaac
Arabi, Hossein
Salimi, Yazdan
Sanaat, Amirhossein
Akhavanallaf, Azadeh
Hajianfar, Ghasem
Askari, Dariush
Moradi, Shakiba
Mansouri, Zahra
Pakbin, Masoumeh
Sandoughdaran, Saleh
Abdollahi, Hamid
Radmard, Amir Reza
Rezaei‐Kalantari, Kiara
Ghelich Oghli, Mostafa
Zaidi, Habib
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
PleThora
Computed Tomography (CT)
Deep residual neural network
TensorFlow
COVID-19
2D segmentation
3D segmentation
LUNG
Radiomics
Imaging features
We present a deep learning (DL)-based automated whole lung and COVID-19 pneumonia infectious lesions (COLI-Net) detection and segmentation from chest computed tomography (CT) images. This multicenter/multiscanner study involved 2368 (347,259 2D slices) and 190 (17,341 2D slices) volumetric CT exams along with their corresponding manual segmentation of lungs and lesions, respectively. All images were cropped, resized, and the intensity values clipped and normalized. A residual network with a non-square Dice loss function built upon TensorFlow was employed. The accuracy of lung and COVID-19 lesion segmentation was evaluated on an external reverse transcription-polymerase chain reaction positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98–0.99) and 0.91 ± 0.038 (95% CI, 0.90–0.91) for lung and lesion segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, −0.12 to 0.18) and −0.18 ± 3.4% (95% CI, −0.8 to 0.44) for the lung and lesions, respectively. The relative volume differences for lung and lesions were 0.38 ± 1.2% (95% CI, 0.16–0.59) and 0.81 ± 6.6% (95% CI, −0.39 to 2), respectively. Most radiomic features had a mean relative error of less than 5%, with the highest mean relative error reached for the lung by the range first-order feature (−6.95%) and for the lesions by the least axis length shape feature (8.68%). We developed an automated DL-guided three-dimensional whole lung and infected region segmentation in COVID-19 patients to provide a fast, consistent, robust, and human-error-immune framework for lung and pneumonia lesion detection and quantification.
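A minimal sketch of a soft Dice loss in TensorFlow of the general kind used for lung/lesion segmentation networks like the one above. The paper's exact "non-square" variant is not reproduced; the version below omits squared terms in the denominator, which is one common reading of that phrase.

```python
# Hedged sketch: soft Dice loss for binary segmentation masks.
import tensorflow as tf

def soft_dice_loss(y_true, y_pred, eps=1e-6):
    axes = list(range(1, len(y_pred.shape)))           # reduce over spatial dims
    intersection = tf.reduce_sum(y_true * y_pred, axis=axes)
    denom = tf.reduce_sum(y_true, axis=axes) + tf.reduce_sum(y_pred, axis=axes)
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - tf.reduce_mean(dice)

y_true = tf.constant([[[1.0, 0.0], [1.0, 0.0]]])       # toy 1x2x2 mask
y_pred = tf.constant([[[0.9, 0.1], [0.8, 0.2]]])
print(float(soft_dice_loss(y_true, y_pred)))
```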
Pathological categorization of lung carcinoma from multimodality images using convolutional neural networks
Jacob, Chinnu
Menon, Gopakumar Chandrasekhara
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
Lung-PET-CT-Dx
Accurate diagnosis and treatment of lung carcinoma depend on its pathological type and staging. Normally, pathological analysis is performed either by needle biopsy or surgery, so a noninvasive method to detect pathological types would be a good alternative. Hence, this work aims at categorizing different types of lung cancer from multimodality images. The proposed approach involves two stages. Initially, a Blind/Referenceless Image Spatial Quality Evaluator-based approach is adopted to extract the slices having lung abnormalities from the dataset. The slices are then transferred to a novel shallow convolutional neural network model to detect adenocarcinoma, squamous cell carcinoma, and small cell carcinoma from multimodality images. The classifier's efficacy is then investigated by comparing precision, recall, area under the curve, and accuracy with pretrained models and existing methods. The results show that the suggested system achieved a testing accuracy of 95% on positron emission tomography/computed tomography (PET/CT) and 93% on CT images of the Lung-PET-CT-Dx dataset, and 98% on the Lung3 dataset. Furthermore, kappa scores of 0.92 on PET/CT of Lung-PET-CT-Dx and 0.98 on CT of Lung3 demonstrate the effectiveness of the presented system for lung cancer classification.
Lung cancer classification using exponential mean saturation linear unit activation function in various generative adversarial network models
Thirumagal, Egambaram
Saruladha, Krishnamurthy
International Journal of Imaging Systems and Technology2022Journal Article, cited 0 times
Website
SPIE-AAPM Lung CT Challenge
Generative Adversarial Network (GAN)
Classification
Algorithm Development
The mortality rate of lung cancer is rising rapidly worldwide because the disease is often classified only at later stages; early classification helps patients receive treatment and decreases the death rate. Limited datasets and low diversity of data samples are the bottlenecks for early classification. In this paper, robust deep learning generative adversarial network (GAN) models are employed to enhance the dataset and increase classification accuracy. The activation function plays an important feature-learning role in neural networks, but existing activation functions suffer from drawbacks such as vanishing gradients, dead neurons, and output offset. This paper therefore proposes a novel activation function, the exponential mean saturation linear unit (EMSLU), which aims to speed up training, reduce network running time, and improve classification accuracy. Experiments were conducted using vanilla GAN, Wasserstein GAN, Wasserstein GAN with gradient penalty, conditional GAN, and deep convolutional GAN, each tested with the rectified linear unit, the exponential linear unit, and the proposed EMSLU. The results show that all the GANs with EMSLU yield improved precision, recall, F1-score, and accuracy.
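A minimal sketch of how a custom activation would be wired into a Keras GAN generator. The formula below is a deliberately simple ELU-like placeholder, not the paper's EMSLU (whose exact definition is not given in this abstract); it only illustrates where such an activation plugs in.

```python
# Hedged sketch: plugging a custom activation into a toy generator.
import tensorflow as tf
from tensorflow.keras import layers, models

def placeholder_activation(x):
    # Hypothetical saturating unit: linear for x >= 0, exponentially
    # saturating for x < 0. NOT the paper's EMSLU -- a stand-in only.
    return tf.where(x >= 0.0, x, tf.exp(x) - 1.0)

generator = models.Sequential([
    layers.Input(shape=(100,)),                        # latent vector
    layers.Dense(7 * 7 * 32),
    layers.Activation(placeholder_activation),         # custom activation goes here
    layers.Reshape((7, 7, 32)),
    layers.Conv2DTranspose(1, 4, strides=4, padding="same", activation="tanh"),
])
print(generator.output_shape)                          # (None, 28, 28, 1)
```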
A multilevel self‐attention based segmentation and classification technique using Directional Hexagonal Mixed Pattern algorithm for lung nodule detection in thoracic CT image
Sahaya Jeniba, J.
Milton, A.
International Journal of Imaging Systems and Technology2022Journal Article, cited 0 times
Website
LIDC-IDRI
LUNG
Classification
Pulmonary nodules are abnormal growths of tissue that originate in one or both lungs, appearing as small, round masses of soft tissue in the lung area. They are often indications of lung tumors, but they may be benign; when identified early and treated in time, the patient's life expectancy increases. The highly interconnected anatomy of the lung makes it difficult to diagnose pulmonary nodules with diverse clinical imaging practices. A network model is presented in this paper for accurate classification of pulmonary nodules from computed tomography scan images. The lung images are subjected to semantic segmentation using Attention U-Net to isolate the pulmonary nodules, and the proposed Directional Hexagonal Mixed Pattern is applied to generate a new texture pattern. The nodules are then classified by combining the proposed multilevel network model with the self-attention network. This paper also demonstrates an experimental arrangement, tenfold cross-validation without a segmentation mask, in which nodules marked as less than 3 mm by radiologists are discarded; this yields an improved result. The experimental results show that with and without segmentation masks the proposed classifier scores accuracies of 90.48% and 91.83%, respectively. In addition, it efficiently achieves an area under the curve of 98.08%.
Detection of liver abnormalities—A new paradigm in medical image processing and classification techniques
R, Karthikamani
Rajaguru, Harikumar
International Journal of Imaging Systems and Technology2022Journal Article, cited 0 times
B-mode-and-CEUS-Liver
The liver is the body's most essential organ, and all human activities are interrelated with normal liver function; any malfunction of the liver may lead to fatal diseases, so early detection of liver abnormalities is essential. Modern medical imaging techniques combined with engineering procedures are reducing human suffering caused by liver disease. This study uses multiple classifiers to detect liver cirrhosis in ultrasonic images obtained from The Cancer Imaging Archive database. A gray-level co-occurrence matrix (GLCM) and statistical approaches are used to extract features from normal and liver-cirrhosis images. The extracted GLCM features are normalized and classified using nonlinear regression, linear regression, logistic regression, Bayesian Linear Discriminant Classifiers (BLDC), Gaussian Mixture Model (GMM), Firefly, Cuckoo search, Particle Swarm Optimization (PSO), Elephant search, Dragon Fly, Firefly GMM, Cuckoo search GMM, PSO GMM, Elephant search GMM, and Dragon Fly GMM classifiers. Benchmark metrics, such as sensitivity, specificity, accuracy, precision, negative predictive value, false-negative rate, balanced accuracy, F1 score, Matthews correlation coefficient, F measure, error rate, Jaccard metric, and classifier success index, are assessed to identify the best-performing classifier. The GMM classifier outperformed the other classifiers for statistical features, achieving the highest accuracy (98.39%) and lowest error rate (1.61%), while the Dragon Fly GMM classifier achieved 90.69% for the GLCM features used to classify liver cirrhosis.
SABOS-Net: Self-supervised attention based network for automatic organ segmentation of head and neck CT images
Francis, S.
Pooloth, G.
Singam, S. B. S.
Puzhakkal, N.
Narayanan, P. P.
Balakrishnan, J. P.
International Journal of Imaging Systems and Technology2022Journal Article, cited 0 times
OPC-Radiomics
auto-contouring
Deep Learning
head and neck ct
organs at risk(oar)
radiation therapy
residual u-net
self supervision
auto-segmentation
framework
Algorithm Development
Atlas
Radiotherapy
The segmentation of Organs At Risk (OAR) in Computed Tomography (CT) images is an essential part of the planning phase of radiation treatment, needed to avoid the adverse effects of cancer radiotherapy. Accurate segmentation is a tedious task in the head and neck region due to the large number of small and sensitive organs and the low contrast of CT images. Deep learning-based automatic contouring algorithms can ease this task even when the organs have irregular shapes and size variations. This paper proposes a fully automatic, self-supervised 3D Residual UNet architecture with CBAM (Convolutional Block Attention Module) for organ segmentation in head and neck CT images. The Model Genesis structure and image context restoration techniques are used for self-supervision, which helps the network learn image features from unlabeled data and thereby addresses the scarcity of annotated medical data. A new loss function integrating Focal loss, Tversky loss, and Cross-entropy loss is applied for training. The proposed model outperforms state-of-the-art methods in terms of Dice similarity coefficient and achieved a 4% increase in the Dice score of the chiasm, a small organ present in only a few CT slices. The proposed model exhibited better accuracy for 5 out of 7 OARs than recent state-of-the-art models and could simultaneously segment all seven organs in an average time of 0.02 s. The source code of this work is made available at .
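A minimal sketch of a combined focal + Tversky + cross-entropy loss of the general form named above. The Tversky weights alpha/beta, the focal gamma, and the equal mixing of the three terms are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: combined focal + Tversky + cross-entropy loss.
import tensorflow as tf

def tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, eps=1e-6):
    tp = tf.reduce_sum(y_true * y_pred)
    fn = tf.reduce_sum(y_true * (1.0 - y_pred))
    fp = tf.reduce_sum((1.0 - y_true) * y_pred)
    return 1.0 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def focal_loss(y_true, y_pred, gamma=2.0, eps=1e-6):
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    ce = -(y_true * tf.math.log(y_pred) + (1.0 - y_true) * tf.math.log(1.0 - y_pred))
    weight = tf.pow(1.0 - tf.where(y_true > 0.5, y_pred, 1.0 - y_pred), gamma)
    return tf.reduce_mean(weight * ce)

def combined_loss(y_true, y_pred):
    bce = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_pred))
    return focal_loss(y_true, y_pred) + tversky_loss(y_true, y_pred) + bce

y_true = tf.constant([[1.0, 0.0, 1.0, 0.0]])
y_pred = tf.constant([[0.8, 0.2, 0.6, 0.1]])
print(float(combined_loss(y_true, y_pred)))
```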
FFCAEs: An efficient feature fusion framework using cascaded autoencoders for the identification of gliomas
Gudigar, Anjan
Raghavendra, U.
Rao, Tejaswi N.
Samanth, Jyothi
Rajinikanth, Venkatesan
Satapathy, Suresh Chandra
Ciaccio, Edward J.
Wai Yee, Chan
Acharya, U. Rajendra
International Journal of Imaging Systems and Technology2022Journal Article, cited 0 times
Website
TCGA-LGG
TCGA-GBM
BRAIN
Computer Aided Diagnosis (CADx)
Computer Aided Detection (CADe)
Intracranial tumors arise from constituents of the brain and its meninges. Glioblastoma (GBM) is the most common adult primary intracranial neoplasm and is categorized as high-grade astrocytoma according to the World Health Organization (WHO); the survival rate 5 and 10 years after diagnosis is under 10%, contributing to its grave prognosis. Early detection of GBM enables early intervention, prognostication, and treatment monitoring. Computer-aided diagnostics (CAD) is a computerized process that helps to differentiate between GBM and low-grade gliomas (LGG) through analysis of magnetic resonance (MR) images of the brain. This study proposes a framework consisting of a feature fusion algorithm with cascaded autoencoders (CAEs), referred to as FFCAEs. We utilized two CAEs and extracted the relevant features from multiple CAEs. Inspired by existing work on fusion algorithms, the obtained features are fused by a novel fusion algorithm, and the resultant fused features are classified with a Softmax classifier, arriving at an average classification accuracy of 96.7%, which is 2.45% more than the previously best-performing model. The method is shown to be efficacious and can thus be useful as a utility program for doctors.
Histopathological carcinoma classification using parallel, cross‐concatenated and grouped convolutions deep neural network
Kadirappa, Ravindranath
Subbian, Deivalakshmi
Ramasamy, Pandeeswari
Ko, Seok‐Bum
International Journal of Imaging Systems and Technology2023Journal Article, cited 0 times
TCGA-LIHC
Cancer is more alarming in modern times because it is often identified at later stages, and among cancers, lung, liver, and colon cancers are leading causes of untimely death. Manual cancer identification from histopathological images is time-consuming and labour-intensive, so computer-aided decision support systems are desired. A deep learning model is proposed in this paper to accurately identify cancer. Convolutional neural networks have shown great ability to identify significant patterns for cancer classification. The proposed Parallel, Cross-Concatenated and Grouped Convolutions Deep Neural Network (PC²GCDN²) has been developed to obtain accurate patterns for classification. To prove the robustness of the model, it is evaluated on the KMC and TCGA-LIHC liver datasets and on the LC25000 dataset for lung and colon cancer classification. The proposed PC²GCDN² model outperforms state-of-the-art methods, providing 5.5% improved accuracy compared to the LiverNet proposed by Aatresh et al. on the KMC dataset and a 2% improvement over existing models on the LC25000 dataset. Performance evaluation metrics such as Sensitivity, Specificity, Recall, F1-Score, and Intersection-Over-Union are used to evaluate the performance. To the best of our knowledge, PC²GCDN² can be considered a gold standard for multiple histopathology image classification. It classifies the KMC and TCGA-LIHC liver datasets with 96.4% and 98.6% accuracy, respectively, the best results obtained to date. Performance on the LC25000 dataset is superior, with 99.5% and 100% classification accuracy on the lung and colon subsets, while utilizing fewer than 0.5 million parameters.
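A minimal sketch of the building blocks named in the abstract: parallel convolutional branches, cross-concatenation, and a grouped convolution in Keras. The tiny layer sizes and group count are illustrative, not the paper's architecture.

```python
# Hedged sketch: parallel branches + concatenation + grouped convolution.
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(96, 96, 3))
a = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)   # parallel branch 1
b = layers.Conv2D(16, 5, padding="same", activation="relu")(inp)   # parallel branch 2
x = layers.Concatenate()([a, b])                                   # cross-concatenation
x = layers.Conv2D(32, 3, padding="same", activation="relu", groups=4)(x)  # grouped conv
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(2, activation="softmax")(x)                     # e.g., benign vs. malignant

Model(inp, out).summary()
```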
A transformer-based deep neural network for detection and classification of lung cancer via PET/CT images
Barbouchi, Khalil
Hamdi, Dhekra El
Elouedi, Ines
Aïcha, Takwa Ben
Echi, Afef Kacem
Slim, Ihsen
International Journal of Imaging Systems and Technology2023Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
Algorithm Development
LUNG
Deep Learning
Radiomics
Classification
Lung cancer is the leading cause of cancer death for men and women worldwide and the second most frequent cancer, so early detection of the disease increases the cure rate. This paper presents a new approach to evaluate the ability of positron emission tomography/computed tomography (PET/CT) images to classify and detect lung cancer using deep learning techniques. Our approach aims to fully automate the anatomical localization of lung cancer from PET/CT images. It also seeks to classify the tumor, which is essential because classification makes it possible to determine the disease's speed of progression and the best treatments to adopt. In this work, we built an approach based on transformers, implementing the DETR model as a tool to detect the tumor and assist physicians in staging patients with lung cancer; the TNM staging system and histologic subtype classification were both taken as the standard. Experimental results demonstrate that our approach achieves sound results on tumor localization, T staging, and histology classification. The proposed approach detects tumors with an intersection over union (IOU) of 0.8 when tested on the Lung-PET-CT-Dx dataset, and it yielded better accuracy than state-of-the-art T-staging and histologic classification methods, classifying T-stage and histologic subtypes with accuracies of 0.97 and 0.94, respectively.
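A minimal sketch of the intersection-over-union (IoU) metric reported above for detected tumor boxes; the boxes below are illustrative (x1, y1, x2, y2) corners.

```python
# Hedged sketch: IoU between two axis-aligned bounding boxes.
def box_iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)      # intersection corners
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(box_iou((10, 10, 50, 50), (20, 20, 60, 60)))   # ~0.39
```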
Brain tumor image pixel segmentation and detection using an aggregation of GAN models with vision transformer
Datta, Priyanka
Rohilla, Rajesh
International Journal of Imaging Systems and Technology2023Journal Article, cited 0 times
Website
BraTS 2020
Magnetic Resonance Imaging (MRI)
Imaging features
Image Enhancement/methods
Classification
Algorithm Development
Generative Adversarial Network (GAN)
Brain tumor detection and segmentation from magnetic resonance imaging (MRI) are difficult and crucial tasks required by a number of applications in medical analysis. Given that each type of brain imaging provides distinctive information about each tumor component, we first propose a normalization preprocessing method along with pixel segmentation to create a flexible and successful brain tumor segmentation system. Generative adversarial networks (GANs) make the creation of synthetic images advantageous in many fields; however, combining different GANs may capture distributed features at the cost of a very complex and confusing model, while a standalone GAN may retrieve only localized features in the latent representation of an image. To achieve global and local feature extraction in a single model, we use a vision transformer (ViT) along with a standalone GAN, which further improves the similarity of the images and can increase the performance of the model for tumor detection. By effectively overcoming the constraints of data scarcity, high computational time, and low discrimination capability, the suggested model achieves better accuracy and lower computational time and also captures the information variance across various representations of the original images. The proposed model was evaluated on the BraTS 2020 dataset and the Masoud2021 dataset, a combination of the three datasets SARTAJ, Figshare, and BR35H. The results demonstrate that the suggested model is capable of producing fine-quality images, with accuracy and sensitivity scores of 0.9765 and 0.977 on the BraTS 2020 dataset and 0.9899 and 0.9683 on the Masoud2021 dataset.
An intelligent system of pelvic lymph node detection
Wang, Han
Huang, Hao
Wang, Jingling
Wei, Mingtian
Yi, Zhang
Wang, Ziqiang
Zhang, Haixian
2021Journal Article, cited 0 times
CT Lymph Nodes
Computed tomography (CT) scanning is a fast and painless procedure that can capture clear imaging information beneath the abdomen and is widely used to help diagnose and monitor disease progress. The pelvic lymph node is a key indicator of colorectal cancer metastasis. In the traditional process, an experienced radiologist must read all the CT scanning images slice by slice to track the lymph nodes for future diagnosis. However, this process is time‐consuming, exhausting, and subjective due to the complex pelvic structure, numerous blood vessels, and small lymph nodes. Therefore, automated methods are desirable to make this process easier. Currently, the available open‐source CTLNDataset only contains large lymph nodes. Consequently, a new data set called PLNDataset, which is dedicated to lymph nodes within the pelvis, is constructed to solve this issue. A two‐level annotation calibration method is proposed to guarantee the quality and correctness of pelvic lymph node annotation. Moreover, a novel system composed of a keyframe localization network and a lymph node detection network is proposed to detect pelvic lymph nodes in CT scanning images. The proposed method makes full use of two kinds of prior knowledge: spatial prior knowledge for keyframe localization and anchor prior knowledge for lymph node detection. A series of experiments are carried out to evaluate the proposed method, including ablation experiments, comparing other state‐of‐the‐art methods, and visualization of results. The experimental results demonstrate that our proposed method outperforms other methods on PLNDataset and CTLNDataset. This system is expected to be applied in future clinical practice.
Optimizing interstitial photodynamic therapy with custom cylindrical diffusers
Yassine, Abdul‐Amir
Lilge, Lothar
Betz, Vaughn
Journal of biophotonics2018Journal Article, cited 0 times
Website
Brain
Model
Algorithm Development
Multiparametric MRI of prostate cancer: An update on state‐of‐the‐art techniques and their performance in detecting and localizing prostate cancer
Hegde, John V
Mulkern, Robert V
Panych, Lawrence P
Fennessy, Fiona M
Fedorov, Andriy
Maier, Stephan E
Tempany, Clare
Journal of Magnetic Resonance Imaging2013Journal Article, cited 164 times
Website
Breast cancer molecular subtype classifier that incorporates MRI features
Sutton, Elizabeth J
Dashevsky, Brittany Z
Oh, Jung Hun
Veeraraghavan, Harini
Apte, Aditya P
Thakur, Sunitha B
Morris, Elizabeth A
Deasy, Joseph O
Journal of Magnetic Resonance Imaging2016Journal Article, cited 34 times
Website
Radiomics
Imaging features
BREAST
Machine learning
Radiogenomics
Purpose: To use features extracted from magnetic resonance (MR) images and a machine-learning method to assist in differentiating breast cancer molecular subtypes. Materials and Methods: This retrospective Health Insurance Portability and Accountability Act (HIPAA)-compliant study received Institutional Review Board (IRB) approval. We identified 178 breast cancer patients between 2006-2011 with: 1) ERPR+ (n=95, 53.4%), ERPR-/HER2+ (n=35, 19.6%), or triple negative (TN, n=48, 27.0%) invasive ductal carcinoma (IDC), and 2) preoperative breast MRI at 1.5T or 3.0T. Shape, texture, and histogram-based features were extracted from each tumor contoured on pre- and three postcontrast MR images using in-house software. Clinical and pathologic features were also collected. Machine-learning-based (support vector machine) models were used to identify significant imaging features and to build models that predict IDC subtype. Leave-one-out cross-validation (LOOCV) was used to avoid model overfitting. Statistical significance was determined using the Kruskal-Wallis test. Results: Each support vector machine fit in the LOOCV process generated a model with varying features. Eleven of the top 20 ranked features were significantly different between IDC subtypes with P < 0.05. When the top nine pathologic and imaging features were incorporated, the predictive model distinguished IDC subtypes with an overall accuracy on LOOCV of 83.4%; the model's accuracy for each subtype was 89.2% (ERPR+), 63.6% (ERPR-/HER2+), and 82.5% (TN). When only the top nine imaging features were incorporated, the overall accuracy on LOOCV was 71.2%, with per-subtype accuracies of 69.9% (ERPR+), 62.9% (ERPR-/HER2+), and 81.0% (TN). Conclusion: We developed a machine-learning-based predictive model using features extracted from MRI that can distinguish IDC subtypes with significant predictive power.
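A minimal sketch of leave-one-out cross-validation with an SVM on MRI-derived features, mirroring the study design above. The synthetic feature matrix and the three-class labels (drawn with the paper's subtype proportions) are illustrative stand-ins.

```python
# Hedged sketch: LOOCV with an RBF SVM on toy imaging features.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(6)
X = rng.random((178, 20))                          # shape/texture/histogram features (toy)
y = rng.choice([0, 1, 2], size=178, p=[0.534, 0.196, 0.270])  # ERPR+/HER2+/TN proportions

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
y_pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())
print("LOOCV accuracy:", accuracy_score(y, y_pred))
```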
Intratumor partitioning and texture analysis of dynamic contrast‐enhanced (DCE)‐MRI identifies relevant tumor subregions to predict pathological response of breast cancer to neoadjuvant chemotherapy
Wu, Jia
Gong, Guanghua
Cui, Yi
Li, Ruijiang
Journal of Magnetic Resonance Imaging2016Journal Article, cited 43 times
Website
Algorithm Development
BREAST
PURPOSE: To predict pathological response of breast cancer to neoadjuvant chemotherapy (NAC) based on quantitative, multiregion analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). MATERIALS AND METHODS: In this Institutional Review Board-approved study, 35 patients diagnosed with stage II/III breast cancer were retrospectively investigated using 3T DCE-MR images acquired before and after the first cycle of NAC. First, principal component analysis (PCA) was used to reduce the dimensionality of the DCE-MRI data with high temporal resolution. We then partitioned the whole tumor into multiple subregions using k-means clustering based on the PCA-defined eigenmaps. Within each tumor subregion, we extracted four quantitative Haralick texture features based on the gray-level co-occurrence matrix (GLCM). The change in texture features in each tumor subregion between pre- and during-NAC was used to predict pathological complete response after NAC. RESULTS: Three tumor subregions were identified through clustering, each with distinct enhancement characteristics. In univariate analysis, all imaging predictors except one extracted from the tumor subregion associated with fast washout were statistically significant (P < 0.05) after correcting for multiple testing, with areas under the receiver operating characteristic (ROC) curve (AUCs) between 0.75 and 0.80. In multivariate analysis, the proposed imaging predictors achieved an AUC of 0.79 (P = 0.002) in leave-one-out cross-validation. This improved upon conventional imaging predictors such as tumor volume (AUC = 0.53) and texture features based on whole-tumor analysis (AUC = 0.65). CONCLUSION: The heterogeneity of the tumor subregion associated with fast washout on DCE-MRI predicted pathological response to NAC in breast cancer. J. Magn. Reson. Imaging 2016;44:1107-1115.
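A minimal sketch of the intratumor partitioning step described above: PCA compresses each voxel's DCE time course, and k-means then groups voxels into subregions with distinct enhancement patterns. The data and the component/cluster counts are synthetic stand-ins.

```python
# Hedged sketch: PCA eigenmaps + k-means subregion partitioning.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n_voxels, n_timepoints = 5000, 40
timecourses = rng.random((n_voxels, n_timepoints))     # toy DCE signal per tumor voxel

eigenmaps = PCA(n_components=3).fit_transform(timecourses)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(eigenmaps)
for k in range(3):
    print(f"subregion {k}: {np.sum(labels == k)} voxels")
```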
Identifying relations between imaging phenotypes and molecular subtypes of breast cancer: Model discovery and external validation
Wu, Jia
Sun, Xiaoli
Wang, Jeff
Cui, Yi
Kato, Fumi
Shirato, Hiroki
Ikeda, Debra M
Li, Ruijiang
Journal of Magnetic Resonance Imaging2017Journal Article, cited 17 times
Website
TCGA-BRCA
DCE-MRI
Radiomics
Radiogenomics
BREAST
Classification
Purpose: To determine whether dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) characteristics of the breast tumor and background parenchyma can distinguish molecular subtypes (i.e., luminal A/B or basal) of breast cancer. Materials and Methods: In all, 84 patients from one institution and 126 patients from The Cancer Genome Atlas (TCGA) were used for discovery and external validation, respectively. Thirty-five quantitative image features were extracted from DCE-MRI (1.5 or 3T), including morphology, texture, and volumetric features, which capture both tumor and background parenchymal enhancement (BPE) characteristics. Multiple testing was corrected using the Benjamini-Hochberg method to control the false-discovery rate (FDR). Sparse logistic regression models were built using the discovery cohort to distinguish each of the three studied molecular subtypes versus the rest, and the models were evaluated in the validation cohort. Results: On univariate analysis in the discovery and validation cohorts, two features characterizing the tumor and two characterizing BPE were statistically significant in separating luminal A versus non-luminal A cancers; two tumor features were statistically significant for separating luminal B; one tumor feature and one BPE feature reached statistical significance for distinguishing basal (Wilcoxon P < 0.05, FDR < 0.25). In the discovery and validation cohorts, multivariate logistic regression models achieved an area under the receiver operating characteristic curve (AUC) of 0.71 and 0.73 for luminal A cancer, 0.67 and 0.69 for luminal B cancer, and 0.66 and 0.79 for basal cancer, respectively. Conclusion: DCE-MRI characteristics of breast cancer and BPE may potentially be used to distinguish among molecular subtypes of breast cancer. Level of Evidence: 3 Technical Efficacy: Stage 3 J. Magn. Reson. Imaging 2017;46:1017-1027.
Radiomics Strategy for Molecular Subtype Stratification of Lower‐Grade Glioma: Detecting IDH and TP53 Mutations Based on Multimodal MRI
Zhang, Xi
Tian, Qiang
Wang, Liang
Liu, Yang
Li, Baojuan
Liang, Zhengrong
Gao, Peng
Zheng, Kaizhong
Zhao, Bofeng
Lu, Hongbing
Journal of Magnetic Resonance Imaging2018Journal Article, cited 5 times
Website
LGG
Radiomics
Computer‐aided diagnosis of prostate cancer using a deep convolutional neural network from multiparametric MRI
Song, Yang
Zhang, Yu‐Dong
Yan, Xu
Liu, Hui
Zhou, Minxiong
Hu, Bingwen
Yang, Guang
Journal of Magnetic Resonance Imaging2018Journal Article, cited 0 times
PROSTATEx
BACKGROUND: Deep learning is the most promising methodology for automatic computer-aided diagnosis of prostate cancer (PCa) with multiparametric MRI (mp-MRI).
PURPOSE: To develop an automatic approach based on deep convolutional neural network (DCNN) to classify PCa and noncancerous tissues (NC) with mp-MRI.
STUDY TYPE: Retrospective.
SUBJECTS: In all, 195 patients with localized PCa were collected from a PROSTATEx database. In total, 159/17/19 patients with 444/48/55 observations (215/23/23 PCas and 229/25/32 NCs) were randomly selected for training/validation/testing, respectively.
SEQUENCE: T2 -weighted, diffusion-weighted, and apparent diffusion coefficient images.
ASSESSMENT: A radiologist manually labeled the regions of interest of PCas and NCs and estimated the Prostate Imaging Reporting and Data System (PI-RADS) scores for each region. Inspired by VGG-Net, we designed a patch-based DCNN model to distinguish between PCa and NCs based on a combination of mp-MRI data. Additionally, an enhanced prediction method was used to improve the prediction accuracy. The performance of DCNN prediction was tested using a receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Moreover, the predicted result was compared with the PI-RADS score to evaluate its clinical value using decision curve analysis.
STATISTICAL TEST: Two-sided Wilcoxon signed-rank test with statistical significance set at 0.05.
RESULTS: The DCNN produced excellent diagnostic performance in distinguishing between PCa and NC for testing datasets with an AUC of 0.944 (95% confidence interval: 0.876-0.994), sensitivity of 87.0%, specificity of 90.6%, PPV of 87.0%, and NPV of 90.6%. The decision curve analysis revealed that the joint model of PI-RADS and DCNN provided additional net benefits compared with the DCNN model and the PI-RADS scheme.
DATA CONCLUSION: The proposed DCNN-based model with enhanced prediction yielded high performance in statistical analysis, suggesting that DCNN could be used in computer-aided diagnosis (CAD) for PCa classification.
LEVEL OF EVIDENCE: 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018;48:1570-1577.
Analysis of postprocessing steps for residue function dependent dynamic susceptibility contrast (DSC)‐MRI biomarkers and their clinical impact on glioma grading for both 1.5 and 3T
Bell, Laura C
Stokes, Ashley M
Quarles, C Chad
Journal of Magnetic Resonance Imaging2020Journal Article, cited 0 times
Website
QIN-BRAIN-DSC-MRI
Brain-Tumor-Progression
Classification
Deep Learning Whole-Gland and Zonal Prostate Segmentation on a Public MRI Dataset
Cuocolo, Renato
Comelli, Albert
Stefano, Alessandro
Benfante, Viviana
Dahiya, Navdeep
Stanzione, Arnaldo
Castaldo, Anna
De Lucia, Davide Raffaele
Yezzi, Anthony
Imbriaco, Massimo
Journal of Magnetic Resonance Imaging2021Journal Article, cited 0 times
Website
ProstateX
Deep learning
segmentation
Background: Prostate volume, as determined by magnetic resonance imaging (MRI), is a useful biomarker for distinguishing between benign and malignant pathology and can be used either alone or combined with other parameters such as prostate-specific antigen. Purpose: This study compared different deep learning methods for whole-gland and zonal prostate segmentation. Study Type: Retrospective. Population: A total of 204 patients (train/test = 99/105) from the PROSTATEx public dataset. Field Strength/Sequence: 3 T, TSE T2-weighted. Assessment: Four operators performed manual segmentation of the whole gland, the central zone + anterior stroma + transition zone (TZ), and the peripheral zone (PZ). U-Net, efficient neural network (ENet), and efficient residual factorized ConvNet (ERFNet) were trained and tuned on the training data through 5-fold cross-validation to segment the whole gland and TZ separately, while automated PZ masks were obtained by subtracting the TZ from the whole gland. Statistical Tests: Networks were evaluated on the test set using various accuracy metrics, including the Dice similarity coefficient (DSC). Model DSC was compared in both the training and test sets using the analysis of variance (ANOVA) test and post hoc tests. Parameter number, disk size, and training and inference times determined network computational complexity and were also used to assess model performance differences. P < 0.05 was selected to indicate statistical significance. Results: The best DSC (P < 0.05) in the test set was achieved by ENet: 91% ± 4% for the whole gland, 87% ± 5% for the TZ, and 71% ± 8% for the PZ. U-Net and ERFNet obtained, respectively, 88% ± 6% and 87% ± 6% for the whole gland, 86% ± 7% and 84% ± 7% for the TZ, and 70% ± 8% and 65% ± 8% for the PZ. Training and inference times were lowest for ENet. Data Conclusion: Deep learning networks can accurately segment the prostate using T2-weighted images. Evidence Level: 4. Technical Efficacy: Stage 2.
Prospective Evaluation of Repeatability and Robustness of Radiomic Descriptors in Healthy Brain Tissue Regions In Vivo Across Systematic Variations in T2‐Weighted Magnetic Resonance Imaging Acquisition Parameters
Eck, Brendan
Chirra, Prathyush V.
Muchhala, Avani
Hall, Sophia
Bera, Kaustav
Tiwari, Pallavi
Madabhushi, Anant
Seiberlich, Nicole
Viswanath, Satish E.
Journal of Magnetic Resonance Imaging2021Journal Article, cited 0 times
TCGA-GBM
BACKGROUND: Radiomic descriptors from magnetic resonance imaging (MRI) are promising for disease diagnosis and characterization but may be sensitive to differences in imaging parameters.
OBJECTIVE: To evaluate the repeatability and robustness of radiomic descriptors within healthy brain tissue regions on prospectively acquired MRI scans; in a test-retest setting, under controlled systematic variations of MRI acquisition parameters, and after postprocessing.
STUDY TYPE: Prospective.
SUBJECTS: Fifteen healthy participants.
FIELD STRENGTH/SEQUENCE: A 3.0 T, axial T2 -weighted 2D turbo spin-echo pulse sequence, 181 scans acquired (2 test/retest reference scans and 12 with systematic variations in contrast weighting, resolution, and acceleration per participant; removing scans with artifacts).
ASSESSMENT: One hundred and forty-six radiomic descriptors were extracted from a contiguous 2D region of white matter in each scan, before and after postprocessing.
STATISTICAL TESTS: Repeatability was assessed in a test/retest setting and between manual and automated annotations for the reference scan. Robustness was evaluated between the reference scan and each group of variant scans (contrast weighting, resolution, and acceleration). Both repeatability and robustness were quantified as the proportion of radiomic descriptors that fell into distinct ranges of the concordance correlation coefficient (CCC): excellent (CCC > 0.85), good (0.7 ≤ CCC ≤ 0.85), moderate (0.5 ≤ CCC < 0.7), and poor (CCC < 0.5); for unprocessed and postprocessed scans separately.
RESULTS: Good to excellent repeatability was observed for 52% of radiomic descriptors between test/retest scans and 48% of descriptors between automated vs. manual annotations, respectively. Contrast weighting (TR/TE) changes were associated with the largest proportion of highly robust radiomic descriptors (21%, after processing). Image resolution changes resulted in the largest proportion of poorly robust radiomic descriptors (97%, before postprocessing). Postprocessing of images with only resolution/acceleration differences resulted in 73% of radiomic descriptors showing poor robustness.
DATA CONCLUSIONS: Many radiomic descriptors appear to be nonrobust across variations in MR contrast weighting, resolution, and acceleration, as well as in test-retest settings, depending on feature formulation and postprocessing.
EVIDENCE LEVEL: 2 TECHNICAL EFFICACY: Stage 2.
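The repeatability and robustness bins above are defined on Lin's concordance correlation coefficient (CCC). A minimal sketch of the coefficient and the binning, assuming paired 1D arrays of one feature measured under two conditions:

    import numpy as np

    def ccc(x, y):
        # Lin's concordance correlation coefficient for paired measurements
        x, y = np.asarray(x, float), np.asarray(y, float)
        cov = ((x - x.mean()) * (y - y.mean())).mean()
        return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    def ccc_bin(c):
        # Ranges used in this study
        if c > 0.85:
            return "excellent"
        if c >= 0.7:
            return "good"
        if c >= 0.5:
            return "moderate"
        return "poor"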
Four‐Dimensional Machine Learning Radiomics for the Pretreatment Assessment of Breast Cancer Pathologic Complete Response to Neoadjuvant Chemotherapy in Dynamic Contrast‐Enhanced MRI
Caballo, Marco
Sanderink, Wendelien BG
Han, Luyi
Gao, Yuan
Athanasiou, Alexandra
Mann, Ritse M
Journal of Magnetic Resonance Imaging2022Journal Article, cited 1 times
Website
Duke-Breast-Cancer-MRI
Machine Learning
Radiomic feature
breast cancer
Noninvasive Evaluation of the Notch Signaling Pathway via Radiomic Signatures Based on Multiparametric MRI in Association With Biological Functions of Patients With Glioma: A Multi-institutional Study
Shen, N.
Lv, W.
Li, S.
Liu, D.
Xie, Y.
Zhang, J.
Zhang, J.
Jiang, J.
Jiang, R.
Zhu, W.
J Magn Reson Imaging2022Journal Article, cited 0 times
Website
CPTAC-GBM
TCGA-GBM
Notch signaling pathway
glioma
multi-parametric magnetic resonance imaging (multi-parametric MRI)
Radiogenomics
Radiomics
BACKGROUND: Noninvasive determination of Notch signaling is important for prognostic evaluation and therapeutic intervention in glioma. PURPOSE: To predict Notch signaling using multiparametric (mp) MRI radiomics and correlate with biological characteristics in gliomas. STUDY TYPE: Retrospective. POPULATION: A total of 63 patients for model construction and 47 patients from two public databases for external testing. FIELD STRENGTH/SEQUENCE: A 1.5 T and 3.0 T, T1-weighted imaging (T1WI), T2WI, T2 fluid-attenuated inversion recovery (FLAIR), contrast-enhanced (CE)-T1WI. ASSESSMENT: Radiomic features were extracted from CE-T1WI, T1WI, T2WI, and T2 FLAIR, and imaging signatures were selected using the least absolute shrinkage and selection operator (LASSO). Diagnostic performance was compared between single-modality and combined mpMRI radiomics models. A radiomic-clinical nomogram was constructed incorporating the mpMRI radiomic signature and the Karnofsky Performance Score (KPS). The performance was validated in the test set. The radiomic signatures were correlated with immunohistochemistry (IHC) analysis of downstream Notch pathway components. STATISTICAL TESTS: Receiver operating characteristic curve, decision curve analysis (DCA), Pearson correlation, and Hosmer-Lemeshow test. A P value < 0.05 was considered statistically significant. RESULTS: The radiomic signature derived from the combination of all sequences showed the highest area under the curve (AUC) in both the training and external test sets (AUCs of 0.857 and 0.823). The radiomics nomogram that incorporated the mpMRI radiomic signature and KPS achieved AUCs of 0.891 and 0.859 in the training and test sets. The calibration curves showed good agreement between prediction and observation in both sets (P = 0.279 and 0.170, respectively). DCA confirmed the clinical usefulness of the nomogram. IHC identified Notch pathway inactivation, and the expression levels of Hes1 correlated with higher combined radiomic scores (r = -0.711) in Notch1-mutant tumors. DATA CONCLUSION: The mpMRI-based radiomics nomogram may reflect the intratumor heterogeneity associated with downstream biofunction that predicts Notch signaling in a noninvasive manner. EVIDENCE LEVEL: 3 TECHNICAL EFFICACY: Stage 2.
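The signature-building step here is LASSO selection over standardized radiomic features. A minimal scikit-learn sketch of that step, using an L1-penalized logistic regression as the LASSO-style selector (the feature matrix and Notch labels are synthetic placeholders, not the study's data):

    import numpy as np
    from sklearn.linear_model import LogisticRegressionCV
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(63, 400))     # placeholder radiomic feature matrix
    y = rng.integers(0, 2, size=63)    # placeholder Notch pathway status

    Xz = StandardScaler().fit_transform(X)
    lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5).fit(Xz, y)
    selected = np.flatnonzero(lasso.coef_.ravel())  # indices of features with nonzero weight
    print(f"{selected.size} features retained in the signature")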
Glioma Tumor Grading Using Radiomics on Conventional MRI: A Comparative Study of WHO 2021 and WHO 2016 Classification of Central Nervous Tumors
Moodi, F.
Khodadadi Shoushtari, F.
Ghadimi, D. J.
Valizadeh, G.
Khormali, E.
Salari, H. M.
Ohadi, M. A. D.
Nilipour, Y.
Jahanbakhshi, A.
Rad, H. S.
J Magn Reson Imaging2023Journal Article, cited 0 times
TCGA-LGG
TCGA-GBM
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Classification
WHO CNS tumor classification
artificial intelligence
glioma
machine learning
neoplasm grading
Radiomics
BACKGROUND: Glioma grading was transformed in the World Health Organization (WHO) 2021 CNS tumor classification, which integrates molecular markers. However, the impact of this change on radiomics-based machine learning (ML) classifiers remains unexplored. PURPOSE: To assess the performance of ML in classifying glioma tumor grades based on various WHO criteria. STUDY TYPE: Retrospective. SUBJECTS: Gliomas from 237 patients, regraded by a neuropathologist from WHO 2007 into WHO 2016 and WHO 2021 criteria. FIELD STRENGTH/SEQUENCE: Multicentric 0.5 to 3 Tesla; pre- and post-contrast T1-weighted, T2-weighted, and fluid-attenuated inversion recovery. ASSESSMENT: Radiomic features were selected using random forest-recursive feature elimination. The synthetic minority over-sampling technique (SMOTE) was implemented for data augmentation. Stratified 10-fold cross-validation, with and without SMOTE, was used to evaluate 11 classifiers for 3-grade (2, 3, and 4; WHO 2016 and 2021) and 2-grade (low and high grade; WHO 2007 and 2021) classification. Additionally, we developed the models on data randomly divided into training and test sets (mixed-data analysis) or on data divided by center (independent-data analysis). STATISTICAL TESTS: We assessed ML classifiers using sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve (AUC). Top performances were compared with a t-test and categorical data with the chi-square test, using a significance level of P < 0.05. RESULTS: In the mixed-data analysis, the Stacking Classifier without SMOTE achieved the highest accuracy (0.86) and AUC (0.92) in 3-grade WHO 2021 grouping. The WHO 2021 results were significantly better than those for WHO 2016 (P < 0.0001). In the 2-grade analysis, ML achieved 1.00 in all metrics. In the independent-data analysis, ML classifiers showed strong discrimination between grades 2 and 4, despite lower performance metrics than in the mixed analysis. DATA CONCLUSION: ML algorithms performed better in glioma tumor grading based on WHO 2021 criteria. Nonetheless, the clinical use of ML classifiers needs further investigation. LEVEL OF EVIDENCE: 3 TECHNICAL EFFICACY: Stage 2.
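One subtlety in the protocol above is that SMOTE must be fitted only on the training folds of the stratified cross-validation, never on the held-out fold, to avoid leaking synthetic samples into validation. A minimal sketch with imbalanced-learn (features and grades are synthetic placeholders, and the random forest is a stand-in for the 11 evaluated classifiers):

    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from imblearn.over_sampling import SMOTE

    rng = np.random.default_rng(0)
    X = rng.normal(size=(237, 100))    # placeholder radiomic feature matrix
    y = rng.integers(2, 5, size=237)   # placeholder WHO 2021 grades 2-4

    accs = []
    for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
        X_tr, y_tr = SMOTE(random_state=0).fit_resample(X[tr], y[tr])  # training fold only
        clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
        accs.append(accuracy_score(y[te], clf.predict(X[te])))
    print(np.mean(accs))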
Prediction of COVID-19 patients in danger of death using radiomic features of portable chest radiographs
Nakashima, M.
Uchiyama, Y.
Minami, H.
Kasai, S.
J Med Radiat Sci2022Journal Article, cited 0 times
Website
COVID-19-AR
Artificial intelligence
Covid-19
portable chest X-ray
prognosis prediction
Radiomics
INTRODUCTION: Computer-aided diagnostic systems have been developed for the detection and differential diagnosis of coronavirus disease 2019 (COVID-19) pneumonia, using imaging studies to characterise a patient's current condition. In this radiomic study, we propose a system for identifying COVID-19 patients in danger of death using portable chest X-ray images. METHODS: In this retrospective study, we selected 100 patients, including ten who died and 90 who recovered, from the COVID-19-AR database of the Cancer Imaging Archive. Because bone components overlapping the abnormal patterns of COVID-19 make portable chest X-ray images difficult to analyse, we employed a bone-suppression technique during pre-processing. A total of 620 radiomic features were measured in the left and right lung regions, and four radiomic features were selected using the least absolute shrinkage and selection operator technique. We distinguished death from recovery cases using linear discriminant analysis (LDA) and a support vector machine (SVM). The leave-one-out method was used to train and test the classifiers, and the area under the receiver-operating characteristic curve (AUC) was used to evaluate discriminative performance. RESULTS: The AUCs for LDA and SVM were 0.756 and 0.959, respectively. The discriminative performance improved when the bone-suppression technique was employed. When the SVM was used, the sensitivity for predicting disease severity was 90.9% (9/10), and the specificity was 95.6% (86/90). CONCLUSIONS: We believe that the radiomic features of portable chest X-ray images can identify COVID-19 patients in danger of death.
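With only 100 cases, the leave-one-out protocol above trains one model per held-out patient and pools the decision scores into a single AUC. A minimal scikit-learn sketch (the four selected radiomic features are placeholder values):

    import numpy as np
    from sklearn.model_selection import LeaveOneOut
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))          # placeholder: four selected radiomic features
    y = np.r_[np.ones(10), np.zeros(90)]   # 10 deaths, 90 recoveries

    scores = np.empty(len(y))
    for tr, te in LeaveOneOut().split(X):
        scaler = StandardScaler().fit(X[tr])             # fit scaling on training cases only
        clf = SVC(kernel="rbf").fit(scaler.transform(X[tr]), y[tr])
        scores[te] = clf.decision_function(scaler.transform(X[te]))
    print("AUC:", roc_auc_score(y, scores))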
Influence of Contrast Administration on Computed Tomography–Based Analysis of Visceral Adipose and Skeletal Muscle Tissue in Clear Cell Renal Cell Carcinoma
Paris, Michael T
Furberg, Helena F
Petruzella, Stacey
Akin, Oguz
Hötker, Andreas M
Mourtzakis, Marina
Journal of Parenteral and Enteral Nutrition2018Journal Article, cited 0 times
Website
TCGA_RCC
body composition
sarcopenia
visceral adipose
muscle quality
Introducing the Medical Physics Dataset Article
Williamson, Jeffrey F
Das, Shiva K
Goodsitt, Mitchell S
Deasy, Joseph O
Medical Physics2017Journal Article, cited 7 times
Website
Multi‐site quality and variability analysis of 3D FDG PET segmentations based on phantom and clinical image data
Beichel, Reinhard R
Smith, Brian J
Bauer, Christian
Ulrich, Ethan J
Ahmadvand, Payam
Budzevich, Mikalai M
Gillies, Robert J
Goldgof, Dmitry
Grkovski, Milan
Hamarneh, Ghassan
Medical Physics2017Journal Article, cited 7 times
Website
QIN PET Phantom
PURPOSE: Radiomics utilizes a large number of image-derived features for quantifying tumor characteristics that can in turn be correlated with response and prognosis. Unfortunately, extraction and analysis of such image-based features is subject to measurement variability and bias. The challenge for radiomics is particularly acute in Positron Emission Tomography (PET), where limited resolution, a high noise component related to the limited stochastic nature of the raw data, and the wide variety of reconstruction options confound quantitative feature metrics. Extracted feature quality is also affected by the tumor segmentation methods used to define the regions over which features are calculated, making it challenging to produce consistent radiomics analysis results across multiple institutions that use different segmentation algorithms in their PET image analysis. Understanding each element contributing to these inconsistencies in quantitative image feature and metric generation is paramount for the ultimate utilization of these methods in multi-institutional trials and clinical oncology decision making. METHODS: To assess segmentation quality and consistency at the multi-institutional level, we conducted a study of seven institutional members of the National Cancer Institute Quantitative Imaging Network. For the study, members were asked to segment a common set of phantom PET scans acquired over a range of imaging conditions, as well as a second set of head and neck cancer (HNC) PET scans. Segmentations were generated at each institution using their preferred approach. In addition, participants were asked to repeat the segmentations after a time interval between the initial and repeat segmentation. This procedure resulted in a total of 806 phantom insert and 641 lesion segmentations. Subsequently, the volume was computed from each segmentation and compared to the corresponding reference volume by means of statistical analysis. RESULTS: On the two test sets (phantom and HNC PET scans), the performance of the seven segmentation approaches was as follows. On the phantom test set, the mean relative volume errors ranged from 29.9% to 87.8% of the ground truth reference volumes, and the repeat difference for each institution ranged from -36.4% to 39.9%. On the HNC test set, the mean relative volume error ranged from -50.5% to 701.5%, and the repeat difference for each institution ranged from -37.7% to 31.5%. In addition, performance measures per phantom insert/lesion size category are given in the paper. On phantom data, regression analysis yielded coefficient of variation (CV) components of 42.5% for scanners, 26.8% for institutional approaches, 21.1% for repeated segmentations, 14.3% for relative contrasts, 5.3% for count statistics (acquisition times), and 0.0% for repeated scans. The analysis showed that the CV components for approaches and repeated segmentations were significantly larger on the HNC test set, with increases of 112.7% and 102.4%, respectively. CONCLUSION: These results underline the importance of PET scanner reconstruction harmonization and imaging protocol standardization for the quantification of lesion volumes. In addition, to enable a distributed multi-site analysis of FDG PET images, harmonization of analysis approaches and operator training, in combination with highly automated segmentation methods, seems advisable. Future work will focus on quantifying the impact of segmentation variation on radiomics system performance.
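The accuracy and repeatability measures reported above reduce to simple ratios of segmented to reference volumes. A sketch of one plausible formulation (the paper's exact definitions may differ slightly; volumes and names are illustrative):

    import numpy as np

    def relative_volume_error(measured_ml, reference_ml):
        # Signed error as a percentage of the reference volume
        return 100.0 * (measured_ml - reference_ml) / reference_ml

    def repeat_difference(first_ml, second_ml):
        # Percent difference between initial and repeat segmentations of the same object
        return 100.0 * (second_ml - first_ml) / first_ml

    def coefficient_of_variation(values_ml):
        # CV of a set of volume measurements, as a percentage
        values = np.asarray(values_ml, float)
        return 100.0 * values.std(ddof=1) / values.mean()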
A supervoxel‐based segmentation method for prostate MR images
Tian, Zhiqiang
Liu, Lizhi
Zhang, Zhenfeng
Xue, Jianru
Fei, Baowei
Medical Physics2017Journal Article, cited 57 times
Website
ISBI-MR-Prostate-2013
Magnetic Resonance Imaging
Prostate
PURPOSE: Segmentation of the prostate on MR images has many applications in prostate cancer management. In this work, we propose a supervoxel-based segmentation method for prostate MR images.
METHODS: A supervoxel is a set of pixels that have similar intensities, locations, and textures in a 3D image volume. The prostate segmentation problem is formulated as assigning a binary label, prostate or background, to each supervoxel. A supervoxel-based energy function with data and smoothness terms is used to model the labeling. The data term estimates the likelihood of a supervoxel belonging to the prostate using a supervoxel-based shape feature. The geometric relationship between two neighboring supervoxels is used to build the smoothness term. A 3D graph cut is used to minimize the energy function to obtain the supervoxel labels, which yield the prostate segmentation. A 3D active contour model is then used to obtain a smooth surface, using the output of the graph cut as initialization. The performance of the proposed algorithm was evaluated on 30 in-house MR volumes and the PROMISE12 dataset.
RESULTS: The mean Dice similarity coefficients are 87.2 ± 2.3% and 88.2 ± 2.8% for our 30 in-house MR volumes and the PROMISE12 dataset, respectively. The proposed segmentation method yields a satisfactory result for prostate MR images.
CONCLUSION: The proposed supervoxel-based method can accurately segment prostate MR images and can have a variety of applications in prostate cancer diagnosis and therapy.
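The supervoxel oversegmentation itself (on top of which the paper's graph-cut energy is built) can be prototyped with SLIC from scikit-image, which accepts 3D volumes. A sketch under that assumption, not the authors' exact oversegmentation:

    import numpy as np
    from skimage.segmentation import slic

    volume = np.random.rand(32, 128, 128)  # placeholder prostate MR volume, intensities in [0, 1]
    # channel_axis=None marks the input as a 3D grayscale volume (scikit-image >= 0.19)
    labels = slic(volume, n_segments=2000, compactness=0.1, channel_axis=None)
    print(labels.max() + 1, "supervoxels")

Each resulting label region would then become one node in the graph on which the data and smoothness terms are defined.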
A longitudinal four‐dimensional computed tomography and cone beam computed tomography dataset for image‐guided radiation therapy research in lung cancer
Hugo, Geoffrey D
Weiss, Elisabeth
Sleeman, William C
Balik, Salim
Keall, Paul J
Lu, Jun
Williamson, Jeffrey F
Medical Physics2017Journal Article, cited 8 times
Website
4D-Lung
Computed Tomography (CT)
PURPOSE: To describe in detail a dataset consisting of serial four-dimensional computed tomography (4DCT) and 4D cone beam CT (4DCBCT) images acquired during chemoradiotherapy of 20 locally advanced, non-small cell lung cancer patients, collected at our institution and shared publicly with the research community. ACQUISITION AND VALIDATION METHODS: As part of an NCI-sponsored research study, 82 4DCT and 507 4DCBCT images were acquired in a population of 20 locally advanced non-small cell lung cancer patients undergoing radiation therapy. All subjects underwent concurrent radiochemotherapy to a total dose of 59.4-70.2 Gy, using daily 1.8 or 2 Gy fractions. Audio-visual biofeedback was used to minimize breathing irregularity during all fractions, including all 4DCT and 4DCBCT acquisitions, in all subjects. Target, organs at risk, and implanted fiducial markers were delineated by a physician in the 4DCT images. Image coordinate system origins between 4DCT and 4DCBCT were manipulated such that the images can be used to simulate initial patient setup in the treatment position. 4DCT images were acquired on a 16-slice helical CT simulator with 10 breathing phases and 3 mm slice thickness during simulation. In 13 of the 20 subjects, 4DCTs were also acquired weekly during therapy on the same scanner. Every day, 4DCBCT images were acquired on a commercial onboard CBCT scanner. An optically tracked external surrogate was synchronized with CBCT acquisition so that each CBCT projection was time-stamped with the surrogate respiratory signal through in-house software and hardware tools. Approximately 2500 projections were acquired over a period of 8-10 minutes in half-fan mode with the half bow-tie filter. Using the external surrogate, the CBCT projections were sorted into 10 breathing phases and reconstructed with an in-house FDK reconstruction algorithm. Errors in respiration sorting, reconstruction, and acquisition were carefully identified and corrected. DATA FORMAT AND USAGE NOTES: 4DCT and 4DCBCT images are available in DICOM format, with structures in DICOM-RT RTSTRUCT format. All data are stored in The Cancer Imaging Archive (TCIA, http://www.cancerimagingarchive.net/) as the collection 4D-Lung and are publicly available. DISCUSSION: Due to the high temporal sampling frequency, redundant (4DCT and 4DCBCT) data at similar timepoints, oversampled 4DCBCT, and fiducial markers, this dataset can support studies in image-guided and image-guided adaptive radiotherapy, assessment of 4D voxel trajectory variability, and the development and validation of new tools for image registration and motion management.
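Each CBCT projection in this collection is time-stamped with the external surrogate respiratory signal and sorted into 10 phase bins. One common phase-binning scheme, linear phase between consecutive end-inhale peaks, can be sketched as follows (the signal and timestamps are synthetic; the in-house sorting may differ):

    import numpy as np
    from scipy.signal import find_peaks

    t = np.linspace(0, 600, 2500)              # projection timestamps over ~10 min (s)
    resp = np.sin(2 * np.pi * t / 4.0)         # synthetic surrogate signal, ~4 s breathing period
    peaks, _ = find_peaks(resp)                # end-inhale peaks

    phase = np.full(t.size, -1)                # -1 marks projections outside full cycles
    for p0, p1 in zip(peaks[:-1], peaks[1:]):  # linear phase between consecutive peaks
        idx = np.arange(p0, p1)
        phase[idx] = 10 * (idx - p0) // (p1 - p0)   # bins 0..9

Projections left at phase -1 (before the first or after the last full cycle) would be discarded before reconstruction.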
Automatic intensity windowing of mammographic images based on a perceptual metric
Albiol, Alberto
Corbi, Alberto
Albiol, Francisco
Medical Physics2017Journal Article, cited 0 times
Website
Algorithm Development
Computer Aided Diagnosis (CADx)
BI-RADS
mutual information
Mammography
Gabor filter
BREAST
Radiomic feature
PURPOSE: To automatically initialize the window level (WL) and width (WW) applied to mammographic images. The proposed intensity windowing (IW) method is based on the maximization of the mutual information (MI) between a perceptual decomposition of the original 12-bit sources and their screen-displayed 8-bit version. Besides zoom, color inversion, and panning operations, IW is the most commonly performed task in daily screening and has a direct impact on diagnosis and the time involved in the process. METHODS: The authors present a human visual system and perception-based algorithm named GRAIL (Gabor-relying adjustment of image levels). GRAIL initially measures a mammogram's quality based on the MI between the original instance and its Gabor-filtered derivations. From this point on, the algorithm performs an automatic intensity windowing process that outputs the WL/WW that best displays each mammogram for screening. GRAIL starts with the default, high-contrast, wide-dynamic-range 12-bit data and then maximizes the graphical information presented on ordinary 8-bit displays. Tests were carried out with several mammogram databases, comprising correlations and an ANOVA analysis against the manual IW levels established by a group of radiologists. A complete MATLAB implementation of GRAIL is available at https://github.com/TheAnswerIsFortyTwo/GRAIL. RESULTS: Auto-leveled images show superior quality, both perceptually and objectively, compared to their full intensity range and compared to the application of other common methods like global contrast stretching (GCS). The correlations between the human-determined intensity values and the ones estimated by our method surpass those of GCS. The ANOVA analysis with the upper intensity thresholds reveals a similar outcome. GRAIL also proved to perform especially well with images that contain microcalcifications and/or foreign X-ray-opaque elements and with healthy BI-RADS A-type mammograms. It can also speed up the initial screening time by a mean of 4.5 s per image. CONCLUSIONS: A novel methodology is introduced that enables a quality-driven balancing of the WL/WW of mammographic images. This correction seeks the representation that maximizes the amount of graphical information contained in each image. The presented technique can contribute to the diagnosis and the overall efficiency of the breast screening session by suggesting, at the outset, an optimal and customized windowing setting for each mammogram.
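At its core GRAIL searches over WL/WW candidates and scores each 8-bit rendering by mutual information against a perceptual (Gabor-filtered) decomposition. A stripped-down sketch of the windowing step and a histogram-based MI score, omitting the Gabor stage (image values and the search grid are placeholders):

    import numpy as np

    def apply_window(img12, wl, ww):
        # Map a 12-bit image to 8 bits given window level (wl) and width (ww)
        lo = wl - ww / 2.0
        out = np.clip((img12 - lo) / float(ww), 0, 1)
        return (255 * out).astype(np.uint8)

    def mutual_information(a, b, bins=64):
        # MI estimated from the joint histogram of two images
        h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = h / h.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0
        return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

    img = np.random.randint(0, 4096, size=(512, 512))  # placeholder 12-bit mammogram
    candidates = [(wl, ww) for wl in range(512, 3584, 256) for ww in range(512, 3584, 256)]
    best_wl, best_ww = max(candidates,
                           key=lambda c: mutual_information(img, apply_window(img, *c)))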
Quantifying the reproducibility of lung ventilation images between 4-Dimensional Cone Beam CT and 4-Dimensional CT
Woodruff, Henry C.
Shieh, Chun-Chien
Hegi-Johnson, Fiona
Keall, Paul J.
Kipritidis, John
Medical Physics2017Journal Article, cited 2 times
Website
4D-Lung
lung radiation therapy
functional imaging
ventilation
4D cone beam CT
deformable image registration
Fully automatic and accurate detection of lung nodules in CT images using a hybrid feature set
Shaukat, Furqan
Raja, Gulistan
Gooya, Ali
Frangi, Alejandro F
Medical Physics2017Journal Article, cited 2 times
Website
LIDC-IDRI
Segmentation
optimal thresholding
Support Vector Machine (SVM)
K-Nearest-Neighbour (KNN)
Linear Discriminant Analysis (LDA)
A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction
Kang, E.
Min, J.
Ye, J. C.
Med Phys2017Journal Article, cited 568 times
Website
LDCT-and-Projection-data
*Radiation Dosage
Signal-To-Noise Ratio
Computed Tomography (CT)
Wavelet Analysis
Convolutional Neural Network (CNN)
Deep Learning
PURPOSE: Due to the potential risk of inducing cancer, radiation exposure from X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, high-quality reconstruction from low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based de-noising approaches are, however, computationally very expensive, and image-domain de-noising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we developed a new low-dose X-ray CT algorithm based on a deep-learning approach. METHOD: We propose an algorithm in which a deep convolutional neural network (CNN) is applied to the wavelet transform coefficients of low-dose CT images. More specifically, by using a directional wavelet transform to extract the directional components of the artifacts and exploit the intra- and inter-band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. RESULTS: Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images acquired at a reduced X-ray dose. In addition, we show that the wavelet-domain CNN is efficient at removing noise from low-dose CT compared to existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place in the 2016 "Low-Dose CT Grand Challenge." CONCLUSIONS: To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction that has been rigorously evaluated and proven to be effective. In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from large data sets. Therefore, we believe that the proposed algorithm opens a new direction in the area of low-dose CT research.
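The network here operates on wavelet coefficients rather than pixels. The directional (contourlet-style) transform used in the paper is not in standard libraries, but the wavelet-domain round trip can be sketched with an ordinary 2D DWT from PyWavelets, with simple soft-thresholding standing in for the trained residual CNN:

    import numpy as np
    import pywt

    def denoise_in_wavelet_domain(img, level=3, wavelet="db4"):
        # Decompose, process detail bands, reconstruct
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        approx, details = coeffs[0], coeffs[1:]
        # Placeholder for the trained CNN: soft-threshold each detail band instead
        details = [tuple(pywt.threshold(d, value=0.05, mode="soft") for d in lvl)
                   for lvl in details]
        return pywt.waverec2([approx] + details, wavelet)

    img = np.random.rand(256, 256)   # placeholder low-dose CT slice
    out = denoise_in_wavelet_domain(img)

In the paper, the thresholding step is replaced by a residual CNN that maps noisy directional-wavelet bands to their artifact components.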
Segmentation and tracking of lung nodules via graph‐cuts incorporating shape prior and motion from 4D CT
Cha, Jungwon
Farhangi, Mohammad Mehdi
Dunlap, Neal
Amini, Amir A
Medical Physics2018Journal Article, cited 5 times
Website
LIDC-IDRI
Automated image quality assessment for chest CT scans
Reeves, A. P.
Xie, Y.
Liu, S.
Med Phys2018Journal Article, cited 0 times
Website
FDA-Phantom
Lung Image Database Consortium (LIDC)
lung cancer
segmentation
CT image calibration assessment
CT image noise assessment
automatic image quality measurement
PURPOSE: Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. METHODS: For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. RESULTS: The method was evaluated on both phantom and real low-dose chest CT scans, and the results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles showed relevant differences between scanners and protocols. CONCLUSIONS: Automated image quality assessment may be useful for quality control in lung cancer screening and may enable performance improvements to automated computer analysis methods.
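The QA readout described above amounts to first-order statistics inside the three segmented homogeneous regions: the region mean checks intensity calibration against a nominal HU value, and the standard deviation characterizes noise. A minimal sketch (the volume, mask, and nominal target are placeholders; air should reconstruct near -1000 HU):

    import numpy as np

    def region_stats(ct_hu, mask):
        # First-order statistics inside one automatically segmented homogeneous region
        vals = ct_hu[mask]
        return float(vals.mean()), float(vals.std(ddof=1))

    ct = np.random.normal(-1000.0, 10.0, size=(64, 512, 512))  # placeholder HU volume
    air_mask = np.zeros(ct.shape, dtype=bool)
    air_mask[:, :16, :16] = True                               # stand-in external-air segmentation

    mean_hu, noise_sd = region_stats(ct, air_mask)
    calibration_error = mean_hu - (-1000.0)
    print(mean_hu, noise_sd, calibration_error)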
Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing
AlBadawy, E. A.
Saha, A.
Mazurowski, M. A.
Med Phys2018Journal Article, cited 5 times
Website
TCGA-GBM
MICCAI BraTS challenge
Convolutional neural network (CNN)
FMRIB Software Library (FSL)
Dice similarity coefficient
Average Hausdorff Distance
BRAIN
Segmentation
Glioblastoma Multiforme (GBM)
magnetic resonance imaging (MRI)
BACKGROUND AND PURPOSE: Convolutional neural networks (CNNs) are commonly used for segmentation of brain tumors. In this work, we assess the effect of cross-institutional training on the performance of CNNs. METHODS: We selected 44 glioblastoma (GBM) patients from two institutions in The Cancer Imaging Archive dataset. The images were manually annotated by outlining each tumor component to form the ground truth. To automatically segment the tumors in each patient, we trained three CNNs: (a) one on data from the same institution as the test data, (b) one on data from the other institution, and (c) one on data from both institutions. The performance of the trained models was evaluated using Dice similarity coefficients as well as the Average Hausdorff Distance between the ground truth and automatic segmentations. A 10-fold cross-validation scheme was used to compare the performance of the different approaches. RESULTS: Performance of the model significantly decreased (P < 0.0001) when it was trained on data from a different institution (Dice coefficients: 0.68 +/- 0.19 and 0.59 +/- 0.19) compared to training with data from the same institution (Dice coefficients: 0.72 +/- 0.17 and 0.76 +/- 0.12). This trend persisted for segmentation of the entire tumor as well as its individual components. CONCLUSIONS: There is a very strong effect of training data selection on the performance of CNNs in a multi-institutional setting. Determining the reasons behind this effect requires additional comprehensive investigation.
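The Average Hausdorff Distance used above averages nearest-neighbour surface distances in both directions (one common definition; implementations vary in details such as voxel spacing handling). A sketch over boundary point sets with a KD-tree:

    import numpy as np
    from scipy.spatial import cKDTree

    def average_hausdorff(points_a, points_b):
        # Symmetric average Hausdorff distance between two point sets of shape (N, 3)/(M, 3)
        d_ab = cKDTree(points_b).query(points_a)[0]   # nearest-neighbour distances A -> B
        d_ba = cKDTree(points_a).query(points_b)[0]   # and B -> A
        return 0.5 * (d_ab.mean() + d_ba.mean())

    a = np.random.rand(500, 3)   # placeholder ground-truth surface points
    b = np.random.rand(400, 3)   # placeholder predicted surface points
    print(average_hausdorff(a, b))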
Synthetic Head and Neck and Phantom Images for Determining Deformable Image Registration Accuracy in Magnetic Resonance Imaging
Ger, Rachel B
Yang, Jinzhong
Ding, Yao
Jacobsen, Megan C
Cardenas, Carlos E
Fuller, Clifton D
Howell, Rebecca M
Li, Heng
Stafford, R Jason
Zhou, Shouhao
Medical Physics2018Journal Article, cited 0 times
Website
MRI-DIR
head and neck cancer
mri
T1-weighted
T2-weighted
porcine phantom
4D robust optimization including uncertainties in time structures can reduce the interplay effect in proton pencil beam scanning radiation therapy
Engwall, Erik
Fredriksson, Albin
Glimelius, Lars
Medical Physics2018Journal Article, cited 2 times
Website
non-small-cell lung cancer
4D-Lung
Opportunities and challenges to utilization of quantitative imaging: Report of the AAPM practical big data workshop
Mackie, Thomas R
Jackson, Edward F
Giger, Maryellen
Medical Physics2018Journal Article, cited 1 times
Website
LIDC
Quantitative Imaging Network (QIN)
reference image database to evaluate response (RIDER)
Autosegmentation for thoracic radiation treatment planning: A grand challenge at AAPM 2017
Yang, J.
Veeraraghavan, H.
Armato, S. G., 3rd
Farahani, K.
Kirby, J. S.
Kalpathy-Kramer, J.
van Elmpt, W.
Dekker, A.
Han, X.
Feng, X.
Aljabar, P.
Oliveira, B.
van der Heyden, B.
Zamdborg, L.
Lam, D.
Gooding, M.
Sharp, G. C.
Med Phys2018Journal Article, cited 172 times
Website
LCTSC
Lung CT Segmentation Challenge 2017
Algorithm Development
Humans
Organs at Risk/radiation effects
Radiotherapy Planning
Computer-Assisted/*methods
Radiotherapy
Image-Guided/adverse effects/*methods
Thorax/*diagnostic imaging/*radiation effects
Tomography
X-Ray Computed
automatic segmentation
grand challenge
lung cancer
radiation therapy
PURPOSE: This report presents the methods and results of the Thoracic Auto-Segmentation Challenge organized at the 2017 Annual Meeting of the American Association of Physicists in Medicine. The purpose of the challenge was to provide a benchmark dataset and platform for evaluating the performance of autosegmentation methods for organs at risk (OARs) in thoracic CT images. METHODS: Sixty thoracic CT scans provided by three different institutions were separated into 36 training, 12 offline testing, and 12 online testing scans. Eleven participants completed the offline challenge, and seven completed the online challenge. The OARs were the left and right lungs, heart, esophagus, and spinal cord. Clinical contours used for treatment planning were quality checked and edited to adhere to the RTOG 1106 contouring guidelines. Algorithms were evaluated using the Dice coefficient, Hausdorff distance, and mean surface distance. A consolidated score was computed by normalizing the metrics against interrater variability and averaging over all patients and structures. RESULTS: The interrater study revealed the highest variability in Dice for the esophagus and spinal cord, and in surface distances for the lungs and heart. Five of the seven algorithms that participated in the online challenge employed deep-learning methods. Although the top three participants, all using deep learning, produced the best segmentations for all structures, there was no significant difference in performance among them. The fourth-place participant used a multi-atlas-based approach. The highest Dice scores were produced for the lungs, with averages ranging from 0.95 to 0.98, while the lowest Dice scores were produced for the esophagus, with a range of 0.55-0.72. CONCLUSION: The results of the challenge showed that the lungs and heart can be segmented fairly accurately by various algorithms, while deep-learning methods performed better on the esophagus. Our dataset, together with the manual contours for all training cases, continues to be publicly available as an ongoing benchmarking resource.
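The consolidated score normalizes each metric against interrater variability before averaging over patients and structures. The report's exact weighting is not reproduced here, so treat this as one plausible z-score-style reading with illustrative numbers:

    import numpy as np

    def consolidated_score(metric, interrater_mean, interrater_sd, higher_is_better=True):
        # Normalize one metric value against interrater statistics (z-score-like)
        z = (metric - interrater_mean) / interrater_sd
        return z if higher_is_better else -z

    # Average normalized scores over all patients and structures:
    scores = np.array([
        consolidated_score(0.93, 0.90, 0.03),                          # e.g., Dice, one case
        consolidated_score(4.1, 5.0, 1.2, higher_is_better=False),     # e.g., Hausdorff (mm)
    ])
    print(scores.mean())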
Identification of optimal mother wavelets in survival prediction of lung cancer patients using wavelet decomposition‐based radiomic features
Soufi, Mazen
Arimura, Hidetaka
Nagami, Noriyuki
Medical Physics2018Journal Article, cited 1 times
Website
Radiomics
LIDC-IDRI
QIN LUNG CT
RIDER Lung CT
High quality imaging from sparsely sampled computed tomography data with deep learning and wavelet transform in various domains
Lee, Donghoong
Choi, Sunghoon
Kim, Hee‐Joung
Medical Physics2018Journal Article, cited 0 times
Website
LungCT-Diagnosis
wavelet
deep learning
Radiomics
More accurate and efficient segmentation of organs‐at‐risk in radiotherapy with Convolutional Neural Networks Cascades
Men, Kuo
Geng, Huaizhi
Cheng, Chingyun
Zhong, Haoyu
Huang, Mi
Fan, Yong
Plastaras, John P
Lin, Alexander
Xiao, Ying
Medical Physics2018Journal Article, cited 0 times
Website
HNSCC
segmentation
CNN
AnatomyNet: Deep learning for fast and fully automated whole‐volume segmentation of head and neck anatomy
Zhu, Wentao
Huang, Yufang
Zeng, Liang
Chen, Xuming
Liu, Yong
Qian, Zhen
Du, Nan
Fan, Wei
Xie, Xiaohui
Medical Physics2018Journal Article, cited 4 times
Website
Segmentation
Deep learning
Head and Neck Neoplasms
Radiation Therapy
U-Net
Head-Neck Cetuximab
MICCAI 2015
Multicenter CT phantoms public dataset for radiomics reproducibility tests
Kalendralis, Petros
Traverso, Alberto
Shi, Zhenwei
Zhovannik, Ivan
Monshouwer, Rene
Starmans, Martijn P A
Klein, Stefan
Pfaehler, Elisabeth
Boellaard, Ronald
Dekker, Andre
Wee, Leonard
Med Phys2019Journal Article, cited 0 times
Credence-Cartridge-Radiomics-Phantom
Algorithm Development
Reproducibility
PURPOSE: The aim of this paper is to describe a public, open-access, computed tomography (CT) phantom image set acquired at three centers and collected especially for radiomics reproducibility research. The dataset is useful for testing radiomic feature reproducibility with respect to various parameters, such as acquisition settings, scanners, and reconstruction algorithms. ACQUISITION AND VALIDATION METHODS: Three phantoms were scanned in three independent institutions. Images of the following phantoms were acquired: Catphan 700 and COPDGene Phantom II (Phantom Laboratory, Greenwich, NY, USA), and the Triple Modality 3D Abdominal Phantom (CIRS, Norfolk, VA, USA). Data were collected at three Dutch medical centers: MAASTRO Clinic (Maastricht, NL), Radboud University Medical Center (Nijmegen, NL), and University Medical Center Groningen (Groningen, NL), with scanners from two different manufacturers, Siemens Healthcare and Philips Healthcare. The following acquisition parameters were varied in the phantom scans: slice thickness, reconstruction kernel, and tube current. DATA FORMAT AND USAGE NOTES: We made the dataset publicly available on the Dutch instance of the "Extensible Neuroimaging Archive Toolkit-XNAT" (https://xnat.bmia.nl). The dataset is freely available and reusable with attribution (Creative Commons 3.0 license). POTENTIAL APPLICATIONS: Our goal was to provide a findable, open-access, annotated, and reusable CT phantom dataset for radiomics reproducibility studies. Reproducibility testing and harmonization are fundamental requirements for the wide generalizability of radiomics-based clinical prediction models. It is highly desirable to include only reproducible features in models, to be more assured of external validity across hitherto unseen contexts. In this view, phantom data from different centers represent a valuable source of information for excluding CT radiomic features that are already unstable with respect to simplified structures and tightly controlled scan settings. The intended extension of our shared dataset is to include other modalities and phantoms with more realistic lesion simulations.
Medical Physics2019Journal Article, cited 0 times
Website
HNSCC-3D-CT-RT
squamous cell carcinoma
HEAD AND NECK
computed tomography
Purpose To describe in detail a dataset consisting of longitudinal fan-beam computed tomography (CT) imaging to visualize anatomical changes in head-and-neck squamous cell carcinoma (HNSCC) patients throughout the radiotherapy (RT) treatment course. Acquisition and validation methods This dataset consists of CT images from 31 HNSCC patients who underwent volumetric modulated arc therapy (VMAT). Patients had three CT scans acquired throughout the duration of the radiation treatment course: a pretreatment planning CT a median of 13 days before treatment (range: 2–27 days), a mid-treatment CT a median of 22 days after the start of treatment (range: 13–38 days), and a post-treatment CT a median of 65 days after the start of treatment (range: 35–192 days). Patients received RT to a total dose of 58–70 Gy, using daily 2.0–2.2 Gy fractions over 30–35 fractions. The fan-beam CT images were acquired using a Siemens 16-slice CT scanner head protocol at 120 kV and 400 mAs. A helical scan with one rotation per second was used, with a slice thickness of 2 mm and a table increment of 1.2 mm. In addition to the imaging data, contours of anatomical structures for RT, demographic data, and outcome measurements are provided. Data format and usage notes The dataset, with DICOM files including images, RTSTRUCT files, and RTDOSE files, can be found and publicly accessed in The Cancer Imaging Archive (TCIA, http://www.cancerimagingarchive.net/) as the collection Head-and-neck squamous cell carcinoma patients with CT taken during pretreatment, mid-treatment, and post-treatment (HNSCC-3DCT-RT). Discussion This is the first dataset in TCIA to date that provides a collection of multiple CT imaging studies (pretreatment, mid-treatment, and post-treatment) throughout the treatment course. The dataset can serve a wide array of research projects including (but not limited to): quantitative imaging assessment, investigation of anatomical changes with treatment progress, dosimetry of target volumes and/or normal structures due to anatomical changes occurring during treatment, investigation of RT toxicity, and concurrent chemotherapy and RT effects on head-and-neck patients.
Multi-subtype classification model for non-small cell lung cancer based on radiomics: SLS model
Liu, J.
Cui, J.
Liu, F.
Yuan, Y.
Guo, F.
Zhang, G.
Med Phys2019Journal Article, cited 0 times
Website
NSCLC-Radiomics
Non Small Cell Lung Cancer (NSCLC)
Radiomics
Radiomic feature
PURPOSE: Histological subtypes of non-small cell lung cancer (NSCLC) are crucial for systematic treatment decisions. However, current studies using noninvasive radiomic methods to classify NSCLC histology subtypes have mainly focused on the two main subtypes, squamous cell carcinoma (SCC) and adenocarcinoma (ADC), while multi-subtype classifications that also include the other two subtypes of NSCLC, large cell carcinoma (LCC) and not otherwise specified (NOS), have been rare. The aim of this work is to establish a multi-subtype classification model for the four main subtypes of NSCLC and to improve classification performance and generalization ability relative to previous studies. METHODS: In this work, we extracted 1029 features from regions of interest in computed tomography (CT) images of 349 patients from two different datasets using radiomic methods. Based on a 'three-in-one' concept, we propose a model called SLS, wrapping three algorithms (synthetic minority oversampling technique, l2,1-norm minimization, and support vector machines) into one hybrid technique to classify the four main subtypes of NSCLC: SCC, ADC, LCC, and NOS, which covers the whole range of NSCLC. RESULTS: We analyzed the 247 features obtained by dimension reduction and found that features extracted by three methods, first-order statistics, gray-level co-occurrence matrix, and gray-level size-zone matrix, were more conducive to the classification of NSCLC subtypes. The proposed SLS model achieved an average classification accuracy of 0.89 on the training set (95% confidence interval [CI]: 0.846 to 0.912) and a classification accuracy of 0.86 on the test set (95% CI: 0.779 to 0.941). CONCLUSIONS: The experimental results showed that the subtypes of NSCLC can be well classified by radiomic methods. Our SLS model can accurately classify and diagnose the four subtypes of NSCLC based on CT images, and thus has the potential to be used in clinical practice to provide valuable information for lung cancer treatment and further promote personalized medicine.
Projection-domain scatter correction for cone beam computed tomography using a residual convolutional neural network
PURPOSE: Scatter is a major factor degrading the image quality of cone beam computed tomography (CBCT). Conventional scatter correction strategies require handcrafted analytical models with ad hoc assumptions, which often leads to less accurate scatter removal. This study aims to develop an effective scatter correction method using a residual convolutional neural network (CNN). METHODS: A U-net based 25-layer CNN was constructed for CBCT scatter correction. The establishment of the model consisted of three steps: model training, validation, and testing. For model training, a total of 1800 pairs of X-ray projections and the corresponding scatter-only distributions in nonanthropomorphic phantoms, taken in full-fan scan mode, were generated using Monte Carlo simulation of a CBCT scanner installed with a proton therapy system. End-to-end CNN training was implemented with two major loss functions for 100 epochs with a mini-batch size of 10. Image rotations and flips were randomly applied to augment the training datasets during training. For validation, 200 projections of a digital head phantom were collected. The proposed CNN-based method was compared to a conventional projection-domain scatter correction method, the fast adaptive scatter kernel superposition (fASKS) method, using 360 projections of an anthropomorphic head phantom. Two different loss functions were applied to the same CNN to evaluate the impact of the loss function on the final results. Furthermore, the CNN model trained with full-fan projections was fine-tuned for scatter correction in half-fan scans by using transfer learning with an additional 360 half-fan projection pairs of nonanthropomorphic phantoms. The tuned CNN model for half-fan scans was compared with the fASKS method as well as with the CNN-based method without fine-tuning, using additional lung phantom projections. RESULTS: The CNN-based method provides projections with significantly reduced scatter and CBCT images with more accurate Hounsfield Units (HUs) than the fASKS-based method. The root mean squared error of the CNN-corrected projections improved to 0.0862, compared to 0.278 for uncorrected projections and 0.117 for the fASKS-corrected projections. The CNN-corrected reconstruction provided better HU quantification, especially in regions near air or bone interfaces. All four image quality measures, which include mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM), indicated that the CNN-corrected images were significantly better than the fASKS-corrected images. Moreover, the proposed transfer learning technique made it possible for the CNN model trained with full-fan projections to be applied to scatter removal in half-fan projections after fine-tuning with only a small number of additional half-fan training datasets. The SSIM value of the tuned-CNN-corrected images was 0.9993, compared to 0.9984 for the non-tuned-CNN-corrected images and 0.9990 for the fASKS-corrected images. Finally, the CNN-based method is computationally efficient: the correction time for the 360 projections took less than 5 s in the reported experiments on a PC (4.20 GHz Intel Core-i7 CPU) with a single NVIDIA GTX 1070 GPU. CONCLUSIONS: The proposed deep learning-based method provides an effective tool for CBCT scatter correction and holds significant value for quantitative imaging and image-guided radiation therapy.
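The four reported quality measures are all standard and available in scikit-image. A minimal sketch evaluating a corrected reconstruction against a reference (both arrays are placeholders):

    import numpy as np
    from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity

    ref = np.random.rand(256, 256)                  # placeholder reference reconstruction
    test = ref + 0.01 * np.random.randn(256, 256)   # placeholder CNN-corrected reconstruction

    mae = float(np.abs(ref - test).mean())
    mse = mean_squared_error(ref, test)
    psnr = peak_signal_noise_ratio(ref, test, data_range=1.0)
    ssim = structural_similarity(ref, test, data_range=1.0)
    print(mae, mse, psnr, ssim)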
Machine learning approach for distinguishing malignant and benign lung nodules utilizing standardized perinodular parenchymal features from CT
Uthoff, J.
Stephens, M. J.
Newell, J. D., Jr.
Hoffman, E. A.
Larson, J.
Koehn, N.
De Stefano, F. A.
Lusk, C. M.
Wenzlaff, A. S.
Watza, D.
Neslund-Dudas, C.
Carr, L. L.
Lynch, D. A.
Schwartz, A. G.
Sieren, J. C.
Med Phys2019Journal Article, cited 62 times
Website
PURPOSE: Computed tomography (CT) is an effective method for detecting and characterizing lung nodules in vivo. With the growing use of chest CT, the detection frequency of lung nodules is increasing. Noninvasive methods to distinguish malignant from benign nodules have the potential to decrease the clinical burden, risk, and cost involved in follow-up procedures on the large number of false-positive lesions detected. This study examined the benefit of including perinodular parenchymal features in machine learning (ML) tools for pulmonary nodule assessment. METHODS: Lung nodule cases with pathology-confirmed diagnoses (74 malignant, 289 benign) were used to extract quantitative imaging characteristics from computed tomography scans of the nodule and the perinodular parenchymal tissue. An ML tool development pipeline was employed, using k-medoids clustering and information theory to determine efficient predictor sets for different amounts of parenchyma inclusion, and an artificial neural network classifier was built. The resulting ML tool was validated using an independent cohort (50 malignant, 50 benign). RESULTS: The inclusion of parenchymal imaging features improved the performance of the ML tool over exclusively nodular features (P < 0.01). The best-performing ML tool included features derived from nodule diameter-based quartile bands of the surrounding parenchymal tissue. We demonstrate similarly high performance on the independent validation cohort (AUC-ROC = 0.965). A comparison on the independent validation cohort with the Fleischner pulmonary nodule follow-up guidelines demonstrated a theoretical reduction in recommended follow-up imaging and procedures. CONCLUSIONS: Radiomic features extracted from the parenchyma surrounding lung nodules contain valid signals with spatial relevance for the task of lung cancer risk classification. Through standardization of feature extraction regions in the parenchyma, ML tool validation performance of 100% sensitivity and 96% specificity was achieved.
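The perinodular features come from concentric parenchymal bands around the nodule mask; incremental bands can be built with repeated binary dilation. The diameter-based quartile banding in the paper is a refinement of this idea, so treat the fixed step size below as an illustrative simplification:

    import numpy as np
    from scipy.ndimage import binary_dilation

    def peritumoral_bands(nodule_mask, n_bands=4, step_vox=3):
        # Return disjoint shells around the mask, each step_vox dilations thick
        bands, inner = [], nodule_mask.astype(bool)
        for _ in range(n_bands):
            outer = binary_dilation(inner, iterations=step_vox)
            bands.append(outer & ~inner)
            inner = outer
        return bands

    mask = np.zeros((64, 64, 64), dtype=bool)
    mask[28:36, 28:36, 28:36] = True   # placeholder nodule segmentation
    for i, band in enumerate(peritumoral_bands(mask)):
        print(f"band {i}: {band.sum()} voxels")

Radiomic features would then be extracted per band, optionally intersected with a lung parenchyma mask.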
Reliability of tumor segmentation in glioblastoma: impact on the robustness of MRI‐radiomic features
Tixier, Florent
Um, Hyemin
Young, Robert J
Veeraraghavan, Harini
Med Phys2019Journal Article, cited 0 times
Website
TCGA-GBM
Radiomics
Glioblastoma Multiforme (GBM)
PURPOSE: The use of radiomic features as biomarkers of treatment response and outcome, or as correlates to genomic variations, requires that the computed features be robust and reproducible. Segmentation, a crucial step in radiomic analysis, is a major source of variability in the computed radiomic features. Therefore, we studied the impact of tumor segmentation variability on the robustness of MRI radiomic features. METHODS: Fluid-attenuated inversion recovery (FLAIR) and contrast-enhanced T1-weighted (T1WICE) MRI of 90 patients diagnosed with glioblastoma were segmented using a semi-automatic algorithm and an interactive segmentation with two different raters. We analyzed the robustness of 108 radiomic features from 5 categories (intensity histogram, gray-level co-occurrence matrix, gray-level size-zone matrix (GLSZM), edge maps, and shape) using the intra-class correlation coefficient (ICC) and Bland-Altman analysis. RESULTS: Our results show that both segmentation methods are reliable, with ICC ≥ 0.96 and a standard deviation (SD) of the mean differences between the two raters (SDdiffs) ≤ 30%. Features computed from the histogram and co-occurrence matrices were found to be the most robust (ICC ≥ 0.8 and SDdiffs ≤ 30% for most features in these groups). Features from the GLSZM showed mixed robustness. Edge, shape, and GLSZM features were the most affected by the choice of segmentation method, with the interactive method resulting in more robust features than the semi-automatic method. Finally, features computed from T1WICE and FLAIR images were found to have similar robustness when computed with the interactive segmentation method. CONCLUSION: Semi-automatic and interactive segmentation methods using two raters are both reliable. The interactive method produced more robust features than the semi-automatic method. We also found that the robustness of radiomic features varied by category. Therefore, this study could help motivate the choice of segmentation methods and feature selection in MRI radiomic studies.
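The between-rater agreement above is summarized by the SD of differences relative to the overall mean (SDdiffs). A Bland-Altman sketch for one radiomic feature measured by two raters (values are synthetic; the paper's exact SDdiffs normalization is an assumption here):

    import numpy as np

    def bland_altman(rater1, rater2):
        # Bland-Altman agreement statistics for one feature measured by two raters
        r1, r2 = np.asarray(rater1, float), np.asarray(rater2, float)
        diff = r1 - r2
        mean_diff, sd_diff = diff.mean(), diff.std(ddof=1)
        limits = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)  # 95% limits
        sd_diffs_pct = 100.0 * sd_diff / abs(np.r_[r1, r2].mean())         # SDdiffs as % of mean
        return mean_diff, limits, sd_diffs_pct

    r1 = np.random.normal(10.0, 1.0, 90)        # feature values from rater 1 (placeholder)
    r2 = r1 + np.random.normal(0.0, 0.3, 90)    # rater 2 with small disagreement
    print(bland_altman(r1, r2))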
Technical Note: In silico imaging tools from the VICTRE clinical trial
Sharma, Diksha
Graff, Christian G.
Badal, Andreu
Zeng, Rongping
Sawant, Purva
Sengupta, Aunnasha
Dahal, Eshan
Badano, Aldo
Medical Physics2019Journal Article, cited 0 times
VICTRE
BREAST
Model
PURPOSE: In silico imaging clinical trials are emerging alternative sources of evidence for regulatory evaluation and are typically cheaper and faster than human trials. In this Note, we describe the set of in silico imaging software tools used in VICTRE (the Virtual Clinical Trial for Regulatory Evaluation), which replicated a traditional trial using a computational pipeline. MATERIALS AND METHODS: We describe a complete imaging clinical trial software package for comparing two breast imaging modalities (digital mammography and digital breast tomosynthesis). First, digital breast models were developed based on procedural generation techniques for normal anatomy. Second, lesions were inserted in a subset of the breast models. The breasts were imaged using GPU-accelerated Monte Carlo transport methods and read using image interpretation models for the presence of lesions. All in silico components were assembled into a computational pipeline. The VICTRE images were made available in DICOM format for ease of use and visualization. RESULTS: We describe an open-source collection of in silico tools for running imaging clinical trials. All tools and source codes have been made freely available. CONCLUSION: The open-source tools distributed as part of the VICTRE project facilitate the design and execution of other in silico imaging clinical trials. The entire pipeline can be run as a complete imaging chain, modified to match the needs of other trial designs, or used as independent components to build additional pipelines.
ALTIS: A fast and automatic lung and trachea CT-image segmentation method
Sousa, A. M.
Martins, S. B.
Falcão, A. X.
Reis, F.
Bagatin, E.
Irion, K.
Med Phys2019Journal Article, cited 0 times
LIDC-IDRI
Algorithm Development
Segmentation
PURPOSE: The automated segmentation of each lung and the trachea in CT scans is commonly taken as a solved problem. Indeed, existing approaches may easily fail in the presence of abnormalities caused by disease, trauma, or previous surgery. For robustness, we present ALTIS (implementation available at http://lids.ic.unicamp.br/downloads), a fast automatic lung and trachea CT-image segmentation method that relies on image features and relative shape- and intensity-based characteristics that are less affected by most appearance variations of abnormal lungs and tracheas. METHODS: ALTIS consists of a sequence of image foresting transforms (IFTs) organized in three main steps: (a) lung-and-trachea extraction, (b) seed estimation inside the background, trachea, left lung, and right lung, and (c) their delineation such that each object is defined by an optimum-path forest rooted at its internal seeds. We compare ALTIS with two methods based on shape models (SOSM-S and MALF) and one algorithm based on seeded region growing (PTK). RESULTS: The experiments involve 1255 scans from multiple public data sets containing many anomalous cases, the highest number reported in the literature, with only 50 normal scans used for training and 1205 scans used for testing the methods. Quantitative experiments are based on two metrics, DICE and ASSD. Furthermore, we demonstrate the robustness of ALTIS in seed estimation. Considering the test set, the proposed method achieves an average DICE of 0.987 for both lungs and 0.898 for the trachea, and an average ASSD of 0.938 for the right lung, 0.856 for the left lung, and 1.316 for the trachea. These results indicate that ALTIS is statistically more accurate and considerably faster than the compared methods, being able to complete segmentation in a few seconds on modern PCs. CONCLUSION: ALTIS is the most effective and efficient choice among the compared methods for segmenting the left lung, right lung, and trachea in anomalous CT scans for the subsequent detection, segmentation, and quantitative analysis of abnormal structures in the lung parenchyma and pleural space.
Stability and reproducibility of computed tomography radiomic features extracted from peritumoral regions of lung cancer lesions
Tunali, Ilke
Hall, Lawrence O
Napel, Sandy
Cherezov, Dmitry
Guvenis, Albert
Gillies, Robert J
Schabath, Matthew B
Med Phys2019Journal Article, cited 0 times
LUNG
Radiomics
PURPOSE: Recent efforts have demonstrated that radiomic features extracted from the peritumoral region, the area surrounding the tumor parenchyma, have clinical utility in various cancer types. However, like any radiomic features, peritumoral features can be unstable and/or nonreproducible. Hence, the purpose of this study was to assess the stability and reproducibility of computed tomography (CT) radiomic features extracted from the peritumoral regions of lung lesions, where stability was defined as the consistency of a feature across different segmentations, and reproducibility was defined as the consistency of a feature across different image acquisitions. METHODS: Stability was measured utilizing the "moist run" dataset, and reproducibility was measured utilizing the Reference Image Database to Evaluate Therapy Response test-retest dataset. Peritumoral radiomic features were extracted from incremental distances of 3-12 mm outside the tumor segmentation. A total of 264 statistical, histogram, and texture radiomic features were assessed from the selected peritumoral regions-of-interest (ROIs). All features (except wavelet texture features) were extracted using standardized algorithms defined by the Image Biomarker Standardisation Initiative. Stability and reproducibility of features were assessed using the concordance correlation coefficient. The clinical utility of stable and reproducible peritumoral features was tested in three previously published lung cancer datasets using overall survival as the endpoint. RESULTS: Features found to be stable and reproducible, regardless of the peritumoral distance, included statistical, histogram, and a subset of texture features, suggesting that these features are less affected by changes (e.g., size or shape) of the peritumoral region due to different segmentations and image acquisitions. The stability and reproducibility of Laws and wavelet texture features were inconsistent across peritumoral distances. The analyses also revealed that a subset of features were consistently stable irrespective of the initial parameters (e.g., seed point) for a given segmentation algorithm. No significant differences in stability were found between features extracted from ROIs bounded by a lung parenchyma mask and those from unbounded ROIs (i.e., peritumoral regions that extended outside the lung parenchyma). After testing the clinical utility of peritumoral features, stable and reproducible features were shown to be more likely to yield repeatable models than unstable and nonreproducible features. CONCLUSIONS: This study identified a subset of stable and reproducible CT radiomic features extracted from the peritumoral region of lung lesions. These features could be applied in a feature selection pipeline for CT radiomic analyses. According to our findings, the top-performing features in survival models were more likely to be stable and reproducible; hence, it may be best practice to utilize them to achieve repeatable studies and reduce the chance of overfitting.
Automatic classification of lung nodule candidates based on a novel 3D convolution network and knowledge transferred from a 2D network
Zuo, Wangxia
Zhou, Fuqiang
He, Yuzhu
Li, Xiaosong
Med Phys2019Journal Article, cited 0 times
LIDC-IDRI
Algorithm Development
Computer Aided Detection (CADe)
OBJECTIVE: In an automatic lung nodule detection system, the authenticity of a large number of nodule candidates must be judged, which is a classification task. However, the variable shapes and sizes of lung nodules pose a great challenge to candidate classification. To solve this problem, we propose a method for classifying nodule candidates with a three-dimensional (3D) convolutional neural network (ConvNet) model that is trained by transferring knowledge from a multiresolution two-dimensional (2D) ConvNet model. METHODS: In this scheme, a novel 3D ConvNet model is preweighted with the weights of a trained 2D ConvNet model, and the 3D ConvNet model is then trained with 3D image volumes. In this way, the knowledge transfer method makes the 3D network easier to converge and makes full use of the spatial information of nodules with different sizes and shapes to improve classification accuracy. RESULTS: The experimental results on 551 065 pulmonary nodule candidates in the LUNA16 dataset show that our method attains a competitive average score in the false-positive reduction track of lung nodule detection, with sensitivities of 0.619 and 0.642 at 0.125 and 0.25 FPs per scan, respectively. CONCLUSIONS: The proposed method maintains satisfactory classification accuracy even when the false-positive rate is extremely small, in the face of nodules of different sizes and shapes. Moreover, as a transfer learning idea, the method of transferring knowledge from a 2D ConvNet to a 3D ConvNet is the first attempt to carry out full migration of the parameters of all layer types, including convolution layers, fully connected layers, and the classifier, between models of different dimensionality, which is more conducive to utilizing existing 2D ConvNet resources and generalizing transfer learning schemes.
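Pre-weighting a 3D ConvNet from a trained 2D one is commonly done by replicating each 2D kernel along the new depth axis and rescaling so activations keep their magnitude (I3D-style inflation). Whether the authors use exactly this scheme is not stated, so treat this PyTorch sketch as one standard variant:

    import torch
    import torch.nn as nn

    def inflate_conv2d_to_3d(conv2d: nn.Conv2d, depth: int = 3) -> nn.Conv3d:
        # Copy a trained Conv2d into a Conv3d by tiling weights along depth and rescaling
        k = conv2d.kernel_size
        conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                           kernel_size=(depth, k[0], k[1]),
                           padding=(depth // 2, conv2d.padding[0], conv2d.padding[1]),
                           bias=conv2d.bias is not None)
        with torch.no_grad():
            w = conv2d.weight.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth  # keep response scale
            conv3d.weight.copy_(w)
            if conv2d.bias is not None:
                conv3d.bias.copy_(conv2d.bias)
        return conv3d

    conv2d = nn.Conv2d(1, 16, kernel_size=3, padding=1)   # stands in for a trained 2D layer
    conv3d = inflate_conv2d_to_3d(conv2d)
    print(conv3d(torch.randn(1, 1, 8, 32, 32)).shape)     # (1, 16, 8, 32, 32)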
A fast and scalable method for quality assurance of deformable image registration on lung CT scans using convolutional neural networks
Galib, Shaikat M
Lee, Hyoung K
Guy, Christopher L
Riblett, Matthew J
Hugo, Geoffrey D
Med Phys2020Journal Article, cited 1 times
Website
4D-Lung
Deep Learning
Image registration
PURPOSE: To develop and evaluate a method to automatically identify and quantify deformable image registration (DIR) errors between lung computed tomography (CT) scans for quality assurance (QA) purposes. METHODS: We propose a deep learning method to flag registration errors. The method involves preparation of a dataset for machine learning model training and testing, design of a three-dimensional (3D) convolutional neural network architecture that classifies registrations into good or poor classes, and evaluation of a metric called the registration error index (REI), which provides a quantitative measure of registration error. RESULTS: Our study shows that, despite the limited number of training images available (10 CT scan pairs for training and 17 CT scan pairs for testing), the method achieves 0.882 AUC-ROC on the test dataset. Furthermore, the combined standard uncertainty of the REI estimated by our model lies within +/- 0.11 (+/- 11% of the true REI value), with a confidence level of approximately 68%. CONCLUSIONS: We have developed and evaluated our method using original clinical registrations without generating any synthetic/simulated data. Moreover, the test data were acquired from a different environment than the training data, so the method was validated robustly. The results of this study show that our algorithm performs reasonably well in challenging scenarios.
Head and neck cancer patient images for determining auto-segmentation accuracy in T2-weighted magnetic resonance imaging through expert manual segmentations
PURPOSE: The use of magnetic resonance imaging (MRI) in radiotherapy treatment planning has rapidly increased due to its ability to evaluate a patient's anatomy without the use of ionizing radiation and due to its high soft tissue contrast. For these reasons, MRI has become the modality of choice for longitudinal and adaptive treatment studies. Automatic segmentation could offer many benefits for these studies. In this work, we describe a T2-weighted MRI dataset of head and neck cancer patients that can be used to evaluate the accuracy of head and neck normal tissue auto-segmentation systems through comparisons to available expert manual segmentations. ACQUISITION AND VALIDATION METHODS: T2-weighted MRI images were acquired for 55 head and neck cancer patients. These scans were collected after radiotherapy computed tomography (CT) simulation scans using a thermoplastic mask to replicate the patient treatment position. All scans were acquired on a single 1.5 T Siemens MAGNETOM Aera MRI scanner with two large four-channel flex phased-array coils. The scans covered the region from the nasopharynx cranially to the supraclavicular lymph node region caudally, when possible, in the superior-inferior direction. Manual contours were created for the left/right submandibular glands, left/right parotids, left/right lymph node level II, and left/right lymph node level III. These contours underwent quality assurance to ensure adherence to predefined guidelines and were corrected where necessary. DATA FORMAT AND USAGE NOTES: The T2-weighted images and RTSTRUCT files are available in DICOM format. The regions of interest are named based on AAPM's Task Group 263 nomenclature recommendations (Glnd_Submand_L, Glnd_Submand_R, LN_Neck_II_L, Parotid_L, Parotid_R, LN_Neck_II_R, LN_Neck_III_L, LN_Neck_III_R). This dataset is available on The Cancer Imaging Archive (TCIA) by the National Cancer Institute under the collection "AAPM RT-MAC Grand Challenge 2019" (https://doi.org/10.7937/tcia.2019.bcfjqfqb). POTENTIAL APPLICATIONS: This dataset provides head and neck patient MRI scans to evaluate auto-segmentation systems on T2-weighted images. Additional anatomies could be provided at a later time to enhance the existing library of contours.
Spline curve deformation model with prior shapes for identifying adhesion boundaries between large lung tumors and tissues around lungs in CT images
Zhang, Xin
Wang, Jie
Yang, Ying
Wang, Bing
Gu, Lixu
Med Phys2020Journal Article, cited 0 times
Website
LIDC-IDRI
RIDER Lung CT
Segmentation
PURPOSE: Automated segmentation of lung tumors attached to anatomic structures such as the chest wall or mediastinum remains a technical challenge because of the similar Hounsfield units of these structures. To address this challenge, we propose herein a spline curve deformation model that combines prior shapes to correct large spatially contiguous errors (LSCEs) in input shapes derived from image-appearance cues. The model is then used to identify the adhesion boundaries between large lung tumors and the tissue around the lungs. METHODS: The deformation of the whole curve is driven by the transformation of the control points (CPs) of the spline curve, which are influenced by external and internal forces. The external force drives the model to fit the positions of the non-LSCEs of the input shapes, while the internal force ensures the local similarity of the displacements of neighboring CPs. The proposed model corrects the gross errors in the lung input shape caused by large lung tumors, where the initial lung shape for the model is inferred from the training shapes by shape-group-based sparse prior information and the input lung shape is inferred by adaptive-thresholding-based segmentation followed by morphological refinement. RESULTS: The accuracy of the proposed model is verified by applying it to images of lungs with either moderate large-sized (ML) tumors or giant large-sized (GL) tumors. The quantitative results in terms of the averages of the Dice similarity coefficient (DSC) and the Jaccard similarity index (SI) are 0.982 +/- 0.006 and 0.965 +/- 0.012 for segmentation of lungs adhered by ML tumors, and 0.952 +/- 0.048 and 0.926 +/- 0.059 for segmentation of lungs adhered by GL tumors, which give 0.943 +/- 0.021 and 0.897 +/- 0.041 for segmentation of the ML tumors, and 0.907 +/- 0.057 and 0.888 +/- 0.091 for segmentation of the GL tumors, respectively. In addition, the bidirectional Hausdorff distances are 5.7 +/- 1.4 and 11.3 +/- 2.5 mm for segmentation of lungs with ML and GL tumors, respectively. CONCLUSIONS: When combined with prior shapes, the proposed spline curve deformation model can deal with large spatially consecutive errors in object shapes obtained from image-appearance information. We verified this method by applying it to the segmentation of lungs with large tumors adhered to the tissue around the lungs and to the large tumors themselves. Both the qualitative and quantitative results are more accurate and repeatable than those obtained with current state-of-the-art techniques.
Stationary computed tomography with source and detector in linear symmetric geometry: Direct filtered backprojection reconstruction
Zhang, Tao
Xing, Yuxiang
Zhang, Li
Jin, Xin
Gao, Hewei
Chen, Zhiqiang
Medical Physics2020Journal Article, cited 0 times
Pancreas-CT
PURPOSE: Inverse-geometry computed tomography (IGCT) could have great potential in medical applications and security inspections, and has been actively investigated in recent years. In this work, we explore a special architecture of IGCT in a stationary configuration: symmetric-geometry computed tomography (SGCT), where the x-ray source and detector are linearly distributed in a symmetric design. A direct filtered backprojection (FBP)-type algorithm is developed to analytically reconstruct images from the SGCT projections.
METHODS: In our proposed SGCT system, a large number of x-ray source points equally distributed along a straight-line trajectory fire sequentially in an ultra-fast manner on one side, and an equispaced detector whose total length is comparable to that of the source continuously collects data on the opposite side, as the object to be scanned moves into the imaging plane. We first present the overall design of SGCT. An FBP-type reconstruction algorithm is then derived for this unique imaging configuration. With finite lengths of the x-ray source and detector arrays, projection data from one segment of an SGCT scan are insufficient for exact reconstruction. As a result, in practical applications, a dual-SGCT scan, whose detector segments are placed perpendicular to each other, is of particular interest and is proposed here. If carefully designed, the two SGCT segments together ensure that the rays passing through each and every point cover at least 180 degrees. In general, however, dual-SGCT suffers from data redundancy, so a weighting strategy is developed to maximize the use of the collected projection data while avoiding image artifacts. In addition, we further extend the fan-beam SGCT to cone beam and obtain a Feldkamp-Davis-Kress (FDK)-type reconstruction algorithm. Finally, we conduct a set of experimental studies, both in simulation and on a prototype SGCT system, and validate our proposed methods.
RESULTS: A simulation study using the Shepp-Logan head phantom confirms that CT images can be exactly reconstructed from a dual-SGCT scan and that our proposed weighting strategy handles the data redundancy properly. Compared with the rebinning-to-parallel-beam method applied to the forward projection of an abdominal CT dataset, our proposed method is less sensitive to data truncation. Our algorithm achieves a spatial resolution of 10.64 lp/cm at the 50% point of the modulation transfer function, higher than that of the rebinning method, which reaches only 9.42 lp/cm even with extremely fine interpolation. Real experiments with a cylindrical object on a prototype SGCT further demonstrate the effectiveness and practicality of the proposed direct FBP method, with noise performance similar to that of the rebinning algorithm.
CONCLUSIONS: A new concept of SGCT with a linearly distributed source and detector is investigated in this work, in which spinning of sources and detectors is no longer needed during data acquisition, simplifying system design, development, and manufacturing. A direct FBP-type algorithm is developed for analytical reconstruction from SGCT projection data. Numerical and real experiments validate our method and show that an exact CT image can be reconstructed from a dual-SGCT scan, where the data redundancy problem is solved by our proposed weighting function.
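For orientation, the sketch below runs a conventional parallel-beam filtered backprojection with scikit-image on the same Shepp-Logan phantom; the SGCT geometry above requires its own filtering and redundancy weighting, so this is only the standard FBP baseline that the paper's direct algorithm generalizes.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles)                     # forward projection
recon = iradon(sinogram, theta=angles, filter_name="ramp")  # FBP reconstruction
print("RMSE vs. phantom:", np.sqrt(np.mean((recon - phantom) ** 2)))
```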
Recurrent Attention Network for False Positive Reduction in the Detection of Pulmonary Nodules in Thoracic CT Scans
M. Mehdi Farhangi
Nicholas Petrick
Berkman Sahiner
Hichem Frigui
Amir A. Amini
Aria Pezeshk
Med Phys2020Journal Article, cited 0 times
Website
LIDC-IDRI
LUNA16 Challenge
National Lung Screening Trial (NLST)
PURPOSE: Multi-view 2-D Convolutional Neural Networks (CNNs) and 3-D CNNs have been successfully used for analyzing volumetric data in many state-of-the-art medical imaging applications. We propose an alternative modular framework that analyzes volumetric data with an approach that is analogous to radiologists' interpretation, and apply the framework to reduce false positives generated by Computer-Aided Detection (CADe) systems for pulmonary nodules in thoracic CT scans. METHODS: In our approach, a deep network consisting of 2-D CNNs first processes slices individually. The features extracted in this stage are then passed to a Recurrent Neural Network (RNN), thereby modeling consecutive slices as a sequence of temporal data and capturing the contextual information across all three dimensions in the volume of interest. Outputs of the RNN layer are weighted before the final fully connected layer, enabling the network to scale the importance of different slices within a volume of interest in an end-to-end training framework. RESULTS: We validated the proposed architecture on the false positive reduction track of the Lung Nodule Analysis (LUNA) challenge for pulmonary nodule detection in chest CT scans, and obtained competitive results compared to 3-D CNNs. Our results show that the proposed approach can encode the 3-D information in volumetric data effectively by achieving a sensitivity > 0.8 with just 1/8 false positives per scan. CONCLUSIONS: Our experimental results demonstrate the effectiveness of temporal analysis of volumetric images for the application of false positive reduction in chest CT scans and show that state-of-the-art 2-D architectures from the literature can be directly applied to analyzing volumetric medical data. As newer and better 2-D architectures are being developed at a much faster rate compared to 3-D architectures, our approach makes it easy to obtain state-of-the-art performance on volumetric data using new 2-D architectures.
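A minimal PyTorch sketch of the slice-sequence idea follows: a shared 2D CNN encodes each slice, a recurrent layer aggregates the slice features, and learned weights rescale the per-slice outputs before classification. Layer sizes, the GRU choice, and the softmax weighting are illustrative assumptions; the authors' architecture details differ.

```python
import torch
import torch.nn as nn

class SliceSequenceNet(nn.Module):
    def __init__(self, feat_dim=64, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(                   # shared 2D slice encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU())
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)                # per-slice importance
        self.head = nn.Linear(hidden, 1)                # nodule vs. non-nodule

    def forward(self, volume):                          # (B, S, 1, H, W)
        b, s = volume.shape[:2]
        feats = self.encoder(volume.flatten(0, 1)).view(b, s, -1)
        seq, _ = self.rnn(feats)                        # (B, S, hidden)
        weights = torch.softmax(self.attn(seq), dim=1)  # weight the slices
        pooled = (weights * seq).sum(dim=1)
        return self.head(pooled)

logits = SliceSequenceNet()(torch.randn(2, 9, 1, 32, 32))  # 9 slices per VOI
print(logits.shape)  # torch.Size([2, 1])
```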
Homology-based radiomic features for prediction of the prognosis of lung cancer based on CT-based radiomics
Kadoya, Noriyuki
Tanaka, Shohei
Kajikawa, Tomohiro
Tanabe, Shunpei
Abe, Kota
Nakajima, Yujiro
Yamamoto, Takaya
Takahashi, Noriyoshi
Takeda, Kazuya
Dobashi, Suguru
Takeda, Ken
Nakane, Kazuaki
Jingu, Keiichi
Med Phys2020Journal Article, cited 0 times
Website
NSCLC Radiogenomics
RIDER Lung CT
QIN LUNG CT
Radiomics
PURPOSE: Radiomics is a new technique that enables noninvasive prognostic prediction by extracting features from medical images. Homology is a concept used in many branches of algebra and topology that can quantify the degree of contact. In the present study, we developed homology-based radiomic features to predict the prognosis of non-small-cell lung cancer (NSCLC) patients and then evaluated the accuracy of this prediction method. METHODS: Four datasets were used: two to provide training and test data and two for the selection of robust radiomic features. All datasets were downloaded from The Cancer Imaging Archive (TCIA). In two-dimensional cases, the Betti numbers consist of two values: b0 (the zero-dimensional Betti number), which is the number of isolated components, and b1 (the one-dimensional Betti number), which is the number of one-dimensional or "circular" holes. For homology-based evaluation, CT images must be converted to binarized images in which each pixel has two possible values: 0 or 1. All CT slices of the gross tumor volume were used for calculating the homology histogram. First, by changing the threshold of the CT value (range: -150 to 300 HU) across all slices, we developed homology-based histograms for b0, b1, and b1/b0 using binarized images. All histograms were then summed, and the summed histogram was normalized by the number of slices. A total of 144 homology-based radiomic features were defined from the histogram. For comparison, 107 features were calculated using the standard radiomics technique. To clarify the prognostic power, the relationship between the values of the homology-based radiomic features and overall survival was evaluated using a LASSO Cox regression model and the Kaplan-Meier method. The retained features with nonzero coefficients calculated by the LASSO Cox regression model were used for fitting the regression model. Moreover, these features were then integrated into a radiomics signature. An individualized rad score was calculated from a linear combination of the selected features, weighted by their respective coefficients. RESULTS: When the patients in the training and test datasets were stratified into high-risk and low-risk groups according to the rad scores, the overall survival of the groups was significantly different. The C-index values for the homology-based features (rad score), standard features (rad score), and tumor size were 0.625, 0.603, and 0.607, respectively, for the training datasets and 0.689, 0.668, and 0.667 for the test datasets. This result showed that homology-based radiomic features had slightly higher prediction power than the standard radiomic features. CONCLUSIONS: Prediction using homology-based radiomic features had a comparable or slightly higher prediction power than standard radiomic features. These findings suggest that homology-based radiomic features may have great potential for improving the prognostic prediction accuracy of CT-based radiomics. It should be noted, however, that this study has some limitations.
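A minimal sketch of the homology histogram described above: each slice is binarized at a sweep of HU thresholds, b0 is counted as the number of connected components, and b1 is recovered from the 2D Euler characteristic (chi = b0 - b1). The threshold range follows the abstract; the toy image is an assumption.

```python
import numpy as np
from skimage.measure import label, euler_number

def betti_numbers(binary_slice: np.ndarray) -> tuple[int, int]:
    b0 = int(label(binary_slice, connectivity=2).max())  # connected components
    chi = euler_number(binary_slice, connectivity=2)     # chi = b0 - b1 in 2D
    return b0, b0 - chi

rng = np.random.default_rng(1)
ct_slice = rng.normal(60, 120, size=(64, 64))            # stand-in HU values

hist = []
for thr in range(-150, 301, 10):                         # HU threshold sweep
    b0, b1 = betti_numbers(ct_slice > thr)
    hist.append((thr, b0, b1, b1 / b0 if b0 else 0.0))
print(hist[:3])
```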
CT images with expert manual contours of thoracic cancer for benchmarking auto-segmentation accuracy
Yang, J.
Veeraraghavan, H.
van Elmpt, W.
Dekker, A.
Gooding, M.
Sharp, G.
Med Phys2020Journal Article, cited 0 times
LCTSC
Lung CT Segmentation Challenge 2017
Automatic segmentation
Computed Tomography (CT)
Algorithm Development
PURPOSE: Automatic segmentation offers many benefits for radiotherapy treatment planning; however, the lack of publicly available benchmark datasets limits the clinical use of automatic segmentation. In this work, we present a well-curated computed tomography (CT) dataset of high-quality manually drawn contours from patients with thoracic cancer that can be used to evaluate the accuracy of thoracic normal tissue auto-segmentation systems. ACQUISITION AND VALIDATION METHODS: Computed tomography scans of 60 patients undergoing treatment simulation for thoracic radiotherapy were acquired from three institutions: MD Anderson Cancer Center, Memorial Sloan Kettering Cancer Center, and the MAASTRO clinic. Each institution provided CT scans from 20 patients, including mean intensity projection four-dimensional CT (4D CT), exhale phase (4D CT), or free-breathing CT scans depending on their clinical practice. All CT scans covered the entire thoracic region with a 50-cm field of view and slice spacing of 1, 2.5, or 3 mm. Manual contours of left/right lungs, esophagus, heart, and spinal cord were retrieved from the clinical treatment plans. These contours were checked for quality and edited if necessary to ensure adherence to RTOG 1106 contouring guidelines. DATA FORMAT AND USAGE NOTES: The CT images and RTSTRUCT files are available in DICOM format. The regions of interest were named according to the nomenclature recommended by American Association of Physicists in Medicine Task Group 263 as Lung_L, Lung_R, Esophagus, Heart, and SpinalCord. This dataset is available on The Cancer Imaging Archive (funded by the National Cancer Institute) under Lung CT Segmentation Challenge 2017 (http://doi.org/10.7937/K9/TCIA.2017.3r3fvz08). POTENTIAL APPLICATIONS: This dataset provides CT scans with well-delineated manually drawn contours from patients with thoracic cancer that can be used to evaluate auto-segmentation systems. Additional anatomies could be supplied in the future to enhance the existing library of contours.
A method of rapid quantification of patient-specific organ doses for CT using deep-learning-based multi-organ segmentation and GPU-accelerated Monte Carlo dose computing
Peng, Z.
Fang, X.
Yan, P.
Shan, H.
Liu, T.
Pei, X.
Wang, G.
Liu, B.
Kalra, M. K.
Xu, X. G.
Med Phys2020Journal Article, cited 0 times
Website
Lung CT Segmentation Challenge 2017
LCTSC
Segmentation
Algorithm Development
PURPOSE: One technical barrier to patient-specific computed tomography (CT) dosimetry has been the lack of computational tools for the automatic patient-specific multi-organ segmentation of CT images and rapid organ dose quantification. When previous CT images are available for the same body region of the patient, the ability to obtain patient-specific organ doses for CT - in a manner similar to radiation therapy treatment planning - will open the door to personalized and prospective CT scan protocols. This study aims to demonstrate the feasibility of combining deep-learning algorithms for automatic segmentation of multiple radiosensitive organs from CT images with GPU-based Monte Carlo rapid organ dose calculation. METHODS: A deep convolutional neural network (CNN) based on the U-Net for organ segmentation is developed and trained to automatically delineate multiple radiosensitive organs from CT images. Two databases are used: the Lung CT Segmentation Challenge 2017 (LCTSC) dataset, which contains 60 thoracic CT scan patients, each with five segmented organs, and the Pancreas-CT (PCT) dataset, which contains 43 abdominal CT scan patients, each with eight segmented organs. A fivefold cross-validation method is performed on both sets of data. Dice similarity coefficients (DSCs) are used to evaluate the segmentation performance against the ground truth. A GPU-based Monte Carlo dose code, ARCHER, is used to calculate patient-specific CT organ doses. The proposed method is evaluated in terms of relative dose errors (RDEs). To demonstrate the potential improvement of the new method, organ dose results are compared against those obtained for population-average patient phantoms used in an off-line dose reporting software, VirtualDose, at Massachusetts General Hospital. RESULTS: The median DSCs are found to be 0.97 (right lung), 0.96 (left lung), 0.92 (heart), 0.86 (spinal cord), 0.76 (esophagus) for the LCTSC dataset, along with 0.96 (spleen), 0.96 (liver), 0.95 (left kidney), 0.90 (stomach), 0.87 (gall bladder), 0.80 (pancreas), 0.75 (esophagus), and 0.61 (duodenum) for the PCT dataset. Compared with organ dose results from population-averaged phantoms, the new patient-specific method achieved smaller absolute RDEs (mean +/- standard deviation) for all organs: 1.8% +/- 1.4% (vs 16.0% +/- 11.8%) for the lung, 0.8% +/- 0.7% (vs 34.0% +/- 31.1%) for the heart, 1.6% +/- 1.7% (vs 45.7% +/- 29.3%) for the esophagus, 0.6% +/- 1.2% (vs 15.8% +/- 12.7%) for the spleen, 1.2% +/- 1.0% (vs 18.1% +/- 15.7%) for the pancreas, 0.9% +/- 0.6% (vs 20.0% +/- 15.2%) for the left kidney, 1.7% +/- 3.1% (vs 19.1% +/- 9.8%) for the gallbladder, 0.3% +/- 0.3% (vs 24.2% +/- 18.7%) for the liver, and 1.6% +/- 1.7% (vs 19.3% +/- 13.6%) for the stomach. The trained automatic segmentation tool takes <5 s per patient for all 103 patients in the dataset. The Monte Carlo radiation dose calculations performed in parallel to the segmentation process using the GPU-accelerated ARCHER code take <4 s per patient to achieve <0.5% statistical uncertainty in all organ doses for all 103 patients in the database. CONCLUSION: This work shows the feasibility of performing combined automatic patient-specific multi-organ segmentation of CT images and rapid GPU-based Monte Carlo dose quantification with clinically acceptable accuracy and efficiency.
Automated proton treatment planning with robust optimization using constrained hierarchical optimization
Taasti, Vicki T.
Hong, Linda
Deasy, Joseph O.
Zarepisheh, Masoud
Medical Physics2020Journal Article, cited 0 times
HNSCC-3DCT-RT
PURPOSE: We present a method for fully automated generation of high-quality robust proton treatment plans using hierarchical optimization. To fill the gap between the two common extreme robust optimization approaches, that is, stochastic and worst-case, a robust optimization approach based on the p-norm function is used, whereby a single parameter, p, can be used to control the level of robustness in an intuitive way.
METHODS: A fully automated approach to treatment planning using Expedited Constrained Hierarchical Optimization (ECHO) is implemented in our clinic for photon treatments. ECHO strictly enforces critical (inviolable) clinical criteria as hard constraints and improves the desirable clinical criteria sequentially, as much as is feasible. We extend our in-house developed ECHO codes to proton therapy and integrate them with a new approach for robust optimization. Multiple scenarios accounting for both setup and range uncertainties are included (13 scenarios), and the maximum/mean/dose-volume constraints on organs-at-risk (OARs) and target are fulfilled in all scenarios. We combine the objective functions of the individual scenarios using the p-norm function. The p-norm with parameter p = 1 or p = ∞ results in the stochastic or the worst-case approach, respectively; an intermediate robustness level is obtained by employing p-values in between. While the worst-case approach focuses only on the worst-case scenario(s), the p-norm approach with a large p value (p ≈ 20) resembles the worst-case approach without completely neglecting other scenarios. The proposed approach is evaluated on three head-and-neck (HN) patients and one water phantom with different parameters, p ∈ {1, 2, 5, 10, 20}. The results are compared against the stochastic approach (p-norm approach with p = 1) and the worst-case approach, as well as the nonrobust approach (optimized solely on the nominal scenario).
RESULTS: The proposed algorithm successfully generates automated robust proton plans in all cases. As opposed to the nonrobust plans, the robust plans have narrower dose volume histogram (DVH) bands across all 13 scenarios, and meet all hard constraints (i.e., maximum/mean/dose-volume constraints) on OARs and the target for all scenarios. The spread in the objective function values is largest for the stochastic approach (p = 1) and decreases with increasing p toward the worst-case approach. Compared to the worst-case approach, the p-norm approach results in DVH bands for the clinical target volume (CTV) which are closer to the prescription dose at a negligible cost in the DVH for the worst scenario, thereby improving the overall plan quality. On average, going from the worst-case approach to the p-norm approach with p = 20, the median objective function value across all the scenarios is improved by 15% while the objective function value for the worst scenario is only degraded by 3%.
CONCLUSION: An automated treatment planning approach for proton therapy is developed, including robustness, dose-volume constraints, and the ability to control the robustness level using the p-norm parameter p, to fit the priorities deemed most important.
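The following NumPy sketch makes the p-norm aggregation concrete: per-scenario objective values are combined into one robust objective whose character moves from the stochastic mean (p = 1) toward the worst case (p → ∞). The scenario values below are made up for illustration.

```python
import numpy as np

def pnorm_objective(scenario_vals: np.ndarray, p: float) -> float:
    """(mean of f_i^p)^(1/p) over the uncertainty scenarios."""
    return float(np.mean(scenario_vals ** p) ** (1.0 / p))

f = np.array([1.0, 1.2, 0.9, 2.5, 1.1])  # e.g., 5 of the 13 setup/range scenarios
for p in (1, 2, 5, 10, 20):
    print(f"p = {p:>2}: robust objective = {pnorm_objective(f, p):.3f}")
print("worst case:", f.max())             # the p -> infinity limit
```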
Automating proton treatment planning with beam angle selection using Bayesian optimization
Taasti, Vicki T.
Hong, Linda
Shim, Jin Sup
Deasy, Joseph O.
Zarepisheh, Masoud
Medical Physics2020Journal Article, cited 0 times
HNSCC-3DCT-RT
PURPOSE: To present a fully automated treatment planning process for proton therapy including beam angle selection using a novel Bayesian optimization approach and previously developed constrained hierarchical fluence optimization method.
METHODS: We adapted our in-house automated intensity modulated radiation therapy (IMRT) treatment planning system, which is based on constrained hierarchical optimization and referred to as ECHO (expedited constrained hierarchical optimization), for proton therapy. To couple this to beam angle selection, we propose using a novel Bayesian approach. By integrating ECHO with this Bayesian beam selection approach, we obtain a fully automated treatment planning framework including beam angle selection. Bayesian optimization is a global optimization technique which only needs to search a small fraction of the search space for slowly varying objective functions (i.e., smooth functions). Expedited constrained hierarchical optimization is run for some initial beam angle candidates and the resultant treatment plan for each beam configuration is rated using a clinically relevant treatment score function. Bayesian optimization iteratively predicts the treatment score for not-yet-evaluated candidates to find the best candidate to be optimized next with ECHO. We tested this technique on five head-and-neck (HN) patients with two coplanar beams. In addition, tests were performed with two noncoplanar and three coplanar beams for two patients.
RESULTS: For the two coplanar configurations, the Bayesian optimization found the optimal beam configuration after running ECHO for, at most, 4% of all potential configurations (23 iterations) for all patients (range: 2%-4%). Compared with the beam configurations chosen by the planner, the optimal configurations reduced the mandible maximum dose by 6.6 Gy and high dose to the unspecified normal tissues by 3.8 Gy, on average. For the two noncoplanar and three coplanar beam configurations, the algorithm converged after 45 iterations (examining <1% of all potential configurations).
CONCLUSIONS: A fully automated and efficient treatment planning process for proton therapy, including beam angle optimization was developed. The algorithm automatically generates high-quality plans with optimal beam angle configuration by combining Bayesian optimization and ECHO. As the Bayesian optimization is capable of handling complex nonconvex functions, the treatment score function which is used in the algorithm to evaluate the dose distribution corresponding to each beam configuration can contain any clinically relevant metric.
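As a sketch of the search loop described above, the code below fits a Gaussian process to the treatment scores of beam configurations already planned, and an upper-confidence-bound rule picks the next configuration to evaluate. The synthetic plan_score function stands in for running ECHO and scoring the resulting plan, and the UCB acquisition and kernel settings are assumptions rather than the paper's exact choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

angles = np.arange(0, 360, 20)
candidates = np.array([(a, b) for a in angles for b in angles if a < b], float)

def plan_score(cfg):                       # stand-in for "run ECHO, score the plan"
    return -np.cos(np.radians(cfg[0])) - 0.5 * np.cos(np.radians(cfg[1]))

rng = np.random.default_rng(2)
tried = list(rng.choice(len(candidates), size=5, replace=False))
scores = [plan_score(candidates[i]) for i in tried]

gp = GaussianProcessRegressor(kernel=Matern(length_scale=60.0, nu=2.5),
                              normalize_y=True)
for _ in range(15):                        # far fewer than all candidates
    gp.fit(candidates[tried], scores)
    mu, sd = gp.predict(candidates, return_std=True)
    ucb = mu + 1.5 * sd                    # acquisition: exploit + explore
    ucb[tried] = -np.inf                   # never re-plan a tried configuration
    nxt = int(np.argmax(ucb))
    tried.append(nxt)
    scores.append(plan_score(candidates[nxt]))

best = tried[int(np.argmax(scores))]
print("best configuration found:", candidates[best], max(scores))
```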
FAIR-compliant clinical, radiomics and DICOM metadata of RIDER, interobserver, Lung1 and head-Neck1 TCIA collections
Kalendralis, Petros
Shi, Zhenwei
Traverso, Alberto
Choudhury, Ananya
Sloep, Matthijs
Zhovannik, Ivan
Starmans, Martijn P A
Grittner, Detlef
Feltens, Peter
Monshouwer, Rene
Klein, Stefan
Fijten, Rianne
Aerts, Hugo
Dekker, Andre
van Soest, Johan
Wee, Leonard
Med Phys2020Journal Article, cited 0 times
Website
Radiomics
NSCLC-Radiomics
RIDER Lung CT
Head-Neck-Radiomics-HN1
NSCLC-Radiomics- Interobserver1
Imaging features
PURPOSE: One of the most frequently cited radiomics investigations showed that features automatically extracted from routine clinical images could be used in prognostic modeling. These images have been made publicly accessible via The Cancer Imaging Archive (TCIA). There have been numerous requests for additional explanatory metadata on the following datasets - RIDER, Interobserver, Lung1, and Head-Neck1. To support repeatability, reproducibility, generalizability, and transparency in radiomics research, we publish the subjects' clinical data, extracted radiomics features, and digital imaging and communications in medicine (DICOM) headers of these four datasets with descriptive metadata, in order to be more compliant with findable, accessible, interoperable, and reusable (FAIR) data management principles. ACQUISITION AND VALIDATION METHODS: Overall survival time intervals were updated using a national citizens registry after internal ethics board approval. Spatial offsets of the primary gross tumor volume (GTV) regions of interest (ROIs) associated with the Lung1 CT series were improved on the TCIA. GTV radiomics features were extracted using the open-source Ontology-Guided Radiomics Analysis Workflow (O-RAW). We reshaped the output of O-RAW to map features and extraction settings to the latest version of Radiomics Ontology, so as to be consistent with the Image Biomarker Standardization Initiative (IBSI). Digital imaging and communications in medicine metadata was extracted using a research version of Semantic DICOM (SOHARD, GmbH, Fuerth; Germany). Subjects' clinical data were described with metadata using the Radiation Oncology Ontology. All of the above were published in Resource Descriptor Format (RDF), that is, triples. Example SPARQL queries are shared with the reader to use on the online triples archive, which are intended to illustrate how to exploit this data submission. DATA FORMAT: The accumulated RDF data are publicly accessible through a SPARQL endpoint where the triples are archived. The endpoint is remotely queried through a graph database web application at http://sparql.cancerdata.org. SPARQL queries are intrinsically federated, such that we can efficiently cross-reference clinical, DICOM, and radiomics data within a single query, while being agnostic to the original data format and coding system. The federated queries work in the same way even if the RDF data were partitioned across multiple servers and dispersed physical locations. POTENTIAL APPLICATIONS: The public availability of these data resources is intended to support radiomics features replication, repeatability, and reproducibility studies by the academic community. The example SPARQL queries may be freely used and modified by readers depending on their research question. Data interoperability and reusability are supported by referencing existing public ontologies. The RDF data are readily findable and accessible through the aforementioned link. Scripts used to create the RDF are made available at a code repository linked to this submission: https://gitlab.com/UM-CDS/FAIR-compliant_clinical_radiomics_and_DICOM_metadata.
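A hedged sketch of querying the public endpoint named above from Python with SPARQLWrapper follows. The endpoint host comes from the abstract, but the exact endpoint path and graph layout are assumptions; the generic triple pattern avoids guessing predicate names, which should instead be taken from the example queries published with the dataset.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# "/sparql" path is an assumption; the host is from the abstract.
sparql = SPARQLWrapper("http://sparql.cancerdata.org/sparql")
sparql.setQuery("""
    SELECT ?subject ?predicate ?object WHERE {
        ?subject ?predicate ?object .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

# Federated queries over clinical, DICOM, and radiomics triples follow the
# same pattern, with dataset-specific predicates in the WHERE clause.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["subject"]["value"], row["predicate"]["value"],
          row["object"]["value"])
```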
Investigation of inter-fraction target motion variations in the context of pencil beam scanned proton therapy in non-small cell lung cancer patients
den Otter, L. A.
Anakotta, R. M.
Weessies, M.
Roos, C. T. G.
Sijtsema, N. M.
Muijs, C. T.
Dieters, M.
Wijsman, R.
Troost, E. G. C.
Richter, C.
Meijers, A.
Langendijk, J. A.
Both, S.
Knopf, A. C.
Med Phys2020Journal Article, cited 0 times
Website
4D-Lung
PURPOSE: For locally advanced-stage non-small cell lung cancer (NSCLC), inter-fraction target motion variations during the whole time span of a fractionated treatment course are assessed in a large and representative patient cohort. The primary objective is to develop a suitable motion monitoring strategy for pencil beam scanning proton therapy (PBS-PT) treatments of NSCLC patients during free breathing. METHODS: Weekly 4D computed tomography (4DCT; 41 patients) and daily 4D cone beam computed tomography (4DCBCT; 10 of 41 patients) scans were analyzed for a fully fractionated treatment course. Gross tumor volumes (GTVs) were contoured and the 3D displacement vectors of the centroid positions were compared for all scans. Furthermore, motion amplitude variations in different lung segments were statistically analyzed. The dosimetric impact of target motion variations and target motion assessment was investigated in exemplary patient cases. RESULTS: The median observed centroid motion was 3.4 mm (range: 0.2-12.4 mm) with an average variation of 2.2 mm (range: 0.1-8.8 mm). Ten of 32 patients (31.3%) with an initial motion <5 mm increased beyond a 5-mm motion amplitude during the treatment course. Motion observed in the 4DCBCT scans deviated on average 1.5 mm (range: 0.0-6.0 mm) from the motion observed in the 4DCTs. Larger motion variations for one example patient compromised treatment plan robustness while no dosimetric influence was seen due to motion assessment biases in another example case. CONCLUSIONS: Target motion variations were investigated during the course of radiotherapy for NSCLC patients. Patients with initial GTV motion amplitudes of < 2 mm can be assumed to be stable in motion during the treatment course. For treatments of NSCLC patients who exhibit motion amplitudes of > 2 mm, 4DCBCT should be considered for motion monitoring due to substantial motion variations observed.
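To make the centroid-displacement analysis concrete, here is a minimal sketch: GTV masks from two time points are reduced to their center-of-mass coordinates in millimeters, and the magnitude of the 3D displacement vector is reported. The voxel spacing and toy masks are assumptions.

```python
import numpy as np

def centroid_mm(mask: np.ndarray, spacing=(3.0, 1.0, 1.0)) -> np.ndarray:
    """Center of mass of a binary GTV mask, converted to mm coordinates."""
    idx = np.argwhere(mask)                       # (z, y, x) voxel indices
    return idx.mean(axis=0) * np.asarray(spacing)

week0 = np.zeros((40, 64, 64), bool); week0[18:24, 30:38, 30:38] = True
week3 = np.zeros((40, 64, 64), bool); week3[19:25, 32:40, 31:39] = True

shift = centroid_mm(week3) - centroid_mm(week0)
print("3D centroid displacement: %.1f mm" % np.linalg.norm(shift))
```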
A multi-objective radiomics model for the prediction of locoregional recurrence in head and neck squamous cell cancer
Wang, K.
Zhou, Z.
Wang, R.
Chen, L.
Zhang, Q.
Sher, D.
Wang, J.
Med Phys2020Journal Article, cited 0 times
Website
Classification
Radiomics
HNSCC
PURPOSE: Locoregional recurrence (LRR) is the predominant pattern of relapse after nonsurgical treatment of head and neck squamous cell cancer (HNSCC). Therefore, accurately identifying patients with HNSCC who are at high risk for LRR is important for optimizing personalized treatment plans. In this work, we developed a multi-classifier, multi-objective, and multi-modality (mCOM) radiomics-based outcome prediction model for HNSCC LRR. METHODS: In mCOM, we considered sensitivity and specificity simultaneously as the objectives to guide the model optimization. We used multiple classifiers, comprising support vector machine (SVM), discriminant analysis (DA), and logistic regression (LR), to build the model. We used features from multiple modalities as model inputs, comprising clinical parameters and radiomics features extracted from X-ray computed tomography (CT) images and positron emission tomography (PET) images. We proposed a multi-task multi-objective immune algorithm (mTO) to train the mCOM model and used an evidential reasoning (ER)-based method to fuse the output probabilities from the different classifiers and modalities in mCOM. We evaluated the effectiveness of the developed method using a retrospective public pretreatment HNSCC dataset downloaded from The Cancer Imaging Archive (TCIA). The input for our model included radiomics features extracted from pretreatment PET and CT using open source radiomics software and clinical characteristics such as sex, age, stage, primary disease site, human papillomavirus (HPV) status, and treatment paradigm. In our experiment, 190 patients from two institutions were used for model training while the remaining 87 patients from the other two institutions were used for testing. RESULTS: When we built the predictive model using features from a single modality, the multi-classifier (MC) models achieved better performance than the models built with the three base classifiers individually. When we built the model using features from multiple modalities, the proposed method achieved area under the receiver operating characteristic curve (AUC) values of 0.76 for the radiomics-only model and 0.77 for the model built with radiomics and clinical features, which is significantly higher than the AUCs of models built with single-modality features. The statistical analysis was performed using MATLAB software. CONCLUSIONS: Comparisons with other methods demonstrated the efficiency of the mTO algorithm and the superior performance of the proposed mCOM model for predicting HNSCC LRR.
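A simplified sketch of the multi-classifier idea follows: SVM, discriminant analysis, and logistic regression each output a recurrence probability, and the probabilities are fused. A plain weighted average stands in for the paper's evidential-reasoning fusion, and the data, split sizes, and fusion weights are synthetic assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=277, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=87, random_state=0)

models = [SVC(probability=True), LinearDiscriminantAnalysis(),
          LogisticRegression(max_iter=1000)]
probs = np.column_stack([m.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
                         for m in models])       # one column per classifier

weights = np.array([0.4, 0.3, 0.3])              # assumed fusion weights
fused = probs @ weights                          # stand-in for ER-based fusion
print("fused LRR probabilities (first 5):", np.round(fused[:5], 3))
```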
Deep model with Siamese network for viable and necrotic tumor regions assessment in osteosarcoma
Fu, Yu
Xue, Peng
Ji, Huizhong
Cui, Wentao
Dong, Enqing
Medical Physics2020Journal Article, cited 0 times
Osteosarcoma-Tumor-Assessment
PURPOSE: For the automatic classification of viable and necrotic tumor regions in osteosarcoma, most existing deep learning methods can only use simple models to prevent overfitting on small datasets, which leads to weak image-feature extraction and low model accuracy. To solve this problem, a deep model with a Siamese network (DS-Net) was designed in this paper.
METHODS: The DS-Net, built on fully convolutional networks, is composed of an auxiliary supervision network (ASN) and a classification network. The ASN, based on the Siamese network, aims to address the problem of a small training set (the main bottleneck for deep learning in medical imaging). It uses paired data as input and updates the network through combined labels. The classification network uses the features extracted by the ASN to perform accurate classification.
RESULTS: Pathological diagnosis is the most accurate method for identifying osteosarcoma. However, due to intraclass variation and interclass similarity, it is challenging for pathologists to identify osteosarcoma accurately. In experiments on hematoxylin and eosin (H&E)-stained osteosarcoma histology slides, the DS-Net achieved an average accuracy of 95.1%. Compared with existing methods, the DS-Net performs best on the test dataset.
CONCLUSIONS: The DS-Net not only effectively performs the histological classification of osteosarcoma, but is also applicable to many other medical image classification tasks affected by small datasets.
PleThora: Pleural effusion and thoracic cavity segmentations in diseased lungs for benchmarking chest CT processing pipelines
Kiser, K. J.
Ahmed, S.
Stieb, S.
Mohamed, A. S. R.
Elhalawani, H.
Park, P. Y. S.
Doyle, N. S.
Wang, B. J.
Barman, A.
Li, Z.
Zheng, W. J.
Fuller, C. D.
Giancardo, L.
Med Phys2020Journal Article, cited 0 times
Website
PleThora
NSCLC-Radiomics
Analysis Results
LUNG
U-Net
This manuscript describes a dataset of thoracic cavity segmentations and discrete pleural effusion segmentations we have annotated on 402 computed tomography (CT) scans acquired from patients with non-small cell lung cancer. The segmentation of these anatomic regions precedes fundamental tasks in image analysis pipelines such as lung structure segmentation, lesion detection, and radiomics feature extraction. Bilateral thoracic cavity volumes and pleural effusion volumes were manually segmented on CT scans acquired from The Cancer Imaging Archive "NSCLC Radiomics" data collection. Four hundred and two thoracic segmentations were first generated automatically by a U-Net based algorithm trained on chest CTs without cancer, manually corrected by a medical student to include the complete thoracic cavity (normal, pathologic, and atelectatic lung parenchyma, lung hilum, pleural effusion, fibrosis, nodules, tumor, and other anatomic anomalies), and revised by a radiation oncologist or a radiologist. Seventy-eight pleural effusions were manually segmented by a medical student and revised by a radiologist or radiation oncologist. Interobserver agreement between the radiation oncologist and radiologist corrections was acceptable. All expert-vetted segmentations are publicly available in NIfTI format through The Cancer Imaging Archive at https://doi.org/10.7937/tcia.2020.6c7y-gq39. Tabular data detailing clinical and technical metadata linked to segmentation cases are also available. Thoracic cavity segmentations will be valuable for developing image analysis pipelines on pathologic lungs - where current automated algorithms struggle most. In conjunction with gross tumor volume segmentations already available from "NSCLC Radiomics," pleural effusion segmentations may be valuable for investigating radiomics profile differences between effusion and primary tumor or training algorithms to discriminate between them.
PURPOSE: The dataset contains annotations for lung nodules collected by the Lung Imaging Data Consortium and Image Database Resource Initiative (LIDC) stored as standard DICOM objects. The annotations accompany a collection of computed tomography (CT) scans for over 1000 subjects annotated by multiple expert readers, and correspond to "nodules ≥ 3 mm", defined as any lesion considered to be a nodule with greatest in-plane dimension in the range 3-30 mm regardless of presumed histology. The present dataset aims to simplify reuse of the data with the readily available tools, and is targeted towards researchers interested in the analysis of lung CT images.
ACQUISITION AND VALIDATION METHODS: Open source tools were utilized to parse the project-specific XML representation of LIDC-IDRI annotations and save the result as standard DICOM objects. Validation procedures focused on establishing compliance of the resulting objects with the standard, consistency of the data between the DICOM and project-specific representation, and evaluating interoperability with the existing tools.
DATA FORMAT AND USAGE NOTES: The dataset utilizes DICOM Segmentation objects for storing annotations of the lung nodules, and DICOM Structured Reporting objects for communicating qualitative evaluations (nine attributes) and quantitative measurements (three attributes) associated with the nodules. The total of 875 subjects contain 6859 nodule annotations. Clustering of the neighboring annotations resulted in 2651 distinct nodules. The data are available in TCIA at https://doi.org/10.7937/TCIA.2018.h7umfurq.
POTENTIAL APPLICATIONS: The standardized dataset maintains the content of the original contribution of the LIDC-IDRI consortium, and should be helpful in developing automated tools for characterization of lung lesions and image phenotyping. In addition to those properties, the representation of the present dataset makes it more FAIR (Findable, Accessible, Interoperable, Reusable) for the research community, and enables its integration with other standardized data collections.
Comparison of iterative parametric and indirect deep learning-based reconstruction methods in highly undersampled DCE-MR Imaging of the breast
Rastogi, A.
Yalavarthy, P. K.
Med Phys2020Journal Article, cited 0 times
Website
QIN Breast DCE-MRI
PURPOSE: To compare the performance of iterative direct and indirect parametric reconstruction methods with indirect deep learning-based reconstruction methods in estimating tracer-kinetic parameters from highly undersampled DCE-MR imaging breast data, and to provide a systematic comparison of the same. METHODS: Estimation of tracer-kinetic parameters from undersampled data using indirect methods requires first reconstructing the anatomical images by solving an inverse problem. These reconstructed images are then used to estimate the tracer-kinetic parameters. In direct estimation, the parameters are estimated without reconstructing the anatomical images. Both problems are ill-posed and are typically solved using prior-based regularization or deep learning. In this study, for indirect estimation, two deep learning-based reconstruction frameworks, ISTA-Net(+) and MODL, were utilized. For direct and indirect parametric estimation, sparsity-inducing priors (L1 and total variation) were deployed with the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm as the solver. The performance of these techniques was compared systematically in the estimation of vascular permeability (Ktrans) from undersampled DCE-MRI breast data using Patlak as the pharmacokinetic model. The experiments involved retrospective undersampling of the data at 20x, 50x, and 100x, and comparison of the results using the PSNR, nRMSE, SSIM, and Xydeas metrics. The Ktrans maps estimated from fully sampled data were utilized as ground truth. The developed code is available open source at https://github.com/Medical-Imaging-Group/DCE-MRI-Compare. RESULTS: The reconstruction methods' performance was evaluated using breast data from ten patients (five each for training and testing). Consistent with other studies, the results indicate that direct parametric reconstruction methods provide improved performance compared to the indirect parametric reconstruction methods. The results also indicate that for 20x undersampling, deep learning-based methods perform better than or on par with direct estimation in terms of PSNR, SSIM, and nRMSE. However, for higher undersampling rates (50x and 100x), direct estimation performs better on all metrics. For all undersampling rates, direct reconstruction performed better in terms of the Xydeas metric, which indicates fidelity in the magnitude and orientation of edges. CONCLUSION: Deep learning-based indirect techniques perform on par with direct estimation techniques at lower undersampling rates in breast DCE-MR imaging. At higher undersampling rates, they are not able to provide the much-needed generalization. Direct estimation techniques provide more accurate results than both deep learning-based and parametric indirect methods in these high-undersampling scenarios.
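For readers unfamiliar with the Patlak step used above, here is a minimal sketch: tissue concentration is modeled as Ct(t) = Ktrans * integral(Cp) + vp * Cp(t), so Ktrans and vp fall out of a linear least-squares fit. The arterial input function and concentrations below are synthetic.

```python
import numpy as np

t = np.linspace(0, 300, 61)                          # seconds
cp = 5.0 * t / 60.0 * np.exp(-t / 80.0)              # synthetic arterial input
# Cumulative trapezoidal integral of Cp over time.
cp_int = np.concatenate(([0.0],
                         np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))

ktrans_true, vp_true = 0.002, 0.05                   # per-second, fraction
ct = ktrans_true * cp_int + vp_true * cp             # Patlak forward model
ct += np.random.default_rng(3).normal(scale=0.01, size=ct.shape)

A = np.column_stack([cp_int, cp])                    # linear design matrix
ktrans, vp = np.linalg.lstsq(A, ct, rcond=None)[0]
print(f"Ktrans = {ktrans:.4f} /s (true {ktrans_true}), vp = {vp:.3f}")
```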
Technical Note: Automatic segmentation of CT images for ventral body composition analysis
Fu, Yabo
Ippolito, Joseph E.
Ludwig, Daniel R.
Nizamuddin, Rehan
Li, Harold H.
Yang, Deshan
Medical Physics2020Journal Article, cited 0 times
TCGA-KIRC
PURPOSE: Body composition is known to be associated with many diseases including diabetes, cancers, and cardiovascular diseases. In this paper, we developed a fully automatic body tissue decomposition procedure to segment three major compartments that are related to body composition analysis - subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and muscle. Three additional compartments - the ventral cavity, lung, and bones - were also segmented during the segmentation process to assist segmentation of the major compartments.
METHODS: A convolutional neural network (CNN) model with densely connected layers was developed to perform ventral cavity segmentation. An image processing workflow was developed to segment the ventral cavity in any patient's computed tomography (CT) scan using the CNN model, and then to further segment the body tissue into multiple compartments using hysteresis thresholding followed by morphological operations. It is important to segment the ventral cavity first to allow accurate separation of compartments with similar Hounsfield units (HU) inside and outside the ventral cavity.
RESULTS: The ventral cavity segmentation CNN model was trained and tested with manually labeled ventral cavities in 60 CTs. Dice scores (mean ± standard deviation) for ventral cavity segmentation were 0.966 ± 0.012. Tested on CT datasets with intravenous (IV) and oral contrast, the Dice scores were 0.96 ± 0.02, 0.94 ± 0.06, 0.96 ± 0.04, 0.95 ± 0.04, and 0.99 ± 0.01 for bone, VAT, SAT, muscle, and lung, respectively. The respective Dice scores were 0.97 ± 0.02, 0.94 ± 0.07, 0.93 ± 0.06, 0.91 ± 0.04, and 0.99 ± 0.01 for non-contrast CT datasets.
CONCLUSION: A body tissue decomposition procedure was developed to automatically segment multiple compartments of the ventral body. The proposed method enables fully automated quantification of three-dimensional (3D) ventral body composition metrics from CT images.
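A minimal sketch of the thresholding stage of such a pipeline: once a ventral cavity mask is available, adipose tissue and muscle are separated by HU ranges inside versus outside the mask, followed by morphological cleanup. The HU windows (-190 to -30 for fat, -29 to 150 for muscle) are commonly used conventions, not necessarily the authors' exact values, and the mask is a toy placeholder.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
ct = rng.normal(0, 120, size=(128, 128))             # stand-in CT slice (HU)
cavity = np.zeros_like(ct, bool)
cavity[32:96, 32:96] = True                          # assumed cavity mask

fat = (ct >= -190) & (ct <= -30)                     # adipose HU window
musc = (ct >= -29) & (ct <= 150)                     # muscle HU window

vat = ndimage.binary_opening(fat & cavity)           # visceral fat
sat = ndimage.binary_opening(fat & ~cavity)          # subcutaneous fat
muscle = ndimage.binary_opening(musc & ~cavity)      # body-wall muscle
print("VAT/SAT/muscle pixels:", vat.sum(), sat.sum(), muscle.sum())
```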
Generating anthropomorphic phantoms using fully unsupervised deformable image registration with convolutional neural networks
Chen, Junyu
Li, Ye
Du, Yong
Frey, Eric C
Med Phys2020Journal Article, cited 0 times
Website
NaF-Prostate
PHANTOM
Image Registration
Medical Image Simulation
Deep convolutional neural network (DCNN)
PURPOSE: Computerized phantoms have been widely used in nuclear medicine imaging for imaging system optimization and validation. Although the existing computerized phantoms can model anatomical variations through organ and phantom scaling, they do not provide a way to fully reproduce the anatomical variations and details seen in humans. In this work, we present a novel registration-based method for creating highly anatomically detailed computerized phantoms. We experimentally show substantially improved image similarity of the generated phantom to a patient image. METHODS: We propose a deep-learning-based unsupervised registration method to generate a highly anatomically detailed computerized phantom by warping an XCAT phantom to a patient computed tomography (CT) scan. We implemented and evaluated the proposed method using the NURBS-based XCAT phantom and a publicly available low-dose CT dataset from TCIA. A rigorous tradeoff analysis between image similarity and deformation regularization was conducted to select the loss function and regularization term for the proposed method. A novel SSIM-based unsupervised objective function was proposed. Finally, ablation studies were conducted to evaluate the performance of the proposed method (using the optimal regularization and loss function) and of current state-of-the-art unsupervised registration methods. RESULTS: The proposed method outperformed the state-of-the-art registration methods, such as SyN and VoxelMorph, by more than 8% in terms of the SSIM and by less than 30% in terms of the MSE. The phantom generated by the proposed method was highly detailed and was almost identical in appearance to a patient image. CONCLUSIONS: A deep-learning-based unsupervised registration method was developed to create anthropomorphic phantoms with anatomy labels that can be used as the basis for modeling organ properties. Experimental results demonstrate the effectiveness of the proposed method. The resulting anthropomorphic phantom is highly realistic. Combined with realistic simulations of the image formation process, the generated phantoms could serve in many applications of medical imaging research.
Reproducibility analysis of multi‐institutional paired expert annotations and radiomic features of the Ivy Glioblastoma Atlas Project (Ivy GAP) dataset
Pati, Sarthak
Verma, Ruchika
Akbari, Hamed
Bilello, Michel
Hill, Virginia B.
Sako, Chiharu
Correa, Ramon
Beig, Niha
Venet, Ludovic
Thakur, Siddhesh
Serai, Prashant
Ha, Sung Min
Blake, Geri D.
Shinohara, Russell Taki
Tiwari, Pallavi
Bakas, Spyridon
Medical Physics2020Journal Article, cited 0 times
IvyGAP
IvyGAP-Radiomics
PURPOSE: The availability of radiographic magnetic resonance imaging (MRI) scans for the Ivy Glioblastoma Atlas Project (Ivy GAP) has opened up opportunities for the development of radiomic markers for prognostic/predictive applications in glioblastoma (GBM). In this work, we address two critical challenges with regard to developing robust radiomic approaches: (a) the lack of reliable segmentation labels for glioblastoma tumor sub-compartments (i.e., enhancing tumor, non-enhancing tumor core, peritumoral edematous/infiltrated tissue) and (b) identifying "reproducible" radiomic features that are robust to segmentation variability across readers/sites.
ACQUISITION AND VALIDATION METHODS: From TCIA's Ivy GAP cohort, we obtained a paired set (n = 31) of expert annotations approved by two board-certified neuroradiologists at the Hospital of the University of Pennsylvania (UPenn) and at Case Western Reserve University (CWRU). For these studies, we performed a reproducibility study that assessed the variability in (a) segmentation labels and (b) radiomic features between these paired annotations. The radiomic variability was assessed on a comprehensive panel of 11 700 radiomic features, including intensity, volumetric, morphologic, histogram-based, and textural parameters, extracted for each of the paired sets of annotations. Our results demonstrated (a) a high level of inter-rater agreement (median DICE ≥ 0.8 for all sub-compartments), and (b) that ≈24% of the extracted radiomic features were robust to annotation variations, as assessed by Spearman's rank correlation coefficient between the paired annotations. These robust features largely belonged to the morphology (describing shape characteristics), intensity (capturing intensity profile statistics), and COLLAGE (capturing heterogeneity in gradient orientations) feature families.
DATA FORMAT AND USAGE NOTES: We make publicly available on TCIA's Analysis Results Directory (https://doi.org/10.7937/9j41-7d44) the complete set of (a) multi-institutional expert annotations for the tumor sub-compartments, (b) 11 700 radiomic features, and (c) the associated reproducibility meta-analysis.
POTENTIAL APPLICATIONS: The annotations and the associated meta-data for Ivy GAP are released with the purpose of enabling researchers toward developing image-based biomarkers for prognostic/predictive applications in GBM.
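The feature-robustness check can be sketched in a few lines: for each radiomic feature, the Spearman rank correlation is computed between the values extracted from the two institutions' annotations, and features above a cutoff are kept as reproducible. The 0.85 cutoff and the random data are assumptions, not values from the study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
upenn = rng.normal(size=(31, 8))                 # 31 cases x 8 features
cwru = upenn + rng.normal(scale=0.3, size=upenn.shape)  # paired annotations

reproducible = [j for j in range(upenn.shape[1])
                if spearmanr(upenn[:, j], cwru[:, j]).correlation > 0.85]
print("reproducible feature indices:", reproducible)
```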
Low‐dose CT image and projection dataset
Moen, Taylor R.
Chen, Baiyu
Holmes, David R.
Duan, Xinhui
Yu, Zhicong
Yu, Lifeng
Leng, Shuai
Fletcher, Joel G.
McCollough, Cynthia H.
Medical Physics2020Journal Article, cited 0 times
LDCT-and-Projection-data
PURPOSE: To describe a large, publicly available dataset comprising computed tomography (CT) projection data from patient exams, both at routine clinical doses and simulated lower doses.
ACQUISITION AND VALIDATION METHODS: The library was developed under local ethics committee approval. Projection and image data from 299 clinically performed patient CT exams were archived for three types of clinical exams: noncontrast head CT scans acquired for acute cognitive or motor deficit, low-dose noncontrast chest scans acquired to screen high-risk patients for pulmonary nodules, and contrast-enhanced CT scans of the abdomen acquired to look for metastatic liver lesions. Scans were performed on CT systems from two different CT manufacturers using routine clinical protocols. Projection data were validated by reconstructing the data using several different reconstruction algorithms and through use of the data in the 2016 Low Dose CT Grand Challenge. Reduced dose projection data were simulated for each scan using a validated noise-insertion method. Radiologists marked location and diagnosis for detected pathologies. Reference truth was obtained from the patient medical record, either from histology or subsequent imaging.
DATA FORMAT AND USAGE NOTES: Projection datasets were converted into the previously developed DICOM-CT-PD format, which is an extended DICOM format created to store CT projections and acquisition geometry in a nonproprietary format. Image data are stored in the standard DICOM image format and clinical data in a spreadsheet. Materials are provided to help investigators use the DICOM-CT-PD files, including a dictionary file, data reader, and user manual. The library is publicly available from The Cancer Imaging Archive (https://doi.org/10.7937/9npb-2637).
POTENTIAL APPLICATIONS: This CT data library will facilitate the development and validation of new CT reconstruction and/or denoising algorithms, including those associated with machine learning or artificial intelligence. The provided clinical information allows evaluation of task-based diagnostic performance.
MAD‐UNet: A deep U‐shaped network combined with an attention mechanism for pancreas segmentation in CT images
Li, Weisheng
Qin, Sheng
Li, Feiyan
Wang, Linhong
Medical Physics2020Journal Article, cited 0 times
Pancreas-CT
PURPOSE: Pancreas segmentation is a difficult task because of the high interpatient variability in the shape, size, and location of the organ, as well as the pancreas's low contrast and small footprint in CT scans. At present, the U-Net model is likely to lead to the problems of intraclass inconsistency and interclass indistinction in pancreas segmentation. To solve these problems, we improved the way the convolution-based biomedical image segmentation model (U-Net) acquires contextual and semantic feature information and propose an improved segmentation model called the multiscale attention dense residual U-shaped network (MAD-UNet).
METHODS: Two aspects are considered in this method. First, we adopted dense residual blocks and a weighted binary cross-entropy to enhance the semantic features and learn the details of the pancreas, which reduces the effects of intraclass inconsistency. Second, we used an attention mechanism and multiscale convolution to enrich the contextual information and suppress learning in unrelated areas, making the model more sensitive to pancreatic boundary information and reducing the impact of interclass indistinction.
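As an illustration of the weighted binary cross-entropy used to emphasize the small pancreatic region, here is a minimal PyTorch sketch; the inverse-frequency weighting is an assumption, not necessarily the paper's exact scheme:

```python
import torch

def weighted_bce(pred, target, eps=1e-6):
    """Binary cross-entropy with the foreground term up-weighted by
    inverse class frequency, a common remedy when the organ occupies a
    tiny fraction of the volume. `pred` holds probabilities in (0, 1)."""
    pred = pred.clamp(eps, 1 - eps)
    pos_frac = target.mean().clamp(min=eps)   # fraction of organ voxels
    w_pos = (1 - pos_frac) / pos_frac         # rarer class -> larger weight
    loss = -(w_pos * target * torch.log(pred)
             + (1 - target) * torch.log(1 - pred))
    return loss.mean()
```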
RESULTS: We evaluated our model using fourfold cross-validation on 82 abdominal enhanced three-dimensional (3D) CT scans from the National Institutes of Health (NIH-82) and 281 3D CT scans from the 2018 MICCAI segmentation decathlon challenge (MSD). The experimental results showed that our method achieved state-of-the-art performance on the two pancreatic datasets. The mean Dice coefficients were 86.10% ± 3.52% and 88.50% ± 3.70%.
CONCLUSIONS: Our model can effectively solve the problems of intraclass inconsistency and interclass indistinction in the segmentation of the pancreas, and it has value in clinical application. Code is available at https://github.com/Mrqins/pancreas-segmentation.
Two‐stage deep learning model for fully automated pancreas segmentation on computed tomography: Comparison with intra‐reader and inter‐reader reliability at full and reduced radiation dose on an external dataset
Panda, Ananya
Korfiatis, Panagiotis
Suman, Garima
Garg, Sushil K.
Polley, Eric C.
Singh, Dhruv P.
Chari, Suresh T.
Goenka, Ajit H.
Medical Physics2021Journal Article, cited 0 times
LDCT-and-Projection-data
Pancreas-CT
PURPOSE: To develop a two-stage three-dimensional (3D) convolutional neural networks (CNNs) for fully automated volumetric segmentation of pancreas on computed tomography (CT) and to further evaluate its performance in the context of intra-reader and inter-reader reliability at full dose and reduced radiation dose CTs on a public dataset.
METHODS: A dataset of 1994 abdominal CT scans (portal venous phase, slice thickness ≤ 3.75 mm, multiple CT vendors) was curated by two radiologists (R1 and R2) to exclude cases with pancreatic pathology, suboptimal image quality, or image artifacts (n = 77). The remaining 1917 CTs were equally allocated between R1 and R2 for volumetric pancreas segmentation [ground truth (GT)]. This internal dataset was randomly divided into training (n = 1380), validation (n = 248), and test (n = 289) sets for the development of a two-stage 3D CNN model based on a modified U-Net architecture for automated volumetric pancreas segmentation. The model's segmentation performance and the differences between model-predicted and GT pancreatic volumes were evaluated on the test set. Subsequently, an external dataset from The Cancer Imaging Archive (TCIA), comprising CT scans acquired at standard radiation dose and the same scans reconstructed at a simulated 25% radiation dose, was curated (n = 41). Volumetric pancreas segmentation was performed on this TCIA dataset by R1 and R2 independently, first on the full-dose and then on the reduced-dose CT images. Intra-reader and inter-reader reliability, the model's segmentation performance, and the reliability between model-predicted pancreatic volumes at full vs reduced dose were measured. Finally, the model's performance was tested on the benchmark National Institutes of Health (NIH)-Pancreas CT (PCT) dataset.
RESULTS: The 3D CNN had a mean (SD) Dice similarity coefficient (DSC) of 0.91 (0.03) and an average Hausdorff distance of 0.15 (0.09) mm on the test set. The model's performance was equivalent between males and females (P = 0.08) and across different CT slice thicknesses (P > 0.05) based on noninferiority statistical testing. There was no difference between model-predicted and GT pancreatic volumes [mean predicted volume 99 cc (31 cc); GT volume 101 cc (33 cc), P = 0.33]. The mean pancreatic volume difference was -2.7 cc (percent difference: -2.4% of GT volume), with excellent correlation between model-predicted and GT volumes [concordance correlation coefficient (CCC) = 0.97]. In the external TCIA dataset, the model had higher reliability than R1 and R2 on full- vs reduced-dose CT scans [model mean (SD) DSC: 0.96 (0.02), CCC = 0.995 vs R1 DSC: 0.83 (0.07), CCC = 0.89, and R2 DSC: 0.87 (0.04), CCC = 0.97]. The DSC and volume concordance correlations for R1 vs R2 (inter-reader reliability) were 0.85 (0.07), CCC = 0.90 on the full-dose and 0.83 (0.07), CCC = 0.96 on the reduced-dose datasets. There was good reliability between the model and R1 at both full and reduced dose [full dose: DSC 0.81 (0.07), CCC = 0.83; reduced dose: DSC 0.81 (0.08), CCC = 0.87]. Likewise, there was good reliability between the model and R2 at both full and reduced dose [full dose: DSC 0.84 (0.05), CCC = 0.89; reduced dose: DSC 0.83 (0.06), CCC = 0.89]. There was no difference between model-predicted and GT pancreatic volumes in the TCIA dataset [mean predicted volume 96 cc (33 cc); GT volume 89 cc (30 cc), P = 0.31]. The model had a mean (SD) DSC of 0.89 (0.04) (range 0.79-0.96) on the NIH-PCT dataset.
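The two statistics this entry leans on, the Dice similarity coefficient and Lin's concordance correlation coefficient, are straightforward to compute; a minimal NumPy sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired
    measurements (e.g., predicted vs ground-truth volumes)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```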
CONCLUSION: A 3D CNN developed on the largest dataset of CTs is accurate for fully automated volumetric pancreas segmentation and is generalizable across a wide range of CT slice thicknesses, radiation dose, and patient gender. This 3D CNN offers a scalable tool to leverage biomarkers from pancreas morphometrics and radiomics for pancreatic diseases including for early pancreatic cancer detection.
OpenKBP: The open‐access knowledge‐based planning grand challenge and dataset
Babier, A.
Zhang, B.
Mahmood, R.
Moore, K. L.
Purdie, T. G.
McNiven, A. L.
Chan, T. C. Y.
Medical Physics2021Journal Article, cited 0 times
Website
Head-Neck Cetuximab
Head-Neck-PET-CT
Head-Neck-CT-Atlas
TCGA-HNSC
Radiation Therapy
Machine Learning
Contouring
Computed Tomography (CT)
PURPOSE: To advance fair and consistent comparisons of dose prediction methods for knowledge-based planning (KBP) in radiation therapy research. METHODS: We hosted OpenKBP, a 2020 AAPM Grand Challenge, and challenged participants to develop the best method for predicting the dose of contoured computed tomography (CT) images. The models were evaluated according to two separate scores: (a) the dose score, which evaluates the full three-dimensional (3D) dose distributions, and (b) the dose-volume histogram (DVH) score, which evaluates a set of DVH metrics. We used these scores to quantify the quality of the models based on their out-of-sample predictions. To develop and test their models, participants were given the data of 340 patients who were treated for head-and-neck cancer with radiation therapy. The data were partitioned into training (n = 200), validation (n = 40), and testing (n = 100) datasets. All participants performed training and validation with the corresponding datasets during the first (validation) phase of the Challenge. In the second (testing) phase, the participants used their model on the testing data to quantify the out-of-sample performance, which was hidden from participants and used to determine the final competition ranking. Participants also responded to a survey to summarize their models. RESULTS: The Challenge attracted 195 participants from 28 countries, and 73 of those participants formed 44 teams in the validation phase, which received a total of 1750 submissions. The testing phase garnered submissions from 28 of those teams, representing 28 unique prediction methods. On average, over the course of the validation phase, participants improved the dose and DVH scores of their models by factors of 2.7 and 5.7, respectively. In the testing phase, one model achieved the best dose score (2.429) and DVH score (1.478), both significantly better than the dose score (2.564) and DVH score (1.529) achieved by the runner-up models. Lastly, many of the top-performing teams reported that they used generalizable techniques (e.g., ensembles) to achieve higher performance than their competitors. CONCLUSION: OpenKBP is the first competition for knowledge-based planning research. The Challenge helped launch the first platform that enables researchers to compare KBP prediction methods fairly and consistently using a large open-source dataset and standardized metrics. OpenKBP has also democratized KBP research by making it accessible to everyone, which should help accelerate the progress of KBP research. The OpenKBP datasets are publicly available to help benchmark future KBP research.
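The official scoring code is distributed with the challenge materials; as a rough illustration only, a dose score in this spirit can be sketched as a mean absolute voxelwise error (the masking convention here is an assumption):

```python
import numpy as np

def dose_score(pred_dose, true_dose, mask=None):
    """Mean absolute dose error over (optionally masked) voxels, in the
    spirit of the OpenKBP dose score; the implementation in the official
    challenge repository is authoritative."""
    if mask is not None:
        pred_dose, true_dose = pred_dose[mask], true_dose[mask]
    return np.abs(pred_dose - true_dose).mean()
```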
Interactive contouring through contextual deep learning
Trimpl, Michael J.
Boukerroui, Djamal
Stride, Eleanor P. J.
Vallis, Katherine A.
Gooding, Mark J.
Medical Physics2021Journal Article, cited 0 times
NSCLC-Radiomics
PURPOSE: To investigate a deep learning approach that enables three-dimensional (3D) segmentation of an arbitrary structure of interest given a user provided two-dimensional (2D) contour for context. Such an approach could decrease delineation times and improve contouring consistency, particularly for anatomical structures for which no automatic segmentation tools exist.
METHODS: A series of deep learning segmentation models using a Recurrent Residual U-Net with attention gates was trained with a successively expanding training set. Contextual information was provided to the models, using a previously contoured slice as an input, in addition to the slice to be contoured. In total, six models were developed, and 19 different anatomical structures were used for training and testing. Each of the models was evaluated for all 19 structures, even if they were excluded from the training set, in order to assess the model's ability to segment unseen structures of interest. Each model's performance was evaluated using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and relative added path length (APL).
RESULTS: The segmentation performance for seen and unseen structures improved when the training set was expanded by adding structures previously excluded from it. A model trained exclusively on heart structures achieved a DSC of 0.33, an HD of 44 mm, and a relative APL of 0.85 when segmenting the spleen, whereas a model trained on a diverse set of structures, still excluding the spleen, achieved a DSC of 0.80, an HD of 13 mm, and a relative APL of 0.35. Iterative prediction performed better than direct prediction when considering unseen structures.
CONCLUSIONS: Training a contextual deep learning model on a diverse set of structures increases the segmentation performance for the structures in the training set, but importantly enables the model to generalize and make predictions even for unseen structures that were not represented in the training set. This shows that user-provided context can be incorporated into deep learning contouring to facilitate semi-automatic segmentation of CT images for any given structure. Such an approach can enable faster de novo contouring in clinical practice.
Multilayer residual sparsifying transform (MARS) model for low‐dose CT image reconstruction
Yang, Xikai
Long, Yong
Ravishankar, Saiprasad
Medical Physics2021Journal Article, cited 0 times
LDCT-and-Projection-data
PURPOSE: Signal models based on sparse representations have received considerable attention in recent years. On the other hand, deep models consisting of a cascade of functional layers, commonly known as deep neural networks, have been highly successful for the task of object classification and have been recently introduced to image reconstruction. In this work, we develop a new image reconstruction approach based on a novel multilayer model learned in an unsupervised manner by combining both sparse representations and deep models. The proposed framework extends the classical sparsifying transform model for images to a Multilayer residual sparsifying transform (MARS) model, wherein the transform domain data are jointly sparsified over layers. We investigate the application of MARS models learned from limited regular-dose images for low-dose CT reconstruction using penalized weighted least squares (PWLS) optimization.
METHODS: We propose new formulations for multilayer transform learning and image reconstruction. We derive an efficient block coordinate descent algorithm to learn the transforms across layers, in an unsupervised manner from limited regular-dose images. The learned model is then incorporated into the low-dose image reconstruction phase.
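To make the layered structure concrete, the following sketch passes patch vectors through a cascade of sparsifying transforms with hard thresholding, feeding each layer's transform-domain residual to the next. The transforms are random placeholders, and the paper's block coordinate descent learning step is omitted:

```python
import numpy as np

def hard_threshold(z, theta):
    """Zero out transform coefficients with magnitude below theta."""
    return z * (np.abs(z) >= theta)

def mars_forward(x, transforms, theta=0.1):
    """Cascade of sparsifying transforms: each layer sparsifies its
    input, and the sparsification residual feeds the next layer.
    Learning the W_l is omitted; random W_l are placeholders."""
    residual = x
    codes = []
    for W in transforms:
        z = W @ residual
        z_sparse = hard_threshold(z, theta)
        codes.append(z_sparse)
        residual = z - z_sparse   # transform-domain residual to next layer
    return codes

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 100))                   # 100 patch vectors
Ws = [rng.standard_normal((64, 64)) for _ in range(3)]
codes = mars_forward(x, Ws)
```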
RESULTS: Low-dose CT experimental results with both the XCAT phantom and Mayo Clinic data show that the MARS model outperforms conventional methods such as filtered back-projection and PWLS methods based on the edge-preserving (EP) regularizer in terms of two numerical metrics (RMSE and SSIM) and noise suppression. Compared with the single-layer learned transform (ST) model, the MARS model performs better in maintaining some subtle details.
CONCLUSIONS: This work presents a novel data-driven regularization framework for CT image reconstruction that exploits learned multilayer or cascaded residual sparsifying transforms. The image model is learned in an unsupervised manner from limited images. Our experimental results demonstrate the promising performance of the proposed multilayer scheme over single-layer learned sparsifying transforms. Learned MARS models also offer better image quality than typical nonadaptive PWLS methods.
A hybrid feature selection‐based approach for brain tumor detection and automatic segmentation on multiparametric magnetic resonance images
Chen, Hao
Ban, Duo
Qi, X. Sharon
Pan, Xiaoying
Qiang, Yongqian
Yang, Qing
Medical Physics2021Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
PURPOSE: To develop a novel method based on feature selection, combining a convolutional neural network (CNN) and ensemble learning (EL), to achieve high accuracy and efficiency of glioma detection and segmentation using multiparametric MRIs.
METHODS: We proposed an evolutionary feature selection-based hybrid approach for glioma detection and segmentation on four MR sequences (T2-FLAIR, T1, T1Gd, and T2). First, we trained a lightweight CNN to detect glioma and mask the suspected region in order to process large batches of MRI images. Second, we employed a differential evolution algorithm to search a feature space, composed of 416 radiomic features extracted from the four MRI sequences and 128 high-order features extracted by the CNN, to generate an optimal feature combination for pixel classification. Finally, we trained an EL classifier using the optimal feature combination to segment the whole tumor (WT) and its subregions, including nonenhancing tumor (NET), peritumoral edema (ED), and enhancing tumor (ET), in the suspected region. Experiments were carried out on 300 glioma patients from the BraTS2019 dataset using fivefold cross-validation; the model was independently validated on the remaining 35 patients from the same database.
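A generic sketch of evolutionary feature selection in this spirit, using SciPy's differential evolution with a thresholded continuous mask and cross-validated accuracy as fitness; the toy data, classifier, and dimensionality stand in for the paper's 544 radiomic/CNN features:

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in for the radiomic + CNN feature matrix.
X, y = make_classification(n_samples=200, n_features=40, random_state=0)

def fitness(mask_cont):
    """DE works on continuous vectors; threshold at 0.5 to get a feature
    subset and score it by cross-validated accuracy (negated, because
    differential_evolution minimizes)."""
    mask = mask_cont > 0.5
    if not mask.any():
        return 1.0  # penalize the empty subset
    clf = RandomForestClassifier(n_estimators=25, random_state=0)
    return -cross_val_score(clf, X[:, mask], y, cv=3).mean()

result = differential_evolution(fitness, bounds=[(0, 1)] * X.shape[1],
                                maxiter=5, popsize=6, seed=0, polish=False)
selected = result.x > 0.5
print(f"{selected.sum()} features selected, CV accuracy {-result.fun:.3f}")
```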
RESULTS: The approach achieved a detection accuracy of 98.8% using the four MRI sequences. The Dice coefficients (and standard deviations) were 0.852 ± 0.057, 0.844 ± 0.046, and 0.799 ± 0.053 for segmentation of WT (NET+ET+ED), tumor core (NET+ET), and ET, respectively. The sensitivities were 0.873 ± 0.074, 0.863 ± 0.072, and 0.852 ± 0.082, and the specificities were 0.994 ± 0.005, 0.994 ± 0.005, and 0.995 ± 0.004 for WT, tumor core, and ET, respectively. The performance and calculation times were compared with state-of-the-art approaches; our approach yielded a better overall performance with an average processing time of 139.5 s per set of four MRI sequences.
CONCLUSIONS: We demonstrated a robust and computationally cost-effective hybrid segmentation approach for glioma and its subregions on multi-sequence MR images. The proposed approach can be used for automated target delineation for glioma patients.
Fully automated segmentation of brain tumor from multiparametric MRI using 3D context deep supervised U‐Net
Lin, Mingquan
Momin, Shadab
Lei, Yang
Wang, Hesheng
Curran, Walter J.
Liu, Tian
Yang, Xiaofeng
Medical Physics2021Journal Article, cited 0 times
BraTS-TCGA-GBM
PURPOSE: Owing to the histologic complexity of brain tumors, their diagnosis requires multiple modalities to obtain valuable structural information so that brain tumor subregions can be properly delineated. In the current clinical workflow, physicians typically perform slice-by-slice delineation of brain tumor subregions, which is a time-consuming process that is also susceptible to intra- and inter-rater variability, possibly leading to misclassification. To deal with this issue, this study aims to develop an automatic segmentation of brain tumors in MR images using deep learning.
METHODS: In this study, we develop a context deep-supervised U-Net to segment brain tumor subregions. A context block that aggregates multiscale contextual information for dense segmentation is proposed. This approach enlarges the effective receptive field of convolutional neural networks, which, in turn, improves the segmentation accuracy of brain tumor subregions. We performed fivefold cross-validation on the Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. The BraTS 2020 testing datasets were obtained via the BraTS online website as a hold-out test. For BraTS, the evaluation system divides the tumor into three regions: whole tumor (WT), tumor core (TC), and enhancing tumor (ET). The performance of our proposed method was compared against two state-of-the-art CNNs in terms of segmentation accuracy via the Dice similarity coefficient (DSC) and Hausdorff distance (HD). The tumor volumes generated by our proposed method were compared with manually contoured volumes via Bland-Altman plots and Pearson analysis.
RESULTS: The proposed method achieved segmentation results with a DSC of 0.923 ± 0.047, 0.893 ± 0.176, and 0.846 ± 0.165 and a 95% Hausdorff distance (HD95) of 3.946 ± 7.041, 3.981 ± 6.670, and 10.128 ± 51.136 mm on WT, TC, and ET, respectively. Experimental results demonstrate that our method achieved segmentation accuracies comparable to, or significantly (p < 0.05) better than, those of the two state-of-the-art CNNs. Pearson correlation analysis showed a high positive correlation between the tumor volumes generated by the proposed method and the manual contours.
CONCLUSION: The overall qualitative and quantitative results of this work demonstrate the potential of translating the proposed technique into clinical practice for segmenting brain tumor subregions and further facilitating the brain tumor radiotherapy workflow.
Using neural networks to extend cropped medical images for deformable registration among images with differing scan extents
McKenzie, E. M.
Tong, N.
Ruan, D.
Cao, M.
Chin, R. K.
Sheng, K.
Med Phys2021Journal Article, cited 1 times
Website
Algorithm Development
HEAD
*Image Processing, Computer-Assisted
HEADNECK
*Neural Networks, Computer
Deep learning
Image Registration
PURPOSE: Missing or discrepant imaging volume is a common challenge in deformable image registration (DIR). To minimize the adverse impact, we train a neural network to synthesize cropped portions of head-and-neck CTs and then test its use in DIR. METHODS: Using a training dataset of 409 head-and-neck CTs, we trained a generative adversarial network to take in a cropped 3D image and output an image with synthesized anatomy in the cropped region. The network used a 3D U-Net generator along with Visual Geometry Group (VGG) deep feature losses. To test our technique, for each of the 53 test volumes, we used Elastix to deformably register combinations of a randomly cropped, full, and synthetically full volume to a single cropped, full, and synthetically full target volume. We additionally tested our method's robustness to crop extent by progressively increasing the amount of cropping, synthesizing the missing anatomy using our network, and then performing the same registration combinations. Registration performance was measured using the 95% Hausdorff distance across 16 contours. RESULTS: We successfully trained a network to synthesize missing anatomy in superiorly and inferiorly cropped images. The network can estimate large regions of an incomplete image, far from the cropping boundary. Registration using our estimated full images was not significantly different from registration using the original full images. The average contour matching error for full-image registration was 9.9 mm, whereas our method gave 11.6, 12.1, and 13.6 mm for synthesized-to-full, full-to-synthesized, and synthesized-to-synthesized registrations, respectively. In comparison, registration using the cropped images had errors of 31.7 mm and higher. Plotting the registered image contour error as a function of initial preregistered error shows that our method is robust to registration difficulty. Synthesized-to-full registration was statistically independent of cropping extent up to 18.7 cm of superior cropping. Synthesized-to-synthesized registration was nearly independent, with only -0.04 mm of change in average contour error for every additional millimeter of cropping. CONCLUSIONS: Differing or inadequate scan extent is a major cause of DIR inaccuracies. We address this challenge by training a neural network to complete cropped 3D images. We show that with image completion, this source of DIR inaccuracy is eliminated, and the method is robust to varying crop extent.
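The VGG deep feature loss mentioned in the methods is a standard perceptual loss; a minimal PyTorch sketch (the layer depth and the single-channel-to-RGB handling are assumptions):

```python
import torch
import torchvision.models as models

class VGGFeatureLoss(torch.nn.Module):
    """Perceptual loss in the spirit of VGG deep feature losses: compare
    activations of an ImageNet-pretrained VGG16 at a chosen depth.
    Requires torchvision >= 0.13 for the weight enum."""
    def __init__(self, depth=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features[:depth].eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, pred, target):
        # CT slices are single-channel; tile to three channels for VGG.
        pred3, target3 = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        return torch.nn.functional.mse_loss(self.features(pred3),
                                            self.features(target3))
```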
An effective deep network for automatic segmentation of complex lung tumors in CT images
Wang, B.
Chen, K.
Tian, X.
Yang, Y.
Zhang, X.
Med Phys2021Journal Article, cited 0 times
Website
RIDER NEURO MRI
LIDC-IDRI
Computed Tomography (CT)
Segmentation
Semantic features
Deep Learning
Algorithm Development
PURPOSE: Accurate segmentation of complex tumors in lung computed tomography (CT) images is essential to improve the effectiveness and safety of lung cancer treatment. However, heterogeneity, blurred boundaries, and large-area adhesion to tissues with similar gray-scale features make the segmentation of complex tumors difficult. METHODS: This study proposes an effective deep network for the automatic segmentation of complex lung tumors (CLT-Net). The network architecture uses an encoder-decoder model that combines long and short skip connections and a global attention unit to identify target regions using multiscale semantic information. A boundary-aware loss function integrating Tversky loss and a boundary loss based on the level-set calculation is designed to improve the network's ability to perceive the boundary positions of difficult-to-segment (DTS) tumors. We use a dynamic weighting strategy to balance the contributions of the two parts of the loss function. RESULTS: The proposed method was verified on a dataset consisting of 502 lung CT images containing DTS tumors. The experiments show that the Dice similarity coefficient and Hausdorff distance of the proposed method are improved by 13.2% and 8.5% on average, respectively, compared with state-of-the-art segmentation models. Furthermore, we selected three additional medical image datasets with different modalities to evaluate the proposed model. Compared with mainstream architectures, the Dice similarity coefficient is also improved to a certain extent, which demonstrates the effectiveness of our method for segmenting medical images. CONCLUSIONS: Quantitative and qualitative results show that our method outperforms current mainstream lung tumor segmentation networks in terms of Dice similarity coefficient and Hausdorff distance. Note that the proposed method is not limited to the segmentation of complex lung tumors but also performs well on medical image segmentation in different modalities.
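The Tversky component of the boundary-aware loss can be sketched compactly; the alpha/beta values below are illustrative, and the level-set boundary term and dynamic weighting are omitted:

```python
import torch

def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-6):
    """Tversky loss: here alpha weights false negatives and beta weights
    false positives, so alpha > beta biases the model toward recall.
    `pred` holds foreground probabilities; values are illustrative."""
    tp = (pred * target).sum()
    fn = ((1 - pred) * target).sum()
    fp = (pred * (1 - target)).sum()
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return 1 - ti
```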
Deformation driven Seq2Seq longitudinal tumor and organs‐at‐risk prediction for radiotherapy
Lee, Donghoon
Alam, Sadegh R.
Jiang, Jue
Zhang, Pengpeng
Nadeem, Saad
Hu, Yu‐chi
Medical Physics2021Journal Article, cited 0 times
HNSCC-3DCT-RT
PURPOSE: Radiotherapy presents unique challenges and clinical requirements for longitudinal tumor and organ-at-risk (OAR) prediction during treatment. The challenges include tumor inflammation/edema and radiation-induced changes in organ geometry, whereas the clinical requirements demand flexibility in input/output sequence timepoints to update the predictions on a rolling basis and the grounding of all predictions in relation to the pre-treatment imaging information for response and toxicity assessment in adaptive radiotherapy.
METHODS: To deal with the aforementioned challenges and to comply with the clinical requirements, we present a novel 3D sequence-to-sequence model based on Convolution Long Short-Term Memory (ConvLSTM) that makes use of a series of deformation vector fields (DVFs) between individual timepoints and reference pre-treatment/planning CTs to predict future anatomical deformations and changes in gross tumor volume as well as critical OARs. High-quality DVF training data are created by employing hyper-parameter optimization on a subset of the training data, using the Dice coefficient and mutual information as metrics. We validated our model on two radiotherapy datasets: a publicly available head-and-neck dataset (28 patients with manually contoured pre-, mid-, and post-treatment CTs), and an internal non-small cell lung cancer dataset (63 patients with manually contoured planning CT and six weekly CBCTs).
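Applying a displacement vector field to warp a volume, the basic operation underlying the DVF representation used here, can be sketched with SciPy:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(volume, dvf):
    """Warp a 3D volume with a displacement vector field (DVF).
    `dvf` has shape (3, Z, Y, X) holding voxel displacements; each output
    voxel is sampled at (grid + displacement) by trilinear interpolation."""
    grid = np.indices(volume.shape).astype(float)   # (3, Z, Y, X)
    coords = grid + dvf
    return map_coordinates(volume, coords, order=1, mode="nearest")
```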
RESULTS: The use of the DVF representation and skip connections overcomes the blurring issue of ConvLSTM prediction with the traditional image representation. The mean and standard deviation of the Dice coefficient for predictions of lung GTV at weeks 4, 5, and 6 were 0.83 ± 0.09, 0.82 ± 0.08, and 0.81 ± 0.10, respectively, and for post-treatment ipsilateral and contralateral parotids, were 0.81 ± 0.06 and 0.85 ± 0.02.
CONCLUSION: We presented a novel DVF-based Seq2Seq model for medical images, leveraging the complete 3D imaging information of a relatively large longitudinal clinical dataset, to carry out longitudinal GTV/OAR predictions for anatomical changes in HN and lung radiotherapy patients, which has potential to improve RT outcomes.
Low‐dose CT reconstruction with Noise2Noise network and testing‐time fine‐tuning
Wu, Dufan
Kim, Kyungsang
Li, Quanzheng
Medical Physics2021Journal Article, cited 0 times
LDCT-and-Projection-data
PURPOSE: Deep learning-based image denoising and reconstruction methods demonstrated promising performance on low-dose CT imaging in recent years. However, most existing deep learning-based low-dose CT reconstruction methods require normal-dose images for training. Sometimes such clean images do not exist such as for dynamic CT imaging or very large patients. The purpose of this work is to develop a low-dose CT image reconstruction algorithm based on deep learning which does not need clean images for training.
METHODS: In this paper, we proposed a novel reconstruction algorithm where the image prior is expressed via the Noise2Noise network, whose weights are fine-tuned along with the image during the iterative reconstruction. The Noise2Noise network builds a self-consistency loss by splitting the projection data and mapping the corresponding filtered backprojection (FBP) results to each other with a deep neural network. In addition, the network weights are optimized along with the image to be reconstructed under an alternating optimization scheme. In the proposed method, no clean image is needed for network training, and the testing-time fine-tuning leads to optimization for each reconstruction.
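Conceptually, the projection-splitting self-consistency loss can be sketched as follows; `fbp` is a placeholder for a filtered-backprojection operator, and the alternating image/weight updates of the full algorithm are omitted:

```python
import torch

def noise2noise_loss(net, proj_split_a, proj_split_b, fbp):
    """Self-consistency loss in the spirit of the paper: split the
    projections into two halves, reconstruct each with FBP, and train the
    network to map one noisy reconstruction to the other (and vice versa).
    `fbp` stands in for a differentiable filtered-backprojection operator."""
    img_a, img_b = fbp(proj_split_a), fbp(proj_split_b)
    return (torch.nn.functional.mse_loss(net(img_a), img_b)
            + torch.nn.functional.mse_loss(net(img_b), img_a))
```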
RESULTS: We used the 2016 Low-dose CT Challenge dataset to validate the feasibility of the proposed method. We compared its performance to several existing iterative reconstruction algorithms that do not need clean training data, including total variation, non-local means, convolutional sparse coding, and Noise2Noise denoising. The proposed Noise2Noise reconstruction achieved better RMSE, SSIM, and texture preservation compared to the other methods. The performance is also robust against different noise levels, hyperparameters, and network structures used in the reconstruction. Furthermore, we demonstrated that the proposed method achieved competitive results without any pre-training of the network at all, that is, using randomly initialized network weights during testing. The proposed iterative reconstruction algorithm also converges empirically, with and without network pre-training.
CONCLUSIONS: The proposed Noise2Noise reconstruction method can achieve promising image quality in low-dose CT image reconstruction. The method works both with and without pre-training, and only noisy data are required for pre-training.
Assessment of the global noise algorithm for automatic noise measurement in head CT examinations
Ahmad, M.
Tan, D.
Marisetty, S.
Med Phys2021Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Computed Tomography (CT)
Image processing
quality control
PURPOSE: The global noise (GN) algorithm has been previously introduced as a method for automatic noise measurement in clinical CT images. The accuracy of the GN algorithm has been assessed in abdomen CT examinations, but not in any other body part until now. This work assesses the GN algorithm's accuracy for automatic noise measurement in head CT examinations. METHODS: A publicly available image dataset of 99 head CT examinations was used to evaluate the accuracy of the GN algorithm in comparison to reference noise values. Reference noise values were acquired using a manual noise measurement procedure. The procedure used a consistent instruction protocol and multiple observers to mitigate the influence of intra- and interobserver variation, resulting in precise reference values. Optimal GN algorithm parameter values were determined. The GN algorithm accuracy and the corresponding statistical confidence interval were determined. The GN measurements were compared across the six different scan protocols used in this dataset. The correlation of GN to patient head size was also assessed using a linear regression model, and the CT scanner's X-ray beam quality was inferred from the model fit parameters. RESULTS: Across all head CT examinations in the dataset, the range of reference noise was 2.9-10.2 HU. A precision of ±0.33 HU was achieved in the reference noise measurements. After optimization, the GN algorithm had an RMS error of 0.34 HU, corresponding to a percent RMS error of 6.6%. The GN algorithm had a bias of +3.9%. Statistically significant differences in GN were detected in 11 out of the 15 different pairs of scan protocols. The GN measurements were correlated with head size with a statistically significant regression slope parameter (p < 10⁻⁷). The CT scanner X-ray beam quality estimated from the slope parameter was 3.5 cm water HVL (2.8-4.8 cm 95% CI). CONCLUSION: The GN algorithm was validated for application in head CT examinations. The GN algorithm was accurate in comparison to reference manual measurement, with errors comparable to interobserver variation in manual measurement. The GN algorithm can detect noise differences in examinations performed on different scanner models or using different scan protocols. The trend in GN across patients of different head sizes closely follows that predicted by a physical model of X-ray attenuation.
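A minimal sketch in the spirit of the GN algorithm: form a local standard deviation map, restrict it to a soft-tissue HU band, and report the mode of the resulting histogram. The HU band, kernel size, and bin width below are illustrative; the published algorithm fixes these per body part:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def global_noise(image_hu, lo=0, hi=100, kernel=7):
    """Global-noise-style measurement: local SD map via the identity
    var = E[x^2] - E[x]^2, masked to a soft-tissue HU band, with the
    histogram mode reported as the image's global noise level."""
    mean = uniform_filter(image_hu, kernel)
    mean_sq = uniform_filter(image_hu ** 2, kernel)
    local_sd = np.sqrt(np.clip(mean_sq - mean ** 2, 0, None))
    tissue = (image_hu > lo) & (image_hu < hi)
    hist, edges = np.histogram(local_sd[tissue], bins=np.arange(0, 30, 0.1))
    return edges[np.argmax(hist)]   # left edge of the modal bin
```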
Low‐dose CT denoising via convolutional neural network with an observer loss function
Han, Minah
Shim, Hyunjung
Baek, Jongduk
Medical Physics2021Journal Article, cited 0 times
LDCT-and-Projection-data
PURPOSE: Convolutional neural network (CNN)-based denoising is an effective method for reducing complex computed tomography (CT) noise. However, the image blur induced by denoising processes is a major concern. The main source of image blur is the pixel-level loss (e.g., mean squared error [MSE] and mean absolute error [MAE]) used to train a CNN denoiser. To reduce the image blur, feature-level loss is utilized to train a CNN denoiser. A CNN denoiser trained using visual geometry group (VGG) loss can preserve the small structures, edges, and texture of the image. However, VGG loss, derived from an ImageNet-pretrained image classifier, is not optimal for training a CNN denoiser for CT images. ImageNet contains natural RGB images, so the features extracted by the ImageNet-pretrained model cannot represent the characteristics of CT images that are highly correlated with diagnosis. Furthermore, a CNN denoiser trained with VGG loss causes bias in CT number. Therefore, we propose to use a binary classification network trained using CT images as a feature extractor and newly define the feature-level loss as observer loss.
METHODS: As obtaining labeled CT images for training a classification network is difficult, we create labels by inserting simulated lesions. We conduct two separate classification tasks, signal-known-exactly (SKE) and signal-known-statistically (SKS), and define the corresponding feature-level losses as SKE loss and SKS loss, respectively. We use SKE loss and SKS loss to train the CNN denoiser.
RESULTS: Compared to pixel-level losses, a CNN denoiser trained using observer loss (i.e., SKE loss or SKS loss) is effective in preserving structure, edges, and texture. Observer loss also resolves the bias in CT number, which is a problem with VGG loss. Comparing observer losses for the SKE and SKS tasks, SKS loss yields images with a noise structure more similar to that of the reference images.
CONCLUSIONS: Using observer loss to train a CNN denoiser is effective in preserving structure, edges, and texture in denoised images and prevents CT number bias. In particular, when using SKS loss, denoised images with a noise structure similar to that of the reference images are generated.
CARes‐UNet: Content‐Aware residual UNet for lesion segmentation of COVID‐19 from chest CT images
Xu, Xinhua
Wen, Yuhang
Zhao, Lu
Zhang, Yi
Zhao, Youjun
Tang, Zixuan
Yang, Ziduo
Chen, Calvin Yu‐Chian
Medical Physics2021Journal Article, cited 0 times
Website
CT Images in COVID-19
U-Net
Machine Learning
COVID-19
Image‐based shading correction for narrow‐FOV truncated pelvic CBCT with deep convolutional neural networks and transfer learning
Rossi, Matteo
Belotti, Gabriele
Paganelli, Chiara
Pella, Andrea
Barcellini, Amelia
Cerveri, Pietro
Baroni, Guido
Medical Physics2021Journal Article, cited 0 times
Pelvic-Reference-Data
PURPOSE: Cone beam computed tomography (CBCT) is a standard solution for in-room image guidance in radiation therapy. It is used to evaluate and compensate for anatomopathological changes between the dose delivery plan and the fraction delivery day. CBCT is a fast and versatile solution, but it suffers from drawbacks like low contrast and requires proper calibration to derive density values. Although these limitations are even more prominent with in-room customized CBCT systems, strategies based on deep learning have shown potential in improving image quality. As such, this article presents a method based on a convolutional neural network and a novel two-step supervised training scheme based on the transfer learning paradigm for shading correction in CBCT volumes with narrow field of view (FOV) acquired with an ad hoc in-room system.
METHODS: We designed a U-Net convolutional neural network, trained on axial slices of corresponding CT/CBCT pairs. To improve the generalization capability of the network, we exploited two-stage learning using two distinct datasets. First, the network weights were trained using synthetic CBCT scans generated from a public dataset; then only the deepest layers of the network were trained again with real-world clinical data to fine-tune the weights. Synthetic data were generated according to real data acquisition parameters. The network takes a single grayscale volume as input and outputs the same volume with corrected shading and improved HU values.
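The second training stage, freezing all but the deepest layers before fine-tuning on clinical data, can be sketched in PyTorch; selecting layers by position in `model.children()` is an assumption about the architecture:

```python
import torch

def freeze_for_fine_tuning(model, trainable_depth=2):
    """Second-stage transfer learning: after pretraining on synthetic
    CBCT, freeze all layers except the deepest few and retrain on
    clinical data. Layer selection by position is an assumption."""
    children = list(model.children())
    for layer in children[:-trainable_depth]:
        for p in layer.parameters():
            p.requires_grad = False
    # Optimize only the parameters that remain trainable.
    return torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```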
RESULTS: Evaluation was carried out with leave-one-out cross-validation, computed on 18 unique CT/CBCT pairs from six different patients in a real-world dataset. Comparing original CBCT to CT and improved CBCT to CT, we obtained an average improvement of 6 dB in peak signal-to-noise ratio (PSNR) and +2% in structural similarity index measure (SSIM). The median (interquartile range, IQR) Hounsfield unit (HU) difference between CBCT and CT improved from 161.37 (162.54) HU to 49.41 (66.70) HU. Region of interest (ROI)-based HU differences were narrowed by 75% in the spongy bone (femoral head), 89% in the bladder, 85% for fat, and 83% for muscle. The improvement in contrast-to-noise ratio for these ROIs was about 67%.
CONCLUSIONS: We demonstrated that shading correction yielding CT-compatible data from narrow-FOV CBCTs acquired with a customized in-room system is possible. Moreover, the transfer learning approach proved particularly beneficial for such a shading correction approach.
Lung-CRNet: A convolutional recurrent neural network for lung 4DCT image registration
Lu, J.
Jin, R.
Song, E.
Ma, G.
Wang, M.
Med Phys2021Journal Article, cited 0 times
Website
4D-Lung
Computed Tomography (CT)
Deep Learning
Image Registration
recurrent neural network
PURPOSE: Deformable image registration (DIR) of lung four-dimensional computed tomography (4DCT) plays a vital role in a wide range of clinical applications. Most of the existing deep learning-based lung 4DCT DIR methods focus on pairwise registration, which aims to register two images with large deformation. However, the temporal continuity of deformation fields between phases is ignored. This paper proposes a fast and accurate deep learning-based lung 4DCT DIR approach that leverages the temporal component of 4DCT images. METHODS: We present Lung-CRNet, an end-to-end convolutional recurrent registration neural network for lung 4DCT images, and reformulate 4DCT DIR as a spatiotemporal sequence prediction problem in which the input is a sequence of three-dimensional computed tomography images from the inspiratory phase to the expiratory phase in a respiratory cycle. The first phase in the sequence is selected as the only reference image and the rest as moving images. Multiple convolutional gated recurrent units (ConvGRUs) are stacked to capture the temporal clues between images. The proposed network is trained in an unsupervised way using a spatial transformer layer. During inference, Lung-CRNet is able to yield the respective displacement field for each reference-moving image pair in the input sequence. RESULTS: We have trained the proposed network using a publicly available lung 4DCT dataset and evaluated its performance on the widely used DIR-Lab dataset. The mean and standard deviation of target registration error are 1.56 ± 1.05 mm on the DIR-Lab dataset. The computation time for each forward prediction is less than 1 s on average. CONCLUSIONS: The proposed Lung-CRNet is comparable to the existing state-of-the-art deep learning-based 4DCT DIR methods in both accuracy and speed. Additionally, the architecture of Lung-CRNet can be generalized to suit other groupwise registration tasks that align multiple images simultaneously.
Progressive attention module for segmentation of volumetric medical images
Zhang, Minghui
Pan, Hong
Zhu, Yaping
Gu, Yun
Medical Physics2021Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
PURPOSE: Medical image segmentation is critical for many medical image analysis applications. 3D convolutional neural networks (CNNs) have been widely adopted for the segmentation of volumetric medical images. The recent development of channelwise and spatialwise attention achieves state-of-the-art feature representation performance. However, these attention strategies have not explicitly modeled the interdependencies among slices in 3D medical volumes. In this work, we propose a novel attention module called the progressive attention module (PAM) to explicitly model slicewise importance for 3D medical image analysis.
METHODS: The proposed method is composed of three parts: the Slice Attention (SA) block, the Key-Slice-Selection (KSS) block, and the Channel Attention (CA) block. First, the SA block is a novel attention block that explores the correlation among slices for 3D medical image segmentation; it explicitly reweights the importance of each slice in the 3D scan. Second, the KSS block, cooperating with the SA block, adaptively emphasizes critical slice features while suppressing irrelevant ones, which helps the model focus on the slices with rich structural and contextual information. Finally, the CA block receives the output of KSS as input for further feature recalibration. Our proposed PAM organically combines SA, KSS, and CA, progressively highlighting the key slices that carry rich information for the task while suppressing the irrelevant ones.
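A minimal sketch of slicewise reweighting in the spirit of the SA block (the exact block design in the paper may differ):

```python
import torch
import torch.nn as nn

class SliceAttention(nn.Module):
    """Pool each slice to a scalar descriptor, pass the slice profile
    through a small bottleneck MLP, and rescale the volume by sigmoid
    weights, so informative slices are emphasized."""
    def __init__(self, n_slices, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_slices, n_slices // reduction), nn.ReLU(),
            nn.Linear(n_slices // reduction, n_slices), nn.Sigmoid())

    def forward(self, x):                 # x: (B, C, D, H, W), D = slices
        desc = x.mean(dim=(1, 3, 4))      # (B, D) per-slice descriptor
        w = self.mlp(desc)                # (B, D) slice importance in (0, 1)
        return x * w[:, None, :, None, None]
```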
RESULTS: To demonstrate the effectiveness of PAM, we embed it into 3D CNN architectures and evaluate the segmentation performance on three challenging public datasets: the BraTS 2018, MALC, and HVSMR datasets. We achieve Dice similarity coefficients of 80.34%, 88.98%, and 84.43% on these three datasets, respectively. Experimental results show that the proposed PAM not only consistently boosts the segmentation accuracy of standard 3D CNN methods, but also outperforms other attention mechanisms at slight extra cost.
CONCLUSIONS: We propose a new PAM to identify the most informative slices and recalibrate channelwise feature responses for volumetric medical image segmentation. The proposed method is evaluated on three public datasets, and the results show improvements over other methods. This technique can effectively assist physicians in many medical image analysis tasks. It is also anticipated to be generalizable and transferable to a wider range of medical imaging applications, producing greater value and impact for health care.
Dynamic boundary‐insensitive loss for magnetic resonance medical image segmentation
Qiu, Mingyan
Zhang, Chenxi
Song, Zhijian
Medical Physics2021Journal Article, cited 0 times
ISBI-MR-Prostate-2013
PURPOSE: Deep learning methods have achieved great success in MR medical image segmentation. One challenge in applying deep learning segmentation models to clinical practice is their poor generalization, mainly due to limited labeled training samples, inter-site heterogeneity across datasets, and ambiguous boundary definitions. The objective of this work is to develop a dynamic boundary-insensitive (DBI) loss to address the poor generalization caused by uncertain boundaries.
METHODS: The DBI loss is designed to assign higher penalties to misclassified voxels farther from the boundaries in each training iteration, reducing the sensitivity of the segmentation model to the uncertain boundary. The weighting factor of the DBI loss is adjusted adaptively without any manual setting or tuning. Extensive experiments were conducted to verify the performance of our DBI loss and its variant, DiceDBI, on four heterogeneous prostate MRI datasets for prostate zonal segmentation and whole-prostate segmentation.
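The core idea, penalizing misclassified voxels more the farther they lie from the ground-truth boundary, can be sketched with a distance-transform weight map; this is a generic illustration, not the paper's adaptive weighting:

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def distance_weight_map(gt_mask_np):
    """Weight map that grows with distance from the ground-truth boundary,
    so errors far from the (uncertain) boundary are penalized more than
    those near it. At each voxel exactly one of the two transforms is
    nonzero, so the sum is the distance to the boundary."""
    inside = distance_transform_edt(gt_mask_np)       # foreground voxels
    outside = distance_transform_edt(1 - gt_mask_np)  # background voxels
    return torch.from_numpy(inside + outside).float()

def distance_weighted_ce(pred_prob, gt_mask_np, eps=1e-6):
    """Cross-entropy weighted by distance from the boundary."""
    gt = torch.from_numpy(gt_mask_np).float()
    w = distance_weight_map(gt_mask_np)
    p = pred_prob.clamp(eps, 1 - eps)
    ce = -(gt * torch.log(p) + (1 - gt) * torch.log(1 - p))
    return (w * ce).mean()
```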
RESULTS: Experimental results show that our DBI loss, when combined with Dice loss, outperforms all competing loss functions in Dice similarity coefficient (DSC) and improves segmentation performance across all datasets consistently, especially on unseen datasets and when segmenting small or narrow targets.
CONCLUSIONS: The proposed DiceDBI loss will be valuable for enhancing the generalization performance of segmentation models.
Pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert organ contours
Jordan, P.
Adamson, P. M.
Bhattbhatt, V.
Beriwal, S.
Shen, S.
Radermecker, O.
Bose, S.
Strain, L. S.
Offe, M.
Fraley, D.
Principi, S.
Ye, D. H.
Wang, A. S.
Van Heteren, J.
Vo, N. J.
Schmidt, T. G.
Med Phys2022Journal Article, cited 0 times
Website
Pediatric-CT-SEG
PURPOSE: Organ autosegmentation efforts to date have largely been focused on adult populations, due to limited availability of pediatric training data. Pediatric patients may present additional challenges for organ segmentation. This paper describes a dataset of 359 pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert contours of up to 29 anatomical organ structures to aid in the evaluation and development of autosegmentation algorithms for pediatric CT imaging. ACQUISITION AND VALIDATION METHODS: The dataset collection consists of axial CT images in DICOM format of 180 male and 179 female pediatric chest-abdomen-pelvis or abdomen-pelvis exams acquired from one of three CT scanners at Children's Wisconsin. The datasets represent random pediatric cases based upon routine clinical indications. Subjects ranged in age from 5 days to 16 years, with a mean age of seven years. The CT acquisition, contrast, and reconstruction protocols varied across the scanner models and patients, with specifications available in the DICOM headers. Expert contours were manually labeled for up to 29 organ structures per subject. Not all contours are available for all subjects, due to limited field of view or unreliable contouring caused by high noise. DATA FORMAT AND USAGE NOTES: The data are available on TCIA (https://www.cancerimagingarchive.net/) under the collection Pediatric-CT-SEG. The axial CT image slices for each subject are available in DICOM format. The expert contours are stored in a single DICOM RTSTRUCT file for each subject. The contours are named as listed in Table 2. POTENTIAL APPLICATIONS: This dataset will enable the evaluation and development of organ autosegmentation algorithms for pediatric populations, which exhibit variations in organ shape and size across age. Automated organ segmentation from CT images has numerous applications including radiation therapy, diagnostic tasks, surgical planning, and patient-specific organ dose estimation.
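Since each subject's expert contours ship as a single DICOM RTSTRUCT file, they can be listed and read with pydicom; a minimal sketch (the file name is hypothetical):

```python
import pydicom

# List organ names and pull one contour from a subject's RTSTRUCT file.
rt = pydicom.dcmread("RTSTRUCT.dcm")

names = {roi.ROINumber: roi.ROIName for roi in rt.StructureSetROISequence}
print(sorted(names.values()))          # the contoured organ structures

first_roi = rt.ROIContourSequence[0]
contour = first_roi.ContourSequence[0]
points = contour.ContourData           # flat [x1, y1, z1, x2, y2, z2, ...]
xyz = [points[i:i + 3] for i in range(0, len(points), 3)]
```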
Technical note: Evaluation of a V‐Net autosegmentation algorithm for pediatric CT scans: Performance, generalizability, and application to patient‐specific CT dosimetry
Adamson, Philip M.
Bhattbhatt, Vrunda
Principi, Sara
Beriwal, Surabhi
Strain, Linda S.
Offe, Michael
Wang, Adam S.
Vo, Nghia‐Jack
Schmidt, Taly Gilat
Jordan, Petr
Medical Physics2022Journal Article, cited 0 times
Pediatric-CT-SEG
PURPOSE: This study developed and evaluated a fully convolutional network (FCN) for pediatric CT organ segmentation and investigated the generalizability of the FCN across image heterogeneities such as CT scanner model protocols and patient age. We also evaluated the autosegmentation models as part of a software tool for patient-specific CT dose estimation.
METHODS: A collection of 359 pediatric CT datasets with expert organ contours were used for model development and evaluation. Autosegmentation models were trained for each organ using a modified FCN 3D V-Net. An independent test set of 60 patients was withheld for testing. To evaluate the impact of CT scanner model protocol and patient age heterogeneities, separate models were trained using a subset of scanner model protocols and pediatric age groups. Train and test sets were split to answer questions about the generalizability of pediatric FCN autosegmentation models to unseen age groups and scanner model protocols, as well as the merit of scanner model protocol or age-group-specific models. Finally, the organ contours resulting from the autosegmentation models were applied to patient-specific dose maps to evaluate the impact of segmentation errors on organ dose estimation.
RESULTS: Results demonstrate that the autosegmentation models generalize to CT scanner acquisition and reconstruction methods which were not present in the training dataset. While models are not equally generalizable across age groups, age-group-specific models do not hold any advantage over combining heterogeneous age groups into a single training set. Dice similarity coefficient (DSC) and mean surface distance results are presented for 19 organ structures, for example, median DSC of 0.52 (duodenum), 0.74 (pancreas), 0.92 (stomach), and 0.96 (heart). The FCN models achieve a mean dose error within 5% of expert segmentations for all 19 organs except for the spinal canal, where the mean error was 6.31%.
CONCLUSIONS: Overall, these results are promising for the adoption of FCN autosegmentation models for pediatric CT, including applications for patient-specific CT dose estimation.
Anatomically and physiologically informed computational model of hepatic contrast perfusion for virtual imaging trials
Sauer, Thomas J.
Abadi, Ehsan
Segars, Paul
Samei, Ehsan
Medical Physics2022Journal Article, cited 0 times
TCGA-LIHC
PURPOSE: Virtual (in silico) imaging trials (VITs), involving computerized phantoms and models of the imaging process, provide a modern alternative to clinical imaging trials. VITs are faster, safer, and enable otherwise-impossible investigations. Current phantoms used in VITs are limited in their ability to model functional behavior, such as contrast perfusion, which is an important determinant of dose and image quality in CT imaging. In our prior work with the XCAT computational phantoms, we determined and modeled inter-organ (organ-to-organ) intravenous contrast concentration as a function of time from injection. However, intra-organ concentration (the heterogeneous distribution within a given organ) was not pursued. In this work we extend our methods to model intra-organ concentration within the XCAT phantom, with a specific focus on the liver.
METHODS: Intra-organ contrast perfusion depends on the organ's vessel network. We modeled the intricate vascular structures of the liver, informed by empirical and theoretical observations of anatomy and physiology. The developed vessel generation algorithm modeled a dual-input, single-output vascular network as a series of bifurcating surfaces that optimally deliver flow within the bounding surface of a given XCAT liver. Using this network, contrast perfusion was simulated within voxelized versions of the phantom using knowledge of the blood velocities in each vascular structure, the vessel diameters and lengths, and the time since the contrast entered the hepatic artery. The utility of the enhanced phantom was demonstrated through a simulation study in which the phantom was voxelized prior to CT simulation, with the relevant liver vasculature prepared to represent blood and iodinated contrast media. The spatial extent of the blood-contrast mixture was compared to clinical data.
RESULTS: The vascular structures of the liver were generated with sizes and orientations that minimized the energy expenditure required to maintain blood flow. Intravenous contrast was simulated as having a known concentration and a known total volume in the liver, as calibrated from time-concentration curves. Measurements of simulated CT ROIs were found to agree with clinically observed values of early arterial phase contrast enhancement of the parenchyma (∼5 HU). Similarly, early enhancement in the hepatic artery was found to agree with the average clinical enhancement (180 HU).
CONCLUSIONS: The computational methods presented here furthered the development of the XCAT phantoms allowing for multi-timepoint contrast perfusion simulations, enabling more anthropomorphic virtual clinical trials intended for optimization of current clinical imaging technologies and applications.
Feature fusion Siamese network for breast cancer detection comparing current and prior mammograms
Bai, J.
Jin, A.
Wang, T.
Yang, C.
Nabavi, S.
Med Phys2022Journal Article, cited 0 times
CBIS-DDSM
CMMD
BCS-DBT
BREAST
Automatic detection
Artificial Intelligence
*Breast Neoplasms/diagnostic imaging
Female
Humans
Machine Learning
Mammography/methods
Neural Networks, Computer
Siamese
deep learning
prior mammogram
PURPOSE: Automatic detection of very small and nonmass abnormalities from mammogram images has remained challenging. In clinical practice, for each patient, radiologists commonly not only screen the mammogram images obtained during the examination, but also compare them with previous mammogram images to make a clinical decision. To design an artificial intelligence (AI) system that mimics radiologists for better cancer detection, in this work we proposed an end-to-end enhanced Siamese convolutional neural network to detect breast cancer using previous-year and current-year mammogram images. METHODS: The proposed Siamese-based network uses high-resolution mammogram images and fuses features of pairs of previous-year and current-year mammogram images to predict cancer probabilities. The proposed approach is developed based on the concept of one-shot learning, which learns the abnormal differences between current and prior images instead of abnormal objects, and as a result can perform better with small-sample-size datasets. We developed two variants of the proposed network. In the first model, to fuse the features of current and previous images, we designed an enhanced distance learning network that considers not only the overall distance, but also the pixel-wise distances between the features. In the other model, we concatenated the features of current and previous images to fuse them. RESULTS: We compared the performance of the proposed models with those of some baseline models that use current images only (ResNet and VGG) and that use current and prior images (long short-term memory [LSTM] and vanilla Siamese) in terms of accuracy, sensitivity, precision, F1 score, and area under the curve (AUC). Results show that the proposed models outperform the baseline models, and the proposed model with the distance learning network performs the best (accuracy: 0.92, sensitivity: 0.93, precision: 0.91, specificity: 0.91, F1: 0.92, and AUC: 0.95). CONCLUSIONS: Integrating prior mammogram images improves automatic cancer classification, especially for very small and nonmass abnormalities. For classification models that integrate current and prior mammogram images, using an enhanced and effective distance learning network can advance the performance of the models.
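The compare-with-prior idea, a shared encoder whose pixel-wise feature differences feed the classifier, can be sketched in PyTorch; the toy encoder and head below stand in for the paper's networks:

```python
import torch
import torch.nn as nn

class SiameseFusion(nn.Module):
    """One shared encoder embeds the current and prior mammograms, and a
    pixel-wise feature distance map (rather than a single scalar distance)
    feeds the classification head."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, current, prior):
        f_cur = self.encoder(current)          # shared weights: same encoder
        f_pri = self.encoder(prior)
        diff = torch.abs(f_cur - f_pri)        # pixel-wise feature distances
        return torch.sigmoid(self.head(diff))  # cancer probability
```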
HFCF‐Net: A hybrid‐feature cross fusion network for COVID‐19 lesion segmentation from CT volumetric images
Wang, Yanting
Yang, Qingyu
Tian, Lixia
Zhou, Xuezhong
Rekik, Islem
Huang, Huifang
Medical Physics2022Journal Article, cited 0 times
CT Images in COVID-19
BACKGROUND: The coronavirus disease 2019 (COVID-19) spreads rapidly across the globe, seriously threatening the health of people all over the world. To reduce the diagnostic pressure of front-line doctors, an accurate and automatic lesion segmentation method is highly desirable in clinic practice.
PURPOSE: Many proposed two-dimensional (2D) methods for slice-based lesion segmentation cannot take full advantage of the spatial information in the three-dimensional (3D) volume data, resulting in limited segmentation performance. Three-dimensional methods can utilize the spatial information but suffer from long training times and slow convergence. To solve these problems, we propose an end-to-end hybrid-feature cross fusion network (HFCF-Net) to fuse the 2D and 3D features at three scales for the accurate segmentation of COVID-19 lesions.
METHODS: The proposed HFCF-Net incorporates 2D and 3D subnets to extract features within and between slices effectively. Then the cross fusion module is designed to bridge 2D and 3D decoders at the same scale to fuse both types of features. The module consists of three cross fusion blocks, each of which contains a prior fusion path and a context fusion path to jointly learn better lesion representations. The former aims to explicitly provide the 3D subnet with lesion-related prior knowledge, and the latter utilizes the 3D context information as the attention guidance of the 2D subnet, which promotes the precise segmentation of the lesion regions. Furthermore, we explore an imbalance-robust adaptive learning loss function that includes image-level loss and pixel-level loss to tackle the problems caused by the apparent imbalance between the proportions of the lesion and non-lesion voxels, providing a learning strategy to dynamically adjust the learning focus between 2D and 3D branches during the training process for effective supervision.
RESULTS: Extensive experiments conducted on a publicly available dataset demonstrate that the proposed segmentation network significantly outperforms some state-of-the-art methods for COVID-19 lesion segmentation, yielding a Dice similarity coefficient of 74.85%. The visual comparison of segmentation performance also proves the superiority of the proposed network in segmenting different-sized lesions.
CONCLUSIONS: In this paper, we propose a novel HFCF-Net for rapid and accurate COVID-19 lesion segmentation from chest computed tomography volume data. It innovatively fuses hybrid features in a cross manner for lesion segmentation, aiming to utilize the advantages of 2D and 3D subnets to complement each other for enhancing the segmentation performance. Benefitting from the cross fusion mechanism, the proposed HFCF-Net can segment the lesions more accurately with the knowledge acquired from both subnets.
Limited parameter denoising for low-dose X-ray computed tomography using deep reinforcement learning
Patwari, M.
Gutjahr, R.
Raupach, R.
Maier, A.
Med Phys2022Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Computed Tomography (CT)
Image denoising
Convolutional Neural Network (CNN)
BACKGROUND: Deep learning has successfully solved several problems in the field of medical imaging, and it has been applied to the CT denoising problem with success. However, the use of deep learning requires large amounts of data to train deep convolutional neural networks (CNNs). Moreover, due to the large parameter count, such deep CNNs may cause unexpected results. PURPOSE: In this study, we introduce a novel CT denoising framework, which has interpretable behavior and provides useful results with limited data. METHODS: We employ bilateral filtering in both the projection and volume domains to remove noise. To account for nonstationary noise, we tune the sigma parameters for every projection view and every volume pixel. The tuning is carried out by two deep CNNs. Due to the impracticality of labeling, the two CNNs are trained via a Deep-Q reinforcement learning task. The reward for the task is generated by a custom reward function represented by a neural network. Our experiments were carried out on abdominal scans from the Mayo Clinic dataset in The Cancer Imaging Archive (TCIA) and the American Association of Physicists in Medicine (AAPM) Low Dose CT Grand Challenge. RESULTS: Our denoising framework has excellent denoising performance, increasing the peak signal-to-noise ratio (PSNR) from 28.53 to 28.93 and the structural similarity index (SSIM) from 0.8952 to 0.9204. We outperform several state-of-the-art deep CNNs, which have orders of magnitude more parameters (p-value [PSNR] = 0.000, p-value [SSIM] = 0.000). Our method does not introduce any blurring, which is introduced by mean squared error (MSE) loss-based methods, or any deep learning artifacts, which are introduced by Wasserstein generative adversarial network (WGAN)-based models. Our ablation studies show that parameter tuning and using our reward network yield the best possible results. CONCLUSIONS: We present a novel CT denoising framework, which focuses on interpretability to deliver good denoising performance, especially with limited data. Our method outperforms state-of-the-art deep neural networks. Future work will be focused on accelerating our method and generalizing it to different geometries and body parts.
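The central mechanism, a bilateral filter whose strength is tuned per location, can be sketched independently of the reinforcement learning machinery. The following brute-force NumPy version is a minimal illustration in which sigma_i_map stands in for the per-pixel intensity sigmas that the paper's trained agents would supply; the window radius and all values are assumptions:

```python
import numpy as np

def bilateral_filter(img, sigma_s, sigma_i_map, radius=3):
    """Bilateral filter with a spatially varying intensity sigma: each
    output pixel is a weighted mean of its window, weighted by both
    spatial proximity and intensity similarity."""
    out = np.zeros_like(img, dtype=float)
    pad = np.pad(img.astype(float), radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            window = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            rng = np.exp(-(window - img[i, j])**2
                         / (2 * sigma_i_map[i, j]**2))
            w = spatial * rng
            out[i, j] = (w * window).sum() / w.sum()
    return out

# toy usage: uniform spatial sigma, per-pixel intensity sigma map
img = np.random.rand(64, 64)
den = bilateral_filter(img, sigma_s=2.0,
                       sigma_i_map=np.full((64, 64), 0.1))
```

Keeping the denoiser in this closed form is what gives the framework its interpretability: the networks only choose filter parameters, never pixel values.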
Distributed and scalable optimization for robust proton treatment planning
Fu, Anqi
Taasti, Vicki T.
Zarepisheh, Masoud
Medical Physics2022Journal Article, cited 0 times
HNSCC-3DCT-RT
BACKGROUND: The importance of robust proton treatment planning to mitigate the impact of uncertainty is well understood. However, its computational cost grows with the number of uncertainty scenarios, prolonging the treatment planning process.
PURPOSE: We developed a fast and scalable distributed optimization platform that parallelizes the robust proton treatment plan computation over the uncertainty scenarios.
METHODS: We modeled the robust proton treatment planning problem as a weighted least-squares problem. To solve it, we employed an optimization technique called the alternating direction method of multipliers with Barzilai-Borwein step size (ADMM-BB). We reformulated the problem in such a way as to split the main problem into smaller subproblems, one for each proton therapy uncertainty scenario. The subproblems can be solved in parallel, allowing the computational load to be distributed across multiple processors (e.g., CPU threads/cores). We evaluated ADMM-BB on four head-and-neck proton therapy patients, each with 13 scenarios accounting for 3 mm setup and 3.5% range uncertainties. We then compared the performance of ADMM-BB with projected gradient descent (PGD) applied to the same problem.
RESULTS: For each patient, ADMM-BB generated a robust proton treatment plan that satisfied all clinical criteria with comparable or better dosimetric quality than the plan generated by PGD. However, ADMM-BB's total runtime was on average about 6 to 7 times shorter. This speedup increased with the number of scenarios.
CONCLUSIONS: ADMM-BB is a powerful distributed optimization method that leverages parallel processing platforms, such as multicore CPUs, GPUs, and cloud servers, to accelerate the computationally intensive work of robust proton treatment planning. This results in (1) a shorter treatment planning process and (2) the ability to consider more uncertainty scenarios, which improves plan quality.
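The scenario-splitting idea maps naturally onto consensus ADMM. The sketch below illustrates it under simplifying assumptions: an unweighted least-squares objective per scenario, a fixed penalty parameter rho (the paper instead adapts the step size with a Barzilai-Borwein rule), and a sequential loop where the per-scenario x-updates would in practice run in parallel:

```python
import numpy as np

def consensus_admm(A_list, b_list, rho=1.0, iters=100):
    """Consensus ADMM for min_x sum_s ||A_s x - b_s||^2, one subproblem
    per uncertainty scenario; each x-update is independent and could run
    on its own CPU core or GPU."""
    n = A_list[0].shape[1]
    z = np.zeros(n)
    us = [np.zeros(n) for _ in A_list]
    # pre-factor the per-scenario normal equations
    lhs = [2 * A.T @ A + rho * np.eye(n) for A in A_list]
    rhs0 = [2 * A.T @ b for A, b in zip(A_list, b_list)]
    for _ in range(iters):
        # x-updates: embarrassingly parallel across scenarios
        xs = [np.linalg.solve(L, r + rho * (z - u))
              for L, r, u in zip(lhs, rhs0, us)]
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)  # consensus
        us = [u + x - z for u, x in zip(us, xs)]              # dual ascent
    return z

# toy usage: 13 random scenarios of a small least-squares problem
rng = np.random.default_rng(0)
A_list = [rng.normal(size=(50, 10)) for _ in range(13)]
b_list = [rng.normal(size=50) for _ in range(13)]
x = consensus_admm(A_list, b_list)
```

The appeal of this structure is that adding scenarios adds subproblems, not dimensions, so wall-clock time stays roughly flat when more processors are available.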
Development and verification of radiomics framework for computed tomography image segmentation
Gu, Jiabing
Li, Baosheng
Shu, Huazhong
Zhu, Jian
Qiu, Qingtao
Bai, Tong
Medical Physics2022Journal Article, cited 0 times
Website
Credence Cartridge Radiomics Phantom CT Scans
PHANTOM
radiomics
Computed Tomography (CT)
Automated segmentation of five different body tissues on computed tomography using deep learning
Pu, L.
Gezer, N. S.
Ashraf, S. F.
Ocak, I.
Dresser, D. E.
Dhupar, R.
Med Phys2022Journal Article, cited 0 times
Website
NSCLC Radiogenomics
ACRIN-NSCLC-FDG-PET
NLST
C4KC-KiTS
Computed Tomography (CT)
Convolutional Neural Network (CNN)
Segmentation
PET/CT
PURPOSE: To develop and validate a computer tool for automatic and simultaneous segmentation of five body tissues depicted on computed tomography (CT) scans: visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT), intermuscular adipose tissue (IMAT), skeletal muscle (SM), and bone. METHODS: A cohort of 100 CT scans acquired on different subjects was collected from The Cancer Imaging Archive: 50 whole-body positron emission tomography (PET)-CTs, 25 chest CTs, and 25 abdominal CTs. Five different body tissues (i.e., VAT, SAT, IMAT, SM, and bone) were manually annotated. A training-while-annotating strategy was used to improve the annotation efficiency. The 10-fold cross-validation method was used to develop and validate the performance of several convolutional neural networks (CNNs), including UNet, Recurrent Residual UNet (R2Unet), and UNet++. A grid-based three-dimensional patch sampling operation was used to train the CNN models. The CNN models were also trained and tested separately for each body tissue to see if they could achieve a better performance than segmenting them jointly. The paired sample t-test was used to statistically assess the performance differences among the involved CNN models. RESULTS: When segmenting the five body tissues simultaneously, the Dice coefficients ranged from 0.826 to 0.840 for VAT, from 0.901 to 0.908 for SAT, from 0.574 to 0.611 for IMAT, from 0.874 to 0.889 for SM, and from 0.870 to 0.884 for bone, which were significantly higher than the Dice coefficients when segmenting the body tissues separately (p < 0.05), namely, from 0.744 to 0.819 for VAT, from 0.856 to 0.896 for SAT, from 0.433 to 0.590 for IMAT, from 0.838 to 0.871 for SM, and from 0.803 to 0.870 for bone. CONCLUSION: There were no significant differences among the CNN models in segmenting body tissues, but jointly segmenting body tissues achieved a better performance than segmenting them separately.
Automated lung tumor delineation on positron emission tomography/computed tomography via a hybrid regional network
Lei, Y.
Wang, T.
Jeong, J. J.
Janopaul-Naylor, J.
Kesarwala, A. H.
Roper, J.
Tian, S.
Bradley, J. D.
Liu, T.
Higgins, K.
Yang, X.
Med Phys2022Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
Positron Emission Tomography (PET)
Computed Tomography (CT)
PET-CT
Deep learning
LUNG
Radiotherapy
Segmentation
BACKGROUND: Multimodality positron emission tomography/computed tomography (PET/CT) imaging combines the anatomical information of CT with the functional information of PET. In the diagnosis and treatment of many cancers, such as non-small cell lung cancer (NSCLC), PET/CT imaging allows more accurate delineation of tumor or involved lymph nodes for radiation planning. PURPOSE: In this paper, we propose a hybrid regional network method of automatically segmenting lung tumors from PET/CT images. METHODS: The hybrid regional network architecture synthesizes the functional and anatomical information from the two image modalities, whereas the mask regional convolutional neural network (R-CNN) and scoring fine-tune the regional location and quality of the output segmentation. This model consists of five major subnetworks, that is, a dual feature representation network (DFRN), a regional proposal network (RPN), a specific tumor-wise R-CNN, a mask-Net, and a score head. Given a PET/CT image as inputs, the DFRN extracts feature maps from the PET and CT images. Then, the RPN and R-CNN work together to localize lung tumors and reduce the image size and feature map size by removing irrelevant regions. The mask-Net is used to segment tumor within a volume-of-interest (VOI) with a score head evaluating the segmentation performed by the mask-Net. Finally, the segmented tumor within the VOI was mapped back to the volumetric coordinate system based on the location information derived via the RPN and R-CNN. We trained, validated, and tested the proposed neural network using 100 PET/CT images of patients with NSCLC. A fivefold cross-validation study was performed. The segmentation was evaluated with two indicators: (1) multiple metrics, including the Dice similarity coefficient, Jacard, 95th percentile Hausdorff distance, mean surface distance (MSD), residual mean square distance, and center-of-mass distance; (2) Bland-Altman analysis and volumetric Pearson correlation analysis. RESULTS: In fivefold cross-validation, this method achieved Dice and MSD of 0.84 +/- 0.15 and 1.38 +/- 2.2 mm, respectively. A new PET/CT can be segmented in 1 s by this model. External validation on The Cancer Imaging Archive dataset (63 PET/CT images) indicates that the proposed model has superior performance compared to other methods. CONCLUSION: The proposed method shows great promise to automatically delineate NSCLC tumors on PET/CT images, thereby allowing for a more streamlined clinical workflow that is faster and reduces physician effort.
Prognostic generalization of multi-level CT-dose fusion dosiomics from primary tumor and lymph node in nasopharyngeal carcinoma
Cai, C.
Lv, W.
Chi, F.
Zhang, B.
Zhu, L.
Yang, G.
Zhao, S.
Zhu, Y.
Han, X.
Dai, Z.
Wang, X.
Lu, L.
Med Phys2022Journal Article, cited 0 times
Website
Head-Neck-PET-CT
HNSCC
Radiomics
Computed Tomography (CT)
dosiomics
multi-level fusion
Segmentation
Algorithm Development
OBJECTIVES: To investigate the prognostic performance of multi-level CT-dose fusion dosiomics at the image, matrix, and feature levels from the gross tumor volume at the nasopharynx and the involved lymph node for nasopharyngeal carcinoma (NPC) patients. MATERIALS AND METHODS: Two hundred and nineteen NPC patients (175 vs. 44 for training vs. internal validation) were used to train the prediction model, and thirty-two NPC patients were used for external validation. We first extracted CT and dose information from the intratumoral nasopharynx (GTV_nx) and lymph node (GTV_nd) regions. The corresponding peritumoral regions (RING_3mm and RING_5mm) were also considered. Thus, the individual and combined intra- and peri-tumoral regions were as follows: GTV_nx, GTV_nd, RING_3mm_nx, RING_3mm_nd, RING_5mm_nx, RING_5mm_nd, GTV_nxnd, RING_3mm_nxnd, RING_5mm_nxnd, GTV+RING_3mm_nxnd, and GTV+RING_5mm_nxnd. For each region, eleven models were built by combining 5 clinical parameters and 127 features from (1) dose images alone; (2-7) fused dose and CT images via wavelet-based fusion (WF) using CT weights of 0.2, 0.4, 0.6, and 0.8, gradient transfer fusion (GTF), and guided filtering-based fusion (GFF); (8) fused matrices (sumMat); (9-10) fused features derived via feature averaging (avgFea) and feature concatenation (conFea); and finally, (11) CT images alone. The C-index and Kaplan-Meier curves with log-rank test were used to assess model performance. RESULTS: The fusion models' performance was better than that of the single CT/dose models on both internal and external validation. Models combining the information from both GTV_nx and GTV_nd regions outperformed the single-region models. For internal validation, the GTV+RING_3mm_nxnd GFF model achieved the highest C-index in both recurrence-free survival (RFS) and metastasis-free survival (MFS) predictions (RFS: 0.822; MFS: 0.786). The highest C-index in the external validation set was achieved by the RING_3mm_nxnd model (RFS: 0.762; MFS: 0.719). The GTV+RING_3mm_nxnd GFF model was able to significantly separate patients into high-risk and low-risk groups, unlike the dose-only or CT-only models. CONCLUSION: The fusion dosiomics model combining the primary tumor, the involved lymph node, and 3 mm peritumoral information outperformed single-modality models for different outcome predictions, which is helpful for clinical decision-making and the development of personalized treatment.
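Of the fusion strategies listed, wavelet-based fusion (WF) with a fixed CT weight is the simplest to illustrate. The sketch below blends the wavelet coefficients of co-registered CT and dose arrays using PyWavelets; the wavelet family, decomposition level, and weight value are assumptions chosen to mirror the 0.2-0.8 weight grid in the abstract:

```python
import numpy as np
import pywt

def wavelet_fusion(ct, dose, ct_weight=0.6, wavelet='db2', level=2):
    """Decompose both images, blend approximation and detail coefficients
    with a fixed CT weight, and reconstruct the fused image."""
    c_ct = pywt.wavedec2(ct, wavelet, level=level)
    c_do = pywt.wavedec2(dose, wavelet, level=level)
    fused = [ct_weight * c_ct[0] + (1 - ct_weight) * c_do[0]]  # approximation
    for dct, ddo in zip(c_ct[1:], c_do[1:]):                   # detail triples
        fused.append(tuple(ct_weight * a + (1 - ct_weight) * b
                           for a, b in zip(dct, ddo)))
    return pywt.waverec2(fused, wavelet)

# toy usage on random arrays standing in for co-registered CT and dose maps
ct, dose = np.random.rand(128, 128), np.random.rand(128, 128)
fused = wavelet_fusion(ct, dose, ct_weight=0.6)
```

Radiomic features would then be extracted from the fused array exactly as they would from a plain CT, which is what makes the eleven model variants directly comparable.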
VTDCE‐Net: A time invariant deep neural network for direct estimation of pharmacokinetic parameters from undersampled DCE MRI data
Rastogi, Aditya
Dutta, Arindam
Yalavarthy, Phaneendra Kumar
Medical Physics2022Journal Article, cited 0 times
QIN Breast DCE-MRI
PURPOSE: To propose a robust time and space invariant deep learning (DL) method to directly estimate the pharmacokinetic/tracer kinetic (PK/TK) parameters from undersampled dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data.
METHODS: DCE-MRI consists of 4D (3D-spatial + temporal) data and has been utilized to estimate 3D (spatial) tracer kinetic maps. Existing DL architectures for this task need retraining for variation in temporal and/or spatial dimensions. This work proposes a DL algorithm that is invariant to training and testing in both temporal and spatial dimensions. The proposed network was based on a 2.5-dimensional Unet architecture, where the encoder consists of a 3D convolutional layer and the decoder consists of a 2D convolutional layer. The proposed VTDCE-Net was evaluated for solving the ill-posed inverse problem of directly estimating TK parameters from undersampled k-t space data of breast cancer patients, and the results were systematically compared with a total variation (TV) regularization based direct parameter estimation scheme. In the breast dataset, the training was performed on patients with 32 time samples, and testing was carried out on patients with 26 and 32 time samples. Translation of the proposed VTDCE-Net to a brain dataset was also carried out to show its generalizability. Undersampling rates (R) of 8×, 12×, and 20× were utilized with PSNR and SSIM as the figures of merit in this evaluation. TK parameter maps estimated from fully sampled data were utilized as ground truth.
RESULTS: Experiments carried out in this work demonstrate that the proposed VTDCE-Net outperforms the TV scheme on both breast and brain datasets across all undersampling rates. For K_trans and V_p maps, the improvement over TV is as high as 2 and 5 dB, respectively, using the proposed VTDCE-Net.
CONCLUSION: The temporal-points-invariant DL network proposed in this work to estimate the TK parameters from DCE-MRI data provides state-of-the-art performance compared with standard image reconstruction methods and is shown to work across all undersampling rates.
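For reference, the tracer kinetic parameters named here (K_trans, v_e, v_p) are those of the standard extended Tofts model, which relates the tissue concentration C_t(t) to the plasma concentration C_p(t):

```latex
C_t(t) = v_p\,C_p(t) + K^{\mathrm{trans}} \int_0^t C_p(\tau)\,
         e^{-k_{ep}(t-\tau)}\, d\tau,
\qquad k_{ep} = K^{\mathrm{trans}} / v_e
```

Direct estimation methods such as the one described here regress these parameters from undersampled k-t data, instead of first reconstructing the image series and then fitting this equation voxel by voxel.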
Technical note: Performance evaluation of volumetric imaging based on motion modeling by principal component analysis
Asano, Suzuka
Oseki, Keishi
Takao, Seishin
Miyazaki, Koichi
Yokokawa, Kohei
Matsuura, Taeko
Taguchi, Hiroshi
Katoh, Norio
Aoyama, Hidefumi
Umegaki, Kikuo
Miyamoto, Naoki
Medical Physics2022Journal Article, cited 0 times
4D-Lung
PURPOSE: To quantitatively evaluate the achievable performance of volumetric imaging based on lung motion modeling by principal component analysis (PCA).
METHODS: In volumetric imaging based on PCA, internal deformation was represented as a linear combination of the eigenvectors derived by PCA of the deformation vector fields evaluated from patient-specific four-dimensional computed tomography (4DCT) datasets. The volumetric image was synthesized by warping the reference CT image with a deformation vector field evaluated using optimal principal component coefficients (PCs). Larger PCs were hypothesized to reproduce deformations larger than those included in the original 4DCT dataset. To evaluate the reproducibility of PCA-reconstructed volumetric images synthesized to be as close to the ground truth as possible, the mean absolute error (MAE), structural similarity index measure (SSIM), and discrepancy of the diaphragm position were evaluated using 22 4DCT datasets from nine patients.
RESULTS: Mean MAE and SSIM values for the PCA-reconstructed volumetric images were approximately 80 HU and 0.88, respectively, regardless of the respiratory phase. In most test cases, including those whose motion range exceeded that of the modeling data, the positional error of the diaphragm was less than 5 mm. The results suggest that large deformations not included in the modeling 4DCT dataset can be reproduced. Furthermore, since the first PC correlated with the displacement of the diaphragm position, the first eigenvector became the dominant factor representing the respiration-associated deformations. However, the other PCs did not necessarily change with the same trend as the first PC, and no correlation was observed between the coefficients. Hence, randomly allocating or sampling these PCs over expanded ranges may reasonably generate an augmented dataset with various deformations.
CONCLUSIONS: Image synthesis accuracy comparable to that reported in previous research was demonstrated using clinical data. These results indicate the potential of PCA-based volumetric imaging for clinical applications.
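The motion model itself is compact: PCA of the flattened deformation vector fields (DVFs) gives a mean field plus eigenvectors, and any choice of principal component coefficients synthesizes a new DVF. A minimal sketch with placeholder array shapes follows; real DVFs would come from deformable registration of the 4DCT phases against the reference phase:

```python
import numpy as np
from sklearn.decomposition import PCA

# toy dimensions: 10 respiratory phases, each DVF flattened to one row
n_phases, n_voxels = 10, 64 * 64 * 32
dvfs = np.random.rand(n_phases, n_voxels * 3)  # stand-in for real DVFs

pca = PCA(n_components=3)
coeffs = pca.fit_transform(dvfs)               # per-phase PC coefficients

def synthesize_dvf(pcs):
    """DVF = mean field + linear combination of PCA eigenvectors."""
    return pca.mean_ + pcs @ pca.components_

# scale the first PC beyond the modeled range to mimic a breath deeper
# than any phase in the original 4DCT dataset
new_dvf = synthesize_dvf(np.array([1.5 * coeffs[:, 0].max(), 0.0, 0.0]))
```

Warping the reference CT with `new_dvf` would then yield the synthesized volume; the finding that the first PC tracks diaphragm displacement is what justifies extrapolating that coefficient in particular.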
TrEnD: A transformer‐based encoder‐decoder model with adaptive patch embedding for mass segmentation in mammograms
Liu, Dongdong
Wu, Bo
Li, Changbo
Sun, Zheng
Zhang, Nan
Medical Physics2023Journal Article, cited 0 times
CBIS-DDSM
BACKGROUND: Breast cancer is one of the most prevalent malignancies diagnosed in women. Mammogram inspection in the search and delineation of breast tumors is an essential prerequisite for a reliable diagnosis. However, analyzing mammograms by radiologists is time-consuming and prone to errors. Therefore, the development of computer-aided diagnostic (CAD) systems to automate the mass segmentation procedure is greatly expected.
PURPOSE: Accurate breast mass segmentation in mammograms remains challenging in CAD systems due to the low contrast, various shapes, and fuzzy boundaries of masses. In this paper, we propose a fully automatic and effective mass segmentation model based on deep learning for improving segmentation performance.
METHODS: We propose an effective transformer-based encoder-decoder model (TrEnD). Firstly, we introduce a lightweight method for adaptive patch embedding (APE) of the transformer, which utilizes superpixels to adaptively adjust the size and position of each patch. Secondly, we introduce a hierarchical transformer-encoder and attention-gated-decoder structure, which is beneficial for progressively suppressing interference feature activations in irrelevant background areas. Thirdly, a dual-branch design is employed to extract and fuse globally coarse and locally fine features in parallel, which could capture the global contextual information and ensure the relevance and integrity of local information. The model is evaluated on two public datasets CBIS-DDSM and INbreast. To further demonstrate the robustness of TrEnD, different cropping strategies are applied to these datasets, termed tight, loose, maximal, and mix-frame. Finally, ablation analysis is performed to assess the individual contribution of each module to the model performance.
RESULTS: The proposed segmentation model provides a high Dice coefficient and Intersection over Union (IoU) of 92.20% and 85.81% on the mix-frame CBIS-DDSM, while 91.83% and 85.29% for the mix-frame INbreast, respectively. The segmentation performance outperforms the current state-of-the-art approaches. By adding the APE and attention-gated module, the Dice and IoU have improved by 6.54% and 10.07%.
CONCLUSION: According to extensive qualitative and quantitative assessments, the proposed network is effective for automatic breast mass segmentation, and has adequate potential to offer technical assistance for subsequent clinical diagnoses.
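The adaptive patch embedding (APE) idea, deriving patch regions from superpixels rather than a fixed grid, can be approximated with off-the-shelf tools. The sketch below uses SLIC superpixels from scikit-image and mean-pools each region into a token; the real APE module's token construction is more elaborate, and the segment count here is an assumption:

```python
import numpy as np
from skimage.segmentation import slic

# toy grayscale image standing in for a mammogram crop
image = np.random.rand(256, 256)

# superpixels adapt patch size/position to image content
labels = slic(image, n_segments=196, channel_axis=None, start_label=0)

# one token per superpixel (mean pooling as a crude embedding)
tokens = np.array([image[labels == k].mean()
                   for k in range(labels.max() + 1)])
```

The motivation is that fixed square patches cut across the fuzzy mass boundaries the abstract describes, whereas superpixel-aligned patches tend to respect them.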
Likelihood‐based bilateral filters for pre‐estimated basis sinograms using photon‐counting CT
Lee, Okkyun
Medical Physics2023Journal Article, cited 0 times
Pancreas-CT
BACKGROUND: Noise amplification in material decomposition is an issue for exploiting photon-counting computed tomography (PCCT). Regularization techniques and neighborhood filters have been widely used, but degraded spatial resolution and bias are concerns.
PURPOSE: This paper proposes likelihood-based bilateral filters that can be applied to pre-estimated basis sinograms to reduce the noise while minimally affecting spatial resolution and accuracy.
METHODS: The proposed method needs system models (e.g., incident spectrum, detector response) to calculate the likelihood. First, it performs maximum likelihood (ML)-based estimation in the projection domain to obtain basis sinograms. The estimated basis sinograms suffer from severe noise but are asymptotically unbiased without degrading spatial resolution. Then it calculates the neighborhood likelihoods for a given measurement at the center pixel using the neighborhood estimates and designs the weights based on the distance of likelihoods. The filter is also analyzed in terms of statistical inference, and two variations are introduced: one requires a significance level instead of the empirical hyperparameter; the other is a measurement-based filter, which can be applied when accurate estimates are given without the system models. The proposed methods were validated by analyzing the local properties of noise and spatial resolution and the global trends of noise and bias using numerical thorax and abdominal phantoms for a two-material decomposition (water and bone). They were compared to conventional neighborhood filters and to model-based iterative reconstruction with an edge-preserving penalty applied in the basis images.
RESULTS: The proposed method showed comparable or superior performance for the local and global properties relative to conventional methods in many cases. For the thorax phantom, the full width at half maximum (FWHM) decreased by -2%-31% (-2 indicates that it increased by 2% compared to the best performance from conventional methods), and the global bias was reduced by 2%-19% compared to other methods for similar noise levels (local: 51% of the ML, global: 49%) in the water basis image. The FWHM decreased by 8%-31%, and the global bias was reduced by 9%-44% for similar noise levels (local: 44% of the ML, global: 36%) in the CT image at 65 keV. For the abdominal phantom, the FWHM decreased by 10%-32%, and the global bias was reduced by 3%-35% compared to other methods for similar noise levels (local: 66% of the ML, global: 67%) in the water basis image. The FWHM decreased by -11%-47%, and the global bias was reduced by 13%-35% for similar noise levels (local: 71% of the ML, global: 70%) in the CT image at 60 keV.
CONCLUSIONS: This paper introduced the likelihood-based bilateral filters as a post-processing method applied to the ML-based estimates of basis sinograms. The proposed filters effectively reduced the noise in the basis images and the synthesized monochromatic CT images. It showed the potential of using likelihood-based filters in the projection domain as a substitute for conventional regularization or filtering methods.
Utilization of an attentive map to preserve anatomical features for training convolutional neural‐network‐based low‐dose CT denoiser
Han, Minah
Shim, Hyunjung
Baek, Jongduk
Medical Physics2023Journal Article, cited 0 times
LDCT-and-Projection-data
BACKGROUND: The purpose of a convolutional neural network (CNN)-based denoiser is to increase the diagnostic accuracy of low-dose computed tomography (LDCT) imaging. To increase diagnostic accuracy, there is a need for a method that reflects the features related to diagnosis during the denoising process.
PURPOSE: To provide a training strategy for LDCT denoisers that relies more on diagnostic task-related features to improve diagnostic accuracy.
METHODS: An attentive map derived from a lesion classifier (i.e., determining lesion-present or not) is created to represent the extent to which each pixel influences the decision by the lesion classifier. This is used as a weight to emphasize important parts of the image. The proposed training method consists of two steps. In the first one, the initial parameters of the CNN denoiser are trained using LDCT and normal-dose CT image pairs via supervised learning. In the second one, the learned parameters are readjusted using the attentive map to restore the fine details of the image.
RESULTS: Structural details and the contrast are better preserved in images generated by using the denoiser trained via the proposed method than in those generated by conventional denoisers. The proposed denoiser also yields higher lesion detectability and localization accuracy than conventional denoisers.
CONCLUSIONS: A denoiser trained using the proposed method preserves the small structures and the contrast in the denoised images better than without it. Specifically, using the attentive map improves the lesion detectability and localization accuracy of the denoiser.
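One simple way to realize an attentive map of this kind is input-gradient saliency from the lesion classifier, used to weight a restoration loss during the second training step. The sketch below is an interpretation under that assumption, not the paper's exact construction; the toy classifier and all sizes are placeholders:

```python
import torch
import torch.nn.functional as F

def attentive_map(classifier, image):
    """Saliency of the lesion classifier's output w.r.t. each input pixel,
    normalized per image, used to emphasize diagnostically relevant regions."""
    image = image.clone().requires_grad_(True)
    classifier(image).sum().backward()
    attn = image.grad.abs()
    return attn / (attn.amax(dim=(-2, -1), keepdim=True) + 1e-8)

def weighted_restoration_loss(denoised, target, attn, alpha=1.0):
    # plain MSE plus an attention-weighted term for the fine-tuning step
    return F.mse_loss(denoised, target) + \
        alpha * (attn * (denoised - target) ** 2).mean()

# toy usage with a stand-in lesion classifier
cls = torch.nn.Sequential(torch.nn.Conv2d(1, 4, 3, padding=1),
                          torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
                          torch.nn.Linear(4, 1))
img = torch.rand(1, 1, 64, 64)
attn = attentive_map(cls, img)
```

The key property is that pixels the classifier relies on incur a larger restoration penalty, steering the denoiser away from smoothing out lesion-relevant detail.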
Deep learning‐based dominant index lesion segmentation for MR‐guided radiation therapy of prostate cancer
Simeth, Josiah
Jiang, Jue
Nosov, Anton
Wibmer, Andreas
Zelefsky, Michael
Tyagi, Neelam
Veeraraghavan, Harini
Medical Physics2023Journal Article, cited 0 times
PROSTATEx
BACKGROUND: Dose escalation radiotherapy enables increased control of prostate cancer (PCa) but requires segmentation of dominant index lesions (DIL). This motivates the development of automated methods for fast, accurate, and consistent segmentation of PCa DIL.
PURPOSE: To construct and validate a model for deep-learning-based automatic segmentation of PCa DIL defined by Gleason score (GS) ≥3+4 from MR images, applied to MR-guided radiation therapy, and to validate the generalizability of the constructed models across scanner and acquisition differences.
METHODS: Five deep-learning networks were evaluated on apparent diffusion coefficient (ADC) MRI from 500 lesions in 365 patients arising from internal training Dataset 1 (156 lesions in 125 patients, 1.5 Tesla GE MR with endorectal coil), testing using Dataset 1 (35 lesions in 26 patients), external ProstateX Dataset 2 (299 lesions in 204 patients, 3 Tesla Siemens MR), and internal inter-rater Dataset 3 (10 lesions in 10 patients, 3 Tesla Philips MR). The five networks include: multiple resolution residually connected network (MRRN) and MRRN regularized in training with deep supervision implemented into the last convolutional block (MRRN-DS), Unet, Unet++, ResUnet, and fast panoptic segmentation (FPSnet), as well as fast panoptic segmentation with smoothed labels (FPSnet-SL). Models were evaluated by volumetric DIL segmentation accuracy using the Dice similarity coefficient (DSC) and the balanced F1 measure of detection accuracy, as a function of lesion aggressiveness and size (Datasets 1 and 2), and accuracy with respect to two raters (on Dataset 3). Upon acceptance for publication, segmentation models will be made available in an open-source GitHub repository.
RESULTS: In general, MRRN-DS segmented tumors more accurately than the other methods on the testing datasets. MRRN-DS significantly outperformed ResUnet in Dataset2 (DSC of 0.54 vs. 0.44, p < 0.001) and Unet++ in Dataset3 (DSC of 0.45, p = 0.04). FPSnet-SL was similarly accurate as MRRN-DS in Dataset2 (p = 0.30), but MRRN-DS significantly outperformed FPSnet and FPSnet-SL in both Dataset1 (0.60 vs. 0.51 [p = 0.01] and 0.54 [p = 0.049], respectively) and Dataset3 (0.45 vs. 0.06 [p = 0.002] and 0.24 [p = 0.004], respectively). Finally, MRRN-DS produced slightly higher agreement with an experienced radiologist than the agreement between the two radiologists in Dataset 3 (DSC of 0.45 vs. 0.41).
CONCLUSIONS: MRRN-DS was generalizable to different MR testing datasets acquired using different scanners. It produced slightly higher agreement with an experienced radiologist than that between two radiologists. Finally, MRRN-DS more accurately segmented aggressive lesions, which are generally candidates for radiative dose ablation.
Clinical capability of modern brain tumor segmentation models
Berkley, Adam
Saueressig, Camillo
Shukla, Utkarsh
Chowdhury, Imran
Munoz‐Gauna, Anthony
Shehu, Olalekan
Singh, Ritambhara
Munbodh, Reshma
Medical Physics2023Journal Article, cited 0 times
QIN-BRAIN-DSC-MRI
Glioma
PURPOSE: State-of-the-art automated segmentation methods achieve exceptionally high performance on the Brain Tumor Segmentation (BraTS) challenge, a dataset of uniformly processed and standardized magnetic resonance images (MRIs) of gliomas. However, a reasonable concern is that these models may not fare well on clinical MRIs that do not belong to the specially curated BraTS dataset. Research using the previous generation of deep learning models indicates significant performance loss on cross-institutional predictions. Here, we evaluate the cross-institutional applicability and generalizability of state-of-the-art deep learning models on new clinical data.
METHODS: We train a state-of-the-art 3D U-Net model on the conventional BraTS dataset comprising low- and high-grade gliomas. We then evaluate the performance of this model for automatic segmentation of brain tumors on in-house clinical data. This dataset contains MRIs of different tumor types, resolutions, and standardization than those found in the BraTS dataset. Ground truth segmentations to validate the automated segmentation of the in-house clinical data were obtained from expert radiation oncologists.
RESULTS: We report average Dice scores of 0.764, 0.648, and 0.61 for the whole tumor, tumor core, and enhancing tumor, respectively, in the clinical MRIs. These means are higher than previously reported numbers on same-institution and cross-institution datasets of different origins using different methods. There is no statistically significant difference when comparing the Dice scores to the inter-annotation variability between two expert clinical radiation oncologists. Although performance on the clinical data is lower than on the BraTS data, these numbers indicate that models trained on the BraTS dataset have impressive segmentation performance on previously unseen images obtained at a separate clinical institution. These images differ in imaging resolution, standardization pipeline, and tumor type from the BraTS data.
CONCLUSIONS: State-of-the-art deep learning models demonstrate promising performance on cross-institutional predictions. They considerably improve on previous models and can transfer knowledge to new types of brain tumors without additional modeling.
Using 3D deep features from CT scans for cancer prognosis based on a video classification model: A multi-dataset feasibility study
Chen, J.
Wee, L.
Dekker, A.
Bermejo, I.
Med Phys2023Journal Article, cited 0 times
Website
NSCLC-Radiomics
OPC-Radiomics
Head-Neck-Radiomics-HN1
RIDER LUNG CT
3D deep neural network
cancer prognosis
deep features
Radiomics
Transfer learning
BACKGROUND: Cancer prognosis before and after treatment is key for patient management and decision making. Handcrafted imaging biomarkers (radiomics) have shown potential in predicting prognosis. PURPOSE: Given the recent progress in deep learning, it is timely and relevant to pose the question: could deep-learning-based 3D imaging features be used as imaging biomarkers and outperform radiomics? METHODS: The effectiveness, reproducibility (test/retest and across modalities), and correlation of deep features with clinical features such as tumor volume and TNM staging were tested in this study. Radiomics was used as the reference image biomarker. For deep feature extraction, we transformed the CT scans into videos and adopted the pre-trained Inflated 3D ConvNet (I3D) video classification network as the architecture. We used four datasets, LUNG 1 (n = 422), LUNG 4 (n = 106), OPC (n = 605), and H&N 1 (n = 89), with 1270 samples from different centers and cancer types (lung and head and neck cancer) to test the predictiveness of deep features, and two additional datasets to assess their reproducibility. RESULTS: The top 100 deep features selected by Support Vector Machine-Recursive Feature Elimination (SVM-RFE) achieved a concordance index (CI) of 0.67 in survival prediction in LUNG 1, 0.87 in LUNG 4, 0.76 in OPC, and 0.87 in H&N 1, while the top 100 radiomics features selected by SVM-RFE achieved CIs of 0.64, 0.77, 0.73, and 0.74, respectively, all statistically significant differences (p < 0.01, Wilcoxon's test). Most selected deep features are not correlated with tumor volume and TNM staging. However, full radiomics features show higher reproducibility than full deep features in a test/retest setting (0.89 vs. 0.62, concordance correlation coefficient). CONCLUSION: The results show that deep features can outperform radiomics while providing views of tumor prognosis different from those of tumor volume and TNM staging. However, deep features suffer from lower reproducibility than radiomic features and lack the interpretability of the latter.
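The SVM-RFE selection step is standard and straightforward to reproduce with scikit-learn. In the sketch below, X and y are random placeholders for the per-patient I3D deep features and a binary outcome surrogate; step=0.1 (drop 10% of the remaining features per iteration) is an assumption:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# placeholder feature matrix (patients x deep features) and labels
X = np.random.rand(200, 1024)
y = np.random.randint(0, 2, size=200)

# linear SVM weights drive the recursive elimination ranking
selector = RFE(SVC(kernel='linear'), n_features_to_select=100, step=0.1)
selector.fit(X, y)
X_top100 = X[:, selector.support_]  # the retained 100-feature subset
```

The retained subset would then feed a survival model scored by the concordance index, as reported above.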
A repository of grade 1 and 2 meningioma MRIs in a public dataset for radiomics reproducibility tests
Vassantachart, April
Cao, Yufeng
Shen, Zhilei
Cheng, Karen
Gribble, Michael
Ye, Jason C.
Zada, Gabriel
Hurth, Kyle
Mathew, Anna
Guzman, Samuel
Yang, Wensha
Medical Physics2023Journal Article, cited 0 times
Meningioma-SEG-CLASS
Radiomics
Magnetic Resonance Imaging (MRI)
Manual classification
PURPOSE: Meningiomas are the most common primary brain tumors in adults, with management varying widely based on World Health Organization (WHO) grade. However, there are limited datasets available for researchers to develop and validate radiomic models. The purpose of our manuscript is to report on the first dataset of meningiomas in The Cancer Imaging Archive (TCIA). ACQUISITION AND VALIDATION METHODS: The dataset consists of pre-operative MRIs from 96 patients with meningiomas who underwent resection from 2010-2019 and includes axial T1post and T2-FLAIR sequences: 55 grade 1 and 41 grade 2. Meningioma grade was confirmed based on the 2016 WHO Bluebook classification guideline by two neuropathologists and one neuropathology fellow. The hyperintense T1post tumor and hyperintense T2-FLAIR regions were manually contoured on both sequences and resampled to an isotropic resolution of 1 × 1 × 1 mm3. The entire dataset was reviewed by a certified medical physicist. DATA FORMAT AND USAGE NOTES: The data were imported into TCIA for storage and can be accessed at https://doi.org/10.7937/0TKV-1A36. The total size of the dataset is 8.8 GB, with 47,519 individual Digital Imaging and Communications in Medicine (DICOM) files consisting of 384 image series and 192 structures. POTENTIAL APPLICATIONS: Grade 1 and 2 meningiomas have different treatment paradigms and are often treated based on radiologic diagnosis alone. Therefore, predicting grade prior to treatment is essential in clinical decision-making. This dataset will allow researchers to create models to auto-differentiate grade 1 and 2 meningiomas as well as evaluate other pathologic features including mitotic index, brain invasion, and atypical features. Limitations of this study are the small sample size and the inclusion of only two MRI sequences. However, there are no other meningioma datasets on TCIA and limited datasets elsewhere, although meningiomas are the most common intracranial tumor in adults.
Transfer learning for auto-segmentation of 17 organs-at-risk in the head and neck: Bridging the gap between institutional and public datasets
Clark, B.
Hardcastle, N.
Johnston, L. A.
Korte, J.
Med Phys2024Journal Article, cited 0 times
Website
HEAD-NECK-RADIOMICS-HN1
Head-Neck-PET-CT
Head-Neck-CT-Atlas
OPC-Radiomics
Algorithm Development
Deep Learning
Image Segmentation
Transfer learning
BACKGROUND: Auto-segmentation of organs-at-risk (OARs) in the head and neck (HN) on computed tomography (CT) images is a time-consuming component of the radiation therapy pipeline that suffers from inter-observer variability. Deep learning (DL) has shown state-of-the-art results in CT auto-segmentation, with larger and more diverse datasets showing better segmentation performance. Institutional CT auto-segmentation datasets have historically been small (n < 50) due to the time required for manual curation of images and anatomical labels. Recently, large public CT auto-segmentation datasets (n > 1000 aggregated) have become available through online repositories such as The Cancer Imaging Archive. Transfer learning is a technique applied when training samples are scarce, but a large dataset from a closely related domain is available. PURPOSE: The purpose of this study was to investigate whether a large public dataset could be used in place of an institutional dataset (n > 500), or to augment performance via transfer learning, when building HN OAR auto-segmentation models for institutional use. METHODS: Auto-segmentation models were trained on a large public dataset (public models) and a smaller institutional dataset (institutional models). The public models were fine-tuned on the institutional dataset using transfer learning (transfer models). We assessed both public model generalizability and transfer model performance by comparison with institutional models. Additionally, the effect of institutional dataset size on both transfer and institutional models was investigated. All DL models used a high-resolution, two-stage architecture based on the popular 3D U-Net. Model performance was evaluated using five geometric measures: the Dice similarity coefficient (DSC), surface DSC, 95th percentile Hausdorff distance, mean surface distance (MSD), and added path length. RESULTS: For a small subset of OARs (left/right optic nerve, spinal cord, left submandibular), the public models performed significantly better (p < 0.05) than, or showed no significant difference to, the institutional models under most of the metrics examined. For the remaining OARs, the public models were inferior to the institutional models, although performance differences were small (DSC ≤ 0.03, MSD < 0.5 mm) for seven OARs (brainstem, left/right lens, left/right parotid, mandible, right submandibular). The transfer models performed significantly better than the institutional models for seven OARs (brainstem, right lens, left/right optic nerve, left/right parotid, spinal cord) with a small margin of improvement (DSC ≤ 0.02, MSD < 0.4 mm). When numbers of institutional training samples were limited, public and transfer models outperformed the institutional models for most OARs (brainstem, left/right lens, left/right optic nerve, left/right parotid, spinal cord, and left/right submandibular). CONCLUSION: Training auto-segmentation models with public data alone was suitable for a small number of OARs. Using only public data incurred a small performance deficit for most other OARs, when compared with institutional data alone, but may be preferable over time-consuming curation of a large institutional dataset. When a large institutional dataset was available, transfer learning with models pretrained on a large public dataset provided a modest performance improvement for several OARs. When numbers of institutional samples were limited, using the public dataset alone, or as a pretrained model, was beneficial for most OARs.
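The transfer-model recipe, initializing from public-data weights and then updating on institutional samples, reduces to a few lines of training code. The stub network, commented-out checkpoint path, learning rate, and toy tensors below are all placeholder assumptions standing in for the paper's two-stage 3D U-Net and data pipeline:

```python
import torch
import torch.nn as nn

# stand-in for the 3D U-Net: 17 OAR classes + background = 18 outputs
model = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 18, 1))
# model.load_state_dict(torch.load("public_pretrained.pt"))  # hypothetical path

# fine-tune all layers at a reduced learning rate
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
loss_fn = nn.CrossEntropyLoss()

ct = torch.randn(1, 1, 32, 64, 64)              # toy institutional CT patch
labels = torch.randint(0, 18, (1, 32, 64, 64))  # toy voxel-wise OAR labels
loss = loss_fn(model(ct), labels)
loss.backward()
optimizer.step()
```

Whether to freeze early layers and how far to lower the learning rate are tuning decisions the study does not prescribe here; the sketch simply shows the warm-start structure.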
A comprehensive lung CT landmark pair dataset for evaluating deformable image registration algorithms
Criscuolo, E. R.
Fu, Y.
Hao, Y.
Zhang, Z.
Yang, D.
Med Phys2024Journal Article, cited 0 times
TCGA-LUAD
TCGA-LUSC
Image Registration
Computed Tomography (CT)
deformable image registration
Motion correction
Algorithm Development
ground truth dataset
lung motion
PURPOSE: Deformable image registration (DIR) is a key enabling technology in many diagnostic and therapeutic tasks, but often does not meet the required robustness and accuracy for supporting clinical tasks. This is in large part due to a lack of high-quality benchmark datasets by which new DIR algorithms can be evaluated. Our team was supported by the National Institute of Biomedical Imaging and Bioengineering to develop DIR benchmark dataset libraries for multiple anatomical sites, comprising large numbers of highly accurate landmark pairs on matching blood vessel bifurcations. Here we introduce our lung CT DIR benchmark dataset library, which was developed to improve upon the number and distribution of landmark pairs in current public lung CT benchmark datasets. ACQUISITION AND VALIDATION METHODS: Thirty CT image pairs were acquired from several publicly available repositories as well as the authors' institution with IRB approval. The data processing workflow included multiple steps: (1) The images were denoised. (2) Lungs, airways, and blood vessels were automatically segmented. (3) Bifurcations were directly detected on the skeleton of the segmented vessel tree. (4) Falsely identified bifurcations were filtered out using manually defined rules. (5) A DIR was used to project landmarks detected on the first image onto the second image of the image pair to form landmark pairs. (6) Landmark pairs were manually verified. This workflow resulted in an average of 1262 landmark pairs per image pair. Estimates of the landmark pair target registration error (TRE) using digital phantoms were 0.4 mm +/- 0.3 mm. DATA FORMAT AND USAGE NOTES: The data are published on Zenodo at https://doi.org/10.5281/zenodo.8200423. Instructions for use can be found at https://github.com/deshanyang/Lung-DIR-QA. POTENTIAL APPLICATIONS: The dataset library generated in this work is the largest of its kind to date and will provide researchers with a new and improved set of ground truth benchmarks for quantitatively validating DIR algorithms within the lung.
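Step (3) of the workflow, detecting bifurcations on the vessel-tree skeleton, has a compact classical implementation: a skeleton voxel with three or more skeleton neighbors is a branch point. A sketch under that assumption follows; the published pipeline additionally applies rule-based filtering and manual verification:

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def find_bifurcations(vessel_mask):
    """Skeletonize a binary vessel mask and flag skeleton points with
    three or more skeleton neighbors as candidate bifurcations. Recent
    scikit-image versions accept 3D masks here (older ones provide
    skeletonize_3d for volumes)."""
    skel = skeletonize(vessel_mask)
    kernel = np.ones((3,) * vessel_mask.ndim)
    kernel[(1,) * vessel_mask.ndim] = 0  # exclude the center voxel
    n_neighbors = ndimage.convolve(skel.astype(int), kernel,
                                   mode='constant')
    return np.argwhere(skel & (n_neighbors >= 3))

# toy 2D usage: a plus-shaped "vessel"; points near the central crossing
# are flagged as candidate bifurcations
mask = np.zeros((64, 64), bool)
mask[32, 10:54] = True
mask[10:54, 32] = True
print(find_bifurcations(mask))
```

Detecting landmarks directly on the skeleton is what allows thousands of well-distributed pairs per case, far beyond what manual annotation alone could produce.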
Automatic vessel attenuation measurement for quality control of contrast-enhanced CT: Validation on the portal vein
McCoy, K.
Marisetty, S.
Tan, D.
Jensen, C. T.
Siewerdsen, J. H.
Peterson, C. B.
Ahmad, M.
Med Phys2024Journal Article, cited 0 times
Website
Pancreas-CT
Computed Tomography (CT)
contrast enhancement
image quality
machine learning
quality control
random forest
BACKGROUND: Adequate image enhancement of organs and blood vessels of interest is an important aspect of image quality in contrast-enhanced computed tomography (CT). There is a need for an objective method for evaluation of vessel contrast that can be automatically and systematically applied to large sets of CT exams. PURPOSE: The purpose of this work was to develop a method to automatically segment the portal vein (PV) and measure its attenuation in Hounsfield units (HU) in contrast-enhanced abdomen CT examinations. METHODS: Input CT images were processed by a vessel-enhancing filter to determine candidate PV segmentations. Multiple machine learning (ML) classifiers were evaluated for classifying a segmentation as corresponding to the PV based on segmentation shape, location, and intensity features. A public data set of 82 contrast-enhanced abdomen CT examinations was used to train the method. An optimal ML classifier was selected by training and tuning on 66 of the 82 exams (80% training split) in the public data set. The method was evaluated in terms of segmentation classification accuracy and PV attenuation measurement accuracy, compared to manually determined ground truth, on a test set of the remaining 16 exams (20% test split) held out from the public data set. The method was further evaluated on a separate, independently collected test set of 21 examinations. RESULTS: The best classifier was found to be a random forest, with a precision of 0.892 on the held-out test set for correctly identifying the PV from among the input candidate segmentations. The mean absolute error of the measured PV attenuation relative to the ground truth manual measurement was 13.4 HU. On the independent test set, the overall precision decreased to 0.684. However, the PV attenuation measurement remained relatively accurate, with a mean absolute error of 15.2 HU. CONCLUSIONS: The method was shown to accurately measure PV attenuation over a large range of attenuation values and was validated on an independently collected dataset. The method did not require time-consuming manual contouring to supervise training. The method may be applied to systematic quality control of contrast-enhanced CT examinations.
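The classification stage is a conventional supervised setup: each candidate segmentation from the vessel-enhancing filter is summarized by shape, location, and intensity features, and a random forest decides whether it is the portal vein. A minimal sketch with random placeholder features follows; the actual feature list and forest settings are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# placeholder feature matrix: one row per candidate segmentation with
# e.g. volume, elongation, centroid coordinates, mean HU, etc.
X = np.random.rand(500, 8)
y = np.random.randint(0, 2, 500)  # 1 = candidate is the portal vein

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
portal_vein_prob = clf.predict_proba(X[:5])[:, 1]
```

Once the winning candidate is identified, the attenuation measurement is simply the mean HU inside that segmentation, which is why classification precision is the metric that matters most.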
Increased robustness in reference region model analysis of DCE MRI using two‐step constrained approaches
Ahmed, Zaki
Levesque, Ives R
Magnetic Resonance in Medicine2016Journal Article, cited 1 times
Website
DCE-MRI
Algorithm development
QIN Breast DCE-MRI
Analysis of dual tree M‐band wavelet transform based features for brain image classification
Ayalapogu, Ratna Raju
Pabboju, Suresh
Ramisetty, Rajeswara Rao
Magnetic Resonance in Medicine2018Journal Article, cited 1 times
Website
REMBRANDT
brain cancer
Pharmacokinetic modeling of dynamic contrast-enhanced MRI using a reference region and input function tail
Ahmed, Z.
Levesque, I. R.
Magn Reson Med2020Journal Article, cited 0 times
Website
TCGA-GBM
Dynamic Contrast-Enhanced (DCE)-MRI
Dynamic contrast-enhanced magnetic resonance imaging
Extended Tofts model (ETM)
PURPOSE: Quantitative analysis of dynamic contrast-enhanced MRI (DCE-MRI) requires an arterial input function (AIF), which is difficult to measure. We propose the reference region and input function tail (RRIFT) approach, which uses a reference tissue and the washout portion of the AIF. METHODS: RRIFT was evaluated in simulations with 100 parameter combinations at various temporal resolutions (5-30 s) and noise levels (sigma = 0.01-0.05 mM). RRIFT was compared against the extended Tofts model (ETM) in 8 studies from patients with glioblastoma multiforme. Two versions of RRIFT were evaluated: one using measured patient-specific AIF tails, and another assuming a literature-based AIF tail. RESULTS: RRIFT estimated the transfer constant Ktrans and interstitial volume ve with median errors within 20% across all simulations. RRIFT was more accurate and precise than the ETM at temporal resolutions slower than 10 s. The percentage error of Ktrans had a median and interquartile range of -9 +/- 45% with the ETM and -2 +/- 17% with RRIFT at a temporal resolution of 30 s under noiseless conditions. RRIFT was in excellent agreement with the ETM in vivo, with concordance correlation coefficients (CCC) of 0.95 for Ktrans, 0.96 for ve, and 0.73 for the plasma volume vp using a measured AIF tail. With the literature-based AIF tail, the CCC was 0.89 for Ktrans, 0.93 for ve, and 0.78 for vp. CONCLUSIONS: Quantitative DCE-MRI analysis using the input function tail and a reference tissue yields absolute kinetic parameters with the RRIFT method. This approach was viable in simulation and in vivo for temporal resolutions as low as 30 s.
Using a deep learning prior for accelerating hyperpolarized (13) C MRSI on synthetic cancer datasets
Wang, Z.
Luo, G.
Li, Y.
Cao, P.
Magn Reson Med2024Journal Article, cited 0 times
Website
UPENN-GBM
Magnetic Resonance Spectroscopic Imaging (MRSI)
Deep Learning
pyruvate
lactate
glutamate
(13C) MRSI
Generative Adversarial Network (GAN)
Singular value decomposition (SVD)
Synthetic images
PURPOSE: We aimed to incorporate a deep learning prior with k-space data fidelity for accelerating hyperpolarized carbon-13 MRSI, demonstrated on synthetic cancer datasets. METHODS: A two-site exchange model, derived from the Bloch equation of MR signal evolution, was first used to simulate training and testing data, that is, synthetic phantom datasets. Five singular maps generated from each simulated dataset were used to train a deep learning prior, which was then employed with the fidelity term to reconstruct the undersampled MRI k-space data. The proposed method was assessed on synthetic human brain tumor images (N = 33), prostate cancer images (N = 72), and mouse tumor images (N = 58) for three undersampling factors and 2.5% additive Gaussian noise. Furthermore, varied levels of Gaussian noise with SDs of 2.5%, 5%, and 10% were added to the synthetic prostate cancer data, and the corresponding reconstruction results were evaluated. RESULTS: For quantitative evaluation, peak SNRs were approximately 32 dB, and the accuracy was generally improved by 5 to 8 dB compared with compressed sensing with L1-norm regularization or total variation regularization. Reasonable normalized RMS errors were obtained. Our method also worked robustly against noise, even on data with a noise SD of 10%. CONCLUSION: The proposed singular value decomposition + iterative deep learning model can be considered a general framework that extends the application of deep learning MRI reconstruction to metabolic imaging. The morphology of tumors and metabolic images could be measured robustly at six-fold acceleration using our method.
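The "singular maps" used as network inputs come from a standard low-rank decomposition of the dynamic data. A minimal sketch, assuming a Casorati-matrix arrangement (voxels by time) and purely illustrative dimensions:

```python
import numpy as np

# toy complex space-time MRSI data reshaped into a Casorati matrix
nx, ny, nt = 32, 32, 40
data = (np.random.rand(nx * ny, nt)
        + 1j * np.random.rand(nx * ny, nt))

# SVD separates spatial components (U) from temporal components (Vh)
U, s, Vh = np.linalg.svd(data, full_matrices=False)

# the five leading spatial components become the "singular maps"
singular_maps = [(s[k] * U[:, k]).reshape(nx, ny) for k in range(5)]
```

Training the prior on these maps rather than on individual frames is what decouples the network from any particular temporal sampling, mirroring the framework's claim of generality.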
An extended reference region model for DCE‐MRI that accounts for plasma volume
Ahmed, Zaki
Levesque, Ives R
NMR in Biomedicine2018Journal Article, cited 0 times
Website
DCE-MRI
TCGA-GBM
reference region model (RRM)
extended reference region model (ERRM)
constrained ERRM (CERRM)
Molecular subtype classification of low‐grade gliomas using magnetic resonance imaging‐based radiomics and machine learning
Lam, Luu Ho Thanh
Thi, Duyen
Diep, Doan Thi Ngoc
Le Nhu Nguyet, Dang
Truong, Quang Dinh
Tri, Tran Thanh
Thanh, Huynh Ngoc
Le, Nguyen Quoc Khanh
NMR in Biomedicine2022Journal Article, cited 0 times
TCGA-LGG
In 2016, the World Health Organization (WHO) updated the glioma classification by incorporating molecular biology parameters, including for low-grade glioma (LGG). In the new scheme, LGGs have three molecular subtypes: isocitrate dehydrogenase (IDH)-mutated 1p/19q-codeleted, IDH-mutated 1p/19q-noncodeleted, and IDH-wild-type 1p/19q-noncodeleted entities. This work proposes a model for predicting LGG molecular subtypes using magnetic resonance imaging (MRI). MR images were segmented and converted into radiomics features, thereby providing predictive information about the brain tumor classification. With 726 raw features obtained from the feature extraction procedure, we developed a hybrid machine-learning-based radiomics model by incorporating a genetic algorithm and an eXtreme Gradient Boosting (XGBoost) classifier to ascertain 12 optimal features for tumor classification. To address class imbalance, the synthetic minority oversampling technique (SMOTE) was applied in our study. The XGBoost algorithm outperformed the other algorithms on the training dataset with an accuracy of 0.885. We then evaluated the XGBoost model on an external validation dataset, achieving an overall accuracy of 0.6905 for the three-subtype classification of LGGs. Our model is among just a few to have resolved the three-subtype LGG classification challenge with high accuracy compared with previous studies performing similar work.
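The resampling-plus-boosting stage of this pipeline can be sketched with imbalanced-learn and xgboost; the arrays below are random placeholders for the 12 selected radiomics features and the three subtype labels, and the genetic-algorithm feature selection step is omitted:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# placeholder radiomics matrix (patients x 12 selected features) and labels
X = np.random.rand(300, 12)
y = np.random.randint(0, 3, size=300)  # 0/1/2 = three molecular subtypes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# oversample minority subtypes on the training split only
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = XGBClassifier(objective='multi:softprob', eval_metric='mlogloss')
clf.fit(X_bal, y_bal)
print('accuracy:', clf.score(X_te, y_te))
```

Applying SMOTE only to the training split, as sketched, avoids leaking synthetic samples into the evaluation data.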
Deep-learning-based super-resolution for accelerating chemical exchange saturation transfer MRI
Pemmasani Prabakaran, R. S.
Park, S. W.
Lai, J. H. C.
Wang, K.
Xu, J.
Chen, Z.
Ilyas, A. O.
Liu, H.
Huang, J.
Chan, K. W. Y.
NMR Biomed2024Journal Article, cited 0 times
Website
LGG-1p19qDeletion
Magnetic Resonance Imaging (MRI)
acquisition time
Transfer learning
amide CEST (amideCEST)
BRAIN
chemical exchange saturation transfer (CEST)
deep-learning-based super-resolution (DLSR)
relayed nuclear Overhauser effect (rNOE)
Chemical exchange saturation transfer (CEST) MRI is a molecular imaging tool that provides physiological information about tissues, making it an invaluable tool for disease diagnosis and guided treatment. Its clinical application requires the acquisition of high-resolution images capable of accurately identifying subtle regional changes in vivo, while simultaneously maintaining a high level of spectral resolution. However, the acquisition of such high-resolution images is time consuming, presenting a challenge for practical implementation in clinical settings. Among several techniques that have been explored to reduce the acquisition time in MRI, deep-learning-based super-resolution (DLSR) is a promising approach to address this problem due to its adaptability to any acquisition sequence and hardware. However, its translation to CEST MRI has been hindered by the lack of the large CEST datasets required for network development. Thus, we aim to develop a DLSR method, named DLSR-CEST, to reduce the acquisition time for CEST MRI by reconstructing high-resolution images from fast low-resolution acquisitions. This is achieved by first pretraining the DLSR-CEST on human brain T1w and T2w images to initialize the weights of the network and then training the network on very small human and mouse brain CEST datasets to fine-tune the weights. Using the trained DLSR-CEST network, the reconstructed CEST source images exhibited improved spatial resolution in both peak signal-to-noise ratio and structural similarity index measure metrics at all downsampling factors (2-8). Moreover, amide CEST and relayed nuclear Overhauser effect maps extrapolated from the DLSR-CEST source images exhibited high spatial resolution and low normalized root mean square error, indicating a negligible loss in Z-spectrum information. Therefore, our DLSR-CEST demonstrated a robust reconstruction of high-resolution CEST source images from fast low-resolution acquisitions, thereby improving the spatial resolution and preserving most Z-spectrum information.
Machine learning models predict the primary sites of head and neck squamous cell carcinoma metastases based on DNA methylation
Cox models with time‐varying covariates and partly‐interval censoring–A maximum penalised likelihood approach
Webb, Annabel
Ma, Jun
Statistics in Medicine2022Journal Article, cited 0 times
Duke-Breast-Cancer-MRI
Algorithm Development
Cox proportional hazard model
Time-varying covariates can be important predictors when model-based predictions are considered. A Cox model that includes time-varying covariates is usually referred to as an extended Cox model. When only right censoring is present in the observed survival times, the conventional partial likelihood method is still applicable for estimating the regression coefficients of an extended Cox model. However, if there are interval-censored survival times, then the partial likelihood method is not directly available unless an imputation, such as midpoint imputation, is used to replace the left- and interval-censored data. However, such imputation methods are well known for causing biases. This paper considers fitting of extended Cox models using the maximum penalised likelihood method, allowing observed survival times to be partly interval-censored, where a penalty function is used to regularise the baseline hazard estimate. We present simulation studies to demonstrate the performance of our proposed method, and illustrate our method with applications to two real datasets from medical research.
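For orientation, the extended Cox model referred to here specifies the hazard at time t through the covariate path x(t), and the penalised approach maximizes a log-likelihood minus a roughness penalty on the baseline hazard h_0; writing the penalty weight as lambda, the general form is:

```latex
h(t \mid x(t)) = h_0(t)\, \exp\{\beta^\top x(t)\},
\qquad
(\hat{\beta}, \hat{h}_0) = \arg\max_{\beta,\; h_0 \ge 0}\;
\ell(\beta, h_0) - \lambda\, J(h_0)
```

Here the log-likelihood is the full likelihood rather than the partial likelihood, which is what lets left-, right-, and interval-censored times contribute their respective likelihood terms, and J is a smoothness functional on the baseline hazard; the specific likelihood and penalty used in the paper are not reproduced here.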
Imaging Biomarker Development for Lower Back Pain Using Machine Learning: How Image Analysis Can Help Back Pain
Gaonkar, B.
Cook, K.
Yoo, B.
Salehi, B.
Macyszyn, L.
Methods Mol Biol2022Journal Article, cited 0 times
Website
LIDC-IDRI
Deep Learning
Degenerative disease
Image segmentation
Machine learning
Spine MRI
State-of-the-art diagnosis of radiculopathy relies on "highly subjective" radiologist interpretation of magnetic resonance imaging of the lower back. Currently, the treatment of lumbar radiculopathy and associated lower back pain lacks coherence due to an absence of reliable, objective diagnostic biomarkers. Using emerging machine learning techniques, the subjectivity of interpretation may be replaced by the objectivity of automated analysis. However, training computer vision methods requires a curated database of imaging data containing anatomical delineations vetted by a team of human experts. In this chapter, we outline our efforts to develop such a database of curated imaging data alongside the required delineations. We detail the processes involved in data acquisition and subsequent annotation. Then we explain how the resulting database can be utilized to develop a machine learning-based objective imaging biomarker. Finally, we present an explanation of how we validate our machine learning-based anatomy delineation algorithms. Ultimately, we hope to allow validated machine learning models to be used to generate objective biomarkers from imaging data, for clinical use to diagnose lumbar radiculopathy and guide associated treatment plans.
Tumor Growth in the Brain: Complexity and Fractality
Tumor growth is a complex process characterized by uncontrolled cell proliferation and invasion of neighboring tissues. The understanding of these phenomena is of vital importance to establish appropriate diagnosis and therapy strategies, and starts with the evaluation of their complexity with suitable descriptors produced by scaling analysis. There has been considerable effort in the evaluation of fractal dimension as a suitable parameter to describe differences between normal and pathological tissues, and it has been used for brain tumor grading with great success. In the present work, several contributions which exploit scaling analysis in the context of brain tumors are reviewed. These include very promising results in tumor segmentation, grading, and therapy monitoring. Emphasis is placed on scaling analysis techniques applicable to multifractal systems, and new descriptors are proposed to advance the understanding of tumor growth dynamics in the brain. These techniques serve as a starting point to develop innovative practical growth models for therapy simulation and optimization, drug delivery, and the evaluation of related neurological disorders.
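Fractal dimension via box counting is straightforward to sketch; a minimal example for a 2D binary mask, where the negative slope of log(count) against log(size) estimates the dimension (box sizes are illustrative):

```python
import numpy as np

def box_count(mask, size):
    """Count boxes of side `size` that contain at least one foreground pixel."""
    h, w = mask.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if mask[i:i + size, j:j + size].any():
                count += 1
    return count

def fractal_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    counts = [box_count(mask, s) for s in sizes]
    # linear fit in log-log space; the negative slope estimates the dimension
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# toy example: a filled disk should give a dimension close to 2
yy, xx = np.mgrid[:256, :256]
disk = (xx - 128) ** 2 + (yy - 128) ** 2 < 80 ** 2
print(fractal_dimension(disk))
```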
Detection of Liver Tumor Using Gradient Vector Flow Algorithm
Baby, Jisha
Rajalakshmi, T.
Snekhalatha, U.
2019Book Section, cited 0 times
LungCT-Diagnosis
A liver tumor, also known as a hepatic tumor, is a type of growth found in or on the liver. Identifying the tumor location can be tedious and error-prone, and requires expert study. This paper presents a technique to segment the liver tumor using the Gradient Vector Flow (GVF) snake algorithm. Because the snake algorithm is sensitive to noise, a Wiener filter is first applied to denoise the images. The GVF snake begins by extending an initial boundary; the GVF forces are then calculated and drive the algorithm to stretch and bend the initial contour towards the region of interest, guided by differences in intensity. The images were classified into tumor and non-tumor categories by an artificial neural network classifier, based on extracted features that showed notable dissimilarity between normal and abnormal images.
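A minimal sketch of the denoise-then-snake pipeline, assuming a grayscale slice; note that scikit-image's Kass-style active_contour stands in here for the GVF snake, which the paper uses but skimage does not implement, and the lesion location is hypothetical:

```python
import numpy as np
from scipy.signal import wiener
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = np.random.rand(256, 256)      # placeholder for a liver CT/MR slice
denoised = wiener(img, mysize=5)    # Wiener filtering, as in the paper

# initial contour: a circle around the suspected lesion (hypothetical location)
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([128 + 40 * np.sin(s), 128 + 40 * np.cos(s)])

# the snake stretches and bends toward strong edges in the smoothed image
snake = active_contour(gaussian(denoised, sigma=2), init,
                       alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)  # (200, 2) contour points after convergence
```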
Adverse Effects of Image Tiling on Convolutional Neural Networks
Reina, G. Anthony
Panchumarthy, Ravi
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
Convolutional neural network models perform state of the art accuracy on image classification, localization, and segmentation tasks. A fully convolutional topology, such as U-Net, may be trained on images of one size and perform inference on images of another size. This feature allows researchers to work with images too large to fit into memory by simply dividing the image into small tiles, making predictions on these tiles, and stitching these tiles back together as the prediction of the whole image. We compare how a tiled prediction of a U-Net model compares to a prediction that is based on the whole image. Our results show that using tiling to perform inference results in a significant increase in both false positive and false negative predictions when compared to using the whole image for inference. We are able to modestly improve the predictions by increasing both tile size and amount of tile overlap, but this comes at a greater computational cost and still produces inferior results to using the whole image. Although tiling has been used to produce acceptable segmentation results in the past, we recommend performing inference on the whole image to achieve the best results and increase the state of the art accuracy for CNNs.
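A minimal sketch of overlapped tiling with averaged stitching, the inference strategy whose pitfalls the paper quantifies; `predict` is a hypothetical stand-in for a trained network's forward pass on one tile:

```python
import numpy as np

def tiled_inference(image, predict, tile=128, overlap=32):
    """Run `predict` on overlapping tiles and average outputs where they overlap."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float64)
    weight = np.zeros((h, w), dtype=np.float64)
    step = tile - overlap
    for i in range(0, h, step):
        for j in range(0, w, step):
            i0, j0 = min(i, h - tile), min(j, w - tile)  # clamp at the border
            patch = image[i0:i0 + tile, j0:j0 + tile]
            out[i0:i0 + tile, j0:j0 + tile] += predict(patch)
            weight[i0:i0 + tile, j0:j0 + tile] += 1.0
    return out / weight

# toy usage: an "identity network" makes any stitching error visible at seams
img = np.random.rand(300, 300)
stitched = tiled_inference(img, predict=lambda p: p)
print(np.abs(stitched - img).max())  # ~0 here; a real CNN would disagree at seams
```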
Multi-institutional Deep Learning Modeling Without Sharing Patient Data: A Feasibility Study on Brain Tumor Segmentation
Deep learning models for semantic segmentation of images require large amounts of data. In the medical imaging domain, acquiring sufficient data is a significant challenge, as labeling medical image data requires expert knowledge. Collaboration between institutions could address this challenge, but moving medical data to a centralized location faces various legal, privacy, technical, and data-ownership challenges, especially among international institutions. In this study, we introduce the first use of federated learning for multi-institutional collaboration, enabling deep learning modeling without sharing patient data. Our quantitative results demonstrate that the performance of federated semantic segmentation models (Dice = 0.852) on multimodal brain scans is similar to that of models trained by sharing data (Dice = 0.862). We compare federated learning with two alternative collaborative learning methods and find that they fail to match the performance of federated learning.
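The aggregation step at the heart of federated learning can be sketched compactly; a minimal federated-averaging (FedAvg-style) example, assuming each institution returns its locally trained weights as a list of numpy arrays so that only weights, never patient data, cross institutional boundaries:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate per-client weights, weighted by local dataset size (FedAvg)."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(n_layers)
    ]

# toy round with three "institutions" sharing a two-layer model
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
global_weights = federated_average(clients, client_sizes=[100, 250, 650])
print([w.shape for w in global_weights])  # aggregated layer shapes
```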
Multi-stage Association Analysis of Glioblastoma Gene Expressions with Texture and Spatial Patterns
Elsheikh, Samar S. M.
Bakas, Spyridon
Mulder, Nicola J.
Chimusa, Emile R.
Davatzikos, Christos
Crimi, Alessandro
2019Book Section, cited 0 times
BraTS-TCGA-GBM
Glioblastoma is the most aggressive malignant primary brain tumor, with a poor prognosis. The heterogeneous neuroimaging, pathologic, and molecular features of glioblastoma provide opportunities for subclassification, prognostication, and the development of targeted therapies. Magnetic resonance imaging can quantify specific phenotypic imaging features of these tumors, and additional insight into disease mechanism can be gained by exploring genetic foundations. Here, we use gene expression data to evaluate associations with various quantitative imaging phenomic features extracted from magnetic resonance imaging. We highlight novel correlations by carrying out multi-stage genome-wide association tests at the gene level through a non-parametric correlation framework that allows testing multiple hypotheses about the integrated imaging phenotype-genotype relationship more efficiently and at lower computational cost. Our results showed several genes previously associated with glioblastoma and other types of cancer, such as LRRC46 (chromosome 17), EPGN (chromosome 4), and TUBA1C (chromosome 12), all associated with our radiographic tumor features.
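A minimal sketch of a non-parametric gene-imaging association scan in the spirit of the framework above, using Spearman correlation with Benjamini-Hochberg FDR on synthetic matrices (the paper's multi-stage design is not reproduced here):

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
expr = rng.normal(size=(60, 500))   # toy expression matrix: samples x genes
feat = rng.normal(size=(60, 10))    # toy imaging-phenotype matrix

# one non-parametric test per gene-feature pair
pvals = np.array([[spearmanr(expr[:, g], feat[:, f]).pvalue
                   for f in range(feat.shape[1])]
                  for g in range(expr.shape[1])])

# correct for the many hypotheses tested genome-wide
reject, qvals, _, _ = multipletests(pvals.ravel(), alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} gene-feature pairs pass FDR 0.05")
```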
Segmenting Brain Tumors from MRI Using Cascaded Multi-modal U-Nets
Marcinkiewicz, Michal
Nalepa, Jakub
Lorenzo, Pablo Ribalta
Dudzik, Wojciech
Mrukwa, Grzegorz
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
Gliomas are the most common primary brain tumors, and their accurate manual delineation is a time-consuming and very user-dependent process. Therefore, developing automated techniques for reproducible detection and segmentation of brain tumors from magnetic resonance imaging is a vital research topic. In this paper, we present a deep learning-powered approach for brain tumor segmentation which exploits multiple magnetic resonance modalities and processes them in two cascaded stages. In both stages, we use multi-modal fully convolutional neural nets inspired by U-Nets. The first stage detects regions of interest, whereas the second stage performs the multi-class classification. Our experimental study, performed on the newest release of the BraTS dataset (BraTS 2018), showed that our method delivers accurate brain-tumor delineation and offers very fast processing: the total time required to segment one study amounts to around 18 s.
Hierarchical Multi-class Segmentation of Glioma Images Using Networks with Multi-level Activation Function
Hu, Xiaobin
Li, Hongwei
Zhao, Yu
Dong, Chao
Menze, Bjoern H.
Piraud, Marie
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
For many segmentation tasks, especially in biomedical imaging, the topological prior is vital information worth exploiting. Containment/nesting is a typical inter-class geometric relationship. In the MICCAI brain tumor segmentation challenge, with its three hierarchically nested classes 'whole tumor', 'tumor core', and 'active tumor', this nested-class relationship is introduced into the 3D-residual-Unet architecture. The network comprises a context aggregation pathway and a localization pathway: it encodes increasingly abstract representations of the input as it goes deeper, and then recombines these representations with shallower features to precisely localize the domain of interest via the localization path. The nested-class prior is incorporated through a proposed multi-level activation function and its corresponding loss function. The model is trained on the BraTS 2018 training dataset, with 20% of the data held out as a validation set to determine parameters; once the parameters are fixed, we retrain the model on the whole training dataset. The performance achieved on the validation leaderboard is 86%, 77%, and 72% Dice scores for the whole tumor, enhancing tumor, and tumor core classes, without relying on ensembles or complicated post-processing steps. Based on the same state-of-the-art network architecture, the accuracy on the nested class (enhancing tumor) is reasonably improved from 69% to 72% compared with the traditional softmax-based method, which is blind to the topological prior.
Brain Tumor Segmentation and Tractographic Feature Extraction from Structural MR Images for Overall Survival Prediction
Kao, Po-Yu
Ngo, Thuyen
Zhang, Angela
Chen, Jefferson W.
Manjunath, B. S.
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
This paper introduces a novel methodology to integrate human brain connectomics and parcellation for brain tumor segmentation and survival prediction. For segmentation, we utilize an existing brain parcellation atlas in the MNI152 1 mm space and map this parcellation to each individual subject data. We use deep neural network architectures together with hard negative mining to achieve the final voxel level classification. For survival prediction, we present a new method for combining features from connectomics data, brain parcellation information, and the brain tumor mask. We leverage the average connectome information from the Human Connectome Project and map each subject brain volume onto this common connectome space. From this, we compute tractographic features that describe potential neural disruptions due to the brain tumor. These features are then used to predict the overall survival of the subjects. The main novelty in the proposed methods is the use of normalized brain parcellation data and tractography data from the human connectome project for analyzing MR images for segmentation and survival prediction. Experimental results are reported on the BraTS2018 dataset.
Glioma Prognosis: Segmentation of the Tumor and Survival Prediction Using Shape, Geometric and Clinical Information
Islam, Mobarakol
Jose, V. Jeya Maria
Ren, Hongliang
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Segmentation of brain tumors from magnetic resonance imaging (MRI) is a vital process for improving diagnosis and treatment planning, and for studying the difference between subjects with tumors and healthy subjects. In this paper, we exploit a convolutional neural network (CNN) with the hypercolumn technique to segment tumor from healthy brain tissue. A hypercolumn is the concatenation of a set of vectors formed by extracting convolutional features from multiple layers. The proposed model integrates batch normalization (BN) with hypercolumns; BN layers help alleviate internal covariate shift during stochastic gradient descent (SGD) training by enforcing zero mean and unit variance within each mini-batch. Survival prediction is performed by first extracting geometric, fractal, and histogram features from the segmented brain tumor data, and then regressing the number of days of overall survival on these features using an artificial neural network (ANN). Our model achieves mean Dice scores of 89.78%, 82.53%, and 76.54% for the whole tumor, tumor core, and enhancing tumor respectively in the segmentation task, and 67.9% accuracy in the overall survival prediction task on the BraTS 2018 validation set. On the BraTS 2018 test set, it obtains mean Dice scores of 87.315%, 77.04%, and 70.22% for the whole tumor, tumor core, and enhancing tumor respectively, and 46.8% accuracy in the overall survival prediction task.
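The Dice score reported in this and the surrounding BraTS entries has a compact closed form; a minimal sketch, assuming binary numpy masks for a single tumor sub-region:

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice similarity coefficient: 2|A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# toy example: two shifted 20x20 squares overlap in a 15x15 region
truth = np.zeros((64, 64), dtype=bool); truth[20:40, 20:40] = True
pred = np.zeros_like(truth);            pred[25:45, 25:45] = True
print(round(dice(pred, truth), 3))  # 2*225 / (400+400) = 0.563
```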
Segmentation of Brain Tumors Using DeepLabv3+
Roy Choudhury, Ahana
Vanguri, Rami
Jambawalikar, Sachin R.
Kumar, Piyush
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
Multi-modal MRI scans are commonly used to grade brain tumors based on size and imaging appearance. As a result, imaging plays an important role in the diagnosis and treatment administered to patients. Deep learning based approaches in general, and convolutional neural networks in particular, have been utilized to achieve superior performance in the fields of object detection and image segmentation. In this paper, we propose to utilize the DeepLabv3+ network for the task of brain tumor segmentation. For this task, we build 18 different models using various combinations of the T1CE, FLAIR, T1 and T2 images to identify the whole tumor, the tumor core and the enhancing core of the brain tumor for the testing and validation data sets. We use the MICCAI BraTS training data, which consists of 285 cases, to train our network. Our method involves the segmentation of individual slices in three orientations using 18 different combinations of slices and a majority voting-based combination of the results of some of the classifiers that use the same combination of slices, but in different orientations. Finally, for each of the three regions, we train a separate model, which uses the results from the 18 classifiers as its inputs. The outputs of the 18 models are combined using bit packing to prepare the inputs to the final classifiers for the three regions. We achieve mean Dice coefficients of 0.7086, 0.7897 and 0.8755 for the enhancing tumor, the tumor core and the whole tumor regions respectively.
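A minimal sketch of the majority-voting fusion mentioned above, assuming each of the per-orientation classifiers yields a binary mask of the same shape (the random masks are placeholders for real model outputs):

```python
import numpy as np

def majority_vote(masks):
    """Label a voxel foreground when more than half of the models agree."""
    masks = np.stack(masks).astype(np.uint8)   # (n_models, H, W)
    return masks.sum(axis=0) > (len(masks) / 2)

votes = [np.random.rand(128, 128) > 0.5 for _ in range(5)]
fused = majority_vote(votes)
print(fused.mean())  # fraction of voxels labeled foreground after fusion
```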
Brain Tumor Segmentation on Multimodal MR Imaging Using Multi-level Upsampling in Decoder
Hu, Yan
Liu, Xiang
Wen, Xin
Niu, Chen
Xia, Yong
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Accurate brain tumor segmentation plays a pivotal role in clinical practice and research settings. In this paper, we propose the multi-level up-sampling network (MU-Net) to learn image representations of the transverse, sagittal, and coronal views and fuse them to automatically segment brain tumors, including necrosis, edema, non-enhancing, and enhancing tumor, in multimodal magnetic resonance (MR) sequences. The MU-Net model has an encoder-decoder structure, in which low-level feature maps obtained by the encoder and high-level feature maps obtained by the decoder are combined using a newly designed global attention (GA) module. The proposed model has been evaluated on the BraTS 2018 Challenge and achieved average Dice similarity coefficients of 0.88, 0.74, 0.69 and 0.85, 0.72, 0.66 for the whole tumor, core tumor, and enhancing tumor on the validation and testing datasets, respectively. Our results indicate that the proposed model has promising performance in automated brain tumor segmentation.
Neuromorphic Neural Network for Multimodal Brain Image Segmentation and Overall Survival Analysis
Han, Woo-Sup
Han, Il Song
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
Image analysis of brain tumors is a key element of clinical decision making, while manual segmentation is time consuming and known to be subjective across clinicians and radiologists. In this paper, we examine a neuromorphic convolutional neural network for this multimodal imaging task, using a down-up resizing network structure. A controlled rectifier neuron function was incorporated into the neuromorphic neural network, bringing in the efficient segmentation and saliency map generation previously used for noisy X-ray CT data and dark road video data. The neuromorphic neural network is proposed for brain imaging analysis, building on the visual cortex-inspired deep neural network developed for 3D tooth segmentation and robust visual object detection. Experimental results illustrated the effectiveness and feasibility of our proposed method across flexible requirements of clinical diagnostic decision data, from segmentation to overall survival analysis. Survival prediction accuracy was 71% on data with ground truth, and 50.6% for predicting survival days on the individual challenge data without any clinical diagnostic data.
MRI analysis occupies a central position in brain tumor diagnosis and treatment, so its precise evaluation is crucially important. However, its 3D nature imposes several challenges, so the analysis is often performed on 2D projections, which reduces complexity but increases bias. On the other hand, time-consuming 3D evaluation, such as segmentation, can provide precise estimates of a number of valuable spatial characteristics, giving us an understanding of the course of the disease. Recent studies focusing on the segmentation task report superior performance of deep learning methods compared to classical computer vision algorithms, but it remains a challenging problem. In this paper we present a deep cascaded approach for automatic brain tumor segmentation. Similar to recent methods for object detection, our implementation is based on neural networks; we propose modifications to the 3D UNet architecture and augmentation strategy to efficiently handle multimodal MRI input, and in addition we introduce an approach to enhance segmentation quality with context obtained from models of the same topology operating on downscaled data. We evaluate the presented approach on the BraTS 2018 dataset and achieve promising results, placing 14th on the test dataset with Dice scores of 0.720/0.878/0.785 for enhancing tumor, whole tumor, and tumor core segmentation respectively.
Segmentation of Gliomas and Prediction of Patient Overall Survival: A Simple and Fast Procedure
Puybareau, Elodie
Tochon, Guillaume
Chazalon, Joseph
Fabrizio, Jonathan
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
This paper proposes, in the context of brain tumor study, a fast automatic method that segments tumors and predicts patient overall survival. The segmentation stage is implemented using a fully convolutional network based on VGG-16, pre-trained on ImageNet for natural image classification and fine-tuned with the training dataset of the MICCAI 2018 BraTS Challenge. It relies on the “pseudo-3D” method published at ICIP 2017, which allows for segmenting objects from 2D color-like images that carry the 3D information of MRI volumes. With such a technique, the segmentation of a 3D volume takes only a few seconds. The prediction stage is implemented using random forests; it requires only a predicted segmentation of the tumor and a homemade atlas. Its simplicity allows it to be trained with very few examples, and it can be used after any segmentation process. The presented method won second place in the MICCAI 2018 BraTS Challenge overall survival prediction task. A Docker image is publicly available on https://www.lrde.epita.fr/wiki/NeoBrainSeg.
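The "pseudo-3D" trick can be illustrated concisely; a minimal sketch, assuming a (slices, H, W) volume, that stacks three neighboring slices into the channels of a color-like 2D image so an ImageNet-pretrained 2D network can see local 3D context (the spacing parameter is illustrative):

```python
import numpy as np

def pseudo_3d_slices(vol, spacing=1):
    """Yield (H, W, 3) images built from slices z-spacing, z, z+spacing."""
    z_max = vol.shape[0] - 1
    for z in range(vol.shape[0]):
        lo, hi = max(z - spacing, 0), min(z + spacing, z_max)
        yield np.stack([vol[lo], vol[z], vol[hi]], axis=-1)

vol = np.random.rand(16, 240, 240)  # toy MRI volume: (slices, H, W)
first = next(pseudo_3d_slices(vol))
print(first.shape)  # (240, 240, 3), ready for a VGG-16-style 2D encoder
```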
Multi-scale Masked 3-D U-Net for Brain Tumor Segmentation
Xu, Yanwu
Gong, Mingming
Fu, Huan
Tao, Dacheng
Zhang, Kun
Batmanghelich, Kayhan
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
The brain tumor segmentation task aims to classify sub-regions into peritumoral edema, necrotic core, enhancing and non-enhancing tumor core using multimodal MRI scans. This task is very challenging due to the intrinsic high heterogeneity of appearance and shape. Recently, with the development of deep models and computing resources, deep convolutional neural networks have shown their effectiveness on brain tumor segmentation from 3D MRI scans, obtaining the top performance in the MICCAI BraTS challenge 2017. In this paper we further boost the performance of brain tumor segmentation by proposing a multi-scale masked 3D U-Net which captures multi-scale information by stacking multi-scale images as inputs and incorporating a 3D Atrous Spatial Pyramid Pooling (ASPP) layer. To filter noisy results for tumor core (TC) and enhancing tumor (ET), we train the TC and ET segmentation networks from the bounding box for whole tumor (WT) and TC, respectively. On the BraTS 2018 validation set, our method achieved average Dice scores of 0.8094, 0.9034, and 0.8319 for ET, WT, and TC, respectively. On the BraTS 2018 test set, our method achieved Dice scores of 0.7690, 0.8711, and 0.7792 for ET, WT, and TC, respectively. In particular, our multi-scale masked 3D network achieved very promising results on enhancing tumor (ET), which is the hardest region to segment due to its small scale and irregular shape.
3D-ESPNet with Pyramidal Refinement for Volumetric Brain Tumor Image Segmentation
Nuechterlein, Nicholas
Mehta, Sachin
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Automatic quantitative analysis of structural magnetic resonance (MR) images of brain tumors is critical to the clinical care of glioma patients, and for the future of advanced MR imaging research. In particular, automatic brain tumor segmentation can provide volumes of interest (VOIs) to scale the analysis of advanced MR imaging modalities such as perfusion-weighted imaging (PWI), diffusion tensor imaging (DTI), and MR spectroscopy (MRS), which is currently hindered by the prohibitive cost and time of manual segmentations. However, automatic brain tumor segmentation is complicated by the high heterogeneity and dimensionality of MR data, and the relatively small size of available datasets. This paper extends ESPNet, a fast and efficient network designed for vanilla 2D semantic segmentation, to challenging 3D data in the medical imaging domain [11]. Even without substantive pre- and post-processing, our model achieves respectable brain tumor segmentation results, while learning only 3.8 million parameters. 3D-ESPNet achieves Dice scores of 0.850, 0.665, and 0.782 on whole tumor, enhancing tumor, and tumor core classes on the test set of the 2018 BraTS challenge [1-4, 12]. Our source code is open-source and available at https://github.com/sacmehta/3D-ESPNet.
Brain Tumor Segmentation Using an Ensemble of 3D U-Nets and Overall Survival Prediction Using Radiomic Features
Feng, Xue
Tustison, Nicholas
Meyer, Craig
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
Accurate segmentation of the different sub-regions of gliomas, including peritumoral edema, necrotic core, enhancing and non-enhancing tumor core, from multimodal MRI scans has important clinical relevance in the diagnosis, prognosis, and treatment of brain tumors. However, due to their highly heterogeneous appearance and shape, segmentation of the sub-regions is very challenging. Deep learning models have proved effective in past brain segmentation challenges as well as other semantic and medical image segmentation problems. Most models in brain tumor segmentation use a 2D/3D patch to predict the class label for the center voxel, with varying patch sizes and scales used to improve performance; however, this has low computational efficiency and a limited receptive field. U-Net is a widely used network structure for end-to-end segmentation that can be applied to the entire image or to extracted patches to provide classification labels for all input voxels, so it is more efficient and is expected to yield better performance with larger input sizes. Furthermore, instead of picking the best network structure, an ensemble of multiple models, trained on different datasets or with different hyper-parameters, can generally improve segmentation performance. In this study we propose an ensemble of 3D U-Nets with different hyper-parameters for brain tumor segmentation. Preliminary results showed the effectiveness of this model. In addition, we developed a linear model for survival prediction using extracted imaging and non-imaging features, which, despite its simplicity, can effectively reduce overfitting and regression errors.
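A minimal sketch of probability-level ensembling over U-Nets trained with different hyper-parameters; the `models` callables and class count are hypothetical stand-ins for the trained networks:

```python
import numpy as np

def ensemble_predict(models, volume):
    """Average per-model class probabilities, then take the argmax label."""
    probs = np.mean([m(volume) for m in models], axis=0)  # (C, D, H, W)
    return probs.argmax(axis=0)                           # label volume

# toy ensemble: three "models" emitting random 4-class probability volumes
rng = np.random.default_rng(0)
fake = lambda v: rng.dirichlet(np.ones(4), size=v.shape).transpose(3, 0, 1, 2)
labels = ensemble_predict([fake] * 3, np.zeros((8, 32, 32)))
print(labels.shape, labels.max())  # (8, 32, 32) label volume, classes 0-3
```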
A Novel Domain Adaptation Framework for Medical Image Segmentation
Gholami, Amir
Subramanian, Shashank
Shenoy, Varun
Himthani, Naveen
Yue, Xiangyu
Zhao, Sicheng
Jin, Peter
Biros, George
Keutzer, Kurt
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
We propose a segmentation framework that uses deep neural networks and introduce two innovations. First, we describe a biophysics-based domain adaptation method. Second, we propose an automatic method to segment white matter, gray matter, glial matter, and cerebrospinal fluid, in addition to tumorous tissue. Regarding our first innovation, we use a domain adaptation framework that combines a novel multispecies biophysical tumor growth model with a generative adversarial model to create realistic-looking synthetic multimodal MR images with known segmentation. These images are used for the purpose of training-time data augmentation. Regarding our second innovation, we propose an automatic approach to enrich available segmentation data by computing the segmentation for healthy tissues. This segmentation, which is done using diffeomorphic image registration between the BraTS training data and a set of pre-labeled atlases, provides more information for training and reduces the class imbalance problem. Our overall approach is not specific to any particular neural network and can be used in conjunction with existing solutions. We demonstrate the performance improvement using a 2D U-Net for the BraTS'18 segmentation challenge. Our biophysics-based domain adaptation achieves better results, as compared to the existing state-of-the-art GAN model used to create synthetic data for training.
3D MRI Brain Tumor Segmentation Using Autoencoder Regularization
Myronenko, Andriy
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Automated segmentation of brain tumors from 3D magnetic resonance images (MRIs) is necessary for the diagnosis, monitoring, and treatment planning of the disease. Manual delineation practices require anatomical knowledge, are expensive, time consuming and can be inaccurate due to human error. Here, we describe a semantic segmentation network for tumor subregion segmentation from 3D MRIs based on encoder-decoder architecture. Due to a limited training dataset size, a variational auto-encoder branch is added to reconstruct the input image itself in order to regularize the shared decoder and impose additional constraints on its layers. The current approach won 1st place in the BraTS 2018 challenge.
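The regularized objective described above combines a segmentation term with reconstruction and KL terms from the variational branch; a minimal PyTorch sketch, with the loss weightings being assumptions rather than the paper's exact values:

```python
import torch

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss on sigmoid probabilities."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    return 1 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)

def total_loss(seg_logits, seg_target, recon, image, mu, logvar,
               w_l2=0.1, w_kl=0.1):
    l_dice = dice_loss(seg_logits, seg_target)          # segmentation term
    l_l2 = torch.mean((recon - image) ** 2)             # VAE reconstruction
    # KL divergence of N(mu, sigma^2) from the standard normal prior
    l_kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return l_dice + w_l2 * l_l2 + w_kl * l_kl

# toy forward pass with random tensors standing in for network outputs
x = torch.rand(1, 1, 8, 32, 32)
print(total_loss(torch.randn_like(x), (x > 0.5).float(), torch.rand_like(x), x,
                 mu=torch.zeros(1, 128), logvar=torch.zeros(1, 128)))
```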
voxel-GAN: Adversarial Framework for Learning Imbalanced Brain Tumor Segmentation
Rezaei, Mina
Yang, Haojin
Meinel, Christoph
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
We propose a new adversarial network, named voxel-GAN, to mitigate the imbalanced-data problem in brain tumor semantic segmentation, where the majority of voxels belong to a healthy region and few belong to the tumor or non-healthy region. We introduce a 3D conditional generative adversarial network (cGAN) comprising two components: a segmentor and a discriminator. The segmentor is trained on 3D brain MR or CT images to learn segmentation labels at the voxel level, while the discriminator is trained to distinguish whether a segmentor output comes from the ground truth or is generated artificially. The segmentor and discriminator networks are trained simultaneously with a new weighted adversarial loss to mitigate the imbalanced-training-data issue. We show evidence that the proposed framework is applicable to different types of brain images of varied sizes. In our experiments on the BraTS-2018 and ISLES-2018 benchmarks, we find improved results, demonstrating the efficacy of our approach.
Brain Tumor Segmentation and Survival Prediction Using a Cascade of Random Forests
Lefkovits, Szidónia
Szilágyi, László
Lefkovits, László
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation is a difficult task due to the strongly varying intensity and shape of gliomas. In this paper we propose a multi-stage discriminative framework for brain tumor segmentation based on the BraTS 2018 dataset. The framework presented here is a more complex segmentation system than our previous work presented at BraTS 2016: a multi-stage discriminative segmentation model in which every stage is a binary classifier based on the random forest algorithm. Our multi-stage system attempts to follow the layered structure of tumor tissues provided in the annotation protocol. In each segmentation stage we dealt with four major difficulties: feature selection, determination of the training database used, optimization of classifier performance, and image post-processing. The framework was tested on the evaluation images from BraTS 2018. One of the most important results is the determination of the tumor ROI with a sensitivity of approximately 0.99 in stage I, allowing only 16% of the brain to be considered in the subsequent stages. Based on the segmentation obtained, we solved the survival prediction task using a random forest regressor. The results obtained are comparable to the best ones presented in previous BraTS challenges.
Automatic Segmentation of Brain Tumor Using 3D SE-Inception Networks with Residual Connections
Yao, Hongdou
Zhou, Xiaobing
Zhang, Xuejie
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
Among the various methods for medical image segmentation, the cascaded FCN is an effective one. Its idea is to convert a multi-class classification task into a sequence of binary classification tasks, following the sub-region hierarchy of multi-modal magnetic resonance images. We propose a model based on this idea, combining mainstream deep learning models for 2D images and modifying them to fit 3D medical image datasets. Our model uses the Inception module, 3D squeeze-and-excitation structures, and dilated convolution filters, which are well known from 2D image segmentation tasks. When segmenting the whole tumor, we take the bounding box of the result and use it to segment the tumor core; the bounding box of the tumor core segmentation result is then used to segment the enhancing tumor. We use not only the final output of the model but also combine the results of intermediate outputs. In the MICCAI BraTS 2018 glioma segmentation task, we achieve competitive performance without data augmentation.
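A minimal 3D squeeze-and-excitation block of the kind referenced above, written in PyTorch; the channel count and reduction ratio are illustrative:

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Squeeze-and-excitation: global pooling "squeezes" each channel to a
    scalar, a small bottleneck MLP "excites" it back into per-channel gates."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * gates  # channel-wise recalibration of the feature volume

x = torch.randn(2, 32, 8, 24, 24)
print(SEBlock3D(32)(x).shape)  # torch.Size([2, 32, 8, 24, 24])
```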
Semi-automatic Brain Tumor Segmentation by Drawing Long Axes on Multi-plane Reformat
Gering, David
Sun, Kay
Avery, Aaron
Chylla, Roger
Vivekanandan, Ajeet
Kohli, Lisa
Knapp, Haley
Paschke, Brad
Young-Moxon, Brett
King, Nik
Mackie, Thomas
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Abstract
A semi-automatic image segmentation method, called SAMBAS, based on workflow familiar to clinical radiologists is described. The user initializes 3D segmentation by drawing a long axis on a multi-plane reformat (MPR). As the user draws, a 2D segmentation updates in real-time for interactive feedback. When necessary, additional long axes, short axes, or other editing operations may be drawn on one or more MPR planes. The method learns probability distributions from the drawing to perform the MPR segmentation, and in turn, it learns from the MPR segmentation to perform the 3D segmentation. As a preliminary experiment, a batch simulation was performed where long and short axes were automatically drawn on each of 285 multispectral MR brain scans of glioma patients in the 2018 BraTS Challenge training data. Average Dice coefficient for tumor core was 0.86, and the Hausdorff-95% distance was 4.4 mm. As another experiment, a convolution neural network was trained on the same data, and applied to the BraTS validation and test data. Its outputs, computed offline, were integrated into the interactive method. Ten volunteers used the interface on the BraTS validation and test data. On the 66 scans of the validation data, average Dice coefficient for core tumor improved from 0.76 with deep learning alone, to 0.82 as an interactive system.
Brain Tumor Segmentation Using Bit-plane and UNET
Tuan, Tran Anh
Tuan, Tran Anh
Bao, Pham The
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
The extraction of brain tumor tissues in 3D brain magnetic resonance imaging plays an important role in diagnosing gliomas. In this paper, we use clinical data to develop an approach to segment the enhancing tumor, tumor core, and whole tumor, which are the sub-regions of glioma. Our proposed method starts with bit-plane decomposition to obtain the most and least significant bits, which are used to cluster and generate additional images. Then U-Net, a popular CNN model for object segmentation, is applied to segment all of the glioma regions; in the process, U-Net is implemented with multiple kernels to acquire more accurate results. We evaluated the proposed method on the BraTS 2018 challenge database. The method achieves Dice scores of 82%, 68%, and 70% on validation data and 77%, 48%, and 51% on testing data for the whole tumor, enhancing tumor, and tumor core respectively.
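Bit-plane extraction itself is a one-liner; a minimal sketch, assuming slices rescaled to 8-bit:

```python
import numpy as np

def bit_plane(img_uint8, k):
    """Extract bit-plane k (0 = least significant, 7 = most significant)."""
    return (img_uint8 >> k) & 1

img = (np.random.rand(240, 240) * 255).astype(np.uint8)  # toy 8-bit slice
msb = bit_plane(img, 7)   # coarse structure
lsb = bit_plane(img, 0)   # fine detail / noise
print(msb.mean(), lsb.mean())
```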
Glioma Segmentation and a Simple Accurate Model for Overall Survival Prediction
Gates, Evan
Pauloski, J. Gregory
Schellingerhout, Dawid
Fuentes, David
2019Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation is a challenging task necessary for quantitative tumor analysis and diagnosis. We apply a multi-scale convolutional neural network based on DeepMedic to segment glioma subvolumes provided in the 2018 MICCAI Brain Tumor Segmentation Challenge. We then extract intensity and shape features from the images and cross-validate machine learning models to predict overall survival. Using only the mean FLAIR intensity, non-enhancing tumor volume, and patient age, we are able to predict patient overall survival with reasonable accuracy.
Wavelet Convolution Neural Network for Classification of Spiculated Findings in Mammograms
Jasionowska, Magdalena
Gacek, Aleksandra
2019Book Section, cited 0 times
CBIS-DDSM
Wavelet
Convolutional Neural Network (CNN)
Breast cancer
Computer Aided Detection (CADe)
Mammogram
The subject of this paper is computer-aided recognition of spiculated findings, such as architectural distortions and spiculated masses, in low-contrast noisy mammograms. Computer-aided detection of these findings remains unresolved, especially for architectural distortions. The methodology applied was based on a wavelet convolutional neural network. The originality of the proposed method lies in the way the input images are created: they are maximum-value maps built from three wavelet decomposition subbands (HL, LH, HH), each describing local details in the original image. Moreover, two types of convolutional neural network architecture were optimized and empirically verified. The experimental study was conducted on 1585 regions of interest (512 × 512 pixels) taken from the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), containing both normal (1191) and abnormal (406) breast tissue images, including clinically confirmed architectural distortions (141) and spiculated masses (265). Using a wavelet convolutional neural network with a reverse biorthogonal wavelet, the recognition accuracy over both types of pathology reached over 87%, with 85% accuracy for architectural distortions and 88% for spiculated masses.
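A minimal sketch of the subband maximum-value map input construction, assuming PyWavelets; the specific reverse biorthogonal wavelet name is an assumption, as the abstract does not pin one down:

```python
import numpy as np
import pywt

img = np.random.rand(512, 512)  # placeholder ROI from a mammogram

# single-level 2D DWT: approximation plus three detail subbands
# (pywt returns (cA, (cH, cV, cD)); subband naming conventions vary)
_, (LH, HL, HH) = pywt.dwt2(img, "rbio3.1")

# pixel-wise maximum over detail magnitudes forms the CNN input map
max_map = np.maximum.reduce([np.abs(LH), np.abs(HL), np.abs(HH)])
print(max_map.shape)  # ~(256, 256): half resolution after one level
```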
Radiogenomics: Lung Cancer-Related Genes Mutation Status Prediction
Dias, Catarina
Pinheiro, Gil
Cunha, António
Oliveira, Hélder P.
2019Book Section, cited 0 times
NSCLC Radiogenomics
Advances in genomics have led to the recognition that tumours are populated by different minor subclones of malignant cells that control the way the tumour progresses. However, the spatial and temporal genomic heterogeneity of tumours has been a hurdle in clinical oncology, mainly because the standard methodology for genomic analysis is the biopsy, which, besides being invasive, does not capture the entire spatial state of the tumour in a single exam. Radiographic medical imaging opens new opportunities for genomic analysis by providing full-state visualisation of a tumour at a macroscopic level, in a non-invasive way. Given that mutational testing of EGFR and KRAS is routine in lung cancer treatment, we studied whether clinical and imaging data are valuable for predicting EGFR and KRAS mutations in a cohort of NSCLC patients. A reliable predictive model was found for EGFR (AUC = 0.96), using both a multi-layer perceptron model and a random forest model, but not for KRAS (AUC = 0.56). A feature importance analysis using random forest reported that the presence of emphysema and lung parenchymal features have the highest correlation with EGFR mutation status. This study opens new opportunities for radiogenomics in predicting molecular properties in a more readily available and non-invasive way.
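A minimal sketch of the mutation-status modelling and feature-importance analysis described above, using scikit-learn on synthetic stand-in data (the feature matrix and labels are toys, not the study's cohort):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 20))  # toy cohort: 150 patients, 20 descriptors
# toy "EGFR+" label driven by two of the features
y = (X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.5, size=150)) > 0

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

# rank descriptors by impurity-based importance, as in the paper's analysis
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:3]
print("most informative features:", top)  # indices 3 and 7 should rank high
```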
3D Modelling and Radiofrequency Ablation of Breast Tumor Using MRI Images
Nirmala Devi, S.
Gowri Sree, V.
Poompavai, S.
Kaviya Priyaa, A.
2019Book Section, cited 0 times
TCGA-BRCA
The purpose of this work is to develop patient-specific treatment analysis for tumor removal using radiofrequency ablation, increasing the efficiency of the treatment protocol for patient-specific models. Breast cancer is the most common cancer in Indian women. Thermal ablation overheats tissue cells, killing the cancerous tumor with a probe. Ablation is a difficult procedure with respect to probe placement and killing cancerous tissue without excessive damage to healthy tissue, so directional removal of the tumor is essential to avoid damaging the surrounding tissue. This work contributes three steps: (i) segmentation, (ii) building a 3D model, and (iii) analysis and measurement. Segmentation of the tumor is achieved using MIMICS software. 3D modeling and simulation of the thermal ablation treatment procedure were developed in COMSOL Multiphysics, and tissue necrosis, the temperature at various points, and the position of the probe were evaluated. Temperature variation is analyzed to view the necrosis coverage area and to plan the thermal ablation procedure for removal of the breast tumor. Various measurement parameters of the 3D tumor have been identified for further diagnosis.
Bronchus Segmentation and Classification by Neural Networks and Linear Programming
Zhao, Tianyi
Yin, Zhaozheng
Wang, Jiao
Gao, Dashan
Chen, Yunqiang
Mao, Yunxiang
2019Book Section, cited 0 times
LIDC-IDRI
The Lung Tissue Research Consortium (LTRC)
National Lung Screening Trial (NLST)
Segmentation
Classification
Algorithm Development
Airway segmentation is a critical problem for lung disease analysis. However, building a complete airway tree is still challenging because of the complex tree structure, and tracing the deep bronchi in CT images is not trivial because there are numerous small airways with various directions. In this paper, we develop two-stage 2D+3D neural networks and a linear programming based tracking algorithm for airway segmentation. Furthermore, we propose a bronchus classification algorithm based on the segmentation results. Our algorithm is evaluated on a dataset collected from four sources. We achieved a Dice coefficient of 0.94 and an F1 score of 0.86 under a centerline-based evaluation metric, compared to ground truth manually labeled by our radiologists.
The main challenge preventing fully automatic X-ray to CT registration is the lack of an initialization scheme that brings the X-ray pose within the capture range of existing intensity-based registration methods. By providing such an automatic initialization, the present study introduces the first end-to-end fully automatic registration framework. A network is first trained once on artificial X-rays to extract 2D landmarks resulting from the projection of CT labels. A patient-specific refinement scheme is then carried out: candidate points detected from a new set of artificial X-rays are back-projected onto the patient CT and merged into a refined, meaningful set of landmarks used for network re-training. This network-landmarks combination is finally exploited for intraoperative pose initialization with a runtime of 102 ms. Evaluated on 6 pelvis anatomies (486 images in total), the mean target registration error was 15.0±7.3 mm. When used to initialize the BOBYQA optimizer with normalized cross-correlation, the average (± STD) projection distance was 3.4±2.3 mm, and the registration success rate (projection distance <2.5% of the detector width) was greater than 97%.
HFA-Net: 3D Cardiovascular Image Segmentation with Asymmetrical Pooling and Content-Aware Fusion
Zheng, Hao
Yang, Lin
Han, Jun
Zhang, Yizhe
Liang, Peixian
Zhao, Zhuo
Wang, Chaoli
Chen, Danny Z.
2019Book Section, cited 0 times
LCTSC
Automatic and accurate cardiovascular image segmentation is important in clinical applications. However, due to ambiguous borders and subtle structures (e.g., thin myocardium), parsing fine-grained structures in 3D cardiovascular images is very challenging. In this paper, we propose a novel deep heterogeneous feature aggregation network (HFA-Net) to fully exploit complementary information from multiple views of 3D cardiac data. First, we utilize asymmetrical 3D kernels and pooling to obtain heterogeneous features in parallel encoding paths. Thus, from a specific view, distinguishable features are extracted and indispensable contextual information is kept (rather than quickly diminished after symmetrical convolution and pooling operations). Then, we employ a content-aware multi-planar fusion module to aggregate meaningful features to boost segmentation performance. Further, to reduce the model size, we devise a new DenseVoxNet model by sparsifying residual connections, which can be trained in an end-to-end manner. We show the effectiveness of our new HFA-Net on the 2016 HVSMR and 2017 MM-WHS CT datasets, achieving state-of-the-art performance. In addition, HFA-Net obtains competitive results on the 2017 AAPM CT dataset, especially on segmenting subtle structures among multi-objects with large variations, illustrating the robustness of our new segmentation approach.
A Novel Deep Learning Framework for Standardizing the Label of OARs in CT
Yang, Qiming
Chao, Hongyang
Nguyen, Dan
Jiang, Steve
2019Conference Paper, cited 0 times
Head-Neck-PET-CT
When organs at risk (OARs) are contoured in computed tomography (CT) images for radiotherapy treatment planning, the labels are often inconsistent, which severely hampers the collection and curation of clinical data for research purposes. Currently, data cleaning is mainly done manually, which is time-consuming. Existing methods for automatically relabeling OARs remain impractical with real patient data, due to inconsistent delineation and similar small-volume OARs. This paper proposes an improved data augmentation technique tailored to the characteristics of clinical data. In addition, a novel 3D non-local convolutional neural network is proposed, which includes a decision-making network with a voting strategy. The resulting model can automatically identify OARs, solving the problems in existing methods and achieving accurate OAR relabeling. We used partial data from a public head-and-neck dataset (HN_PETCT) for training, then tested the model on datasets from three different medical institutions. We obtained state-of-the-art results for identifying 28 OARs in the head-and-neck region, and our model is capable of handling multi-center datasets, indicating strong generalization ability. Compared to the baseline, our final model achieved a significant improvement in average true positive rate (TPR) on the three test datasets (+8.27%, +2.39%, +5.53%, respectively). More importantly, the F1 score for a small-volume OAR with only 9 training samples increased from 28.63% to 91.17%.
Unpaired Synthetic Image Generation in Radiology Using GANs
In this work, we investigate approaches to generating synthetic computed tomography (CT) images from real magnetic resonance imaging (MRI) data. Generating radiological scans has grown in popularity in recent years due to its promise of enabling single-modality radiotherapy planning in clinical oncology, where co-registration of the radiological modalities is cumbersome. We rely on generative adversarial network (GAN) models with cycle consistency, which permit unpaired image-to-image translation between the modalities. We also introduce a perceptual loss term and a coordinate convolutional layer to further enhance the quality of translated images, and consider unsharp masking and the Super-Resolution GAN (SRGAN) to improve the quality of synthetic images. The proposed architectures were trained on unpaired MRI-CT data and then evaluated on a paired brain dataset. The resulting CT scans were generated with mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) scores of 60.83 HU, 17.21 dB, and 0.8, respectively. DualGAN with the perceptual loss term and coordinate convolutional layer performed best. The MRI-to-CT translation approach holds potential to eliminate the need for patients to undergo both examinations and to be clinically accepted as a new tool for radiotherapy planning.
Reg R-CNN: Lesion Detection and Grading Under Noisy Labels
For the task of concurrently detecting and categorizing objects, the medical imaging community commonly adopts methods developed on natural images. Current state-of-the-art object detectors are comprised of two stages: the first stage generates region proposals, the second stage subsequently categorizes them. Unlike in natural images, however, for anatomical structures of interest such as tumors, the appearance in the image (e.g., scale or intensity) links to a malignancy grade that lies on a continuous ordinal scale. While classification models discard this ordinal relation between grades by discretizing the continuous scale to an unordered bag of categories, regression models are trained with distance metrics, which preserve the relation. This advantage becomes all the more important in the setting of label confusions on ambiguous data sets, which is the usual case with medical images. To this end, we propose Reg R-CNN, which replaces the second-stage classification model of a current object detector with a regression model. We show the superiority of our approach on a public data set with 1026 patients and a series of toy experiments. Code will be available at github.com/MIC-DKFZ/RegRCNN.
Recovering Physiological Changes in Nasal Anatomy with Confidence Estimates
Sinha, A.
Liu, X.
Ishii, M.
Hager, G. D.
Taylor, Russell H
2019Conference Proceedings, cited 0 times
Head-Neck Cetuximab
Image registration
Purpose: Clinical examinations that involve endoscopic exploration of the nasal cavity and sinuses often do not have a reference preoperative image, like a computed tomography (CT) scan, to provide structural context to the clinician. The aim of this work is to provide structural context during clinical exploration without requiring additional CT acquisition. Methods: We present a method for registration during clinical endoscopy in the absence of CT scans by making use of shape statistics from past CT scans. Using a deformable registration algorithm that uses these shape statistics along with dense point clouds from video, we simultaneously achieve two goals: (1) register the statistically mean shape of the target anatomy with the video point cloud, and (2) estimate patient shape by deforming the mean shape to fit the video point cloud. Finally, we use statistical tests to assign confidence to the computed registration. Results: We are able to achieve submillimeter errors in registrations and patient shape reconstructions using simulated data. We establish and evaluate the confidence criteria for our registrations using simulated data. Finally, we evaluate our registration method on in vivo clinical data and assign confidence to these registrations using the criteria established in simulation. All registrations that are not rejected by our criteria produce submillimeter residual errors. Conclusion: Our deformable registration method can produce submillimeter registrations and reconstructions as well as statistical scores that can be used to assign confidence to the registrations.
Cross-Modality Knowledge Transfer for Prostate Segmentation from CT Scans
Liu, Yucheng
Khosravan, Naji
Liu, Yulin
Stember, Joseph
Shoag, Jonathan
Bagci, Ulas
Jambawalikar, Sachin
2019Book Section, cited 0 times
PROSTATEx
Segmentation
Comparison of Active Learning Strategies Applied to Lung Nodule Segmentation in CT Scans
Zotova, Daria
Lisowska, Aneta
Anderson, Owen
Dilys, Vismantas
O’Neil, Alison
2019Book Section, cited 0 times
LIDC-IDRI
Supervised machine learning techniques require large amounts of annotated training data to attain good performance. Active learning aims to ease the data collection process by automatically detecting which instances an expert should annotate in order to train a model as quickly and effectively as possible. Such strategies have been reported previously for medical imaging, but for tasks other than focal pathologies, where there is high class imbalance and heterogeneous background appearance. In this study we evaluate different data selection approaches (random, uncertainty, and representativeness sampling) and a semi-supervised model training procedure (pseudo-labelling), in the context of lung nodule segmentation in CT volumes from the publicly available LIDC-IDRI dataset. We find that active learning strategies allow us to train a model of equal performance with less than half the annotation effort; data selection by uncertainty sampling offers the most gain, with the incorporation of representativeness or the addition of pseudo-labelling giving further small improvements. We conclude that active learning is a valuable tool and that further development of these strategies can play a key role in making diagnostic algorithms viable.
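A minimal uncertainty-sampling sketch for an active-learning loop like the one evaluated above, assuming a model that emits per-voxel foreground probabilities; the pool of probability maps is synthetic:

```python
import numpy as np

def entropy(p, eps=1e-7):
    """Binary predictive entropy; highest where p is closest to 0.5."""
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def select_for_annotation(prob_maps, k=2):
    """Rank unlabeled scans by mean predictive entropy; return top-k indices."""
    scores = [entropy(p).mean() for p in prob_maps]
    return np.argsort(scores)[::-1][:k]

# toy pool: scaling toward 0 makes a scan's predictions more confident
rng = np.random.default_rng(0)
pool = [rng.random((64, 64)) * c for c in (1.0, 0.2, 0.9, 0.1)]
print(select_for_annotation(pool))  # the most uncertain scans come first
```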
Automated Segmentation of the Pectoral Muscle in Axial Breast MR Images
Zafari, Sahar
Diab, Mazen
Eerola, Tuomas
Hanson, Summer E.
Reece, Gregory P.
Whitman, Gary J.
Markey, Mia K.
Ravi-Chandar, Krishnaswamy
Bovik, Alan
Kälviäinen, Heikki
2019Book Section, cited 0 times
BREAST-DIAGNOSIS
Pectoral muscle segmentation is a crucial step in various computer-aided applications of breast magnetic resonance imaging (MRI). Due to imaging artifacts and the homogeneity between the pectoral and breast regions, pectoral muscle boundary estimation is not a trivial task. In this paper, a fully automatic segmentation method based on deep learning is proposed for accurate delineation of the pectoral muscle boundary in axial breast MR images. The proposed method involves two main steps: pectoral muscle segmentation and boundary estimation. For pectoral muscle segmentation, a model based on the U-Net architecture is used to segment the pectoral muscle from the input image. Next, the pectoral muscle boundary is estimated through candidate point detection and contour segmentation. The proposed method was evaluated quantitatively on two real-world datasets: our own private dataset, which includes breast MR images of 12 patients, and a publicly available dataset of breast MR images from 80 patients. The proposed method achieved a Dice score of 95% on the first dataset and 89% on the second. The high segmentation performance of the proposed method on large-scale quantitative breast MR images confirms its potential applicability in future breast cancer clinical applications.
Deep Neural Network Based Classifier Model for Lung Cancer Diagnosis and Prediction System in Healthcare Informatics
Jayaraj, D.
Sathiamoorthy, S.
2019Conference Paper, cited 0 times
LIDC-IDRI
LUNG
Lung cancer is a major deadly disease that causes mortality through uncontrollable cell growth, and it has accordingly become increasingly important for physicians and academicians to develop efficient diagnosis models. A novel method for automated identification of lung nodules therefore becomes essential, and it forms the motivation of this study. This paper presents a new deep learning classification model for lung cancer diagnosis. The presented model involves four main steps, namely preprocessing, feature extraction, segmentation, and classification. A particle swarm optimization (PSO) algorithm is used for segmentation and a deep neural network (DNN) is applied for classification. The presented PSO-DNN model is tested against a set of sample lung images, and the results verified the effectiveness of the proposed model on all the applied images.
Imaging Signature of 1p/19q Co-deletion Status Derived via Machine Learning in Lower Grade Glioma
We present a new approach to quantifying the co-deletion status of chromosomal arms 1p/19q in lower grade glioma (LGG). Though surgical biopsy followed by a fluorescence in-situ hybridization test is currently the gold standard to identify mutational status for diagnosis and treatment planning, several imaging studies aim to predict the same. Our study aims to determine the 1p/19q mutational status of LGG non-invasively by advanced pattern analysis using multi-parametric MRI. The publicly available dataset at TCIA was used: T1-W and T2-W MRIs of a total of 159 patients with grade-II and grade-III glioma with biopsy-proven 1p/19q status, consisting of either no deletion (n = 57) or co-deletion (n = 102). We quantified the imaging profile of these tumors by extracting diverse imaging features, including the tumor's spatial distribution pattern and volumetric, texture, and intensity distribution measures. We integrated these diverse features via support vector machines to construct an imaging signature of 1p/19q, which was evaluated in independent discovery (n = 85) and validation (n = 74) cohorts and compared with the 1p/19q status obtained through the fluorescence in-situ hybridization test. The classification accuracy on the complete, discovery, and replication cohorts was 86.16%, 88.24%, and 85.14%, respectively; when the model developed on the training cohort was applied to the unseen replication set, the accuracy was 82.43%. Non-invasive prediction of 1p/19q status from MRI would allow improved treatment planning for LGG patients without the need for surgical biopsies and would also help in monitoring dynamic mutation changes during the course of treatment.
Radiomics-Enhanced Multi-task Neural Network for Non-invasive Glioma Subtyping and Segmentation
Xue, Zhiyuan
Xin, Bowen
Wang, Dingqian
Wang, Xiuying
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Non-invasive glioma subtyping can provide diagnostic support for pre-operative treatments. Traditional radiomics methods for subtyping are based on hand-crafted features, so their capacity to capture comprehensive features from MR images is limited compared with deep learning methods. In this work, we propose a radiomics-enhanced multi-task neural network, which utilizes both deep features and radiomic features, to simultaneously perform glioma subtyping and multi-region segmentation. Our network is composed of three branches, namely a shared CNN encoder, a segmentation decoder, and a subtyping branch, constructed based on 3D U-Net. Enhanced with radiomic features, the network achieved 96.77% for two-class grading and 93.55% for three-class subtyping over the validation set of 31 cases, showing its potential in non-invasive glioma diagnosis, and achieved better segmentation performance than a single-task network.
U-Net Based Glioblastoma Segmentation with Patient’s Overall Survival Prediction
Rafi, Asra
Ali, Junaid
Akram, Tahir
Fiaz, Kiran
Raza Shahid, Ahmad
Raza, Basit
Mustafa Madni, Tahir
2020Conference Proceedings, cited 0 times
Algorithm Development
BRAIN
BraTS
Radiomics
Glioma is a type of malignant brain tumor that requires early detection for patients' Overall Survival (OS) prediction and better treatment planning. This task can be simplified by computer-aided automatic segmentation of brain MRI volumes into sub-regions. MRI volume segmentation can be achieved by deep learning methods, but the highly imbalanced data makes it very challenging. In this article, we propose deep learning based solutions for glioma segmentation and patient OS prediction. To segment each pixel, we designed a simplified, slice-based version of 2D U-Net, and to predict OS, we analyzed radiomic features. The training dataset of the BraTS 2019 challenge is partitioned into train and test sets, and our primary results on the test set are promising, with Dice scores of 0.84 (whole tumor), 0.80 (core tumor), and 0.63 (enhancing tumor) in glioma segmentation. Radiomic features based on intensity and shape are extracted from the MRI volumes and the segmented tumor for the OS prediction task. We further eliminate low-variance features using Recursive Feature Elimination (RFE). Random Forest Regression is used to predict OS time. Using the intensities of peritumoral edema (label 2) from FLAIR and the necrotic and non-enhancing tumor core (label 1) along with enhancing tumor (label 4) from T1 contrast-enhanced volumes, together with patient age, we are able to predict patient OS with an accuracy of 31%.
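A rough sketch of the RFE-plus-random-forest pipeline the abstract describes, using scikit-learn; the feature matrix, target values, and all hyperparameters below are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))        # placeholder radiomic feature matrix
y = rng.uniform(30, 1500, size=200)    # placeholder survival times (days)

# Recursive Feature Elimination ranks features by forest importance
selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=10, step=5)
X_sel = selector.fit_transform(X, y)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_sel, y)
predicted_days = model.predict(X_sel[:5])
```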
Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries; 5th International Workshop, BrainLes 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 17, 2019, Revised Selected Papers, Part I
The two-volume set LNCS 11992 and 11993 constitutes the thoroughly refereed proceedings of the 5th International MICCAI Brainlesion Workshop, BrainLes 2019, the International Multimodal Brain Tumor Segmentation (BraTS) challenge, the Computational Precision Medicine: Radiology-Pathology Challenge on Brain Tumor Classification (CPM-RadPath) challenge, as well as the tutorial session on Tools Allowing Clinical Translation of Image Computing Algorithms (TACTICAL). These were held jointly at the Medical Image Computing for Computer Assisted Intervention Conference, MICCAI, in Shenzhen, China, in October 2019. The revised selected papers presented in these volumes were organized in the following topical sections: brain lesion image analysis (12 selected papers from 32 submissions); brain tumor image segmentation (57 selected papers from 102 submissions); combined MRI and pathology brain tumor classification (4 selected papers from 5 submissions); tools allowing clinical translation of image computing algorithms (2 selected papers from 3 submissions).
Convolutional 3D to 2D Patch Conversion for Pixel-Wise Glioma Segmentation in MRI Scans
Hamghalam, Mohammad
Lei, Baiying
Wang, Tianfu
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Algorithm Development
BraTS 2019
Challenge
convolutional Neural Network (CNN)
BRAIN
Magnetic Resonance Imaging (MRI)
Structural magnetic resonance imaging (MRI) has been widely utilized for the analysis and diagnosis of brain diseases. Automatic segmentation of brain tumors is a challenging task for computer-aided diagnosis due to low tissue contrast in the tumor subregions. To overcome this, we devise a novel pixel-wise segmentation framework through a convolutional 3D-to-2D MR patch conversion model to predict class labels of the central pixel in the input sliding patches. Precisely, we first extract 3D patches from each modality to calibrate slices through the squeeze and excitation (SE) block. Then, the output of the SE block is fed directly into subsequent bottleneck layers to reduce the number of channels. Finally, the calibrated 2D slices are concatenated to obtain multimodal features through a 2D convolutional neural network (CNN) for prediction of the central pixel. In our architecture, both local inter-slice and global intra-slice features are jointly exploited to predict the class label of the central voxel in a given patch through the 2D CNN classifier. We implicitly apply all modalities through trainable parameters to assign weights to the contributions of each sequence for segmentation. Experimental results on the segmentation of brain tumors in multimodal MRI scans (BraTS'19) demonstrate that our proposed method can efficiently segment the tumor regions.
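A minimal PyTorch sketch of a squeeze-and-excitation (SE) block applied to a 3D patch, as one plausible reading of the calibration step (channel counts and patch shape are invented; this is not the authors' implementation):

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Squeeze-and-excitation: recalibrate channels of a 3D feature map."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = x.mean(dim=(2, 3, 4))           # squeeze: global average pooling
        w = self.fc(w).view(b, c, 1, 1, 1)  # excite: per-channel weights
        return x * w

patch = torch.randn(2, 4, 16, 33, 33)       # (batch, channels, D, H, W), assumed
out = SEBlock3D(channels=4)(patch)          # same shape, channels recalibrated
```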
Towards Population-Based Histologic Stain Normalization of Glioblastoma
Grenko, Caleb M.
Viaene, Angela N.
Nasrallah, MacLean P.
Feldman, Michael D.
Akbari, Hamed
Bakas, Spyridon
Brainlesion2020Journal Article, cited 0 times
DICOM-Glioma-SEG
TCGA-GBM
Ivy GAP
H&E-stained slides
Pathomics
Glioblastoma (‘GBM’) is the most aggressive type of primary malignant adult brain tumor, with very heterogeneous radiographic, histologic, and molecular profiles. A growing body of advanced computational analyses are conducted towards further understanding the biology and variation in glioblastoma. To address the intrinsic heterogeneity among different computational studies, reference standards have been established to facilitate both radiographic and molecular analyses, e.g., anatomical atlas for image registration and housekeeping genes, respectively. However, there is an apparent lack of reference standards in the domain of digital pathology, where each independent study uses an arbitrarily chosen slide from their evaluation dataset for normalization purposes. In this study, we introduce a novel stain normalization approach based on a composite reference slide comprised of information from a large population of anatomically annotated hematoxylin and eosin (‘H&E’) whole-slide images from the Ivy Glioblastoma Atlas Project (‘IvyGAP’). Two board-certified neuropathologists manually reviewed and selected annotations in 509 slides, according to the World Health Organization definitions. We computed summary statistics from each of these approved annotations and weighted them based on their percent contribution to overall slide (‘PCOS’), to form a global histogram and stain vectors. Quantitative evaluation of pre- and post-normalization stain density statistics for each annotated region with PCOS>0.05% yielded a significant (largest p=0.001, two-sided Wilcoxon rank sum test) reduction of its intensity variation for both ‘H’ & ‘E’. Subject to further large-scale evaluation, our findings support the proposed approach as a potentially robust population-based reference for stain normalization.
Skull-Stripping of Glioblastoma MRI Scans Using 3D Deep Learning
Thakur, S. P.
Doshi, J.
Pati, S.
Ha, S. M.
Sako, C.
Talbar, S.
Kulkarni, U.
Davatzikos, C.
Erus, G.
Bakas, S.
Brainlesion2019Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Segmentation
Brain extraction
Brain tumor
CaPTk
IBSI
Deep learning
Glioblastoma
Skull-stripping
U-Net
Skull-stripping is an essential pre-processing step in computational neuro-imaging, directly impacting subsequent analyses. Existing skull-stripping methods have primarily targeted non-pathologically affected brains. Accordingly, they may perform suboptimally when applied to brain Magnetic Resonance Imaging (MRI) scans that have clearly discernible pathologies, such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. Here we present a performance evaluation of publicly available implementations of established 3D Deep Learning architectures for semantic segmentation (namely DeepMedic, 3D U-Net, FCN), with a particular focus on identifying a skull-stripping approach that performs well on brain tumor scans and also has a low computational footprint. We have identified a retrospective dataset of 1,796 mpMRI brain tumor scans, with corresponding manually-inspected and verified gold-standard brain tissue segmentations, acquired during standard clinical practice under varying acquisition protocols at the Hospital of the University of Pennsylvania. Our quantitative evaluation identified DeepMedic as the best performing method (Dice = 97.9, Hausdorff95 = 2.68). We release this pre-trained model through the Cancer Imaging Phenomics Toolkit (CaPTk) platform.
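The two reported metrics can be reproduced for any pair of binary masks; below is a hedged NumPy/SciPy sketch (not taken from CaPTk) that computes the 95th-percentile symmetric surface distance, assuming isotropic voxels and non-empty masks:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(pred: np.ndarray, truth: np.ndarray) -> float:
    """95th-percentile symmetric surface distance between binary masks.
    Assumes both masks are non-empty and voxel spacing is isotropic."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    pred_border = pred ^ binary_erosion(pred)
    truth_border = truth ^ binary_erosion(truth)
    dt_pred = distance_transform_edt(~pred_border)    # distance to pred surface
    dt_truth = distance_transform_edt(~truth_border)  # distance to truth surface
    d = np.concatenate([dt_truth[pred_border], dt_pred[truth_border]])
    return float(np.percentile(d, 95))

pred = np.zeros((64, 64, 64), bool); pred[20:40, 20:40, 20:40] = True
truth = np.zeros((64, 64, 64), bool); truth[22:42, 20:40, 20:40] = True
print(hd95(pred, truth))
```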
Global and Local Multi-scale Feature Fusion Enhancement for Brain Tumor Segmentation and Pancreas Segmentation
Wang, Huan
Wang, Guotai
Liu, Zijian
Zhang, Shaoting
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Algorithm Development
Fully convolutional network
Segmentation
Radiomics
PANCREAS
BRAIN
Fully convolutional networks (FCNs) have been widely applied in numerous medical image segmentation tasks. However, tissue regions usually have large variations of shape and scale, so the ability of neural networks to learn multi-scale features is important to segmentation performance. In this paper, we improve multi-scale feature fusion in medical image segmentation networks by introducing two feature fusion modules: i) a global attention multi-scale feature fusion module (GMF); ii) a local dense multi-scale feature fusion module (LMF). GMF aims to use global context information to guide the recalibration of low-level features from both spatial and channel aspects, so as to enhance the utilization of effective multi-scale features and suppress the noise of low-level features. LMF adopts a bottom-up, top-down structure to capture context information, generate semantic features, and fuse feature information at different scales. LMF can integrate local dense multi-scale context features layer by layer in the network, thus improving the ability of the network to encode interdependent relationships among boundary pixels. Based on the above two modules, we propose a novel medical image segmentation framework (GLF-Net). We evaluated the proposed network and modules on challenging brain tumor segmentation and pancreas segmentation datasets, and very competitive performance has been achieved.
Optimization with Soft Dice Can Lead to a Volumetric Bias
Segmentation is a fundamental task in medical image analysis. The clinical interest is often to measure the volume of a structure. To evaluate and compare segmentation methods, the similarity between a segmentation and a predefined ground truth is measured using metrics such as the Dice score. Recent segmentation methods based on convolutional neural networks use a differentiable surrogate of the Dice score, such as soft Dice, explicitly as the loss function during the learning phase. Even though this approach leads to improved Dice scores, we find that, both theoretically and empirically on four medical tasks, it can introduce a volumetric bias for tasks with high inherent uncertainty. As such, this may limit the method’s clinical applicability.
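For reference, a common form of the soft Dice surrogate discussed here, sketched in PyTorch (a generic binary-segmentation variant, not the exact loss from the paper):

```python
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """1 - soft Dice, averaged over the batch; target is a binary mask."""
    probs = torch.sigmoid(logits)
    dims = tuple(range(1, target.ndim))          # sum over all non-batch dims
    intersection = (probs * target).sum(dims)
    denom = probs.sum(dims) + target.sum(dims)
    dice = (2 * intersection + eps) / (denom + eps)
    return 1 - dice.mean()
```

The paper's point is that minimizing this surrogate under high inherent uncertainty (probabilities near 0.5) can systematically shift the predicted volume away from the expected volume, which matters when the clinical quantity of interest is the volume itself.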
Saliency Based Deep Neural Network for Automatic Detection of Gadolinium-Enhancing Multiple Sclerosis Lesions in Brain MRI
The appearance of contrast-enhanced pathologies (e.g. lesion, cancer) is an important marker of disease activity, stage and treatment efficacy in clinical trials. The automatic detection and segmentation of these enhanced pathologies remains a difficult challenge, as they can be very small and visibly similar to other non-pathological enhancements (e.g. blood vessels). In this paper, we propose a deep neural network classifier for the detection and segmentation of Gadolinium enhancing lesions in brain MRI of patients with Multiple Sclerosis (MS). To avoid false positive and false negative assertions, the proposed end-to-end network uses an enhancement-based attention mechanism which assigns saliency based on the differences between the T1-weighted images before and after injection of Gadolinium, and works to first identify candidate lesions and then to remove the false positives. The effect of the saliency map is evaluated on 2293 patient multi-channel MRI scans acquired during two proprietary, multi-center clinical trials for MS treatments. Inclusion of the attention mechanism results in a decrease in false positive lesion voxels over a basic U-Net [2] and DeepMedic [6]. In terms of lesion-level detection, the framework achieves a sensitivity of 82% at a false discovery rate of 0.2, significantly outperforming the other two methods when detecting small lesions. Experiments aimed at predicting the presence of Gad lesion activity in patient scans (i.e. the presence of more than 1 lesion) result in high accuracy showing: (a) significantly improved accuracy over DeepMedic, and (b) a reduction in the errors in predicting the degree of lesion activity (in terms of per scan lesion counts) over a standard U-Net and DeepMedic.
Deep Learning for Brain Tumor Segmentation in Radiosurgery: Prospective Clinical Evaluation
Shirokikh, Boris
Dalechina, Alexandra
Shevtsov, Alexey
Krivov, Egor
Kostjuchenko, Valery
Durgaryan, Amayak
Galkin, Mikhail
Osinov, Ivan
Golanov, Andrey
Belyaev, Mikhail
2020Book Section, cited 0 times
BraTS-TCGA-GBM
Radiation Therapy
Deep convolutional neural network (DCNN)
Semi-automatic segmentation
BRAIN
Stereotactic radiosurgery is a minimally-invasive treatment option for a large number of patients with intracranial tumors. As part of the therapy treatment, accurate delineation of brain tumors is of great importance. However, slice-by-slice manual segmentation on T1c MRI could be time-consuming (especially for multiple metastases) and subjective. In our work, we compared several deep convolutional networks architectures and training procedures and evaluated the best model in a radiation therapy department for three types of brain tumors: meningiomas, schwannomas and multiple brain metastases. The developed semiautomatic segmentation system accelerates the contouring process by 2.2 times on average and increases inter-rater agreement from 92.0% to 96.5%.
3D U-Net Based Brain Tumor Segmentation and Survival Days Prediction
Wang, Feifan
Jiang, Runzhou
Zheng, Liqin
Meng, Chun
Biswal, Bharat
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BraTS 2019
Segmentation
U-Net
Deep Learning
The past few years have witnessed the prevalence of deep learning in many application scenarios, among which is medical image processing. Diagnosis and treatment of brain tumors require an accurate and reliable segmentation of brain tumors as a prerequisite. However, such work conventionally requires a significant amount of brain surgeons' time. Computer vision techniques could provide surgeons relief from the tedious marking procedure. In this paper, a 3D U-Net based deep learning model has been trained with the help of brain-wise normalization and patching strategies for the brain tumor segmentation task in the BraTS 2019 competition. Dice coefficients for enhancing tumor, tumor core, and whole tumor are 0.737, 0.807 and 0.894, respectively, on the validation dataset. These three values on the test dataset are 0.778, 0.798 and 0.852. Furthermore, numerical features including the ratio of tumor size to brain size and the area of the tumor surface, as well as the age of subjects, are extracted from predicted tumor labels and have been used for the overall survival days prediction task. The accuracy was 0.448 on the validation dataset and 0.551 on the final test dataset.
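A hedged sketch of the brain-wise normalization and patching strategies mentioned (the exact recipe is not given in the abstract; the patch size and the non-zero-background convention are assumptions):

```python
import numpy as np

def brainwise_normalize(volume: np.ndarray) -> np.ndarray:
    """Z-score using only non-zero (brain) voxels; background stays 0."""
    brain = volume > 0
    mu, sigma = volume[brain].mean(), volume[brain].std() + 1e-8
    out = np.zeros_like(volume, dtype=np.float32)
    out[brain] = (volume[brain] - mu) / sigma
    return out

def random_patches(volume, labels, size=(128, 128, 128), n=4, seed=0):
    """Yield random 3D patches with matching label crops.
    Assumes the volume is at least `size` along every axis."""
    rng = np.random.default_rng(seed)
    for _ in range(n):
        corner = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, size)]
        sl = tuple(slice(c, c + p) for c, p in zip(corner, size))
        yield volume[sl], labels[sl]
```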
Aggregating Multi-scale Prediction Based on 3D U-Net in Brain Tumor Segmentation
Magnetic resonance imaging (MRI) is the dominant modality used in the initial evaluation of patients with primary brain tumors due to its superior image resolution and high safety profile. Automated segmentation of brain tumors from MRI is critical in the determination of response to therapy. In this paper, we propose a novel method which aggregates multi-scale predictions from 3D U-Net to segment enhancing tumor (ET), whole tumor (WT) and tumor core (TC) from multimodal MRI. Multi-scale predictions are derived from the decoder part of 3D U-Net at different resolutions. The final prediction takes the minimum value of the corresponding pixel from the upsampled multi-scale predictions. Aggregating multi-scale predictions adds constraints to the network, which is beneficial when training data are limited. Additionally, we employ a model ensembling strategy to further improve the performance of the proposed network. Finally, we achieve dice scores of 0.7745, 0.8640 and 0.7914, and Hausdorff distances (95th percentile) of 4.2365, 6.9381 and 6.6026 for ET, WT and TC, respectively, on the test set in BraTS 2019.
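A small PyTorch sketch of the minimum-aggregation rule described above: decoder outputs at coarser resolutions are upsampled to full size, then the voxel-wise minimum is taken across scales (tensor shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def aggregate_multiscale(preds):
    """Upsample each prediction to the finest resolution, then take the
    voxel-wise minimum across scales."""
    full = preds[0].shape[2:]                       # (D, H, W) of finest output
    up = [F.interpolate(p, size=full, mode="trilinear", align_corners=False)
          for p in preds]
    return torch.min(torch.stack(up), dim=0).values

# Hypothetical decoder outputs at three resolutions (B, C, D, H, W)
p1 = torch.rand(1, 3, 64, 64, 64)
p2 = torch.rand(1, 3, 32, 32, 32)
p3 = torch.rand(1, 3, 16, 16, 16)
final = aggregate_multiscale([p1, p2, p3])          # -> (1, 3, 64, 64, 64)
```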
Brain Tumor Synthetic Segmentation in 3D Multimodal MRI Scans
Hamghalam, Mohammad
Lei, Baiying
Wang, Tianfu
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Generative Adversarial Network (GAN)
Convolutional Neural Network (CNN)
Segmentation
Regression model
The magnetic resonance (MR) analysis of brain tumors is widely used for diagnosis and examination of tumor subregions. The overlapping area among the intensity distribution of healthy, enhancing, non-enhancing, and edema regions makes the automatic segmentation a challenging task. Here, we show that a convolutional neural network trained on high-contrast images can transform the intensity distribution of brain lesions in its internal subregions. Specifically, a generative adversarial network (GAN) is extended to synthesize high-contrast images. A comparison of these synthetic images and real images of brain tumor tissue in MR scans showed significant segmentation improvement and decreased the number of real channels for segmentation. The synthetic images are used as a substitute for real channels and can bypass real modalities in the multimodal brain tumor segmentation framework. Segmentation results on BraTS 2019 dataset demonstrate that our proposed approach can efficiently segment the tumor areas. In the end, we predict patient survival time based on volumetric features of the tumor subregions as well as the age of each case through several regression models.
Multi-step Cascaded Networks for Brain Tumor Segmentation
Li, Xiangyu
Luo, Gongning
Wang, Kuanquan
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Automatic segmentation
Segmentation
Challenge
Automatic brain tumor segmentation plays an extremely important role in the whole process of brain tumor diagnosis and treatment. In this paper, we propose a multi-step cascaded network which takes the hierarchical topology of the brain tumor substructures into consideration and segments the substructures from coarse to fine. During segmentation, the result of the former step is utilized as prior information for the next step to guide the finer segmentation process. The whole network is trained in an end-to-end fashion. Besides, to alleviate the gradient vanishing issue and reduce overfitting, we added several auxiliary outputs as a kind of deep supervision for each step and introduced several data augmentation strategies, respectively, which proved to be quite efficient for brain tumor segmentation. Lastly, focal loss is utilized to address the remarkable imbalance between the tumor regions and the background. Our model is tested on the BraTS 2019 validation dataset; the preliminary results of mean dice coefficients are 0.886, 0.813, 0.771 for the whole tumor, tumor core and enhancing tumor respectively. Code is available at https://github.com/JohnleeHIT/Brats2019.
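A standard binary focal loss in PyTorch, as one hedged reading of the loss used to handle the tumor/background imbalance (the paper's exact multi-class weighting is not specified; gamma and alpha below are the common defaults from the focal loss literature):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma: float = 2.0, alpha: float = 0.25):
    """Down-weights easy voxels so rare tumor voxels dominate the gradient.
    Simplified: alpha is applied uniformly rather than per class."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = torch.exp(-bce)                 # probability assigned to true class
    return (alpha * (1 - p_t) ** gamma * bce).mean()
```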
TuNet: End-to-End Hierarchical Brain Tumor Segmentation Using Cascaded Networks
Vu, Minh H.
Nyholm, Tufve
Löfstedt, Tommy
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
Segmentation
Glioma is one of the most common types of brain tumors; it arises in the glial cells in the human brain and in the spinal cord. In addition to having a high mortality rate, glioma treatment is also very expensive. Hence, automatic and accurate segmentation and measurement from the early stages are critical in order to prolong the survival rates of the patients and to reduce the costs of the treatment. In the present work, we propose a novel end-to-end cascaded network for semantic segmentation in the Brain Tumors in Multimodal Magnetic Resonance Imaging Challenge 2019 that utilizes the hierarchical structure of the tumor sub-regions with ResNet-like blocks and Squeeze-and-Excitation modules after each convolution and concatenation block. By utilizing cross-validation, an average ensemble technique, and a simple post-processing technique, we obtained dice scores of 88.06, 80.84, and 80.29, and Hausdorff Distances (95th percentile) of 6.10, 5.17, and 2.21 for the whole tumor, tumor core, and enhancing tumor, respectively, on the online test set. The proposed method was ranked among the top in the task of Quantification of Uncertainty in Segmentation.
Using Separated Inputs for Multimodal Brain Tumor Segmentation with 3D U-Net-like Architectures
The work presented in this paper addresses the MICCAI BraTS 2019 challenge devoted to brain tumor segmentation using magnetic resonance images. For each task of the challenge, we proposed and submitted for evaluation an original method. For the tumor segmentation task (Task 1), our convolutional neural network is based on a variant of the U-Net architecture of Ronneberger et al. with two modifications: first, we separate the four convolution parts to decorrelate the weights corresponding to each modality, and second, we provide volumes of size 240×240×3 as inputs in these convolution parts. This way, we benefit from the 3D aspect of the input signal, and we do not use the same weights for separate inputs. For the overall survival task (Task 2), we compute explainable features and use a kernel PCA embedding followed by a Random Forest classifier to build a predictor with very few training samples. For the uncertainty estimation task (Task 3), we introduce and compare lightweight methods based on simple principles which can be applied to any segmentation approach. The overall performance of each of our contributions is respectable given their low computational requirements for both training and testing.
Two-Step U-Nets for Brain Tumor Segmentation and Random Forest with Radiomics for Survival Time Prediction
Kim, Soopil
Luna, Miguel
Chikontwe, Philip
Park, Sang Hyun
2020Book Section, cited 0 times
BraTS-TCGA-LGG
Segmentation
Radiomics
Random Forest
Convolutional Neural Network (CNN)
In this paper, a two-step convolutional neural network (CNN) for brain tumor segmentation in brain MR images, together with a random forest regressor for survival prediction of high-grade glioma subjects, is proposed. The two-step CNN consists of three 2D U-Nets for utilizing global information on axial, coronal, and sagittal axes, and a 3D U-Net that uses local information in 3D patches. In our two-step setup, an initial segmentation probability map is first obtained using the ensemble of 2D U-Nets; second, a 3D U-Net takes as input both the MR image and the initial segmentation map to generate the final segmentation. Following segmentation, radiomics features from T1-weighted, T2-weighted, contrast-enhanced T1-weighted, and T2-FLAIR images are extracted with the segmentation results as a prior. Lastly, a random forest regressor is used for survival time prediction. Moreover, only a small number of features selected by the random forest regressor are used to avoid overfitting. We evaluated the proposed methods on the BraTS 2019 challenge dataset. For the segmentation task, we obtained average dice scores of 0.74, 0.85 and 0.80 for enhanced tumor core, whole tumor, and tumor core, respectively. In the survival prediction task, an average accuracy of 50.5% was obtained, showing the effectiveness of the proposed methods.
Bag of Tricks for 3D MRI Brain Tumor Segmentation
Zhao, Yuan-Xing
Zhang, Yan-Ming
Liu, Cheng-Lin
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Segmentation
BRAIN
3D brain tumor segmentation is essential for the diagnosis, monitoring, and treatment planning of brain diseases. In recent studies, the Deep Convolutional Neural Network (DCNN) has been one of the most potent methods for medical image segmentation. In this paper, we review the different kinds of tricks applied to 3D brain tumor segmentation with DCNNs. We divide such tricks into three main categories: data processing methods including data sampling, random patch-size training, and semi-supervised learning; model devising methods including architecture devising and result fusing; and optimizing processes including warm-up learning and multi-task learning. Most of these approaches are not particular to brain tumor segmentation, but applicable to other medical image segmentation problems as well. Evaluated on the BraTS 2019 online testing set, we obtain Dice scores of 0.810, 0.883 and 0.861, and Hausdorff Distances (95th percentile) of 2.447, 4.792, and 5.581 for enhanced tumor core, whole tumor, and tumor core, respectively. Our method won second place in the BraTS 2019 Challenge segmentation task.
Multi-resolution 3D CNN for MRI Brain Tumor Segmentation and Survival Prediction
In this study, an automated three-dimensional (3D) deep segmentation approach for detecting gliomas in 3D pre-operative MRI scans is proposed, followed by a classification algorithm based on random forests for survival prediction. The objective is to segment the glioma area and produce segmentation labels for its different sub-regions, i.e. the necrotic and non-enhancing tumor core, the peritumoral edema, and the enhancing tumor. The proposed deep architecture for the segmentation task encompasses two parallel streamlines at two different resolutions: one deep convolutional neural network learns local features of the input data, while the other takes a global view of the whole image. The outputs of the two streams, deemed complementary, are then merged to provide complete ensemble learning of the input image. The proposed network takes the whole image as input, instead of following patch-based approaches, in order to consider semantic features throughout the whole volume. The algorithm is trained on BraTS 2019, which included 335 training cases, and validated on 127 unseen cases from the validation dataset using a blind testing approach. The proposed method was also evaluated on the BraTS 2019 challenge test dataset of 166 cases. The results show that the proposed methods provide promising segmentations as well as survival prediction. The mean Dice overlap measures of automatic brain tumor segmentation for the validation set were 0.86, 0.77 and 0.71 for the whole tumor, core and enhancing tumor, respectively. The corresponding results for the challenge test dataset were 0.82, 0.72, and 0.70, respectively. The overall accuracy of the proposed model for the survival prediction task is 55% for the validation and 49% for the test dataset.
Two-Stage Cascaded U-Net: 1st Place Solution to BraTS Challenge 2019 Segmentation Task
Jiang, Zeyu
Ding, Changxing
Liu, Minfeng
Tao, Dacheng
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Algorithm Development
Segmentation
BRAIN
In this paper, we devise a novel two-stage cascaded U-Net to segment the substructures of brain tumors from coarse to fine. The network is trained end-to-end on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2019 training dataset. Experimental results on the testing set demonstrate that the proposed method achieved average Dice scores of 0.83267, 0.88796 and 0.83697, as well as Hausdorff distances (95%) of 2.65056, 4.61809 and 4.13071, for the enhancing tumor, whole tumor and tumor core, respectively. The approach won the 1st place in the BraTS 2019 challenge segmentation task, with more than 70 teams participating in the challenge.
Memory-Efficient Cascade 3D U-Net for Brain Tumor Segmentation
Segmentation is a routine and crucial procedure for the treatment of brain tumors. Deep learning based brain tumor segmentation methods have achieved promising performance in recent years. However, to pursue high segmentation accuracy, most of them require too much memory and computation resources. Motivated by a recently proposed partially reversible U-Net architecture that pays more attention to memory footprint, we further present a novel Memory-Efficient Cascade 3D U-Net (MECU-Net) for brain tumor segmentation in this work, which can achieve comparable segmentation accuracy with less memory and computation consumption. More specifically, MECU-Net utilizes fewer down-sampling channels to reduce the use of memory and computation resources. To make up for the accuracy loss, MECU-Net employs a multi-scale feature fusion module to enhance the feature representation capability. Additionally, a light-weight cascade model, which to some extent resolves the loss of small-target segmentation accuracy caused by model compression, is further introduced into the segmentation network. Finally, edge loss and weighted dice loss are combined to refine the brain tumor segmentation results. Experiment results on the BraTS 2019 validation set show that MECU-Net can achieve average Dice coefficients of 0.902, 0.824 and 0.777 on the whole tumor, tumor core and enhancing tumor, respectively.
A Baseline for Predicting Glioblastoma Patient Survival Time with Classical Statistical Models and Primitive Features Ignoring Image Information
Kofler, Florian
Paetzold, Johannes C.
Ezhov, Ivan
Shit, Suprosanna
Krahulec, Daniel
Kirschke, Jan S.
Zimmer, Claus
Wiestler, Benedikt
Menze, Bjoern H.
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Magnetic Resonance Imaging (MRI)
Gliomas are the most prevalent primary malignant brain tumors in adults. Until now an accurate and reliable method to predict patient survival time based on medical imaging and meta-information has not been developed [3]. Therefore, the survival time prediction task was introduced to the Multimodal Brain Tumor Segmentation Challenge (BraTS) to facilitate research in survival time prediction. Here we present our submissions to the BraTS survival challenge based on classical statistical models to which we feed the provided metadata as features. We intentionally ignore the available image information to explore how patient survival can be predicted purely by metadata. We achieve our best accuracy on the validation set using a simple median regression model taking only patient age into account. We suggest using our model as a baseline to benchmark the added predictive value of sophisticated features for survival time prediction.
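A minimal sketch of a median regression baseline on age alone, in the spirit of the model described (scikit-learn's QuantileRegressor with quantile=0.5 fits a conditional median; the data below are placeholders, and sklearn >= 1.0 is assumed):

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
age = rng.uniform(20, 85, size=(200, 1))          # placeholder patient ages
survival_days = rng.uniform(30, 1500, size=200)   # placeholder survival times

# quantile=0.5 minimizes absolute error, i.e. median regression
model = QuantileRegressor(quantile=0.5, alpha=0.0)
model.fit(age, survival_days)
print(model.predict(np.array([[62.0]])))          # median survival at age 62
```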
Brain Tumor Segmentation and Survival Prediction Using 3D Attention UNet
Islam, Mobarakol
Vibashan, V. S.
Jose, V. Jeya Maria
Wijethilake, Navodini
Utkarsh, Uppal
Ren, Hongliang
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
In this work, we develop an attention convolutional neural network (CNN) to segment brain tumors from Magnetic Resonance Images (MRI). Further, we predict the survival rate using various machine learning methods. We adopt a 3D UNet architecture and integrate channel and spatial attention with the decoder network to perform segmentation. For survival prediction, we extract some novel radiomic features based on geometry, location, the shape of the segmented tumor and combine them with clinical information to estimate the survival duration for each patient. We also perform extensive experiments to show the effect of each feature for overall survival (OS) prediction. The experimental results infer that radiomic features such as histogram, location, and shape of the necrosis region and clinical features like age are the most critical parameters to estimate the OS.
Brain Tumor Segmentation Using Dense Channels 2D U-net and Multiple Feature Extraction Network
Shi, Wei
Pang, Enshuai
Wu, Qiang
Lin, Fengming
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Dense network
U-Net
Imaging features
Semantic segmentation plays an important role in the prevention, diagnosis and treatment of brain glioma. In this paper, we propose a dense-channel 2D U-Net segmentation model with a residual unit and a feature pyramid unit. The main difference compared with other U-Net models is that the number of bottom feature components is increased, so that the network can learn more abundant patterns. We also develop a multiple feature extraction network model to extract rich and diverse features, which is conducive to segmentation. Finally, we employ a decision tree regression model to predict patient overall survival from the different texture, shape and first-order features extracted from the BraTS 2019 dataset.
Brain Tumour Segmentation on MRI Images by Voxel Classification Using Neural Networks, and Patient Survival Prediction
Sahayam, Subin
Krishna, Nanda H.
Jayaraman, Umarani
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Classification
Algorithm Development
In this paper, an algorithm for the segmentation of brain tumours and the prediction of patient survival in days is proposed. The delineation of brain tumours from magnetic resonance imaging (MRI) by experts is a time-consuming process and is susceptible to human error. Recently, most methods in the literature have used convolutional neural network architectures, their variants, and ensembles of several models to achieve state-of-the-art results. In this paper, we study a neural network architecture to classify voxels in 3D MRI brain images into their respective segment classes. The study focuses on class imbalance among tumour regions, and on pre-processing. The method has been trained and tested on the BraTS 2019 dataset. The average Dice scores for the segmentation task on the validation set are 0.47, 0.43, and 0.23 for the enhancing, whole, and core tumour regions, respectively. For the second task, linear regression has been used to predict the survival of a patient in days, achieving an accuracy of 0.465 on the online evaluation engine for the training dataset.
ONCOhabitats Glioma Segmentation Model
Juan-Albarracín, Javier
Fuster-Garcia, Elies
del Mar Álvarez-Torres, María
Chelebian, Eduard
García-Gómez, Juan M.
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Automatic segmentation
ONCOhabitats is an open online service that provides a fully automatic analysis of tumor vascular heterogeneity in gliomas based on multiparametric MRI. Having a model capable of accurately segmenting pathological tissues is critical to generating a robust analysis of vascular heterogeneity. In this study we present the segmentation model embedded in ONCOhabitats and its performance on the BraTS 2019 dataset. The model implements a residual-Inception U-Net convolutional neural network, incorporating several pre- and post-processing stages. A relabeling strategy has been applied to improve the segmentation of the necrosis of high-grade gliomas and the non-enhancing tumor of low-grade gliomas. The model was trained using 335 cases from the BraTS 2019 challenge training dataset and evaluated with 125 cases from the validation set and 166 cases from the test set. The results on the validation dataset in terms of the mean/median Dice coefficient are 0.73/0.85 in the enhancing tumor region, 0.90/0.92 in the whole tumor, and 0.78/0.89 in the tumor core. The Dice results obtained on the independent test are 0.78/0.84, 0.88/0.92 and 0.83/0.92, respectively, for the same sub-compartments of the lesion.
Brain Tumor Segmentation with Uncertainty Estimation and Overall Survival Prediction
Accurate segmentation of different sub-regions of gliomas, including peritumoral edema, necrotic core, and enhancing and non-enhancing tumor core, from multimodal MRI scans has important clinical relevance in the diagnosis, prognosis and treatment of brain tumors. However, due to the highly heterogeneous appearance and shape, segmentation of the sub-regions is very challenging. Recent developments using deep learning models have proved their effectiveness in the past several brain segmentation challenges as well as other semantic and medical image segmentation problems. Most models in brain tumor segmentation use a 2D/3D patch to predict the class label for the center voxel, and varying patch sizes and scales are used to improve model performance. However, this has low computational efficiency and a limited receptive field. U-Net is a widely used network structure for end-to-end segmentation and can be applied to the entire image or extracted patches to provide classification labels over the entire input voxels, so it is more efficient and expected to yield better performance with larger input sizes. In this paper we developed a deep-learning-based segmentation method using an ensemble of 3D U-Nets with different hyper-parameters. Furthermore, we estimated the uncertainty of the segmentation from the probabilistic outputs of each network and studied the correlation between the uncertainty and the performance. Preliminary results showed the effectiveness of the segmentation model. Finally, we developed a linear model for survival prediction using extracted imaging and non-imaging features, which, despite its simplicity, can effectively reduce overfitting and regression errors.
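One common way to turn an ensemble's probabilistic outputs into a voxel-wise uncertainty map is the predictive entropy of the averaged class probabilities; a hedged NumPy sketch (the array shapes are assumptions, not the paper's configuration):

```python
import numpy as np

def predictive_entropy(prob_maps: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Entropy of the mean softmax output across an ensemble.
    prob_maps: (n_models, n_classes, D, H, W); higher output = less certain."""
    mean_p = prob_maps.mean(axis=0)
    return -(mean_p * np.log(mean_p + eps)).sum(axis=0)

# Five ensemble members, four classes, on a toy 64^3 volume
probs = np.random.dirichlet([1, 1, 1, 1], size=(5, 64, 64, 64))
probs = np.moveaxis(probs, -1, 1)          # -> (5, 4, 64, 64, 64)
uncertainty = predictive_entropy(probs)    # -> (64, 64, 64)
```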
Cascaded Global Context Convolutional Neural Network for Brain Tumor Segmentation
A cascade of global context convolutional neural networks is proposed to segment multi-modality MR images with brain tumor into three subregions: enhancing tumor, whole tumor and tumor core. Each network is a modification of the 3D U-Net consisting of residual connection, group normalization and deep supervision. In addition, we apply Global Context (GC) block to capture long-range dependency and inter-channel dependency. We use a combination of logarithmic Dice loss and weighted cross entropy loss to focus on less accurate voxels and improve the accuracy. Experiments with BraTS 2019 validation set show the proposed method achieved average Dice scores of 0.77338, 0.90712, 0.83911 for enhancing tumor, whole tumor and tumor core, respectively. The corresponding values for BraTS 2019 testing set were 0.79303, 0.87962, 0.82887 for enhancing tumor, whole tumor and tumor core, respectively.
Multi-task Learning for Brain Tumor Segmentation
Weninger, Leon
Liu, Qianyu
Merhof, Dorit
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Accurate and reproducible detection of a brain tumor and segmentation of its sub-regions has high relevance in clinical trials and practice. Numerous recent publications have shown that deep learning algorithms are well suited for this application. However, fully supervised methods require a large amount of annotated training data. To obtain such data, time-consuming expert annotations are necessary. Furthermore, the enhancing core appears to be the most challenging sub-region to segment. Therefore, we propose a novel and straightforward method to improve brain tumor segmentation by joint learning of three related tasks with a partly shared architecture. In addition to the tumor segmentation, image reconstruction and detection of enhancing tumor are learned simultaneously using a shared encoder. Meanwhile, different decoders are used for the different tasks, allowing for arbitrary switching of the loss function. In effect, this means that the architecture can partly learn on data without annotations by using only the autoencoder part. This makes it possible to train on bigger, but unannotated, datasets, as only the segmenting decoder needs to be fine-tuned solely on annotated images. The second auxiliary task, detecting the presence of enhancing tumor tissue, is intended to focus the network on this area and provides further information for postprocessing. The final prediction on the BraTS validation data using our method gives Dice scores of 0.89, 0.79 and 0.75 for the whole tumor, tumor core and the enhancing tumor region, respectively.
The paper demonstrates the use of a fully convolutional neural network for glioma segmentation on the BraTS 2019 dataset. A three-layer deep encoder-decoder architecture is used, with dense connections in the encoder part to propagate information from the coarse layers to the deep layers. This architecture is used to train the three tumor sub-components separately. Sub-component training weights are initialized with whole tumor weights to obtain the localization of the tumor within the brain. In the end, the three segmentation results are merged to obtain the entire tumor segmentation. The Dice similarity on the training dataset with the focal loss implementation is 0.92, 0.90, and 0.79 for the whole tumor, tumor core, and enhancing tumor, respectively. Radiomic features extracted from the segmentation results are used to predict survival. Along with these features, age and statistical features are used to predict the overall survival of patients using random forest regressors. The overall survival prediction method outperformed the other methods on the validation dataset leaderboard with 58.6% accuracy. This finding is consistent with the performance on the BraTS 2019 test set, with 57.9% accuracy.
Improving Brain Tumor Segmentation with Multi-direction Fusion and Fine Class Prediction
Liu, Sun’ao
Guo, Xiaonan
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Convolutional neural networks have been broadly used for medical image analysis. Due to its characteristics, segmentation of glioma is considered to be one of the most challenging tasks. In this paper, we propose a novel Multi-direction Fusion Network (MFNet) for brain tumor segmentation with 3D multimodal MRI data. Unlike conventional 3D networks, the feature-extracting process is decomposed and fused in the proposed network. Furthermore, we design an additional task called Fine Class Prediction to reinforce the encoder and prevent over-segmentation. The proposed methods finally obtain dice scores of 0.81796, 0.8227, 0.88459 for enhancing tumor, tumor core and whole tumor respectively on BraTS 2019 test set.
An Ensemble of 2D Convolutional Neural Network for 3D Brain Tumor Segmentation
Pawar, Kamlesh
Chen, Zhaolin
Jon Shah, N.
Egan, Gary F.
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
We propose an ensemble of 2D convolutional neural networks to predict the 3D brain tumor segmentation mask using the multi-contrast brain images. Pretrained Resnet50 and Nasnet-mobile architectures were used as encoders, each appended with a decoder network to create an encoder-decoder neural network architecture. The encoder-decoder network was trained end to end using T1, T1 contrast-enhanced, T2 and T2-Flair images to classify each pixel in the 2D input image as either no tumor, necrosis/non-enhancing tumor (NCR/NET), enhancing tumor (ET) or edema (ED). Separate Resnet50 and Nasnet-mobile architectures were trained for axial, sagittal and coronal slices. Predictions from 5 inferences, including Resnet50 at all three orientations and Nasnet-mobile at two orientations, were averaged to predict the final probabilities and subsequently the tumor mask. The mean dice scores calculated from 166 test cases were 0.8865, 0.7372 and 0.7743 for whole tumor, tumor core and enhancing tumor, respectively.
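A hedged sketch of the averaging step: once each of the five inferences has been resampled into a common orientation as per-class probability volumes, the ensemble mean is taken before the argmax (the shapes below are invented):

```python
import numpy as np

def ensemble_volume(prob_stacks):
    """Average per-model class-probability volumes, then argmax per voxel."""
    mean_probs = np.mean(np.stack(prob_stacks), axis=0)
    return mean_probs.argmax(axis=0)       # -> (D, H, W) label volume

# Five inferences: e.g. Resnet50 on three orientations, Nasnet-mobile on two,
# each already resampled to (n_classes, D, H, W)
inferences = [np.random.rand(4, 32, 32, 32) for _ in range(5)]
mask = ensemble_volume(inferences)         # labels 0..3 per voxel
```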
An Integrative Analysis of Image Segmentation and Survival of Brain Tumour Patients
Starke, Sebastian
Eckert, Carlchristian
Zwanenburg, Alex
Speidel, Stefanie
Löck, Steffen
Leger, Stefan
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Our contribution to the BraTS 2019 challenge consisted of a deep learning based approach for segmentation of brain tumours from MR images using cross validation ensembles of 2D-UNet models. Furthermore, different approaches for the prediction of patient survival time using clinical as well as imaging features were investigated. A simple linear regression model using patient age and tumour volumes outperformed more elaborate approaches like convolutional neural networks or radiomics-based analysis with an accuracy of 0.55 on the validation cohort and 0.51 on the test cohort.
Triplanar Ensemble of 3D-to-2D CNNs with Label-Uncertainty for Brain Tumor Segmentation
McKinley, Richard
Rebsamen, Michael
Meier, Raphael
Wiest, Roland
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
We introduce a modification of our previous 3D-to-2D fully convolutional architecture, DeepSCAN, replacing batch normalization with instance normalization, and adding a lightweight local attention mechanism. These networks are trained using a previously described loss function which models label noise and uncertainty. We present results on the validation dataset of the Multimodal Brain Tumor Segmentation Challenge 2019.
Memory Efficient Brain Tumor Segmentation Using an Autoencoder-Regularized U-Net
Early diagnosis and accurate segmentation of brain tumors are imperative for successful treatment. Unfortunately, manual segmentation is time consuming, costly and despite extensive human expertise often inaccurate. Here, we present an MRI-based tumor segmentation framework using an autoencoder-regularized 3D-convolutional neural network. We trained the model on manually segmented structural T1, T1ce, T2, and Flair MRI images of 335 patients with tumors of variable severity, size and location. We then tested the model using independent data of 125 patients and successfully segmented brain tumors into three subregions: the tumor core (TC), the enhancing tumor (ET) and the whole tumor (WT). We also explored several data augmentations and preprocessing steps to improve segmentation performance. Importantly, our model was implemented on a single NVIDIA GTX1060 graphics unit and hence optimizes tumor segmentation for widely affordable hardware. In sum, we present a memory-efficient and affordable solution to tumor segmentation to support the accurate diagnostics of oncological brain pathologies.
Brain Tumor Segmentation Using Attention-Based Network in 3D MRI Images
Xu, Xiaowei
Zhao, Wangyuan
Zhao, Jun
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Convolutional Neural Network (CNN)
Gliomas are the most common primary brain malignancies. Identifying the sub-regions of gliomas before surgery is meaningful and may extend the survival of patients. However, due to the heterogeneous appearance and shape of gliomas, it is a challenge to accurately segment the enhancing tumor, the necrotic and non-enhancing tumor core, and the peritumoral edema. In this study, an attention-based network was used to segment the glioma sub-regions in multi-modality MRI scans. Attention U-Net was employed as the basic architecture of the proposed network. The attention gates help the network focus on the task-relevant regions in the image. Besides the spatial-wise attention gates, the channel-wise attention gates proposed in SE-Net were also embedded into the segmentation network. This attention mechanism in the feature dimension prompts the network to focus on the useful feature maps. Furthermore, in order to reduce false positives, a training strategy combined with a sampling strategy was proposed in our study. The segmentation performance of the proposed network was evaluated on the BraTS 2019 validation dataset and testing dataset. On the validation dataset, the dice similarity coefficients of enhancing tumor, tumor core and whole tumor were 0.759, 0.807 and 0.893, respectively. On the testing dataset, the dice scores of enhancing tumor, tumor core and whole tumor were 0.794, 0.814 and 0.866, respectively.
Multimodal Brain Image Segmentation and Analysis with Neuromorphic Attention-Based Learning
Automated image analysis of brain tumors from 3D Magnetic Resonance Imaging (MRI) is necessary for the diagnosis and treatment planning of the disease, because manual practices of segmenting tumors are time-consuming, expensive and can be subject to clinician diagnostic error. We propose a novel neuromorphic attention-based learner (NABL) model to train the deep neural network for tumor segmentation, a task challenged by typically small datasets and the difficulty of exact segmentation class determination. The core idea is to introduce neuromorphic attention to guide the learning process of the deep neural network architecture, providing a highlighted region of interest for tumor segmentation. Neuromorphic convolution filters mimicking visual cortex neurons are adopted for the neuromorphic attention generation, transferred from neuromorphic convolutional neural networks (CNNs) pre-trained for adversarial imagery environments. Our pre-trained neuromorphic CNN has feature extraction ability applicable to brain MRI data, verified by overall survival prediction without tumor segmentation training at the Brain Tumor Segmentation (BraTS) Challenge 2018. NABL provides an affordable solution for more accurate and faster image analysis of brain tumor segmentation, by incorporating the typical encoder-decoder U-Net architecture of CNNs. Experimental results illustrate the effectiveness and feasibility of our proposed method with flexible requirements on clinical diagnostic decision data, from segmentation to overall survival prediction. The overall survival prediction accuracy is 55% for predicting the overall survival period in days on the BraTS 2019 validation dataset, and 48.6% on the BraTS 2019 test dataset.
Improving Brain Tumor Segmentation in Multi-sequence MR Images Using Cross-Sequence MR Image Generation
Zhao, Guojing
Zhang, Jianpeng
Xia, Yong
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Accurate brain tumor segmentation using multi-sequence magnetic resonance (MR) imaging plays a pivotal role in clinical practice and research settings. Despite their prevalence, deep learning-based segmentation methods, which usually use multiple MR sequences as input, still have limited performance, partly due to their limited image representation ability. In this paper, we propose a brain tumor segmentation (BraTSeg) model, which uses cross-sequence MR image generation as a self-supervision tool to improve segmentation accuracy. This model is an ensemble of three image segmentation and generation (ImgSG) models, which are designed for simultaneous segmentation of brain tumors and generation of T1, T2, and Flair sequences, respectively. We evaluated the proposed BraTSeg model on the BraTS 2019 dataset and achieved an average Dice similarity coefficient (DSC) of 81.93%, 87.80%, and 83.44% in the segmentation of enhancing tumor, whole tumor, and tumor core on the testing set, respectively. Our results suggest that cross-sequence MR image generation is an effective self-supervision method that can improve the accuracy of brain tumor segmentation, and that the proposed BraTSeg model can produce satisfactory segmentation of brain tumors and intra-tumor structures.
Ensemble of CNNs for Segmentation of Glioma Sub-regions with Survival Prediction
Gliomas are the most common malignant brain tumors, with varying levels of aggressiveness, and Magnetic Resonance Imaging (MRI) is used for their diagnosis. As these tumors are highly heterogeneous in shape and appearance, their segmentation becomes a challenging task. In this paper we propose an ensemble of three Convolutional Neural Network (CNN) architectures, viz. (i) P-Net, (ii) U-Net with spatial pooling, and (iii) ResInc-Net, for glioma sub-region segmentation. The segmented tumor Volume of Interest (VOI) is further used for extracting spatial habitat features for the prediction of Overall Survival (OS) of patients. A new aggregated loss function is used to help in effectively handling the data imbalance problem. The concepts of modeling predictive distributions, test-time augmentation and ensembling methods are used to reduce uncertainty and increase the confidence of the model prediction. The proposed integrated system (for segmentation and OS prediction) is trained and validated on the Brain Tumor Segmentation (BraTS) Challenge 2019 dataset. We ranked among the top performing methods on Segmentation and Overall Survival prediction on the validation dataset, as observed from the leaderboard. We also ranked among the top four in the Uncertainty Quantification task on the testing dataset.
Brain Tumor Segmentation Based on Attention Mechanism and Multi-model Fusion
Guo, Xutao
Yang, Chushu
Ma, Ting
Zhou, Pengzheng
Lu, Shangfeng
Ji, Nan
Li, Deling
Wang, Tong
Lv, Haiyan
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
U-Net
Brain tumors are abnormal cells growing uncontrollably in the brain. The incidence and mortality of brain tumors are very high. Among them, gliomas are the most common primary malignant tumors, with different degrees of invasion. The segmentation of brain tumors is a prerequisite for disease diagnosis, surgical planning and prognosis. According to the characteristics of brain tumor data, we designed an automatic multi-model fusion brain tumor segmentation algorithm based on an attention mechanism [1]. Our network architecture is slightly modified from 3D U-Net [2], with the attention mechanism added to the 3D U-Net model. According to the patch size and attention mechanism used in the training process, four independent networks are designed; here, we use 64 × 64 × 64 and 128 × 128 × 128 patch sizes to train the different sub-networks. Finally, the results of the four models at the label layer are combined to obtain the final segmentation results. This multi-model fusion method can effectively improve the robustness of the algorithm, while the attention method can improve the feature extraction ability of the network and the segmentation accuracy. Our experimental study on the newly released BraTS dataset (BraTS 2019) shows that our method accurately delineates brain tumors.
Automatic Brain Tumour Segmentation and Biophysics-Guided Survival Prediction
Wang, Shuo
Dai, Chengliang
Mo, Yuanhan
Angelini, Elsa
Guo, Yike
Bai, Wenjia
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Radiomics
Gliomas are the most common malignant brain tumours with intrinsic heterogeneity. Accurate segmentation of gliomas and their sub-regions on multi-parametric magnetic resonance images (mpMRI) is of great clinical importance, which defines tumour size, shape and appearance and provides abundant information for preoperative diagnosis, treatment planning and survival prediction. Recent developments on deep learning have significantly improved the performance of automated medical image segmentation. In this paper, we compare several state-of-the-art convolutional neural network models for brain tumour image segmentation. Based on the ensembled segmentation, we present a biophysics-guided prognostic model for patient overall survival prediction which outperforms a data-driven radiomics approach. Our method won the second place of the MICCAI 2019 BraTS Challenge for the overall survival prediction.
Multimodal Brain Tumor Segmentation and Survival Prediction Using Hybrid Machine Learning
Pei, Linmin
Vidyaratne, Lasitha
Monibor Rahman, M.
Shboul, Zeina A.
Iftekharuddin, Khan M.
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
In this paper, we propose a UNet-VAE deep neural network architecture for brain tumor segmentation and survival prediction. The UNet-VAE architecture showed great success in brain tumor segmentation in the multimodal brain tumor segmentation (BraTS) 2018 challenge. In this work, we utilize the UNet-VAE to extract high-dimensional features, then fuse them with hand-crafted texture features to perform survival prediction. We apply the proposed method to the BraTS 2019 validation dataset for both tumor segmentation and survival prediction. The tumor segmentation results show Dice similarity coefficients (DSC) of 0.759, 0.90, and 0.806 for enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively. For the feature-fusion-based survival prediction method, we achieve 56.4% classification accuracy with mean square error (MSE) 101577 on training, and 51.7% accuracy with MSE 70590 on validation. In the testing phase, the proposed method achieves average DSC of 0.81328, 0.88616, and 0.84084 for ET, WT, and TC, respectively. Moreover, the model offers an accuracy of 0.439 with MSE of 449009.135 for overall survival prediction in the testing phase.
Robust Semantic Segmentation of Brain Tumor Regions from 3D MRIs
Myronenko, Andriy
Hatamizadeh, Ali
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
pyTorch
The multimodal brain tumor segmentation challenge (BraTS) brings together researchers to improve automated methods for 3D MRI brain tumor segmentation. Tumor segmentation is one of the fundamental vision tasks necessary for diagnosis and treatment planning of the disease. Previous years' winning methods were all deep-learning based, thanks to the advent of modern GPUs, which allow fast optimization of deep convolutional neural network architectures. In this work, we explore best practices of 3D semantic segmentation, including a conventional encoder-decoder architecture as well as combined loss functions, in an attempt to further improve segmentation accuracy. We evaluate the method on the BraTS 2019 challenge.
Brain Tumor Segmentation with Cascaded Deep Convolutional Neural Network
Cancer is the second leading cause of death globally and was responsible for an estimated 9.6 million deaths in 2018. Approximately 70% of deaths from cancer occur in low- and middle-income countries. One defining feature of cancer is the rapid creation of abnormal cells that grow uncontrollably, forming tumors. Gliomas are brain tumors that arise from the glial cells in the brain and comprise 80% of all malignant brain tumors. Accurate delineation of tumor cells from healthy tissue is important for precise treatment planning. Because of the varied forms, shapes and sizes of tumor tissue and its similarity to the rest of the brain, segmentation of glial tumors is challenging. In this study we propose a fully automatic two-step approach for Glioblastoma (GBM) brain tumor segmentation with a cascaded U-Net. Training patches are extracted from 335 cases of the Brain Tumor Segmentation (BraTS) Challenge, and results are validated on 125 patients. The proposed approach is evaluated quantitatively in terms of Dice Similarity Coefficient (DSC) and Hausdorff95 distance.
Fully Automated Brain Tumor Segmentation and Survival Prediction of Gliomas Using Deep Learning and MRI
Tumor segmentation of magnetic resonance images is a critical step in providing objective measures for predicting aggressiveness and response to therapy in gliomas. It has valuable applications in diagnosis, monitoring, and treatment planning of brain tumors. The purpose of this work was to develop a fully automated deep learning method for tumor segmentation and survival prediction. Well-curated brain tumor cases with multi-parametric MR images from the BraTS 2019 dataset were used. A three-group framework was implemented, with each group consisting of three 3D Dense-UNets to segment whole tumor (WT), tumor core (TC) and enhancing tumor (ET). Each group was trained using different approaches and loss functions. The output segmentations for a particular label from the respective networks of the three groups were ensembled and post-processed. For survival analysis, a linear regression model based on imaging texture features and wavelet texture features extracted from each of the segmented components was implemented. The networks were tested on both the BraTS 2019 validation and testing datasets. The segmentation networks achieved average Dice scores of 0.901, 0.844 and 0.801 for WT, TC and ET respectively on the validation dataset, and 0.877, 0.835 and 0.803 for WT, TC and ET respectively on the testing dataset. The survival prediction network achieved an accuracy score of 0.55 and mean squared error (MSE) of 119244 on the validation dataset, and an accuracy score of 0.51 and MSE of 455500 on the testing dataset. This method could be implemented as a robust tool to assist clinicians in primary brain tumor management and follow-up.
3D Automatic Brain Tumor Segmentation Using a Multiscale Input U-Net Network
Rosas González, S.
Birgui Sekou, T.
Hidane, M.
Tauber, C.
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Quantitative analysis of brain tumors is crucial for surgery planning, follow-up and subsequent radiation treatment of glioma. Finding an automatic and reproducible solution may save physicians time and contribute to improving the overall poor prognosis of glioma patients. In this paper, we present our current BraTS contribution on developing an accurate and robust tumor segmentation algorithm. Our network architecture implements a multiscale input module designed to maximize the extraction of features associated with the multiple image modalities before they are merged in a modified U-Net network, avoiding the loss of specific information provided by each modality and improving brain tumor segmentation performance. Our method's current performance on the BraTS 2019 test set comprises Dice scores of 0.775 ± 0.212, 0.865 ± 0.133 and 0.789 ± 0.266 for enhancing tumor, whole tumor and tumor core, respectively, with an overall Dice of 0.81.
Semi-supervised Variational Autoencoder for Survival Prediction
Pálsson, Sveinn
Cerri, Stefano
Dittadi, Andrea
Leemput, Koen Van
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Semi-supervised learning
In this paper we propose a semi-supervised variational autoencoder for classification of overall survival groups from tumor segmentation masks. The model can use the output of any tumor segmentation algorithm, removing all assumptions on the scanning platform and the specific type of pulse sequences used, thereby increasing its generalization properties. Due to its semi-supervised nature, the method can learn to classify survival time by using a relatively small number of labeled subjects. We validate our model on the publicly available dataset from the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2019.
Multi-modal U-Nets with Boundary Loss and Pre-training for Brain Tumor Segmentation
Ribalta Lorenzo, Pablo
Marcinkiewicz, Michal
Nalepa, Jakub
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Gliomas are the most common primary brain tumors, and their manual segmentation is a time-consuming and user-dependent process. We present a two-step multi-modal U-Net-based architecture with unsupervised pre-training and surface loss component for brain tumor segmentation which allows us to seamlessly benefit from all magnetic resonance modalities during the delineation. The results of the experimental study, performed over the newest release of the BraTS test set, revealed that our method delivers accurate brain tumor segmentation, with the average DICE score of 0.72, 0.86, and 0.77 for the enhancing tumor, whole tumor, and tumor core, respectively. The total time required to process one study using our approach amounts to around 20 s.
Multidimensional and Multiresolution Ensemble Networks for Brain Tumor Segmentation
Murugesan, Gowtham Krishnan
Nalawade, Sahil
Ganesh, Chandan
Wagner, Ben
Yu, Fang F.
Fei, Baowei
Madhuranthakam, Ananth J.
Maldjian, Joseph A.
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
In this work, we developed multiple 2D and 3D segmentation models with multiresolution input to segment brain tumor components, and then ensembled them to obtain robust segmentation maps. Ensembling reduced overfitting and resulted in a more generalized model. Multiparametric MR images of 335 subjects from the BraTS 2019 challenge were used for training the models. Further, we tested a classical machine learning algorithm with features extracted from the segmentation maps to classify subject survival range. Preliminary results on the BraTS 2019 validation dataset demonstrated excellent performance, with Dice scores of 0.898, 0.784, 0.779 for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET), respectively, and an accuracy of 34.5% for predicting survival. The ensemble of multiresolution 2D networks achieved Dice scores of 88.75%, 83.28% and 79.34% for WT, TC, and ET respectively on a test dataset of 166 subjects.
Accurate automatic segmentation of brain tumors improves the prospects for patient survival. Convolutional Neural Networks (CNNs) are a popular approach for automated image analysis and provide excellent results compared with classical machine learning algorithms. In this paper, we present an approach to incorporate contextual information from multiple brain MRI labels. To address the problems of brain tumor segmentation, we combine residual-dense connections with atrous convolutional layers at multiple rates in the popular 3D U-Net architecture. To train and validate our proposed algorithm, we used the different BraTS 2019 datasets. The results are promising across the different evaluation metrics.
Two Stages CNN-Based Segmentation of Gliomas, Uncertainty Quantification and Prediction of Overall Patient Survival
This paper proposes, in the context of brain tumor study, a fast automatic method that segments tumors and predicts patient overall survival. The segmentation stage is implemented using two fully convolutional networks based on VGG-16, pre-trained on ImageNet for natural image classification and fine-tuned with the training dataset of the MICCAI 2019 BraTS Challenge. The first network yields a binary segmentation (background vs lesion) and the second one focuses on the enhancing and non-enhancing tumor classes. The final multiclass segmentation is a fusion of the results of these two networks. The prediction stage is implemented using kernel principal component analysis and random forest classifiers. It only requires a predicted segmentation of the tumor and a homemade atlas. Its simplicity allows it to be trained with very few examples, and it can be used after any segmentation process.
Detection and Segmentation of Brain Tumors from MRI Using U-Nets
Kotowski, Krzysztof
Nalepa, Jakub
Dudzik, Wojciech
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Graphics Processing Units (GPU)
In this paper, we exploit a cascaded U-Net architecture to perform detection and segmentation of brain tumors (low- and high-grade gliomas) from magnetic resonance scans. First, we detect tumors in a binary-classification setting, and they later undergo multi-class segmentation. The total processing time of a single input volume amounts to around 15 s using a single GPU. The preliminary experiments over the BraTS’19 validation set revealed that our approach delivers high-quality tumor delineation and offers instant segmentation.
Multimodal Segmentation with MGF-Net and the Focal Tversky Loss Function
In neuro-imaging, MRI is commonly used to acquire multiple sequences simultaneously, including T1, T2 and FLAIR. Multimodal image segmentation involves learning an optimal, joint representation of these sequences for accurate delineation of the region of interest. The most commonly utilized fusion scheme for multimodal segmentation is early fusion, where each modality sequence is treated as an independent channel. In this work, we propose a fusion architecture termed the Moment Gated Fusion (MGF) network, which combines feature moments from individual modality sequences for the segmentation task. We supervise our network with a variant of the focal Tversky loss function. Our architecture promotes explainability and lightweight CNN design, and achieved 0.687, 0.843 and 0.751 DSC scores on the BraTS 2019 test cohort, which is competitive with the commonly used vanilla U-Net.
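For reference, the standard focal Tversky loss that the abstract's variant builds on can be written compactly; the sketch below uses common default parameters, not necessarily the paper's, and assumes the prediction is already a probability map (e.g. after a sigmoid).

    import torch

    def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
        """Focal Tversky loss for a binary probability map `pred` and mask `target`.

        alpha weights false negatives, beta weights false positives; gamma < 1
        focuses training on harder examples.
        """
        p = pred.reshape(-1)
        t = target.reshape(-1)
        tp = (p * t).sum()
        fn = ((1 - p) * t).sum()
        fp = (p * (1 - t)).sum()
        tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
        return (1 - tversky) ** gamma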
Brain Tumor Segmentation Using 3D Convolutional Neural Network
Liang, Kaisheng
Lu, Wenlian
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Brain tumor segmentation is one of the most crucial procedures in the diagnosis of brain tumors, because it is of great significance for the analysis and visualization of brain structures that can guide surgery. Following the development of the natural-scene segmentation model FCN, U-Net has emerged as the most representative encoder-decoder model, and many current efforts aim to improve this architecture for better performance. In this paper, we focus on improving the encoder-decoder network for the analysis of 3D medical images. We propose an additional path to enhance the encoder part of the model and two separate up-sampling paths for the decoder part. The proposed approach was trained and evaluated on the BraTS 2019 dataset.
DDU-Nets: Distributed Dense Model for 3D MRI Brain Tumor Segmentation
Zhang, Hanxiao
Li, Jingxiong
Shen, Mali
Wang, Yaqi
Yang, Guang-Zhong
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Distributed learning
multimodal MRI
Segmentation of brain tumors and their subregions remains a challenging task due to their weak features and deformable shapes. In this paper, three patterns (cross-skip, skip-1 and skip-2) of distributed dense connections (DDCs) are proposed to enhance feature reuse and propagation of CNNs by constructing tunnels between key layers of the network. For better detecting and segmenting brain tumors from multi-modal 3D MR images, CNN-based models embedded with DDCs (DDU-Nets) are trained efficiently from pixel to pixel with a limited number of parameters. Postprocessing is then applied to refine the segmentation results by reducing the false-positive samples. The proposed method is evaluated on the BraTS 2019 dataset with results demonstrating the effectiveness of the DDU-Nets while requiring less computational cost.
Brain Tumor Segmentation Based on 3D Residual U-Net
We propose a deep learning based approach for automatic brain tumor segmentation utilizing a three-dimensional U-Net extended by residual connections. In this work, we did not incorporate architectural modifications to the existing 3D U-Net, but rather evaluated different training strategies for potential improvement of performance. Our model was trained on the dataset of the International Brain Tumor Segmentation (BraTS) challenge 2019, which comprises multi-parametric magnetic resonance imaging (mpMRI) scans from 335 patients diagnosed with a glial tumor. Furthermore, our model was evaluated on the BraTS 2019 independent validation data, which consisted of another 125 brain tumor mpMRI scans. On the BraTS 2019 test data, our 3D Residual U-Net obtained mean Dice scores of 0.697, 0.828, 0.772 and Hausdorff95 distances of 25.56, 14.64, 26.69 for enhancing tumor, whole tumor, and tumor core, respectively.
Automatic Segmentation of Brain Tumor from 3D MR Images Using SegNet, U-Net, and PSP-Net
Weng, Yan-Ting
Chan, Hsiang-Wei
Huang, Teng-Yi
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Ensemble learning
In this study, we used three two-dimensional convolutional neural networks, SegNet, U-Net, and PSP-Net, to perform automatic segmentation of brain tumors from three-dimensional MR datasets. In the training stage, we extracted 2D slices along three slice orientations as the input tensors of the networks. In the prediction stage, we predict each volume several times, slicing along different orientations; the results show that volumes predicted more times yield better outcomes than those predicted fewer times. We also implemented two ensemble methods to combine the results of the three networks. According to the results, all of the above strategies contributed to improving the segmentation accuracy.
3D Deep Residual Encoder-Decoder CNNS with Squeeze-and-Excitation for Brain Tumor Segmentation
Yan, Kai
Sun, Qiuchang
Li, Ling
Li, Zhicheng
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Segmenting brain tumors from multimodal MR scans is thought to be highly beneficial for brain abnormality diagnosis, prognosis monitoring, and treatment evaluation. Due to their highly heterogeneous appearance and shape, segmentation of brain tumors in multimodal MRI scans is a challenging task in medical image analysis. In recent years, many segmentation algorithms based on neural network architectures have been proposed to address this task. Building on previous state-of-the-art algorithms, we explored multimodal brain tumor segmentation in 2D, 2.5D and 3D space, and experimented extensively with attention blocks to improve the segmentation result. In this paper, we describe a 3D deep residual encoder-decoder CNN with a Squeeze-and-Excitation block for brain tumor segmentation. To learn more effective image features, we utilize an attention module after each Res-block to weight each channel, emphasizing useful features while suppressing invalid ones. To deal with class imbalance, we formulate a weighted Dice loss function. We find that a 3D segmentation network with attention blocks that enhance context features can significantly improve performance. In addition, the results of data preprocessing have a great impact on segmentation performance. Our method obtained Dice scores of 0.70, 0.85 and 0.80 for segmenting enhancing tumor, whole tumor and tumor core, respectively, on the testing dataset.
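The Squeeze-and-Excitation block named here has a well-known generic form: global average pooling produces a channel descriptor, a small bottleneck MLP produces per-channel weights, and the feature map is rescaled. A minimal 3D sketch follows; the reduction ratio is an assumption, not the paper's setting.

    import torch
    import torch.nn as nn

    class SEBlock3D(nn.Module):
        """Channel squeeze-and-excitation for 5D feature maps (N, C, D, H, W)."""
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            n, c = x.shape[:2]
            w = x.mean(dim=(2, 3, 4))          # squeeze: global average pool -> (N, C)
            w = self.fc(w).view(n, c, 1, 1, 1)  # excite: per-channel weights
            return x * w                        # re-weight channels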
Overall Survival Prediction Using Conventional MRI Features
Ren, Yanhao
Sun, Pin
Lu, Wenlian
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Support Vector Machine (SVM)
Semi-automatic segmentation
Imaging features
Gliomas are common primary brain malignancies. The sub-regions of gliomas depicted by MRI scans reflect varying biological properties, which influence neurosurgeons' decisions on whether, and what kind of, resection should be performed. The number of survival days after gross total resection is also of great concern. In this paper, we propose a semi-automatic method for segmentation and extract features from slices of MRI scans, including conventional MRI features and clinical features. Thirteen features per subject are ultimately selected, and a support vector regression is fitted to the training data.
A Multi-path Decoder Network for Brain Tumor Segmentation
Xue, Yunzhe
Xie, Meiyan
Farhat, Fadi G.
Boukrina, Olga
Barrett, A. M.
Binder, Jeffrey R.
Roshan, Usman W.
Graves, William W.
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
The identification of brain tumor type, shape, and size from MRI images plays an important role in glioma diagnosis and treatment. Manually identifying the tumor is time-consuming and prone to error. And while information from different image modalities may help in principle, using these modalities for manual tumor segmentation may be even more time-consuming. Convolutional U-Net architectures with encoders and decoders are the state of the art in automated image segmentation. Often only a single encoder and decoder are used, with different modalities and regions of the tumor sharing the same model parameters, which may lead to incorrect segmentations. We propose a convolutional U-Net that has separate, independent encoders for each image modality. The outputs from each encoder are concatenated and given to separate fusion and decoder blocks for each region of the tumor. The features from each decoder block are then calibrated in a final feature fusion block, after which the model gives its final predictions. Our network is an end-to-end model that simplifies training and reproducibility. On the BraTS 2019 validation dataset, our model achieves average Dice values of 0.75, 0.90, and 0.83 for the enhancing tumor, whole tumor, and tumor core subregions respectively.
The Tumor Mix-Up in 3D Unet for Glioma Segmentation
Yin, Pengyu
Hu, Yingdong
Liu, Jing
Duan, Jiaming
Yang, Wei
Cheng, Kun
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
3D U-Net
Automated segmentation of glioma and its subregions has significant importance throughout the clinical workflow, including diagnosis, monitoring and treatment planning of brain cancer. The automatic delineation of tumours has drawn much attention in the past few years, particularly through neural-network-based supervised learning methods. However, clinical data acquisition is expensive and time-consuming, which is the key limitation of machine learning on medical data. We describe a solution for brain tumor segmentation in the context of the BraTS 2019 challenge. The major learning scheme is based on a 3D U-Net encoder and decoder with intensive data augmentation followed by bias correction. At the moment we submitted this short paper, our solution achieved Dice scores of 76.84, 85.74 and 74.51 for the enhancing tumor, whole tumor and tumor core, respectively, on the validation data.
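The abstract names a tumor mix-up augmentation without giving details; the generic mixup formulation, which blends two volumes and their one-hot label maps with a Beta-sampled weight, is sketched below as one plausible reading, not the authors' exact scheme.

    import numpy as np

    def mixup_3d(vol_a, seg_a, vol_b, seg_b, alpha=0.4):
        """Blend two image volumes and their one-hot label maps with a single
        Beta-distributed mixing weight (generic mixup, applied to 3D data)."""
        lam = np.random.beta(alpha, alpha)
        vol = lam * vol_a + (1 - lam) * vol_b
        seg = lam * seg_a + (1 - lam) * seg_b  # requires soft (one-hot) labels
        return vol, seg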
Multi-branch Learning Framework with Different Receptive Fields Ensemble for Brain Tumor Segmentation
Guohua, Cheng
Mengyan, Luo
Linyang, He
Lingqiang, Mo
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Segmentation of brain tumors from 3D magnetic resonance images (MRIs) is one of the key elements of diagnosis and treatment. Most current practice depends on manual segmentation, which is time-consuming and subjective. In this paper, we propose a robust method for automatic brain tumor image segmentation that fully exploits the complementarity between models and training schemes with different structures. Due to significant size differences among brain tumors, a model with a single receptive field is not robust. To solve this problem, we propose: i) a cascade model with a 3D U-Net-like architecture, whose small receptive field focuses on local details; ii) a 3D U-Net model combined with a VAE module, whose large receptive field focuses on global information; iii) a redesigned Multi-Branch Network with a Cascade Attention Network, which provides different receptive fields for different types of brain tumors, accommodating scale differences between tumors and making full use of prior knowledge of the task. The ensemble of all these models further improves overall performance on BraTS2019 [10] image segmentation. We evaluated the proposed methods on the validation dataset of the BraTS2019 segmentation challenge and achieved Dice coefficients of 0.91, 0.83 and 0.79 for the whole tumor, tumor core and enhanced tumor core respectively. Our experiments indicate that the proposed methods have promising potential in the field of brain tumor segmentation.
Domain Knowledge Based Brain Tumor Segmentation and Overall Survival Prediction
Guo, Xiaoqing
Yang, Chen
Lam, Pak Lun
Woo, Peter Y. M.
Yuan, Yixuan
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Gradient boosting
3d convolutional neural network (CNN)
Automatically segmenting sub-regions of gliomas (necrosis, edema and enhancing tumor) and accurately predicting overall survival (OS) time from multimodal MRI sequences have important clinical significance in the diagnosis, prognosis and treatment of gliomas. However, due to the large variations in heterogeneous appearance and individual physical state, sub-region segmentation and OS prediction are very challenging. To deal with these challenges, we utilize a 3D dilated multi-fiber network (DMFNet) with a weighted Dice loss for brain tumor segmentation, which incorporates prior volume-statistic knowledge and obtains a balance between small and large objects in MRI scans. For OS prediction, we propose a DenseNet-based 3D neural network with a position encoding convolutional layer (PECL) to extract meaningful features from T1 contrast MRI, T2 MRI and previously segmented sub-regions. Both labeled and unlabeled data are utilized in a semi-supervised setting to prevent over-fitting. The learned deep features, along with handcrafted features (such as age and tumor volume) and position encoding segmentation features, are fed to a Gradient Boosting Decision Tree (GBDT) to predict a specific OS day.
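The final stage described here, feeding fused features to a Gradient Boosting Decision Tree, can be sketched with scikit-learn. The feature files and hyperparameters below are illustrative assumptions, not the authors' configuration.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    # X: rows of concatenated deep, handcrafted (age, tumor volume, ...) and
    # position-encoding features; y: OS class. File names are hypothetical.
    X = np.load("fused_features.npy")
    y = np.load("os_labels.npy")

    gbdt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
    gbdt.fit(X, y)
    print(gbdt.predict(X[:5]))  # predicted survival classes for the first subjects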
Encoder-Decoder Network for Brain Tumor Segmentation on Multi-sequence MRI
Iantsen, Andrei
Jaouen, Vincent
Visvikis, Dimitris
Hatt, Mathieu
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
In this paper we describe our approach, based on convolutional neural networks, for medical image segmentation in the context of the BraTS 2019 challenge. We use the conventional encoder-decoder architecture enhanced with residual blocks, as well as spatial and channel squeeze & excitation modules. The present paper describes the general pipeline, including the data pre-processing, the choices regarding the model architecture, the training procedure and the chosen data augmentation techniques. Our final results in the BraTS 2019 segmentation challenge are Dice scores of 0.76, 0.87 and 0.80 for enhancing tumor, whole tumor and tumor core sub-regions, respectively.
Deep Convolutional Neural Networks for Brain Tumor Segmentation: Boosting Performance Using Deep Transfer Learning: Preliminary Results
Brain tumor segmentation through MRI image analysis is one of the most challenging problems in the medical field. Glioblastomas (GBM) invade the surrounding tissue rather than displacing it, causing unclear boundaries; furthermore, GBM in MRI scans can have the same appearance as gliosis, stroke, inflammation and blood spots. Fully automatic brain tumor segmentation methods also face other issues, such as false positive and false negative regions. In this paper, we present new pipelines to boost the prediction of GBM tumoral regions. These pipelines consist of three stages: in the first stage, we develop Deep Convolutional Neural Networks (DCNNs); in the second stage, we extract multi-dimensional features from the higher-resolution representations of the DCNNs; in the third stage, we feed the extracted features into machine learning algorithms such as Random Forest (RF), Logistic Regression (LR), and principal component analysis with support vector machine (PCA-SVM). Our experimental results on the BraTS 2019 dataset show that our proposed pipelines achieve state-of-the-art performance. The average Dice scores of our best proposed brain tumor segmentation pipeline are 0.85, 0.76 and 0.74 for whole tumor, tumor core, and enhancing tumor, respectively. Finally, the proposed pipeline combines accurate segmentation with computationally efficient inference, making it practical for day-to-day use in clinical centers and for research.
Multimodal Brain Tumor Segmentation with Normal Appearance Autoencoder
We propose a hybrid segmentation pipeline based on the capability of autoencoders for anomaly detection. To this end, we first introduce a new augmentation technique to generate synthetic paired images. Taking advantage of the paired images, we propose a Normal Appearance Autoencoder (NAA) that is able to remove tumors and thus reconstruct realistic-looking, tumor-free images. After estimating the regions where abnormalities potentially exist, a segmentation network is guided toward the candidate region. We tested the proposed pipeline on the BraTS 2019 database. The preliminary results indicate that the proposed model improved the segmentation accuracy of brain tumor subregions compared to the U-Net model.
Knowledge Distillation for Brain Tumor Segmentation
Lachinov, Dmitrii
Shipunova, Elena
Turlapov, Vadim
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
The segmentation of brain tumors in multimodal MRIs is one of the most challenging tasks in medical image analysis. The recent state-of-the-art algorithms for this task are based on machine learning approaches, and deep learning in particular. The amount of data used for training such models, and its variability, is a keystone for building an algorithm with high representation power. In this paper, we study the relationship between the performance of the model and the amount of data employed during the training process. Using the brain tumor segmentation challenge as an example, we compare the model trained with the labeled data provided by the challenge organizers against the same model trained in an omni-supervised manner using additional unlabeled data annotated with an ensemble of heterogeneous models. As a result, a single model trained with additional data achieves performance close to the ensemble of multiple models and outperforms individual methods.
Automatic Classification of Brain Tumor Types with the MRI Scans and Histopathology Images
In this study, we used two neural networks, VGG16 and ResNet50, to process whole slide images and extract features. To classify the three types of brain tumors (i.e., glioblastoma, oligodendroglioma, and astrocytoma), we tried several methods, including k-means clustering and random forest classification. In the prediction stage, we compared the prediction results with and without MRI features. The results show that classification with image features extracted by VGG16 achieves the highest prediction accuracy. Moreover, we found that combining radiomic features generated from MR images slightly improved the classification accuracy.
Ensemble of Convolutional Neural Networks for the Detection of Prostate Cancer in Multi-parametric MRI Scans
Prostate MP-MRI is a non-invasive method for detecting early-stage prostate cancer that is increasing in popularity. However, this imaging modality requires highly skilled radiologists to interpret the images, which incurs significant time and cost. Convolutional neural networks may alleviate the workload of radiologists by discriminating between prostate tumor positive scans and negative ones, allowing radiologists to focus their attention on the subset of scans that are neither clearly positive nor negative. The major challenges for such a system are speed and accuracy. To address these two challenges, this paper proposes a new approach using ensemble learning of convolutional neural networks (CNNs), which leverages different imaging modalities including T2-weighted, B-value, ADC and Ktrans in a multi-parametric MRI clinical dataset with 330 samples from 204 patients for training and evaluation. The system classifies prostate tumors as benign or malignant in seconds, based on features extracted by the individual CNN models. The ensemble of the four individual CNN models for the different image types improves the prediction accuracy to 92%, with a sensitivity of 94.28% and a specificity of 86.67% on the 50 given test samples. The proposed framework potentially provides rapid classification for high-volume quantitative prostate tumor samples.
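The abstract ensembles four per-modality CNNs without spelling out the combination rule; soft voting (averaging class probabilities) is a common choice and is sketched below, with the model and input interfaces assumed rather than taken from the paper.

    import torch

    def ensemble_predict(models, inputs):
        """Soft-voting ensemble: average class probabilities from per-modality CNNs.

        models: dict mapping modality name (e.g. "T2", "ADC") -> trained model
        inputs: dict mapping modality name -> input tensor for that model
        """
        with torch.no_grad():
            probs = [torch.softmax(m(inputs[k]), dim=1) for k, m in models.items()]
        return torch.stack(probs).mean(dim=0).argmax(dim=1)  # benign vs malignant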
Building a X-ray Database for Mammography on Vietnamese Patients and automatic Detecting ROI Using Mask-RCNN
Thang, Nguyen Duc
Dung, Nguyen Viet
Duc, Tran Vinh
Nguyen, Anh
Nguyen, Quang H.
Anh, Nguyen Tu
Cuong, Nguyen Ngoc
Linh, Le Tuan
Hanh, Bui My
Phu, Phan Huy
Phuong, Nguyen Hoang
2021Book Section, cited 0 times
CBIS-DDSM
BREAST
Convolutional Neural Network (CNN)
This paper describes the construction of an X-ray mammography database of Vietnamese patients, collected at Hanoi Medical University Hospital. The dataset contains 4664 DICOM images corresponding to 1161 patients, uniformly distributed across BI-RADS categories 0 to 5. The paper also presents a method for detecting the Region of Interest (ROI) in mammograms based on the Mask R-CNN architecture. The ROI detection achieves mAP@0.5 = 0.8109, and the accuracy of BI-RADS level classification is 58.44%.
The Distance-Regularized Level Set Evolution (DRLSE) algorithm solves many problems that plague the class of level set algorithms, but it has a significant computational cost and is sensitive to its many parameters. Configuring these parameters is a time-intensive trial-and-error task that limits the usability of the algorithm, especially in the field of medical imaging, where it would otherwise be highly suitable. The aim of this work is to develop a parallel implementation of the algorithm using the Compute Unified Device Architecture (CUDA) for Graphics Processing Units (GPUs), which reduces the computational cost of the algorithm and brings it into the interactive regime, lessening the burden of configuring its parameters and broadening its application. Using consumer-grade hardware, we observed performance gains between roughly 800% and 1700% compared with a purely serial C++ implementation we developed, and between roughly 180% and 500% compared with the MATLAB reference implementation of DRLSE, both depending on input image resolution.
Automated Classification of Axial CT Slices Using Convolutional Neural Network
Badura, Paweł
Juszczyk, Jan
Bożek, Paweł
Smoliński, Michał
2020Book Section, cited 0 times
Head-Neck Cetuximab
LIDC-IDRI
Machine Learning
This study addresses the automated recognition of the axial computed tomography (CT) slice content in terms of a predefined region of the body for computer-aided diagnosis purposes. A 23-layer convolutional neural network was designed, trained and tested for axial CT slice classification. The system was validated over 120 CT studies from publicly available databases containing 21,704 images, in two experiments with different definitions of classes. The classification accuracy reached 93.6% and 97.0% for database partitions into 9 and 5 classes, respectively.
Meta Corrupted Pixels Mining for Medical Image Segmentation
Wang, Jixin
Zhou, Sanping
Fang, Chaowei
Wang, Le
Wang, Jinjun
2020Book Section, cited 0 times
LIDC-IDRI
Deep neural networks have achieved satisfactory performance on many medical image analysis tasks. However, training a deep neural network requires a large number of samples with high-quality annotations, and in medical image segmentation it is very laborious and expensive to acquire precise pixel-level annotations. Aiming at training deep segmentation models on datasets with possibly corrupted annotations, we propose a novel Meta Corrupted Pixels Mining (MCPM) method based on a simple meta mask network. Our method automatically estimates a weighting map to evaluate the importance of every pixel in the learning of the segmentation network. The meta mask network, which takes the loss value map of the predicted segmentation results as input, is capable of identifying corrupted regions and allocating small weights to them. An alternating algorithm is adopted to train the segmentation network and the meta mask network simultaneously. Extensive experimental results on the LIDC-IDRI and LiTS datasets show that our method outperforms state-of-the-art approaches devised for coping with corrupted annotations.
Prediction of Pathological Complete Response to Neoadjuvant Chemotherapy in Breast Cancer Using Deep Learning with Integrative Imaging, Molecular and Demographic Data
Neoadjuvant chemotherapy is widely used to reduce tumor size, making surgical excision manageable and minimizing distant metastasis. Assessing and accurately predicting pathological complete response is important in treatment planning for breast cancer patients. In this study, we propose a novel approach that integrates 3D MRI imaging data, molecular data and demographic data using a convolutional neural network to predict the likelihood of pathological complete response to neoadjuvant chemotherapy in breast cancer. We take post-contrast T1-weighted 3D MRI images, without the need for tumor segmentation, and incorporate molecular subtypes and demographic data. In our predictive model, MRI data and non-imaging data inform each other through interactions, instead of a concatenation of multiple data-type channels; this is achieved by channel-wise multiplication of the intermediate results of the imaging and non-imaging data. We use a curated subset of 112 patients from the I-SPY-1 TRIAL with stage 2 or 3 breast cancer who underwent standard neoadjuvant chemotherapy. Our method yielded an accuracy of 0.83, AUC of 0.80, sensitivity of 0.68 and specificity of 0.88, and significantly outperforms models using imaging data only or traditional concatenation models. Our approach has the potential to help physicians identify patients who are likely to respond to neoadjuvant chemotherapy at diagnosis or early in treatment, thus facilitating treatment planning, treatment execution, or mid-treatment adjustment.
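The channel-wise multiplication described here can be sketched as a gating module: the non-imaging vector is mapped to one weight per imaging channel, which then rescales the MRI feature maps. A minimal sketch, with the layer sizes and the sigmoid gate as assumptions rather than the paper's exact design:

    import torch
    import torch.nn as nn

    class MultiplicativeFusion(nn.Module):
        """Condition imaging features on non-imaging data via channel-wise multiplication."""
        def __init__(self, n_tabular, n_channels):
            super().__init__()
            self.gate = nn.Sequential(nn.Linear(n_tabular, n_channels), nn.Sigmoid())

        def forward(self, feat_maps, tabular):
            # feat_maps: (N, C, D, H, W) intermediate features from the MRI branch
            # tabular:   (N, n_tabular) molecular subtype + demographic vector
            w = self.gate(tabular)                      # (N, C) per-channel weights
            w = w.view(w.size(0), w.size(1), 1, 1, 1)   # broadcast over D, H, W
            return feat_maps * w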
Feature-Enhanced Graph Networks for Genetic Mutational Prediction Using Histopathological Images in Colon Cancer
Mining histopathological and genetic data provides a unique avenue to deepen our understanding of cancer biology. However, extensive cancer heterogeneity across image- and molecular-scales poses technical challenges for feature extraction and outcome prediction. In this study, we propose a feature-enhanced graph network (FENet) for genetic mutation prediction using histopathological images in colon cancer. Unlike conventional approaches analyzing patch-based feature alone without considering their spatial connectivity, we seek to link and explore non-isomorphic topological structures in histopathological images. Our FENet incorporates feature enhancement in convolutional graph neural networks to aggregate discriminative features for capturing gene mutation status. Specifically, our approach could identify both local patch feature information and global topological structure in histopathological images simultaneously. Furthermore, we introduced an ensemble strategy by constructing multiple subgraphs to boost the prediction performance. Extensive experiments on the TCGA-COAD and TCGA-READ cohort including both histopathological images and three key genes’ mutation profiles (APC, KRAS, and TP53) demonstrated the superiority of FENet for key mutational outcome prediction in colon cancer.
Nodule2vec: A 3D Deep Learning System for Pulmonary Nodule Retrieval Using Semantic Representation
Kravets, Ilia
Heletz, Tal
Greenspan, Hayit
2020Book Section, cited 0 times
LIDC-IDRI
Content-based retrieval supports a radiologist's decision making process by presenting the doctor with the most similar cases from a database containing both historical diagnoses and subsequent disease development history. We present a deep learning system that transforms a 3D image of a pulmonary nodule from a CT scan into a low-dimensional embedding vector. We demonstrate that such a vector representation preserves semantic information about the nodule and offers a viable approach for content-based image retrieval (CBIR). We discuss the theoretical limitations of the available datasets and overcome them by applying transfer learning from a state-of-the-art lung nodule detection model. We evaluate the system using the LIDC-IDRI dataset of thoracic CT scans. We devise a similarity score and show that it can be utilized to measure similarity 1) between annotations of the same nodule by different radiologists and 2) between the query nodule and the top four CBIR results. A comparison between doctor and algorithm scores suggests that the benefit provided by the system to the radiologist end-user is comparable to obtaining a second radiologist's opinion.
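The paper devises its own similarity score, which is not reproduced here; for embedding-based CBIR of this kind, cosine similarity between L2-normalized vectors is a standard baseline, sketched below.

    import numpy as np

    def retrieve_top_k(query_vec, db_vecs, k=4):
        """Return indices and scores of the k database embeddings most similar
        to the query, using cosine similarity."""
        q = query_vec / np.linalg.norm(query_vec)
        db = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
        sims = db @ q                  # cosine similarity against every entry
        idx = np.argsort(-sims)[:k]    # best matches first
        return idx, sims[idx]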
RevPHiSeg: A Memory-Efficient Neural Network for Uncertainty Quantification in Medical Image Segmentation
Gantenbein, Marc
Erdil, Ertunc
Konukoglu, Ender
2020Book Section, cited 0 times
LIDC-IDRI
Quantifying segmentation uncertainty has become an important issue in medical image analysis due to the inherent ambiguity of anatomical structures and their pathologies. Recently, neural-network-based uncertainty quantification methods have been successfully applied to various problems. One of the main limitations of the existing techniques is the high memory requirement during training, which limits their application to processing smaller fields-of-view (FOVs) and/or using shallower architectures. In this paper, we investigate the effect of using reversible blocks for building memory-efficient neural network architectures for quantification of segmentation uncertainty. The reversible architecture achieves memory savings by exactly computing the activations from the outputs of the subsequent layers during backpropagation, instead of storing the activations for each layer. We incorporate the reversible blocks into a recently proposed architecture called PHiSeg that was developed for uncertainty quantification in medical image segmentation. The reversible architecture, RevPHiSeg, allows training neural networks for quantifying segmentation uncertainty on GPUs with limited memory and processing larger FOVs. We perform experiments on the LIDC-IDRI dataset and an in-house prostate dataset, and present comparisons with PHiSeg. The results demonstrate that RevPHiSeg consumes ∼30% less memory than PHiSeg while achieving very similar segmentation accuracy.
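The memory saving comes from additive coupling blocks whose inputs can be recomputed exactly from their outputs, so activations need not be stored during the forward pass. A minimal sketch of such a block; the sub-networks f and g are placeholders for arbitrary shape-preserving modules, not RevPHiSeg's actual layers.

    import torch.nn as nn

    class ReversibleBlock(nn.Module):
        """Additive coupling: (x1, x2) -> (y1, y2) with an exact inverse,
        so activations can be recomputed instead of stored."""
        def __init__(self, f, g):
            super().__init__()
            self.f, self.g = f, g  # shape-preserving sub-networks

        def forward(self, x1, x2):
            y1 = x1 + self.f(x2)
            y2 = x2 + self.g(y1)
            return y1, y2

        def inverse(self, y1, y2):
            x2 = y2 - self.g(y1)   # recompute inputs from outputs
            x1 = y1 - self.f(x2)
            return x1, x2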
Augmented Radiology: Patient-Wise Feature Transfer Model for Glioma Grading
Li, Zisheng
Ogino, Masahiro
2020Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
In current oncological workflows of clinical decision making and treatment management, biopsy is the only way to confirm a cancer abnormality. With the aim of reducing unnecessary biopsies and diagnostic burden, we propose a patient-wise feature transfer model for learning the relationship between phenotypes in radiological images and pathological images. We hypothesize that high-level features from the same patient can be linked between modalities of different image scales. We integrate multiple feature transfer blocks between CNN-based networks, with single-/multi-modality radiological images and pathological images, in an end-to-end training framework. We refer to our method as "augmented radiology" because the inference model only requires radiological images as input, while the prediction result can be linked to specific pathological phenotypes. We apply the proposed method to glioma grading (high-grade vs. low-grade) and train the feature transfer model using patient-wise multimodal MRI images and pathological images. Evaluation results show that the proposed method achieves high accuracy in pathological tumor grading (AUC 0.959) given only the radiological images as input.
Soft Tissue Sarcoma Co-segmentation in Combined MRI and PET/CT Data
Neubauer, Theresa
Wimmer, Maria
Berg, Astrid
Major, David
Lenis, Dimitrios
Beyer, Thomas
Saponjski, Jelena
Bühler, Katja
2020Book Section, cited 0 times
Soft-tissue-Sarcoma
Tumor segmentation in multimodal medical images has seen a growing trend towards deep learning based methods. Typically, studies dealing with this topic fuse multimodal image data to improve the tumor segmentation contour for a single imaging modality. However, they do not take into account that tumor characteristics are emphasized differently by each modality, which affects the tumor delineation. Thus, the tumor segmentation is modality- and task-dependent. This is especially the case for soft tissue sarcomas, where, due to necrotic tumor tissue, the segmentation differs vastly. Closing this gap, we develop a modality-specific sarcoma segmentation model that utilizes multimodal image data to improve the tumor delineation on each individual modality. We propose a simultaneous co-segmentation method, which enables multimodal feature learning through modality-specific encoder and decoder branches, and the use of resource-efficient densely connected convolutional layers. We further conduct experiments to analyze how different input modalities and encoder-decoder fusion strategies affect the segmentation result. We demonstrate the effectiveness of our approach on public soft tissue sarcoma data, which comprises MRI (T1 and T2 sequence) and PET/CT scans. The results show that our multimodal co-segmentation model provides better modality-specific tumor segmentation than models using only the PET or MRI (T1 and T2) scan as input.
Tissue Differentiation Based on Classification of Morphometric Features of Nuclei
Dudzińska, Dominika
Piórkowski, Adam
2020Book Section, cited 0 times
Pan-Cancer-Nuclei-Seg
The aim of this article is to analyze the shape of the nuclei of various tissues and to assess tumor differentiation based on morphometric measurements. For this purpose, an experiment was conducted to establish whether a tissue's type can be determined from the mentioned features. The measurements were performed on a publicly available dataset containing 1,356 hematoxylin- and eosin-stained images with nucleus segmentations for 14 different human tissues. Morphometric analysis of cell nuclei using ImageJ software took 17 parameters into account. Classification of the obtained results was performed in Matlab R2018b using the SVM and t-SNE algorithms, which showed that some cancers can be distinguished with an accuracy close to 90% (lung squamous cell cancer vs others; breast cancer vs cervical cancer).
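A classification setup of the kind described, an SVM over standardized morphometric parameters with cross-validation, can be sketched with scikit-learn (the study itself used Matlab); the feature files below are hypothetical stand-ins for the ImageJ measurement table.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: 17 morphometric parameters per nucleus (area, circularity, ...);
    # y: tissue type label. File names are hypothetical.
    X = np.load("nuclei_features.npy")
    y = np.load("tissue_labels.npy")

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    print(cross_val_score(clf, X, y, cv=5).mean())  # mean cross-validated accuracy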
ParaGlyder: Probe-driven Interactive Visual Analysis for Multiparametric Medical Imaging Data
Mörth, Eric
Haldorsen, Ingfrid S.
Bruckner, Stefan
Smit, Noeska N.
2020Book Section, cited 0 times
Brain-Tumor-Progression
Multiparametric imaging in cancer has been shown to be useful for tumor detection and may also depict functional tumor characteristics relevant for clinical phenotypes. However, when confronted with datasets consisting of multiple values per voxel, traditional reading of the imaging series fails to capture complicated patterns. These patterns of potentially important imaging properties of the parameter space may be critical for the analysis, but standard approaches do not deliver sufficient details. Therefore, in this paper, we present an approach that aims to enable the exploration and analysis of such multiparametric studies using an interactive visual analysis application to remedy the trade-offs between details in the value domain and in spatial resolution. This may aid in the discrimination between healthy and cancerous tissue and potentially highlight metastases that evolved from the primary tumor. We conducted an evaluation with eleven domain experts from different fields of research to confirm the utility of our approach.
Towards MRI Progression Features for Glioblastoma Patients: From Automated Volumetry and Classical Radiomics to Deep Feature Learning
Suter, Yannick
Knecht, Urspeter
Wiest, Roland
Hewer, Ekkehard
Schucht, Philippe
Reyes, Mauricio
2020Book Section, cited 0 times
QIN GBM Treatment Response
Disease progression for Glioblastoma multiforme patients is currently assessed with manual bi-dimensional measurements of the active contrast-enhancing tumor on Magnetic Resonance Images (MRI). This method is known to be susceptible to error; in the absence of a data-driven approach, progression thresholds have been set rather arbitrarily to account for measurement inaccuracies. We propose a data-driven methodology for disease progression assessment, building on tumor volumetry, classical radiomics, and deep-learning-based features. For each feature type, we infer progression thresholds by maximizing the correlation between time-to-progression (TTP) and overall survival (OS). On a longitudinal study comprising over 500 data points, we observed considerable underestimation by the current volumetric disease progression threshold. We evaluate the data-driven disease progression thresholds against expert ratings based on current clinical practice.
State-of-the-Art in Brain Tumor Segmentation and Current Challenges
Yousaf, Sobia
RaviPrakash, Harish
Anwar, Syed Muhammad
Sohail, Nosheen
Bagci, Ulas
2020Book Section, cited 0 times
QIN-BRAIN-DSC-MRI
Brain tumors are the third most common type of cancer among young adults, and accurate diagnosis and treatment demand strict delineation of the tumor-affected tissue. Brain tumor segmentation involves segmenting different tumor tissues: the enhancing tumor regions, non-enhancing tumor and necrotic regions, and edema. With increasing computational power and data sharing, computer vision algorithms, particularly deep learning approaches, have begun to dominate the field of medical image segmentation. Accurate tumor segmentation will help in surgery planning as well as in monitoring progress in longitudinal studies, enabling a better understanding of the factors affecting malignant growth. The objective of this paper is to provide an overview of the current state of the art in brain tumor segmentation approaches, an idea of the available resources, and the most promising research directions moving forward. We also highlight the challenges that exist in this field, in particular towards the successful adoption of such methods in clinical practice.
Patient-specific implants provide important advantages for patients and medical professionals. The state of the art in cranioplasty implant production is based on bone structure reconstruction and the use of the patient's own anatomical information to fill the bone defect. The present work proposes a two-dimensional investigation of which dataset results in the polynomial regression closest to a gold-standard structure combining points of the bone defect region and points of the healthy contralateral skull hemisphere. The similarity measures used to compare datasets are the root mean square error (RMSE) and the Hausdorff distance. The objective is to use the most successful dataset in future development and testing of a semi-automatic methodology for cranial prosthesis modeling. The methodology was implemented in Python scripts and uses five series of skull computed tomography images to generate phantoms with small, medium and large bone defects. Results from statistical tests and observations of the mean RMSE and mean Hausdorff distance allow us to determine that the dataset formed by the phantom contour points (included twice) and the mirrored contour points is the one that significantly improves the similarity measures.
Method for Improved Image Reconstruction in Computed Tomography and Positron Emission Tomography, Based on Compressive Sensing with Prefiltering in the Frequency Domain
Garcia, Y.
Franco, C.
Miosso, C. J.
2022Book Section, cited 0 times
TCGA-LUAD
Computed tomography (CT) and positron emission tomography (PET) enable many types of diagnoses and medical analyses, as well as patient monitoring in different treatment scenarios. They are therefore among the most important medical imaging modalities, both in clinical applications and in scientific research. However, both methods involve radiation exposure: from the X-rays used in CT, and from the chemical contrast that introduces a radioactive isotope into the patient's body in PET. It is possible to reduce the amount of radiation needed to attain a specified image quality by using compressive sensing (CS), which reduces the number of measurements required for signal and image reconstruction compared with standard approaches such as filtered backprojection. In this paper, we propose and evaluate a new method for the reconstruction of CT and PET images based on CS with prefiltering in the frequency domain. We start by estimating frequency-domain measurements based on the acquired sinograms. Next, we perform prefiltering in the frequency domain to favor the sparsity required by CS and improve the reconstruction of filtered versions of the image. Based on the reconstructed filtered images, a final composition stage produces the complete image using the spectral information from the individual filtered versions. We compared the proposed method to the standard filtered backprojection technique commonly used in CT and PET. The results suggest that the proposed method can lead to images with significantly higher signal-to-error ratios for a specified number of measurements, both for CT (p = 8.8324e-05) and PET (p = 4.7377e-09).
Deep MammoNet: Early Diagnosis of Breast Cancer Using Multi-layer Hierarchical Features of Deep Transfer Learned Convolutional Neural Network
Mohamed Aarif, K. O.
Sivakumar, P.
Mohamed Yousuff, Caffiyar
Mohammed Hashim, B. A.
Advanced Machine Learning Approaches in Cancer Prognosis2021Journal Article, cited 0 times
Website
CBIS-DDSM
Deep Learning
BREAST
Convolutional Neural Network (CNN)
Computer Aided Detection (CADe)
A deep Convolutional Neural Network (CNN) comprises multiple convolutional layers that learn features from the input image at different levels of abstraction. In this work, we address the problem of improving the recognition accuracy of CNNs for the classification of breast cancer from mammogram images. To achieve optimized classification, we propose multi-layer hierarchical convolutional feature integration in a deep transfer-learned CNN. In a deep CNN, the last layer learns significant features that are highly invariant, but their spatial resolution is too coarse to localize the target precisely. In contrast, features from earlier layers offer more precise localization and retain fine-grained spatial detail, but are less invariant. This observation suggests that reasoning with multiple layers of CNN features is of great importance for breast cancer detection in mammogram images. We therefore propose to integrate the features extracted from an earlier layer and the last layer of a deep CNN to train the model and improve the classification accuracy of breast cancer detection in mammogram images. We also show that consistent improvement in accuracy is obtained by using mammogram augmentation and different weight learning factors across layers.
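The multi-layer feature integration described here, concatenating pooled features from an earlier and the last convolutional layer, can be sketched with a pretrained VGG16 backbone; the specific layer indices below are illustrative assumptions, not the paper's configuration.

    import torch
    from torchvision import models

    # Layer indices are illustrative choices within VGG16's `features` stack.
    vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()

    def hierarchical_features(x, early_layer=15, last_layer=30):
        """Concatenate globally pooled features from an earlier and the last layer."""
        feats = {}
        with torch.no_grad():
            for i, layer in enumerate(vgg):
                x = layer(x)
                if i in (early_layer, last_layer):
                    feats[i] = x.mean(dim=(2, 3))  # global average pooling -> (N, C)
        return torch.cat([feats[early_layer], feats[last_layer]], dim=1)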
Automatic Segmentation of Non-tumor Tissues in Glioma MR Brain Images Using Deformable Registration with Partial Convolutional Networks
Liu, Zhongqiang
Gu, Dongdong
Zhang, Yu
Cao, Xiaohuan
Xue, Zhong
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BraTS 2018
BRAIN
Segmentation
Image Registration
Algorithm Development
In brain tumor diagnosis and surgical planning, segmentation of tumor regions and accurate analysis of the surrounding normal tissues are necessary for physicians. Pathological variability often makes it difficult to register a well-labeled normal atlas to such images and to automatically segment/label the surrounding normal brain tissues. In this paper, we propose a new registration approach that first segments the brain tumor using a U-Net and then simulates the missing normal tissues within the tumor region using a partial convolutional network. A standard normal brain atlas image is then registered onto such tumor-removed images in order to segment/label the normal brain tissues. In this way, our new approach greatly reduces the effects of pathological variability in deformable registration and segments the normal tissues surrounding the brain tumor well. In experiments, we used MICCAI BraTS2018 T1 and FLAIR images to evaluate the proposed algorithm. Comparing direct registration with the proposed algorithm, the results showed that the Dice coefficient for gray matter in the surrounding normal brain tissues was significantly improved.
Symmetric-Constrained Irregular Structure Inpainting for Brain MRI Registration with Tumor Pathology
Liu, X.
Xing, F.
Yang, C.
Jay Kuo, C. C.
El Fakhri, G.
Woo, J.
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BraTS 2018
BRAIN
Segmentation
Algorithm Development
Brain Tumor
Contextual Learning
Deep Learning
Image Inpainting
Irregular Structure
Registration
Symmetry
Deformable registration of magnetic resonance images between patients with brain tumors and healthy subjects has been an important tool to specify tumor geometry through location alignment and facilitate pathological analysis. Since the tumor region does not match any ordinary brain tissue, it is difficult to deformably register a patient's brain to a normal one. Many patient images are associated with irregularly distributed lesions, resulting in further distortion of normal tissue structures and complicating the registration's similarity measure. In this work, we follow a multi-step context-aware image inpainting framework to generate synthetic tissue intensities in the tumor region. A coarse image-to-image translation is applied to make a rough inference of the missing parts. Then, a feature-level patch-match refinement module is applied to refine the details by modeling the semantic relevance between patch-wise features. A symmetry constraint reflecting the large degree of anatomical symmetry in the brain is further proposed to achieve better structural understanding. Deformable registration is applied between inpainted patient images and normal brains, and the resulting deformation field is eventually used to deform the original patient data for the final alignment. The method was applied to the Multimodal Brain Tumor Segmentation (BraTS) 2018 challenge database and compared against three existing inpainting methods. The proposed method yielded results with increased peak signal-to-noise ratio, structural similarity index, and inception score, and reduced L1 error, leading to successful patient-to-normal brain image registration.
Estimating Glioblastoma Biophysical Growth Parameters Using Deep Learning Regression
Pati, S.
Sharma, V.
Aslam, H.
Thakur, S. P.
Akbari, H.
Mang, A.
Subramanian, S.
Biros, G.
Davatzikos, C.
Bakas, S.
Brainlesion2021Journal Article, cited 0 times
TCGA-GBM
BraTS-TCGA-GBM
Algorithm Development
Deep Learning
Biophysical growth model
Brain tumor
Deep learning
Glioblastoma
Regression
BraTS 2020
Glioblastoma (GBM) is arguably the most aggressive, infiltrative, and heterogeneous type of adult brain tumor. Biophysical modeling of GBM growth has contributed to more informed clinical decision-making. However, deploying a biophysical model to a clinical environment is challenging since underlying computations are quite expensive and can take several hours using existing technologies. Here we present a scheme to accelerate the computation. In particular, we present a deep learning (DL)-based logistic regression model to estimate the GBM's biophysical growth in seconds. This growth is defined by three tumor-specific parameters: 1) a diffusion coefficient in white matter (Dw), which prescribes the rate of infiltration of tumor cells in white matter, 2) a mass-effect parameter (Mp), which defines the average tumor expansion, and 3) the estimated time (T) in number of days that the tumor has been growing. Preoperative structural multi-parametric MRI (mpMRI) scans from n = 135 subjects of the TCGA-GBM imaging collection are used to quantitatively evaluate our approach. We consider the mpMRI intensities within the region defined by the abnormal FLAIR signal envelope for training one DL model for each of the tumor-specific growth parameters. We train and validate the DL-based predictions against parameters derived from biophysical inversion models. The average Pearson correlation coefficients between our DL-based estimations and the biophysical parameters are 0.85 for Dw, 0.90 for Mp, and 0.94 for T, respectively. This study unlocks the power of tumor-specific parameters from biophysical tumor growth estimation. It paves the way towards their clinical translation and opens the door for leveraging advanced radiomic descriptors in future studies by means of a significantly faster parameter reconstruction compared to biophysical growth modeling approaches.
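As a loose illustration of per-parameter DL regression and the paper's evaluation metric, the sketch below maps an mpMRI patch to a single scalar (e.g., Dw) with a tiny 3D CNN and scores it with a Pearson correlation. The architecture, sizes, and the random stand-in targets are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn
from scipy.stats import pearsonr

class GrowthParamRegressor(nn.Module):
    """Tiny 3D CNN regressing one growth parameter from an mpMRI patch."""
    def __init__(self, in_ch=4):               # 4 mpMRI channels assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.regressor = nn.Linear(32, 1)       # one scalar: Dw, Mp, or T

    def forward(self, x):
        return self.regressor(self.features(x)).squeeze(1)

model = GrowthParamRegressor()
x = torch.randn(8, 4, 32, 32, 32)               # batch of mpMRI patches
pred = model(x)

# Evaluation mirrors the paper's metric: Pearson correlation between the
# DL estimates and biophysical-inversion targets (random stand-ins here).
target = torch.randn(8)
r, _ = pearsonr(pred.detach().numpy(), target.numpy())
print("Pearson r:", r)
```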
Brain Tumor Segmentation Using Dual-Path Attention U-Net in 3D MRI Images
Jun, Wen
Haoxiang, Xu
Wang, Zhang
2021Book Section, cited 0 times
BraTS-TCGA-LGG
BraTS-TCGA-GBM
BraTS 2020
Segmentation
Challenge
U-Net
3d convolutional neural network (CNN)
Semantic segmentation plays an essential role in brain tumor diagnosis and treatment planning. Yet manual segmentation is a time-consuming task, which motivates the use of deep neural networks for brain tumor segmentation. In this work, we propose a variant of the 3D U-Net that achieves comparable segmentation accuracy with a lower graphics-memory cost. More specifically, our model employs a modified attention block to refine the feature-map representation along the skip-connection bridge, consisting of spatial and channel attention blocks connected in parallel. Dice coefficients for enhancing tumor, whole tumor, and tumor core reached 0.752, 0.879 and 0.779 respectively on the BraTS 2020 validation dataset.
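A minimal sketch of a parallel spatial + channel attention block of the kind described for the skip-connection bridge; the exact layer layout is an assumption, not the authors' code.

```python
import torch
import torch.nn as nn

class DualPathAttention(nn.Module):
    """Parallel channel and spatial attention applied to a skip feature map."""
    def __init__(self, ch):
        super().__init__()
        # Channel attention: squeeze spatially, excite per channel.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(ch, ch // 2, 1), nn.ReLU(),
            nn.Conv3d(ch // 2, ch, 1), nn.Sigmoid())
        # Spatial attention: 1x1x1 conv to a single voxel-wise map.
        self.spatial = nn.Sequential(nn.Conv3d(ch, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.channel(x) + x * self.spatial(x)

skip = torch.randn(1, 32, 24, 24, 24)    # feature map on a skip connection
print(DualPathAttention(32)(skip).shape)
```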
Multimodal Brain Image Analysis and Survival Prediction Using Neuromorphic Attention-Based Neural Networks
Han, Il Song
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BraTS 2020
2D and 3D radiomics features
Challenge
BRAIN
Segmentation
Algorithm Development
Accurate analysis of brain tumors from 3D Magnetic Resonance Imaging (MRI) is necessary for diagnosis and treatment planning, and recent developments using deep neural networks are of great clinical importance because of their effective and accurate performance. The 3D nature of multimodal MRI demands large-scale memory and computation, and variants of the 3D U-Net are widely adopted for medical image segmentation. In this study, a 2D U-Net is applied to tumor segmentation and survival-period prediction, inspired by the neuromorphic neural network. The new method introduces a neuromorphic saliency map to enhance the image analysis. By mimicking the visual cortex and implementing neuromorphic preprocessing, a map of attention and saliency is generated and applied to improve the accuracy and speed of medical image analysis. Through the BraTS 2020 challenge, the performance of the renewed neuromorphic algorithm is evaluated, and an overall review is conducted of the previous neuromorphic processing and other approaches. The overall survival prediction accuracy is 55.2% for the validation data and 43% for the test data.
Context Aware 3D UNet for Brain Tumor Segmentation
Ahmad, Parvez
Qamar, Saqib
Shen, Linlin
Saeed, Adnan
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Deep convolutional neural networks (CNNs) achieve remarkable performance in medical image analysis. UNet is the primary backbone of 3D CNN architectures for medical imaging tasks, including brain tumor segmentation. The skip connection in the UNet architecture concatenates features from the encoder and decoder paths to extract multi-contextual information from image data. The multi-scale features play an essential role in brain tumor segmentation. However, limited use of features can degrade the performance of the UNet approach for segmentation. In this paper, we propose a modified UNet architecture for brain tumor segmentation. In the proposed architecture, we use densely connected blocks in both the encoder and decoder paths to extract multi-contextual information through feature reusability. In addition, residual-inception blocks (RIB) are used to extract local and global information by merging features of different kernel sizes. We validate the proposed architecture on the multi-modal brain tumor segmentation challenge (BRATS) 2020 testing dataset. The Dice (DSC) scores of the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) are 89.12%, 84.74%, and 79.12%, respectively.
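The sketch below shows a plausible residual-inception block (RIB): parallel branches with different kernel sizes are concatenated, fused, and added back to the input. Branch widths and normalization are illustrative assumptions rather than the authors' exact design.

```python
import torch
import torch.nn as nn

class ResidualInceptionBlock(nn.Module):
    """Parallel kernel sizes merged and added residually to the input."""
    def __init__(self, ch):
        super().__init__()
        self.b1 = nn.Conv3d(ch, ch // 2, 1)                 # local
        self.b3 = nn.Conv3d(ch, ch // 4, 3, padding=1)      # mid-range
        self.b5 = nn.Conv3d(ch, ch // 4, 5, padding=2)      # wider context
        self.fuse = nn.Sequential(
            nn.Conv3d(ch, ch, 1), nn.InstanceNorm3d(ch), nn.ReLU())

    def forward(self, x):
        merged = torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)
        return x + self.fuse(merged)     # residual connection

x = torch.randn(1, 32, 16, 16, 16)
print(ResidualInceptionBlock(32)(x).shape)
```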
Brain Tumor Segmentation Network Using Attention-Based Fusion and Spatial Relationship Constraint
Liu, Chenyu
Ding, Wangbin
Li, Lei
Zhang, Zhen
Pei, Chenhao
Huang, Liqin
Zhuang, Xiahai
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Delineating the brain tumor from magnetic resonance (MR) images is critical for the treatment of gliomas. However, automatic delineation is challenging due to the complex appearance and ambiguous outlines of tumors. Considering that multi-modal MR images can reflect different tumor biological properties, we develop a novel multi-modal tumor segmentation network (MMTSN) to robustly segment brain tumors based on multi-modal MR images. The MMTSN is composed of three sub-branches and a main branch. Specifically, the sub-branches are used to capture different tumor features from multi-modal images, while in the main branch we design a spatial-channel fusion block (SCFB) to effectively aggregate multi-modal features. Additionally, inspired by the fact that the spatial relationship between sub-regions of the tumor is relatively fixed, e.g., the enhancing tumor is always in the tumor core, we propose a spatial loss to constrain the relationship between the different sub-regions of the tumor. We evaluate our method on the test set of the multi-modal brain tumor segmentation challenge 2020 (BraTS 2020). The method achieves Dice scores of 0.8764, 0.8243 and 0.773 for the whole tumor, tumor core and enhancing tumor, respectively.
Modality-Pairing Learning for Brain Tumor Segmentation
Wang, Yixin
Zhang, Yao
Hou, Feng
Liu, Yang
Tian, Jiang
Zhong, Cheng
Zhang, Yang
He, Zhiqiang
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
3D U-Net
Automatic brain tumor segmentation from multi-modality Magnetic Resonance Images (MRI) using deep learning methods plays an important role in assisting the diagnosis and treatment of brain tumors. However, previous methods mostly ignore the latent relationship among different modalities. In this work, we propose a novel end-to-end Modality-Pairing learning method for brain tumor segmentation. Parallel branches are designed to exploit different modality features, and a series of layer connections are utilized to capture complex relationships and abundant information among modalities. We also use a consistency loss to minimize the prediction variance between the two branches. In addition, a learning-rate warmup strategy is adopted to mitigate training instability and early over-fitting. Lastly, we use an average ensemble of multiple models and some post-processing techniques to obtain the final results. Our method was tested on the BraTS 2020 online testing dataset, obtaining promising segmentation performance, with average Dice scores of 0.891, 0.842, and 0.816 for the whole tumor, tumor core and enhancing tumor, respectively. We won second place in the BraTS 2020 Challenge for the tumor segmentation task.
Transfer Learning for Brain Tumor Segmentation
Wacker, Jonas
Ladeira, Marcelo
Nascimento, Jose Eduardo Vaz
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Transfer learning
Gliomas are the most common malignant brain tumors that are treated with chemoradiotherapy and surgery. Magnetic Resonance Imaging (MRI) is used by radiotherapists to manually segment brain lesions and to observe their development throughout the therapy. The manual image segmentation process is time-consuming and results tend to vary among different human raters. Therefore, there is a substantial demand for automatic image segmentation algorithms that produce a reliable and accurate segmentation of various brain tissue types. Recent advances in deep learning have led to convolutional neural network architectures that excel at various visual recognition tasks. They have been successfully applied to the medical context including medical image segmentation. In particular, fully convolutional networks (FCNs) such as the U-Net produce state-of-the-art results in the automatic segmentation of brain tumors. MRI brain scans are volumetric and exist in various co-registered modalities that serve as input channels for these FCN architectures. Training algorithms for brain tumor segmentation on this complex input requires large amounts of computational resources and is prone to overfitting. In this work, we construct FCNs with pretrained convolutional encoders. We show that we can stabilize the training process this way and achieve an improvement with respect to dice scores and Hausdorff distances. We also test our method on a privately obtained clinical dataset.
Efficient Embedding Network for 3D Brain Tumor Segmentation
Messaoudi, Hicham
Belaid, Ahror
Allaoui, Mohamed Lamine
Zetout, Ahcene
Allili, Mohand Said
Tliba, Souhil
Ben Salem, Douraied
Conze, Pierre-Henri
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Transfer learning
3D medical image processing with deep learning greatly suffers from a lack of data. Thus, studies carried out in this field are limited compared to works on 2D natural image analysis, where very large datasets exist. As a result, powerful and efficient 2D convolutional neural networks have been developed and trained. In this paper, we investigate a way to transfer the performance of a two-dimensional classification network to the task of three-dimensional semantic segmentation of brain tumors. We propose an asymmetric U-Net network that incorporates the EfficientNet model as part of the encoding branch. As the input data is 3D, the first layers of the encoder are devoted to reducing the third dimension in order to fit the input of the EfficientNet network. Experimental results on validation and test data from the BraTS 2020 challenge demonstrate that the proposed method achieves promising performance.
Segmentation of the Multimodal Brain Tumor Images Used Res-U-Net
Sun, Jindong
Peng, Yanjun
Li, Dapeng
Guo, Yanfei
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Gliomas are the most common brain tumors and have a high mortality. Magnetic resonance imaging (MRI) is useful for assessing gliomas, and segmentation of multimodal brain tissues in 3D medical images is of great significance for brain diagnosis. Because manual segmentation is time-consuming, an automated and accurate segmentation method is required, yet accurate segmentation of the multimodal brain remains a challenging task. To address this problem, we employ residual neural blocks and a U-Net architecture to build a novel network. We evaluated the performance of different primary residual neural blocks in building the U-Net. Our proposed method was evaluated on the validation set of BraTS 2020, where our model achieved effective segmentation of the complete, core and enhancing tumor regions with Dice Similarity Coefficient (DSC) values of 0.89, 0.78, and 0.72. On the testing set, our model obtained DSC results of 0.87, 0.82, and 0.80. The residual convolutional block is especially useful for improving performance when building the model. Our proposed method is inherently general and is a powerful tool for studies of medical images of brain tumors.
Vox2Vox: 3D-GAN for Brain Tumour Segmentation
Cirillo, Marco Domenico
Abramian, David
Eklund, Anders
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Generative Adversarial Network (GAN)
Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histological sub-regions, i.e., peritumoral edema, necrotic core, enhancing and non-enhancing tumour core. Although brain tumours can easily be detected using multi-modal MRI, accurate tumor segmentation is a challenging task. Hence, using the data provided by the BraTS Challenge 2020, we propose a 3D volume-to-volume Generative Adversarial Network for segmentation of brain tumours. The model, called Vox2Vox, generates realistic segmentation outputs from multi-channel 3D MR images, segmenting the whole, core and enhancing tumor with mean Dice scores of 87.20%, 81.14%, and 78.67%, and 95th-percentile Hausdorff distances of 6.44 mm, 24.36 mm, and 18.95 mm, respectively, on the BraTS testing set after ensembling 10 Vox2Vox models obtained with a 10-fold cross-validation. The code is available at https://github.com/mdciri/Vox2Vox.
Automatic Brain Tumor Segmentation with Scale Attention Network
Yuan, Yading
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Automatic segmentation of brain tumors is an essential but challenging step for extracting quantitative imaging biomarkers for accurate tumor detection, diagnosis, prognosis, treatment planning and assessment. The Multimodal Brain Tumor Segmentation Challenge 2020 (BraTS 2020) provides a common platform for comparing different automatic algorithms on multi-parametric Magnetic Resonance Imaging (mpMRI) in the tasks of 1) brain tumor segmentation in MRI scans; 2) prediction of patient overall survival (OS) from pre-operative MRI scans; 3) distinction of true tumor recurrence from treatment-related effects; and 4) evaluation of uncertainty measures in segmentation. We participated in the image segmentation challenge by developing a fully automatic segmentation network based on an encoder-decoder architecture. In order to better integrate information across different scales, we propose a dynamic scale attention mechanism that incorporates low-level details with high-level semantics from feature maps at different scales. Our framework was trained using the 369 challenge training cases provided by BraTS 2020, and achieved an average Dice Similarity Coefficient (DSC) of 0.8828, 0.8433 and 0.8177, as well as 95% Hausdorff distances (in millimeters) of 5.2176, 17.9697 and 13.4298 on 166 testing cases for whole tumor, tumor core and enhanced tumor, respectively, ranking 3rd among 693 registrations in the BraTS 2020 challenge.
Impact of Spherical Coordinates Transformation Pre-processing in Deep Convolution Neural Networks for Brain Tumor Segmentation and Survival Prediction
Russo, Carlo
Liu, Sidong
Di Ieva, Antonio
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Deep convolutional neural network (DCNN)
Magnetic Resonance Imaging (MRI)
Pre-processing and data augmentation play an important role in Deep Convolutional Neural Networks (DCNNs). While several methods aim at standardization and augmentation of the dataset, here we propose a novel method that feeds the DCNN with input data transformed into spherical space, which can facilitate feature learning better than standard Cartesian-space images and volumes. In this work, the spherical coordinates transformation has been applied as a pre-processing method that, used in conjunction with normal MRI volumes, improves the accuracy of brain tumor segmentation and patient overall survival (OS) prediction on the Brain Tumor Segmentation (BraTS) Challenge 2020 dataset. The LesionEncoder framework has then been applied to automatically extract features from DCNN models, achieving an OS prediction accuracy of 0.586 on the validation data set, which is one of the best results according to the BraTS 2020 leaderboard.
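A minimal sketch of the kind of Cartesian-to-spherical resampling such pre-processing implies: the volume is sampled along (r, theta, phi) rays centred on the volume centre using scipy's map_coordinates. Grid sizes and the centring convention are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_spherical(vol, n_r=64, n_theta=64, n_phi=64):
    """Resample a 3D volume onto an (r, theta, phi) grid centred on it."""
    c = (np.array(vol.shape) - 1) / 2.0
    r = np.linspace(0, c.min(), n_r)               # radius
    theta = np.linspace(0, np.pi, n_theta)         # polar angle
    phi = np.linspace(0, 2 * np.pi, n_phi)         # azimuth
    R, T, P = np.meshgrid(r, theta, phi, indexing="ij")
    coords = np.stack([c[0] + R * np.sin(T) * np.cos(P),
                       c[1] + R * np.sin(T) * np.sin(P),
                       c[2] + R * np.cos(T)])
    return map_coordinates(vol, coords, order=1)   # trilinear sampling

vol = np.random.rand(96, 96, 96).astype(np.float32)  # stand-in MRI volume
sph = to_spherical(vol)
print(sph.shape)   # (64, 64, 64): spherical-space volume fed to the DCNN
```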
Overall Survival Prediction for Glioblastoma on Pre-treatment MRI Using Robust Radiomics and Priors
Suter, Yannick
Knecht, Urspeter
Wiest, Roland
Reyes, Mauricio
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Random Forest
Patients with Glioblastoma multiforme (GBM) have a very low overall survival (OS) time due to the rapid growth and invasiveness of this brain tumor. As a contribution to the overall survival (OS) prediction task within the Brain Tumor Segmentation Challenge (BraTS), we classify the OS of GBM patients into overall survival classes based on information derived from pre-treatment Magnetic Resonance Imaging (MRI). The top-ranked methods from past years almost exclusively used shape and position features. This is a remarkable contrast to current advances in GBM radiomics showing a benefit of intensity-based features. This discrepancy may be caused by the inconsistent acquisition parameters in a multi-center setting. In this contribution, we test whether normalizing the images based on the healthy tissue intensities enables the robust use of intensity features in this challenge. Based on these normalized images, we test the performance of 176 combinations of feature selection techniques and classifiers. Additionally, we test the incorporation of a sequence and robustness prior to limit the performance drop when models are applied to unseen data. The most robust performance on the training data (accuracy: 0.52 ± 0.09) was achieved with random forest regression, but this accuracy could not be maintained on the test set.
Glioma Segmentation Using Encoder-Decoder Network and Survival Prediction Based on Cox Analysis
Pang, Enshuai
Shi, Wei
Li, Xuan
Wu, Qiang
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Random Forest
Radiomics
Glioma imaging analysis is a challenging task. In this paper, we use an encoder-decoder structure for the task of glioma segmentation. The most important characteristic of the presented segmentation structure is that it can extract richer features while greatly reducing the number of network parameters and the consumption of computing resources. Different texture, first-order statistics and shape-based features were extracted from the BraTS 2020 dataset. We then use Cox survival analysis to perform feature selection on the extracted features. Finally, we use a random forest regression model to predict the survival time of the patients. The survival prediction result with five-fold cross-validation on the training dataset is better than the baseline system.
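A minimal sketch of the statistical tail of the described pipeline, assuming the lifelines and scikit-learn packages: univariate Cox screening of radiomic features followed by random forest regression of survival days. The table values, the p-value threshold, and the fallback are illustrative stand-ins for real PyRadiomics features.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy radiomics table: 60 patients x 10 features, survival days, event flag.
X = pd.DataFrame(rng.normal(size=(60, 10)),
                 columns=[f"feat_{i}" for i in range(10)])
df = X.copy()
df["days"] = rng.integers(50, 1500, size=60)
df["event"] = rng.integers(0, 2, size=60)

# Univariate Cox screening: keep features with p < 0.2 (threshold assumed).
selected = []
for col in X.columns:
    cph = CoxPHFitter()
    cph.fit(df[[col, "days", "event"]], duration_col="days", event_col="event")
    if cph.summary.loc[col, "p"] < 0.2:
        selected.append(col)
if not selected:                    # fallback so the sketch always runs
    selected = list(X.columns)
print("selected features:", selected)

# Random forest regression on the selected features predicts survival days.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(df[selected], df["days"])
print("train R^2:", rf.score(df[selected], df["days"]))
```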
Brain Tumor Segmentation with Self-ensembled, Deeply-Supervised 3D U-Net Neural Networks: A BraTS 2020 Challenge Solution
Henry, Théophraste
Carré, Alexandre
Lerousseau, Marvin
Estienne, Théo
Robert, Charlotte
Paragios, Nikos
Deutsch, Eric
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Brain tumor segmentation is a critical task for a patient's disease management. In order to automate and standardize this task, we trained multiple U-Net-like neural networks, mainly with deep supervision and stochastic weight averaging, on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. Two independent ensembles of models from two different training pipelines were trained, and each produced a brain tumor segmentation map. These two label maps per patient were then merged, taking into account the performance of each ensemble for specific tumor subregions. Our performance on the online validation dataset with test-time augmentation was as follows: Dice of 0.81, 0.91 and 0.85; Hausdorff (95%) of 20.6, 4.3 and 5.7 mm for the enhancing tumor, whole tumor and tumor core, respectively. Similarly, our solution achieved a Dice of 0.79, 0.89 and 0.84, as well as Hausdorff (95%) of 20.4, 6.7 and 19.5 mm on the final test dataset, ranking us among the top ten teams. More complicated training schemes and neural network architectures were investigated without significant performance gain, at the cost of greatly increased training time. Overall, our approach yielded good and balanced performance for each tumor subregion. Our solution is open sourced at https://github.com/lescientifik/open_brats2020.
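Stochastic weight averaging is one of the named training ingredients; the sketch below shows it with PyTorch's built-in swa_utils on a stand-in network (the real work trains U-Net-like models with deep supervision). The schedule, learning rates, and the tiny network are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel, SWALR

# Stand-in segmentation network: 4 input modalities, 4 output classes.
net = nn.Sequential(nn.Conv3d(4, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(8, 4, 1))
opt = torch.optim.SGD(net.parameters(), lr=1e-2)
swa_net = AveragedModel(net)          # holds the running weight average
swa_sched = SWALR(opt, swa_lr=5e-3)   # constant SWA learning rate

x = torch.randn(2, 4, 16, 16, 16)
y = torch.randint(0, 4, (2, 16, 16, 16))
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()
    if epoch >= 5:                    # start averaging late in training
        swa_net.update_parameters(net)
        swa_sched.step()

# swa_net now holds the averaged weights used at inference time; with
# BatchNorm layers, torch.optim.swa_utils.update_bn would also be needed.
print(loss.item())
```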
Brain Tumour Segmentation Using a Triplanar Ensemble of U-Nets on MR Images
Sundaresan, Vaanathi
Griffanti, Ludovica
Jenkinson, Mark
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Gliomas appear with wide variation in their characteristics both in terms of their appearance and location on brain MR images, which makes robust tumour segmentation highly challenging, and leads to high inter-rater variability even in manual segmentations. In this work, we propose a triplanar ensemble network, with an independent tumour core prediction module, for accurate segmentation of these tumours and their sub-regions. On evaluating our method on the MICCAI Brain Tumor Segmentation (BraTS) challenge validation dataset, for tumour sub-regions, we achieved a Dice similarity coefficient of 0.77 for both enhancing tumour (ET) and tumour core (TC). In the case of the whole tumour (WT) region, we achieved a Dice value of 0.89, which is on par with the top-ranking methods from BraTS’17-19. Our method achieved an evaluation score that was the equal 5th highest value (with our method ranking in 10th place) in the BraTS’20 challenge, with mean Dice values of 0.81, 0.89 and 0.84 on ET, WT and TC regions respectively on the BraTS’20 unseen test dataset.
MRI Brain Tumor Segmentation Using a 2D-3D U-Net Ensemble
Marti Asenjo, Jaime
Martinez-Larraz Solís, Alfonso
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Three 2D networks, one for each patient plane (axial, sagittal and coronal), plus a 3D network were ensembled for tumor segmentation over MRI images, with final Dice scores of 0.75 for the enhancing tumor (ET), 0.81 for the whole tumor (WT) and 0.78 for the tumor core (TC). A survival prediction model was designed in MATLAB, based on features extracted from the automatic segmentation. Gross tumor size and location seem to play a major role in survival prediction. A final accuracy of 0.617 was achieved.
Multimodal Brain Tumor Segmentation and Survival Prediction Using a 3D Self-ensemble ResUNet
Pei, Linmin
Murat, A. K.
Colen, Rivka
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Residual u-net
Radiomics
In this paper, we propose a 3D self-ensemble ResUNet (srUNet) deep neural network architecture for brain tumor segmentation and a machine learning-based method for overall survival prediction of patients with gliomas. The UNet architecture has been used for semantic image segmentation, including medical image segmentation and brain tumor segmentation. In this work, we utilize the srUNet to differentiate brain tumors, and the segmented tumors are then used for survival prediction. We apply the proposed method to the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 validation dataset for both tumor segmentation and survival prediction. The tumor segmentation result shows a Dice score coefficient (DSC) of 0.7634, 0.899, and 0.816 for enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively. For the survival prediction method, we achieve 56.4% classification accuracy with mean square error (MSE) 101697, and 55.2% accuracy with MSE 56169 for training and validation, respectively. In the testing phase, the proposed method offers a DSC of 0.786, 0.881, and 0.823 for ET, WT, and TC, respectively. It also achieves an accuracy of 0.43 for overall survival prediction.
MRI Brain Tumor Segmentation and Uncertainty Estimation Using 3D-UNet Architectures
Ballestar, Laura Mora
Vilaplana, Veronica
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Automation of brain tumor segmentation in 3D magnetic resonance images (MRIs) is key to assessing the diagnosis and treatment of the disease. In recent years, convolutional neural networks (CNNs) have shown improved results on this task. However, high memory consumption is still a problem in 3D CNNs. Moreover, most methods do not include uncertainty information, which is especially critical in medical diagnosis. This work studies 3D encoder-decoder architectures trained with patch-based techniques to reduce memory consumption and decrease the effect of unbalanced data. The different trained models are then used to create an ensemble that leverages the properties of each model, thus increasing performance. We also introduce voxel-wise uncertainty information, both epistemic and aleatoric, using test-time dropout (TTD) and test-time data augmentation (TTA), respectively. In addition, a hybrid approach is proposed that helps increase the accuracy of the segmentation. The model and uncertainty estimation measurements proposed in this work have been used in the BraTS'20 Challenge for tasks 1 and 3 regarding tumor segmentation and uncertainty estimation.
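A minimal sketch of the two uncertainty estimates described above, test-time dropout for the epistemic part and flip-based test-time augmentation for the aleatoric part, on a stand-in network; the sample counts, augmentations, and network are assumptions, not the authors' models.

```python
import torch
import torch.nn as nn

# Tiny stand-in for a trained 3D encoder-decoder with dropout.
net = nn.Sequential(nn.Conv3d(4, 8, 3, padding=1), nn.ReLU(),
                    nn.Dropout3d(0.2),
                    nn.Conv3d(8, 3, 1), nn.Sigmoid())

x = torch.randn(1, 4, 16, 16, 16)

# Test-time dropout (TTD): keep dropout active and sample repeatedly.
net.train()
with torch.no_grad():
    ttd = torch.stack([net(x) for _ in range(20)])
epistemic = ttd.var(dim=0)       # voxel-wise variance across TTD samples

# Test-time augmentation (TTA): flip each spatial axis, undo the flip.
net.eval()
with torch.no_grad():
    flips = [net(torch.flip(x, dims=[d])).flip(dims=[d]) for d in (2, 3, 4)]
    tta = torch.stack([net(x)] + flips)
aleatoric = tta.var(dim=0)       # variance across augmented passes

print(epistemic.mean().item(), aleatoric.mean().item())
```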
Utility of Brain Parcellation in Enhancing Brain Tumor Segmentation and Survival Prediction
Zhang, Yue
Wu, Jiewei
Huang, Weikai
Chen, Yifan
Wu, Ed X.
Tang, Xiaoying
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BraTS 2018
BRAIN
Segmentation
Algorithm Development
In this paper, we proposed a UNet-based brain tumor segmentation method and a linear model-based survival prediction method. The effectiveness of UNet has been validated in automatically segmenting brain tumors from multimodal magnetic resonance (MR) images. Rather than network architecture, we focused more on making use of additional information (brain parcellation), training and testing strategy (coarse-to-fine), and ensemble technique to improve the segmentation performance. We then developed a linear classification model for survival prediction. Different from previous studies that mainly employ features from brain tumor segmentation, we also extracted features from brain parcellation, which further improved the prediction accuracy. On the challenge testing dataset, the proposed approach yielded average Dice scores of 88.43%, 84.51%, and 78.93% for the whole tumor, tumor core, and enhancing tumor in the segmentation task and an overall accuracy of 0.533 in the survival prediction task.
Uncertainty-Driven Refinement of Tumor-Core Segmentation Using 3D-to-2D Networks with Label Uncertainty
McKinley, Richard
Rebsamen, Micheal
Dätwyler, Katrin
Meier, Raphael
Radojewski, Piotr
Wiest, Roland
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BraTS 2019
BRAIN
Segmentation
Algorithm Development
3d convolutional neural network (CNN)
The BraTS dataset contains a mixture of high-grade and low-grade gliomas, which have rather different appearances: previous studies have shown that performance can be improved by separate training on low-grade gliomas (LGGs) and high-grade gliomas (HGGs), but in practice this information is not available at test time to decide which model to use. By contrast with HGGs, LGGs often present no sharp boundary between the tumor core and the surrounding edema, but rather a gradual reduction of tumor-cell density. Utilizing our 3D-to-2D fully convolutional architecture, DeepSCAN, which ranked highly in the 2019 BraTS challenge and was trained using an uncertainty-aware loss, we separate cases into those with a confidently segmented core and those with a vaguely segmented or missing core. Since by assumption every tumor has a core, we reduce the threshold for classification of core tissue in those cases where the core, as segmented by the classifier, is vaguely defined or missing. We then predict survival of high-grade glioma patients using a fusion of linear regression and random forest classification, based on age, number of distinct tumor components, and number of distinct tumor cores. We present results on the validation dataset of the Multimodal Brain Tumor Segmentation Challenge 2020 (segmentation and uncertainty challenge), and on the testing set, where the method achieved 4th place in segmentation, 1st place in uncertainty estimation, and 1st place in survival prediction.
Multi-decoder Networks with Multi-denoising Inputs for Tumor Segmentation
Vu, Minh H.
Nyholm, Tufve
Löfstedt, Tommy
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Ensemble learning
Deep Learning
Magnetic Resonance Imaging (MRI)
Automatic segmentation of brain glioma from multimodal MRI scans plays a key role in clinical trials and practice. Unfortunately, manual segmentation is very challenging, time-consuming, costly, and often inaccurate despite human expertise due to the high variance and high uncertainty in the human annotations. In the present work, we develop an end-to-end deep-learning-based segmentation method using a multi-decoder architecture by jointly learning three separate sub-problems using a partly shared encoder. We also propose to apply smoothing methods to the input images to generate denoised versions as additional inputs to the network. The validation performance indicates an improvement when using the proposed method. The proposed method was ranked 2nd in the task of Quantification of Uncertainty in Segmentation in the Brain Tumors in Multimodal Magnetic Resonance Imaging Challenge 2020.
MultiATTUNet: Brain Tumor Segmentation and Survival Multitasking
Carmo, Diedre
Rittner, Leticia
Lotufo, Roberto
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Segmentation of glioma from three-dimensional magnetic resonance imaging (MRI) is useful for the diagnosis and surgical treatment of patients with brain tumors. Manual segmentation is expensive, requiring medical specialists. In recent years, the Brain Tumor Segmentation Challenge (BraTS) has been calling on researchers to submit automated glioma segmentation and survival prediction methods for evaluation and discussion over their public, multimodality MRI dataset with manual annotations. This work presents an exploration of different solutions to the problem, using 3D UNets and self-attention for multitasking both predictions, and also training (2D) EfficientDet-derived segmentations, with the best results submitted to the official challenge leaderboard. We show that end-to-end multitasking of survival and segmentation, in this case, led to better results.
A Two-Stage Cascade Model with Variational Autoencoders and Attention Gates for MRI Brain Tumor Segmentation
Lyu, C.
Shu, H.
Brainlesion2020Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Attention gate
Brain tumor segmentation
Encoder-decoder network
Variational autoencoder
Automatic MRI brain tumor segmentation is of vital importance for disease diagnosis, monitoring, and treatment planning. In this paper, we propose a two-stage encoder-decoder based model for brain tumor subregional segmentation. Variational autoencoder regularization is utilized in both stages to prevent overfitting. The second-stage network adopts attention gates and is additionally trained using an expanded dataset formed by the first-stage outputs. On the BraTS 2020 validation dataset, the proposed method achieves mean Dice scores of 0.9041, 0.8350, and 0.7958, and Hausdorff distances (95%) of 4.953, 6.299, and 23.608 for the whole tumor, tumor core, and enhancing tumor, respectively. The corresponding results on the BraTS 2020 testing dataset are 0.8729, 0.8357, and 0.8205 for Dice score, and 11.4288, 19.9690, and 15.6711 for Hausdorff distance. The code is publicly available at https://github.com/shu-hai/two-stage-VAE-Attention-gate-BraTS2020.
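A minimal additive attention gate in the style of Oktay et al. (2018), of the kind adopted by the second-stage network; channel sizes are assumptions, and this is not the authors' released code (see their repository for that).

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Decoder gating signal g weights the encoder skip features x."""
    def __init__(self, x_ch, g_ch, mid_ch):
        super().__init__()
        self.wx = nn.Conv3d(x_ch, mid_ch, 1)
        self.wg = nn.Conv3d(g_ch, mid_ch, 1)
        self.psi = nn.Sequential(nn.ReLU(),
                                 nn.Conv3d(mid_ch, 1, 1), nn.Sigmoid())

    def forward(self, x, g):
        alpha = self.psi(self.wx(x) + self.wg(g))   # voxel-wise weights
        return x * alpha

skip = torch.randn(1, 32, 16, 16, 16)   # encoder features
gate = torch.randn(1, 64, 16, 16, 16)   # decoder gating signal (upsampled)
print(AttentionGate(32, 64, 16)(skip, gate).shape)
```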
Cascaded Coarse-to-Fine Neural Network for Brain Tumor Segmentation
Yang, Shuojue
Guo, Dong
Wang, Lu
Wang, Guotai
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
A cascaded framework of coarse-to-fine networks is proposed to segment brain tumor from multi-modality MR images into three subregions: enhancing tumor, whole tumor and tumor core. The framework is designed to decompose this multi-class segmentation into two sequential tasks according to hierarchical relationship among these regions. In the first task, a coarse-to-fine model based on Global Context Network predicts segmentation of whole tumor, which provides a bounding box of all three substructures to crop the input MR images. In the second task, cropped multi-modality MR images are fed into another two coarse-to-fine models based on NvNet trained on small patches to generate segmentation of tumor core and enhancing tumor, respectively. Experiments with BraTS 2020 validation set show that the proposed method achieves average Dice scores of 0.8003, 0.9123, 0.8630 for enhancing tumor, whole tumor and tumor core, respectively. The corresponding values for BraTS 2020 testing set were 0.81715, 0.88229, 0.83085, respectively.
Low-Rank Convolutional Networks for Brain Tumor Segmentation
Ashtari, Pooya
Maes, Frederik
Van Huffel, Sabine
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
The automated segmentation of brain tumors is crucial for various clinical purposes from diagnosis to treatment planning to follow-up evaluations. The vast majority of effective models for tumor segmentation are based on convolutional neural networks with millions of parameters being trained. Such complex models can be highly prone to overfitting especially in cases where the amount of training data is insufficient. In this work, we devise a 3D U-Net-style architecture with residual blocks, in which low-rank constraints are imposed on weights of the convolutional layers in order to reduce overfitting. Within the same architecture, this helps to design networks with several times fewer parameters. We investigate the effectiveness of the proposed technique on the BraTS 2020 challenge.
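One common way to realize such a low-rank constraint is to factorize each 3D convolution into a thin spatial convolution over a few basis channels followed by a 1x1x1 mixing convolution, as sketched below; this particular factorization and the rank value are illustrative assumptions rather than the authors' exact construction.

```python
import torch
import torch.nn as nn

class LowRankConv3d(nn.Module):
    """Low-rank stand-in for a full cin -> cout, k x k x k convolution."""
    def __init__(self, cin, cout, k=3, rank=4):
        super().__init__()
        self.spatial = nn.Conv3d(cin, rank, k, padding=k // 2, bias=False)
        self.mix = nn.Conv3d(rank, cout, 1)     # channel mixing

    def forward(self, x):
        return self.mix(self.spatial(x))

full = nn.Conv3d(32, 32, 3, padding=1)
low = LowRankConv3d(32, 32, rank=4)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(full), "vs", count(low))    # several times fewer parameters

x = torch.randn(1, 32, 16, 16, 16)
print(low(x).shape)
```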
Automated Brain Tumour Segmentation Using Cascaded 3D Densely-Connected U-Net
Ghaffari, Mina
Sowmya, Arcot
Oliver, Ruth
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Dense network
Magnetic Resonance Imaging (MRI)
Accurate brain tumour segmentation is a crucial step towards improving disease diagnosis and proper treatment planning. In this paper, we propose a deep-learning based method to segment a brain tumour into its subregions: whole tumour, tumour core and enhancing tumour. The proposed architecture is a 3D convolutional neural network based on a variant of the U-Net architecture of Ronneberger et al. [17] with three main modifications: (i) a heavy encoder, light decoder structure using residual blocks (ii) employment of dense blocks instead of skip connections, and (iii) utilization of self-ensembling in the decoder part of the network. The network was trained and tested using two different approaches: a multitask framework to segment all tumour subregions at the same time, and a three-stage cascaded framework to segment one subregion at a time. An ensemble of the results from both frameworks was also computed. To address the class imbalance issue, appropriate patch extraction was employed in a pre-processing step. Connected component analysis was utilized in the post-processing step to reduce the false positive predictions. Experimental results on the BraTS20 validation dataset demonstrates that the proposed model achieved average Dice Scores of 0.90, 0.83, and 0.78 for whole tumour, tumour core and enhancing tumour respectively.
Segmentation then Prediction: A Multi-task Solution to Brain Tumor Segmentation and Survival Prediction
Zhao, Guojing
Jiang, Bowen
Zhang, Jianpeng
Xia, Yong
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
ResNet50
Accurate brain tumor segmentation and survival prediction are two fundamental but challenging tasks in the computer aided diagnosis of gliomas. Traditionally, these two tasks were performed independently, without considering the correlation between them. We believe that both tasks should be performed under a unified framework so as to enable them mutually benefit each other. In this paper, we propose a multi-task deep learning model called segmentation then prediction (STP), to segment brain tumors and predict patient overall survival time. The STP model is composed of a segmentation module and a survival prediction module. The former uses 3D U-Net as its backbone, and the latter uses both local and global features. The local features are extracted by the last layer of the segmentation encoder, while the global features are produced by a global branch, which uses 3D ResNet-50 as its backbone. The STP model is jointly optimized for two tasks. We evaluated the proposed STP model on the BraTS 2020 validation dataset and achieved an average Dice similarity coefficient (DSC) of 0.790, 0.910, 0.851 for the segmentation of enhanced tumor core, whole tumor, and tumor core, respectively, and an accuracy of 65.5% for survival prediction.
Enhancing MRI Brain Tumor Segmentation with an Additional Classification Network
Nguyen, Hieu T.
Le, Tung T.
Nguyen, Thang V.
Nguyen, Nhan T.
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Brain tumor segmentation plays an essential role in medical image analysis. In recent studies, deep convolution neural networks (DCNNs) are extremely powerful to tackle tumor segmentation tasks. We propose in this paper a novel training method that enhances the segmentation results by adding an additional classification branch to the network. The whole network was trained end-to-end on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. On the BraTS’s test set, it achieved an average Dice score of 80.57%, 85.67% and 82.00% , as well as Hausdorff distances (95%) of 14.22, 7.36 and 23.27, respectively for the enhancing tumor, the whole tumor and the tumor core.
Self-training for Brain Tumour Segmentation with Uncertainty Estimation and Biophysics-Guided Survival Prediction
Dai, Chengliang
Wang, Shuo
Raynaud, Hadrien
Mo, Yuanhan
Angelini, Elsa
Guo, Yike
Bai, Wenjia
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Radiomics
Gliomas are among the most common types of malignant brain tumours in adults. Given the intrinsic heterogeneity of gliomas, the multi-parametric magnetic resonance imaging (mpMRI) is the most effective technique for characterising gliomas and their sub-regions. Accurate segmentation of the tumour sub-regions on mpMRI is of clinical significance, which provides valuable information for treatment planning and survival prediction. Thanks to the recent developments on deep learning, the accuracy of automated medical image segmentation has improved significantly. In this paper, we leverage the widely used attention and self-training techniques to conduct reliable brain tumour segmentation and uncertainty estimation. Based on the segmentation result, we present a biophysics-guided prognostic model for the prediction of overall survival. Our method of uncertainty estimation has won the second place of the MICCAI 2020 BraTS Challenge.
Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries; 6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Revised Selected Papers, Part II
Automated brain tumor segmentation is a vital topic due to its clinical applications. We propose to exploit a lightweight U-Net-based deep architecture called Skinny for this task; it was originally employed for skin detection in color images and benefits from a wider spatial context. We train multiple Skinny networks over all image planes (axial, coronal, and sagittal) and form an ensemble of such models. The experiments showed that our approach allows us to obtain accurate brain tumor delineation from multi-modal magnetic resonance images.
Efficient Brain Tumour Segmentation Using Co-registered Data and Ensembles of Specialised Learners
Shah, Beenitaben
Madabushi, Harish Tayyar
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Segmentation
Automatic segmentation
Challenge
BraTS 2020
Gliomas are the most common and aggressive form of all brain tumours, leading to a very short survival time at their highest grade. Hence, swift and accurate treatment planning is key. Magnetic resonance imaging (MRI) is a widely used imaging technique for the assessment of these tumours but the large amount of data generated by them prevents rapid manual segmentation, the task of dividing visual input into tumorous and non-tumorous regions. Hence, reliable automatic segmentation methods are required. This paper proposes, tests and validates two different approaches to achieving this. Firstly, it is hypothesised that co-registering multiple MRI modalities into a single volume will result in a more time and memory efficient approach which captures the same, if not more, information resulting in accurate segmentation. Secondly, it is hypothesised that training models independently on different MRI modalities allow models to specialise on certain labels or regions, which can then be ensembled to achieve improved predictions. These hypotheses were tested by training and evaluating 3D U-Net models on the BraTS 2020 data set. The experiments show that these hypotheses are indeed valid.
Efficient MRI Brain Tumor Segmentation Using Multi-resolution Encoder-Decoder Networks
Soltaninejad, Mohammadreza
Pridmore, Tony
Pound, Michael
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Encoder-decoder
Deep Learning
Convolutional Neural Network (CNN)
In this paper, we propose an automated three-dimensional (3D) deep learning approach for the segmentation of gliomas in pre-operative brain MRI scans. We introduce a state-of-the-art multi-resolution encoder-decoder architecture that comprises separate branches to incorporate local high-resolution image features and wider low-resolution contextual information. We also use a unified multi-task loss function to provide end-to-end segmentation training. For the task of survival prediction, we propose a regression algorithm based on random forests to predict the survival days of the patients. Our proposed network is fully automated and designed to take patches as input, so it can work on input images of any size. We trained our proposed network on the BraTS 2020 challenge dataset, which consists of 369 training cases, then validated it on 125 unseen validation cases, and tested it on 166 unseen cases from the testing dataset using a blind testing approach. The quantitative and qualitative results demonstrate that our proposed network provides efficient segmentation of brain tumors. The mean Dice overlap measures for automatic brain tumor segmentation of the validation dataset against ground truth are 0.87, 0.80, and 0.66 for the whole tumor, core, and enhancing tumor, respectively. The corresponding results for the testing dataset are 0.78, 0.70, and 0.66, respectively. The accuracy measures of the proposed model for the survival prediction tasks are 0.45 and 0.505 for the validation and testing datasets, respectively.
Trialing U-Net Training Modifications for Segmenting Gliomas Using Open Source Deep Learning Framework
Ellis, David G.
Aizenberg, Michele R.
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Automatic brain segmentation has the potential to save time and resources for researchers and clinicians. We aimed to improve upon previously proposed methods by implementing the U-Net model and trialing various modifications to the training and inference strategies. The trials were performed and tested on the Multimodal Brain Tumor Segmentation dataset that provides MR images of brain tumors along with manual segmentations for hundreds of subjects. The U-Net models were trained on a training set of MR images from 369 subjects and then tested against a validation set of images from 125 subjects. The proposed modifications included predicting the labeled region contours, permutations of the input data via rotation and reflection, grouping labels together, as well as creating an ensemble of models. The ensemble of models provided the best results compared to any of the other methods, but the other modifications did not demonstrate improvement. Future work will look at reducing the level of the training augmentation so that the models are better able to generalize to the validation set. Overall, our open source deep learning framework allowed us to quickly implement and test multiple U-Net training modifications. The code for this project is available at https://github.com/ellisdg/3DUnetCNN.
HI-Net: Hyperdense Inception 3D UNet for Brain Tumor Segmentation
Qamar, Saqib
Ahmad, Parvez
Shen, Linlin
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
U-Net
Dense network
The brain tumor segmentation task aims to classify tissue into the whole tumor (WT), tumor core (TC) and enhancing tumor (ET) classes using multimodal MRI images. Quantitative analysis of brain tumors is critical for clinical decision making. While manual segmentation is tedious, time-consuming, and subjective, this task is at the same time very challenging for automatic segmentation methods. Thanks to their powerful learning ability, convolutional neural networks (CNNs), mainly fully convolutional networks, have shown promising brain tumor segmentation performance. This paper further boosts the performance of brain tumor segmentation by proposing a hyperdense inception 3D UNet (HI-Net), which captures multi-scale information by stacking factorizations of 3D weighted convolutional layers in the residual inception block. We use hyperdense connections among factorized convolutional layers to extract more contextual information, with the help of feature reusability. We use a Dice loss function to cope with class imbalances. We validate the proposed architecture on the multi-modal brain tumor segmentation challenge (BRATS) 2020 testing dataset. Preliminary results on the BRATS 2020 testing set show that the Dice (DSC) scores achieved by our proposed approach for ET, WT, and TC are 0.79457, 0.87494, and 0.83712, respectively.
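A minimal multi-class soft Dice loss of the kind used to cope with class imbalance; the exact formulation in the paper may differ (e.g., in smoothing or class weighting).

```python
import torch

def soft_dice_loss(probs, target_onehot, eps=1e-5):
    """probs, target_onehot: (N, C, D, H, W); returns 1 - mean Dice."""
    dims = (0, 2, 3, 4)                          # sum over batch and space
    intersect = (probs * target_onehot).sum(dims)
    denom = probs.sum(dims) + target_onehot.sum(dims)
    dice = (2 * intersect + eps) / (denom + eps)
    return 1 - dice.mean()                       # average over classes

probs = torch.softmax(torch.randn(2, 3, 16, 16, 16), dim=1)
labels = torch.randint(0, 3, (2, 16, 16, 16))
onehot = torch.nn.functional.one_hot(labels, 3).permute(0, 4, 1, 2, 3).float()
print(soft_dice_loss(probs, onehot).item())
```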
H2NF-Net for Brain Tumor Segmentation Using Multimodal MR Imaging: 2nd Place Solution to BraTS Challenge 2020 Segmentation Task
Jia, Haozhe
Cai, Weidong
Huang, Heng
Xia, Yong
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
In this paper, we propose a Hybrid High-resolution and Non-local Feature Network (H2NF-Net) to segment brain tumor in multimodal MR images. Our H2NF-Net uses the single and cascaded HNF-Nets to segment different brain tumor sub-regions and combines the predictions together as the final segmentation. We trained and evaluated our model on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 dataset. The results on the test set show that the combination of the single and cascaded models achieved average Dice scores of 0.78751, 0.91290, and 0.85461, as well as Hausdorff distances (95%) of 26.57525, 4.18426, and 4.97162 for the enhancing tumor, whole tumor, and tumor core, respectively. Our method won the second place in the BraTS 2020 challenge segmentation task out of nearly 80 participants.
2D Dense-UNet: A Clinically Valid Approach to Automated Glioma Segmentation
McHugh, Hugh
Talou, Gonzalo Maso
Wang, Alan
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Dense network
Brain tumour segmentation is a requirement of many quantitative MRI analyses involving glioma. This paper argues that 2D slice-wise approaches to brain tumour segmentation may be more compatible with current MRI acquisition protocols than 3D methods because clinical MRI is most commonly a slice-based modality. A 2D Dense-UNet segmentation model was trained on the BraTS 2020 dataset. Mean Dice values achieved on the test dataset were: 0.859 (WT), 0.788 (TC) and 0.766 (ET). Median test data Dice values were: 0.902 (WT), 0.887 (TC) and 0.823 (ET). Results were comparable to previous high performing BraTS entries. 2D segmentation may have advantages over 3D methods in clinical MRI datasets where volumetric sequences are not universally available.
Attention U-Net with Dimension-Hybridized Fast Data Density Functional Theory for Automatic Brain Tumor Image Segmentation
Su, Zi-Jun
Chang, Tang-Chen
Tai, Yen-Ling
Chang, Shu-Jung
Chen, Chien-Chang
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Attention U-Net
In the article, we proposed a hybridized method for brain tumor image segmentation by fusing topological heterogeneities of images and the attention mechanism in the neural networks. The three-dimensional image datasets were first pre-processed using the histogram normalization for the standardization of pixel intensities. Then the normalized images were parallel fed into the procedures of affine transformations and feature pre-extractions. The technique of fast data density functional theory (fDDFT) was adopted for the topological feature extractions. Under the framework of fDDFT, 3-dimensional topological features were extracted and then used for the 2-dimensional tumor image segmentation, then those 2-dimensional significant images are reconstructed back to the 3-dimensional intensity feature maps by utilizing physical perceptrons. The undesired image components would be filtered out in this procedure. Thus, at the pre-processing stage, the proposed framework provided dimension-hybridized intensity feature maps and image sets after the affine transformations simultaneously. Then the feature maps and the transformed images were concatenated and then became the inputs of the attention U-Net. By employing the concept of gate controlling of the data flow, the encoder can perform as a masked feature tracker to concatenate the features produced from the decoder. Under the proposed algorithmic scheme, we constructed a fast method of dimension-hybridized feature pre-extraction for the training procedure in the neural network. Thus, the model size as well as the computational complexity might be reduced safely by applying the proposed algorithm.
MVP U-Net: Multi-View Pointwise U-Net for Brain Tumor Segmentation
Zhao, Changchen
Zhao, Zhiming
Zeng, Qingrun
Feng, Yuanjing
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Spatial-temporal network
It is a challenging task to segment brain tumors from multi-modality MRI scans, and how to segment and reconstruct brain tumors more accurately and faster remains an open question. The key is to effectively model the spatial-temporal information that resides in the input volumetric data. In this paper, we propose the Multi-View Pointwise U-Net (MVP U-Net) for brain tumor segmentation. Our segmentation approach follows an encoder-decoder based 3D U-Net architecture, in which the 3D convolution is replaced by three 2D multi-view convolutions in three orthogonal views (axial, sagittal, coronal) of the input data to learn spatial features, and one pointwise convolution to learn channel features. Further, we appropriately modify the Squeeze-and-Excitation (SE) block and introduce it into our original MVP U-Net after the concatenation section. In this way, the generalization ability of the model can be improved while the number of parameters is reduced. On the BraTS 2020 testing dataset, the mean Dice scores of the proposed method were 0.715, 0.839, and 0.768 for enhanced tumor, whole tumor, and tumor core, respectively. The results show the effectiveness of the proposed MVP U-Net with the SE block for multi-modal brain tumor segmentation.
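A minimal sketch of the multi-view pointwise idea: three 2D convolutions realized as 3D convolutions with flat kernels in the axial, sagittal, and coronal planes, followed by a 1x1x1 pointwise convolution for channel mixing. How the three views are merged here (summation) is an assumption, not necessarily the authors' choice.

```python
import torch
import torch.nn as nn

class MVPConv(nn.Module):
    """Three plane-wise 2D convolutions plus a pointwise channel mixer."""
    def __init__(self, cin, cout):
        super().__init__()
        self.axial = nn.Conv3d(cin, cin, (1, 3, 3), padding=(0, 1, 1))
        self.sagittal = nn.Conv3d(cin, cin, (3, 3, 1), padding=(1, 1, 0))
        self.coronal = nn.Conv3d(cin, cin, (3, 1, 3), padding=(1, 0, 1))
        self.pointwise = nn.Conv3d(cin, cout, 1)   # channel features

    def forward(self, x):
        views = self.axial(x) + self.sagittal(x) + self.coronal(x)
        return self.pointwise(views)

x = torch.randn(1, 16, 16, 32, 32)
print(MVPConv(16, 32)(x).shape)
```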
Glioma Segmentation with 3D U-Net Backed with Energy-Based Post-Processing
Zsamboki, Richard
Takacs, Petra
Deak-Karancsi, Borbala
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Supervised training
Algorithm Development
This paper proposes a glioma segmentation method based on neural networks. The base of the network is a U-Net expanded by residual blocks. Several preprocessing steps were applied before training, such as intensity normalization, high-intensity cutting, cropping, and random flips. 2D and 3D solutions were implemented and tested; results show that the 3D network outperforms the 2D one, so we retained the 3D direction.

The novelty of the method is the energy-based post-processing. Snakes [10] and conditional random fields (CRF) [11] were applied to the neural network's predictions. A snake, or active contour, needs an initial outline around the object, e.g. the network's prediction outline, and it can correct the contours of the tumor by finding the energy minimum based on the intensity values in a given area. CRF is a specific type of graphical model; it uses the network's prediction and the raw image features to estimate the posterior distribution (the tumor contour) via energy function minimization.

The proposed methods are evaluated within the framework of the BraTS 2020 challenge. Measured on the test dataset, the mean Dice scores of the whole tumor (WT), tumor core (TC) and enhancing tumor (ET) are 86.9%, 83.2% and 81.8% respectively. The results show high performance and promising future work in tumor segmentation, even outside of the brain.
nnU-Net for Brain Tumor Segmentation
Isensee, Fabian
Jäger, Paul F.
Full, Peter M.
Vollmuth, Philipp
Maier-Hein, Klaus H.
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
U-Net
We apply nnU-Net to the segmentation task of the BraTS 2020 challenge. The unmodified nnU-Net baseline configuration already achieves a respectable result. By incorporating BraTS-specific modifications regarding postprocessing, region-based training, a more aggressive data augmentation as well as several minor modifications to the nnU-Net pipeline we are able to improve its segmentation performance substantially. We furthermore re-implement the BraTS ranking scheme to determine which of our nnU-Net variants best fits the requirements imposed by it. Our method took the first place in the BraTS 2020 competition with Dice scores of 88.95, 85.06 and 82.03 and HD95 values of 8.498, 17.337 and 17.805 for whole tumor, tumor core and enhancing tumor, respectively.
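The region-based training mentioned here follows the usual BraTS convention of optimizing the nested evaluation regions rather than the raw labels. A minimal sketch of that label-to-region mapping (the label codes are the standard BraTS ones, not specific to this paper):

```python
import numpy as np

def brats_regions(label_map):
    """Map BraTS labels (1 = necrotic/non-enhancing core, 2 = edema,
    4 = enhancing tumor) to the three nested evaluation regions."""
    wt = label_map > 0               # whole tumor: any tumor label
    tc = np.isin(label_map, (1, 4))  # tumor core: labels 1 and 4
    et = label_map == 4              # enhancing tumor: label 4 only
    return wt, tc, et
```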
A Deep Random Forest Approach for Multimodal Brain Tumor Segmentation
Shaikh, Sameer
Phophalia, Ashish
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Random Forest
Deep Learning
Locating a brain tumor and its various sub-regions is crucial for treating tumors in humans. The challenge lies in taking cues for the identification of tumors having different sizes, shapes, and locations in the brain using multimodal data. Numerous works have been done in the recent past in the BraTS challenge [16]. In this work, an ensemble-based approach using Deep Random Forest [23] in an incremental learning mechanism is deployed. The proposed approach divides data and features into disjoint subsets and learns in chunks as a cascading architecture of multi-layer RFs. Each layer is also a combination of RFs that use samples of the data to learn the diversity present. Given the huge amount of data, the proposed approach is fast and parallelizable. In addition, we have proposed a new kind of Local Binary Pattern (LBP) feature with rotation. A few more handcrafted features are also designed, primarily texture-based, appearance-based, and statistics-based features. The experiments are performed only on the MICCAI BraTS 2020 dataset.
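The paper's rotated-LBP variant is not spelled out in the abstract, but the feature family it extends is standard. A sketch of a rotation-invariant LBP histogram using scikit-image, shown only as a baseline illustration of that family:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(slice2d, P=8, R=1):
    """Rotation-invariant ('uniform') LBP histogram for one 2D slice.
    P neighbors on a circle of radius R; the 'uniform' encoding yields
    P + 2 distinct codes, so the histogram has P + 2 bins."""
    codes = local_binary_pattern(slice2d, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist
```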
Brain Tumor Segmentation and Associated Uncertainty Evaluation Using Multi-sequences MRI Mixture Data Preprocessing
Groza, Vladimir
Tuchinov, Bair
Amelina, Evgeniya
Pavlovskiy, Evgeniy
Tolstokulakov, Nikolay
Amelin, Mikhail
Golushko, Sergey
Letyagin, Andrey
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Deep Learning
Magnetic Resonance Imaging (MRI)
Brain tumor segmentation is one of the crucial tasks among the directions and domains where the daily clinical workflow requires considerable effort in studying computer tomography (CT) or structural magnetic resonance imaging (MRI) scans of patients with various pathologies. MRI is the most common method of primary detection and non-invasive diagnostics, and a source of recommendations for further treatment of brain diseases. The brain is a complex structure, different areas of which have different functional significance.

In this paper, we extend previous research work on robust pre-processing methods that make it possible to consider all available information from MRI scans by composing the T1, T1C, T2 and T2-FLAIR sequences into a unique input. Such an approach enriches the input data for the segmentation process and helps to improve the accuracy of the segmentation and the associated uncertainty evaluation performance.

The method proposed in this paper also demonstrates a strong improvement on the segmentation problem. This conclusion holds with respect to the Dice metric, Sensitivity and Specificity, compared to an identical training/validation procedure based on any single sequence, regardless of the chosen neural network architecture.

The obtained results demonstrate a significant performance improvement when combining three MRI sequences into a 3-channel RGB-like image for the considered brain tumor segmentation tasks. In this work we also provide a comparison of various gradient descent optimization methods and of different backbone architectures.
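The 3-channel composition can be sketched in a few lines. The channel order and the z-score normalization below are assumptions for illustration; the paper does not fix them in the abstract:

```python
import numpy as np

def to_rgb_like(t1c, t2, flair):
    """Sketch: z-score each MRI sequence, then stack the three volumes
    as an RGB-like multi-channel input (channel order is an assumption)."""
    def norm(v):
        v = v.astype(np.float32)
        return (v - v.mean()) / (v.std() + 1e-8)
    return np.stack([norm(t1c), norm(t2), norm(flair)], axis=-1)
```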
A Deep Supervision CNN Network for Brain Tumor Segmentation
Ma, Shiqiang
Zhang, Zehua
Ding, Jiaqi
Li, Xuejian
Tang, Jijun
Guo, Fei
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Residual u-net
Brain tumor segmentation is essential for the diagnosis and treatment of brain diseases. However, most current 3D deep learning technologies require a large number of magnetic resonance images (MRIs). In order to make full use of a small dataset like BraTS 2020, we propose a deep supervision-based 2D residual U-Net for efficient and automatic brain tumor segmentation. In our network, residual blocks are used to alleviate the gradient dispersion caused by excessive network depth, while multiple deep supervision branches serve as regularization; they improve training stability and enable the encoder to extract richer visual features. Evaluation on CBICA's Image Processing Portal verifies the effectiveness of our method. The average Dice scores of ET, WT and TC are 0.7593, 0.8726 and 0.7879 respectively.
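A minimal sketch of the deep supervision idea, assuming 2D side outputs at several decoder scales and cross-entropy as the per-branch loss (the branch weights here are illustrative, not the paper's values):

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(side_outputs, target, weights=(1.0, 0.5, 0.25)):
    """Weighted sum of losses over decoder side outputs.
    side_outputs: list of (N, C, h, w) logits, finest first;
    target: (N, H, W) integer label map at full resolution."""
    loss = 0.0
    for out, w in zip(side_outputs, weights):
        if out.shape[2:] != target.shape[1:]:
            # upsample coarser side outputs to the target resolution
            out = F.interpolate(out, size=tuple(target.shape[1:]),
                                mode="bilinear", align_corners=False)
        loss = loss + w * F.cross_entropy(out, target)
    return loss
```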
Multi-threshold Attention U-Net (MTAU) Based Model for Multimodal Brain Tumor Segmentation in MRI Scans
Gliomas are one of the most frequent brain tumors and are classified into high-grade and low-grade gliomas. The segmentation of various regions, such as the tumor core and enhancing tumor, plays an important role in determining severity and prognosis. Here, we have developed a multi-threshold model based on the attention U-Net for identification of the various regions of the tumor in magnetic resonance imaging (MRI). We propose a multi-path segmentation approach and build three separate models for the different regions of interest. The proposed model achieved mean Dice Coefficients of 0.59, 0.72, and 0.61 for enhancing tumor, whole tumor and tumor core respectively on the training dataset. The same model gave mean Dice Coefficients of 0.57, 0.73, and 0.61 on the validation dataset and 0.59, 0.72, and 0.57 on the test dataset.
Multi-stage Deep Layer Aggregation for Brain Tumor Segmentation
Silva, Carlos A.
Pinto, Adriano
Pereira, Sérgio
Lopes, Ana
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Deep Learning
Convolutional Neural Network (CNN)
Gliomas are among the most aggressive and deadly brain tumors. This paper details the proposed Deep Neural Network architecture for brain tumor segmentation from Magnetic Resonance Images. The architecture consists of a cascade of three Deep Layer Aggregation neural networks, where each stage elaborates the response using the feature maps and the probabilities of the previous stage, together with the MRI channels, as inputs. The neuroimaging data are part of the publicly available Brain Tumor Segmentation (BraTS) 2020 challenge dataset, and we evaluated our proposal on the BraTS 2020 Validation and Test sets. On the Test set, the experimental results achieved Dice scores of 0.8858, 0.8297 and 0.7900, with Hausdorff Distances of 5.32 mm, 22.32 mm and 20.44 mm for the whole tumor, core tumor and enhanced tumor, respectively.
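The cascade's data flow can be sketched as below. This is only the probability-passing part; the paper also feeds forward intermediate feature maps, which is omitted here, and the stage networks are assumed to accept the concatenated channel counts:

```python
import torch

def cascade_forward(stages, mri):
    """Sketch of a three-stage cascade: each stage after the first
    receives the MRI channels concatenated with the previous stage's
    class probability maps (feature-map reuse omitted for brevity)."""
    probs = None
    for stage in stages:
        x = mri if probs is None else torch.cat([mri, probs], dim=1)
        probs = torch.softmax(stage(x), dim=1)  # (N, C, ...) probabilities
    return probs
```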
Glioma Segmentation Using Ensemble of 2D/3D U-Nets and Survival Prediction Using Multiple Features Fusion
Automatic segmentation of gliomas from brain Magnetic Resonance Imaging (MRI) volumes is an essential step for tumor detection. Various 2D Convolutional Neural Network (2D-CNN) architectures and their 3D variants, known as 3D-CNNs, have been proposed in previous studies to capture contextual information. The 3D models capture depth information, making them a natural choice for glioma segmentation from 3D MRI images. However, the 2D models can be trained in a relatively shorter time, making their parameter tuning relatively easier. Considering these facts, we propose an ensemble of 2D and 3D models to better utilize their respective benefits. After segmentation, prediction of Overall Survival (OS) time was performed on the segmented tumor sub-regions. For this task, multiple radiomic and image-based features were extracted from the MRI volumes and segmented sub-regions. In this study, radiomic and image-based features were fused to predict the OS time of patients. Experimental results on the BraTS 2020 testing dataset achieved a Dice score of 0.79 on Enhancing Tumor (ET), 0.87 on Whole Tumor (WT), and 0.83 on Tumor Core (TC). For the OS prediction task, results on the BraTS 2020 testing leaderboard achieved an accuracy of 0.57, a Mean Square Error (MSE) of 392,963.189, a Median SE of 162,006.3, and a Spearman R correlation score of −0.084.
Generalized Wasserstein Dice Score, Distributionally Robust Deep Learning, and Ranger for Brain Tumor Segmentation: BraTS 2020 Challenge
Fidon, Lucas
Ourselin, Sébastien
Vercauteren, Tom
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Optimization
Convolutional Neural Network (CNN)
Training a deep neural network is an optimization problem with four main ingredients: the design of the deep neural network, the per-sample loss function, the population loss function, and the optimizer. However, methods developed to compete in recent BraTS challenges tend to focus only on the design of deep neural network architectures, while paying less attention to the three other aspects. In this paper, we experimented with adopting the opposite approach. We stuck to a generic and state-of-the-art 3D U-Net architecture and experimented with a non-standard per-sample loss function, the generalized Wasserstein Dice loss, a non-standard population loss function, corresponding to distributionally robust optimization, and a non-standard optimizer, Ranger. Those variations were selected specifically for the problem of multi-class brain tumor segmentation. The generalized Wasserstein Dice loss is a per-sample loss function that allows taking advantage of the hierarchical structure of the tumor regions labeled in BraTS. Distributionally robust optimization is a generalization of empirical risk minimization that accounts for the presence of underrepresented subdomains in the training dataset. Ranger is a generalization of the widely used Adam optimizer that is more stable with small batch size and noisy labels. We found that each of those variations of the optimization of deep neural networks for brain tumor segmentation leads to improvements in terms of Dice scores and Hausdorff distances. With an ensemble of three deep neural networks trained with various optimization procedures, we achieved promising results on the validation dataset and the testing dataset of the BraTS 2020 challenge. Our ensemble ranked fourth out of 78 for the segmentation task of the BraTS 2020 challenge with mean Dice scores of 88.9, 84.1, and 81.4, and mean Hausdorff distances at 95% of 6.4, 19.4, and 15.8 for the whole tumor, the tumor core, and the enhancing tumor.
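The building block of the generalized Wasserstein Dice loss is a per-voxel error that weights misclassifications by a label cost matrix, so confusing nested tumor regions costs less than confusing tumor with background. A sketch of that building block (the full loss in the paper adds a Dice-style normalization, omitted here):

```python
import torch

def wasserstein_error(probs, onehot, M):
    """Per-voxel Wasserstein distance between softmax probabilities and
    a one-hot target under a label cost matrix M (M[l, l'] is small for
    related tumor regions, larger otherwise).
    probs, onehot: (N, C, ...); M: (C, C)."""
    n, c = probs.shape[:2]
    p = probs.reshape(n, c, -1)                # flatten spatial dims: (N, C, V)
    t = onehot.reshape(n, c, -1)
    cost = torch.einsum("ncv,cd->ndv", p, M)   # expected cost w.r.t. each class
    return (t * cost).sum(dim=1)               # (N, V) voxel-wise error
```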
3D Semantic Segmentation of Brain Tumor for Overall Survival Prediction
Glioma, a malignant brain tumor, requires immediate treatment to improve the survival of patients. The heterogeneous nature of glioma makes segmentation difficult, especially for sub-regions like necrosis, enhancing tumor, non-enhancing tumor, and edema. Deep neural networks like fully convolutional neural networks and ensembles of fully convolutional neural networks have been successful for glioma segmentation. This paper demonstrates the use of a 3D fully convolutional neural network with a three-layer encoder-decoder approach. The dense connections within each layer help in diversified feature learning. The network takes 3D patches from the T1, T2, T1c, and FLAIR modalities as input. The loss function combines dice loss and focal loss. The Dice similarity coefficients for the training and validation sets are 0.88, 0.83, 0.78 and 0.87, 0.75, 0.76 for the whole tumor, tumor core and enhancing tumor, respectively. The network achieves comparable performance with other state-of-the-art ensemble approaches. A random forest regressor is trained on shape, volumetric, and age features extracted from the ground truth for overall survival prediction. The regressor achieves an accuracy of 56.8% and 51.7% on the training and validation sets.
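The combined loss can be sketched as follows, assuming softmax probabilities and one-hot targets; the mixing weight and focal gamma are assumptions, since the abstract does not give the paper's values:

```python
import torch

def dice_focal_loss(probs, target, gamma=2.0, alpha=0.5, eps=1e-6):
    """Sketch of a dice + focal combination.
    probs, target: one-hot-shaped tensors of (N, C, ...)."""
    dims = tuple(range(2, probs.ndim))          # spatial dimensions
    inter = (probs * target).sum(dims)
    dice = 1 - (2 * inter + eps) / (probs.sum(dims) + target.sum(dims) + eps)
    focal = -((1 - probs) ** gamma * target * torch.log(probs + eps)).mean()
    return alpha * dice.mean() + (1 - alpha) * focal
```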
Segmentation, Survival Prediction, and Uncertainty Estimation of Gliomas from Multimodal 3D MRI Using Selective Kernel Networks
Patel, Jay
Chang, Ken
Hoebel, Katharina
Gidwani, Mishka
Arun, Nishanth
Gupta, Sharut
Aggarwal, Mehak
Singh, Praveer
Rosen, Bruce R.
Gerstner, Elizabeth R.
Kalpathy-Cramer, Jayashree
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
U-Net
Segmentation of gliomas into distinct sub-regions can help guide clinicians in tasks such as surgical planning, prognosis, and treatment response assessment. Manual delineation is time-consuming and prone to inter-rater variability. In this work, we propose a deep learning based automatic segmentation method that takes T1-pre, T1-post, T2, and FLAIR MRI as input and outputs a segmentation map of the sub-regions of interest (enhancing tumor (ET), whole tumor (WT), and tumor core (TC)). Our U-Net based architecture incorporates a modified selective kernel block to enable the network to adjust its receptive field via an attention mechanism, enabling more robust segmentation of gliomas of all appearances, shapes, and scales. Using this approach on the official BraTS 2020 testing set, we obtain Dice scores of .822, .889, and .834, and Hausdorff distances (95%) of 11.588, 4.812, and 21.984 for ET, WT, and TC, respectively. For prediction of overall survival, we extract deep features from the bottleneck layer of this network and train a Cox Proportional Hazards model, obtaining .495 accuracy. For uncertainty prediction, we achieve AUCs of .850, .914, and .854 for ET, WT, and TC, respectively, which earned us third place for this task.
3D Brain Tumor Segmentation and Survival Prediction Using Ensembles of Convolutional Neural Networks
González, S. Rosas
Zemmoura, I.
Tauber, C.
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Radiomic features
Convolutional Neural Networks (CNNs) are the state of the art in many medical image applications, including brain tumor segmentation. However, no successful studies using CNNs have been reported for survival prediction in glioma patients. In this work, we present two different solutions: one for tumor segmentation and the other for survival prediction. We propose using an ensemble of asymmetric U-Net-like architectures to improve segmentation results in the enhancing tumor region, and the use of a DenseNet model for survival prognosis. We quantitatively compare deep learning with classical regression and classification models based on radiomics features and tumor growth model features for survival prediction on the BraTS 2020 database, and we provide insight into the limitations of these models in accurately predicting survival. Our method's current performance on the BraTS 2020 test set comprises Dice scores of 0.80, 0.87, and 0.80 for enhancing tumor, whole tumor, and tumor core, respectively, with an overall Dice of 0.82. For the survival prediction task, we obtained an accuracy of 0.57. In addition, we propose a voxel-wise uncertainty estimation of our segmentation method that can be used effectively to improve brain tumor segmentation.
Brain Tumour Segmentation Using Probabilistic U-Net
Savadikar, Chinmay
Kulhalli, Rahul
Garware, Bhushan
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Probabilistic model
We describe our approach to the segmentation task of the BraTS 2020 challenge. We use the Probabilistic U-Net to explore the effect of sampling different segmentation maps, which may be useful when the opinions of different experts vary. We use 2D segmentation models and approach the problem in a slice-by-slice manner. To explore the possibility of designing robust models, we use self-attention in the U-Net and in the prior and posterior networks, and explore the effect of varying the number of attention blocks on the quality of the segmentation. Our model achieves Dice scores of 0.81898 on Whole Tumour, 0.71681 on Tumour Core, and 0.68893 on Enhancing Tumour on the Validation data, and 0.7988 on Whole Tumour, 0.7771 on Tumour Core, and 0.7249 on Enhancing Tumour on the Testing data. Our code is available at https://github.com/rahulkulhalli/BRATS2020.
Segmenting Brain Tumors from MRI Using Cascaded 3D U-Nets
Kotowski, Krzysztof
Adamski, Szymon
Malara, Wojciech
Machura, Bartosz
Zarudzki, Lukasz
Nalepa, Jakub
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Computer Aided Detection (CADe)
In this paper, we exploit a cascaded 3D U-Net architecture to perform detection and segmentation of brain tumors (low- and high-grade gliomas) from multi-modal magnetic resonance scans. First, we detect tumors in a binary-classification setting, and they later undergo multi-class segmentation. To provide high-quality generalization, we investigate several regularization techniques that help improve the segmentation performance obtained for unseen scans, and we benefit from the expert knowledge of a senior radiologist captured in the form of several post-processing routines. Our preliminary experiments over the BraTS'20 validation set revealed that our approach delivers high-quality tumor delineation.
A Deep Supervised U-Attention Net for Pixel-Wise Brain Tumor Segmentation
Xu, Jia Hua
Teng, Wai Po Kevin
Wang, Xiong Jun
Nürnberger, Andreas
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Glioblastoma (GBM) is one of the leading causes of cancer death. Imaging diagnostics are critical for all phases of brain tumor treatment. However, manual checking by a radiologist has several limitations, such as tedious annotation, long processing times, and subjective biases, which influence the delineation of the tumor-affected region. Therefore, the development of automatic segmentation frameworks has attracted much attention from both clinical and academic researchers. Recently, most state-of-the-art algorithms have been derived from deep learning methodologies such as the U-Net and attention networks. In this paper, we propose a deep supervised U-Attention Net framework for pixel-wise brain tumor segmentation, which combines the U-Net, an attention network, and a deeply supervised multistage layer. With it, we are able to achieve both low-resolution and high-resolution feature representations, even for small tumor regions. Preliminary results of our method on the training data give mean Dice coefficients of about 0.75, 0.88, and 0.80; the validation data achieve mean Dice coefficients of 0.67, 0.86, and 0.70, for enhancing tumor (ET), whole tumor (WT), and tumor core (TC) respectively.
A Two-Stage Atrous Convolution Neural Network for Brain Tumor Segmentation and Survival Prediction
Miron, Radu
Albert, Ramona
Breaban, Mihaela
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Glioma is a type of heterogeneous tumor originating in the brain, characterized by the coexistence of multiple subregions with different phenotypic characteristics, which further determine heterogeneous profiles that are likely to respond variably to treatment. Identifying spatial variations of gliomas is necessary for targeted therapy. The current paper proposes a neural network composed of heterogeneous building blocks to identify the different histologic sub-regions of gliomas in multi-parametric MRIs and further extracts radiomic features to estimate a patient's prognosis. The model is evaluated on the BraTS 2020 dataset. Notes: 1. https://github.com/IBM/pytorch-large-model-support. 2. https://github.com/maduriron/BraTS2020.
TwoPath U-Net for Automatic Brain Tumor Segmentation from Multimodal MRI Data
Kaewrak, Keerati
Soraghan, John
Di Caterina, Gaetano
Grose, Derek
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
A novel encoder-decoder deep learning network called TwoPath U-Net for the multi-class automatic brain tumor segmentation task is presented. The network uses cascaded local and global feature extraction paths in its down-sampling path, which allows the network to learn different aspects of both low-level and high-level features. The proposed network architecture, using full-image and patch input techniques, was trained on the BraTS 2020 training dataset. We tested the network performance using the BraTS 2019 validation dataset and obtained mean Dice scores of 0.76, 0.64, and 0.58 and 95% Hausdorff distances of 25.05, 32.83, and 37.57 for the whole tumor, tumor core and enhancing tumor regions.
Brain Tumor Segmentation and Survival Prediction Using Automatic Hard Mining in 3D CNN Architecture
We utilize 3D fully convolutional neural networks (CNNs) to segment gliomas and their constituents from multimodal Magnetic Resonance Images (MRI). The architecture uses dense connectivity patterns to reduce the number of weights, together with residual connections, and is initialized with weights obtained from training this model on the BraTS 2018 dataset. Hard mining is done during training to focus on the difficult cases of the segmentation task, by increasing the dice similarity coefficient (DSC) threshold used to select hard cases as epochs increase. On the BraTS 2020 validation data (n = 125), this architecture achieved tumor core, whole tumor, and active tumor Dice scores of 0.744, 0.876, and 0.714, respectively. On the test dataset, we get an increment in the DSC of tumor core and active tumor by approximately 7%. In terms of DSC, our network's performances on the BraTS 2020 test data are 0.775, 0.815, and 0.85 for enhancing tumor, tumor core, and whole tumor, respectively. The overall survival of a subject is determined using conventional machine learning on radiomics features obtained from the generated segmentation mask. Our approach achieved accuracies of 0.448 and 0.452 on the validation and test datasets.
This manuscript outlines the design of methods and initial progress on automatic detection of glioma from MRI images using deep neural networks, applied and evaluated for the 2020 Brain Tumor Segmentation (BraTS) Challenge. Our approach builds on existing work using U-Net architectures and evaluates a variety of deep learning techniques, including model averaging and adaptive learning rates.
Segmentation of gliomas is essential to aid clinical diagnosis and treatment; however, imaging artifacts and heterogeneous shape complicate this task. In the last few years, researchers have shown the effectiveness of 3D UNets on this problem. They have found success using 3D patches to predict the class label for the center voxel; however, even a single patch-based UNet may miss representations that another UNet could learn. To circumvent this issue, I developed PieceNet, a deep learning model using a novel ensemble of patch-based 3D UNets. In particular, I used uncorrected modalities to train a standard 3D UNet for all label classes as well as one 3D UNet for each individual label class. Initial results indicate this 4-network ensemble is potentially a superior technique to a traditional patch-based 3D UNet on uncorrected images; however, further work needs to be done to allow for more competitive enhancing tumor segmentation. Moreover, I developed a linear probability model using radiomic and non-imaging features that predicts post-surgery survival.
Cerberus: A Multi-headed Network for Brain Tumor Segmentation
The automated analysis of medical images requires robust and accurate algorithms that address the inherent challenges of identifying heterogeneous anatomical and pathological structures, such as brain tumors, in large volumetric images. In this paper, we present Cerberus, a single lightweight convolutional neural network model for the segmentation of fine-grained brain tumor regions in multichannel MRIs. Cerberus has an encoder-decoder architecture that takes advantage of a shared encoding phase to learn common representations for these regions and, then, uses specialized decoders to produce detailed segmentations. Cerberus learns to combine the weights learned for each category to produce a final multi-label segmentation. We evaluate our approach on the official test set of the Brain Tumor Segmentation Challenge 2020, and we obtain dice scores of 0.807 for enhancing tumor, 0.867 for whole tumor and 0.826 for tumor core.
An Automatic Overall Survival Time Prediction System for Glioma Brain Tumor Patients Based on Volumetric and Shape Features
An automatic overall survival time prediction system for glioma brain tumor patients is proposed and developed based on volumetric, location, and shape features. The proposed automatic prediction system consists of three stages: segmentation of brain tumor sub-regions; feature extraction; and overall survival time prediction. A deep learning structure based on a modified 3-Dimensional (3D) U-Net is proposed to develop an accurate segmentation model that identifies and localizes the three glioma brain tumor sub-regions: gadolinium (GD)-enhancing tumor, peritumoral edema, and the necrotic and non-enhancing tumor core (NCR/NET). The best performance of a segmentation model is achieved by the modified 3D U-Net based on an Accumulated Encoder (U-Net AE) with a Generalized Dice-Loss (GDL) function trained by the ADAM optimization algorithm. This model achieves Average Dice-Similarity (ADS) scores of 0.8898, 0.8819, and 0.8524 for Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET), respectively, on the training dataset of the Multimodal Brain Tumor Segmentation challenge (BraTS) 2020. Various combinations of volumetric (based on brain functionality regions), shape, and location features are extracted to train an overall survival time classification model using a Neural Network (NN). The model classifies the data into three classes: short-survivors, mid-survivors, and long-survivors. An information fusion strategy based on features-level fusion and decision-level fusion is used to produce the best prediction model. The best performance is achieved by the ensemble model and the shape-features model, with an accuracy of 55.2% on the BraTS 2020 validation dataset. The ensemble model achieves a competitive accuracy of 55.1% on the BraTS 2020 test dataset.
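Volumetric features of the kind described can be derived directly from a predicted label mask. A minimal sketch (labels follow the BraTS convention used in the abstract; the ratio normalization is our own illustrative choice, not the paper's exact feature set):

```python
import numpy as np

def volumetric_features(seg, voxel_volume_mm3=1.0):
    """Simple volumetric features from a segmentation mask:
    labels 1 = NCR/NET, 2 = peritumoral edema, 4 = GD-enhancing tumor."""
    counts = {lab: int((seg == lab).sum()) for lab in (1, 2, 4)}
    whole = sum(counts.values()) or 1   # avoid division by zero
    feats = {f"vol_{lab}_mm3": n * voxel_volume_mm3 for lab, n in counts.items()}
    feats.update({f"frac_{lab}": n / whole for lab, n in counts.items()})
    return feats
```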
Squeeze-and-Excitation Normalization for Brain Tumor Segmentation
Iantsen, Andrei
Jaouen, Vincent
Visvikis, Dimitris
Hatt, Mathieu
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
In this paper we described our approach for glioma segmentation in multi-sequence magnetic resonance imaging (MRI) in the context of the MICCAI 2020 Brain Tumor Segmentation Challenge (BraTS). We proposed an architecture based on U-Net with a new computational unit termed “SE Norm” that brought significant improvements in segmentation quality. Our approach obtained competitive results on the validation (Dice scores of 0.780, 0.911, 0.863) and test (Dice scores of 0.805, 0.887, 0.843) sets for the enhanced tumor, whole tumor and tumor core sub-regions. The full implementation and trained models are available at https://github.com/iantsen/brats.
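The paper's "SE Norm" unit couples normalization with squeeze-and-excitation style recalibration; the sketch below shows only the standard SE half of that idea, for orientation (reduction ratio is an assumption):

```python
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Standard squeeze-and-excitation recalibration for 3D feature maps."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)              # squeeze: global context
        self.fc = nn.Sequential(                         # excitation: channel gates
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):  # x: (N, C, D, H, W)
        n, c = x.shape[:2]
        s = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1, 1)
        return x * s  # rescale each channel by its learned gate
```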
Modified MobileNet for Patient Survival Prediction
Glioblastoma is a type of malignant tumor that varies significantly in size, shape, and location. The study of this type of tumor, including predicting the patient's survival, is beneficial for the treatment of patients. However, the supporting data for the survival prediction model are minimal, so the best possible methods are needed to handle them. In this study, we propose an architecture for predicting patient survival using MobileNet combined with a linear survival prediction model (SPM). Several variations of MobileNet are tested to obtain the best results. Variations tested include modifications of MobileNet V1 with frozen or unfrozen layers, and modifications of MobileNet V2 with frozen or unfrozen layers, connected to the SPM. The dataset used for the trial came from BraTS 2020. A modification based on the MobileNet V2 architecture with frozen layers was selected from the test results. Testing this proposed architecture with 95 training cases and 23 validation cases resulted in an MSE loss of 78,374.17. The online test with the 29-case validation dataset resulted in an MSE loss of 149,764.866 with an accuracy of 0.345. Testing with the testing dataset resulted in an improved accuracy of 0.402. These results are promising for further architectural development.
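The frozen-backbone variant can be sketched with torchvision's MobileNetV2; the head size and the choice of which parameters to freeze are assumptions for illustration, not the authors' configuration:

```python
import torch.nn as nn
from torchvision.models import mobilenet_v2

def frozen_mobilenet_backbone(out_dim=1):
    """Sketch of the 'freeze layers' variant: reuse MobileNetV2 features
    with gradients disabled and attach a linear survival head."""
    net = mobilenet_v2(weights="IMAGENET1K_V1")  # torchvision >= 0.13
    for p in net.features.parameters():
        p.requires_grad = False                  # freeze the backbone
    # replace the classifier with a single linear layer as the SPM head
    net.classifier = nn.Linear(net.last_channel, out_dim)
    return net
```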
Memory Efficient 3D U-Net with Reversible Mobile Inverted Bottlenecks for Brain Tumor Segmentation
Pendse, Mihir
Thangarasa, Vithursan
Chiley, Vitaliy
Holmdahl, Ryan
Hestness, Joel
DeCoste, Dennis
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
We propose combining memory saving techniques with traditional U-Net architectures to increase the complexity of the models on the Brain Tumor Segmentation (BraTS) challenge. The BraTS challenge consists of a 3D segmentation of a 240 × 240 × 155 × 4 input image into a set of tumor classes. Because of the large volume and need for 3D convolutional layers, this task is very memory intensive. To address this, prior approaches use smaller cropped images while constraining the model’s depth and width. Our 3D U-Net uses a reversible version of the mobile inverted bottleneck block defined in MobileNetV2, MnasNet and the more recent EfficientNet architectures to save activation memory during training. Using reversible layers enables the model to recompute input activations given the outputs of that layer, saving memory by eliminating the need to store activations during the forward pass. The inverted residual bottleneck block uses lightweight depthwise separable convolutions to reduce computation by decomposing convolutions into a pointwise convolution and a depthwise convolution. Further, this block inverts traditional bottleneck blocks by placing an intermediate expansion layer between the input and output linear 1 × 1 convolution, reducing the total number of channels. Given a fixed memory budget, with these memory saving techniques, we are able to train image volumes up to 3x larger, models with 25% more depth, or models with up to 2x the number of channels than a corresponding non-reversible network.
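The memory saving rests on additive coupling, as in RevNets: activations are not stored because they can be recomputed exactly from the block's outputs. A framework-agnostic sketch, where F and G stand in for the mobile inverted bottleneck sub-blocks (any callables on tensors work):

```python
class ReversibleBlock:
    """Additive coupling: the input is split into two halves (x1, x2);
    inverse() recovers them from the outputs, so forward activations
    need not be kept in memory during training."""
    def __init__(self, F, G):
        self.F, self.G = F, G

    def forward(self, x1, x2):
        y1 = x1 + self.F(x2)
        y2 = x2 + self.G(y1)
        return y1, y2

    def inverse(self, y1, y2):
        x2 = y2 - self.G(y1)   # undo the second coupling
        x1 = y1 - self.F(x2)   # undo the first coupling
        return x1, x2
```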
Brain Tumor Segmentation and Survival Prediction Using Patch Based Modified 3D U-Net
Parmar, Bhavesh
Parikh, Mehul
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Patch-based
Brain tumor segmentation is a vital clinical requirement. In recent years, deep learning has become increasingly prevalent in medical image processing. Automated brain tumor segmentation can reduce diagnosis time and increase the potential for clinical intervention. In this work, we have used a patch selection methodology based on a modified U-Net deep learning architecture, with appropriate normalization and patch selection methods, for the brain tumor segmentation task in the BraTS 2020 challenge. Two-phase network training was implemented with patch selection methods. The performance of our deep learning-based brain tumor segmentation approach was evaluated on CBICA's Image Processing Portal. We achieved Dice scores of 0.795, 0.886, and 0.827 in the testing phase for the enhancing tumor, whole tumor, and tumor core respectively. The segmentation outcome together with various radiomic features was used for overall survival (OS) prediction, for which we achieved an accuracy of 0.570 in the testing phase. The algorithm can further be improved for tumor inter-class segmentation and OS prediction with various network implementation strategies. As the OS prediction results are based on the segmentation, improving the segmentation offers corresponding scope for improving the OS prediction.
DR-Unet104 for Multimodal MRI Brain Tumor Segmentation
In this paper we propose a 2D deep residual U-Net with 104 convolutional layers (DR-Unet104) for lesion segmentation in brain MRIs. We make multiple additions to the U-Net architecture, including adding the 'bottleneck' residual block to the U-Net encoder and adding dropout after each convolution block stack. We verified the effect of dropout regularization with a small rate (e.g. 0.2) on the architecture, and found that a dropout of 0.2 improved the overall performance compared to no dropout or a dropout of 0.5. We evaluated the proposed architecture as part of the Multimodal Brain Tumor Segmentation (BraTS) 2020 Challenge and compared our method to DeepLabV3+ with a ResNet-V2-152 backbone. The DR-Unet104 achieved mean Dice score coefficients of 0.8862, 0.6756 and 0.6721 on the validation data for whole tumor, enhancing tumor and tumor core respectively, an overall improvement on the 0.8770, 0.65242 and 0.68134 achieved by DeepLabV3+. Our method produced final mean DSCs of 0.8673, 0.7514 and 0.7983 for whole tumor, enhancing tumor and tumor core on the challenge's testing data. We produce a competitive lesion segmentation architecture, despite only using 2D convolutions, with the added benefit that it can be used on lower-power computers than a 3D architecture. The source code and trained model for this work are openly available at https://github.com/jordan-colman/DR-Unet104.
Glioma Sub-region Segmentation on Multi-parameter MRI with Label Dropout
Gliomas are the most common primary brain tumor; the accurate segmentation of clinical sub-regions, including enhancing tumor (ET), tumor core (TC) and whole tumor (WT), has great clinical importance throughout diagnosis, treatment planning, delivery and prognosis. Machine learning algorithms, particularly neural network-based methods, have been successful in many medical image segmentation applications. In this paper, we trained a patch-based 3D U-Net model with a hybrid loss combining soft dice loss, generalized dice loss and multi-class cross-entropy loss. We also proposed a label dropout process that randomly discards inner segment labels and their corresponding network output during training, to overcome the heavy class imbalance issue. On the BraTS 2020 final test data, we achieved Dice scores of 0.823, 0.886 and 0.843 for ET, WT and TC respectively.
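One plausible reading of the label dropout process is a per-batch loss mask that occasionally excludes the inner tumor classes, so the rarer boundary classes dominate more updates. A sketch under that assumption (label codes and dropout rate are illustrative, not the paper's settings):

```python
import torch

def label_dropout_mask(target, inner_labels=(1, 4), p=0.3):
    """With probability p per inner class, zero that class's voxels in
    the loss mask so they are excluded from this training step.
    target: integer label volume; returns a float mask of the same shape."""
    mask = torch.ones_like(target, dtype=torch.float32)
    for lab in inner_labels:
        if torch.rand(1).item() < p:
            mask[target == lab] = 0.0
    return mask
```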
Variational-Autoencoder Regularized 3D MultiResUNet for the BraTS 2020 Brain Tumor Segmentation
Tang, Jiarui
Li, Tengfei
Shu, Hai
Zhu, Hongtu
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Tumor segmentation is an important research topic in medical image segmentation. With the fast development of deep learning in computer vision, automated segmentation of brain tumors using deep neural networks has become increasingly popular. U-Net is the most widely used network in applications of automated image segmentation, and many well-performing models are built based on it. In this paper, we devise a model that combines the variational-autoencoder regularized 3D U-Net model [10] and the MultiResUNet model [7]. The model is trained on the 2020 Multimodal Brain Tumor Segmentation Challenge (BraTS) dataset and evaluated on the validation set. Our results show that the modified 3D MultiResUNet performs better than the previous 3D U-Net.
Learning Dynamic Convolutions for Multi-modal 3D MRI Brain Tumor Segmentation
Yang, Qiushi
Yuan, Yixuan
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Deep convolutional neural network (DCNN)
Segmentation
Algorithm Development
Accurate automated brain tumor segmentation with 3D Magnetic Resonance Images (MRIs) liberates doctors from tedious annotation work and further supports monitoring and prompt treatment of the disease. Many recent Deep Convolutional Neural Networks (DCNNs) achieve tremendous success in medical image analysis, especially tumor segmentation, but they usually use static networks without considering the inherent diversity of multi-modal inputs. In this paper, we introduce a dynamic convolutional module into brain tumor segmentation to learn input-adaptive parameters for specific multi-modal images. To the best of our knowledge, this is the first work to adopt dynamic convolutional networks to segment brain tumors from 3D MRI data. In addition, we employ multiple branches to learn low-level features from the multi-modal inputs in an end-to-end fashion. We further investigate boundary information and propose a boundary-aware module that enforces our model to pay more attention to important pixels. Experimental results on the testing dataset and a cross-validation dataset split from the training dataset of the BraTS 2020 Challenge demonstrate that our proposed framework obtains competitive Dice scores compared with state-of-the-art approaches.
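Dynamic convolution generally means mixing several candidate kernels with input-dependent attention weights (in the spirit of CondConv/dynamic convolution; this is not the authors' exact module). A sketch, simplified to batch size 1:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv3d(nn.Module):
    """K candidate 3D kernels mixed by attention computed from the input."""
    def __init__(self, in_ch, out_ch, k=3, K=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(K, out_ch, in_ch, k, k, k) * 0.01)
        self.attn = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(in_ch, K))
        self.pad = k // 2

    def forward(self, x):  # x: (1, C, D, H, W); batch of 1 for simplicity
        a = torch.softmax(self.attn(x), dim=1)            # (1, K) mixing weights
        w = (a.view(-1, 1, 1, 1, 1, 1) * self.weight).sum(0)  # blended kernel
        return F.conv3d(x, w, padding=self.pad)
```

For batches larger than 1, each sample needs its own blended kernel (commonly done with grouped convolutions), which is omitted here.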
Automatic Glioma Grading Based on Two-Stage Networks by Integrating Pathology and MRI Images
Wang, Xiyue
Yang, Sen
Wu, Xiyi
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Radiomics
Pathomics
Digital pathology
Magnetic Resonance Imaging (MRI)
Multi-modal imaging
Glioma, with its high incidence, is one of the most common brain cancers. In the clinic, pathologists diagnose the type of glioma by observing whole-slide images (WSIs) at different magnifications, which is time-consuming, laborious, and experience-dependent. Automatic grading of gliomas based on WSIs can provide aided diagnosis for clinicians. This paper proposes two fully convolutional networks, used respectively for WSIs and MRI images, to achieve automatic glioma grading (astrocytoma (lower-grade A), oligodendroglioma (middle-grade O), and glioblastoma (higher-grade G)). The final classification result is the probability average of the two networks. In the clinic, and also in our multi-modality image representation, grades A and O are difficult to distinguish. This work proposes a two-stage training strategy that excludes the distraction of grade G and focuses on the classification of grades A and O. The experimental results show that the proposed model achieves high glioma classification performance, with a balanced accuracy of 0.889, Cohen's Kappa of 0.903, and F1-score of 0.943 on the validation set.
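The late fusion step described here (probability averaging over the two modality networks) is simple enough to state directly; a minimal sketch, assuming each model outputs class probabilities over (A, O, G):

```python
import numpy as np

def fuse_predictions(p_wsi, p_mri):
    """Average the class probabilities of the WSI and MRI networks and
    return the fused distribution plus the predicted class index."""
    p = (np.asarray(p_wsi) + np.asarray(p_mri)) / 2.0
    return p, int(np.argmax(p))
```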
Brain Tumor Classification Based on MRI Images and Noise Reduced Pathology Images
Yin, Baocai
Cheng, Hu
Wang, Fengyan
Wang, Zengfu
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Classification
Algorithm Development
Digital pathology
Magnetic Resonance Imaging (MRI)
Gliomas are the most common and severe malignant tumors of the brain. The diagnosis and grading of gliomas are typically based on MRI images and pathology images. To improve diagnosis accuracy and efficiency, we design a framework for computer-aided diagnosis combining the two modalities. Without loss of generality, we first take an individual network for each modality to obtain features and fuse them to predict the subtype of glioma. For MRI images, we directly take a 3D-CNN to extract features, supervised by a cross-entropy loss function. Abnormal whole-slide pathology images (WSIs) contain too many normal regions, which affect the training of pathology features. We call these normal regions noise regions and propose two ideas to reduce them. First, we introduce a nucleus segmentation model trained on some public datasets; regions that have a small number of nuclei are excluded in the subsequent training of tumor classification. Second, we employ a noise-rank module to further suppress the noise regions. After the noise reduction, we train a glioma classification model based on the remaining regions and obtain the features of the pathology images. Finally, we fuse the features of the two modalities with a linear weighted module. We evaluate the proposed framework on CPM-RadPath 2020 and achieve the first rank on the validation set.
Multimodal Brain Tumor Classification
Lerousseau, Marvin
Deutsch, Eric
Paragios, Nikos
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Digital pathology
Magnetic Resonance Imaging (MRI)
multi-modal imaging
Cancer is a complex disease that provides various types of information depending on the scale of observation. While most tumor diagnostics are performed by observing histopathological slides, radiology images should yield additional knowledge towards the efficacy of cancer diagnostics. This work investigates a deep learning method combining whole slide images and magnetic resonance images to classify tumors. In particular, our solution comprises a powerful, generic and modular architecture for whole slide image classification. Experiments are prospectively conducted on the 2020 Computational Precision Medicine challenge, in a 3-class unbalanced classification task. We report cross-validation (resp. validation) balanced-accuracy, kappa and f1 of 0.913, 0.897 and 0.951 (resp. 0.91, 0.90 and 0.94). For research purposes, including reproducibility and direct performance comparisons, our final submitted models are usable off-the-shelf in a Docker image available at https://hub.docker.com/repository/docker/marvinler/cpm_2020_marvinler.
A Hybrid Convolutional Neural Network Based-Method for Brain Tumor Classification Using mMRI and WSI
Pei, Linmin
Hsu, Wei-Wen
Chiang, Ling-An
Guo, Jing-Ming
Iftekharuddin, Khan M.
Colen, Rivka
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Digital Pathology
Magnetic Resonance Imaging (MRI)
In this paper, we propose a hybrid deep learning-based method for brain tumor classification using whole slide images (WSIs) and multimodal magnetic resonance images (mMRI). It comprises two methods: a WSI-based method and an mMRI-based method. For the WSI-based method, many patches are sampled from the WSI of each category as the training dataset. However, without annotations by pathologists, not all sampled patches are representative of the category to which their corresponding WSI belongs. Therefore, some error tolerance schemes were applied when training the classification model to achieve better generalization. For the mMRI-based method, we first apply a 3D convolutional neural network (3DCNN) to the mMRI for brain tumor segmentation, which distinguishes brain tumors from healthy tissues; the segmented tumors are then used for tumor subtype classification using a 3DCNN. Lastly, an ensemble scheme over the two methods was used to reach a consensus as the final prediction. We evaluate the proposed method on the patient dataset from the Computational Precision Medicine: Radiology-Pathology Challenge (CPM: Rad-Path) on Brain Tumor Classification 2020. The prediction performance on the validation set reached 0.886 in f1_micro, 0.801 in kappa, 0.8 in balance_acc, and 0.829 in the overall average. The experimental results show that the performance with the consideration of both MRI and WSI outperforms the performance using a single type of image dataset. Accordingly, the fusion of the two image datasets provides more sufficient information for diagnosis.
CNN-Based Fully Automatic Glioma Classification with Multi-modal Medical Images
Zhao, Bingchao
Huang, Jia
Liang, Changhong
Liu, Zaiyi
Han, Chu
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Digital Pathology
Magnetic Resonance Imaging (MRI)
Feature Extraction
Radiomics
The accurate classification of gliomas is essential in clinical practice. It is valuable for clinical practitioners and patients to choose the appropriate management accordingly, promoting the development of personalized medicine. In the MICCAI 2020 Combined Radiology and Pathology Classification Challenge, 4 MRI sequences and a WSI image are provided for each patient, and participants are required to use the multi-modal images to predict the subtype of glioma. In this paper, we propose a fully automated pipeline for glioma classification. Our proposed model consists of two parts, feature extraction and feature fusion, which are respectively responsible for extracting representative image features and making the prediction. Specifically, we propose a segmentation-free, self-supervised feature extraction network for the 3D MRI volume, and a feature extraction model for the H&E-stained WSI that associates traditional image processing methods with a convolutional neural network. Finally, we fuse the extracted features from the multi-modal images and use a densely connected neural network to predict the final classification results. We evaluate the proposed model with F1-Score, Cohen's Kappa, and Balanced Accuracy on the validation set, achieving 0.943, 0.903, and 0.889 respectively.
Glioma Classification Using Multimodal Radiology and Histology Data
Hamidinekoo, Azam
Pieciak, Tomasz
Afzali, Maryam
Akanyeti, Otar
Yuan, Yinyin
2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Algorithm Development
Digital Pathology
Magnetic Resonance Imaging (MRI)
Gliomas are brain tumours with a high mortality rate. There are various grades and sub-types of this tumour, and the treatment procedure varies accordingly. Clinicians and oncologists diagnose and categorise these tumours based on visual inspection of radiology and histology data. However, this process can be time-consuming and subjective. Computer-assisted methods can help clinicians make better and faster decisions. In this paper, we propose a pipeline for automatic classification of gliomas into three sub-types: oligodendroglioma, astrocytoma, and glioblastoma, using both radiology and histopathology images. The proposed approach implements distinct classification models for the radiographic and histologic modalities and combines them through an ensemble method. The classification algorithm initially carries out tile-level (for histology) and slice-level (for radiology) classification via a deep learning method, then the tile/slice-level latent features are combined for whole-slide and whole-volume sub-type prediction. The classification algorithm was evaluated using the dataset provided in the CPM-RadPath 2020 challenge. The proposed pipeline achieved an F1-Score of 0.886, a Cohen's Kappa score of 0.811 and a Balanced accuracy of 0.860. The ability of the proposed model to perform end-to-end learning of diverse features enables it to give a comparable prediction of glioma tumour sub-types.
A Framework Based on Metabolic Networks and Biomedical Images Data to Discriminate Glioma Grades
Maddalena, Lucia
Granata, Ilaria
Manipur, Ichcha
Manzo, Mario
Guarracino, Mario R.
2021Conference Paper, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BraTS 2018
Glioma grading
Radiogenomics
Transcriptomics
Classification
Collecting and integrating information from different data sources is a successful approach to investigate complex biological phenomena and to address tasks such as disease subtyping, biomarker prediction, target, and mechanisms identification. Here, we describe an integrative framework, based on the combination of transcriptomics data, metabolic networks, and magnetic resonance images, to classify different grades of glioma, one of the most common types of primary brain tumors arising from glial cells. The framework is composed of three main blocks for feature sorting, choosing the best number of sorted features, and classification model building. We investigate different methods for each of the blocks, highlighting those that lead to the best results. Our approach demonstrates how the integration of molecular and imaging data achieves better classification performance than using the individual data-sets, also comparing results with state-of-the-art competitors. The proposed framework can be considered as a starting point for a clinically relevant grading system, and the related software made available lays the foundations for future comparisons.
Microscopic Analysis of Blood Cells for Disease Detection: A Review
Deshpande, Nilkanth Mukund
Gite, Shilpa Shailesh
Aluvalu, Rajanikanth
2021Book Section, cited 0 times
C-NMC 2019
Any contamination in the human body can prompt changes in blood cell morphology and various parameters of cells. Microscopic images of blood cells are examined to recognize such contamination within the body and to predict diseases and abnormalities. Appropriate segmentation of these cells makes the detection of a disease more exact and robust. Microscopic blood cell analysis is a critical activity in pathological analysis. It supports the investigation of the appropriate malady after exact detection, followed by classification of abnormalities, which plays an essential role in the analysis of various disorders, treatment planning, and evaluation of treatment outcomes. A survey of the different areas where microscopic imaging of blood cells is used for disease detection is presented in this paper. A short note on blood composition is included, followed by a generalized methodology for microscopic blood image analysis for certain medical imaging applications. A comparison of existing methodologies proposed by researchers for disease detection using microscopic blood cell image analysis is also discussed.
Spatially Varying Label Smoothing: Capturing Uncertainty from Expert Annotations
Islam, Mobarakol
Glocker, Ben
2021Book Section, cited 0 times
LIDC-IDRI
The task of image segmentation is inherently noisy due to ambiguities regarding the exact location of boundaries between anatomical structures. We argue that this information can be extracted from the expert annotations at no extra cost, and when integrated into state-of-the-art neural networks, it can lead to improved calibration between soft probabilistic predictions and the underlying uncertainty. We built upon label smoothing (LS) where a network is trained on ‘blurred’ versions of the ground truth labels which has been shown to be effective for calibrating output predictions. However, LS is not taking the local structure into account and results in overly smoothed predictions with low confidence even for non-ambiguous regions. Here, we propose Spatially Varying Label Smoothing (SVLS), a soft labeling technique that captures the structural uncertainty in semantic segmentation. SVLS also naturally lends itself to incorporate inter-rater uncertainty when multiple labelmaps are available. The proposed approach is extensively validated on four clinical segmentation tasks with different imaging modalities, number of classes and single and multi-rater expert annotations. The results demonstrate that SVLS, despite its simplicity, obtains superior boundary prediction with improved uncertainty and model calibration.
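In spirit, SVLS softens the one-hot target only near class boundaries by spreading a little label mass spatially. A sketch under that reading, using a Gaussian blur per class channel (the sigma is our assumption; the paper designs a specific small discrete kernel rather than a free-form Gaussian):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def svls_targets(label_map, num_classes, sigma=1.0):
    """Smooth each one-hot channel with a small spatial Gaussian, so
    soft label mass leaks across boundaries only where classes meet;
    voxels deep inside a region stay effectively one-hot."""
    onehot = np.stack([(label_map == c).astype(np.float32)
                       for c in range(num_classes)])
    soft = np.stack([gaussian_filter(ch, sigma=sigma) for ch in onehot])
    return soft / soft.sum(axis=0, keepdims=True)  # renormalize per voxel
```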
Pancreas CT Segmentation by Predictive Phenotyping
Pancreas CT segmentation offers promise at understanding the structural manifestation of metabolic conditions. To date, the primary medical record of conditions that impact the pancreas is in the electronic health record (EHR), in terms of diagnostic phenotype data (e.g., ICD-10 codes). We posit that similar structural phenotypes could be revealed by studying subjects with similar medical outcomes. Segmentation is mainly driven by imaging data, but this direct approach may not consider the differing canonical appearances of different underlying conditions (e.g., pancreatic atrophy versus pancreatic cysts). To this end, we exploit clinical features from EHR data to complement image features for enhancing the pancreas segmentation, especially in high-risk outcomes. Specifically, we propose, to the best of our knowledge, the first phenotype embedding model for pancreas segmentation, by predicting representatives that share similar comorbidities. Such an embedding strategy can adaptively refine the segmentation outcome based on the discriminative contexts distilled from clinical features. Experiments with 2000 patients' EHR data and 300 CT images of healthy-pancreas, type II diabetes, and pancreatitis subjects show that segmentation by predictive phenotyping significantly improves performance over the state of the art (Dice score 0.775 to 0.791, p < 0.05, Wilcoxon signed-rank test). The proposed method additionally achieves superior performance on two public testing datasets, the BTCV MICCAI Challenge 2015 and TCIA pancreas CT. Our approach provides a promising direction for advancing segmentation with phenotype features while not requiring EHR data as input during testing.
A Hybrid Attention Ensemble Framework for Zonal Prostate Segmentation
NCI-ISBI 2013 Challenge: Automated Segmentation of Prostate Structures
NCI-MICCAI 2013 Challenge
PROSTATE
Segmentation
Classification
Imaging features
Radiomics
Accurate and automatic segmentation of the prostate sub-regions is of great importance for the diagnosis of prostate cancer and quantitative analysis of the prostate. By analyzing the characteristics of prostate images, we propose a hybrid attention ensemble framework (HAEF) to automatically segment the central gland (CG) and peripheral zone (PZ) of the prostate from a 3D MR image. The proposed attention bridge module (ABM) in the HAEF helps the U-net to be more robust for cases with large differences in foreground size. In order to deal with the low segmentation accuracy of the PZ caused by the small proportion of PZ to CG, we gradually increase the proportion of voxels in the region of interest (ROI) in the image through multi-stage cropping, and then introduce self-attention mechanisms in the channel and spatial domains to enhance the multi-level semantic features of the target. Finally, post-processing methods such as ensembling and classification are used to refine the segmentation results. Extensive experiments on the dataset from the NCI-ISBI 2013 Challenge demonstrate that the proposed framework can automatically and accurately segment the prostate sub-regions, with a mean DSC of 0.881 for CG and 0.821 for PZ, a 95% HDE of 3.57 mm for CG and 3.72 mm for PZ, and an ASSD of 1.08 mm for CG and 0.96 mm for PZ, and it outperforms state-of-the-art methods in terms of DSC for PZ and average DSC of CG and PZ.
Sli2Vol: Annotate a 3D Volume from a Single Slice with Self-supervised Learning
Yeung, Pak-Hei
Namburete, Ana I. L.
Xie, Weidi
2021Book Section, cited 0 times
C4KC-KiTS
CT Lymph Nodes
Pancreas-CT
The objective of this work is to segment any arbitrary structures of interest (SOI) in 3D volumes by only annotating a single slice (i.e., semi-automatic 3D segmentation). We show that high accuracy can be achieved by simply propagating the 2D slice segmentation with an affinity matrix between consecutive slices, which can be learnt in a self-supervised manner, namely via slice reconstruction. Specifically, we compare our proposed framework, termed Sli2Vol, with supervised approaches and two other unsupervised/self-supervised slice registration approaches, on 8 public datasets (both CT and MRI scans), spanning 9 different SOIs. Without any parameter tuning, the same model achieves superior performance with Dice scores (0–100 scale) of over 80 for most of the benchmarks, including the ones that are unseen during training. Our results show the generalizability of the proposed approach across data from different machines and with different SOIs: a major use case of semi-automatic segmentation methods, where fully supervised approaches would normally struggle.
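A minimal PyTorch sketch of the core propagation step, assuming per-pixel features of two consecutive slices from some learnt encoder. The feature extractor, temperature, and shapes are assumptions; in Sli2Vol the affinity is learnt self-supervisedly via slice reconstruction, which is not shown here.

```python
import torch

def propagate_mask(feat_prev, feat_next, mask_prev, temperature=0.1):
    """Propagate a soft segmentation mask from one slice to the next via an
    affinity matrix between per-pixel features (C, H, W tensors). Each
    target pixel takes a softmax-weighted average of the source labels."""
    C, H, W = feat_prev.shape
    f_prev = feat_prev.reshape(C, -1)          # (C, HW) source features
    f_next = feat_next.reshape(C, -1)          # (C, HW) target features
    affinity = f_next.t() @ f_prev             # (HW_next, HW_prev) similarities
    weights = torch.softmax(affinity / temperature, dim=1)
    mask_next = weights @ mask_prev.reshape(-1, 1)  # weighted label transfer
    return mask_next.reshape(H, W)
```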
Inter Extreme Points Geodesics for End-to-End Weakly Supervised Image Segmentation
We introduce InExtremIS, a weakly supervised 3D approach to train a deep image segmentation network using particularly weak train-time annotations: only six extreme clicks at the boundary of the objects of interest. Our fully automatic method is trained end-to-end and does not require any test-time annotations. From the extreme points, 3D bounding boxes are extracted around objects of interest. Then, deep geodesics connecting the extreme points are generated to increase the number of “annotated” voxels within the bounding boxes. Finally, a weakly supervised regularised loss derived from a Conditional Random Field formulation is used to encourage prediction consistency over homogeneous regions. Extensive experiments are performed on a large open dataset for Vestibular Schwannoma segmentation. InExtremIS obtained competitive performance, approaching full supervision and significantly outperforming other weakly supervised techniques based on bounding boxes. Moreover, given a fixed annotation time budget, InExtremIS outperformed full supervision. Our code and data are available online.
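A small sketch of the first step described above: deriving a 3D bounding box from six extreme clicks. The margin and clipping policy are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def bbox_from_extreme_points(points, shape, margin=4):
    """Given six extreme clicks (two per axis) as an (6, 3) array of
    z/y/x voxel coordinates, return the enclosing 3D bounding box with a
    small safety margin, clipped to the volume shape."""
    points = np.asarray(points)
    lo = np.maximum(points.min(axis=0) - margin, 0)
    hi = np.minimum(points.max(axis=0) + margin + 1, shape)  # exclusive
    return lo, hi

lo, hi = bbox_from_extreme_points(
    [[10, 40, 32], [55, 41, 30], [30, 12, 31],
     [33, 70, 29], [31, 39, 5], [29, 44, 66]], shape=(64, 96, 96))
# crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```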
Federated Whole Prostate Segmentation in MRI with Personalized Neural Architectures
Roth, Holger R.
Yang, Dong
Li, Wenqi
Myronenko, Andriy
Zhu, Wentao
Xu, Ziyue
Wang, Xiaosong
Xu, Daguang
2021Book Section, cited 0 times
ISBI-MR-Prostate-2013
Building robust deep learning-based models requires diverse training data, ideally from several sources. However, these datasets cannot be combined easily because of patient privacy concerns or regulatory hurdles, especially if medical data is involved. Federated learning (FL) is a way to train machine learning models without the need for centralized datasets. Each FL client trains on their local data while only sharing model parameters with a global server that aggregates the parameters from all clients. At the same time, each client’s data can exhibit differences and inconsistencies due to the local variation in the patient population, imaging equipment, and acquisition protocols. Hence, the federated learned models should be able to adapt to the local particularities of a client’s data. In this work, we combine FL with an AutoML technique based on local neural architecture search by training a “supernet”. Furthermore, we propose an adaptation scheme to allow for personalized model architectures at each FL client’s site. The proposed method is evaluated on four different datasets from 3D prostate MRI and shown to improve the local models’ performance after adaptation through selecting an optimal path through the AutoML supernet.
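For context, a minimal sketch of the basic FedAvg-style aggregation step that underlies FL setups like this one; the paper's actual contribution (the AutoML supernet search and per-client architecture adaptation) is not reproduced here.

```python
import copy
import torch

def federated_average(client_state_dicts, client_weights):
    """Server-side FedAvg aggregation: average client model parameters,
    weighted e.g. by local dataset size. Clients never share raw data,
    only state dicts. Integer buffers are cast to float for the sketch."""
    total = float(sum(client_weights))
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        avg[key] = sum(w * sd[key].float()
                       for sd, w in zip(client_state_dicts, client_weights)) / total
    return avg

# Usage: global_model.load_state_dict(federated_average(dicts, sizes))
```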
4D-CBCT Registration with a FBCT-derived Plug-and-Play Feasibility Regularizer
Deformable registration of phase-resolved lung images is an important procedure to appreciate respiratory motion and enhance image quality. Compared to high-resolution fan-beam CTs (FBCTs), cone-beam CTs (CBCTs) are more readily available for on-table acquisition in conjunction with treatment. However, CBCT registration is challenging because the classic regularization energies in conventional methods usually cannot overcome the strong artifacts and the lack of structural detail. In this study, we propose to learn an implicit feasibility prior of respiratory motion and incorporate it in a plug-and-play (PnP) fashion into the training of an unsupervised image registration network, to improve registration accuracy and robustness to noise and artifacts. In particular, we propose a novel approach to develop a feasibility descriptor from a set of deformation vector fields (DVFs) generated from FBCTs. Subsequently, this FBCT-derived feasibility descriptor was used as a spatially variant regularizer on the DVF Jacobian during the unsupervised training for 4D-CBCT registration. In doing so, the higher-quality, higher-confidence information from FBCT is transferred into the much more challenging problem of CBCT registration, without explicit FB-CB synthesis. The method was evaluated using manually identified landmarks on real CBCTs and automatically detected landmarks on simulated CBCTs. The method presented good robustness to noise and artifacts and generated physically more feasible DVFs. The target registration errors on the real and simulated data were (1.63 ± 0.98) mm and (2.16 ± 1.91) mm, respectively, significantly better than the classic bending energy regularization in both the conventional method in SimpleElastix and the unsupervised network. The average registration time was 0.04 s. Keywords: deep learning, image registration, 4D cone-beam CT
Revisiting Iterative Highly Efficient Optimisation Schemes in Medical Image Registration
Computed tomography (CT) reconstruction from X-ray projections acquired within a limited angle range is challenging, especially when the angle range is extremely small. Both analytical and iterative models need more projections for effective modeling. Deep learning methods have gained prevalence due to their excellent reconstruction performance, but such success is mainly limited to within the same dataset and does not generalize across datasets with different distributions. Here we propose ExtraPolationNetwork for limited-angle CT reconstruction via the introduction of a sinogram extrapolation module, which is theoretically justified. The module complements extra sinogram information and boosts model generalizability. Extensive experimental results show that our reconstruction model achieves state-of-the-art performance on the NIH-AAPM dataset, similar to existing approaches. More importantly, we show that using such a sinogram extrapolation module significantly improves the generalization capability of the model on unseen datasets (e.g., the COVID-19 and LIDC datasets) compared to existing approaches. Keywords: limited-angle CT reconstruction, sinogram extrapolation, model generalizability
DT-MIL: Deformable Transformer for Multi-instance Learning on Histopathological Image
Detecting the specific locations of malignancy signs in a medical image is a non-trivial and time-consuming task for radiologists. A complex, 3D version of this task was presented in the DBTex 2021 Grand Challenge on Digital Breast Tomosynthesis Lesion Detection. Teams from all over the world competed in an attempt to build AI models that predict the 3D locations that require biopsy. We describe a novel method to combine detection candidates from multiple models with minimal false positives. This method won second place in the DBTex competition, within a very small margin of first place and a standout from the rest. We performed an ablation study to show the contribution of each of the new components in the proposed ensemble method, including additional performance improvements made after the competition.
Uncertainty-Based Dynamic Graph Neighborhoods for Medical Segmentation
In recent years, deep learning based methods have shown success in essential medical image analysis tasks such as segmentation. Post-processing and refining the results of segmentation is a common practice to decrease the misclassifications originating from the segmentation network. In addition to widely used methods like Conditional Random Fields (CRFs), which focus on the structure of the segmented volume/area, a recent graph-based approach makes use of certain and uncertain points in a graph and refines the segmentation according to a small graph convolutional network (GCN). However, this approach has two drawbacks: most of the edges in the graph are assigned randomly, and the GCN is trained independently from the segmentation network. To address these issues, we define a new neighbor-selection mechanism according to feature distances and combine the two networks in the training procedure. In experiments on pancreas segmentation from Computed Tomography (CT) images, we demonstrate improvements in the quantitative measures. Examining the dynamic neighbors created by our method, we also observe edges between semantically similar image parts. The proposed method shows qualitative enhancements in the segmentation maps as well, as demonstrated in the visual results.
Modal Uncertainty Estimation for Medical Imaging Based Diagnosis
Attention-Guided Pancreatic Duct Segmentation from Abdominal CT Volumes
Shen, Chen
Roth, Holger R.
Hayashi, Yuichiro
Oda, Masahiro
Miyamoto, Tadaaki
Sato, Gen
Mori, Kensaku
2021Book Section, cited 0 times
Pancreas-CT
Pancreatic duct dilation indicates a high risk of pancreatic ductal adenocarcinoma (PDAC), the deadliest cancer, with a poor prognosis. Segmentation of the dilated pancreatic duct from CT of patients without PDAC shows potential to assist the early detection of PDAC. Most current research includes pancreatic duct segmentation as one additional class for patients in whom PDAC has already been detected. However, the dilated pancreatic duct in people who have not yet developed PDAC is typically much smaller, making the segmentation difficult. Deep learning-based segmentation of tiny components is challenging because of the large imbalance between the target object and irrelevant regions. In this work, we explore an attention-guided approach for dilated pancreatic duct segmentation as a screening tool for pre-PDAC patients, concentrating on the pancreas regions and ignoring unnecessary features. We employ multi-scale aggregation to combine information at different scales and further improve the segmentation performance. Our proposed multi-scale pancreatic attention-guided approach achieved a Dice score of 54.16% on a dilated pancreatic duct dataset, a significant improvement over prior techniques.
Binary Classification for Lung Nodule Based on Channel Attention Mechanism
To effectively handle tumor detection on the LUNA16 dataset, we present a new data augmentation methodology that addresses the imbalance between the numbers of positive and negative candidates. Furthermore, a new deep learning model, ASS (a model that combines ConvNet sub-attention with Softmax loss), is proposed and evaluated on patches of different sizes from LUNA16. Data enrichment is implemented in two ways: off-line augmentation increases the number of images derived from the image under consideration, and on-line augmentation increases the number of images by rotating each image through four angles (0°, 90°, 180°, and 270°). We build candidate boxes of various sizes based on the coordinates of each candidate, and these candidate boxes are used to demonstrate the usefulness of the suggested ASS model. The results of cross-testing (four cases: case 1, ASS trained and tested on a dataset of size 50 × 50; case 2, ASS trained on a dataset of size 50 × 50 and tested on a dataset of size 100 × 100; case 3, ASS trained and tested on a dataset of size 100 × 100; and case 4, ASS trained on a dataset of size 100 × 100 and tested on a dataset of size 50 × 50) show that the proposed ASS model is feasible.
AutoSeg - Steering the Inductive Biases for Automatic Pathology Segmentation
In medical imaging, un-, semi-, or self-supervised pathology detection is often approached with anomaly- or out-of-distribution detection methods, whose inductive biases are not intentionally directed towards detecting pathologies, and are therefore sub-optimal for this task. To tackle this problem, we propose AutoSeg, an engine that can generate diverse artificial anomalies that resemble the properties of real-world pathologies. Our method can accurately segment unseen artificial anomalies and outperforms existing methods for pathology detection on a challenging real-world dataset of Chest X-ray images. We experimentally evaluate our method on the Medical Out-of-Distribution Analysis Challenge 2021 (Code available under: https://github.com/FeliMe/autoseg).
Fast 3D Registration with Accurate Optimisation and Little Learning for Learn2Reg 2021
Siebert, Hanna
Hansen, Lasse
Heinrich, Mattias P.
2022Book Section, cited 0 times
TCGA-KIRC
TCGA-KIRP
TCGA-LIHC
Current approaches for deformable medical image registration often struggle to fulfill all of the following criteria: versatile applicability, small computation or training times, and the ability to estimate large deformations. Furthermore, end-to-end networks for supervised training of registration often become overly complex and difficult to train. For the Learn2Reg 2021 challenge, we aim to address these issues by decoupling feature learning and geometric alignment. First, we introduce a new, very fast and accurate optimisation method. By using discretised displacements and a coupled convex optimisation procedure, we are able to robustly cope with large deformations. With the help of an Adam-based instance optimisation, we achieve very accurate registration performance, and by using regularisation we obtain smooth and plausible deformation fields. Second, to be versatile across different registration tasks, we extract hand-crafted features that are modality and contrast invariant and complement them with semantic features from a task-specific segmentation U-Net. With these results we achieved second place overall in the Learn2Reg 2021 challenge, winning Task 1 and placing second and third in the other two tasks.
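A generic PyTorch sketch of Adam-based instance optimisation over a dense displacement field, assuming precomputed feature volumes for the fixed and moving images. The MSE similarity term, regulariser weight, and step count are assumptions; the paper's coupled convex step over discretised displacements is not reproduced.

```python
import torch
import torch.nn.functional as F

def instance_optimise(fixed_feat, moving_feat, steps=100, lam=0.5):
    """Refine a dense displacement field (in normalised [-1, 1] grid units)
    for one image pair by minimising feature dissimilarity plus a diffusion
    smoothness term with Adam. Features are (1, C, D, H, W) tensors."""
    disp = torch.zeros(1, 3, *fixed_feat.shape[2:], requires_grad=True)
    opt = torch.optim.Adam([disp], lr=0.1)
    # Identity sampling grid for grid_sample, shape (1, D, H, W, 3).
    grid = F.affine_grid(torch.eye(3, 4).unsqueeze(0), fixed_feat.shape,
                         align_corners=False)
    for _ in range(steps):
        opt.zero_grad()
        warped = F.grid_sample(moving_feat,
                               grid + disp.permute(0, 2, 3, 4, 1),
                               align_corners=False)
        sim = F.mse_loss(warped, fixed_feat)          # feature similarity
        smooth = sum((disp.diff(dim=i) ** 2).mean()   # finite-difference
                     for i in (2, 3, 4))              # smoothness penalty
        (sim + lam * smooth).backward()
        opt.step()
    return disp.detach()
```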
Trafne: A Training Framework for Non-expert Annotators with Auto Validation and Expert Feedback
Miyata, Shugo
Chang, Chia-Ming
Igarashi, Takeo
2022Conference Proceedings, cited 0 times
Brain-Tumor-Progression
Annotation
Large-scale datasets play an important role in the application of deep learning methods to various practical tasks. Many crowdsourcing tools have been proposed for annotation tasks; however, the tasks they target are relatively easy. Non-obvious annotation tasks (e.g., medical image annotation) require professional knowledge, and non-expert annotators need to be trained to perform them. In this paper, we propose Trafne, a framework for the effective training of non-expert annotators that combines feedback from the system (auto validation) and human experts (expert validation). We then present a prototype implementation designed for brain tumor image annotation. We perform a user study to evaluate the effectiveness of our framework compared to a traditional training method. The results demonstrate that our proposed approach can help non-expert annotators complete a non-obvious annotation more accurately than the traditional method. In addition, we discuss the requirements of non-expert training on non-obvious annotation and potential applications of the framework.
EMSViT: Efficient Multi Scale Vision Transformer for Biomedical Image Segmentation
Sagar, Abhinav
2022Book Section, cited 0 times
BraTS 2021
Algorithm Development
Challenge
BraTS-TCGA-GBM
BraTS-TCGA-LGG
In this paper, we propose a novel network named Efficient Multi Scale Vision Transformer for Biomedical Image Segmentation (EMSViT). Our network splits the input feature maps into three parts processed with 1×1, 3×3 and 5×5 convolutions in both the encoder and decoder. A concatenation operator merges the features before they are fed to three consecutive transformer blocks with an embedded attention mechanism. Skip connections connect the encoder and decoder transformer blocks. Similarly, transformer blocks and a multi-scale architecture are used in the decoder before the features are linearly projected to produce the output segmentation map. We test the performance of our network on the Synapse multi-organ segmentation dataset, the Automated Cardiac Diagnosis Challenge dataset, a brain tumour MRI segmentation dataset, and a spleen CT segmentation dataset. Without bells and whistles, our network outperforms most previous state-of-the-art CNN- and transformer-based models, using the Dice score and the Hausdorff distance as evaluation metrics.
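A sketch of the multi-scale split-and-concatenate idea in PyTorch. The 2D setting and channel counts are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel 1x1, 3x3 and 5x5 convolutions whose outputs are
    concatenated, mirroring the multi-scale feature extraction
    described in the abstract above."""
    def __init__(self, in_ch, branch_ch=32):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)

    def forward(self, x):
        # Same spatial size in every branch, so channel-wise concat is valid.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)

x = torch.randn(1, 16, 64, 64)
print(MultiScaleBlock(16)(x).shape)  # torch.Size([1, 96, 64, 64])
```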
CA-Net: Collaborative Attention Network for Multi-modal Diagnosis of Gliomas
Yin, Baocai
Cheng, Hu
Wang, Fengyan
Wang, Zengfu
2022Book Section, cited 0 times
Algorithm Development
BraTS-TCGA-GBM
BraTS-TCGA-LGG
multi-modal imaging
BRAIN
BraTS 2021
Deep neural network methods have led to impressive breakthroughs in the medical image field. Most of them focus on single-modal data, while diagnoses in clinical practice are usually determined based on multi-modal data, especially for tumor diseases. In this paper, we intend to find a way to effectively fuse radiology images and pathology images for the diagnosis of gliomas. To this end, we propose a collaborative attention network (CA-Net), which consists of three attention-based feature fusion modules: multi-instance attention, cross attention, and attention fusion. We first take an individual network for each modality to extract the original features. Multi-instance attention combines different informative patches in the pathology image to form a holistic pathology feature. Cross attention interacts between the two modalities and enhances single-modality features by exploring complementary information from the other modality. The cross-attention matrices reflect feature reliability, so they are further utilized to obtain a coefficient for each modality for linearly fusing the enhanced features into the final representation in the attention fusion module. The three attention modules collaborate to discover a comprehensive representation. Our result on CPM-RadPath outperforms other fusion methods by a large margin, demonstrating the effectiveness of the proposed method.
Challenging Current Semi-supervised Anomaly Segmentation Methods for Brain MRI
Meissen, Felix
Kaissis, Georgios
Rueckert, Daniel
2022Book Section, cited 0 times
BraTS 2020
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Algorithm Development
Challenge
In this work, we tackle the problem of Semi-Supervised Anomaly Segmentation (SAS) in Magnetic Resonance Images (MRI) of the brain, which is the task of automatically identifying pathologies in brain images. Our work challenges the effectiveness of current Machine Learning (ML) approaches in this application domain by showing that thresholding Fluid-attenuated inversion recovery (FLAIR) MR scans provides better anomaly segmentation maps than several different ML-based anomaly detection models. Specifically, our method achieves better Dice similarity coefficients and Precision-Recall curves than the competitors on various popular evaluation data sets for the segmentation of tumors and multiple sclerosis lesions. (Code available under: https://github.com/FeliMe/brain_sas_baseline)
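A minimal sketch of the thresholding baseline described above, assuming a skull-stripped FLAIR volume and a brain mask as NumPy arrays; the exact normalisation and threshold used in the paper may differ.

```python
import numpy as np

def flair_threshold_baseline(flair, brain_mask, percentile=95):
    """Histogram-based anomaly baseline: z-normalise FLAIR intensities
    inside the brain mask and flag the brightest voxels as anomalous.
    The percentile is illustrative; a real pipeline would tune it on
    validation data."""
    vals = flair[brain_mask > 0]
    norm = (flair - vals.mean()) / (vals.std() + 1e-8)
    thresh = np.percentile(norm[brain_mask > 0], percentile)
    return (norm > thresh) & (brain_mask > 0)
```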
Small Lesion Segmentation in Brain MRIs with Subpixel Embedding
Wong, Alex
Chen, Allison
Wu, Yangchao
Cicek, Safa
Tiard, Alexandre
Hong, Byung-Woo
Soatto, Stefano
2022Book Section, cited 0 times
BraTS 2020
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Algorithm Development
Challenge
We present a method to segment MRI scans of the human brain into ischemic stroke lesion and normal tissues. We propose a neural network architecture in the form of a standard encoder-decoder where predictions are guided by a spatial expansion embedding network. Our embedding network learns features that can resolve detailed structures in the brain without the need for high-resolution training images, which are often unavailable and expensive to acquire. Alternatively, the encoder-decoder learns global structures by means of striding and max pooling. Our embedding network complements the encoder-decoder architecture by guiding the decoder with fine-grained details lost to spatial downsampling during the encoder stage. Unlike previous works, our decoder outputs at 2× the input resolution, where a single pixel in the input resolution is predicted by four neighboring subpixels in our output. To obtain the output at the original scale, we propose a learnable downsampler (as opposed to hand-crafted ones e.g. bilinear) that combines subpixel predictions. Our approach improves the baseline architecture by ≈ 11.7% and achieves the state of the art on the ATLAS public benchmark dataset with a smaller memory footprint and faster runtime than the best competing method. Our source code has been made available at: https://github.com/alexklwong/subpixel-embedding-segmentation.
Unsupervised Multimodal Supervoxel Merging Towards Brain Tumor Segmentation
Pelluet, Guillaume
Rizkallah, Mira
Acosta, Oscar
Mateus, Diana
2022Book Section, cited 0 times
Algorithm Development
Segmentation
Challenge
BraTS 2020
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Supervised training
Automated brain tumor segmentation is challenging given the tumor’s variability in size, shape, and image intensity. This paper focuses on the fusion of multimodal information coming from different Magnetic Resonance (MR) imaging sequences. We argue it is important to exploit the complementarity of all the modalities to better segment, and later determine the aggressiveness of, tumors. However, simply concatenating the multimodal data as channels of a single image generates a high volume of redundant information. Therefore, we propose a supervoxel-based approach that regroups pixels sharing perceptually similar information across the different modalities to produce a single coherent oversegmentation. To further reduce redundant information while keeping meaningful borders, we include a variance constraint and a supervoxel merging step. Our experimental validation shows that the proposed merging strategy produces high-quality clustering results useful for brain tumor segmentation. Indeed, our method reaches an ASA score of 0.712, compared to 0.316 for the monomodal approach, indicating that the supervoxels accommodate tumor boundaries well. Our approach also improves the Global Score (GS) by 11.5%, showing that the clusters effectively group pixels similar in intensity and texture.
Evaluating Glioma Growth Predictions as a Forward Ranking Problem
van Garderen, Karin A.
van der Voort, Sebastian R.
Wijnenga, Maarten M. J.
Incekara, Fatih
Kapsas, Georgios
Gahrmann, Renske
Alafandi, Ahmad
Smits, Marion
Klein, Stefan
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2020
BRAIN
Segmentation
Validation
Magnetic Resonance Imaging (MRI)
Algorithm Development
The problem of tumor growth prediction is challenging, but promising results have been achieved with both model-driven and statistical methods. In this work, we present a framework for the evaluation of growth predictions that focuses on spatial infiltration patterns, and specifically on evaluating a prediction of future growth. We propose to frame the problem as a ranking problem rather than a segmentation problem. Using average precision as a metric, we can evaluate the results with segmentations while using the full spatiotemporal prediction. Furthermore, by applying a biophysical tumor growth model to 21 patient cases, we compare two schemes for fitting and evaluating predictions. By carefully designing a scheme that separates the prediction from the observations used for fitting the model, we show that a better fit of model parameters does not guarantee better predictive power.
Predicting Isocitrate Dehydrogenase Mutation Status in Glioma Using Structural Brain Networks and Graph Neural Networks
Wei, Yiran
Li, Yonghao
Chen, Xi
Schönlieb, Carola-Bibiane
Li, Chao
Price, Stephen J.
2022Book Section, cited 0 times
TCGA-LGG
BraTS 2021
Challenge
Radiogenomics
Isocitrate dehydrogenase (IDH) mutation
Training
Algorithm Development
Glioma is a common malignant brain tumor with distinct survival among patients. The isocitrate dehydrogenase (IDH) gene mutation provides critical diagnostic and prognostic value for glioma, so it is of crucial significance to non-invasively predict IDH mutation based on pre-treatment MRI. Machine learning/deep learning models show reasonable performance in predicting IDH mutation using MRI. However, most models neglect the systematic brain alterations caused by tumor invasion, where widespread infiltration along white matter tracts is a hallmark of glioma. The structural brain network provides an effective tool to characterize brain organisation, which can be captured by graph neural networks (GNNs) to more accurately predict IDH mutation. Here we propose a method to predict IDH mutation using a GNN, based on the structural brain network of patients. Specifically, we first construct a network template of healthy subjects, consisting of atlases of edges (white matter tracts) and nodes (cortical/subcortical brain regions) to provide regions of interest (ROIs). Next, we employ autoencoders to extract the latent multi-modal MRI features from the ROIs of edges and nodes in patients to train a GNN architecture for predicting IDH mutation. The results show that the proposed method outperforms baseline models using 3D-CNN and 3D-DenseNet. In addition, model interpretation suggests its ability to identify the tracts infiltrated by tumor, corresponding to clinical prior knowledge. In conclusion, integrating brain networks with GNNs offers a new avenue to study brain lesions using computational neuroscience and computer vision approaches.
Optimization of Deep Learning Based Brain Extraction in MRI for Low Resource Environments
Brain extraction is an indispensable step in neuro-imaging with a direct impact on downstream analyses. Most such methods have been developed for non-pathologically affected brains and hence tend to suffer in performance when applied to brains with pathologies, e.g., gliomas, multiple sclerosis, traumatic brain injuries. Deep Learning (DL) methodologies for healthcare have shown promising results, but their clinical translation has been limited, primarily because these methods suffer from i) high computational cost, and ii) specific hardware requirements, e.g., DL acceleration cards. In this study, we explore the potential of mathematical optimizations towards making DL methods amenable to application in low-resource environments. We focus on both the qualitative and quantitative evaluation of such optimizations on an existing DL brain extraction method, designed for pathologically-affected brains and agnostic to the input modality. We conduct direct optimizations and quantization of the trained model (i.e., prior to inference on new data). Our results yield substantial gains in terms of speedup, latency, throughput, and reduction in memory usage, while the segmentation performance of the initial and the optimized models remains stable, as quantified by both the Dice Similarity Coefficient and the Hausdorff Distance. These findings support post-training optimizations as a promising approach for enabling the execution of advanced DL methodologies on plain commercial-grade CPUs, hence contributing to their translation in limited- and low-resource clinical environments.
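As an illustration of post-training optimization of this kind, a minimal PyTorch sketch using dynamic quantization on a toy model. The toy network and the choice of dynamic (rather than static, calibration-based) quantization are assumptions; the paper's actual model and toolchain are not reproduced.

```python
import torch
import torch.nn as nn

# Post-training dynamic quantization: weights stored in int8, activations
# quantized on the fly at inference time, targeting plain CPUs.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 2))
model.eval()  # quantize a trained model, prior to inference on new data

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 256)
print(quantized(x).shape)  # same interface, smaller and faster on CPU
```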
Reciprocal Adversarial Learning for Brain Tumor Segmentation: A Solution to BraTS Challenge 2021 Segmentation Task
Peiris, Himashi
Chen, Zhaolin
Egan, Gary
Harandi, Mehrtash
2022Book Section, cited 0 times
Algorithm Development
Segmentation
BraTS 2021
Radiomics
Generative Adversarial Network (GAN)
CPTAC-GBM
TCGA-GBM
TCGA-LGG
ACRIN-FMISO-Brain (ACRIN 6684)
Ivy GAP
UCSF-PDGM
This paper proposes an adversarial learning based training approach for the brain tumor segmentation task. In this concept, the 3D segmentation network learns from dual reciprocal adversarial learning approaches. To enhance generalization across the segmentation predictions and to make the segmentation network robust, we adhere to the Virtual Adversarial Training approach by generating additional adversarial examples through adding noise to the original patient data. By incorporating a critic that acts as a quantitative subjective referee, the segmentation network learns from the uncertainty information associated with segmentation results. We trained and evaluated the network architecture on the RSNA-ASNR-MICCAI BraTS 2021 dataset. Our performance on the online validation dataset is as follows: Dice Similarity Scores of 81.38%, 90.77% and 85.39%; Hausdorff Distances (95%) of 21.83 mm, 5.37 mm, 8.56 mm for the enhancing tumor, whole tumor and tumor core, respectively. Similarly, our approach achieved Dice Similarity Scores of 84.55%, 90.46% and 85.30%, as well as Hausdorff Distances (95%) of 13.48 mm, 6.32 mm and 16.98 mm on the final test dataset. Overall, our proposed approach yielded better performance in segmentation accuracy for each tumor sub-region. Our code implementation is publicly available.
Unet3D with Multiple Atrous Convolutions Attention Block for Brain Tumor Segmentation
Automated brain tumor segmentation remains an exciting challenge. The UNet architecture has been widely used for medical image segmentation with several modifications. Attention blocks have been used to modify skip connections in the UNet architecture, resulting in improved performance. In this study, we develop a UNet for brain tumor image segmentation by modifying its contraction and expansion blocks with attention, multiple atrous convolutions, and a residual pathway, which we call the Multiple Atrous convolutions Attention Block (MAAB). The expansion path additionally forms pyramid features taken from each level to produce the final segmentation output. The architecture is trained using patches and a batch size of 2 to save GPU memory. Online validation of the segmentation results on the BraTS 2021 validation dataset yielded Dice scores of 78.02, 80.73, and 89.07 for ET, TC, and WT. These results indicate that the proposed architecture is promising for further development.
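A PyTorch sketch of a block combining parallel atrous (dilated) convolutions with a residual pathway, in the spirit of the MAAB described above. The dilation rates, channel counts, and fusion scheme are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultipleAtrousBlock(nn.Module):
    """Parallel dilated 3x3x3 convolutions at several rates, fused by a
    1x1x1 convolution and added back through a residual pathway."""
    def __init__(self, ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv3d(ch, ch, kernel_size=3, padding=r, dilation=r)
            for r in rates)  # padding == dilation keeps spatial size
        self.fuse = nn.Conv3d(ch * len(rates), ch, kernel_size=1)

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.fuse(multi)  # residual pathway

x = torch.randn(1, 8, 16, 32, 32)
print(MultipleAtrousBlock(8)(x).shape)  # torch.Size([1, 8, 16, 32, 32])
```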
BRATS2021: Exploring Each Sequence in Multi-modal Input for Baseline U-net Performance
Druzhinina, Polina
Kondrateva, Ekaterina
Bozhenko, Arseny
Yarkin, Vyacheslav
Sharaev, Maxim
Kurmukov, Anvar
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BraTS 2020
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Since 2012 the BraTS competition has become a benchmark for brain MRI segmentation. The top-ranked solutions from the competition leaderboards of past years are primarily heavy and sophisticated ensembles of deep neural networks. The complexity of the proposed solutions can restrict their clinical use due to long execution times, and complicates model transfer to other datasets, especially when some MRI sequences are missing from the multimodal input. The current paper provides a baseline segmentation accuracy for each separate MRI modality and for all four sequences (T1, T1c, T2, and FLAIR) on a conventional 3D U-net architecture. We explore the predictive ability of each modality to segment the enhancing core, tumor core, and whole tumor. We then compare the baseline performance with BraTS 2019–2020 state-of-the-art solutions. Finally, we share the code and trained weights to facilitate further research on model transfer to different domains and use in other applications.
Combining Global Information with Topological Prior for Brain Tumor Segmentation
Yang, Hua
Shen, Zhiqiang
Li, Zhaopei
Liu, Jinqing
Xiao, Jinchao
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Gliomas are the most common and aggressive malignant primary brain tumors. Automatic brain tumor segmentation from multi-modality magnetic resonance images using deep learning methods is critical for glioma diagnosis. Deep learning segmentation architectures, especially those based on fully convolutional neural networks, have demonstrated strong performance on medical image segmentation. However, these approaches cannot explicitly model global information and overlook the topological structure of lesion regions, which leaves room for improvement. In this paper, we propose a convolution-and-transformer network (COTRNet) to explicitly capture global information and a topology-aware loss to constrain the network to learn topological information. Moreover, we exploit transfer learning by using parameters pretrained on ImageNet, and deep supervision by adding multi-level predictions, to further improve the segmentation performance. COTRNet achieved Dice scores of 78.08%, 76.18%, and 83.92% for the enhancing tumor, the tumor core, and the whole tumor segmentation in the brain tumor segmentation challenge 2021. Experimental results demonstrated the effectiveness of the proposed method.
Automatic Brain Tumor Segmentation Using Multi-scale Features and Attention Mechanism
Li, Zhaopei
Shen, Zhiqiang
Wen, Jianhui
He, Tian
Pan, Lin
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Gliomas are the most common primary malignant tumors of the brain. Magnetic resonance (MR) imaging is one of the main methods for detecting brain tumors, so accurate segmentation of brain tumors from MR images has important clinical significance throughout the diagnostic process. At present, the most popular automatic medical image segmentation methods are based on deep learning. Many researchers have developed convolutional neural networks, applied them to brain tumor segmentation, and demonstrated superior performance. In this paper, we propose a novel deep learning-based method named the multi-scale feature recalibration network (MSFR-Net), which extracts features at multiple scales and recalibrates them through the multi-scale feature extraction and recalibration (MSFER) module. In addition, we improve the segmentation performance by exploiting cross-entropy and Dice loss to address the class imbalance problem. We evaluate our proposed architecture on the brain tumor segmentation challenge (BraTS) 2021 test dataset. The proposed method achieved Dice coefficients of 89.15%, 83.02%, and 82.08% for the whole tumor, tumor core, and enhancing tumor, respectively.
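A minimal PyTorch sketch of the combined cross-entropy + soft Dice objective mentioned above. The equal weighting and the exact Dice formulation are assumptions, not the paper's stated recipe.

```python
import torch
import torch.nn.functional as F

def ce_dice_loss(logits, target, eps=1e-6):
    """Cross-entropy plus soft Dice loss, a common recipe against class
    imbalance. logits: (N, C, ...) raw scores; target: (N, ...) int64 labels."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, logits.shape[1]).movedim(-1, 1).float()
    dims = tuple(range(2, logits.ndim))          # reduce over spatial dims
    inter = (probs * onehot).sum(dims)
    denom = probs.sum(dims) + onehot.sum(dims)
    dice = (2 * inter + eps) / (denom + eps)     # per-sample, per-class Dice
    return ce + (1 - dice.mean())
```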
Simple and Fast Convolutional Neural Network Applied to Median Cross Sections for Predicting the Presence of MGMT Promoter Methylation in FLAIR MRI Scans
Chen, Daniel Tianming
Chen, Allen Tianle
Wang, Haiyan
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Radiogenomics
Challenge
BraTS 2021
BRAIN
Algorithm Development
Convolutional Neural Network (CNN)
In this paper we present a small and fast Convolutional Neural Network (CNN) used to predict the presence of MGMT promoter methylation in Magnetic Resonance Imaging (MRI) scans. Our data set is “The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification” by U. Baid, et al. We focus on the median (“middle-most”) cross section of a FLAIR scan and use it as the input to the neural net for training. This cross section presents the most, or nearly the most, surface area of any cross section. We are thus able to reduce the computational complexity and time of the training step while preserving the model’s ability to extrapolate to unseen data.
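A trivial sketch of the median-cross-section input strategy, assuming the first array axis of the volume is the slice axis (orientation conventions vary by dataset).

```python
import numpy as np

def median_axial_slice(volume):
    """Return the middle-most cross section of a 3D FLAIR volume along
    the first axis, to be used as the single 2D input to a CNN."""
    volume = np.asarray(volume)
    return volume[volume.shape[0] // 2]

vol = np.random.rand(155, 240, 240)   # illustrative BraTS-like shape
print(median_axial_slice(vol).shape)  # (240, 240)
```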
Brain Tumor Segmentation Using Non-local Mask R-CNN and Single Model Ensemble
Gliomas are the most common primary malignant brain tumors. Accurate segmentation and quantitative analysis of brain tumors are critical for diagnosis and treatment planning. Automatically segmenting tumors and their subregions is a challenging task, as demonstrated by the annual Multimodal Brain Tumor Segmentation Challenge (BraTS). To tackle this challenging task, we trained a 2D non-local Mask R-CNN on 814 patients from the BraTS 2021 training dataset. Our performance on another 417 patients from the BraTS 2021 training dataset was as follows: DSC of 0.784, 0.851 and 0.817; sensitivity of 0.775, 0.844 and 0.825 for the enhancing tumor, whole tumor and tumor core, respectively. By applying the focal loss function, our method achieved a DSC of 0.775, 0.885 and 0.829, as well as sensitivity of 0.757, 0.877 and 0.801. We also experimented with data distillation to ensemble a single model’s predictions. Our refined results were DSC of 0.797, 0.884 and 0.833; sensitivity of 0.820, 0.855 and 0.820.
As cases of brain disease increase, more treatments have been proposed and have achieved positive results. For brain lesions, however, early diagnosis can improve the chances of successful treatment and help patients recuperate better. For this reason, brain-lesion analysis is one of the most discussed topics in medical image analysis today. With improvements in architectures, a variety of methods have been proposed that achieve competitive scores. In this paper, we propose a technique that uses EfficientNet for 3D images, specifically EfficientNet-B0, for the brain-lesion classification task, and achieve a competitive score. Moreover, we also propose a method that uses Multiscale-EfficientNet to classify the slices of the MRI data.
HarDNet-BTS: A Harmonic Shortcut Network for Brain Tumor Segmentation
Wu, Hung-Yu
Lin, Youn-Long
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Tumor segmentation of brain MRI images is an important and challenging computer vision task. With well-curated multi-institutional multi-parametric MRI (mpMRI) data, the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2021 is a great benchmarking venue for researchers worldwide to contribute to the advancement of the state of the art. HarDNet is a memory-efficient neural network backbone that has demonstrated excellent performance and efficiency in image classification, object detection, real-time semantic segmentation, and colonoscopy polyp segmentation. In this paper, we propose HarDNet-BTS, a U-Net-like encoder-decoder architecture with a HarDNet backbone, for brain tumor segmentation. We train it on the BraTS 2021 dataset using three training strategies and ensemble the resulting models to improve prediction quality. Assessment reports from the BraTS 2021 validation server show that HarDNet-BTS delivers state-of-the-art performance (Dice_ET = 0.8442, Dice_TC = 0.8793, Dice_WT = 0.9260, HD95_ET = 12.592, HD95_TC = 7.073, HD95_WT = 3.884). It was ranked 8th in the validation phase. Its performance on the final testing dataset is consistent with that of the validation phase (Dice_ET = 0.8727, Dice_TC = 0.8665, Dice_WT = 0.9286, HD95_ET = 8.496, HD95_TC = 18.606, HD95_WT = 4.059). Inference on an MRI case takes only 16 s of GPU time and 6 GB of GPU memory.
Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images
Hatamizadeh, Ali
Nath, Vishwesh
Tang, Yucheng
Yang, Dong
Roth, Holger R.
Xu, Daguang
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
U-Net
Transformer
Semantic segmentation of brain tumors is a fundamental medical image analysis task involving multiple MRI imaging modalities that can assist clinicians in diagnosing the patient and successively studying the progression of the malignant entity. In recent years, Fully Convolutional Neural Network (FCNN) approaches have become the de facto standard for 3D medical image segmentation. The popular “U-shaped” network architecture has achieved state-of-the-art performance benchmarks on different 2D and 3D semantic segmentation tasks and across various imaging modalities. However, due to the limited kernel size of convolution layers in FCNNs, their performance in modeling long-range information is sub-optimal, and this can lead to deficiencies in the segmentation of tumors with variable sizes. On the other hand, transformer models have demonstrated excellent capabilities in capturing such long-range information in multiple domains, including natural language processing and computer vision. Inspired by the success of vision transformers and their variants, we propose a novel segmentation model termed Swin UNEt TRansformers (Swin UNETR). Specifically, the task of 3D brain tumor semantic segmentation is reformulated as a sequence-to-sequence prediction problem wherein multi-modal input data is projected into a 1D sequence of embeddings and used as an input to a hierarchical Swin transformer as the encoder. The Swin transformer encoder extracts features at five different resolutions by utilizing shifted windows for computing self-attention and is connected to an FCNN-based decoder at each resolution via skip connections. We have participated in the BraTS 2021 segmentation challenge, and our proposed model ranks among the top-performing approaches in the validation phase. Code: https://monai.io/research/swin-unetr.
Multi-plane UNet++ Ensemble for Glioblastoma Segmentation
Glioblastoma multiforme (grade four glioma, GBM) is the most aggressive malignant tumor of the brain and is usually treated with combined surgery, chemo- and radiotherapy. The O-6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status was shown to be predictive of GBM sensitivity to alkylating-agent chemotherapy and is a promising marker for personalized treatment. In this paper we propose a multi-plane ensemble of UNet++ models for the segmentation of gliomas in MRI scans, using a combination of Dice loss and boundary loss for training. For the prediction of MGMT promoter methylation, we use an ensemble of 3D EfficientNets (one per MRI modality). Both the UNet++ ensemble and the EfficientNets are trained and validated on data provided in the context of the Brain Tumor Segmentation Challenge (BraTS) 2021, containing 2,000 fully annotated glioma samples with four different MRI modalities. We achieve Dice scores of 0.792, 0.835, and 0.906 as well as Hausdorff distances of 16.61, 10.11, and 4.54 for enhancing tumor, tumor core and whole tumor, respectively. For MGMT promoter methylation status prediction, an AUROC of 0.577 is obtained.
Multimodal Brain Tumor Segmentation Using Modified UNet Architecture
Singh, Gaurav
Phophalia, Ashish
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Brain tumor segmentation is challenging because healthy or background regions are far more prevalent than tumor regions, and because the tumor region itself is divided into edema, tumor core, and non-enhancing regions. Given the scarcity of such data, the task becomes more challenging still. In this paper, we built a 3D U-Net based architecture for the multimodal brain tumor segmentation task. We report results on the BraTS 2021 validation and test datasets. We achieved Dice values of 0.87, 0.76 and 0.73 on the whole tumor, tumor core and enhancing regions, respectively, for the validation data, and 0.73, 0.67 and 0.63 on the whole tumor, tumor core and enhancing regions, respectively, for the test data.
A Video Data Based Transfer Learning Approach for Classification of MGMT Status in Brain Tumor MR Images
Lang, D. M.
Peeken, J. C.
Combs, S. E.
Wilkens, J. J.
Bartzsch, S.
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Radiogenomics
Challenge
BraTS 2021
Transfer learning
Deep Learning
BRAIN
Classification
Algorithm Development
Patient MGMT (O6 methylguanine DNA methyltransferase) status has been identified as essential for responsiveness to chemotherapy in glioblastoma patients and therefore represents an important clinical factor. Testing for MGMT methylation is invasive, time consuming and costly, and lacks a uniform gold standard. We studied MGMT status assessment from multi-parametric magnetic resonance imaging (mpMRI) scans and tested the ability of deep learning to perform this classification task. To overcome the limited number of training examples we used a transfer learning approach based on the video clip classification network C3D [30], allowing for full exploitation of the three-dimensional information in the MR images. MRI sequences were fused using a locally connected layer. Our approach was able to differentiate MGMT methylated from unmethylated patients with an area under the receiver operating characteristics curve (AUC) of 0.689 for the public validation set. On the private test set, the AUC was 0.577. Further studies assessing clinical importance and predictive power in terms of survival are needed.
Multimodal Brain Tumor Segmentation Using a 3D ResUNet in BraTS 2021
Pei, Linmin
Liu, Yanling
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
In this paper, we propose a multimodal brain tumor segmentation method using a 3D ResUNet deep neural network architecture. Deep neural networks have been applied in many domains, including computer vision and natural language processing. They have also been used for semantic segmentation in medical imaging, including brain tumor segmentation. In this work, we utilize a 3D ResUNet to segment tumors in brain magnetic resonance images (MRI). Multimodal MRI is prevalent in brain tumor analysis because it provides rich tumor information. We apply the proposed method to the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2021 validation dataset for tumor segmentation. The online evaluation of brain tumor segmentation using the proposed method gives Dice similarity coefficients (DSC) of 0.8196, 0.9195, and 0.8503 for enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively.
3D MRI Brain Tumour Segmentation with Autoencoder Regularization and Hausdorff Distance Loss Function
Manual segmentation of glioblastoma is a challenging task for radiologists, yet essential for treatment planning. In recent years deep convolutional neural networks have been shown to perform exceptionally well; in particular, the winner of the BraTS 2019 challenge uses a 3D U-Net architecture in combination with a variational autoencoder, using the Dice overlap measure as a cost function. In this work, we propose a loss function that approximates the Hausdorff distance metric used to evaluate segmentation performance, in the hope of achieving better segmentation performance on new data.
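For reference, a sketch of the (non-differentiable) symmetric Hausdorff distance that such a loss approximates, using SciPy; an actual training loss would need a differentiable surrogate, e.g. one based on distance transforms, which is not shown here.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(mask_a, mask_b):
    """Symmetric Hausdorff distance between two non-empty binary masks,
    computed over the coordinates of the foreground voxels."""
    a = np.argwhere(mask_a)  # (n_points, ndim) foreground coordinates
    b = np.argwhere(mask_b)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
```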
3D CMM-Net with Deeper Encoder for Semantic Segmentation of Brain Tumors in BraTS2021 Challenge
Choi, Yoonseok
Al-masni, Mohammed A.
Kim, Dong-Hyun
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
We propose a 3D version of the Contextual Multi-scale Multi-level Network (3D CMM-Net) with deeper encoder depth for automated semantic segmentation of different brain tumors in the BraTS2021 challenge. The proposed network has the capability to extract and learn deeper features for the task of multi-class segmentation directly from 3D MRI data. The overall performance of the proposed network gave Dice scores of 0.7557, 0.8060, and 0.8351 for enhancing tumor, tumor core, and whole tumor, respectively on the local-test dataset.
Multi Modal Fusion for Radiogenomics Classification of Brain Tumor
Glioblastomas are the most common and aggressive malignant primary tumor of the central nervous system in adults. The tumours are quite heterogeneous in shape, texture, and histology. Patients that have been diagnosed with glioblastoma typically have low survival rates, and it can take weeks to perform a genetic analysis of an extracted tissue sample. If an effective way to diagnose glioblastomas can be found using imaging and AI techniques, it could improve patients' quality of life through better planning of the required therapy and surgery. This work is part of the Brain Tumor Segmentation BraTS 2021 challenge. The challenge is to predict the MGMT promoter methylation status from multi-modal MRI data. We propose a multi-modal late fusion 3D classification network for brain tumor classification on 3D MRI images using all four modalities (T1w, T1wCE, T2w, FLAIR); the network can also be extended to include radiomics or other external features. We then compare it against 3D classification models trained on each image modality individually and ensembled together during inference.
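A minimal PyTorch sketch of late fusion across the four modalities: one encoder per modality, features concatenated before the classification head. The per-modality encoders here are toy placeholders for real 3D backbones; extra features (e.g. radiomics) could be concatenated at the same fusion point.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """One small 3D encoder per MRI modality (T1w, T1wCE, T2w, FLAIR);
    per-modality features are concatenated and classified jointly."""
    def __init__(self, n_modalities=4, feat_dim=64, n_classes=2):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                          nn.Linear(8, feat_dim))
            for _ in range(n_modalities))
        self.head = nn.Linear(feat_dim * n_modalities, n_classes)

    def forward(self, volumes):  # list of (N, 1, D, H, W) tensors
        feats = [enc(v) for enc, v in zip(self.encoders, volumes)]
        return self.head(torch.cat(feats, dim=1))  # late fusion point

vols = [torch.randn(2, 1, 32, 32, 32) for _ in range(4)]
print(LateFusionClassifier()(vols).shape)  # torch.Size([2, 2])
```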
A Joint Graph and Image Convolution Network for Automatic Brain Tumor Segmentation
Saueressig, Camillo
Berkley, Adam
Munbodh, Reshma
Singh, Ritambhara
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
We present a joint graph convolution - image convolution neural network as our submission to the Brain Tumor Segmentation (BraTS) 2021 challenge. We model each brain as a graph composed of distinct image regions, which is initially segmented by a graph neural network (GNN). Subsequently, the tumorous volume identified by the GNN is further refined by a simple (voxel) convolutional neural network (CNN), which produces the final segmentation. This approach captures both global brain feature interactions via the graphical representation and local image details through the use of convolutional filters. We find that the GNN component by itself can effectively identify and segment the brain tumors. The addition of the CNN further improves the median performance of the model on the validation set by 2% across all metrics evaluated.
Brain Tumor Segmentation Using Neural Network Topology Search
Milesi, Alexandre
Futrega, Michal
Marcinkiewicz, Michal
Ribalta, Pablo
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
We apply a method from Automated Machine Learning (AutoML), namely Neural Architecture Search (NAS), to the task of brain tumor segmentation in MRIs for the BraTS 2021 challenge. NAS methods are known to be compute-intensive, so we use a continuous and differentiable search space in order to apply a DiNTS search for optimal fully convolutional architectures. Our method obtained Dice scores of 0.9161, 0.8707 and 0.8537 for whole tumor, tumor core and enhancing tumor regions respectively on the test dataset, while requiring no manual design of the network architecture, which was found automatically from the provided training data.
Residual 3D U-Net with Localization for Brain Tumor Segmentation
Gliomas are brain tumors originating from the neuronal support tissue called glia, and can be benign or malignant. They are considered rare tumors, whose highly variable prognosis is primarily related to several factors, including localization, size, degree of extension, and certain immune factors. We propose an approach using a Residual 3D U-Net to segment these tumors with localization, a technique for centering and reducing the size of input images to make more accurate and faster predictions. We incorporated different training and post-processing techniques, such as cross-validation and a minimum pixel threshold.
A Two-Phase Optimal Mass Transportation Technique for 3D Brain Tumor Detection and Segmentation
Lin, Wen-Wei
Li, Tiexiang
Huang, Tsung-Ming
Lin, Jia-Wei
Yueh, Mei-Heng
Yau, Shing-Tung
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
The goal of optimal mass transportation (OMT) is to transform any irregular 3D object (e.g., a brain image) into a cube without creating significant distortion; here it is utilized to preprocess irregular brain samples into the tensor input format of the U-net algorithm. The BraTS 2021 database provides a new and challenging platform for the detection and segmentation of brain tumors, namely the whole tumor (WT), the tumor core (TC) and the enhancing tumor (ET), by AI techniques. We propose a two-phase OMT algorithm with density estimates for 3D brain tumor segmentation. In the first phase, we construct a volume-mass-preserving OMT, via the density determined by the FLAIR grayscale of the scanned modality, for the U-net, and predict the possible tumor regions. Then, in the second phase, we increase the density in the region of interest and construct a new OMT to enlarge the target region of tumors for the U-net, so that the U-net has a better chance to learn how to mark the correct segmentation labels. This preprocessing OMT technique is a new and promising method for CNN training and validation.
Cascaded Training Pipeline for 3D Brain Tumor Segmentation
We apply a cascaded training pipeline for the 3D U-Net to segment each brain tumor sub-region separately and chronologically. Firstly, the volumetric data of four modalities are used to segment the whole tumor in the first round of training. Then, our model combines the whole tumor segmentation with the mpMRI images to segment the tumor core. Finally, the network uses whole tumor and tumor core segmentations to predict enhancing tumor regions. Unlike the standard 3D U-Net, we use Group Normalization and Randomized Leaky Rectified Linear Unit in the encoding and decoding blocks. We achieved dice scores on the validation set of 88.84, 81.97, and 75.02 for whole tumor, tumor core, and enhancing tumor, respectively.
Brain Tumor Segmentation Using Attention Activated U-Net with Positive Mining
Singh, Har Shwinder
2022Book Section, cited 0 times
BraTS-TCGA-GBM
This paper proposes a Deeply Supervised Attention U-Net deep learning network with a novel image mining augmentation method to segment brain tumors in MR images. The network was trained on the 3D segmentation task of the BraTS 2021 Challenge Task 1. The Attention U-Net model improves upon the original U-Net by increasing focus on relevant feature maps, increasing training efficiency and increasing model performance. Notably, a novel data augmentation technique termed Positive Mining was applied. This technique crops out randomly scaled, positively labelled training samples and adds them to the training pipeline, which can effectively increase the discriminative ability of the network to identify a tumor and use tumor feature-specific attention maps. The metrics used to train and validate the network were the Dice coefficient and the Hausdorff metric. The best performance on the online final dataset with the aforementioned network and augmentation technique was: Dice scores of 0.858, 0.869 and 0.913 and Hausdorff distances of 12.7, 16.9 and 5.43 for the Enhancing Tumor (ET), Tumor Core (TC) and Whole Tumor (WT).
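A rough sketch of a positive-mining-style crop-scale-paste augmentation (2D for brevity). The scale range, the paste policy, and the assumption that the rescaled patch still fits inside the image are all illustrative, not the paper's exact procedure.

```python
import numpy as np
from scipy.ndimage import zoom

def positive_mining(image, mask, rng, scale_range=(0.5, 1.2)):
    """Crop the tumour-positive region, rescale it randomly, and paste
    its tumour pixels back at a random location, enlarging the positive
    training signal. Assumes a non-empty mask and that the scaled patch
    fits inside the image."""
    coords = np.argwhere(mask)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    s = rng.uniform(*scale_range)
    patch_img = zoom(image[lo[0]:hi[0], lo[1]:hi[1]], s, order=1)
    patch_msk = zoom(mask[lo[0]:hi[0], lo[1]:hi[1]].astype(float), s, order=0) > 0.5
    out_img, out_msk = image.copy(), mask.copy()
    oy = rng.integers(0, image.shape[0] - patch_img.shape[0] + 1)
    ox = rng.integers(0, image.shape[1] - patch_img.shape[1] + 1)
    view = (slice(oy, oy + patch_img.shape[0]), slice(ox, ox + patch_img.shape[1]))
    out_img[view][patch_msk] = patch_img[patch_msk]  # paste tumour pixels only
    out_msk[view][patch_msk] = 1
    return out_img, out_msk

rng = np.random.default_rng(0)
```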
Ensemble Outperforms Single Models in Brain Tumor Segmentation
Ren, Jianxun
Zhang, Wei
An, Ning
Hu, Qingyu
Zhang, Youjia
Zhou, Ying
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation remains an open and popular challenge, for which countless medical image segmentation models have been proposed. Based on the platform that the BraTS challenge 2021 provided for researchers, we implemented a battery of cutting-edge deep neural networks, such as nnU-Net, UNet++, CoTr, HRNet, and Swin-Unet, to directly compare performance amongst distinct models. To improve segmentation accuracy, we first tried several modification techniques (e.g., data augmentation, region-based training, batch-Dice loss function, etc.). Next, the outputs from the five best models were averaged using a final ensemble model, of which four models in the committee were organized in different architectures. As a result, the strengths of every single model were amplified by the aggregation. Our model took one of the best-performing places in the Brain Tumor Segmentation (BraTS) 2021 competition amongst over 1,200 researchers from all over the world, achieving Dice scores of 0.9256, 0.8774, 0.8576 and Hausdorff Distances (95%) of 4.36, 14.80, 14.49 for whole tumor, tumor core, and enhancing tumor, respectively.
Brain Tumor Segmentation Using UNet-Context Encoding Network
Glioblastoma is an aggressive type of cancer that can develop in the brain or spinal cord. Magnetic Resonance Imaging (MRI) is key to diagnosing and tracking brain tumors in clinical settings. Brain tumor segmentation in MRI is required for disease diagnosis, surgical planning, and prognosis. As these tumors are heterogeneous in shape and appearance, their segmentation becomes a challenging task. The performance of automated medical image segmentation has considerably improved because of recent advances in deep learning. Introducing context encoding with deep CNN models has shown promise for semantic segmentation of brain tumors. In this work, we use a 3D UNet-Context Encoding (UNCE) deep learning network for improved brain tumor segmentation. Further, we introduce epistemic and aleatoric Uncertainty Quantification (UQ) using Monte Carlo Dropout (MCDO) and Test Time Augmentation (TTA) with the UNCE deep learning model to ascertain confidence in tumor segmentation performance. We build our model using the training MRI image sets of RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2021. We evaluate the model performance using the validation and test images from the BraTS challenge dataset. Online evaluation of validation data shows dice score coefficients (DSC) of 0.7787, 0.8499, and 0.9159 for enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The dice score coefficients of the test datasets are 0.6684 for ET, 0.7056 for TC, and 0.7551 for WT, respectively.
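A minimal PyTorch sketch of the Monte Carlo Dropout part of the uncertainty quantification described above (TTA is not shown). It assumes the model contains dropout layers; the number of samples is an illustrative assumption.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, n_samples=20):
    """MC Dropout: keep dropout active at test time, run several stochastic
    forward passes, and use the mean as the prediction and the variance as
    an (epistemic) uncertainty estimate."""
    model.eval()
    for m in model.modules():  # re-enable only the dropout layers
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()
    with torch.no_grad():
        samples = torch.stack([torch.softmax(model(x), dim=1)
                               for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)

# Usage: mean_probs, uncertainty = mc_dropout_predict(seg_net, volume_batch)
```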
Ensemble CNN Networks for GBM Tumors Segmentation Using Multi-parametric MRI
Zeineldin, Ramy A.
Karar, Mohamed E.
Mathis-Ullrich, Franziska
Burgert, Oliver
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Glioblastomas are the most aggressive, fast-growing primary brain cancers, originating in the glial cells of the brain. Accurate identification of the malignant brain tumor and its sub-regions is still one of the most challenging problems in medical image segmentation. The Brain Tumor Segmentation Challenge (BraTS) has been a popular benchmark for automatic brain glioblastoma segmentation algorithms since its initiation. This year, the BraTS 2021 challenge provides the largest multi-parametric MRI (mpMRI) dataset, covering 2,000 pre-operative patients. In this paper, we propose a new aggregation of two deep learning frameworks, DeepSeg and nnU-Net, for automatic glioblastoma recognition in pre-operative mpMRI. Our ensemble method obtains Dice similarity scores of 92.00, 87.33, and 84.10 and Hausdorff distances of 3.81, 8.91, and 16.02 for the enhancing tumor, tumor core, and whole tumor regions, respectively, on the BraTS 2021 validation set, ranking us among the top ten teams. These experimental findings provide evidence that our method can be readily applied clinically, thereby aiding in brain cancer prognosis, therapy planning, and therapy response monitoring. A docker image for reproducing our segmentation results is available online at https://hub.docker.com/r/razeineldin/deepseg21.
BiTr-Unet: A CNN-Transformer Combined Network for MRI Brain Tumor Segmentation
Jia, Q.
Shu, H.
Brainlesion2021Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
Brain Tumor
Deep Learning
Multi-modal Image Segmentation
Vision Transformer
Convolutional neural networks (CNNs) have achieved remarkable success in automatically segmenting organs and lesions in 3D medical images. Recently, vision transformer networks have exhibited exceptional performance in 2D image classification tasks. Compared with CNNs, transformer networks have the appealing advantage of extracting long-range features due to their self-attention mechanism. Therefore, we propose a CNN-Transformer combined model, called BiTr-Unet, with specific modifications for brain tumor segmentation on multi-modal MRI scans. Our BiTr-Unet achieves good performance on the BraTS 2021 validation dataset with median Dice scores of 0.9335, 0.9304 and 0.8899, and median Hausdorff distances of 2.8284, 2.2361 and 1.4142 for the whole tumor, tumor core, and enhancing tumor, respectively. On the BraTS 2021 testing dataset, the corresponding results are 0.9257, 0.9350 and 0.8874 for Dice score, and 3, 2.2361 and 1.4142 for Hausdorff distance. The code is publicly available at https://github.com/JustaTinyDot/BiTr-Unet.
Optimized U-Net for Brain Tumor Segmentation
Futrega, Michał
Milesi, Alexandre
Marcinkiewicz, Michał
Ribalta, Pablo
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
U-Net
We propose an optimized U-Net architecture for a brain tumor segmentation task in the BraTS21 challenge. To find the optimal model architecture and the learning schedule, we have run an extensive ablation study to test: deep supervision loss, Focal loss, decoder attention, drop block, and residual connections. Additionally, we have searched for the optimal depth of the U-Net encoder, number of convolutional channels and post-processing strategy. Our method won the validation phase and took third place in the test phase. We have open-sourced the code to reproduce our BraTS21 submission at the NVIDIA Deep Learning Examples GitHub Repository (https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/Segmentation/nnUNet/notebooks/BraTS21.ipynb).
MS UNet: Multi-scale 3D UNet for Brain Tumor Segmentation
Ahmad, Parvez
Qamar, Saqib
Shen, Linlin
Rizvi, Syed Qasim Afser
Ali, Aamir
Chetty, Girija
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Deep convolutional neural network (DCNN)
Deep convolutional neural networks (CNNs) achieve remarkable performance in medical image analysis. UNet is the primary backbone of 3D CNN architectures for medical imaging tasks, including brain tumor segmentation. The skip connections in the UNet architecture concatenate multi-scale features from the image data, and these multi-scale features play an essential role in brain tumor segmentation. Researchers have presented numerous multi-scale strategies that perform excellently on the segmentation task. This paper proposes a multi-scale strategy that can further improve the final segmentation accuracy. We propose three multi-scale strategies in MS UNet. Firstly, we utilize densely connected blocks in the encoder and decoder for multi-scale features. Next, the proposed residual-inception blocks extract local and global information by merging features of different kernel sizes. Lastly, we apply deep supervision at multiple depths of the decoder. We validate MS UNet on the BraTS 2021 validation dataset. The Dice similarity coefficient (DSC) scores of the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) are 91.938%, 86.268%, and 82.409%, respectively.
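The third strategy, deep supervision at multiple decoder depths, can be sketched as below (assumed details: auxiliary logits from coarser decoder levels are compared against a resized ground truth with geometrically decaying weights; `base_loss` could be cross-entropy or a Dice loss):

```python
import torch.nn.functional as F

def deep_supervision_loss(outputs, target, base_loss):
    """outputs: list of logits from full resolution to coarser decoder levels.
    target: (B, D, H, W) integer label map."""
    total, weight = 0.0, 1.0
    for logits in outputs:
        # Resize the label map to this output's spatial size (nearest neighbour).
        scaled = F.interpolate(target.float().unsqueeze(1), size=logits.shape[2:],
                               mode="nearest").squeeze(1).long()
        total = total + weight * base_loss(logits, scaled)
        weight *= 0.5  # halve the contribution of each coarser level
    return total
```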
Evaluating Scale Attention Network for Automatic Brain Tumor Segmentation with Large Multi-parametric MRI Database
Yuan, Yading
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Automatic segmentation
Automatic segmentation of brain tumors is an essential but challenging step for extracting quantitative imaging biomarkers for accurate tumor detection, diagnosis, prognosis, treatment planning and assessment. This is the 10th year of the Brain Tumor Segmentation (BraTS) Challenge, which utilizes multi-institutional multi-parametric magnetic resonance imaging (mpMRI) scans for two tasks: 1) evaluation of state-of-the-art methods for the segmentation of intrinsically heterogeneous brain glioblastoma sub-regions in mpMRI scans; and 2) evaluation of classification methods to predict the MGMT promoter methylation status from pre-operative baseline scans. We participated in the image segmentation task by applying a fully automated segmentation framework that we previously developed for BraTS 2020. This framework, named scale-attention network, incorporates a dynamic scale-attention mechanism to integrate low-level details with high-level feature maps at different scales. Our framework was trained using the 1251 challenge training cases provided by BraTS 2021, and achieved an average Dice Similarity Coefficient (DSC) of 0.9277, 0.8851 and 0.8754, as well as 95% Hausdorff distances (in millimeters) of 4.2242, 15.3981 and 11.6925, on 570 testing cases for whole tumor, tumor core and enhancing tumor, respectively, ranking second in the brain tumor segmentation task of the RSNA-ASNR-MICCAI BraTS 2021 Challenge (id: deepX).
Orthogonal-Nets: A Large Ensemble of 2D Neural Networks for 3D Brain Tumor Segmentation
Pawar, Kamlesh
Zhong, Shenjun
Goonatillake, Dilshan Sasanka
Egan, Gary
Chen, Zhaolin
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Segmentation
Algorithm Development
Convolutional Neural Network (CNN)
We propose Orthogonal-Nets, a large ensemble of 2D encoder-decoder convolutional neural networks. Orthogonal-Nets take 2D slices from the axial, sagittal, and coronal views of the 3D brain volume and predict probabilities for the tumor segmentation regions. The predicted probability distributions from all three views are averaged to generate a 3D probability map that is subsequently used to predict the tumor regions in the 3D image. In this work, we propose two-stage Orthogonal-Nets. Stage I predicts the brain tumor labels for the whole 3D image using the axial, sagittal, and coronal views; the labels from the first stage are then used to crop only the tumor region. Multiple Orthogonal-Nets are then trained in stage II, taking only the cropped region as input. The two-stage strategy substantially reduces the computational burden on the stage-II networks, so many Orthogonal-Nets can be used in stage II. We used one Orthogonal-Net for stage I and 28 Orthogonal-Nets for stage II. The mean Dice scores on the testing dataset were 0.8660, 0.8776 and 0.9118 for enhancing tumor, tumor core, and whole tumor, respectively.
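The orthogonal-view fusion described here can be sketched as follows (a hedged reconstruction with assumed tensor shapes, not the authors' code): a single 2D network is applied slice-wise along each axis and the three resulting 3D probability maps are averaged.

```python
import torch

@torch.no_grad()
def orthogonal_predict(model2d, volume):
    """volume: (C, D, H, W). Returns averaged class probabilities (K, D, H, W)."""
    views = []
    for axis in (1, 2, 3):  # slice along each spatial axis in turn
        slices = volume.movedim(axis, 1).movedim(1, 0)      # (S, C, A, B) batch
        probs = torch.softmax(model2d(slices), dim=1)       # (S, K, A, B)
        views.append(probs.movedim(0, 1).movedim(1, axis))  # back to (K, D, H, W)
    return torch.stack(views).mean(0)
```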
Feature Learning by Attention and Ensemble with 3D U-Net to Glioma Tumor Segmentation
Cai, Xiaohong
Lou, Shubin
Shuai, Mingrui
An, Zhulin
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BraTS 2021 Task 1 concerns segmentation of intrinsically heterogeneous brain glioblastoma sub-regions in mpMRI scans. Based on the solution of a top-ten team from BraTS 2020 (open brats2020), we propose a 3D U-Net-like neural network, called TE U-Net, to differentiate glioma sub-region classes. So that the network automatically learns to focus on sub-region structures of varying shapes and sizes, TE U-Net adopts an architecture similar to U-Net++. Firstly, we retain the skip connections from the second and third encoder stages while cutting off the first-stage skip connection. Secondly, multi-stage features pass through attention gate blocks before the skip connections, combining channel and spatial information to suppress irrelevant regions. Finally, to improve performance at the post-processing stage, we ensemble multiple similar 3D U-Nets with attention modules. On the online validation database, the best TE U-Net results are Dice scores of 83.79% for the GD-enhancing tumor (ET), 86.47% for the tumor core (TC), and 91.98% for the whole tumor (WT), with Hausdorff (95%) values of 6.39, 7.81 and 3.86 and sensitivity values of 82.20%, 83.99% and 91.92%, respectively. On the final private test dataset, our solution achieved Dice scores of 85.62%, 86.70% and 90.64% for ET, TC and WT, with Hausdorff (95%) values of 18.70, 21.06 and 10.88.
MRI Brain Tumor Segmentation Using Deep Encoder-Decoder Convolutional Neural Networks
Yan, Benjamin B.
Wei, Yujia
Jagtap, Jaidip Manikrao M.
Moassefi, Mana
Garcia, Diana V. Vera
Singh, Yashbir
Vahdati, Sanaz
Faghani, Shahriar
Erickson, Bradley J.
Conte, Gian Marco
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
In this study, we focus on Task 1 of the 2021 Multimodal Brain Tumor Segmentation (BraTS) challenge. We present a modified U-net model aimed at improving the segmentation of glioblastomas, reducing the computation time without compromising detection sensitivity. Our automated approach takes multimodal MR images as input, generates a bounding box of the brain volume, and combines the model predictions at the 2D slice level into a full 3D segmentation that is written into a NIfTI file. On the official 2021 BraTS test set of 570 cases, the model obtained median Dice scores of 0.80, 0.87, and 0.87, as well as median 95% Hausdorff distances of 2.45, 4.64, and 6.40 for the enhancing tumor, tumor core, and whole tumor regions, respectively.
Brain Tumor Segmentation with Patch-Based 3D Attention UNet from Multi-parametric MRI
Accurate segmentation of the different sub-regions of gliomas, including peritumoral edema, necrotic core, and enhancing and non-enhancing tumor core, from multiparametric MRI scans has important clinical relevance in the diagnosis, prognosis and treatment of brain tumors. However, due to their highly heterogeneous appearance and shape, segmentation of the sub-regions is very challenging. Recent developments using deep learning models have proved effective in past brain segmentation challenges as well as other semantic and medical image segmentation problems. In this paper we developed a deep-learning-based segmentation method using a patch-based 3D UNet with an attention block. Hyper-parameter tuning and training- and test-time augmentation were applied to increase model performance. Preliminary results showed the effectiveness of the segmentation model, achieving mean Dice scores of 0.806 (ET), 0.863 (TC) and 0.918 (WT) on the validation dataset.
Dice Focal Loss with ResNet-like Encoder-Decoder Architecture in 3D Brain Tumor Segmentation
Nguyen-Truong, Hai
Pham, Quan-Dung
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Accurate identification of brain tumor sub-region boundaries in MRI plays a profoundly important role in clinical applications such as surgical treatment planning, image-guided interventions, monitoring tumor growth, and the generation of radiotherapy maps. However, manual delineation suffers from many problems: it requires anatomical knowledge, takes considerable annotation time, and is prone to inaccuracy due to human error. To tackle these issues, automated segmentation of brain tumors from 3D magnetic resonance images (MRIs) has been adopted in recent years. In this work, a ResNet-like encoder-decoder architecture is trained on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2021 training dataset. Experimental results demonstrate that this work achieves fairly good performance in brain tumor segmentation.
Brain Tumor Segmentation in Multi-parametric Magnetic Resonance Imaging Using Model Ensembling and Super-resolution
Jiang, Zhifan
Zhao, Can
Liu, Xinyang
Linguraru, Marius George
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation in MRI offers critical quantitative imaging data to characterize and improve prognosis. The International Brain Tumor Segmentation (BraTS) Challenge provides a unique opportunity to encourage machine learning solutions to address this challenging task. This year, the 10th edition of BraTS collected a multi-institutional multi-parametric MRI dataset of 2040 cases with the heterogeneity typical of large multi-domain imaging datasets. In this paper we present a strategy ensembling four models trained in parallel to increase the stability and performance of our neural network-based tumor segmentation. In particular, image intensity normalization and multi-parametric MRI super-resolution techniques are used in the ensembled pipelines. The evaluation of our solution on 570 unseen testing cases resulted in Dice scores of 86.28, 87.12 and 92.10, and Hausdorff distances of 14.36, 17.48 and 5.37 mm for the enhancing tumor, tumor core and whole tumor, respectively.
Quality-Aware Model Ensemble for Brain Tumor Segmentation
Wang, Kang
Wang, Haoran
Li, Zeyang
Pan, Mingyuan
Wang, Manning
Wang, Shuo
Song, Zhijian
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Automatic segmentation of brain tumors is still a challenging task. To improve segmentation performance and better ensemble candidate models with different architectures, we propose a three-stage model with a quality-aware model ensemble. The first stage locates the tumor with a coarse segmentation, while the second stage refines the coarse segmentation in the region of interest. The last stage performs the quality-aware model ensemble, using a quality-score prediction net to fuse the results from the multiple outputs of the sub-networks. Besides, we warp a standard SRI24 brain template to the subject image, which provides a strong prior on brain structure and symmetry. Our method shows competitive performance on the BraTS 2021 online validation dataset, obtaining an average Dice similarity coefficient (DSC) of 0.911, 0.850 and 0.816, and an average 95th percentile Hausdorff distance (HD95) of 4.58, 8.959 and 10.400, for whole tumor, tumor core, and enhancing tumor, respectively.
Redundancy Reduction in Semantic Segmentation of 3D Brain Tumor MRIs
Another year of the multimodal brain tumor segmentation challenge (BraTS 2021) provides an even larger dataset to facilitate collaboration and research on brain tumor segmentation methods, which are necessary for disease analysis and treatment planning. The large dataset size of BraTS 2021 and the advent of modern GPUs give deep-learning based approaches a better opportunity to learn tumor representations from the data. In this work, we retained an encoder-decoder based segmentation network, but focused on a modification of the network training process that minimizes redundancy under perturbations. Given a set of trained networks, we further introduce a confidence-based ensembling technique to improve performance. We evaluated the method on BraTS 2021; in terms of Dice for enhancing tumor, tumor core and whole tumor, we achieved average scores of 0.8600, 0.8868 and 0.9265 on the validation set, and 0.8769, 0.8721 and 0.9266 on the testing set. Our team's (NVAUTO) submission was the top performer in terms of ET and TC scores and, using the BraTS ranking system (based on per-case Dice and Hausdorff distance rankings), achieved 2nd place on the validation set and 4th place on the testing set.
Extending nn-UNet for Brain Tumor Segmentation
Luu, Huan Minh
Park, Sung-Hong
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation is essential for the diagnosis and prognosis of patients with gliomas. The brain tumor segmentation challenge has provided an abundant and high-quality data source for developing automatic algorithms for the task. This paper describes our contribution to the 2021 competition. We developed our methods based on nn-UNet, the winning entry of last year's competition. We experimented with several modifications, including using a larger network, replacing batch normalization with group normalization, and utilizing axial attention in the decoder. Internal 5-fold cross-validation and online evaluation from the organizers showed a minor improvement in quantitative metrics compared to the baseline. The proposed models won first place in the final ranking on unseen test data, achieving Dice scores of 88.35%, 88.78% and 93.19% for the enhancing tumor, the tumor core, and the whole tumor, respectively. The code, pretrained weights, and docker image for the winning submission are publicly available at https://github.com/rixez/Brats21_KAIST_MRI_Lab and https://hub.docker.com/r/rixez/brats21nnunet.
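One of the modifications mentioned, replacing batch normalization with group normalization, is commonly motivated by the small batch sizes that large 3D patches force. A generic sketch of the swap (not the authors' code; the group count is an assumption):

```python
import torch.nn as nn

def batchnorm_to_groupnorm(module, groups=8):
    """Recursively replace BatchNorm2d/3d layers with GroupNorm in-place."""
    for name, child in module.named_children():
        if isinstance(child, (nn.BatchNorm2d, nn.BatchNorm3d)):
            setattr(module, name, nn.GroupNorm(groups, child.num_features))
        else:
            batchnorm_to_groupnorm(child, groups)
    return module
```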
Generalized Wasserstein Dice Loss, Test-Time Augmentation, and Transformers for the BraTS 2021 Challenge
Fidon, Lucas
Shit, Suprosanna
Ezhov, Ivan
Paetzold, Johannes C.
Ourselin, Sébastien
Vercauteren, Tom
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation from multiple Magnetic Resonance Imaging (MRI) modalities is a challenging task in medical image computing. The main challenges lie in generalizability to a variety of scanners and imaging protocols. In this paper, we explore strategies to increase model robustness without increasing inference time. Towards this aim, we explore finding a robust ensemble from models trained using different losses, optimizers, and train-validation data splits. Importantly, we explore the inclusion of a transformer in the bottleneck of the U-Net architecture. While we find that a transformer in the bottleneck performs slightly worse than the baseline U-Net on average, the generalized Wasserstein Dice loss consistently produces superior results. Further, we adopt an efficient test-time augmentation strategy for faster and more robust inference. Our final ensemble of seven 3D U-Nets with test-time augmentation produces an average Dice score of 89.4% and an average Hausdorff 95% distance of 10.0 mm when evaluated on the BraTS 2021 testing dataset. Our code and trained models are publicly available at https://github.com/LucasFidon/TRABIT_BraTS2021.
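The distinguishing ingredient here, the generalized Wasserstein Dice loss, scores errors through a ground-distance matrix between classes, so that confusing clinically similar tumor labels costs less than confusing dissimilar ones. A simplified reading of that idea follows (not the authors' implementation, which is available at the linked repository):

```python
import torch

def generalized_wasserstein_dice(probs, target_onehot, M, eps=1e-7):
    """probs, target_onehot: (B, K, ...); M: (K, K) ground distances, M[k, k] = 0."""
    # Per-voxel expected distance between the predicted distribution and the label.
    err = torch.einsum("bk...,kl,bl...->b...", probs, M, target_onehot)
    tp = (1.0 - err).sum()  # Wasserstein-weighted true-positive mass
    return 1.0 - (2.0 * tp + eps) / (2.0 * tp + err.sum() + eps)
```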
Coupling nnU-Nets with Expert Knowledge for Accurate Brain Tumor Segmentation from MRI
Kotowski, Krzysztof
Adamski, Szymon
Machura, Bartosz
Zarudzki, Lukasz
Nalepa, Jakub
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Accurate and reproducible segmentation of brain tumors from multi-modal magnetic resonance (MR) scans is a pivotal step in clinical practice, as MR imaging is the modality of choice in brain tumor diagnosis and assessment, and incorrectly delineated tumor areas may adversely affect the design of the treatment pathway. In this paper, we exploit an end-to-end 3D nnU-Net architecture for this task and utilize an ensemble of five models with a custom stratification based on the distribution of necrosis, enhancing tumor, and edema. To improve the segmentation, we benefit from the experience of a senior radiologist captured in the form of several post-processing routines. Experiments on the BraTS'21 training and validation sets show that exploiting such expert knowledge can significantly improve the underlying models, delivering average Dice scores of 0.81977 (enhancing tumor), 0.87837 (tumor core), and 0.92723 (whole tumor). Finally, our algorithm took 6th place (out of 1600 participants) in the BraTS'21 Challenge, with average Dice scores over the test data of 0.86317, 0.87987, and 0.92838 for the enhancing tumor, tumor core and whole tumor, respectively.
Deep Learning Based Ensemble Approach for 3D MRI Brain Tumor Segmentation
Brain tumor segmentation has wide applications and important potential value for glioblastoma research. Because of the complex structure of tumor subtypes and the differing visual characteristics of modalities such as T1, T1ce, T2, and FLAIR, most methods fail to segment brain tumors with high accuracy, and tumor sizes and shapes are highly diverse in practice. Another problem is that most recent algorithms ignore the multi-scale information of brain tumor features. To handle these problems, we propose an ensemble method that exploits dilated convolutions to capture larger receptive fields, providing more contextual information from the brain image, and gains the ability to segment small tumors through multi-task learning. Besides, we apply the generalized Wasserstein Dice loss function when training the model to address class imbalance in multi-class segmentation. The experimental results demonstrate that the proposed ensemble method improves brain tumor segmentation accuracy, showing superiority over other recent segmentation methods.
Prediction of MGMT Methylation Status of Glioblastoma Using Radiomics and Latent Space Shape Features
Pálsson, Sveinn
Cerri, Stefano
Van Leemput, Koen
2022Book Section, cited 0 times
Radiomics
Radiogenomics
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2021
BRAIN
Methylation markers
Algorithm Development
In this paper we propose a method for predicting the status of MGMT promoter methylation in high-grade gliomas. From the available MR images, we segment the tumor using deep convolutional neural networks and extract both radiomic features and shape features learned by a variational autoencoder. We implemented a standard machine learning workflow to obtain predictions, consisting of feature selection followed by training of a random forest classification model. We trained and evaluated our method on the RSNA-ASNR-MICCAI BraTS 2021 challenge dataset and submitted our predictions to the challenge.
Combining CNNs with Transformer for Multimodal 3D MRI Brain Tumor Segmentation
We apply an ensemble of a modified TransBTS, nnU-Net, and a combination of both for the segmentation task of the BraTS 2021 challenge. We change the original architecture of the TransBTS model by adding Squeeze-and-Excitation blocks, increasing the number of CNN layers, and replacing the positional encoding in the Transformer block with learnable Multilayer Perceptron (MLP) embeddings, which makes the Transformer adjustable to any input size during inference. With these modifications, we improve TransBTS performance substantially. Inspired by the nnU-Net framework, we also combine it with our modified TransBTS by swapping the architecture inside nnU-Net for our custom model. On the BraTS 2021 validation set, the ensemble of these approaches achieves Dice scores of 0.8496, 0.8698 and 0.9256 and HD95 of 15.72, 11.057 and 3.374 for enhancing tumor, tumor core, and whole tumor, respectively. On the test set we obtain Dice scores of 0.8789, 0.8759 and 0.9279, and HD95 of 10.426, 17.203 and 4.93. Our code is publicly available (implementation at https://github.com/ucuapps/BraTS2021_Challenge).
Brain Tumor Segmentation Using Deep Infomax
Marndi, Jitendra
Craven, Cailyn
Kim, Geena
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
In this study, we apply the Deep Infomax (DIM) loss to a U-Net variant model with DenseNet blocks (DenseUnet). Using DenseUnet as a baseline model, we compare performance when training with cross-entropy loss alone versus training with the DIM loss. In a pilot study on BraTS 2020 data, we observed improvements when training with DIM and then retraining with cross entropy (DTC). The results on BraTS 2021 data also show slight improvements; however, longer training and further hyperparameter tuning are needed to achieve more effective results from DIM.
Automatic Brain Tumor Segmentation with a Bridge-Unet Deeply Supervised Enhanced with Downsampling Pooling Combination, Atrous Spatial Pyramid Pooling, Squeeze-and-Excitation and EvoNorm
Segmentation of brain tumors is a critical task for patient disease management. Since this task is time-consuming and subject to inter-expert delineation variation, automatic methods are of significant interest. The Multimodal Brain Tumor Segmentation Challenge (BraTS) has been in place for about a decade and provides a common platform to compare different automatic segmentation algorithms based on multiparametric magnetic resonance imaging (mpMRI) of gliomas. This year the challenge has taken a big step forward by approximately tripling the total amount of data. We address the image segmentation challenge with a network based on a Bridge-Unet, improved with a concatenation of max and average pooling for downsampling, Squeeze-and-Excitation (SE) blocks, Atrous Spatial Pyramid Pooling (ASPP), and EvoNorm-S0. Our model was trained using the 1251 training cases from the BraTS 2021 challenge and achieved an average Dice similarity coefficient (DSC) of 0.92457, 0.87811 and 0.84094, as well as a 95% Hausdorff distance (HD) of 4.19442, 7.55256 and 14.13390 mm for the whole tumor, tumor core, and enhancing tumor, respectively, on the online validation platform composed of 219 cases. Similarly, our solution achieved a DSC of 0.92548, 0.87628 and 0.87122, as well as HD95 of 4.30711, 17.84987 and 12.23361 mm, on the test dataset composed of 530 cases. Overall, our approach yielded well-balanced performance for each tumor subregion.
Brain Tumor Segmentation with Self-supervised Enhance Region Post-processing
In this paper, we extend previous research on robust multi-sequence segmentation methods that consider all available information from MRI scans through the composition of T1, T1C, T2 and T2-FLAIR sequences. The approach is based on a clinical radiology hypothesis and presents an efficient way of combining and matching 3D methods to search for areas comprising the GD-enhancing tumor, significantly improving model performance on the applied problem of brain tumor segmentation.

The proposed method demonstrates strong improvement on the segmentation problem with respect to the Dice and Hausdorff metrics, sensitivity and specificity, compared to an identical training/test procedure based on any single sequence, regardless of the chosen neural network architecture. On the test set we achieved Dice scores of 0.866, 0.921 and 0.869 for ET, WT, and TC.

The obtained results demonstrate significant performance improvement when combining several 3D approaches for brain tumor segmentation. We also provide a comparison of various 3D and 2D approaches, pre-processing with self-supervised data cleaning, post-processing optimization methods, and different backbone architectures.
E1D3 U-Net for Brain Tumor Segmentation: Submission to the RSNA-ASNR-MICCAI BraTS 2021 challenge
Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in medical image segmentation tasks. A common feature in most top-performing CNNs is an encoder-decoder architecture inspired by the U-Net. For multi-region brain tumor segmentation, 3D U-Net architecture and its variants provide the most competitive segmentation performances. In this work, we propose an interesting extension of the standard 3D U-Net architecture, specialized for brain tumor segmentation. The proposed network, called E1D3 U-Net, is a one-encoder, three-decoder fully-convolutional neural network architecture where each decoder segments one of the hierarchical regions of interest: whole tumor, tumor core, and enhancing core. On the BraTS 2018 validation (unseen) dataset, E1D3 U-Net demonstrates single-prediction performance comparable with most state-of-the-art networks in brain tumor segmentation, with reasonable computational requirements and without ensembling. As a submission to the RSNA-ASNR-MICCAI BraTS 2021 challenge, we also evaluate our proposal on the BraTS 2021 dataset. E1D3 U-Net showcases the flexibility in the standard 3D U-Net architecture which we exploit for the task of brain tumor segmentation.
Brain Tumor Segmentation from Multiparametric MRI Using a Multi-encoder U-Net Architecture
This paper describes our submission to Task 1 of the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2021, where the goal is to segment brain glioblastoma sub-regions in multi-parametric MRI scans. Glioblastoma patients have a very high mortality rate; robust and precise segmentation of the whole tumor, tumor core, and enhancing tumor sub-regions plays a vital role in patient management. We design a novel multi-encoder, shared-decoder U-Net architecture aimed at reducing the effect of signal artefacts that can appear in single channels of the MRI recordings. We train multiple such models on the training images made available by the challenge organizers, collected from 1251 subjects. The ensemble model achieves Dice scores of 0.9274 +/- 0.0930, 0.8717 +/- 0.2456, and 0.8750 +/- 0.1798, and Hausdorff distances of 4.77 +/- 17.05, 17.97 +/- 71.54, and 10.66 +/- 55.52, for whole tumor, tumor core, and enhancing tumor, respectively, on the 570 test subjects assessed by the organizers. We investigate the robustness of our automated segmentation system and discuss its possible relevance to existing and future clinical workflows for tumor evaluation and radiation therapy planning.
AttU-NET: Attention U-Net for Brain Tumor Segmentation
Tumor delineation is critical for the precise diagnosis and treatment of glioma patients. Since manual segmentation is time-consuming and tedious, automatic segmentation is desired. With the advent of the convolutional neural network (CNN), numerous CNN models have been proposed for medical image segmentation. However, small kernel sizes limit the receptive field, omitting global information. To utilize the intrinsic features of brain anatomical structure, we propose a modified U-Net with an attention block (AttU-Net) to extract complementary information from the whole image. The proposed attention block can be easily added to any segmentation backbone and improved the Dice score by 5%. We evaluated our approach on the BraTS 2021 challenge dataset and achieved promising performance. The Dice scores of enhancing tumor, tumor core, and whole tumor segmentation are 0.793, 0.819, and 0.879, respectively.
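The attention block described here is in the spirit of additive attention gates: a gating signal derived from coarser decoder features reweights the encoder skip features before concatenation. A hedged reconstruction (layer sizes and wiring are assumptions, not the paper's exact block):

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_x = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)
        self.w_g = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        # `gate` is assumed already upsampled to the skip features' spatial size.
        attn = torch.sigmoid(self.psi(torch.relu(self.w_x(skip) + self.w_g(gate))))
        return skip * attn  # suppress activations in irrelevant regions
```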
Brain Tumor Segmentation in mpMRI Scans (BraTS-2021) Using Models Based on U-Net Architecture
Maurya, Satyajit
Kumar Yadav, Virendra
Agarwal, Sumeet
Singh, Anup
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Accurate segmentation of brain tumors from MR images has important clinical relevance. To overcome the limitations of manual segmentation, a semi-automatic or automatic approach is desirable. The current study mainly focused on Task 1 of the BraTS'21 challenge, i.e., segmenting glioblastoma into sub-regions (the "enhancing tumor" (ET), the "tumor core" (TC), and the "whole tumor" (WT)). Deep learning models based on the UNet architecture were developed to produce tumor segmentation labels from 3D multi-parametric MRI (mpMRI) scans. During the segmentation process, the output mask from the first developed model (WT) was used to develop segmentation models for the remaining sub-regions. Optimizations were carried out to develop robust models, reduce computation time, and achieve high accuracy. The developed models showed high accuracy for the segmentation of tumor sub-regions on training as well as validation data.
Glioblastoma is the most common and lethal primary brain tumor in adults. Magnetic resonance imaging (MRI) is a critical diagnostic tool for glioblastoma. Besides MRI, histopathology features and molecular subtypes such as MGMT methylation, IDH mutation, and 1p19q co-deletion are used for prognosis. Accurate tumor segmentation is a step towards fully utilizing MRI data for radiogenomics, which will allow MRI to be used to predict genomic features of glioblastoma. With accurate tumor segmentation, we can obtain precise quantitative information about 3D tumor volumetric features. We have developed an inference model for brain tumor segmentation using a neural network with ResNet50 as the encoding layer. A major feature of our algorithm is the use of a composite image generated from the T1, T2, T1ce and FLAIR series. We report average Dice scores of 0.88716 for the whole tumor, 0.79052 for the necrotic core, and 0.72760 for the contrast-enhancing tumor on the validation set of the BraTS 2021 Task 1 challenge. For the final unseen test data, we report average Dice scores of 0.89656 for the whole tumor, 0.83734 for the necrotic core, and 0.81162 for the contrast-enhancing tumor.
A Deep Learning Approach to Glioblastoma Radiogenomic Classification Using Brain MRI
A malignant brain tumor known as glioblastoma is an extremely life-threatening condition. It has been shown that the presence of a specific genetic sequence in the tumor, known as MGMT promoter methylation, is a favourable prognostic factor and a sign of how well a patient will respond to chemotherapy. Currently, the only way to identify the presence of the MGMT promoter is to perform a genetic analysis that requires surgical intervention. The development of an accurate method for determining the presence of the MGMT promoter using only MRI would help reduce the number of surgeries. In this work, we developed a method for glioblastoma classification from MRI alone by choosing an appropriate loss function and neural network architecture and by ensembling trained models. This problem was successfully addressed as part of the "RSNA-MICCAI Brain Tumor Radiogenomic Classification" competition, and the proposed algorithm placed in the top 5% of solutions.
Radiogenomic Prediction of MGMT Using Deep Learning with Bayesian Optimized Hyperparameters
Glioblastoma (GBM) is the most aggressive primary brain tumor. The standard radiotherapeutic treatment for newly diagnosed GBM patients is Temozolomide (TMZ). O6-methylguanine-DNA-methyltransferase (MGMT) gene methylation status is a genetic biomarker for patient response to the treatment and is associated with a longer survival time. The standard method of assessing genetic alternation is surgical resection which is invasive and time-consuming. Recently, imaging genomics has shown the potential to associate imaging phenotype with genetic alternation. Imaging genomics provides an opportunity for noninvasive assessment of treatment response. Accordingly, we propose a convolutional neural network (CNN) framework with Bayesian optimized hyperparameters for the prediction of MGMT status from multimodal magnetic resonance imaging (mMRI). The goal of the proposed method is to predict the MGMT status noninvasively. Using the RSNA-MICCAI dataset, the proposed framework achieves an area under the curve (AUC) of 0.718 and 0.477 for validation and testing phase, respectively.
Comparison of MR Preprocessing Strategies and Sequences for Radiomics-Based MGMT Prediction
Hypermethylation of the O6-methylguanine-DNA-methyltransferase (MGMT) promoter in glioblastoma (GBM) is a predictive biomarker associated with improved treatment outcome. In clinical practice, MGMT methylation status is determined by biopsy or after surgical removal of the tumor. This study aims to investigate the feasibility of non-invasive, medical-imaging-based "radio-genomic" surrogate markers of MGMT methylation status.

The imaging dataset of the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) challenge allows exploring radiomics strategies for MGMT prediction in a large and very heterogeneous dataset that represents a variety of real-world imaging conditions, including different imaging protocols and devices. To characterize and optimize MGMT prediction strategies under these conditions, we examined different image preprocessing approaches and their effect on the average prediction performance of simple radiomics models.

We found features derived from FLAIR images to be most informative for MGMT prediction, particularly if aggregated over the entire (enhancing and non-enhancing) tumor with or without inclusion of the edema. Our results also indicate that the imaging characteristics of the tumor region can distort MR-bias-field correction in a way that negatively affects the prediction performance of the derived models.
Federated Learning Using Variable Local Training for Brain Tumor Segmentation
The potential for deep learning to improve medical image analysis is often stymied by the difficulty of acquiring and collecting sufficient data to train models. One major barrier to data acquisition is the private and sensitive nature of the data in question, as concerns about patient privacy, among others, make data sharing between institutions difficult. Distributed learning avoids the need to share data centrally by training models locally. One approach to distributed learning is federated learning, where models are trained in parallel at local institutions and aggregated into a global model. The 2021 Federated Tumor Segmentation (FeTS) challenge focuses on federated learning for brain tumor segmentation using magnetic resonance imaging scans collected from a real-world federation of collaborating institutions. We developed a federated training algorithm that uses a combination of variable local epochs in each federated round, a decaying learning rate, and an ensemble weight-aggregation function. When tested on unseen validation data, our model trained with federated learning achieves very similar performance (average DSC score of 0.674) to a central model trained on pooled data (average DSC score of 0.685). When our federated learning algorithm was evaluated on unseen training and testing data, it achieved similar performance on FeTS challenge leaderboards 1 and 2 (average DSC scores of 0.623 and 0.608, respectively). This federated learning algorithm offers an approach to training deep learning models without the need to share private and sensitive patient data.
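The aggregation step common to such federated schemes can be sketched as a sample-count-weighted average of local model weights (standard FedAvg; the paper's ensemble aggregation function may differ in detail):

```python
def federated_average(state_dicts, n_samples):
    """state_dicts: list of local model.state_dict(); n_samples: examples per site.
    Non-floating-point buffers may need special handling in practice."""
    total = float(sum(n_samples))
    return {key: sum(sd[key].float() * (n / total)
                     for sd, n in zip(state_dicts, n_samples))
            for key in state_dicts[0]}
```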
Multi-institutional Travelling Model for Tumor Segmentation in MRI Datasets
Souza, Raissa
Tuladhar, Anup
Mouches, Pauline
Wilms, Matthias
Tyagi, Lakshay
Forkert, Nils D.
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Administrative, ethical, and legal reasons often prevent the central collection of data and the subsequent development of machine learning models for computer-aided diagnosis tools using medical images. The main idea of distributed learning is to train machine learning models locally at each site rather than on centrally collected data, thereby avoiding sharing data between health care centers and model developers. Thus, distributed learning is an alternative that resolves many legal and ethical issues and overcomes the need to share data directly. Most previous studies simulated data distribution or used datasets acquired in a controlled way, potentially misrepresenting real clinical cases. The 2021 Federated Tumor Segmentation (FeTS) challenge provides clinically acquired multi-institutional magnetic resonance imaging (MRI) scans from patients with brain cancer and aims to compare federated learning models. In this work, we propose a travelling model that visits each collaborator site up to five times, with three distinct travelling orders (ascending, descending, and random) between collaborators, as a solution to distributed learning. Our results demonstrate that performing more training cycles is effective independent of the order in which the model is transferred among the collaborators. Moreover, we show that our model does not suffer from catastrophic forgetting and achieves a performance (average Dice score 0.676) similar to standard machine learning implementations (Dice score 0.667) trained using the data from all collaborators hosted at a central location.
Learn to Fuse Input Features for Large-Deformation Registration with Differentiable Convex-Discrete Optimisation
Siebert, Hanna
Heinrich, Mattias P.
2022Book Section, cited 0 times
CT COLONOGRAPHY
Hybrid methods that combine learning-based features with conventional optimisation have become popular for medical image registration. The ConvexAdam algorithm that ranked first in the comprehensive Learn2Reg registration challenges completely decouples semantic and/or hand-crafted feature extraction from the estimation of the transformation due to the difficulty of differentiating the discrete optimisation step. In this work, we propose a simple extension that enables backpropagation through discrete optimisation and learns to fuse the semantic and hand-crafted features in a supervised setting. We demonstrate state-of-the-art performance on abdominal CT registration.
Survey of Leukemia Cancer Cell Detection Using Image Processing
Devi, Tulasi Gayatri
Patil, Nagamma
Rai, Sharada
Philipose, Cheryl Sarah
2022Book Section, cited 0 times
SN-AM
Cancer is the development of abnormal cells that divide at an abnormal, uncontrolled pace. Cancerous cells can destroy normal tissue, spread throughout the body, and develop in various parts of it. This paper focuses on leukemia, a type of blood cancer. Blood cancers usually start in the bone marrow, where blood is produced; their main types are leukemia, non-Hodgkin lymphoma, Hodgkin lymphoma, and multiple myeloma. Leukemia arises when the body produces an abnormal number of white blood cells that hinder the bone marrow from creating red blood cells and platelets. Several detection methods for identifying cancerous cells have been proposed, but identification through cell image processing is very complex. Computer-aided image processing allows the images to be viewed in 2D and 3D, making it easier to identify cancerous cells. The cells have to undergo segmentation and classification in order to identify cancerous tumours. Several papers propose segmentation methods, classification methods, or both. The purpose of this survey is to review papers that use either conventional methods or machine learning methods to classify cells as cancerous or non-cancerous.
A Multi Brain Tumor Classification Using a Deep Reinforcement Learning Model
A brain tumor is a disease in which abnormal cells grow in the human brain. Different types of tumors can occur in the brain, and they can also develop in the spinal cord; doctors use various techniques to treat them. The first task is therefore to classify the tumor type so the appropriate treatment can be given. In general, Magnetic Resonance Imaging (MRI) is used to determine whether a tumor is present in an image and to identify its position. Tumors are broadly benign or malignant: benign tumors are non-cancerous and can be treated with medicines, whereas malignant tumors are dangerous, cannot be cured with medicines, and can lead to death. Manual evaluation of MRI is time-consuming, and evaluations differ between doctors, so deep learning offers an alternative for classifying brain tumor images. Deep learning comprises supervised, unsupervised, and reinforcement learning mechanisms. The model uses a convolutional neural network for feature extraction and classifies brain tumor images into glioma, meningioma, and pituitary tumors from a dataset of 3064 images containing all three types. Here, a reinforcement learning mechanism, built on agents, rewards, policies, and states, is used to classify the images, with a Deep Q-network employed for better accuracy. Reinforcement learning achieved higher classification accuracy than supervised and unsupervised mechanisms, increasing the accuracy of brain tumor classification to 95.4% compared with supervised learning.
Fitting Segmentation Networks on Varying Image Resolutions Using Splatting
Data used in image segmentation are not always defined on the same grid. This is particularly true for medical images, where the resolution, field-of-view and orientation can differ across channels and subjects. Images and labels are therefore commonly resampled onto the same grid, as a pre-processing step. However, the resampling operation introduces partial volume effects and blurring, thereby changing the effective resolution and reducing the contrast between structures. In this paper we propose a splat layer, which automatically handles resolution mismatches in the input data. This layer pushes each image onto a mean space where the forward pass is performed. As the splat operator is the adjoint to the resampling operator, the mean-space prediction can be pulled back to the native label space, where the loss function is computed. Thus, the need for explicit resolution adjustment using interpolation is removed. We show on two publicly available datasets, with simulated and real multi-modal magnetic resonance images, that this model improves segmentation results compared to resampling as a pre-processing step.
Correlation Between IBSI Morphological Features and Manually-Annotated Shape Attributes on Lung Lesions at CT
Bianconi, Francesco
Fravolini, Mario Luca
Pascoletti, Giulia
Palumbo, Isabella
Scialpi, Michele
Aristei, Cynthia
Palumbo, Barbara
2022Book Section, cited 0 times
LIDC-IDRI
Radiological examination of pulmonary nodules on CT involves the assessment of the nodules' size and morphology, a procedure usually performed manually. In recent years computer-assisted analysis of indeterminate lung nodules has been receiving increasing research attention as a potential means to improve the diagnosis, treatment and follow-up of patients with lung cancer. Computerised analysis relies on the extraction of objective, reproducible and standardised imaging features. In this context the aim of this work was to evaluate the correlation between nine IBSI-compliant morphological features and three manually-assigned radiological attributes: lobulation, sphericity and spiculation. Experimenting on 300 lung nodules from the open-access LIDC-IDRI dataset we found that the correlation between the computer-calculated features and the manually-assigned visual scores was at best moderate (Pearson's r between -0.61 and 0.59; Spearman's ρ between -0.59 and 0.56). We conclude that the morphological features investigated here have moderate ability to match/explain manually-annotated lobulation, sphericity and spiculation.
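The reported statistics can be reproduced for any feature/attribute pair with scipy; the arrays below are hypothetical stand-ins for one computed IBSI feature and its manually-assigned scores:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
feature = rng.random(300)        # e.g., computed sphericity per nodule
score = rng.integers(1, 6, 300)  # e.g., radiologist rating on a 1-5 scale

r, p_r = pearsonr(feature, score)
rho, p_rho = spearmanr(feature, score)
print(f"Pearson r = {r:.2f} (p = {p_r:.3g}); Spearman rho = {rho:.2f} (p = {p_rho:.3g})")
```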
Systematic Comparison of Incomplete-Supervision Approaches for Biomedical Image Classification
Shetab Boushehri, Sayedali
Qasim, Ahmad Bin
Waibel, Dominik
Schmich, Fabian
Marr, Carsten
2022Book Section, cited 0 times
AML-Cytomorphology_LMU
Deep learning based classification of biomedical images requires expensive manual annotation by experts. Incomplete-supervision approaches, including active learning, pre-training, and semi-supervised learning, have thus been developed to increase classification performance with a limited number of annotated images. In practice, a combination of these approaches is often used to reach the desired performance for biomedical images. Most of these approaches are designed for natural images, which differ fundamentally from biomedical images in terms of color, contrast, image complexity, and class imbalance. In addition, it is not always clear which combination to use in practical cases. We therefore analyzed the performance of combinations of seven active learning, three pre-training, and two semi-supervised methods on four exemplary biomedical image datasets covering various imaging modalities and resolutions. The results showed that ImageNet pre-training in combination with pseudo-labeling (semi-supervised learning) dominates the best performing combinations, while no particular active learning algorithm prevailed. For three out of four datasets, this combination reached over 90% of the fully supervised results by adding only 25% of labeled data. An ablation study also showed that pre-training and semi-supervised learning contributed up to a 25% increase in F1-score in each cycle, whereas active learning contributed less than a 5% increase per cycle. Based on these results, we suggest that the correct combination of pre-training and semi-supervised learning can be more efficient than active learning for biomedical image classification with limited annotated images. We believe that our study is an important step towards annotation-efficient model training for biomedical classification challenges.
Learning Shape Distributions from Large Databases of Healthy Organs: Applications to Zero-Shot and Few-Shot Abnormal Pancreas Detection
Multi-view Local Co-occurrence and Global Consistency Learning Improve Mammogram Classification Generalisation.
Chen, Yuanhong
Wang, Hu
Wang, Chong
Tian, Yu
Liu, Fengbei
Liu, Yuyuan
Elliott, Michael
McCarthy, Davis J.
Frazer, Helen
Carneiro, Gustavo
2022Book Section, cited 0 times
CBIS-DDSM
CMMD
Supervised training
Mammography
BREAST
Classification
Deep learning
When analysing screening mammograms, radiologists can naturally process information across two ipsilateral views of each breast, namely the cranio-caudal (CC) and mediolateral-oblique (MLO) views. These multiple related images provide complementary diagnostic information and can improve the radiologist’s classification accuracy. Unfortunately, most existing deep learning systems, trained with globally-labelled images, lack the ability to jointly analyse and integrate global and local information from these multiple views. By ignoring the potentially valuable information present in multiple images of a screening episode, one limits the potential accuracy of these systems. Here, we propose a new multi-view global-local analysis method that mimics the radiologist’s reading procedure, based on a global consistency learning and local co-occurrence learning of ipsilateral views in mammograms. Extensive experiments show that our model outperforms competing methods, in terms of classification accuracy and generalisation, on a large-scale private dataset and two publicly available datasets, where models are exclusively trained and tested with global labels.
CIRDataset: A Large-Scale Dataset for Clinically-Interpretable Lung Nodule Radiomics and Malignancy Prediction
Choi, Wookjin
Dahiya, Navdeep
Nadeem, Saad
2022Book Section, cited 0 times
LIDC-IDRI
Spiculations/lobulations, sharp/curved spikes on the surface of lung nodules, are good predictors of lung cancer malignancy and hence, are routinely assessed and reported by radiologists as part of the standardized Lung-RADS clinical scoring criteria. Given the 3D geometry of the nodule and 2D slice-by-slice assessment by radiologists, manual spiculation/lobulation annotation is a tedious task and thus no public datasets exist to date for probing the importance of these clinically-reported features in the SOTA malignancy prediction algorithms. As part of this paper, we release a large-scale Clinically-Interpretable Radiomics Dataset, CIRDataset, containing 956 radiologist QA/QC’ed spiculation/lobulation annotations on segmented lung nodules from two public datasets, LIDC-IDRI (N = 883) and LUNGx (N = 73). We also present an end-to-end deep learning model based on multi-class Voxel2Mesh extension to segment nodules (while preserving spikes), classify spikes (sharp/spiculation and curved/lobulation), and perform malignancy prediction. Previous methods have performed malignancy prediction for LIDC and LUNGx datasets but without robust attribution to any clinically reported/actionable features (due to known hyperparameter sensitivity issues with general attribution schemes). With the release of this comprehensively-annotated CIRDataset and end-to-end deep learning baseline, we hope that malignancy prediction methods can validate their explanations, benchmark against our baseline, and provide clinically-actionable insights. Dataset, code, pretrained models, and docker containers are available at https://github.com/nadeemlab/CIR.
Survival Prediction of Brain Cancer with Incomplete Radiology, Pathology, Genomic, and Demographic Data
Cui, Can
Liu, Han
Liu, Quan
Deng, Ruining
Asad, Zuhayr
Wang, Yaohong
Zhao, Shilin
Yang, Haichun
Landman, Bennett A.
Huo, Yuankai
2022Book Section, cited 0 times
TCGA-GBM
TCGA-LGG
Integrating cross-department multi-modal data (e.g., radiology, pathology, genomic, and demographic data) is ubiquitous in brain cancer diagnosis and survival prediction. To date, such an integration is typically conducted by human physicians (and panels of experts), which can be subjective and semi-quantitative. Recent advances in multi-modal deep learning, however, have opened a door to leverage such a process in a more objective and quantitative manner. Unfortunately, the prior arts of using four modalities on brain cancer survival prediction are limited by a “complete modalities” setting (i.e., with all modalities available). Thus, there are still open questions on how to effectively predict brain cancer survival from incomplete radiology, pathology, genomic, and demographic data (e.g., one or more modalities might not be collected for a patient). For instance, should we use both complete and incomplete data, and more importantly, how do we use such data? To answer the preceding questions, we generalize the multi-modal learning on cross-department multi-modal data to a missing data setting. Our contribution is three-fold: 1) We introduce a multi-modal learning with missing data (MMD) pipeline with competitive performance and less hardware consumption; 2) We extend multi-modal learning on radiology, pathology, genomic, and demographic data into missing data scenarios; 3) A large-scale public dataset (with 962 patients) is collected to systematically evaluate glioma tumor survival prediction using four modalities. The proposed method improved the C-index of survival prediction from 0.7624 to 0.8053.
Radiological Reports Improve Pre-training for Localized Imaging Tasks on Chest X-Rays
The Intrinsic Manifolds of Radiological Images and Their Role in Deep Learning
Konz, Nicholas
Gu, Hanxue
Dong, Haoyu
Mazurowski, Maciej A.
2022Book Section, cited 0 times
Prostate-MRI-US-Biopsy
The manifold hypothesis is a core mechanism behind the success of deep learning, so understanding the intrinsic manifold structure of image data is central to studying how neural networks learn from the data. Intrinsic dataset manifolds and their relationship to learning difficulty have recently begun to be studied for the common domain of natural images, but little such research has been attempted for radiological images. We address this here. First, we compare the intrinsic manifold dimensionality of radiological and natural images. We also investigate the relationship between intrinsic dimensionality and generalization ability over a wide range of datasets. Our analysis shows that natural image datasets generally have a higher number of intrinsic dimensions than radiological images. However, the relationship between generalization ability and intrinsic dimensionality is much stronger for medical images, which could be explained by radiological images having intrinsic features that are more difficult to learn. These results give a more principled underpinning to the intuition that radiological images can be more challenging to apply deep learning to than the natural image datasets common in machine learning research. We believe that rather than directly applying models developed for natural images to the radiological imaging domain, more care should be taken in developing architectures and algorithms better tailored to the specific characteristics of this domain. The research shown in our paper, demonstrating these characteristics and the differences from natural images, is an important first step in this direction.
CateNorm: Categorical Normalization for Robust Medical Image Segmentation
Many medical datasets have recently been created for medical image segmentation tasks, and it is natural to ask whether we can use them to sequentially train a single model that (1) performs better on all these datasets, and (2) generalizes well and transfers better to an unknown target site domain. Prior works have achieved this goal by jointly training one model on multi-site datasets, which achieves competitive performance on average, but such methods rely on the assumption that all training data are available, limiting their effectiveness in practical deployment. In this paper, we propose a novel multi-site segmentation framework called incremental-transfer learning (ITL), which learns a model from multi-site datasets in an end-to-end sequential fashion. Specifically, "incremental" refers to training on sequentially constructed datasets, and "transfer" is achieved by leveraging useful information from the linear combination of embedding features on each dataset. In our ITL framework, we train a network comprising a site-agnostic encoder with pretrained weights and at most two segmentation decoder heads, and we design a novel site-level incremental loss in order to generalize well to the target domain. We also show for the first time that our ITL training scheme is able to alleviate the challenging catastrophic forgetting problem in incremental learning. We conduct experiments using five challenging benchmark datasets to validate the effectiveness of our incremental-transfer learning approach. Our approach makes minimal assumptions on computation resources and domain-specific expertise, and hence constitutes a strong starting point in multi-site medical image segmentation.
Gradient-Rebalanced Uncertainty Minimization for Cross-Site Adaptation of Medical Image Segmentation
Automatically adapting image segmentation across data sites helps reduce the data annotation burden in medical image analysis. Due to variations in image collection procedures, a moderate domain gap usually exists between medical image datasets from different sites. Increasing the prediction certainty is beneficial for gradually reducing the category-wise domain shift. However, uncertainty minimization naturally leads to a bias towards major classes, since the target object usually occupies a small portion of pixels in the input image. In this paper, we propose a gradient-rebalanced uncertainty minimization scheme that is capable of eliminating this learning bias. First, the foreground and background pixels are reweighted according to the total gradient amplitude of every class. Furthermore, we devise a feature-level adaptation scheme to reduce the overall domain gap between source and target datasets, based on feature norm regularization and adversarial learning. Experiments on CT pancreas segmentation and MRI prostate segmentation validate that our method outperforms existing cross-site adaptation algorithms by around 3% on the Dice similarity coefficient.
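A simplified sketch of the core idea follows, assuming standard entropy minimization with class reweighting (the paper derives its weights from the total gradient amplitude per class; here predicted probability mass serves as a stand-in proxy):

    import torch

    def rebalanced_entropy_loss(logits, eps=1e-8):
        # logits: (batch, n_classes, H, W) on unlabeled target-domain images
        p = torch.softmax(logits, dim=1)
        # Per-class mass approximates how much gradient each class accumulates.
        mass = p.sum(dim=(0, 2, 3))                  # (n_classes,)
        w = mass.sum() / (mass + eps)                # rare foreground gets larger weight
        w = w / w.sum()
        ent = -(p * torch.log(p + eps))              # per-pixel, per-class entropy
        return (w.view(1, -1, 1, 1) * ent).sum(dim=1).mean()

    loss = rebalanced_entropy_loss(torch.randn(2, 2, 64, 64))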
msFormer: Adaptive Multi-Modality 3D Transformer for Medical Image Segmentation
Tan, Jiaxin
Jiang, Chuangbo
Li, Laquan
Li, Haoyuan
Li, Weisheng
Zheng, Shenhai
2022Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Medical Decathlon
BRAIN
RSNA-ASNR-MICCAI BraTS 2021
Radiomics
Segmentation
Over the past years, Convolutional Neural Networks (CNNs) have dominated the field of medical image segmentation. But they have difficulty representing long-range dependencies. Recently, the Transformer has been applied to medical image segmentation. Transformer-based architectures that utilize the self-attention (core of the Transformer) mechanism can encode long-range dependencies on images with highly expressive learning capabilities. In this paper, we introduce an adaptive multi-modality 3D medical image segmentation network based on Transformer (called msFormer), which is also a powerful 3D fusion network, and extend the application of Transformer to multi-modality medical image segmentation. This fusion network is modeled in the U-shaped structure to exploit complementary features of different modalities at multiple scales, which increases the cubical representations. We conducted a comprehensive experimental analysis on the Prostate and BraTS2021 datasets. The results show that our method achieves an average DSC of 0.905 and 0.851 on these two datasets, respectively, outperforming existing state-of-the-art methods and providing significant improvements.
UniMiSS: Universal Medical Self-supervised Learning via Breaking Dimensionality Barrier
Self-supervised learning (SSL) opens up huge opportunities for medical image analysis, a field well known for its lack of annotations. However, aggregating massive (unlabeled) 3D medical images like computerized tomography (CT) remains challenging due to high imaging cost and privacy restrictions. In this paper, we advocate bringing a wealth of 2D images like chest X-rays as compensation for the lack of 3D data, aiming to build a universal medical self-supervised representation learning framework, called UniMiSS. The key problem is then how to break the dimensionality barrier, i.e., how to perform SSL with both 2D and 3D images. To achieve this, we design a pyramid U-like medical Transformer (MiT). It is composed of the switchable patch embedding (SPE) module and Transformers. The SPE module adaptively switches to either 2D or 3D patch embedding, depending on the input dimension. The embedded patches are converted into a sequence regardless of their original dimensions. The Transformers model the long-term dependencies in a sequence-to-sequence manner, thus enabling UniMiSS to learn representations from both 2D and 3D images. With the MiT as the backbone, we perform the UniMiSS in a self-distillation manner. We conduct extensive experiments on six 3D/2D medical image analysis tasks, including segmentation and classification. The results show that the proposed UniMiSS achieves promising performance on various downstream tasks, outperforming the ImageNet pre-training and other advanced SSL counterparts substantially. Code is available at https://github.com/YtongXie/UniMiSS-code.
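The switchable patch embedding can be sketched in a few lines. The following is our illustration of the published idea (layer and class names are ours): the module routes 2D or 3D inputs through the matching convolutional embedding so that both emerge as token sequences a single Transformer can consume.

    import torch
    import torch.nn as nn

    class SwitchablePatchEmbedding(nn.Module):
        def __init__(self, in_ch=1, embed_dim=96, patch=4):
            super().__init__()
            self.proj2d = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)
            self.proj3d = nn.Conv3d(in_ch, embed_dim, kernel_size=patch, stride=patch)

        def forward(self, x):
            # x: (B, C, H, W) for 2D images or (B, C, D, H, W) for 3D volumes
            proj = self.proj3d if x.dim() == 5 else self.proj2d
            tokens = proj(x)                          # (B, E, ...spatial...)
            return tokens.flatten(2).transpose(1, 2)  # (B, n_patches, E)

    spe = SwitchablePatchEmbedding()
    xray = spe(torch.randn(2, 1, 224, 224))   # (2, 3136, 96) token sequence
    ct = spe(torch.randn(2, 1, 64, 64, 64))   # (2, 4096, 96) token sequence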
Joint Learning of Localized Representations from Medical Images and Reports
Müller, Philip
Kaissis, Georgios
Zou, Congyu
Rueckert, Daniel
2022Book Section, cited 0 times
COVID-19-AR
Contrastive learning has proven effective for pre-training image models on unlabeled data, with promising results for tasks such as medical image classification. Using paired text (like radiological reports) during pre-training improves the results even further. Still, most existing methods target image classification downstream tasks and may not be optimal for localized tasks like semantic segmentation or object detection. We therefore propose Localized representation learning from Vision and Text (LoVT), a text-supervised pre-training method that explicitly targets localized medical imaging tasks. Our method combines instance-level image-report contrastive learning with local contrastive learning on image region and report sentence representations. We evaluate LoVT and commonly used pre-training methods on an evaluation framework of 18 localized tasks on chest X-rays from five public datasets. LoVT performs best on 10 of the 18 studied tasks, making it the method of choice for localized tasks.
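The instance-level part of such image-report pre-training is a symmetric InfoNCE loss. The sketch below is our illustration (omitting LoVT's local region/sentence-level term): matched image and report embeddings are pulled together, mismatched pairs pushed apart.

    import torch
    import torch.nn.functional as F

    def info_nce(img_emb, txt_emb, temperature=0.1):
        img = F.normalize(img_emb, dim=-1)
        txt = F.normalize(txt_emb, dim=-1)
        logits = img @ txt.t() / temperature          # (B, B) similarity matrix
        targets = torch.arange(len(img))              # i-th image matches i-th report
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

    loss = info_nce(torch.randn(8, 256), torch.randn(8, 256))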
PACTran: PAC-Bayesian Metrics for Estimating the Transferability of Pretrained Models to Classification Tasks
Ding, Nan
Chen, Xi
Levinboim, Tomer
Changpinyo, Soravit
Soricut, Radu
2022Book Section, cited 0 times
CBIS-DDSM
With the increasing abundance of pretrained models in recent years, the problem of selecting the best pretrained checkpoint for a particular downstream classification task has been gaining increased attention. Although several methods have recently been proposed to tackle the selection problem (e.g. LEEP, H-score), these methods resort to applying heuristics that are not well motivated by learning theory. In this paper we present PACTran, a theoretically grounded family of metrics for pretrained model selection and transferability measurement. We first show how to derive PACTran metrics from the optimal PAC-Bayesian bound under the transfer learning setting. We then empirically evaluate three metric instantiations of PACTran on a number of vision tasks (VTAB) as well as a language-and-vision (OKVQA) task. An analysis of the results shows PACTran is a more consistent and effective transferability measure compared to existing selection methods.
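PACTran's exact derivation is in the paper; as background, the sketch below (our illustration, with mock data) implements LEEP, one of the heuristic baselines the paper compares against, which scores transferability from the source model's predicted distributions and the target labels.

    import numpy as np

    def leep(source_probs, target_labels):
        """LEEP transferability score (Nguyen et al., 2020): higher is better."""
        n, _ = source_probs.shape
        y = np.asarray(target_labels)
        classes = np.unique(y)
        # Empirical joint distribution over (target label, source class).
        joint = np.stack([source_probs[y == c].sum(axis=0) for c in classes]) / n
        cond = joint / joint.sum(axis=0, keepdims=True)   # P(target label | source class)
        eep = source_probs @ cond.T                        # (n, n_target_classes)
        idx = np.searchsorted(classes, y)
        return np.log(eep[np.arange(n), idx]).mean()

    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(10), size=500)   # mock source-model outputs
    labels = rng.integers(0, 3, 500)               # mock target labels
    print(leep(probs, labels))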
Whole Mammography Diagnosis via Multi-instance Supervised Discriminative Localization and Classification
Wu, Qingxia
Tan, Hongna
Wu, Yaping
Dong, Pei
Che, Jifei
Li, Zheren
Lei, Chenjin
Shen, Dinggang
Xue, Zhong
Wang, Meiyun
2022Conference Proceedings, cited 0 times
CMMD
Algorithm Development
Computer Aided Diagnosis (CADx)
Precise mammography diagnosis plays a vital role in breast cancer management, especially in identifying malignancy with computer assistance. Due to the high resolution, large image size, and small lesion region, it is challenging to localize lesions while classifying the whole mammogram, which also makes annotating mammography datasets and balancing tumor and normal background regions for training difficult. To fully use local lesion information and macroscopic malignancy information, we propose a two-step mammography classification method based on multi-instance learning. In step one, a multi-task encoder-decoder architecture (mt-ConvNext-Unet) is employed for instance-level lesion localization and lesion type classification. To enhance feature extraction, we adopt ConvNext as the encoder and add normalization layers and scSE attention blocks in the decoder to strengthen the localization of small lesions. A classification branch after the encoder jointly trains lesion classification and segmentation. The instance-based outputs are merged at the image level for both segmentation and classification (SegMap and ClsMap). In step two, a whole-mammogram classification model is applied for breast-level cancer diagnosis by combining the results of CC and MLO views with EfficientNet. Experimental results on the open dataset show that our method not only accurately classifies breast cancer on mammography but also highlights the suspicious regions.
Estimating the Coverage in 3D Reconstructions of the Colon from Colonoscopy Videos
Muhlethaler, Emmanuelle
Posner, Erez
Bouhnik, Moshe
2022Book Section, cited 0 times
CT COLONOGRAPHY
Colonoscopy is the most common procedure for early detection and removal of polyps, a critical component of colorectal cancer prevention. Insufficient visual coverage of the colon surface during the procedure often results in missed polyps. To mitigate this issue, reconstructing the 3D surfaces of the colon in order to visualize the missing regions has been proposed. However, robustly estimating the local and global coverage from such a reconstruction has not been thoroughly investigated until now. In this work, we present a new method to estimate the coverage from a reconstructed colon pointcloud. Our method splits a reconstructed colon into segments and estimates the coverage of each segment by estimating the area of the missing surfaces. We achieve a mean absolute coverage error of 3–6% on colon segments generated from synthetic colonoscopy data and real colonography CT scans. In addition, we show good qualitative results on colon segments reconstructed from real colonoscopy videos.
Artificial Intelligence for Colorectal Polyps Classification Using 3D CNN
Convolutional Neural Networks (CNNs) have made remarkable progress in the medical field, where they are widely used to extract highly representative characteristics in cases of acute medical pathology. Ending in fully connected layers, a CNN allows classification of the data: features are filtered and selected across the network layers and applied at the last layers. CNNs offer a better prognosis, especially in the prevention of colorectal cancer (CRC). CRC develops from cells that line the inner lining of the colon. Mostly, it arises from a benign tumor, called a polyp, which slowly grows over time and can develop into malignant cells. Classifying 3D scan images of the abdomen by the presence or absence of polyps is therefore necessary to increase the chance of early detection of the disease and thus guide it to the appropriate treatment. In this work, we present and study a 3D CNN model for the processing and classification of polyps. The results show promising performance for a 12-layer 3D CNN model.
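As an illustration of the kind of model described (our sketch, not the paper's exact 12-layer architecture), a minimal 3D CNN for classifying an abdominal CT sub-volume as polyp versus no-polyp could look like this:

    import torch
    import torch.nn as nn

    class Polyp3DCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(64, 2)  # polyp present / absent

        def forward(self, x):                   # x: (B, 1, D, H, W) CT patch
            return self.classifier(self.features(x).flatten(1))

    logits = Polyp3DCNN()(torch.randn(2, 1, 32, 64, 64))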
Multi-Graph Convolutional Neural Network for Breast Cancer Multi-task Classification
Ibrahim, Mohamed
Henna, Shagufta
Cullen, Gary
2023Book Section, cited 0 times
CBIS-DDSM
Semi-supervised learning
Algorithm Development
Radiomics
Mammography is a popular diagnostic imaging procedure for detecting breast cancer at an early stage. Various deep-learning approaches to breast cancer detection incur high costs and are error-prone, so they are not reliable enough for medical practitioners. Specifically, these approaches do not exploit complex texture patterns and interactions, and they require labelled data to enable learning, limiting their scalability when labelled datasets are insufficient. Further, these models lack the capability to generalise to newly synthesised patterns/textures. To address these problems, we first design a graph model that transforms mammogram images into a highly correlated multigraph encoding rich structural relations and high-level texture features. Next, we integrate a pre-training self-supervised learning multigraph encoder (SSL-MG) to improve feature representations, especially under limited labelled data constraints. Then, we design a semi-supervised mammogram multigraph convolution neural network downstream model (MMGCN) to perform multi-classification of mammogram segments encoded in the multigraph nodes. Our proposed frameworks, SSL-MGCN and MMGCN, reduce the need for annotated data to 40% and 60%, respectively, in contrast to conventional methods that require more than 80% of the data to be labelled. Finally, we evaluate the classification performance of MMGCN independently and integrated with SSL-MG in a model called SSL-MMGCN over multiple training settings. Our evaluation results on DDSM, one of the recent public datasets, demonstrate the efficient learning performance of SSL-MMGCN and MMGCN, with 0.97 and 0.98 AUC classification accuracy, in contrast to the 0.81 AUC of the multitask deep graph (GCN) method of Hao Du et al. (2021).
Deep Active Learning for Glioblastoma Quantification
Generating pixel- or voxel-wise annotations of radiological images to train deep learning-based segmentation models is a time-consuming and expensive job involving the precious time and effort of radiologists. Another challenge is obtaining diverse annotated training data that covers the entire spectrum of potential situations. In this paper, we propose an Active Learning (AL) based segmentation strategy in which a human annotator, or "Oracle", annotates interactively. The deep learning-based segmentation model learns in parallel by training in iterations with the annotated samples. A publicly available MRI dataset of brain tumors (glioma) is used for the experimental studies. The efficiency of the proposed AL-based segmentation model is demonstrated in terms of annotation time requirements compared with conventional Passive Learning (PL) based strategies. Quantitative and qualitative evaluations of the segmentation results also demonstrate that the proposed AL-based strategy achieves comparable or enhanced segmentation performance with far fewer annotations.
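The active-learning loop itself is simple. The sketch below is our illustration, with synthetic data and a scikit-learn classifier standing in for the deep segmentation model: least-confident sampling, where the oracle annotates only the samples the current model is most uncertain about.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    labeled = list(range(20))                      # small seed set
    pool = [i for i in range(len(X)) if i not in labeled]

    for it in range(5):
        clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
        proba = clf.predict_proba(X[pool])
        uncertainty = 1 - proba.max(axis=1)        # least-confident sampling
        query = np.argsort(uncertainty)[-50:]      # ask the oracle for 50 labels
        labeled += [pool[i] for i in query]
        qset = {int(i) for i in query}
        pool = [p for j, p in enumerate(pool) if j not in qset]
        print(f"iter {it}: {len(labeled)} labels, acc={clf.score(X, y):.3f}")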
Leveraging geodesic distances and the geometrical information they convey is key for many data-oriented applications in imaging. Geodesic distance computation has long been used for image segmentation with image-based metrics. We introduce a new method that uses a CNN to generate isotropic Riemannian metrics adapted to a problem, and illustrate it with an example application: segmenting brain tumours as unit balls of the geodesic distance computed with the metric potential output by the CNN, thus imposing geometrical and topological constraints on the output mask. We show that geodesic distance modules work well in machine learning frameworks and can be used to achieve state-of-the-art performance while ensuring geometrical and/or topological properties.
Probabilistic Tissue Mapping for Tumor Segmentation and Infiltration Detection of Glioma
Segmentation of glioma structures is vital for therapy planning. Although state-of-the-art algorithms achieve impressive results when compared to ground-truth manual delineations, one could argue that the binary nature of these labels does not properly reflect the underlying biology, nor does it account for uncertainties in the predicted segmentations. Moreover, the tumor infiltration beyond the contrast-enhanced lesion, visually imperceptible on imaging, is often ignored despite its potential role in tumor recurrence. We propose an intensity-based probabilistic model for brain tissue mapping based on conventional MRI sequences. We evaluated its value in the binary segmentation of the tumor and its subregions, and in the visualisation of possible infiltration. The model achieves a median Dice of 0.82 in the detection of the whole tumor, but suffers from confusion between different subregions. Preliminary results for the tumor probability maps encourage further investigation of the model regarding infiltration detection.
Robustifying Automatic Assessment of Brain Tumor Progression from MRI
Kotowski, Krzysztof
Machura, Bartosz
Nalepa, Jakub
2023Book Section, cited 0 times
Brain-Tumor-Progression
Accurate assessment of brain tumor progression from magnetic resonance imaging is a critical issue in clinical practice which allows us to precisely monitor the patient's response to a given treatment. Manual analysis of such imagery is, however, prone to human errors and lacks reproducibility. Therefore, designing automated end-to-end quantitative tumor response assessment is of pivotal clinical importance nowadays. In this work, we further investigate this issue and verify the robustness of bidimensional and volumetric tumor measurements calculated over the delineations obtained using the state-of-the-art tumor segmentation deep learning model which was ranked 6th in the BraTS21 Challenge. Our experimental study, performed over the Brain Tumor Progression dataset, showed that volumetric measurements are more robust against varying-quality tumor segmentation, and that improving brain extraction can notably impact the calculation of the tumor's characteristics.
Multi-modal Transformer for Brain Tumor Segmentation
Segmentation of brain tumors from multiple MRI modalities is necessary for successful disease diagnosis and clinical treatment. In recent years, Transformer-based networks with the self-attention mechanism have been proposed, but they have not shown performance beyond U-shaped fully convolutional networks. In this paper, we apply the HFTrans network to the brain tumor segmentation task of the BraTS 2022 challenge, focusing on the multiple modalities of MRI with the benefits of the Transformer. By applying BraTS-specific modifications in preprocessing, aggressive data augmentation, and postprocessing, our method shows superior results in comparison with previous best performers. The final result on the BraTS 2022 validation dataset achieves Dice scores of 82.94%, 85.48%, and 92.44% and Hausdorff distances of 14.55 mm, 12.96 mm, and 3.77 mm for enhancing tumor, tumor core, and whole tumor, respectively.
An Efficient Cascade of U-Net-Like Convolutional Neural Networks Devoted to Brain Tumor Segmentation
Bouchet, Philippe
Deloges, Jean-Baptiste
Canton-Bacara, Hugo
Pusel, Gaëtan
Pinot, Lucas
Elbaz, Othman
Boutry, Nicolas
2023Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
A glioma is a fast-growing and aggressive tumor that starts in the glial cells of the brain. Gliomas make up about 30% of all brain tumors and 80% of all malignant brain tumors. They are considered rare tumors, affecting fewer than 10,000 people each year, with a 5-year survival rate of 6%. If intercepted at an early stage, they pose no danger; however, providing an accurate diagnosis has proven to be difficult. In this paper, we propose a cascade approach using state-of-the-art Convolutional Neural Networks in order to maximize accuracy in tumor detection. Various U-Net-like networks have been implemented and tested in order to select the network best suited to this problem.
We propose a solution for the BraTS22 challenge that builds on top of our previous submission, the Optimized U-Net method. This year we focused on improving the model architecture and training schedule. The proposed method further improves scores on both our internal cross-validation and the challenge validation data. The validation mean Dice scores are: ET 0.8381, TC 0.8802, WT 0.9292; the mean Hausdorff95 distances: ET 14.460, TC 5.840, WT 3.594.
Diffraction Block in Extended nn-UNet for Brain Tumor Segmentation
Hou, Qingfan
Wang, Zhuofei
Wang, Jiao
Jiang, Jian
Peng, Yanjun
2023Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Automatic brain tumor segmentation based on 3D mpMRI is highly significant for brain diagnosis, monitoring, and treatment planning. Given the limitations of manual delineation, automatic and accurate segmentation based on a deep learning network is a tremendous practical necessity. The BraTS2022 challenge provides ample data to develop our network. In this work, we propose a diffraction block based on the Fraunhofer single-slit diffraction principle, which emphasizes the effect of associated features and suppresses isolated features. We add the diffraction block to nn-UNet, which took first place in the BraTS 2020 competition. We also improve nn-UNet by referring to the solution proposed by the 2021 winner, including using a larger network and replacing batch normalization with group normalization. On the final unseen test data, our method ranked first for the pediatric population data and third for the BraTS continuous evaluation data.
Infusing Domain Knowledge into nnU-Nets for Segmenting Brain Tumors in MRI
Kotowski, Krzysztof
Adamski, Szymon
Machura, Bartosz
Zarudzki, Lukasz
Nalepa, Jakub
2023Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Accurate and reproducible segmentation of brain tumors from multi-modal magnetic resonance (MR) scans is a pivotal step in clinical practice. In this BraTS Continuous Evaluation initiative, we exploit a 3D nnU-Net for this task which was ranked in 6th place (out of 1600 participants) in the BraTS'21 Challenge. We benefit from an ensemble of deep models enhanced with the expert knowledge of a senior radiologist, captured in the form of several post-processing routines. The experimental study showed that infusing domain knowledge into the deep models can enhance their performance: we obtained average Dice scores of 0.81977 (enhancing tumor), 0.87837 (tumor core), and 0.92723 (whole tumor) over the validation set. For the test data, we had average Dice scores of 0.86317, 0.87987, and 0.92838 for the enhancing tumor, tumor core, and whole tumor. Our approach was also validated over hold-out testing data which encompassed the BraTS 2021 Challenge test set, as well as new data from out-of-sample sources, including an independent pediatric population of diffuse intrinsic pontine glioma patients, together with an independent multi-institutional dataset covering an under-represented Sub-Saharan African adult patient population of brain diffuse glioma. Our technique was ranked 2nd and 3rd over the pediatric and Sub-Saharan African populations, respectively, proving its high generalization capabilities.
Multi-modal Brain Tumour Segmentation Using Transformer with Optimal Patch Size
Mojtahedi, Ramtin
Hamghalam, Mohammad
Simpson, Amber L.
2023Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Early diagnosis and grading of gliomas are crucial for determining therapy and the prognosis of brain cancer. For this purpose, magnetic resonance (MR) studies of brain tumours are widely used in the therapy process. Due to the overlap between the intensity distributions of healthy, enhancing, non-enhancing, and edematous areas, automated segmentation of tumours is a complicated task. Convolutional neural networks (CNNs) have been the dominant deep learning method for segmentation tasks. However, their limited kernels prevent them from capturing and learning long-range dependencies and global features. Vision transformers (ViTs) were introduced recently to tackle these limitations. Although ViTs are capable of capturing long-range features, their segmentation performance falls as the variety of tumour sizes increases. In this matter, the ViT's patch size plays a significant role in the learning process of a network, and finding an optimal patch size is a challenging and time-consuming task. In this paper, we propose a framework to find the optimal ViT patch size for the brain tumour segmentation task, particularly for segmenting smaller tumours. We validated our proposed framework on the BraTS'21 dataset. It improved the segmentation Dice performance by 0.97%, 1.14%, and 2.05% for enhancing tumour, tumour core, and whole tumour, respectively, in comparison with the default ViT (ViT-base). This research lays the groundwork for future work on semantic segmentation and detection of tumours using vision transformer-based networks for optimal outcomes. The implementation source code is available at: https://github.com/Ramtin-Mojtahedi/BRATS_OVTPS.
Brain Tumor Segmentation Using Neural Ordinary Differential Equations with UNet-Context Encoding Network
Sadique, M. S.
Rahman, M. M.
Farzana, W.
Temtam, A.
Iftekharuddin, K. M.
2023Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Glioblastoma Multiforme (GBM) is the most aggressive brain tumor type, and because of its heterogeneity in shape and appearance, its segmentation is a challenging task. Automated brain tumor segmentation using Magnetic Resonance Imaging (MRI) plays a key role in disease diagnosis, surgical planning, and brain tumor tracking. Medical image segmentation using deep learning-based U-Net architectures is the state of the art; despite their strong performance, these architectures require optimization for each segmentation task. Introducing continuous-depth learning with context encoding in deep CNN models for semantic segmentation enables 3D image analysis quantification in many applications. In this work, we propose Neural Ordinary Differential Equations (NODE) with 3D UNet-Context Encoding (UNCE), a continuous-depth deep learning network for improved brain tumor segmentation. We show that these NODEs can be implemented within the U-Net framework to improve segmentation performance. This year we participated in the Brain Tumor Segmentation (BraTS) continuous evaluation, and our model was trained using the same MRI image sets as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2021. Our model is evaluated on unseen hold-out data including i) the BraTS 2021 Challenge test data, ii) an SSA adult patient population of brain diffuse glioma (Africa-BraTS), and iii) an independent pediatric population of diffuse intrinsic pontine glioma (DIPG) patients. The mean DSCs for the BraTS test dataset are 0.797797 (ET), 0.825647 (TC), and 0.894891 (WT), respectively. On the Africa-BraTS dataset the performance of our model improves, indicating the generalizability of our model to new, out-of-sample adult brain tumor populations.
An UNet-Based Brain Tumor Segmentation Framework via Optimal Mass Transportation Pre-processing
This article builds a framework for brain tumor segmentation in MRI images using deep learning. For this purpose, we develop a novel 2-phase UNet-based framework that uses optimal mass transportation (OMT) to increase the proportion of the image occupied by brain tumors. Moreover, due to the scarcity of training data, we vary the density function through different parameters to increase data diversity. For post-processing, we propose an adaptive ensemble procedure that solves for the eigenvectors of the Dice similarity matrix and chooses the result with the highest aggregation probability as the predicted label. The Dice scores of the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions for online validation computed by SegResUNet were 0.9214, 0.8823, and 0.8411, respectively. Compared with random-crop pre-processing, OMT is far superior.
Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images
Sadafi, Ario
Adonkina, Oleksandra
Khakzar, Ashkan
Lienemann, Peter
Hehr, Rudolf Matthias
Rueckert, Daniel
Navab, Nassir
Marr, Carsten
2023Book Section, cited 0 times
AML-Cytomorphology_MLL_Helmholtz
Explainability is a key requirement for computer-aided diagnosis systems in clinical decision-making. Multiple instance learning with attention pooling provides instance-level explainability; however, for many clinical applications a deeper, pixel-level explanation is desirable, but has been missing so far. In this work, we investigate the use of four attribution methods to explain multiple instance learning models: GradCAM, Layer-Wise Relevance Propagation (LRP), Information Bottleneck Attribution (IBA), and InputIBA. With this collection of methods, we derive pixel-level explanations for the task of diagnosing blood cancer from patients' blood smears. We study two datasets of acute myeloid leukemia with over 100,000 single-cell images and observe how each attribution method performs on the multiple instance learning architecture, focusing on different properties of the white blood cells. Additionally, we compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard. Our study addresses the challenge of implementing pixel-level explainability in multiple instance learning models and provides insights for clinicians to better understand and trust decisions from computer-aided diagnosis systems.
Med-NCA: Robust and Lightweight Segmentation with Neural Cellular Automata
Kalkhof, John
González, Camila
Mukhopadhyay, Anirban
2023Book Section, cited 0 times
ISBI-MR-Prostate-2013
Access to proper infrastructure is critical when performing medical image segmentation with deep learning. This requirement makes it difficult to run state-of-the-art segmentation models in resource-constrained scenarios like primary care facilities in rural areas and during crises. The recently emerging field of Neural Cellular Automata (NCA) has shown that locally interacting one-cell models can achieve competitive results in tasks such as image generation or segmentation of low-resolution inputs. However, they are constrained by high VRAM requirements and the difficulty of reaching convergence on high-resolution images. To counteract these limitations we propose Med-NCA, an end-to-end NCA training pipeline for high-resolution image segmentation. Our method follows a two-step process: global knowledge is first communicated between cells across the downscaled image, and patch-based segmentation is then performed. Our proposed Med-NCA outperforms the classic UNet by 2% and 3% Dice for hippocampus and prostate segmentation, respectively, while also being 500 times smaller. We also show that Med-NCA is by design invariant with respect to image scale, shape, and translation, experiencing only slight performance degradation even under strong shifts, and is robust against MRI acquisition artefacts. Med-NCA enables high-resolution medical image segmentation even on a Raspberry Pi B+, arguably the smallest device able to run PyTorch and one that can be powered by a standard power bank.
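A neural cellular automaton update can be written compactly. The sketch below is our illustration of the generic NCA recipe that Med-NCA builds on (names are ours): each cell perceives its 3x3 neighbourhood, computes an update with a small shared network, and fires stochastically.

    import torch
    import torch.nn as nn

    class NCAStep(nn.Module):
        def __init__(self, channels=16, hidden=64, fire_rate=0.5):
            super().__init__()
            # Depthwise 3x3 conv: each cell perceives only its local neighbourhood.
            self.perceive = nn.Conv2d(channels, channels * 3, 3, padding=1,
                                      groups=channels, bias=False)
            self.update = nn.Sequential(
                nn.Conv2d(channels * 3, hidden, 1), nn.ReLU(),
                nn.Conv2d(hidden, channels, 1),
            )
            self.fire_rate = fire_rate

        def forward(self, state):
            dx = self.update(self.perceive(state))
            # Stochastic update: only a random subset of cells changes per step.
            mask = (torch.rand_like(state[:, :1]) < self.fire_rate).float()
            return state + mask * dx

    state = torch.randn(1, 16, 64, 64)
    step = NCAStep()
    for _ in range(10):          # iterate the same local rule; a segmentation is
        state = step(state)      # read from a designated output channel afterwards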
Self-supervised image denoising techniques emerged as convenient methods that allow training denoising models without requiring ground-truth noise-free data. Existing methods usually optimize loss metrics that are calculated from multiple noisy realizations of similar images, e.g., from neighboring tomographic slices. However, those approaches fail to utilize the multiple contrasts that are routinely acquired in medical imaging modalities like MRI or dual-energy CT. In this work, we propose the new self-supervised training scheme Noise2Contrast that combines information from multiple measured image contrasts to train a denoising model. We stack denoising with domain-transfer operators to utilize the independent noise realizations of different image contrasts to derive a self-supervised loss. The trained denoising operator achieves convincing quantitative and qualitative results, outperforming state-of-the-art self-supervised methods by 4.7–11.0%/4.8–7.3% (PSNR/SSIM) on brain MRI data and by 43.6–50.5%/57.1–77.1% (PSNR/SSIM) on dual-energy CT X-ray microscopy data with respect to the noisy baseline. Our experiments on different real measured data sets indicate that Noise2Contrast training generalizes to other multi-contrast imaging modalities.
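The Noise2Contrast training signal can be sketched as follows (our illustration; network shapes and names are hypothetical): the denoiser output for one contrast is pushed, through a domain-transfer head, to match the measured second contrast, whose independent noise averages out in the loss.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    denoiser = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(32, 1, 3, padding=1))
    transfer = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(32, 1, 3, padding=1))  # maps contrast A -> B
    opt = torch.optim.Adam(list(denoiser.parameters()) +
                           list(transfer.parameters()), lr=1e-4)

    noisy_a = torch.randn(4, 1, 64, 64)  # e.g. noisy T1 slice (mock data)
    noisy_b = torch.randn(4, 1, 64, 64)  # registered noisy T2 slice, independent noise
    loss = F.mse_loss(transfer(denoiser(noisy_a)), noisy_b)
    opt.zero_grad()
    loss.backward()
    opt.step()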
Performance of Deep CNN and Radiologists in Prostate Cancer Classification: A Comparative Pilot Study
Sobecki, Piotr
Jóźwiak, Rafał
Mykhalevych, Ihor
2023Book Section, cited 0 times
PROSTATEx
In recent years, multiple deep-learning solutions have emerged that aim to assist radiologists in prostate cancer (PCa) diagnosis. Most studies, however, do not compare the diagnostic accuracy of the developed models to that of radiology specialists, but simply report the model performance on reference datasets. This makes it hard to infer the potential benefits and applicability of the proposed methods in diagnostic workflows. In this paper, we investigate the effects of using pre-trained models in the differentiation of clinically significant PCa (csPCa) on mpMRI and report the results of a multi-reader, multi-case pilot study involving human experts. The study compares the performance of deep learning models with that of six radiologists varying in diagnostic experience. A subset of the ProstateX Challenge dataset containing 32 prostate lesions was used to evaluate the diagnostic accuracy of models and human raters using ROC analysis. Deep neural networks were found to achieve performance comparable to experienced readers in the diagnosis of csPCa. The results confirm the potential of deep neural networks to enhance the cognitive abilities of radiologists in PCa assessment.
On the Use of WebAssembly for Rendering and Segmenting Medical Images
Jodogne, Sébastien
2023Book Section, cited 0 times
LCTSC
Rendering medical images is a critical step in a variety of medical applications, from diagnosis to therapy. There is a growing need for advanced viewers that can display the fusion of multiple layers, such as contours, annotations, doses, or segmentation masks, on the top of image slices extracted from volumes. Such viewers obviously necessitate complex software components. But desktop viewers are often developed using technologies that are different from those used for Web viewers, which results in a lack of code reuse and shared expertise between development teams. Furthermore, the rise of artificial intelligence in radiology calls for Web viewers that integrate deep learning models and that can be used outside of a clinical environment, for instance to evaluate algorithms or to train skilled workers. In this paper, we show how the emerging WebAssembly standard can be used to tackle these challenges by sharing the same code base between heavyweight viewers and zero-footprint viewers. Moreover, we introduce a fully functional Web viewer that is entirely developed using WebAssembly and that can be used in research projects or in teleradiology applications. Finally, we demonstrate that deep convolutional neural networks for image segmentation can be executed entirely inside a Web browser thanks to WebAssembly, without any dedicated computing infrastructure. The source code associated with this paper is released as free and open-source software.
Brain Tumor Segmentation Based on Zernike Moments, Enhanced Ant Lion Optimization, and Convolutional Neural Network in MRI Images
Gliomas, which form in glial cells of the spinal cord and brain, are the most aggressive and common kinds of brain tumors (intra-axial brain tumors) due to their rapid progression and infiltrative nature. Recognizing tumor margins from healthy tissues is still an arduous and time-consuming task in the clinical routine. In this study, a robust and efficient machine learning-based pipeline is suggested for brain tumor segmentation. We employ four MRI modalities, namely Flair, T1, T2, and T1ce, to increase the final segmentation accuracy. Firstly, eight feature maps are extracted from each modality using the Zernike moments approach. The Zernike moments create a feature map using two parameters, n and m, so by changing these values we can generate different sets of edge feature maps. Then, eight edge feature maps for each modality are selected to produce a final feature map. Next, the four original images are encoded into four new images that represent more unique and key information using the Local Directional Number Pattern (LDNP). As different encoded images lead to different final results and accuracies, Enhanced Ant Lion Optimization (EALO) is employed to find the best possible set of feature maps for creating the best possible encoded image. Finally, a CNN model that accepts four input patches is utilized to explore significant details of the brain tissue more efficiently. Overall, the suggested framework outperforms the baseline methods regarding Dice score and recall.
Unsupervised Sparse-View Backprojection via Convolutional and Spatial Transformer Networks
Liu, Xueqing
Sajda, Paul
Brain Informatics2023Book Section, cited 0 times
QIN-LungCT-Seg
Convolutional Neural Network (CNN)
Unsupervised learning
Computed Tomography (CT)
Sparse-view CT
Algorithm Development
Imaging technologies heavily rely on tomographic reconstruction, which involves solving a multidimensional inverse problem given a limited number of projections. Building upon our prior research [14], we have ascertained that the integration of the predicted source space derived from electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) can be effectively approached as a backprojection problem involving sensor non-uniformity. Although backprojection is a commonly used algorithm for tomographic reconstruction, it often produces subpar image reconstructions when the projection angles are sparse or the sensor characteristics are non-uniform. To address this issue, various deep learning-based algorithms have been developed to solve the inverse problem and reconstruct images using a reduced number of projections. However, these algorithms typically require ground-truth examples, i.e., reconstructed images, to achieve satisfactory performance. In this paper, we present an unsupervised sparse-view backprojection algorithm that does not rely on ground-truth examples. Our algorithm comprises two modules within a generator-projector framework: a convolutional neural network and a spatial transformer network. We evaluate the effectiveness of our algorithm using computed tomography (CT) images of the human chest. The results demonstrate that our algorithm outperforms filtered backprojection significantly in scenarios with very sparse projection angles or varying sensor characteristics for different angles. Our proposed approach holds practical implications for medical imaging and other imaging modalities (e.g., radar) where sparse and/or non-uniform projections may arise due to time or sampling constraints.
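The degradation that motivates this work is easy to reproduce. The sketch below (our illustration, assuming a recent scikit-image; it shows only the baseline filtered backprojection, not the authors' generator-projector framework) compares reconstruction error with dense versus sparse projection angles.

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, resize

    img = resize(shepp_logan_phantom(), (128, 128))
    for n_angles in (180, 20):
        theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
        # Filtered backprojection degrades sharply as angles become sparse.
        recon = iradon(radon(img, theta=theta), theta=theta, filter_name="ramp")
        rmse = np.sqrt(np.mean((recon - img) ** 2))
        print(f"{n_angles} angles: RMSE = {rmse:.4f}")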
Interpretable Medical Image Classification Using Prototype Learning and Privileged Information
Gallée, Luisa
Beer, Meinrad
Götz, Michael
2023Book Section, cited 0 times
LIDC-IDRI
Deep Learning
Computer Aided Diagnosis (CADx)
Algorithm Development
LUNG
Interpretability is often an essential requirement in medical imaging. Advanced deep learning methods are required to address this need for explainability and high performance. In this work, we investigate whether additional information available during the training process can be used to create an understandable and powerful model. We propose an innovative solution called Proto-Caps that leverages the benefits of capsule networks, prototype learning and the use of privileged information. Evaluating the proposed solution on the LIDC-IDRI dataset shows that it combines increased interpretability with above state-of-the-art prediction performance. Compared to the explainable baseline model, our method achieves more than 6% higher accuracy in predicting both malignancy (93.0%) and mean characteristic features of lung nodules. Simultaneously, the model provides case-based reasoning with prototype representations that allow visual validation of radiologist-defined attributes.
Unpaired Cross-Modal Interaction Learning for COVID-19 Segmentation on Limited CT Images
Guan, Qingbiao
Xie, Yutong
Yang, Bing
Zhang, Jianpeng
Liao, Zhibin
Wu, Qi
Xia, Yong
2023Book Section, cited 0 times
COVID-19-AR
Segmentation
COVID-19
Automatic Segmentation
Algorithm Development
Computed Tomography (CT)
X-Rays
Accurate automated segmentation of infected regions in CT images is crucial for predicting COVID-19’s pathological stage and treatment response. Although deep learning has shown promise in medical image segmentation, the scarcity of pixel-level annotations due to their expense and time-consuming nature limits its application in COVID-19 segmentation. In this paper, we propose utilizing large-scale unpaired chest X-rays with classification labels as a means of compensating for the limited availability of densely annotated CT scans, aiming to learn robust representations for accurate COVID-19 segmentation. To achieve this, we design an Unpaired Cross-modal Interaction (UCI) learning framework. It comprises a multi-modal encoder, a knowledge condensation (KC) and knowledge-guided interaction (KI) module, and task-specific networks for final predictions. The encoder is built to capture optimal feature representations for both CT and X-ray images. To facilitate information interaction between unpaired cross-modal data, we propose the KC that introduces a momentum-updated prototype learning strategy to condense modality-specific knowledge. The condensed knowledge is fed into the KI module for interaction learning, enabling the UCI to capture critical features and relationships across modalities and enhance its representation ability for COVID-19 segmentation. The results on the public COVID-19 segmentation benchmark show that our UCI with the inclusion of chest X-rays can significantly improve segmentation performance, outperforming advanced segmentation approaches including nnUNet, CoTr, nnFormer, and Swin UNETR. Code is available at: https://github.com/GQBBBB/UCI.
A Sheaf Theoretic Perspective for Robust Prostate Segmentation
Santhirasekaram, Ainkaran
Pinto, Karen
Winkler, Mathias
Rockall, Andrea
Glocker, Ben
2023Book Section, cited 0 times
ISBI-MR-Prostate-2013
Deep Learning
PROSTATE
Segmentation
Radiomics
Deep learning based methods have become the most popular approach for prostate segmentation in MRI. However, domain variations due to the complex acquisition process result in textural differences as well as imaging artefacts which significantly affects the robustness of deep learning models for prostate segmentation across multiple sites. We tackle this problem by using multiple MRI sequences to learn a set of low dimensional shape components whose combinatorially large learnt composition is capable of accounting for the entire distribution of segmentation outputs. We draw on the language of cellular sheaf theory to model compositionality driven by local and global topological correctness. In our experiments, our method significantly improves the domain generalisability of anatomical and tumour segmentation of the prostate. Code is available at https://github.com/AinkaranSanthi/A-Sheaf-Theoretic-Perspective-for-Robust-Segmentation.git.
vox2vec: A Framework for Self-supervised Contrastive Learning of Voxel-Level Representations in Medical Images
Goncharov, Mikhail
Soboleva, Vera
Kurmukov, Anvar
Pisov, Maxim
Belyaev, Mikhail
2023Book Section, cited 0 times
MIDRC-RICORD-1A
NLST
This paper introduces vox2vec — a contrastive method for self-supervised learning (SSL) of voxel-level representations. vox2vec representations are modeled by a Feature Pyramid Network (FPN): a voxel representation is a concatenation of the corresponding feature vectors from different pyramid levels. The FPN is pre-trained to produce similar representations for the same voxel in different augmented contexts and distinctive representations for different voxels. This results in unified multi-scale representations that capture both global semantics (e.g., body part) and local semantics (e.g., different small organs or healthy versus tumor tissue). We use vox2vec to pre-train a FPN on more than 6500 publicly available computed tomography images. We evaluate the pre-trained representations by attaching simple heads on top of them and training the resulting models for 22 segmentation tasks. We show that vox2vec outperforms existing medical imaging SSL techniques in three evaluation setups: linear and non-linear probing and end-to-end fine-tuning. Moreover, a non-linear head trained on top of the frozen vox2vec representations achieves competitive performance with the FPN trained from scratch while having 50 times fewer trainable parameters. The code is available at https://github.com/mishgon/vox2vec.
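The voxel-level contrastive objective can be sketched directly (our illustration; in vox2vec the features are concatenated from FPN pyramid levels, here plain tensors stand in for them): the same voxel under two augmentations forms the positive pair, and all other voxels serve as negatives.

    import torch
    import torch.nn.functional as F

    def voxel_info_nce(f1, f2, temperature=0.1):
        # f1, f2: (n_voxels, dim); row i is the same physical voxel in two views
        f1, f2 = F.normalize(f1, dim=-1), F.normalize(f2, dim=-1)
        logits = f1 @ f2.t() / temperature   # similarities of all voxel pairs
        targets = torch.arange(len(f1))      # diagonal entries are the positives
        return F.cross_entropy(logits, targets)

    loss = voxel_info_nce(torch.randn(512, 128), torch.randn(512, 128))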
A One-Class Variational Autoencoder (OCVAE) Cascade for Classifying Atypical Bone Marrow Cell Sub-types
Atypical bone marrow (BM) cell-subtype characterization defines the diagnosis and follow-up of different hematologic disorders. However, this process is essentially a visual task, prone to inter- and intra-observer variability. The presented work introduces a new application of one-class variational autoencoders (OCVAE) for automatically classifying the 4 most common pathological atypical BM cell subtypes, namely myelocytes, blasts, promyelocytes, and erythroblasts, regardless of the disease they are associated with. The presented OCVAE-based representation is obtained by concatenating the bottlenecks of 4 separate OCVAEs, each specifically set to capture one cell-subtype pattern at a time. In addition, this strategy provides a complete validation scheme on a subset of an open-access image dataset, demonstrating low requirements in terms of the number of training images. Each particular OCVAE is trained to provide specific latent space parameters (64 means and 64 variances) for the corresponding atypical cell class. Afterwards, the concatenated representation space feeds different classifiers which discriminate the proposed classes. Evaluation is done using a subset (n=26,000) of a public single-cell BM image database, with two independent partitions: one for setting the VAEs to extract features (n=20,800), and one for training and testing a set of classifiers (n=5,200). Reported performance metrics show the concatenated-OCVAE characterization successfully differentiates the proposed atypical BM cell classes with accuracy = 0.938, precision = 0.935, recall = 0.935, and F1-score = 0.932, outperforming previously published strategies for the same task (handcrafted features, ResNext, ResNet-50, XCeption, CoAtNet), with a more thorough experimental validation included.
Segmentation of Kidney Tumors on Non-Contrast CT Images Using Protuberance Detection Network
Hatsutani, Taro
Ichinose, Akimichi
Nakamura, Keigo
Kitamura, Yoshiro
2023Book Section, cited 0 times
C4KC-KiTS
Segmentation
Algorithm Development
Shape analysis
Many renal cancers are found incidentally on non-contrast CT (NCCT) images. On contrast-enhanced CT (CECT) images, most kidney tumors, especially renal cancers, have different intensity values compared to normal tissues. However, on NCCT images, some tumors, called isodensity tumors, have intensity values similar to the surrounding normal tissues and can only be detected through a change in organ shape. Several deep learning methods that segment kidney tumors from CECT images have been proposed and have shown promising results, but these methods fail to capture such changes in organ shape on NCCT images. In this paper, we present a novel framework that can explicitly capture protruded regions in kidneys to enable better segmentation of kidney tumors. We created a synthetic mask dataset that simulates a protuberance and trained a segmentation network to separate the protruded regions from the normal kidney regions. To achieve the segmentation of whole tumors, our framework consists of three networks. The first network is a conventional semantic segmentation network which extracts a kidney region mask and an initial tumor region mask. The second network, which we name the protuberance detection network, identifies the protruded regions from the kidney region mask. Given the initial tumor region mask and the protruded region mask, the last network fuses them and accurately predicts the final kidney tumor mask. The proposed method was evaluated on the publicly available KiTS19 dataset, which contains 108 NCCT images, and achieved a higher Dice score of 0.615 (+0.097) and sensitivity of 0.721 (+0.103) compared to 3D-UNet. To the best of our knowledge, this is the first deep learning method specifically designed for kidney tumor segmentation on NCCT images.
Full Image-Index Remainder Based Single Low-Dose DR/CT Self-supervised Denoising
Long, Yifei
Pan, Jiayi
Xi, Yan
Zhang, Jianjia
Wu, Weiwen
2023Book Section, cited 0 times
LDCT-and-Projection-data
Low-dose digital radiography (DR) and computed tomography (CT) play a crucial role in minimizing health risks during clinical examinations and diagnoses. However, reducing the radiation dose often leads to lower signal-to-noise ratio measurements, resulting in degraded image quality. Existing supervised and self-supervised reconstruction techniques have been developed with noisy and clean image pairs or noisy and noisy image pairs, implying they cannot be adapted to single DR and CT image denoising. In this study, we introduce the Full Image-Index Remainder (FIRE) method. Our method begins by dividing the entire high-dimensional image space into multiple low-dimensional sub-image spaces using a full image-index remainder technique. By leveraging the data redundancy present within these sub-image spaces, we identify similar groups of noisy sub-images for training a self-supervised denoising network. Additionally, we establish a sub-space sampling theory specifically designed for self-supervised denoising networks. Finally, we propose a novel regularization optimization function that effectively reduces the disparity between self-supervised and supervised denoising networks, thereby enhancing denoising training. Through comprehensive quantitative and qualitative experiments conducted on both clinical low-dose CT and DR datasets, we demonstrate the remarkable effectiveness and advantages of our FIRE method compared to other state-of-the-art approaches.
Examining the Effects of Slice Thickness on the Reproducibility of CT Radiomics for Patients with Colorectal Liver Metastases
Peoples, Jacob J.
Hamghalam, Mohammad
James, Imani
Wasim, Maida
Gangai, Natalie
Kang, HyunSeon Christine
Rong, Xiujiang John
Chun, Yun Shin
Do, Richard K. G.
Simpson, Amber L.
2023Conference Paper, cited 0 times
Colorectal-Liver-Metastases
Radiomics
Imaging biomarker
Prospective Studies
Computed Tomography (CT)
We present an analysis of 81 patients with colorectal liver metastases from two major cancer centers, prospectively enrolled in an imaging trial to assess the reproducibility of radiomic features in contrast-enhanced CT. All scans were reconstructed with different slice thicknesses and levels of iterative reconstruction. Radiomic features were extracted from the liver parenchyma and the largest metastasis in each reconstruction, using different levels of resampling and methods of feature aggregation. The prognostic value of reproducible features was tested using Cox proportional hazards to model overall survival in an independent, public dataset of 197 hepatic resection patients with colorectal liver metastases. Our results show that larger differences in slice thickness reduced the concordance of features (p < 10^-6). Extracting features with 2.5D aggregation and no axial resampling produced the most robust features and the best test-set performance in the survival model on the independent dataset (C-index = 0.65). Across all feature extraction methods, restricting the survival models to reproducible features had no statistically significant effect on test-set performance (p = 0.98). In conclusion, our results show that feature extraction settings can positively impact the robustness of radiomics features to variations in slice thickness without negatively affecting prognostic performance.
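A reproducibility analysis of this kind typically screens features with a concordance measure. The sketch below (our illustration with mock data, not the study's code; the thresholds and slice thicknesses are assumptions) keeps only features whose Lin's concordance correlation coefficient across two reconstructions exceeds a cut-off.

    import numpy as np

    def ccc(x, y):
        """Lin's concordance correlation coefficient between paired measurements."""
        mx, my = x.mean(), y.mean()
        cov = ((x - mx) * (y - my)).mean()
        return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

    rng = np.random.default_rng(0)
    thin = rng.normal(size=(81, 100))                       # features, thin-slice series
    thick = thin + rng.normal(scale=0.3, size=thin.shape)   # same features, thick-slice
    keep = [j for j in range(thin.shape[1]) if ccc(thin[:, j], thick[:, j]) > 0.9]
    print(f"{len(keep)} of {thin.shape[1]} features deemed reproducible")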
COVID-19 Lesion Segmentation Framework for the Contrast-Enhanced CT in the Absence of Contrast-Enhanced CT Annotations
Medical imaging is a dynamic domain where new acquisition protocols are regularly developed and employed to meet changing clinical needs. Deep learning models for medical image segmentation have proven to be a valuable tool for medical image processing. Creating such a model from scratch requires a lot of effort in terms of annotating new types of data and model training, so the amount of annotated training data for a new imaging protocol might still be limited. In this work we propose a framework for segmentation of images acquired with a new imaging protocol (contrast-enhanced lung CT) that does not require annotating training data in the new target domain. Instead, the framework leverages previously developed models, data, and annotations in a related source domain. Using contrast-enhanced lung CT data as the target data, we demonstrate that unpaired image translation from the non-contrast-enhanced source data, combined with self-supervised pretraining, achieves a Dice score of 0.726 for the COVID-19 lesion segmentation task on the target data, without the need to annotate any target data for model training.
Ensemble Methods with [18F]FDG-PET/CT Radiomics in Breast Cancer Response Prediction
Pathological complete response (pCR) after neoadjuvant chemotherapy (NAC) in patients with breast cancer has been found to improve survival, and it has great prognostic value in the aggressive tumor subtype. This study aims to predict pCR before NAC treatment with a radiomic feature-based ensemble learning model using positron emission tomography/computed tomography (PET/CT) images taken from the online QIN-Breast dataset. It studies the problem of constructing an end-to-end classification pipeline that includes large-scale radiomic feature extraction, hybrid iterative feature selection, and heterogeneous weighted ensemble classification. The proposed hybrid feature selection procedure can identify significant radiomic predictors out of 2153 features extracted from delineated tumour regions. The proposed weighted ensemble approach aggregates the outcomes of four weak classifiers (decision tree, naive Bayes, k-nearest neighbour, and logistic regression) based on their importance. The empirical study demonstrates that the proposed feature selection-cum-ensemble classification method achieved 92% and 88.4% balanced accuracy on PET and CT, respectively. The PET/CT aggregated model performed better, achieving 98% balanced accuracy and a 94.74% F1-score. Furthermore, this study is the first classification work on the online QIN-Breast dataset.
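The weighted ensemble can be sketched with scikit-learn (our illustration; the importance weighting is simplified to training accuracy, and the data are synthetic):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=300, n_features=30, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

    models = [DecisionTreeClassifier(max_depth=3), GaussianNB(),
              KNeighborsClassifier(), LogisticRegression(max_iter=1000)]
    weights = []
    for m in models:
        m.fit(X_tr, y_tr)
        weights.append(m.score(X_tr, y_tr))   # importance ~ training accuracy

    # Weighted soft vote over the four weak learners.
    proba = sum(w * m.predict_proba(X_te)[:, 1] for w, m in zip(weights, models))
    pred = (proba / sum(weights) > 0.5).astype(int)
    print("ensemble accuracy:", (pred == y_te).mean())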
An Investigation into Race Bias in Random Forest Models Based on Breast DCE-MRI Derived Radiomics Features
Huti, Mohamed
Lee, Tiarna
Sawyer, Elinor
King, Andrew P.
2023Book Section, cited 0 times
Duke-Breast-Cancer-MRI
Bias
Artificial Intelligence
Radiomics
Random forest classifier
DCE-MRI
BREAST
Recent research has shown that artificial intelligence (AI) models can exhibit bias in performance when trained using data that are imbalanced by protected attribute(s). Most work to date has focused on deep learning models, but classical AI techniques that make use of hand-crafted features may also be susceptible to such bias. In this paper we investigate the potential for race bias in random forest (RF) models trained using radiomics features. Our application is prediction of tumour molecular subtype from dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) of breast cancer patients. Our results show that radiomics features derived from DCE-MRI data do contain race-identifiable information, and that RF models can be trained to predict White and Black race from these data with 60–70% accuracy, depending on the subset of features used. Furthermore, RF models trained to predict tumour molecular subtype using race-imbalanced data seem to produce biased behaviour, exhibiting better performance on test data from the race on which they were trained.
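The kind of bias probe described above can be sketched as follows with scikit-learn; the radiomics features, subtype labels and race attribute are randomly generated placeholders.

```python
# Sketch: training a random forest and checking per-group test accuracy,
# assuming a feature matrix X, labels y and a protected attribute per sample.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))           # placeholder radiomic features
y = rng.integers(0, 2, size=400)         # tumour molecular subtype (toy)
race = rng.choice(["White", "Black"], size=400)

X_tr, X_te, y_tr, y_te, r_tr, r_te = train_test_split(
    X, y, race, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
for group in ("White", "Black"):
    mask = r_te == group
    print(group, accuracy_score(y_te[mask], pred[mask]))  # compare per group
```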
A Study of Age and Sex Bias in Multiple Instance Learning Based Classification of Acute Myeloid Leukemia Subtypes
Sadafi, Ario
Hehr, Matthias
Navab, Nassir
Marr, Carsten
2023Book Section, cited 0 times
AML-Cytomorphology_MLL_Helmholtz
Accurate classification of Acute Myeloid Leukemia (AML) subtypes is crucial for clinical decision-making and patient care. In this study, we investigate the potential presence of age and sex bias in AML subtype classification using Multiple Instance Learning (MIL) architectures. To that end, we train multiple MIL models using different levels of sex imbalance in the training set and excluding certain age groups. To assess the sex bias, we evaluate the performance of the models on male and female test sets. For age bias, models are tested against underrepresented age groups in the training data. We find a significant effect of sex and age bias on the performance of the model for AML subtype classification. Specifically, we observe that females are more likely to be affected by a sex-imbalanced dataset, and certain age groups, such as patients 72 to 86 years of age with the RUNX1::RUNX1T1 genetic subtype, are significantly affected by an age bias present in the training data. Ensuring inclusivity in the training data is thus essential for generating reliable and equitable outcomes in AML genetic subtype classification, ultimately benefiting diverse patient populations.
Optimal Cut-Off Points for Pancreatic Cancer Detection Using Deep Learning Techniques
Dzemyda, Gintautas
Kurasova, Olga
Medvedev, Viktor
Šubonienė, Aušra
Gulla, Aistė
Samuilis, Artūras
Jagminas, Džiugas
Strupas, Kȩstutis
2024Book Section, cited 0 times
Pancreas-CT
Machine Learning
Deep learning-based approaches are attracting increasing attention in medicine. Applying deep learning models to specific tasks in the medical field is very useful for early disease detection. In this study, the problem of detecting pancreatic cancer by classifying CT images was addressed using the proposed deep learning-based framework. The choice of the optimal cut-off point is particularly important for an effective assessment of the classification results. In order to investigate the capabilities of the deep learning-based framework and to maximise pancreatic cancer diagnostic performance through the selection of optimal cut-off points, experimental studies were carried out using open-access data. Four classification accuracy metrics (Youden index, closest-to-(0,1) criterion, balanced accuracy, g-mean) were used to find the optimal cut-off point in order to balance sensitivity and specificity. This study compares different approaches for finding the optimal cut-off points and selects those that are most clinically relevant.
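Three of the four cut-off criteria named above can be computed directly from an ROC curve; a minimal sketch with scikit-learn, using toy labels and scores:

```python
# Sketch: optimal cut-off selection via Youden index, closest-to-(0,1) and
# g-mean, computed from a single ROC curve. Data are toy values.
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])
scores = np.array([.1, .4, .35, .8, .2, .9, .7, .3, .6, .5])

fpr, tpr, thresholds = roc_curve(y_true, scores)

youden = thresholds[np.argmax(tpr - fpr)]                 # max sensitivity+specificity-1
closest = thresholds[np.argmin(np.hypot(fpr, 1 - tpr))]   # closest to the (0,1) corner
gmean = thresholds[np.argmax(np.sqrt(tpr * (1 - fpr)))]   # max geometric mean
print(youden, closest, gmean)
```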
Hierarchical Compositionality in Hyperbolic Space for Robust Medical Image Segmentation
Santhirasekaram, Ainkaran
Winkler, Mathias
Rockall, Andrea
Glocker, Ben
2024Conference Paper, cited 0 times
ISBI-MR-Prostate-2013
NCI-ISBI 2013 Challenge: Automated Segmentation of Prostate Structures
Deep Learning
ABDOMEN
PROSTATE
Segmentation
Deep learning based medical image segmentation models need to be robust to domain shifts and image distortion for the safe translation of these models into clinical practice. The most popular methods for improving robustness are centred around data augmentation and adversarial training. Many image segmentation tasks exhibit regular structures with only limited variability. We aim to exploit this notion by learning a set of base components in the latent space whose composition can account for the entire structural variability of a specific segmentation task. We enforce a hierarchical prior in the composition of the base components and consider the natural geometry in which to build our hierarchy. Specifically, we embed the base components on a hyperbolic manifold which we claim leads to a more natural composition. We demonstrate that our method improves model robustness under various perturbations and in the task of single domain generalisation.
C3Fusion: Consistent Contrastive Colon Fusion, Towards Deep SLAM in Colonoscopy
Posner, Erez
Zholkover, Adi
Frank, Netanel
Bouhnik, Moshe
2023Book Section, cited 0 times
CT COLONOGRAPHY
3D colon reconstruction from Optical Colonoscopy (OC) to detect non-examined surfaces remains an unsolved problem. The challenges arise from the nature of optical colonoscopy data, characterized by highly reflective low-texture surfaces, drastic illumination changes and frequent tracking loss. Recent methods demonstrate compelling results, but suffer from: (1) fragile frame-to-frame (or frame-to-model) pose estimation resulting in many tracking failures; or (2) reliance on point-based representations at the cost of scan quality. In this paper, we propose a novel reconstruction framework that addresses these issues end to end, resulting in a quantitatively and qualitatively accurate and robust 3D colon reconstruction. Our SLAM approach, which employs correspondences based on contrastive deep features and deep consistent depth maps, estimates globally optimized poses, is able to recover from frequent tracking failures, and estimates a globally consistent 3D model, all within a single framework. We perform an extensive experimental evaluation on multiple synthetic and real colonoscopy videos, showing high-quality results and comparisons against relevant baselines.
Ensembling Voxel-Based and Box-Based Model Predictions for Robust Lesion Detection
This paper presents a novel generic method to improve lesion detection by ensembling semantic segmentation and object detection models. The proposed approach benefits from both voxel-based and box-based predictions, thus improving the ability to accurately detect lesions. The method consists of 3 main steps: (i) semantic segmentation and object detection models are trained separately; (ii) voxel-based and box-based predictions are matched spatially; (iii) corresponding lesion presence probabilities are combined into summary detection maps. We illustrate and validate the robustness of the proposed approach on three different oncology applications: liver and pancreas neoplasm detection in single-phase CT, and significant prostate cancer detection in multi-modal MRI. Performance is evaluated on publicly-available databases and compared to two state-of-the-art baseline methods. The proposed ensembling approach improves the average precision metric in all considered applications, with an 8% gain for prostate cancer.
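A minimal sketch of steps (ii)-(iii): matching connected components of a voxel-wise probability map against detection boxes and averaging the two lesion-presence probabilities. The shapes, thresholds and the averaging rule here are assumptions for illustration, not the paper's exact combination scheme.

```python
# Sketch: spatially matching voxel-wise and box-based predictions, then
# combining lesion-presence probabilities. All values are toy placeholders.
import numpy as np
from scipy import ndimage

seg_prob = np.random.rand(64, 64, 64)          # voxel-wise lesion probability
boxes = [((10, 10, 10, 20, 20, 20), 0.9)]      # (z0,y0,x0,z1,y1,x1), box score

# Connected components of the thresholded segmentation = candidate lesions.
labels, n = ndimage.label(seg_prob > 0.5)

detections = []
for comp in range(1, n + 1):
    mask = labels == comp
    p_voxel = seg_prob[mask].max()             # lesion probability from voxels
    p_box = 0.0
    zs, ys, xs = np.where(mask)
    for (z0, y0, x0, z1, y1, x1), score in boxes:
        inside = ((zs >= z0) & (zs < z1) & (ys >= y0) & (ys < y1)
                  & (xs >= x0) & (xs < x1))
        if inside.any():                       # box overlaps this component
            p_box = max(p_box, score)
    detections.append((comp, 0.5 * (p_voxel + p_box)))  # combined probability
print(detections)
```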
Enhancing Clinical Support for Breast Cancer with Deep Learning Models Using Synthetic Correlated Diffusion Imaging
Tai, Chi-en Amy
Gunraj, Hayden
Hodzic, Nedim
Flanagan, Nic
Sabri, Ali
Wong, Alexander
2024Book Section, cited 0 times
ACRIN 6698
I-SPY2 Breast DWI
MICCAI
BREAST
Breast cancer
Deep Learning
Radiomic features
Magnetic Resonance Imaging (MRI)
Synthetic images
Algorithm Development
Breast cancer is the second most common type of cancer in women in Canada and the United States, representing over 25% of all new female cancer cases. As such, there has been immense research and progress on improving screening and clinical support for breast cancer. In this paper, we investigate enhancing clinical support for breast cancer with deep learning models using a newly introduced magnetic resonance imaging (MRI) modality called synthetic correlated diffusion imaging (CDIs). More specifically, we leverage a volumetric convolutional neural network to learn volumetric deep radiomic features from a pre-treatment cohort and construct a predictor based on the learnt features for grade and post-treatment response prediction. As the first study to learn CDIs-centric radiomic sequences within a deep learning perspective for clinical decision support, we evaluated the proposed approach using the ACRIN-6698 study against those learnt using gold-standard imaging modalities. We find that the proposed approach can achieve better performance for both grade and post-treatment response prediction and thus may be a useful tool to aid oncologists in improving recommendation of treatment of patients. Subsequently, the approach to leverage volumetric deep radiomic features for breast cancer can be further extended to other applications of CDIs in the cancer domain to further improve clinical support.
A Modern Approach to Osteosarcoma Tumor Identification Through Integration of FP-Growth, Transfer Learning and Stacking Model
The early detection of cancer through radiographs is crucial for identifying indicative signs of its presence or status. However, the analysis of histological images of osteosarcoma faces significant challenges due to discrepancies among pathologists, intra-class variations, inter-class similarities, complex contexts, and data noise. In this article, we present a novel deep learning method that helps address these issues. The architecture of our model consists of the following phases: 1) Dataset construction: advanced image processing techniques such as dimensionality reduction, identification of frequent patterns through unsupervised learning (FP-Growth), and data augmentation are applied in this phase. 2) Stacking model: we apply a stacking model that combines the strengths of two models: convolutional neural networks (CNN) with transfer learning, allowing us to leverage pre-trained knowledge from related datasets, and a Random Forest (RF) model to enhance the classification and diagnosis of osteosarcoma images. The models were trained on a dataset of publicly available images from The Cancer Imaging Archive (TCIA) [12]. The accuracy of our models is evaluated using classification metrics such as Accuracy, F1 Score, Precision, and Recall. This work provides a solid foundation for ongoing innovation in histology and the potential to apply and adapt this approach to broader clinical challenges in the future.
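The stacking idea in phase 2 can be sketched with scikit-learn; since a full CNN is out of scope for a short example, the transfer-learning branch is replaced here by a logistic regression over (assumed) precomputed deep features.

```python
# Sketch: stacking a feature-based learner with a random forest, standing in
# for the paper's CNN-with-transfer-learning + RF combination.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

# Placeholder for precomputed deep features extracted from histology images.
X, y = make_classification(n_samples=300, n_features=50, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("deep_features", LogisticRegression(max_iter=1000)),  # CNN stand-in
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
    cv=5,
)
stack.fit(X, y)
print(stack.score(X, y))
```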
Data Augmentation Based on DiscrimDiff for Histopathology Image Classification
Guan, Xianchao
Wang, Yifeng
Lin, Yiyang
Zhang, Yongbing
2024Book Section, cited 0 times
Osteosarcoma-Tumor-Assessment
data augmentation
Histopathology
Histopathological analysis is the present gold standard for cancer diagnosis. Accurate classification of histopathology images has great clinical significance and application value for assisting pathologists in diagnosis. However, the performance of histopathology image classification is greatly affected by data imbalance. To address this problem, we propose a novel data augmentation framework based on the diffusion model, DiscrimDiff, which expands the dataset by synthesizing images of rare classes. To compensate for the lack of discrimination ability of the diffusion model for synthesized images, we design a post-discrimination mechanism to provide image quality assurance for data augmentation. Our method significantly improves classification performance on multiple datasets. Furthermore, histomorphological features of different classes concerned by the diffusion model may provide guiding significance for pathologists in clinical diagnosis. Therefore, we visualize histomorphological features related to classification, which can be used to assist pathologist-in-training education and improve the understanding of histomorphology.
Risk Factors of Recurrence and Metastasis of Breast Cancer Sub-types Based on Magnetic Resonance Imaging Techniques
This study presents an analysis of breast cancer risk factors, focusing on metastasis and recurrence. Using MRI, we identified variables that could differentiate cancer sub-types, detect recurring cancers, and identify metastasizing cancers. Contrary to some studies, we found no higher incidence of metastasis or recurrence for the Triple Negative sub-type. However, the HER2 type showed a higher likelihood of metastasis. In addition, we used 529 features obtained from DCE-MRI images and identified 21 MRI-derived variables as sub-type indicators, 9 as recurrence indicators, and 10 as metastasis indicators. Our findings aim to refine the target and highlight important information, assisting those who use MRI to diagnose breast cancer.
Ensemble Deep Learning Models for Segmentation of Prostate Zonal Anatomy and Pathologically Suspicious Areas
Breast cancer is one of the most common types of cancer in women, and its early detection significantly improves the survival rate. Although mammography is one of the least invasive and most widely used methods in the diagnostic process, its complexity and subjectivity in medical interpretation present significant challenges. In this article, we propose a new approach that supports the breast cancer diagnosis process by assisting in the classification of mammography images as malignant or benign, or through the BI-RADS system. Our proposal consists of two phases. Initially, we implemented the FP-Growth algorithm on patients’ clinical data, analyzing variables such as age and sex to identify frequent patterns. This allows us to explore, group, and visually characterize shared findings and trends among clinical data, which is useful for doctors when creating risk groups or establishing a pre-diagnosis based on the patient’s profile. In this phase, we also prepared the images for training the different models. Subsequently, we combined the strengths of two models through stacking: the Random Forest (RF) model and Convolutional Neural Networks (CNN) with knowledge transfer, to improve image classification and diagnosis. We also explored other methods, such as CNN and Support Vector Machine (SVM), to compare the accuracy of the proposed methodology against conventional techniques. The developed models were trained using public datasets: “The Chinese Mammography Database” [2] and “The INbreast database” [3]. The accuracy of the method is evaluated using various classification metrics, such as Accuracy, Precision, F1 Score, and Recall. The results show that combining base models using a stacking strategy achieves significantly superior performance compared to individual models, with ideal scores in accuracy, recall, and F1 score using k-fold cross-validation in the meta-model. These results suggest that combining multiple base models more effectively captures the underlying complexities and patterns in the data.
A Fast Domain-Inspired Unsupervised Method to Compute COVID-19 Severity Scores from Lung CT
Dey, Samiran
Kundu, Bijon
Basuchowdhuri, Partha
Saha, Sanjoy Kumar
Chakraborti, Tapabrata
Pattern Recognition2025Book Section, cited 0 times
Website
MIDRC-RICORD-1A
There has been a deluge of data-driven deep learning approaches to detect COVID-19 from computed tomography (CT) images over the pandemic, most of which use ad-hoc deep learning black boxes with little to no relevance to the actual process clinicians use, and which hence have not seen translation to real-life practical settings. Radiologists use a clinically established process of estimating the percentage of the affected area of the lung to grade the severity of infection on a 0-25 scale from lung CT scans. Hence, any computer-automated process that aspires to be adopted in the clinic to alleviate the workload of radiologists, while being trustworthy and safe, needs to follow this clearly defined clinical process religiously. Keeping this in mind, we propose a simple yet effective methodology that uses explainable mechanistic modelling based on classical image processing and pattern recognition techniques. The proposed pipeline has no learning element and hence is fast. It mimics the clinical process and hence is transparent. We collaborate with an experienced radiologist to enhance an existing benchmark COVID-19 lung CT dataset by adding grading labels, which is another contribution of this paper, along with the methodology, which has a high potential of becoming a clinical decision support system (CDSS) due to its rapid and explainable nature. The radiologist gradings and the code are available at https://github.com/Samiran-Dey/explainable_seg.
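A minimal sketch of the clinical 0-25 severity grading the pipeline mimics, assuming binary lobe and lesion masks; the per-lobe percentage bands follow the commonly used 0-5 lobar scheme and are an assumption here, since the paper's exact band edges are not given in the abstract.

```python
# Sketch: 0-25 CT severity score = sum over five lobes of a 0-5 lobar score
# based on percentage of affected volume. Band edges are assumed:
# 0: 0%, 1: <5%, 2: 5-25%, 3: 26-49%, 4: 50-75%, 5: >75%.
import numpy as np

def severity_score(lobe_masks, lesion_mask):
    score = 0
    for lobe in lobe_masks:  # five lung lobes, binary masks
        pct = 100.0 * np.logical_and(lesion_mask, lobe).sum() / max(lobe.sum(), 1)
        score += sum(pct > b for b in (0, 5, 25, 50, 75))  # 0-5 per lobe
    return int(score)  # 0-25 overall

rng = np.random.default_rng(0)
lobes = [rng.random((32, 32, 32)) > 0.8 for _ in range(5)]  # toy lobe masks
lesion = rng.random((32, 32, 32)) > 0.95                    # toy lesion mask
print(severity_score(lobes, lesion))
```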
Radiological Atlas for Patient Specific Model Generation
The paper presents the development of a radiological atlas employed in the verification of an abdomen patient-specific model. After an introduction to patient-specific models, the development of the radiological atlas is discussed. An unprocessed database containing DICOM images and radiological diagnoses is presented. This database is processed manually to retrieve the required information: organs and pathologies are determined, and each study is tagged with specific labels, e.g. ‘liver normal’, ‘liver tumor’, ‘liver cancer’, ‘spleen normal’, ‘spleen absence’, etc. Selected structures are additionally segmented, and the masks are stored as a gold standard. A web-service-based network system is provided to permit PACS-driven retrieval of image data matching desired criteria. Image series as well as ground-truth images may be retrieved for benchmarking or model-development purposes. The database is evaluated.
A new 2.5 D representation for lymph node detection using random sets of deep convolutional neural network observations
Roth, Holger R
Lu, Le
Seff, Ari
Cherry, Kevin M
Hoffman, Joanne
Wang, Shijun
Liu, Jiamin
Turkbey, Evrim
Summers, Ronald M
2014Conference Proceedings, cited 192 times
Website
CT Lymph Nodes
Computer Aided Detection (CADe)
*Algorithms
Computer Simulation
Data Interpretation, Statistical
Humans
Imaging, Three-Dimensional/*methods
Lymph Nodes/*diagnostic imaging
Lymphatic Diseases/*diagnostic imaging
*Models, Statistical
Neural Networks, Computer
Pattern Recognition, Automated/*methods
Radiographic Image Enhancement/methods
Radiographic Image Interpretation, Computer-Assisted/*methods
Reproducibility of Results
Sensitivity and Specificity
Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show a performance range of 52.9% sensitivity at 3.1 false positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this paper, we first operate a preliminary candidate generation stage, towards ~100% sensitivity at the cost of high FP levels (~40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views, which can simply be averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.
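The 2.5D decomposition can be sketched as follows; the classifier is a placeholder for the trained CNN, and the augmentation ranges are illustrative assumptions.

```python
# Sketch: sample N randomly transformed orthogonal views through a VOI centroid,
# classify each view, and average the probabilities (the 2.5D idea).
import numpy as np
from scipy import ndimage

def random_views(voi, n_views=10, rng=np.random.default_rng(0)):
    cz, cy, cx = (s // 2 for s in voi.shape)
    planes = [voi[cz, :, :], voi[:, cy, :], voi[:, :, cx]]  # axial/coronal/sagittal
    views = []
    for _ in range(n_views):
        img = planes[rng.integers(3)]
        img = ndimage.rotate(img, rng.uniform(0, 360), reshape=False)  # rotation
        img = ndimage.shift(img, rng.uniform(-2, 2, size=2))           # translation
        img = ndimage.zoom(img, rng.uniform(0.9, 1.1))                 # scale
        views.append(img)
    return views

def classify(view):
    return float(view.mean() > 0.5)  # stand-in for the trained CNN

voi = np.random.rand(32, 32, 32)
p_ln = np.mean([classify(v) for v in random_views(voi)])  # averaged probability
print(p_ln)
```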
2d view aggregation for lymph node detection using a shallow hierarchy of linear classifiers
Enlarged lymph nodes (LNs) can provide important information for cancer diagnosis, staging, and measuring treatment reactions, making automated detection a highly sought goal. In this paper, we propose a new algorithmic representation that decomposes the LN detection problem into a set of 2D object detection subtasks on sampled CT slices, largely alleviating the curse of dimensionality. Our 2D detection can be effectively formulated as linear classification on a single image feature type, the Histogram of Oriented Gradients (HOG), covering a moderate field-of-view of 45 by 45 voxels. We exploit both simple pooling and sparse linear fusion schemes to aggregate these 2D detection scores for the final 3D LN detection. In this manner, detection is more tractable and does not need to perform perfectly at instance level (as weak hypotheses), since our aggregation process robustly harnesses collective information for LN detection. Two datasets (90 patients with 389 mediastinal LNs and 86 patients with 595 abdominal LNs) are used for validation. Cross-validation demonstrates 78.0% sensitivity at 6 false positives/volume (FP/vol.) (86.1% at 10 FP/vol.) and 73.1% sensitivity at 6 FP/vol. (87.2% at 10 FP/vol.), for the mediastinal and abdominal datasets respectively. Our results compare favorably to previous state-of-the-art methods.
Combining Generative Models for Multifocal Glioma Segmentation and Registration
In this paper, we propose a new method for simultaneously segmenting brain scans of glioma patients and registering these scans to a normal atlas. Performing joint segmentation and registration for brain tumors is very challenging when tumors include multifocal masses and have complex shapes with heterogeneous textures. Our approach grows tumors for each mass from multiple seed points using a tumor growth model and modifies a normal atlas into one with tumors and edema using the combined results of grown tumors. We also generate a tumor shape prior via the random walk with restart, utilizing multiple tumor seeds as initial foreground information. We then incorporate this shape prior into an EM framework which estimates the mapping between the modified atlas and the scans, posteriors for each tissue labels, and the tumor growth model parameters. We apply our method to the BRATS 2013 leaderboard dataset to evaluate segmentation performance. Our method shows the best performance among all participants.
Automated Medical Image Modality Recognition by Fusion of Visual and Text Information
Computer Aided Diagnosis (CAD) is a technique where diagnosis is performed in an automatic way. This work develops a CAD system for automatically classifying a given brain Magnetic Resonance Imaging (MRI) image as ‘tumor affected’ or ‘tumor not affected’. The input image is preprocessed using a Wiener filter and Contrast Limited Adaptive Histogram Equalization (CLAHE). The image is then quantized and aggregated to obtain reduced image data. The reduced image is then segmented into four regions (gray matter, white matter, cerebrospinal fluid and a high-intensity tumor cluster) using the Fuzzy C-Means (FCM) algorithm. The tumor region is then extracted using an intensity metric, and a contour is evolved over the identified tumor region using an Active Contour Model (ACM) to extract the exact tumor segment. Thirty-five features, including Gray Level Co-occurrence Matrix (GLCM) features, Gray Level Run Length Matrix (GLRL) features, statistical features and shape-based features, are extracted from the tumor region. Neural network and Support Vector Machine (SVM) classifiers are trained using these features. Results indicate that the SVM classifier with a quadratic kernel function performs better than with a Radial Basis Function (RBF) kernel, and that the neural network classifier with fifty hidden nodes performs better than with twenty-five hidden nodes. It is also evident from the results that the average running time of FCM is lower when used on the reduced image data.
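A few of the GLCM texture features mentioned above can be computed with scikit-image; the patch below is a random placeholder for the extracted tumor region.

```python
# Sketch: GLCM texture features over an (assumed) tumor-region patch,
# using scikit-image's gray-level co-occurrence matrix utilities.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)  # placeholder ROI

glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```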
eFis: A Fuzzy Inference Method for Predicting Malignancy of Small Pulmonary Nodules
Predicting the malignancy of small pulmonary nodules from computed tomography scans is a difficult and important problem in diagnosing lung cancer. This paper presents a rule-based fuzzy inference method for predicting the malignancy rating of small pulmonary nodules. We use the nodule characteristics provided by the Lung Image Database Consortium dataset to determine the malignancy rating. The proposed fuzzy inference method uses the outputs of ensemble classifiers and rules derived from radiologist agreement on the nodules. The results are evaluated in terms of classification accuracy and compared with single-classifier methods. We observe that the preliminary results are very promising and the system is open to further development.
Automatic Design of Window Operators for the Segmentation of the Prostate Gland in Magnetic Resonance Images
Breast cancer today is the leading cause of death amongst cancer patients afflicting women around the world. Breast cancer is the most common cancer in women worldwide. It is also the principal cause of death from cancer among women globally. Early detection of this disease can greatly enhance the chances of long-term survival of breast cancer victims. Classification of cancer data helps widely in the detection of the disease, and it can be achieved using many techniques, such as the Perceptron, an Artificial Neural Network (ANN) classification technique. In this paper, we propose a new hybrid algorithm that combines the perceptron algorithm and a feature extraction algorithm, after applying the Scale Invariant Feature Transform (SIFT) algorithm, in order to classify magnetic resonance imaging (MRI) breast cancer images. The proposed algorithm, called the breast MRI cancer classifier (BMRICC), has been tested on 281 MRI breast images (138 abnormal and 143 normal). The numerical results of the general performance of the BMRICC algorithm, and comparison results between it and five other benchmark classifiers, show that the BMRICC algorithm is promising and its performance is better than that of the other algorithms.
Comparison of Automatic Seed Generation Methods for Breast Tumor Detection Using Region Growing Technique
Performance analysis of several state-of-the-art prediction approaches is performed for lossless image compression. To support this analysis, special models of edges are presented: bound-oriented and gradient-oriented approaches. Several heuristic assumptions are proposed for the considered intra- and inter-component predictors using the determined edge models. Numerical evaluation using image test sets with various statistical features confirms the heuristic assumptions obtained.
Optimization Methods for Medical Image Super Resolution Reconstruction
Super-resolution (SR) concentrates on constructing a high-resolution (HR) image of a scene from two or more sets of low-resolution (LR) images of the same scene; it is the process of combining a sequence of noisy, blurred LR images to produce a higher-resolution image. The reconstruction of high-resolution images is computationally expensive. SR is an inverse problem that is well known to be ill-conditioned, and it has been reformulated using optimization techniques to define a solution that is a close approximation of the true scene and less sensitive to errors in the observed images. This paper reviews optimized SR reconstruction approaches and highlights their challenges and limitations. An experiment compares bicubic interpolation, iterative back-projection (IBP), projection onto convex sets (POCS), total variation (TV), and gradient descent via sparse representation. The experimental results show that gradient descent via sparse representation outperforms the other optimization techniques.
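Of the compared methods, iterative back-projection is simple enough to sketch in a few lines; the Gaussian blur model, step size and iteration count below are assumptions for illustration, not the settings used in the experiment.

```python
# Sketch: iterative back-projection (IBP) for super-resolution. The HR estimate
# is repeatedly corrected by the back-projected residual in LR space.
import numpy as np
from scipy import ndimage

def ibp(lr, factor=2, iters=20, step=1.0):
    hr = ndimage.zoom(lr, factor, order=3)                 # initial HR estimate
    for _ in range(iters):
        # Simulate the LR observation from the current HR estimate (blur + downsample).
        simulated = ndimage.zoom(ndimage.gaussian_filter(hr, 1.0),
                                 1.0 / factor, order=3)
        simulated = simulated[:lr.shape[0], :lr.shape[1]]  # guard rounding
        error = lr - simulated                             # residual in LR space
        # Back-project the residual into HR space and update.
        up = ndimage.zoom(error, factor, order=3)[:hr.shape[0], :hr.shape[1]]
        hr += step * up
    return hr

lr = np.random.rand(32, 32)  # placeholder LR image
print(ibp(lr).shape)
```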
GLISTRboost: Combining Multimodal MRI Segmentation, Registration, and Biophysical Tumor Growth Modeling with Gradient Boosting Machines for Glioma Segmentation.
Bakas, S.
Zeng, K.
Sotiras, A.
Rathore, S.
Akbari, H.
Gaonkar, B.
Rozycki, M.
Pati, S.
Davatzikos, C.
Brainlesion2016Journal Article, cited 49 times
Website
Algorithm Development
Challenge
Segmentation
BRAIN
BraTS
Lower-grade glioma (LGG)
Glioblastoma Multiforme (GBM)
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.
Personalized Medicine, Biomarkers of Risk and Breast MRI
Spatial Aggregation of Holistically-Nested Networks for Automated Pancreas Segmentation
Roth, Holger R.
Lu, Le
Farag, Amal
Sohn, Andrew
Summers, Ronald M.
2016Book Section, cited 0 times
Pancreas-CT
Segmentation
Accurate automatic organ segmentation is an important yet challenging problem for medical image analysis. The pancreas is an abdominal organ with very high anatomical variability. This inhibits traditional segmentation methods from achieving high accuracies, especially compared to other organs such as the liver, heart or kidneys. In this paper, we present a holistic learning approach that integrates semantic mid-level cues of deeply-learned organ interior and boundary maps via robust spatial aggregation using random forest. Our method generates boundary preserving pixel-wise class labels for pancreas segmentation. Quantitative evaluation is performed on CT scans of 82 patients in 4-fold cross-validation. We achieve a (mean ± std. dev.) Dice Similarity Coefficient of 78.01% ± 8.2% in testing, which significantly outperforms the previous state-of-the-art approach of 71.8% ± 10.7% under the same evaluation criterion.
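For reference, the Dice Similarity Coefficient used in this evaluation can be computed as follows for binary masks; the masks here are toy examples.

```python
# Sketch: Dice Similarity Coefficient (DSC) between two binary masks.
import numpy as np

def dice(a, b, eps=1e-8):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

gt = np.zeros((64, 64), bool); gt[20:40, 20:40] = True      # toy ground truth
pred = np.zeros((64, 64), bool); pred[25:45, 22:42] = True  # toy prediction
print(f"DSC = {dice(gt, pred):.3f}")
```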
Tumor Lesion Segmentation from 3D PET Using a Machine Learning Driven Active Surface
Segmentation of gliomas in pre-operative and post-operative multimodal magnetic resonance imaging volumes based on a hybrid generative-discriminative framework
Fixation devices are used in radiotherapy treatment of head and neck cancers to ensure successive treatment fractions are accurately targeted. Fixations typically take the form of a custom-made mask that is clamped to the treatment couch, and these are evident in many CT data sets, as radiotherapy treatment is normally planned with the mask in place. But the fixations can make planning more difficult for certain tumor sites and are often unwanted by third parties wishing to reuse the data. Manually editing the CT images to remove the fixations is time-consuming and error-prone. This paper presents a fast and automatic approach that removes artifacts due to fixations in CT images without affecting pixel values representing tissue. The algorithm uses particle swarm optimisation to speed up the execution time, and results from five CT data sets show that it achieves an average specificity of 92.01% and sensitivity of 99.39%.
Discovery radiomics for pathologically-proven computed tomography lung cancer prediction
Kumar, Devinder
Chung, Audrey G
Shaifee, Mohammad J
Khalvati, Farzad
Haider, Masoom A
Wong, Alexander
2017Conference Proceedings, cited 30 times
Website
LIDC-IDRI
Radiomics
Classification
LUNG
Deep convolutional neural network (DCNN)
Lung cancer is the leading cause of cancer-related deaths. As such, there is an urgent need for a streamlined process that can allow radiologists to provide diagnoses with greater efficiency and accuracy. A powerful tool for this is radiomics: a high-dimensional imaging feature set. In this study, we take the idea of radiomics one step further by introducing the concept of discovery radiomics for lung cancer prediction using CT imaging data, realizing custom radiomic sequencers as deep convolutional sequencers using a deep convolutional neural network learning architecture. To illustrate the prognostic power and effectiveness of the radiomic sequences produced by the discovered sequencer, we perform cancer prediction between malignant and benign lesions from 97 patients using the pathologically-proven diagnostic data from the LIDC-IDRI dataset. Using the clinically provided pathologically-proven data as ground truth, the proposed framework provided an average accuracy of 77.52% via 10-fold cross-validation, with a sensitivity of 79.06% and specificity of 76.11%, surpassing the state-of-the-art method.
Normalized Euclidean Super-Pixels for Medical Image Segmentation
We propose a super-pixel segmentation algorithm based on normalized Euclidean distance for handling the uncertainty and complexity in medical images. Benefiting from its statistical characteristics, compactness within super-pixels is described by normalized Euclidean distance, and our algorithm dispenses with the balance factor of the Simple Linear Iterative Clustering framework. In this way, the algorithm responds properly to lesion tissues, such as tiny lung nodules, that differ only slightly in luminance from their neighbors. The effectiveness of the proposed algorithm is verified on The Cancer Imaging Archive (TCIA) database. Compared with Simple Linear Iterative Clustering (SLIC) and Linear Spectral Clustering (LSC), the experimental results show that the proposed algorithm achieves competitive performance relative to state-of-the-art super-pixel segmentation.
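The SLIC baseline that the abstract compares against is available in scikit-image; a minimal sketch on a placeholder slice (the proposed normalized-Euclidean variant itself is not shown here).

```python
# Sketch: SLIC super-pixels on a grayscale slice via scikit-image. Note the
# compactness parameter -- the balance factor the proposed method dispenses with.
import numpy as np
from skimage.segmentation import slic

image = np.random.rand(128, 128)  # placeholder CT slice
labels = slic(image, n_segments=200, compactness=0.1, channel_axis=None)
print(labels.max() + 1, "superpixels")
```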
Deep Neural Network Based Classification of Tumourous and Non-tumorous Medical Images
Tumor identification and classification from various medical images is a very challenging task. Various image processing and pattern identification techniques can be used for the tumor identification and classification process. Deep learning is an evolving machine learning technique that provides the advantage of automatically extracting features from images. The computer-aided diagnosis system proposed in this research work can assist radiologists in cancer tumor identification, based on various facts and previous studies. The system can expedite the identification process even in earlier stages by adding the facility of a second opinion, which makes the process simpler and faster. In this paper, we propose a convolutional neural network (CNN) framework, a deep learning technique. The research work implements the framework on the AlexNet and ZFNet architectures and trains the system for tumor detection in lung nodules as well as the brain. The classification accuracy is more than 97% for both architectures and both datasets (lung CT and brain MRI images).
Hybrid Mass Detection in Breast MRI Combining Unsupervised Saliency Analysis and Deep Learning
To interpret a breast MRI study, a radiologist has to examine over 1000 images, and integrate spatial and temporal information from multiple sequences. The automated detection and classification of suspicious lesions can help reduce the workload and improve accuracy. We describe a hybrid mass-detection algorithm that combines unsupervised candidate detection with deep learning-based classification. The detection algorithm first identifies image-salient regions, as well as regions that are cross-salient with respect to the contralateral breast image. We then use a convolutional neural network (CNN) to classify the detected candidates into true-positive and false-positive masses. The network uses a novel multi-channel image representation; this representation encompasses information from the anatomical and kinetic image features, as well as saliency maps. We evaluated our algorithm on a dataset of MRI studies from 171 patients, with 1957 annotated slices of malignant (59%) and benign (41%) masses. Unsupervised saliency-based detection provided a sensitivity of 0.96 with 9.7 false-positive detections per slice. Combined with CNN classification, the number of false positive detections dropped to 0.7 per slice, with 0.85 sensitivity. The multi-channel representation achieved higher classification performance compared to single-channel images. The combination of domain-specific unsupervised methods and general-purpose supervised learning offers advantages for medical imaging applications, and may improve the ability of automated algorithms to assist radiologists.
CASED: Curriculum Adaptive Sampling for Extreme Data Imbalance
Jesson, Andrew
Guizard, Nicolas
Ghalehjegh, Sina Hamidi
Goblot, Damien
Soudan, Florian
Chapados, Nicolas
2017Conference Proceedings, cited 18 times
Website
LIDC-IDRI
LUNA16 Challenge
Computer Aided Detection (CAD)
Segmentation
Classification
Algorithm Development
We introduce CASED, a novel curriculum sampling algorithm that facilitates the optimization of deep learning segmentation or detection models on data sets with extreme class imbalance. We evaluate the CASED learning framework on the task of lung nodule detection in chest CT. In contrast to two-stage solutions, wherein nodule candidates are first proposed by a segmentation model and refined by a second detection stage, CASED improves the training of deep nodule segmentation models (e.g. UNet) to the point where state of the art results are achieved using only a trivial detection stage. CASED improves the optimization of deep segmentation models by allowing them to first learn how to distinguish nodules from their immediate surroundings, while continuously adding a greater proportion of difficult-to-classify global context, until uniformly sampling from the empirical data distribution. Using CASED during training yields a minimalist proposal to the lung nodule detection problem that tops the LUNA16 nodule detection benchmark with an average sensitivity score of 88.35%. Furthermore, we find that models trained using CASED are robust to nodule annotation quality by showing that comparable results can be achieved when only a point and radius for each ground truth nodule are provided during training. Finally, the CASED learning framework makes no assumptions with regard to imaging modality or segmentation target and should generalize to other medical imaging problems where class imbalance is a persistent problem.
Pancreas Segmentation in MRI Using Graph-Based Decision Fusion on Convolutional Neural Networks
Deep neural networks have demonstrated very promising performance on accurate segmentation of challenging organs (e.g., pancreas) in abdominal CT and MRI scans. The current deep learning approaches conduct pancreas segmentation by processing sequences of 2D image slices independently through deep, dense per-pixel masking for each image, without explicitly enforcing a spatial consistency constraint on the segmentation of successive slices. We propose a new convolutional/recurrent neural network architecture to address the contextual learning and segmentation consistency problem. A deep convolutional sub-network is first designed and pre-trained from scratch. The output layer of this network module is then connected to recurrent layers and can be fine-tuned for contextual learning, in an end-to-end manner. Our recurrent sub-network is a type of Long short-term memory (LSTM) network that performs segmentation on an image by integrating its neighboring slice segmentation predictions, as dependent sequence processing. Additionally, a novel segmentation-direct loss function (named Jaccard Loss) is proposed, and deep networks are trained to optimize the Jaccard Index (JI) directly. Extensive experiments are conducted to validate our proposed deep models on quantitative pancreas segmentation using both CT and MRI scans. Our method outperforms the state-of-the-art work on CT [11] and MRI pancreas segmentation [1], respectively.
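A soft, differentiable Jaccard loss of the kind described above can be sketched in PyTorch; the exact formulation in the paper may differ.

```python
# Sketch: soft (differentiable) Jaccard loss over foreground probability maps,
# so networks can optimize the Jaccard Index directly.
import torch

def jaccard_loss(probs, target, eps=1e-6):
    # probs: (N, H, W) predicted foreground probabilities
    # target: (N, H, W) binary ground-truth masks
    inter = (probs * target).sum(dim=(1, 2))
    union = probs.sum(dim=(1, 2)) + target.sum(dim=(1, 2)) - inter
    return (1.0 - (inter + eps) / (union + eps)).mean()

probs = torch.rand(2, 64, 64, requires_grad=True)
target = (torch.rand(2, 64, 64) > 0.5).float()
loss = jaccard_loss(probs, target)
loss.backward()  # gradients flow to the predictions
print(loss.item())
```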
Towards Image-Guided Pancreas and Biliary Endoscopy: Automatic Multi-organ Segmentation on Abdominal CT with Dense Dilated Networks
Gibson, Eli
Giganti, Francesco
Hu, Yipeng
Bonmati, Ester
Bandula, Steve
Gurusamy, Kurinchi
Davidson, Brian R
Pereira, Stephen P
Clarkson, Matthew J
Barratt, Dean C
2017Conference Proceedings, cited 14 times
Website
Pancreas-CT
Algorithm Development
Segmentation
Deep learning
Computer Aided Detection (CADe)
Collage CNN for Renal Cell Carcinoma Detection from CT
This paper presents an integrated quantitative MR image analysis framework that includes all the necessary steps: MRI inhomogeneity correction, feature extraction, multiclass feature selection and multimodality abnormal brain tissue segmentation. We first derive a mathematical algorithm to compute a novel Generalized multifractional Brownian motion (GmBm) texture feature. We then demonstrate the efficacy of multiple multiresolution texture features, including the regular fractal dimension (FD) texture and stochastic textures such as multifractional Brownian motion (mBm) and GmBm features, for robust tumor and other abnormal tissue segmentation in brain MRI. We evaluate these texture and associated intensity features to effectively delineate multiple abnormal tissues within and around the tumor core, as well as stroke lesions, using large-scale public and private datasets.
A Fast Semi-Automatic Segmentation Tool for Processing Brain Tumor Images
Overall Survival Time Prediction for High Grade Gliomas Based on Sparse Representation Framework
Wu, Guoqing
Wang, Yuanyuan
Yu, Jinhua
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Accurate prognosis for high grade glioma (HGG) is of great clinical value since it would provide optimized guidelines for treatment planning. Previous imaging-based survival prediction generally relies on features guided by clinical experience, which limits the full utilization of biomedical images. In this paper, we propose a sparse representation-based radiomics framework to predict the overall survival (OS) time of HGG. Firstly, we develop a patch-based sparse representation method to extract high-throughput tumor texture features. Then, we propose to combine locality preserving projection and sparse representation to select discriminating features. Finally, we treat OS time prediction as a classification task and apply sparse representation to classification. Experimental results show that, with 10-fold cross-validation, the proposed method achieves accuracies of 94.83% and 95.69% using T1 contrast-enhanced and T2 weighted magnetic resonance images, respectively.
Deep Learning Based Multimodal Brain Tumor Diagnosis
Li, Yuexiang
Shen, Linlin
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation plays an important role in disease diagnosis. In this paper, we propose deep learning frameworks, MvNet and SPNet, to address the challenges of multimodal brain tumor segmentation. The proposed multi-view deep learning framework (MvNet) uses three multi-branch fully-convolutional residual networks (Mb-FCRN) to segment multimodal brain images from different viewpoints, i.e. slices along the x, y and z axes. The three sub-networks produce independent segmentation results and vote for the final outcome. SPNet is a CNN-based framework developed to predict the survival time of patients. The proposed deep learning frameworks were evaluated on the BraTS 17 validation set and achieved competitive results: while Dice scores of 0.88, 0.75 and 0.71 were achieved for whole tumor, enhancing tumor and tumor core, respectively, an accuracy of 0.55 was obtained for survival prediction.
3D Brain Tumor Segmentation Through Integrating Multiple 2D FCNNs
Zhao, Xiaomei
Wu, Yihong
Song, Guidong
Li, Zhenye
Zhang, Yazhuo
Fan, Yong
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
The Magnetic Resonance Images (MRI) used to segment brain tumors are 3D images. To make use of this 3D information, we apply a method that integrates the segmentation results of three 2D Fully Convolutional Neural Networks (FCNNs), each trained to segment brain tumor images from the axial, coronal, and sagittal views respectively. Integrating multiple FCNN models by fusing their segmentation results, rather than by fusing them into one deep network, ensures that each FCNN model can still be tested on 2D slices, guaranteeing testing efficiency. An averaging strategy is applied for the fusion. This method can easily be extended to integrate more FCNN models trained to segment brain tumor images from additional views, without retraining the FCNN models we already have. In addition, 3D Conditional Random Fields (CRFs) are applied to optimize the fused segmentation results. Experimental results show that integrating the segmentation results of multiple 2D FCNNs clearly improves segmentation accuracy, and the 3D CRF greatly reduces false positives and improves the accuracy of tumor boundaries.
MRI Brain Tumor Segmentation and Patient Survival Prediction Using Random Forests and Fully Convolutional Networks
In this paper, we use a Fully Convolutional Neural Network (FCNN) for the segmentation of gliomas from Magnetic Resonance Images (MRI). A fully automatic, voxel-based classification was achieved by training a 23-layer deep FCNN on 2-D slices extracted from patient volumes. The network was trained on slices extracted from 130 patients and validated on 50 patients. For the task of survival prediction, texture and shape based features were extracted from the T1 post-contrast volume to train an Extreme Gradient Boosting (XGBoost) regressor. On the BraTS 2017 validation set, the proposed scheme achieved mean whole tumor, tumor core and active tumor dice scores of 0.83, 0.69 and 0.69 respectively, while for the task of overall survival prediction, the proposed scheme achieved an accuracy of 52%.
Multimodal Brain Tumor Segmentation Using 3D Convolutional Networks
Rodríguez Colmeiro, R. G.
Verrastro, C. A.
Grosges, T.
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Volume segmentation is one of the most time consuming, and therefore error prone, tasks in the field of medicine. Constructing a good segmentation requires cross-validation from highly trained professionals. In order to address this problem we propose the use of 3D deep convolutional networks (DCN). Using a two-step procedure, we first segment the whole tumor from a low-resolution volume and then feed a second step which performs the fine tissue segmentation. The advantage of using a 3D DCN is that it extracts 3D features from all neighbouring voxels. In this method all parameters are self-learned during a single training procedure, and accuracy can improve by feeding new examples to the trained network. The training dice-loss values reach 0.85 and 0.9 for the coarse and fine segmentation networks respectively. The obtained validation and testing mean Dice scores for the Whole Tumor class are 0.86 and 0.82 respectively.
Dilated Convolutions for Brain Tumor Segmentation in MRI Scans
Moreno Lopez, Marc
Ventura, Jonathan
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
We present a novel method to detect and segment brain tumors in Magnetic Resonance Imaging scans using a novel network based on the Dilated Residual Network. Dilated convolutions provide efficient multi-scale analysis for dense prediction tasks without losing resolution by downsampling the input. To the best of our knowledge, our work is the first to evaluate a dilated residual network for brain tumor segmentation in magnetic resonance imaging scans. We train and evaluate our method on the Brain Tumor Segmentation (BraTS) 2017 challenge dataset. To address the severe label imbalance in the data, we adopt a balanced, patch-based sampling approach for training. An ablation study establishes the importance of residual connections in the performance of our network.
Multi-modal PixelNet for Brain Tumor Segmentation
Islam, Mobarakol
Ren, Hongliang
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor segmentation using multi-modal MRI data sets is important for diagnosis, surgery and follow-up evaluation. In this paper, a convolutional neural network (CNN) with hypercolumn features (e.g. PixelNet) is utilized for automatic segmentation of brain tumors containing low- and high-grade glioblastomas. Though pixel-level convolutional predictors like CNNs are computationally efficient, such approaches are not statistically efficient during learning, precisely because spatial redundancy limits the information learned from neighboring pixels. PixelNet extracts features from multiple layers that correspond to the same pixel and samples a modest number of pixels across a small number of images for each SGD (stochastic gradient descent) batch update. PixelNet achieved whole-tumor Dice accuracies of 87.6% and 85.8% on the validation and testing data, respectively, in the BraTS 2017 challenge.
Brain Tumor Segmentation Using Dense Fully Convolutional Neural Network
Shaikh, Mazhar
Anand, Ganesh
Acharya, Gagan
Amrutkar, Abhijit
Alex, Varghese
Krishnamurthi, Ganapathy
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Manual segmentation of brain tumors is often time consuming, and the performance of the segmentation varies with the operator's experience. This leads to the need for a fully automatic method for brain tumor segmentation. In this paper, we propose using the 100-layer Tiramisu architecture for segmentation of brain tumors from multi-modal MR images, built by integrating a densely connected fully convolutional neural network (FCNN), followed by post-processing using a Dense Conditional Random Field (DCRF). The network consists of blocks of densely connected layers, transition-down layers in the down-sampling path and transition-up layers in the up-sampling path. The method was tested on the dataset provided by the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2017. The training data is composed of 210 high-grade and 74 low-grade brain tumor cases. The proposed network achieves mean whole tumor, tumor core and active tumor Dice scores of 0.87, 0.68 and 0.65, respectively, on the BraTS ’17 validation set, and 0.83, 0.65 and 0.65 on the BraTS ’17 test set.
Brain Tumor Segmentation in MRI Scans Using Deeply-Supervised Neural Networks
Pourreza, Reza
Zhuge, Ying
Ning, Holly
Miller, Robert
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Gliomas are the most frequent primary brain tumors in adults. Improved quantification of the various aspects of a glioma requires accurate segmentation of the tumor in magnetic resonance images (MRI). Since manual segmentation is time-consuming and subject to human error and irreproducibility, automatic segmentation has received a lot of attention in recent years. This paper presents a fully automated method capable of segmenting brain tumors from multi-modal MRI scans. The proposed method comprises a deeply-supervised neural network based on the Holistically-Nested Edge Detection (HED) network. The HED method, originally developed for the binary classification task of image edge detection, is extended to multiple-class segmentation. The classes of interest include the whole tumor, tumor core, and enhancing tumor. The dataset provided by the 2017 Multimodal Brain Tumor Image Segmentation Benchmark (BraTS) challenge is used in this work for training the neural network and for performance evaluation. Experiments on the BraTS 2017 challenge datasets demonstrate that the method performs well compared to existing works. The assessments revealed Dice scores of 0.86, 0.60, and 0.69 for the whole tumor, tumor core, and enhancing tumor classes, respectively.
Brain Tumor Segmentation and Parsing on MRIs Using Multiresolution Neural Networks
Brain lesion segmentation is a critical application of computer vision to the biomedical image analysis. The difficulty is derived from the great variance between instances, and the high computational cost of processing three dimensional data. We introduce a neural network for brain tumor semantic segmentation that parses their internal structures and is capable of processing volumetric data from multiple MRI modalities simultaneously. As a result, the method is able to learn from small training datasets. We develop an architecture that has four parallel pathways with residual connections. It receives patches from images with different spatial resolutions and analyzes them independently. The results are then combined using fully-connected layers to obtain a semantic segmentation of the brain tumor. We evaluated our method using the 2017 BraTS Challenge dataset, reaching average dice coefficients of 89%, 88% and 86% over the training, validation and test images, respectively.
Glioblastoma and Survival Prediction
Shboul, Zeina A.
Vidyaratne, Lasitha
Alam, Mahbubul
Iftekharuddin, Khan M.
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Glioblastoma is a stage IV highly invasive astrocytoma tumor. Its heterogeneous appearance in MRI poses a critical challenge in diagnosis, prognosis and survival prediction. This work proposes an automated survival prediction method utilizing different types of texture and other features. The method tests feature significance and prognostic value, and then uses the most significant features with a Random Forest regression model to perform survival prediction. We use 163 cases from the BraTS17 training dataset for evaluation of the proposed model. A 10-fold cross-validation yields a normalized root mean square error of 30% on the training dataset and a cross-validated accuracy of 67%. Finally, the proposed model ranked first in the Survival Prediction task of the global Brain Tumor Segmentation Challenge (BraTS) 2017, where an accuracy of 57.9% was achieved.
Cascaded V-Net Using ROI Masks for Brain Tumor Segmentation
In this work we approach the brain tumor segmentation problem with a cascade of two CNNs inspired by the V-Net architecture [13], reformulating residual connections and making use of ROI masks to constrain the networks to train only on relevant voxels. This architecture allows dense training on problems with highly skewed class distributions, such as brain tumor segmentation, by focusing training only on the vicinity of the tumor area. We report results on the BraTS2017 Training and Validation sets.
Brain Tumor Segmentation Using a 3D FCN with Multi-scale Loss
Jesson, Andrew
Arbel, Tal
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
In this work, we use a 3D Fully Convolutional Network (FCN) architecture for brain tumor segmentation. Our method includes a multi-scale loss function on the predictions made at each resolution of the FCN. Using this approach, the higher resolution features can be combined with the initial segmentation at a lower resolution so that the FCN models context in both the image and label domains. The model is trained using a multi-scale loss function, and a curriculum on sample weights is employed to address class imbalance. We achieved competitive results during the testing phase of the BraTS 2017 Challenge for segmentation, with Dice scores of 0.710, 0.860, and 0.783 for enhancing tumor, whole tumor, and tumor core, respectively.
3D Deep Neural Network-Based Brain Tumor Segmentation Using Multimodality Magnetic Resonance Sequences
Brain tumor segmentation plays a pivotal role in clinical practice and research settings. In this paper, we propose a 3D deep neural network-based algorithm for joint brain tumor detection and intra-tumor structure segmentation, including necrosis, edema, non-enhancing and enhancing tumor, using multimodal magnetic resonance imaging sequences. An ensemble of cascaded U-Nets is designed to detect the tumor and a deep convolutional neural network is constructed for patch-based intra-tumor structure segmentation. This algorithm has been evaluated on the BraTS 2017 Challenge dataset and achieved Dice similarity coefficients of 0.81, 0.69 and 0.55 in the segmentation of whole tumor, core tumor and enhancing tumor, respectively. Our results suggest that the proposed algorithm has promising performance in automated brain tumor segmentation.
Automated Brain Tumor Segmentation on Magnetic Resonance Images and Patient’s Overall Survival Prediction Using Support Vector Machines
Osman, Alexander F. I.
2018Book Section, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
This study aims to develop two algorithms, for glioma tumor segmentation and for patients' overall survival (OS) prediction, using machine learning approaches. The segmentation algorithm is fully automated to accurately and efficiently delineate the whole tumor on a magnetic resonance imaging (MRI) scan for radiotherapy treatment planning. The survival algorithm predicts the OS of glioblastoma multiforme (GBM) patients based on regression and classification principles. Multi-institutional BRATS'2017 data comprising MRI scans from 477 patients with high-grade and lower-grade glioma (HGG/LGG) were used in this study; clinical survival data were available for 291 GBM patients. Support vector machines (SVMs) were used to develop both algorithms. The segmentation chain comprises pre-processing with the goal of noise removal, feature extraction from the image intensity, segmentation using a non-linear classifier with a 'Gaussian' kernel, and post-processing to enhance the segmentation morphology. The OS prediction algorithm involves two steps: extraction of the patient's age and the segmented tumor's size and location features, followed by prediction using a non-linear classifier and a linear regression model with 'Gaussian' kernels. The algorithms were trained, validated and tested on the BRATS'2017 training, validation, and testing datasets. The average Dice for whole tumor segmentation on the validation and testing datasets is 0.53 ± 0.31 (median 0.60), which indicates the consistency of the proposed algorithm on new "unseen" data. For OS prediction, the mean accuracy is 0.49 on the validation dataset and 0.35 on the testing dataset using the regression principle, whereas an overall accuracy of 1.00 was achieved in classification into short-, medium-, and long-survivor classes for a designed validation dataset. The automated segmentation algorithm takes approximately 3 min of computation. In its present form, the segmentation tool is fully automated, fast, and provides reasonable segmentation accuracy on the multi-institutional dataset.
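The two SVM components described above can be sketched with scikit-learn's RBF ('Gaussian') kernel machines; the variable names and hyperparameters below are illustrative assumptions, not the authors' settings:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

# Voxel-wise tumor/background classifier with a Gaussian ('rbf') kernel
voxel_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

# OS regressor on age, segmented tumor size, and location features
os_reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))

# voxel_clf.fit(intensity_features, voxel_labels)   # arrays supplied by the user
# os_reg.fit(patient_features, survival_days)
```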
Tumor Segmentation from Multimodal MRI Using Random Forest with Superpixel and Tensor Based Feature Extraction
Identification and localization of brain tumor tissues play an important role in the diagnosis and treatment planning of gliomas. A fully automated, superpixel-wise, two-stage tumor tissue segmentation algorithm using random forests is proposed in this paper. The first stage identifies the whole tumor and the second stage segments the sub-regions. Features for the random forest classifier are extracted by constructing a tensor from multimodal MRI data and applying multi-linear singular value decomposition. The proposed method is tested on the BRATS 2017 validation and test datasets. The first-stage model has a Dice score of 83% for the whole tumor on the validation dataset. The full model achieves Dice scores of 77%, 50% and 61% for whole tumor, enhancing tumor and tumor core, respectively, on the test dataset.
Number of Useful Components in Gaussian Mixture Models for Patch-Based Image Denoising
Fully-Automatic CT Data Preparation for Interventional X-Ray Skin Dose Simulation
Roser, Philipp
Birkhold, Annette
Preuhs, Alexander
Stimpel, Bernhard
Syben, Christopher
Strobel, Norbert
Kowarschik, Markus
Fahrig, Rebecca
Maier, Andreas
2020Book Section, cited 0 times
CT Lymph Nodes
HNSCC-3DCT-RT
Recently, deep learning (DL) found its way to interventional X-ray skin dose estimation. While its performance was found to be acceptable, even more accurate results could be achieved if more data sets were available for training. One possibility is to turn to computed tomography (CT) data sets. Typically, CT scans can be mapped to tissue labels and mass densities to obtain training data. However, care has to be taken to make sure that the different clinical settings are properly accounted for. First, the interventional environment is characterized by a wide variety of table setups that are significantly different from the typical patient tables used in conventional CT. This cannot be ignored, since tables play a crucial role in sound skin dose estimation in an interventional setup, e.g., when the X-ray source is directly underneath a patient (posterior-anterior view). Second, due to interpolation errors, most CT scans do not facilitate a clean segmentation of the skin border. As a solution to these problems, we applied connected component labeling (CCL) and Canny edge detection to (a) robustly separate the patient from the table and (b) identify the outermost skin layer. Our results show that these extensions enable fully-automatic, generalized pre-processing of CT scans for further simulation of both skin dose and corresponding X-ray projections.
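A rough sketch of the CCL and Canny steps with SciPy and scikit-image; the HU threshold and the largest-component heuristic are simplifying assumptions, not the authors' exact pipeline:

```python
import numpy as np
from scipy import ndimage
from skimage import feature

def separate_patient(ct_slice: np.ndarray, hu_threshold: float = -300.0) -> np.ndarray:
    """Keep the largest connected component above a HU threshold (assumed to be the body)."""
    mask = ct_slice > hu_threshold
    labels, n = ndimage.label(mask)          # connected component labeling
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)  # table components are discarded

def skin_border(body_mask: np.ndarray) -> np.ndarray:
    """Outermost skin layer as the Canny edge of the body mask."""
    return feature.canny(body_mask.astype(float))
```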
In histopathology, scanner-induced domain shifts are known to impede the performance of trained neural networks when tested on unseen data. Multidomain pre-training or dedicated domain-generalization techniques can help to develop domain-agnostic algorithms. For this, multi-scanner datasets with a high variety of slide scanning systems are highly desirable. We present a publicly available multi-scanner dataset of canine cutaneous squamous cell carcinoma histopathology images, composed of 44 samples digitized with five slide scanners. This dataset provides local correspondences between images and thereby isolates the scanner-induced domain shift from other inherent, e.g. morphology-induced domain shifts. To highlight scanner differences, we present a detailed evaluation of color distributions, sharpness, and contrast of the individual scanner subsets. Additionally, to quantify the inherent scanner-induced domain shift, we train a tumor segmentation network on each scanner subset and evaluate the performance both in- and cross-domain. We achieve a class-averaged in-domain intersection over union coefficient of up to 0.86 and observe a cross-domain performance decrease of up to 0.38, which confirms the inherent domain shift of the presented dataset and its negative impact on the performance of deep neural networks.
Fully Automated Multi-Modal Anatomic Atlas Generation Using 3D-Slicer
Atlases of the human body have many applications, including for instance the analysis of information from patient cohorts to evaluate the distribution of tumours and metastases. We present a 3D Slicer module that simplifies the task of generating a multi-modal atlas from anatomical and functional data. It provides for a simpler evaluation of existing image and verbose patient data by integrating a database that is automatically generated from text files and accompanies the visualization of the atlas volume. The computation of the atlas is a two-step process. First, anatomical data is pairwise registered to a reference dataset with an affine initialization and a B-Spline based deformable approach. Second, the computed transformations are applied to anatomical as well as the corresponding functional data to generate both atlases. The module is validated with a publicly available soft tissue sarcoma dataset from The Cancer Imaging Archive. We show that functional data in the atlas volume correlates with the findings from the patient database.
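The affine initialization step of such a pairwise registration could be sketched with SimpleITK as below; the metric, optimizer, and parameter choices are assumptions, and the subsequent B-spline deformable stage is omitted:

```python
import SimpleITK as sitk

def register_affine(fixed: sitk.Image, moving: sitk.Image) -> sitk.Transform:
    """Affine initialization of a pairwise registration to the reference dataset."""
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    return reg.Execute(fixed, moving)

# The computed transform can then be applied to the functional volume as well:
# resampled = sitk.Resample(functional, fixed, transform, sitk.sitkLinear, 0.0)
```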
Supervised Dimension-Reduction Methods for Brain Tumor Image Data Analysis
The purpose of this study was to construct a risk score for glioblastomas based on magnetic resonance imaging (MRI) data. Tumor identification requires multimodal voxel-based imaging data that are highly dimensional, and multivariate models with dimension reduction are desirable for their analysis. We propose a two-step dimension-reduction method using a radial basis function–supervised multi-block sparse principal component analysis (SMS–PCA) method. The method is first implemented through the basis expansion of spatial brain images, and the scores are then reduced through regularized matrix decomposition in order to produce simultaneous data-driven selections of related brain regions supervised by univariate composite scores representing linear combinations of covariates such as age and tumor location. An advantage of the proposed method is that it identifies the associations of brain regions at the voxel level, and supervision is helpful in the interpretation.
An Improved Mammogram Classification Approach Using Back Propagation Neural Network
Mammograms are generally contaminated by quantum noise, degrading their visual quality and thereby the performance of the classifier in Computer-Aided Diagnosis (CAD). Hence, enhancement of mammograms is necessary to improve the visual quality and the detectability of anomalies present in the breast. In this paper, a sigmoid-based non-linear function is applied for contrast enhancement of mammograms. The enhanced mammograms are used to characterize the texture of the detected anomaly using Gray Level Co-occurrence Matrix (GLCM) features. A Back Propagation Artificial Neural Network (BP-ANN) is then used as a classification tool for labeling the mammogram as abnormal or normal. The proposed classifier is reported to achieve considerably better accuracy than existing approaches.
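GLCM texture descriptors of the kind used here can be computed with scikit-image (newer releases spell the functions graycomatrix/graycoprops); the distances, angles, and gray-level count below are illustrative choices:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi: np.ndarray, levels: int = 64) -> dict:
    """Texture descriptors of a detected anomaly from its gray-level co-occurrence matrix."""
    img = (roi / roi.max() * (levels - 1)).astype(np.uint8)  # quantize to `levels` bins
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```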
MRI imaging texture features in prostate lesions classification
Sobecki, Piotr
Życka-Malesa, Dominika
Mykhalevych, Ihor
Sklinda, Katarzyna
Przelaskowski, Artur
2018Book Section, cited 0 times
PROSTATEx
Radiomics
multiparametric Magnetic Resonance Imaging (mpMRI)
Algorithm Development
Deep Learning
Challenge
Prostate cancer (PCa) is the most commonly diagnosed cancer and a leading cause of cancer-related death among men. Computer Aided Diagnosis (CAD) systems are used to support radiologists in multiparametric Magnetic Resonance (mpMR) image-based analysis in order to avoid unnecessary biopsies and increase radiologists' specificity. CAD systems have been reported in many papers over the last decade, but the reported results have been obtained on small, private datasets, making it impossible to reproduce or verify their conclusions. The PROSTATEx challenge organizers provided a database containing approximately 350 MRI cases, each from a distinct patient, allowing benchmarking of various CAD systems. This paper describes a novel, deep learning-based PCa CAD system that uses statistical central moments and Haralick features extracted from MR images, integrated with anamnestic data. The developed system was trained on a dataset of 330 lesions and evaluated on the challenge dataset using the area under the receiver operating characteristic (ROC) curve (AUC). Two configurations of our method, based on statistical and Haralick features, scored AUC values of 0.63 and 0.73. We draw conclusions from the challenge participation and discuss further improvements that could be made to the model to improve prostate lesion classification.
Classification of Breast Masses Using Convolutional Neural Network as Feature Extractor and Classifier
Sarkar, Pinaki Ranjan
Mishra, Deepak
Sai Subrahmanyam, Gorthi R. K.
2018Book Section, cited 0 times
CBIS-DDSM
Due to the difficulty radiologists face in detecting micro-calcification clusters, computer-aided detection (CAD) systems are much needed. Many researchers have undertaken the challenge of building an efficient CAD system, and several feature extraction methods have been proposed. Most of them extract low- or mid-level features, which restricts the accuracy of the overall classification. We observed that high-level features lead to a better diagnosis, and the convolutional neural network (CNN) is the best-known model for extracting high-level features. In this paper, we propose a CNN architecture that performs both feature extraction and classification. Our proposed network was applied to both the MIAS and DDSM databases, achieving accuracies of 99.074% and 99.267%, respectively, which we believe are the best reported so far.
Computer-Aided Diagnosis of Life-Threatening Diseases
Kumar, Pramod
Ambekar, Sameer
Roy, Subarna
Kunchur, Pavan
2019Book Section, cited 0 times
LIDC-IDRI
Computer Aided Diagnosis (CADx)
According to the WHO, the incidence of life-threatening diseases like cancer, diabetes, and Alzheimer's disease is escalating globally. In the past few decades, traditional methods have been used to diagnose such diseases. These methods often have limitations such as lack of accuracy, expense, and time-consuming procedures. Computer-aided diagnosis (CAD) aims to overcome these limitations by personalizing healthcare. Machine learning is a promising CAD method, offering effective solutions for these diseases. It is being used for early detection of cancer, diabetic retinopathy, and Alzheimer's disease, and also to identify diseases in plants. Machine learning can increase efficiency, making the process more cost-effective, with quicker delivery of results. Several CAD algorithms (ANN, SVM, etc.) can be used to train on a disease dataset and eventually make significant predictions. CAD algorithms have also shown potential for the diagnosis and early detection of life-threatening diseases.
Classification of Cancer Microscopic Images via Convolutional Neural Networks
Khan, Mohammad Azam
Choo, Jaegul
2019Book Section, cited 0 times
C-NMC 2019
Machine Learning
This paper describes our approach for the classification of normal versus malignant cells in B-ALL white blood cancer microscopic images (the ISBI 2019 challenge on classifying leukemic B-lymphoblast cells from normal B-lymphoid precursors in blood smear microscopic images). We leverage a state-of-the-art convolutional neural network pretrained on the ImageNet dataset and apply several data augmentation and hyperparameter optimization strategies. Our method obtains an F1 score of 0.83 on the final test set of the competition.
Extraction of Cancer Section from 2D Breast MRI Slice Using Brain Storm Optimization
Hand-Crafted and Deep Learning-Based Radiomics Models for Recurrence Prediction of Non-Small Cells Lung Cancers
Aonpong, Panyanat
Iwamoto, Yutaro
Wang, Weibin
Lin, Lanfen
Chen, Yen-Wei
Innovation in Medicine and Healthcare2020Journal Article, cited 0 times
Website
NSCLC Radiogenomics
Deep Learning
LUNG
This research examines recurrence prediction of non-small cell lung cancer (NSCLC) from computed tomography (CT) images, avoiding biopsy, since cancer cells may be unevenly distributed, which can lead to sampling errors. This work compares two approaches, a hand-crafted radiomics model and a deep learning-based radiomics model, using 88 patient samples from an open-access non-small cell lung cancer dataset in The Cancer Imaging Archive (TCIA) Public Access. In the hand-crafted radiomics model, patterns in the NSCLC CT images are summarized as statistical radiomics features. The radiomics features associated with recurrence are selected through three statistical methods: LASSO, Chi-2, and ANOVA. The selected radiomics features are then processed using different models. In the deep learning-based radiomics model, an artificial neural network is used to enhance recurrence prediction. The hand-crafted radiomics model with no selection, LASSO, Chi-2, and ANOVA gives 76.56% (AUC 0.6361), 76.83% (AUC 0.6375), 78.64% (AUC 0.6778), and 78.17% (AUC 0.6556), respectively; the deep learning-based radiomics models, ResNet50 and DenseNet121, give 79.00% (AUC 0.6714) and 79.31% (AUC 0.6712), respectively.
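The LASSO selection step, one of the three selection methods named above, could be sketched with scikit-learn as an L1-penalized model feeding a classifier; the pipeline and array names below are hypothetical:

```python
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# L1-penalized logistic regression keeps only features with non-zero coefficients
lasso_selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1))

pipe = make_pipeline(StandardScaler(), lasso_selector,
                     LogisticRegression(max_iter=1000))
# pipe.fit(radiomics_features, recurrence_labels)   # user-supplied arrays
```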
Auto Segmentation of Lung in Non-small Cell Lung Cancer Using Deep Convolution Neural Network
Patil, Ravindra
Wee, Leonard
Dekker, Andre
2020Book Section, cited 0 times
NSCLC Radiogenomics
NSCLC-Radiomics
Segmentation of the lung is a vital first step in the radiologic diagnosis of lung cancer. In this work, we present a deep learning-based automated technique that overcomes various shortcomings of traditional lung segmentation and explores the role of adding "explainability" to deep learning models so that trust can be built in them. Our approach generalizes better across different scanner settings, vendors, and slice thicknesses. In addition, no seed-point initialization is required, making the method completely automated. A Dice score of 0.98 is achieved for lung segmentation on an independent dataset of non-small cell lung cancer.
Radial Cumulative Frequency Distribution: A New Imaging Signature to Detect Chromosomal Arms 1p/19q Co-deletion Status in Glioma
Gliomas are the most common primary brain tumor and are associated with high mortality. Gene mutations are one of the hallmarks of glioma formation, determining its aggressiveness as well as patients' response to treatment. This paper presents a novel approach to non-invasively detect chromosomal arms 1p/19q co-deletion status in low-grade glioma based on its textural characteristics in the frequency domain. For this, we derive a Radial Cumulative Frequency Distribution (RCFD) function from the Fourier power spectrum of consecutive glioma slices. Multi-parametric MRIs of 159 grade-2 and grade-3 glioma patients with biopsy-proven 1p/19q mutational status (non-deletion: n = 57; co-deletion: n = 102) were used in this study. Different RCFD textural features were extracted to quantify the MRI signature patterns of mutant and wild-type glioma. Owing to the skewed dataset, we performed RUSBoost classification, yielding average accuracies of 73.5% for grade-2 and 83% for grade-3 glioma subjects. The efficacy of the proposed technique is discussed in comparison with state-of-the-art methods.
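One plausible reading of the RCFD, the cumulative fraction of Fourier power as a function of radial frequency, can be sketched in NumPy as follows; the exact definition in the paper may differ:

```python
import numpy as np

def radial_cumulative_frequency_distribution(img: np.ndarray, n_bins: int = 64):
    """Cumulative fraction of 2D Fourier power as a function of radial frequency."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)          # radial distance from DC component
    bins = np.linspace(0, r.max(), n_bins + 1)
    power, _ = np.histogram(r, bins=bins, weights=spectrum)
    return np.cumsum(power) / power.sum()
```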
Detection of Leukemia Using Convolutional Neural Network
Anagha, V.
Disha, A.
Aishwarya, B. Y.
Nikkita, R.
Biradar, Vidyadevi G.
2022Book Section, cited 0 times
C-NMC 2019
Pathomics
Computer Aided Detection (CADe)
Leukemia
Convolutional Neural Network (CNN)
Keras
TensorFlow
Deep Learning
Leukemia, commonly known as blood cancer, is a fatal type of cancer that affects white blood cells. It usually originates in the bone marrow and causes the development of abnormal blood cells called blasts. Diagnosis relies on blood tests and bone marrow biopsy, which involve manual work and are time consuming, so an automatic tool for the detection of white blood cell cancer is needed. Therefore, in this work, a classification model based on a Convolutional Neural Network with deep learning techniques is proposed. The model was implemented using the Keras library with TensorFlow as the backend, and was trained and evaluated on the cancer cell dataset C_NMC_2019, which includes white blood cell regions segmented from microscopic blood smear images. The model offers an accuracy of 91% for training and 87% for testing, which is satisfactory.
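A minimal Keras CNN for the binary normal-versus-malignant task described above; the input size, layer widths, and training setup are assumptions, not the authors' architecture:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Binary classifier for segmented white-blood-cell patches (input size is an assumption)
model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # normal vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```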
Brain Tumor Segmentation Using Unet
Raina, Sneha
Khandelwal, Abha
Gupta, Saloni
Leekha, Alka
2021Book Section, cited 0 times
TCGA-LGG
We are at the cusp of a massive biomedical revolution. Advances in medical engineering have been supplying enormous amounts of data, such as medical scans, electroencephalography, genomes, and protein sequences. Computer vision algorithms show promise in extracting features and learning patterns from this complex data. One such application is the segmentation of brain tumors. There have been a number of somewhat successful attempts at the demarcation of brain tumors through simple Convolutional Neural Networks (CNNs), CNN-Support Vector Machines, DenseNets, U-Nets, etc. In this paper, we worked with a database of brain Magnetic Resonance images, each composed of three channels, together with manually extracted abnormality masks for segmentation. If implemented for real-world applications, this technology can generate semantic segmentations of brain MR images in real time. We implemented a U-Net architecture, a fully convolutional network, and successfully demarcated the tumors in the brain MR images accurately.
Breast DCE-MRI Segmentation for Lesion Detection Using Clustering with Multi-verse Optimization Algorithm
Kar, Bikram
Si, Tapas
2021Book Section, cited 0 times
TCGA-BRCA
The highest number of deaths among all types of cancer in women is caused by breast cancer. Therefore, early detection and diagnosis of breast cancer are much needed for its treatment. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is widely used for breast cancer diagnosis. In this paper, a segmentation method using a modified hard-clustering technique with the multi-verse optimizer (MVO), termed CMVO, is proposed for the detection of breast lesions in DCE-MRI. First, the MR images are denoised and intensity inhomogeneities are corrected in the preprocessing steps. The clustering technique is then used to segment the MR images. Finally, lesions are extracted from the segmented images in the postprocessing step. The results of CMVO are compared with those of the K-means algorithm and PSO-based hard clustering; CMVO performs better than the other methods in lesion detection in breast DCE-MRI.
Computer-Aided Detection for Early Detection of Lung Cancer Using CT Images
Doctors face difficulty in the diagnosis of lung cancer due to the complex nature and clinical interrelations of CT scan images. Visual inspection and subjective evaluation are time consuming and tedious, and lead to inter- and intra-observer inconsistency or imprecise classification. Computer-Aided Detection (CAD) can help clinicians with objective decision-making, early diagnosis, and classification of cancerous abnormalities. Cancer is the second leading cause of death among non-communicable diseases worldwide, and lung cancer is the most dangerous form, affecting both genders; during the uncontrolled growth of abnormal cells, either or both sides of the lung begin to expand. The most widely used imaging technique for lung cancer diagnosis is Computed Tomography (CT) scanning. In this work, CAD is employed to enhance the accuracy, sensitivity, and specificity of automated detection, discriminating the phases of lung cancer using image processing tools. Abnormality detection consists of four steps: pre-processing, segmentation, feature extraction, and classification of the input CT images. For segmentation, marker-controlled watershed segmentation and the K-means algorithm are used. Normal and abnormal information is extracted from the CT images and its characteristics are determined. Stages 1-4 of cancer were discriminated and graded with approximately 80% efficiency using a feedforward backpropagation neural network. Input data are taken from the Lung Image Database Consortium (LIDC), using 100 of the 1018 dataset cases. A graphical user interface (GUI) is developed for the output display. This automated and robust CAD system is necessary for accurate and quick screening of the mass population.
Genomics-Based Models for Recurrence Prediction of Non-small Cells Lung Cancers
This research examines recurrence prediction of non-small cell lung cancer (NSCLC) using genomic information, aiming for maximum accuracy. The raw gene data perform very well but require careful handling; this work studies how to reduce the complexity of the gene data with minimal information loss, so that the processed data can achieve reasonable prediction results with a faster process. We compare two steps, gene selection and gene quantization (linear quantization and K-means quantization), using genes selected from 88 patient samples from the open-access non-small cell lung cancer dataset in The Cancer Imaging Archive Public Access. We vary the number of groups in the splitting and compare recurrence prediction performance under both operations. The results show that the F-test method provides the gene set most related to NSCLC recurrence. With the F-test and no quantization, prediction accuracy improved from 81.41% (using 5587 genes) to 91.83% (using the 294 selected genes). With quantization, a suitable separation into gene groups maximizes the accuracy to 93.42% using K-means quantization.
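The two steps, F-test gene selection and K-means quantization, can be sketched with scikit-learn; k = 294 echoes the abstract, while the array names are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import SelectKBest, f_classif

def select_genes(expr: np.ndarray, y: np.ndarray, k: int = 294):
    """F-test keeps the k genes most associated with recurrence labels y."""
    selector = SelectKBest(f_classif, k=k).fit(expr, y)
    return selector.transform(expr), selector.get_support(indices=True)

def kmeans_quantize(column: np.ndarray, n_groups: int = 4) -> np.ndarray:
    """Quantize one gene's expression values into a small number of groups."""
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0)
    return km.fit_predict(column.reshape(-1, 1))
```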
Evaluating Mobile Tele-radiology Performance for the Task of Analyzing Lung Lesions on CT Images
The accurate detection of lung lesions and the precise measurement of their sizes on Computed Tomography (CT) images are crucial for assessing cancer patients' response to therapy. The goal of this study is to investigate the feasibility of using mobile tele-radiology for this task in order to improve efficiency in radiology. Lung CT images were obtained from The Cancer Imaging Archive (TCIA). The Bland-Altman analysis method was used to compare and assess conventional radiology and mobile radiology based lesion size measurements. The percentage of correctly detected lesions at the right image locations was also recorded. The sizes of 183 lung lesions between 5 and 52 mm in CT images were measured by two experienced radiologists. Bland-Altman plots were drawn, and limits of agreement (LOA) were determined as 0.025 and 0.975 percentiles (−1.00, 0.00), (−1.39, 0.00). For lesions of 10 mm and larger, these intervals were found to be much smaller than the decision interval (−30% and +20%) recommended by the RECIST 1.1 criteria. On average, observers accurately detected 98.2% of the total 271 lesions on the medical monitor, while they detected 92.8% of the nodules on the iPhone. In conclusion, mobile tele-radiology can be a feasible alternative for the accurate measurement of lung lesions on CT images. A higher-resolution display technology such as an iPad may be preferred in order to detect new small <5 mm lesions more accurately. Further studies are needed to confirm these results with more mobile technologies and types of lesions. Keywords: lung CT, lung lesions, lesion size measurement, tumor burden measurement, measurement uncertainties, tele-radiology, Bland-Altman method, non-parametric method.
2Be3-Net: Combining 2D and 3D Convolutional Neural Networks for 3D PET Scans Predictions
Radiomics, high-dimensional features extracted from clinical images, is the main approach used to develop predictive models based on 3D Positron Emission Tomography (PET) scans of patients suffering from cancer. Radiomics extraction relies on an accurate segmentation of the tumoral region, which is a time-consuming task subject to inter-observer variability. On the other hand, data-driven approaches such as deep convolutional neural networks (CNNs) struggle to achieve great performance on PET images due to the absence of large PET datasets combined with the size of 3D networks. In this paper, we assemble several public datasets to create a PET dataset of 2800 scans and propose a deep learning architecture named "2Be3-Net" that associates a 2D feature extractor with a 3D CNN predictor. First, we take advantage of a 2D pre-trained model to extract feature maps from 2D PET slices. We then apply a 3D CNN on top of the concatenation of the previously extracted feature maps to compute patient-wise predictions. Experiments suggest that 2Be3-Net has an improved ability to exploit spatial information compared to 2D-only or 3D-only CNN solutions. We also evaluate our network on the prediction of clinical outcomes of head-and-neck cancer. The proposed pipeline outperforms PET radiomics approaches on the prediction of loco-regional recurrences and overall survival. Innovative deep learning architectures combining a pre-trained network with a 3D CNN could therefore be a great alternative to traditional CNN and radiomics approaches while empowering small and medium-sized datasets.
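The 2D-extractor-plus-3D-CNN idea can be sketched in PyTorch as below, with a ResNet-18 backbone standing in for the unspecified pre-trained 2D model; the tensor shapes and head design are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoBeThreeSketch(nn.Module):
    """2D slice-wise feature extractor followed by a small 3D CNN head."""
    def __init__(self, n_outputs: int = 1):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.encoder2d = nn.Sequential(*list(backbone.children())[:-2])  # drop pool+fc
        self.head3d = nn.Sequential(
            nn.Conv3d(512, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(64, n_outputs))

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (N, D, 3, H, W), i.e. D PET slices replicated to 3 channels
        n, d = volume.shape[:2]
        feats = self.encoder2d(volume.flatten(0, 1))                # (N*D, 512, h, w)
        feats = feats.unflatten(0, (n, d)).permute(0, 2, 1, 3, 4)   # (N, 512, D, h, w)
        return self.head3d(feats)                                   # patient-wise output
```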
Oropharyngeal cancer (OPC) patients with associated human papillomavirus (HPV) infection generally present more favorable outcomes than HPV-negative patients and, consequently, their treatment with radiation therapy may be potentially de-escalated. The diagnostic accuracy of a deep learning (DL) model to predict HPV status on computed tomography (CT) images was evaluated in this study, together with its ability to perform unsupervised heatmap-based localization of relevant regions in OPC and HPV infection, i.e., the primary tumor and lymph nodes, as a measure of its reliability. The dataset consisted of 767 patients from one internal and two public collections from The Cancer Imaging Archive and was split into training, validation and test sets using the ratio 60–20–20. Images were resampled to a resolution of 2 mm³ and a sub-volume of 96³ pixels was automatically cropped, which spanned from the nose until the start of the lungs. Models Genesis was fine-tuned for the classification task. Grad-CAM and Score-CAM were applied to the test subjects that belonged to the internal cohort (n = 24), and the overlap and Dice coefficients between the resulting heatmaps and the planning target volumes (PTVs) were calculated. Final train/validation/test area-under-the-curve (AUC) values of 0.9/0.87/0.87, accuracies of 0.83/0.82/0.79, and F1-scores of 0.83/0.79/0.74 were achieved. The reliability analysis showed an increased focus on dental artifacts in HPV-positive patients, whereas promising overlaps and moderate Dice coefficients with the PTVs were obtained for HPV-negative cases. These findings prove the necessity of performing reliability studies before a DL model is implemented in a real clinical setting, even if there is optimal diagnostic accuracy.
Lung Nodules Detection Using Inverse Surface Adaptive Thresholding (ISAT) and Artificial Neural Network
Early detection of lung nodules is important since it increases the probability of survival for lung cancer patients. Conventionally, radiologists manually examine lung Computed Tomography (CT) scan images and determine the possibility of malignant (cancerous) nodules. This process consumes a lot of time, since they have to examine each CT image and mark the lesions (nodules) manually. In addition, the radiologist may experience fatigue due to the large number of images to be analysed. Therefore, automated detection is proposed to assist the radiologist in detecting the nodules. The main novelty of this paper is the implementation of image processing methods to segment and classify lung nodules. Several image processing methods are utilized, namely the median filter, histogram adjustment, and Inverse Surface Adaptive Thresholding (ISAT), to segment the nodules in CT scan images. Then, 13 features are extracted and given as input to a Back Propagation Neural Network (BPNN) to classify the image as either benign or malignant; lung nodules smaller than 3 mm are considered benign (non-cancerous), and those larger than 3 mm are considered malignant (cancerous). The results show that ISAT segmentation achieved 99.9% accuracy, and the proposed classification obtained 90.30% accuracy.
A Drive Through Computer-Aided Diagnosis of Breast Cancer: A Comprehensive Study of Clinical and Technical Aspects
Oza, Parita
Sharma, Paawan
Patel, Samir
2022Conference Paper, cited 0 times
CBIS-DDSM
BREAST
Computer Aided Diagnosis (CADx)
Deep Learning
Convolutional Neural Networks (CNN)
Breast cancer is a very common and life-threatening disease in women worldwide, and the number of cases is increasing with time. Prevention of this disease is very challenging and remains an open question, but if it is detected early, the survival rate can be increased. Advances in deep learning have brought many changes to the development of Computer-Aided Diagnosis (CAD) of breast cancer. With the noteworthy progress of deep neural networks, the diagnostic capabilities of deep learning methods are closely approaching the expertise of a human. Despite the substantial improvements and advancements of deep learning, especially Convolutional Neural Networks (CNNs), some challenges must still be addressed to build an effective CAD system that can serve as a "second opinion" tool for practitioners. A comprehensive review of clinical aspects of breast cancer, such as risk factors, breast abnormalities, and BIRADS (Breast Imaging Reporting and Data System), is presented in this paper. The paper also presents recently developed CAD systems for breast cancer segmentation, detection, and classification, along with an overview of mammography datasets used in the literature and the challenges of applying CNNs to medical images.
EMD-Based Binary Classification of Mammograms
Ghosh, Anirban
Ramakant, Pooja
Ranjan, Priya
Deshpande, Anuj
Janardhanan, Rajiv
2022Book Section, cited 0 times
CMMD
Mammography is an inexpensive and noninvasive imaging tool that is commonly used in the detection of breast lesions. However, manual analysis of a mammographic image can be both time intensive and prone to error. In recent times, there has been a lot of interest in using computer-aided techniques to classify medical images. The current study explores the efficacy of an Earth Mover's Distance (EMD)-based mammographic image classification technique for identifying benign and malignant lumps in the images. We further present a novel leader recognition (LR) technique, which aids the classification process by identifying the most benign and most malignant images within their respective cohorts in the training set. The effect of image diversity in training sets on classification efficacy is also studied by considering training sets of different sizes. The proposed classification technique identifies malignant images with up to 80% sensitivity and provides a maximum F1 score of 72.73%.
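For one-dimensional gray-level distributions, the EMD reduces to SciPy's wasserstein_distance; the nearest-leader decision rule below is a simplified reading of the LR technique, with hypothetical argument names:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def histogram_emd(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """1D Earth Mover's Distance between the gray-level distributions of two images."""
    return wasserstein_distance(img_a.ravel(), img_b.ravel())

def classify(img, benign_leader, malignant_leader):
    # Assign the class of the nearer "leader" exemplar (names are hypothetical)
    d_mal = histogram_emd(img, malignant_leader)
    d_ben = histogram_emd(img, benign_leader)
    return "malignant" if d_mal < d_ben else "benign"
```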
Feature Enhanced and Context Inference Network for Pancreas Segmentation
Lou, Zheng-hao
Fan, Jian-cong
Ren, Yan-de
Tang, Lei-yu
2022Book Section, cited 0 times
Pancreas-CT
Segmenting the pancreas from CT images is of great significance for clinical diagnosis and research. Traditional encoder-decoder networks, which are widely used in medical image segmentation, may fail to address low tissue contrast and the large variability of pancreas shape and size due to underutilization of multi-level features and context information. To address these problems, this paper proposes a novel feature enhanced and context inference network (FECI-Net) for pancreas segmentation. Specifically, features are enhanced by imposing saliency region constraints to mine complementary regions and details between multi-level features, and Gated Recurrent Unit convolution (ConvGRU) is introduced in the decoder to fully exploit context and capture task-relevant fine features. In experimental evaluations on the NIH-TCIA dataset, our method improves IoU and Dice by 5.5% and 4.1%, respectively, over the baseline, outperforming current state-of-the-art medical image segmentation methods.
A Deep Learning-Based Approach for Mammographic Architectural Distortion Classification
Breast cancer is the deadliest cancer in females globally. Architectural distortion (AD) is the third most often reported irregularity on digital mammograms, after masses and microcalcifications. Visually identifying architectural distortion is problematic for radiologists because of its subtle appearance in dense breasts. Automatic early identification of breast cancer from a mammogram using computer algorithms may help doctors eliminate unwanted biopsies. This research presents a novel diagnostic method to identify AD ROIs in mammograms using a computer vision-based depth-wise CNN. The proposed methodology was examined on a private PINUM dataset of 2885 images and a public DDSM dataset of 3568 images, achieving sensitivities of 0.99 and 0.95, respectively. The experimental findings reveal that the proposed scheme outperforms SVM, KNN, and previous studies.
Binary Classification of Mammograms Using Horizontal Visibility Graph
Ghosh, Anirban
Ranjan, Priya
Chilamkurthy, Naga Srinivasarao
Gulati, Richa
Janardhanan, Rajiv
Ramakant, Pooja
2023Book Section, cited 0 times
CMMD
Mammography
BREAST
Algorithm Development
Horizontal visibility graph (HVG)
Hamming-Ipsen-Mikhailov (HIM) network similarity
Classification
Breast carcinoma, the most common cancer in women across the world, now accounts for almost 30% of new malignant tumor cases. Despite the high incidence rate, breast cancer mortality has been kept under control thanks to recent advances in molecular biology technology and improved standards of diagnosis and therapy. Our method strives to overcome the clinical dilemma of undetected and misdiagnosed breast cancer, which results in poor clinical prognosis; early computer-aided detection by mammography is an important aspect of this plan. In most diagnostic strategies currently in vogue, undue importance is given to one performance metric rather than a balanced result. In the present study, we aim to resolve this by first converting the mammograms into an equivalent graphical representation and then measuring the network similarity between two such generated graphs. We elaborate on the use of the horizontal visibility graph (HVG) representation to classify images and use the Hamming-Ipsen-Mikhailov (HIM) network similarity (distance) metric to develop a novel triage of mammograms according to the severity of the disease. Our HVG-HIM metric-based classification of mammograms had an accuracy of 88.37%, specificity of 92%, and sensitivity of 83.33%. We also clearly highlight the trade-off between performance and processing time.
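The horizontal visibility graph construction itself is compact enough to sketch directly; the snippet below builds HVG edges for a 1D signal (a mammogram would first have to be serialized into such a signal, e.g. row by row, which is an assumption here):

```python
import numpy as np

def horizontal_visibility_graph(series) -> list:
    """Edges of the horizontal visibility graph of a 1D signal.

    Nodes i and j (i < j) are connected iff every sample strictly between
    them is lower than both endpoints: x[k] < min(x[i], x[j]).
    """
    x = np.asarray(series, dtype=float)
    edges = []
    for i in range(len(x) - 1):
        ceiling = -np.inf                  # max of samples seen between i and j
        for j in range(i + 1, len(x)):
            if ceiling < min(x[i], x[j]):
                edges.append((i, j))
            ceiling = max(ceiling, x[j])
            if ceiling >= x[i]:            # no later node can see i any more
                break
    return edges
```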
Detection of Acute Myeloid Leukemia from Peripheral Blood Smear Images Using Transfer Learning in Modified CNN Architectures
Rahman, Jeba Fairooz
Ahmad, Mohiuddin
2023Book Section, cited 0 times
AML-Cytomorphology_LMU
Acute myeloid leukemia
Pathomics
Transfer learning
AlexNet
Convolutional Neural Network (CNN)
VGG-16 Convolutional Neural Network
ResNet50
DenseNet
Computer Aided Diagnosis (CADx)
Acute myeloid leukemia (AML), the most fatal hematological malignancy, is characterized by immature leukocyte proliferation in the bone marrow and peripheral blood. Conventional diagnosis of AML, performed by trained examiners using microscopic images of a peripheral blood smear, is a time-consuming and tedious process. Considering these issues, this study proposes a transfer learning-based approach for the accurate detection of immature leukocytes to diagnose AML. First, the data were resized and transformed at the pre-processing stage, then augmentation was performed on the training data. Finally, pre-trained convolutional neural network architectures, modified AlexNet, ResNet50, DenseNet161, and VGG-16, were used with transfer learning to detect immature leukocytes. After model training and validation with different parameters, the models with the best parameters were applied to the test set. Among the models, modified AlexNet achieved 96.52% accuracy, 94.94% AUC, and an average recall, precision, and F1-score of 97.00% each. The investigative results of this study demonstrate that the proposed approach can aid the diagnosis of AML through an efficient screening of immature leukocytes.
Local Binary Pattern-Based Texture Analysis to Predict IDH Genotypes of Glioma Cancer Using Supervised Machine Learning Classifiers
Machine learning-based quantitative assessment of glioma has recently gained attention among researchers in the field of medical image analysis. Such analysis uses either hand-crafted radiographic features with radiomics-based methods or auto-extracted features using deep learning-based methods. Radiomics-based methods cover a wide spectrum of radiographic features including texture, shape, volume, intensity, histogram, etc. The objective of this paper is to demonstrate the discriminative role of texture for the molecular categorization of glioma using supervised machine learning techniques. This work aims to make state-of-the-art machine learning solutions available for magnetic resonance imaging (MRI)-based genomic analysis of glioma as a simple and sufficient technique based on a single feature type, i.e., texture. It demonstrates the value of texture features extracted with the simple, computationally efficient local binary pattern (LBP) method for isocitrate dehydrogenase (IDH)-based discrimination of glioma into IDH mutant and IDH wild type. Such texture-based discriminative analysis alone can facilitate an immediate recommendation for further diagnostic decisions and personalized treatment plans for glioma patients.
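LBP texture descriptors of this kind are nearly a one-liner in scikit-image; the radius, number of sampling points, and the 'uniform' method below are illustrative choices:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(img: np.ndarray, radius: int = 2, n_points: int = 16) -> np.ndarray:
    """Rotation-invariant uniform LBP histogram as a texture descriptor of an MRI slice."""
    codes = local_binary_pattern(img, P=n_points, R=radius, method="uniform")
    hist, _ = np.histogram(codes, bins=int(codes.max()) + 1, density=True)
    return hist   # feature vector for a supervised classifier
```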
Breast DCE-MRI Segmentation for Lesion Detection Using Clustering with Fireworks Algorithm
In this paper, kidney lesion segmentation in MRI using clustering with the salp swarm algorithm (SSA) is proposed. The segmentation of kidney MRI is degraded by noise and intensity inhomogeneities (IIHs) in the MR images. Therefore, at the outset, the MR images are denoised using a median filter, and the IIHs are corrected using a max filter-based method. A hard-clustering technique using SSA is developed to segment the MR images, and the lesions are finally extracted from the segmented images. The proposed method is compared with the K-means algorithm using the well-known clustering validity measure DB-index. The experimental results demonstrate that the proposed method performs better than the K-means algorithm in the segmentation of kidney lesions in MRI.
Pulmonary Lung Cancer Classification Using Deep Neural Networks
Goswami, Jagriti
Singh, Koushlendra Kumar
2023Book Section, cited 0 times
Lung-PET-CT-Dx
Deep Learning
Transfer learning
Classification
Algorithm Development
Computer Aided Diagnosis (CADx)
Lung cancer is the leading cause of cancer-related deaths globally. Computer-assisted detection (CAD) systems have previously been used for the diagnosis of various diseases and hence can serve as an efficient tool for lung cancer diagnosis. In this paper, we study the problem of lung cancer classification using chest computed tomography (CT) and positron emission tomography-computed tomography (PET-CT) scans. A subset of the publicly available Large-Scale CT and PET/CT Dataset for Lung Cancer Diagnosis (Lung-PET-CT-Dx) is used to train four different deep learning models via transfer learning to classify three types of lung cancer, adenocarcinoma, small cell carcinoma, and squamous cell carcinoma, by passing raw nodule patches to the network. The models are evaluated on metrics such as accuracy, precision, recall, F1-score, and Cohen's kappa score. ROC curves and confusion matrices are also presented to provide a graphical representation of the models' performance.
Classification of Lung Nodule from CT and PET/CT Images Using Artificial Neural Network
This work aims to design and develop an artificial neural network (ANN) architecture for the classification of cancerous tissue in the lung. A sequential model with ReLU and sigmoid activation functions is used. The present work detects and classifies tumor cells into four categories of lung cancer nodules: adenocarcinoma, squamous-cell carcinoma, large-cell carcinoma, and small-cell carcinoma. Computed tomography (CT) and positron emission tomography (PET) scan DICOM images are used for the classification. The proposed approach was validated on a subset of the original dataset, with a total of 6500 images used in the experiment. The approach feeds the CT scan images into the ANN, which classifies each image into the correct type. The dataset, titled "A Large-Scale CT and PET/CT Dataset for Lung Cancer Diagnosis," is provided by The Cancer Imaging Archive (TCIA). The tumor cells are classified by the ANN architecture with 99.6% validation accuracy and a loss of 4.35%.
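A sequential ANN with ReLU and sigmoid activations along the lines described could be sketched in Keras as follows; the input shape and layer sizes are assumptions, not the authors' configuration:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Four-way nodule-type classifier; input shape and widths are illustrative only
model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(64, activation="sigmoid"),
    layers.Dense(4, activation="softmax"),   # four carcinoma subtypes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```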
We present the results of simulations of heating by low-intensity (non-ablating) focused ultrasound. The simulations are aimed at modelling hyperthermia treatment of organs affected by cancer [1], particularly the prostate. The studies were carried out with the objective of developing low-cost medical devices for use in low- and middle-income countries (LMIC). Our innovation has been to favor the use of free and open-source tools, combining them so as to achieve realistic representations of the relevant tissue layers, regarding their geometric as well as their acoustic and thermal properties. The combination of tools we have selected is available to researchers in LMIC, to favor the emergence of local research initiatives. To achieve precision in the shapes and locations of the models, we performed segmentation of Computed Tomography scan images obtained from public databases. The 3D representations thus generated were then inputted as voxelized matrix regions into a calculation grid for pressure field and heat simulations, using open-source MATLAB® packages. We report on the results of simulations using this combination of software tools.
A Comprehensive Survey on Deep Learning-Based Pulmonary Nodule Identification on CT Images
Christina Sweetline, B.
Vijayakumaran, C.
2023Book Section, cited 0 times
QIN LUNG CT
Lung cancer is among the most rapidly increasing malignant tumor illnesses in terms of morbidity and death, posing a significant risk to human health. CT screening has shown to be beneficial in detecting lung cancer in its early stages, when it manifests as pulmonary nodules. Low-Dose Computed Tomography (LDCT) scanning has proven to improve the accuracy of detecting and categorizing lung nodules during the early stages, lowering the death rate. Radiologists can discover lung nodules by looking at images of the lungs; however, because specialists are few and overloaded, proper assessment of image data is difficult. With the rapid flood of CT data, it is critical for radiologists to use an efficient Computer-Assisted Detection (CAD) system for analyzing lung nodules automatically. CNNs are found to have a significant impact on early lung cancer detection and management. This research examines the current approaches for detecting lung nodules automatically, and describes the experimental standards for nodule analysis along with publicly available datasets of lung CT images. Finally, this field's research trends, current issues, and future directions are discussed. It is concluded that CNNs have significantly changed early lung cancer diagnosis and treatment, and this review will give medical research groups the knowledge they need to understand CNNs and use them to enhance the overall healthcare system.
Unsupervised Data Drift Detection Using Convolutional Autoencoders: A Breast Cancer Imaging Scenario
Imaging AI models are starting to reach real clinical settings, where model drift can happen due to diverse factors. That is why model monitoring must be set up in order to prevent model degradation over time. In this context, we test and propose a data drift detection solution based on unsupervised deep learning for a breast cancer imaging setting. A convolutional autoencoder is trained on a baseline set of expected images, and controlled drifts are introduced into the data in order to test whether a set of metrics extracted from the reconstructions and the latent space is able to distinguish them. We show that this is a valid tool that manages to detect subtle differences even within these complex kinds of images.
A Reversible Medical Image Watermarking for ROI Tamper Detection and Recovery
Bhalerao, Siddharth
Ansari, Irshad Ahmad
Kumar, Anil
Circuits, Systems, and Signal Processing2023Journal Article, cited 0 times
Website
LIDC-IDRI
PDMR-BL0293-F563
Security
Algorithm Development
Medical data security is an active area of research. With the increasing rate of digitalization, the telemedicine industry is experiencing rapid growth, and medical data security has become more important than ever. In this work, a region-based reversible medical image watermarking scheme is proposed, with ROI (region of interest) tamper detection and recovery capabilities. The medical image is divided into ROI and RONI (region of non-interest) regions. In the ROI, authentication data are embedded using a prediction-error expansion technique. A compressed copy of the ROI is embedded in the RONI using a difference histogram expansion technique. Reversible techniques are used for data embedding in both ROI and RONI. The proposed scheme authenticates both ROI and RONI for tampering. The scheme is 100% reversible when there is no tampering; it checks for ROI tampering and recovers the ROI in its original state when tampering is detected. The scheme performs equally well on different classes of medical images, providing average PSNR and SSIM of 55 dB and 0.99, respectively, for different types of medical images.
Semantic imaging features predict disease progression and survival in glioblastoma multiforme patients
Peeken, J. C.
Hesse, J.
Haller, B.
Kessel, K. A.
Nusslin, F.
Combs, S. E.
Strahlenther Onkol2018Journal Article, cited 1 times
Website
Radiomics
Semantic features
VASARI
TCGA-GBM
REMBRANDT
Biomarker
Prognostic model
BACKGROUND: For glioblastoma (GBM), multiple prognostic factors have been identified. Semantic imaging features were shown to be predictive of survival. No similar data have been generated for the prediction of progression. The aim of this study was to assess the predictive value of the semantic Visually AccesAble REMBRANDT [repository for molecular brain neoplasia data] Images (VASARI) feature set for progression and survival, and to create joint prognostic models in combination with clinical and pathological information. METHODS: 189 patients were retrospectively analyzed. Age, Karnofsky performance status, gender, and MGMT promoter methylation and IDH mutation status were assessed. VASARI features were determined on pre- and postoperative MRIs. Predictive potential was assessed with univariate analyses and Kaplan-Meier survival curves. Following variable selection and resampling, multivariate Cox regression models were created. Predictive performance was tested on patient test sets and compared between groups. The frequency of selection for single variables and variable pairs was determined. RESULTS: For progression-free survival (PFS) and overall survival (OS), univariate significant associations were shown for 9 and 10 VASARI features, respectively. Multivariate models yielded concordance indices significantly different from random: 0.657, 0.636, 0.694, and 0.716 for the clinical, imaging, combined, and combined+MGMT models for OS, and 0.602, 0.604, 0.633, and 0.643 for PFS. "Multilocality," "deep white-matter invasion," "satellites," and "ependymal invasion" were selected over-proportionally often for multivariate model generation, underlining their importance. CONCLUSIONS: We demonstrated a predictive value of several qualitative imaging features for progression and survival. The performance of prognostic models was increased by combining clinical, pathological, and imaging features.
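Multivariate Cox models of this kind can be fitted with the lifelines package; the toy DataFrame below only illustrates the call signature, with made-up values and assumed column names:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Columns are assumptions: survival time, event indicator, and numerically
# coded clinical plus VASARI-derived predictors
df = pd.DataFrame({
    "os_months":     [14.2, 6.1, 22.8, 9.5, 18.3, 4.7, 11.0, 26.4],
    "event":         [1, 1, 0, 1, 0, 1, 1, 0],
    "age":           [61, 55, 48, 70, 52, 66, 59, 45],
    "multilocality": [1, 0, 0, 1, 0, 1, 1, 0],
})
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
print(cph.concordance_index_)   # concordance index of the fitted model
```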
Magnetic resonance imaging-based radiomic features for extrapolating infiltration levels of immune cells in lower-grade gliomas
Zhang, X.
Liu, S.
Zhao, X.
Shi, X.
Li, J.
Guo, J.
Niedermann, G.
Luo, R.
Zhang, X.
Strahlentherapie und Onkologie2020Journal Article, cited 3 times
Website
BRAIN
Radiomics
Radiogenomics
Lower-grade glioma (LGG)
PURPOSE: To extrapolate the infiltration levels of immune cells in patients with lower-grade gliomas (LGGs) using magnetic resonance imaging (MRI)-based radiomic features. METHODS: A retrospective dataset of 516 patients with LGGs from The Cancer Genome Atlas (TCGA) database was analysed for the infiltration levels of six types of immune cells using Tumor IMmune Estimation Resource (TIMER) based on RNA sequencing data. Radiomic features were extracted from 107 patients whose pre-operative MRI data are available in The Cancer Imaging Archive; 85 and 22 of these patients were assigned to the training and testing cohort, respectively. The least absolute shrinkage and selection operator (LASSO) was applied to select optimal radiomic features to build the radiomic signatures for extrapolating the infiltration levels of immune cells in the training cohort. The developed radiomic signatures were examined in the testing cohort using Pearson's correlation. RESULTS: The infiltration levels of B cells, CD4+ T cells, CD8+ T cells, macrophages, neutrophils and dendritic cells negatively correlated with overall survival in the 516 patient cohort when using univariate Cox's regression. Age, Karnofsky Performance Scale, WHO grade, isocitrate dehydrogenase mutant status and the infiltration of neutrophils correlated with survival using multivariate Cox's regression analysis. The infiltration levels of the 6 cell types could be estimated by radiomic features in the training cohort, and their corresponding radiomic signatures were built. The infiltration levels of B cells, CD8+ T cells, neutrophils and macrophages estimated by radiomics correlated with those estimated by TIMER in the testing cohort. Combining clinical/genomic features with the radiomic signatures only slightly improved the prediction of immune cell infiltrations. CONCLUSION: We developed MRI-based radiomic models for extrapolating the infiltration levels of immune cells in LGGs. Our results may have implications for treatment planning.
Segmentation of prostate and prostate zones using deep learning
Zavala-Romero, Olmo
Breto, Adrian L.
Xu, Isaac R.
Chang, Yu-Cherng C.
Gautney, Nicole
Dal Pra, Alan
Abramowitz, Matthew C.
Pollack, Alan
Stoyanova, Radka
Strahlentherapie und Onkologie2020Journal Article, cited 0 times
PROSTATEx
Purpose: Develop a deep-learning-based segmentation algorithm for the prostate and its peripheral zone (PZ) that is reliable across multiple MRI vendors. Methods: This is a retrospective study. The dataset consisted of 550 MRIs (Siemens-330, General Electric [GE]-220). A multistream 3D convolutional neural network is used for automatic segmentation of the prostate and its PZ using T2-weighted (T2-w) MRI. The prostate and PZ were manually contoured on axial T2-w series. The network uses axial, coronal, and sagittal T2-w series as input. Preprocessing of the input data includes bias correction, resampling, and image normalization. A dataset from two MRI vendors (Siemens and GE) is used to test the proposed network. Six different models were trained, three for the prostate and three for the PZ. Of the three, two were trained on data from each vendor separately, and a third (Combined) on the aggregate of the datasets. The Dice coefficient (DSC) is used to compare the manual and predicted segmentations. Results: For prostate segmentation, the Combined model obtained DSCs of 0.893 ± 0.036 and 0.825 ± 0.112 (mean ± standard deviation) on Siemens and GE, respectively. For the PZ, the best DSCs were from the Combined model: 0.811 ± 0.079 and 0.788 ± 0.093. While the Siemens model underperformed on the GE dataset and vice versa, the Combined model achieved robust performance on both datasets. Conclusion: The proposed network has a performance comparable to the interexpert variability for segmenting the prostate and its PZ. Combining images from different MRI vendors in the training of the network is of paramount importance for building a universal model for prostate and PZ segmentation.
Isodoses-a set theory-based patient-specific QA measure to compare planned and delivered isodose distributions in photon radiotherapy
Baran, M.
Tabor, Z.
Kabat, D.
Tulik, M.
Jelen, K.
Rzecki, K.
Forostianyi, B.
Balabuszek, K.
Koziarski, R.
Waligorski, M. P. R.
Strahlenther Onkol2022Journal Article, cited 0 times
Website
Head-Neck Cetuximab
Radiation Therapy
Confusion matrix
Dose distribution
Dose-volume histogram
Gamma analysis
Quality assurance
BACKGROUND: The gamma index and dose-volume histogram (DVH)-based patient-specific quality assurance (QA) measures commonly applied in radiotherapy planning are unable to simultaneously deliver detailed locations and magnitudes of discrepancy between isodoses of planned and delivered dose distributions. By exploiting statistical classification performance measures such as sensitivity or specificity, compliance between a planned and delivered isodose may be evaluated locally, both for organs-at-risk (OAR) and the planning target volume (PTV), at any specified isodose level. Thus, a patient-specific QA tool may be developed to supplement those presently available in clinical radiotherapy. MATERIALS AND METHODS: A method was developed to locally establish and report dose delivery errors in three-dimensional (3D) isodoses of planned (reference) and delivered (evaluated) dose distributions, simultaneously as a function of dose level and spatial location. At any given isodose level, the total volume of delivered dose containing the reference and the evaluated isodoses is locally decomposed into four subregions: true positive-subregions within both reference and evaluated isodoses, true negative-outside of both of these isodoses, false positive-inside the evaluated isodose but not the reference isodose, and false negative-inside the reference isodose but not the evaluated isodose. Such subregions may be established over the whole volume of delivered dose. This decomposition allows the construction of a confusion matrix and calculation of various indices to quantify the discrepancies between the selected planned and delivered isodose distributions, over the complete range of values of dose delivered. The 3D projection and visualization of the spatial distribution of these discrepancies facilitate the application of the developed method in clinical practice. RESULTS: Several clinical photon radiotherapy plans were analyzed using the developed method. In some plans at certain isodose levels, dose delivery errors were found at anatomically significant locations. These errors were not otherwise highlighted-neither by gamma analysis nor by DVH-based QA measures. A specially developed 3D projection tool to visualize the spatial distribution of such errors against anatomical features of the patient aids in the proposed analysis of therapy plans. CONCLUSIONS: The proposed method is able to spatially locate delivery errors at selected isodose levels and may supplement the presently applied gamma analysis and DVH-based QA measures in patient-specific radiotherapy planning.
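The voxel-wise decomposition described above reduces to thresholding both dose grids at the chosen isodose level and counting the four label combinations. A minimal sketch, where the dose grids and the 30-Gy level are illustrative assumptions:
```python
# Minimal sketch: threshold planned (reference) and delivered (evaluated)
# dose grids at one isodose level and label every voxel TP/TN/FP/FN.
import numpy as np

rng = np.random.default_rng(0)
planned = rng.random((64, 64, 64)) * 60.0                  # reference dose in Gy (assumed)
delivered = planned + rng.normal(0.0, 1.0, planned.shape)  # evaluated dose (assumed)
level = 30.0                                               # isodose level under inspection

ref, ev = planned >= level, delivered >= level
tp = np.sum(ref & ev)        # inside both isodoses
tn = np.sum(~ref & ~ev)      # outside both isodoses
fp = np.sum(~ref & ev)       # inside evaluated isodose only
fn = np.sum(ref & ~ev)       # inside reference isodose only
print(f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")
```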
Getting Started with Programming for Radiologists Using the Software R
Tavakoli, Anoshirwan Andrej
2021Journal Article, cited 0 times
LUAD-CT-Survival
Somatic mutations associated with MRI-derived volumetric features in glioblastoma
Gutman, David A
Dunn Jr, William D
Grossmann, Patrick
Cooper, Lee AD
Holder, Chad A
Ligon, Keith L
Alexander, Brian M
Aerts, Hugo JWL
Neuroradiology2015Journal Article, cited 45 times
Website
Radiomics
BRAIN
Glioblastoma Multiforme (GBM)
Magnetic Resonance Imaging (MRI)
INTRODUCTION: MR imaging can noninvasively visualize tumor phenotype characteristics at the macroscopic level. Here, we investigated whether somatic mutations are associated with and can be predicted by MRI-derived tumor imaging features of glioblastoma (GBM). METHODS: Seventy-six GBM patients were identified from The Cancer Imaging Archive for whom preoperative T1-contrast (T1C) and T2-FLAIR MR images were available. For each tumor, a set of volumetric imaging features and their ratios were measured, including necrosis, contrast enhancing, and edema volumes. Imaging genomics analysis assessed the association of these features with mutation status of nine genes frequently altered in adult GBM. Finally, area under the curve (AUC) analysis was conducted to evaluate the predictive performance of imaging features for mutational status. RESULTS: Our results demonstrate that MR imaging features are strongly associated with mutation status. For example, TP53-mutated tumors had significantly smaller contrast enhancing and necrosis volumes (p = 0.012 and 0.017, respectively) and RB1-mutated tumors had significantly smaller edema volumes (p = 0.015) compared to wild-type tumors. MRI volumetric features were also found to significantly predict mutational status. For example, AUC analysis results indicated that TP53, RB1, NF1, EGFR, and PDGFRA mutations could each be significantly predicted by at least one imaging feature. CONCLUSION: MRI-derived volumetric features are significantly associated with and predictive of several cancer-relevant, drug-targetable DNA mutations in glioblastoma. These results may shed insight into unique growth characteristics of individual tumors at the macroscopic level resulting from molecular events as well as increase the use of noninvasive imaging in personalized medicine.
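The AUC analysis described above amounts to scoring a volumetric feature against mutation labels. A minimal sketch with simulated data; the labels, feature values, and effect size are assumptions:
```python
# Minimal sketch: AUC of a single volumetric feature for predicting mutation
# status. All data are simulated assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
tp53_mutant = rng.binomial(1, 0.3, size=76)          # mutation labels (assumed)
necrosis_volume = rng.normal(20.0, 5.0, size=76) - 4.0 * tp53_mutant  # smaller in mutants (assumed)
auc = roc_auc_score(tp53_mutant, -necrosis_volume)   # negate: smaller volume => mutant
print(f"AUC = {auc:.2f}")
```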
New prognostic factor telomerase reverse transcriptase promotor mutation presents without MR imaging biomarkers in primary glioblastoma
Ersoy, Tunc F
Keil, Vera C
Hadizadeh, Dariusch R
Gielen, Gerrit H
Fimmers, Rolf
Waha, Andreas
Heidenreich, Barbara
Kumar, Rajiv
Schild, Hans H
Simon, Matthias
Neuroradiology2017Journal Article, cited 1 times
Website
Radiomics
Radiogenomics
Glioblastoma Multiforme (GBM)
REMBRANDT
TERT mutation
VASARI
Magnetic Resonance Imaging (MRI)
PURPOSE: Magnetic resonance (MR) imaging biomarkers can assist in the non-invasive assessment of the genetic status in glioblastomas (GBMs). Telomerase reverse transcriptase (TERT) promoter mutations are associated with a negative prognosis. This study was performed to identify MR imaging biomarkers to forecast the TERT mutation status. METHODS: Pre-operative MRIs of 64/67 genetically confirmed primary GBM patients (51/67 TERT-mutated with rs2853669 polymorphism) were analyzed according to Visually AcceSAble Rembrandt Images (VASARI) ( https://wiki.cancerimagingarchive.net/display/Public/VASARI+Research+Project ) imaging criteria by three radiological raters. TERT mutation and O6-methylguanine-DNA methyltransferase (MGMT) hypermethylation data were obtained through direct sequencing and pyrosequencing as described in a previous study. Clinical data were derived from a prospectively maintained electronic database. Associations of potential imaging biomarkers and genetic status were assessed by Fisher and Mann-Whitney U tests and stepwise linear regression. RESULTS: No imaging biomarkers could be identified to predict TERT mutational status (alone or in conjunction with TERT promoter polymorphism rs2853669 AA-allele). TERT promoter mutations were more common in patients with tumor-associated seizures as first symptom (26/30 vs. 25/37, p = 0.07); these showed significantly smaller tumors [13.1 (9.0-19.0) vs. 24.0 (16.6-37.5) cm³; p = 0.007] and prolonged median overall survival [17.0 (11.5-28.0) vs. 9.0 (4.0-12.0) months; p = 0.02]. TERT-mutated GBMs were underrepresented in the extended angularis region (p = 0.03), whereas MGMT-methylated GBMs were overrepresented in the corpus callosum (p = 0.03) and underrepresented temporomesially (p = 0.01). CONCLUSION: Imaging biomarkers for prediction of TERT mutation status remain weak and cannot be derived from the VASARI protocol. Tumor-associated seizures are less common in TERT mutated glioblastomas.
Glioblastoma radiomics: can genomic and molecular characteristics correlate with imaging response patterns?
Soike, Michael H
McTyre, Emory R
Shah, Nameeta
Puchalski, Ralph B
Holmes, Jordan A
Paulsson, Anna K
Miller, Lance D
Cramer, Christina K
Lesser, Glenn J
Strowd, Roy E
Neuroradiology2018Journal Article, cited 1 times
Website
Ivy GAP
Glioblastoma Multiforme (GBM)
Radiomics
pseudoprogression (PSP)
Imaging-based stratification of adult gliomas prognosticates survival and correlates with the 2021 WHO classification
Kamble, A. N.
Agrawal, N. K.
Koundal, S.
Bhargava, S.
Kamble, A. N.
Joyner, D. A.
Kalelioglu, T.
Patel, S. H.
Jain, R.
Neuroradiology2022Journal Article, cited 0 times
Website
REMBRANDT
VASARI
TCGA-GBM
TCGA-LGG
Glioblastoma Multiforme (GBM)
Glioma
Isocitrate dehydrogenase (IDH) mutation
Magnetic Resonance Imaging (MRI)
Classification
BACKGROUND: Because of the lack of global accessibility, delay, and cost-effectiveness of genetic testing, there is a clinical need for an imaging-based stratification of gliomas that can prognosticate survival and correlate with the 2021-WHO classification. METHODS: In this retrospective study, adult primary glioma patients with pre-surgery/pre-treatment MRI brain images having T2, FLAIR, T1, T1 post-contrast, DWI sequences, and survival information were included in the TCIA training dataset (n = 275) and an independent validation dataset (n = 200). A flowchart for imaging-based stratification of adult gliomas (IBGS) was created in consensus by three authors to encompass all adult glioma types. Diagnostic features used were T2-FLAIR mismatch sign, central necrosis with peripheral enhancement, diffusion restriction, and continuous cortex sign. Roman numerals (I, II, and III) denote IBGS types. Two independent teams of three and two radiologists, blinded to genetic, histology, and survival information, manually read MRI into three types based on the flowchart. Overall survival analysis was done using age-adjusted Cox regression analysis, which provided both hazard ratio (HR) and area under the curve (AUC) for each stratification system (IBGS and 2021-WHO). The sensitivity and specificity of each IBGS type were analyzed with a cross-table to identify the corresponding 2021-WHO genotype. RESULTS: Imaging-based stratification was statistically significant in predicting survival in both datasets with good inter-observer agreement (age-adjusted Cox regression, AUC > 0.5, k > 0.6, p < 0.001). IBGS type-I, type-II, and type-III gliomas had good specificity in identifying IDHmut 1p19q-codel oligodendroglioma (training - 97%, validation - 85%); IDHmut 1p19q non-codel astrocytoma (training - 80%, validation - 85.9%); and IDHwt glioblastoma (training - 76.5%, validation - 87.3%) respectively (p-value < 0.01). CONCLUSIONS: Imaging-based stratification of adult diffuse gliomas predicted patient survival and correlated well with the 2021-WHO glioma classification.
Identifying key factors for predicting O6-Methylguanine-DNA methyltransferase status in adult patients with diffuse glioma: a multimodal analysis of demographics, radiomics, and MRI by variable Vision Transformer
Usuzaki, T.
Takahashi, K.
Inamori, R.
Morishita, Y.
Shizukuishi, T.
Takagi, H.
Ishikuro, M.
Obara, T.
Takase, K.
Neuroradiology2024Journal Article, cited 0 times
Website
UCSF-PDGM
Radiomics
pyRadiomics
O6-methylguanine-DNA methyl transferase (MGMT)
Deep Learning
glioma
variable vision transformer (vViT)
vision transformer (ViT)
PURPOSE: This study aimed to perform multimodal analysis by variable vision transformer (vViT) in predicting O6-methylguanine-DNA methyl transferase (MGMT) promoter status among adult patients with diffuse glioma using demographics (sex and age), radiomic features, and MRI. METHODS: The training and test datasets contained 122 patients with 1,570 images and 30 patients with 484 images, respectively. The radiomic features were extracted from enhancing tumors (ET), necrotic tumor cores (NCR), and the peritumoral edematous/infiltrated tissues (ED) using contrast-enhanced T1-weighted images (CE-T1WI) and T2-weighted images (T2WI). The vViT had 9 sectors: 1 demographic sector, 6 radiomic sectors (CE-T1WI ET, CE-T1WI NCR, CE-T1WI ED, T2WI ET, T2WI NCR, and T2WI ED), and 2 image sectors (CE-T1WI and T2WI). Accuracy and area under the curve of receiver-operating characteristics (AUC-ROC) were calculated for the test dataset. The performance of vViT was compared with AlexNet, GoogleNet, VGG16, and ResNet by McNemar and DeLong tests. Permutation importance (PI) analysis with the Mann-Whitney U test was performed. RESULTS: The accuracy was 0.833 (95% confidence interval [95%CI]: 0.714-0.877) and the AUC-ROC was 0.840 (0.650-0.995) in the patient-based analysis. The vViT had higher accuracy than VGG16 and ResNet, and higher AUC-ROC than GoogleNet (p < 0.05). The ED radiomic features extracted from the T2-weighted image demonstrated the highest importance (PI = 0.239, 95%CI: 0.237-0.240) among all other sectors (p < 0.0001). CONCLUSION: The vViT is a competent deep learning model in predicting MGMT status. The ED radiomic features of the T2-weighted image demonstrated the most dominant contribution.
Clinical and imaging characteristics of supratentorial glioma with IDH2 mutation
Ikeda, S.
Sakata, A.
Arakawa, Y.
Mineharu, Y.
Makino, Y.
Takeuchi, Y.
Fushimi, Y.
Okuchi, S.
Nakajima, S.
Otani, S.
Nakamoto, Y.
Neuroradiology2024Journal Article, cited 0 times
Website
UCSF-PDGM
TCGA-LGG
Glioma
Isocitrate Dehydrogenase
Magnetic Resonance Imaging
T2-FLAIR Mismatch Sign
PURPOSE: The rarity of IDH2 mutations in supratentorial gliomas has led to gaps in understanding their radiological characteristics, potentially resulting in misdiagnosis based solely on negative IDH1 immunohistochemical staining. We aimed to investigate the clinical and imaging characteristics of IDH2-mutant gliomas. METHODS: We analyzed imaging data from adult patients with pathologically confirmed diffuse lower-grade gliomas and known IDH1/2 alteration and 1p/19q codeletion statuses obtained from the records of our institute (January 2011 to August 2022, Cohort 1) and The Cancer Imaging Archive (TCIA, Cohort 2). Two radiologists evaluated clinical information and radiological findings using standardized methods. Furthermore, we compared the data for IDH2-mutant and IDH-wildtype gliomas. Multivariate logistic regression was used to identify the predictors of IDH2 mutation status, and receiver operating characteristic curve analysis was employed to assess the predictive performance of the model. RESULTS: Of the 20 IDH2-mutant supratentorial gliomas, 95% were in the frontal lobes, with 75% classified as oligodendrogliomas. Age and the T2-FLAIR discordance were independent predictors of IDH2 mutations. Receiver operating characteristic curve analysis for the model using age and T2-FLAIR discordance demonstrated a strong potential for discriminating between IDH2-mutant and IDH-wildtype gliomas, with an area under the curve of 0.96 (95% CI, 0.91-0.98, P = .02). CONCLUSION: A high frequency of oligodendrogliomas with 1p/19q codeletion was observed in IDH2-mutated gliomas. Younger age and the presence of the T2-FLAIR discordance were associated with IDH2 mutations and these findings may help with precise diagnoses and treatment decisions in clinical practice.
Visualizing the association between the location and prognosis of isocitrate dehydrogenase wild-type glioblastoma: a voxel-wise Cox regression analysis with open-source datasets
Atsukawa, N.
Tatekawa, H.
Ueda, D.
Oura, T.
Matsushita, S.
Horiuchi, D.
Takita, H.
Mitsuyama, Y.
Baba, R.
Tsukamoto, T.
Shimono, T.
Miki, Y.
Neuroradiology2024Journal Article, cited 0 times
Website
UCSF-PDGM
UPENN-GBM
Brain atlas
Glioblastoma
Magnetic Resonance Imaging (MRI)
Survival analysis
Tumor location
PURPOSE: This study examined the correlation between tumor location and prognosis in patients with glioblastoma using magnetic resonance images of various isocitrate dehydrogenase (IDH) wild-type glioblastomas from The Cancer Imaging Archive (TCIA). The relationship between tumor location and prognosis was visualized using voxel-wise Cox regression analysis. METHODS: Participants with IDH wild-type glioblastoma were selected, and their survival and demographic data and tumor characteristics were collected from TCIA datasets. Post-contrast-enhanced T1-weighted imaging, T2-fluid attenuated inversion recovery imaging, and tumor segmentation data were also compiled. Following affine registration of each image and tumor segmentation region of interest to the MNI standard space, a voxel-wise Cox regression analysis was conducted. This analysis determined the association of the presence or absence of the tumor with the prognosis in each voxel after adjusting for the covariates. RESULTS: The study included 769 participants (464 men and 305 women; mean age, 63 ± 12 years [standard deviation]). The hazard ratio map indicated that tumors in the medial frontobasal region and around the third and fourth ventricles were associated with poorer prognoses, underscoring the challenges of complete resection and treatment accessibility in these areas regardless of the tumor volume. Conversely, tumors located in the right temporal and occipital lobes had favorable prognoses. CONCLUSION: This study showed an association between tumor location and prognosis. These findings may assist clinicians in developing more precise and effective treatment plans for patients with glioblastoma to improve their management.
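The voxel-wise Cox regression above fits one age-adjusted model per voxel, with a binary "tumor occupies this voxel" indicator as the covariate of interest. A minimal sketch with lifelines; the cohort size, grid, and survival data are simulated assumptions:
```python
# Minimal sketch: one age-adjusted Cox model per voxel; the per-voxel hazard
# ratio for tumor presence is stored into a map. Data are simulated assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n_patients, n_voxels = 100, 50                                 # tiny grid for illustration
occupancy = rng.binomial(1, 0.2, size=(n_patients, n_voxels))  # tumor-in-voxel indicators
base = pd.DataFrame({
    "time": rng.exponential(400.0, n_patients),                # survival time in days (assumed)
    "event": rng.binomial(1, 0.8, n_patients),                 # death observed
    "age": rng.normal(63.0, 12.0, n_patients),
})

hr_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    df = base.assign(tumor=occupancy[:, v])
    if df["tumor"].nunique() < 2:                              # skip never/always-occupied voxels
        continue
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    hr_map[v] = np.exp(cph.params_["tumor"])                   # per-voxel hazard ratio
```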
Development of a nomogram combining clinical staging with 18F-FDG PET/CT image features in non-small-cell lung cancer stage I–III
Desseroit, Marie-Charlotte
Visvikis, Dimitris
Tixier, Florent
Majdoub, Mohamed
Perdrisot, Rémy
Guillevin, Rémy
Le Rest, Catherine Cheze
Hatt, Mathieu
European Journal of Nuclear Medicine and Molecular Imaging2016Journal Article, cited 34 times
Website
RIDER Lung PET-CT
3D Slicer
Texture features
18F-FDG PET/CT
Non-Small Cell Lung Cancer (NSCLC)
MedCalc
Kaplan-Meier curve
Discovery of pre-therapy 2-deoxy-2-[18F]fluoro-D-glucose positron emission tomography-based radiomics classifiers of survival outcome in non-small-cell lung cancer patients
Arshad, Mubarik A
Thornton, Andrew
Lu, Haonan
Tam, Henry
Wallitt, Kathryn
Rodgers, Nicola
Scarsbrook, Andrew
McDermott, Garry
Cook, Gary J
Landau, David
European Journal of Nuclear Medicine and Molecular Imaging2018Journal Article, cited 0 times
Website
Effect of machine learning re-sampling techniques for imbalanced datasets in 18F-FDG PET-based radiomics model on prognostication performance in cohorts of head and neck cancer patients
Xie, Chenyi
Du, Richard
Ho, Joshua WK
Pang, Herbert H
Chiu, Keith WH
Lee, Elaine YP
Vardhanabhuti, Varut
European Journal of Nuclear Medicine and Molecular Imaging2020Journal Article, cited 0 times
Head-Neck-PET-CT
Purpose: Biomedical data frequently contain imbalance characteristics which make achieving good predictive performance with data-driven machine learning approaches a challenging task. In this study, we investigated the impact of re-sampling techniques for imbalanced datasets in a PET radiomics-based prognostication model in head and neck cancer (HNC) patients. Methods: Radiomics analysis was performed in two cohorts of patients, including 166 patients newly diagnosed with nasopharyngeal carcinoma (NPC) in our centre and 182 HNC patients from an open database. Conventional PET parameters and robust radiomics features were extracted for correlation analysis of the overall survival (OS) and disease progression-free survival (DFS). We investigated a cross-combination of 10 re-sampling methods (oversampling, undersampling, and hybrid sampling) with 4 machine learning classifiers for survival prediction. Diagnostic performance was assessed in hold-out test sets. Statistical differences were analysed using Monte Carlo cross-validations by post hoc Nemenyi analysis. Results: Oversampling techniques like ADASYN and SMOTE could improve prediction performance in terms of G-mean and F-measures in the minority class, without significant loss of F-measures in the majority class. We identified an optimal PET radiomics-based prediction model of OS (AUC of 0.82, G-mean of 0.77) for our NPC cohort. Similar findings that oversampling techniques improved the prediction performance were seen when this was tested on an external dataset, indicating generalisability. Conclusion: Our study showed a significant positive impact on the prediction performance in imbalanced datasets by applying re-sampling techniques. We have created an open-source solution for automated calculations and comparisons of multiple re-sampling techniques and machine learning classifiers for easy replication in future studies.
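The oversampling step evaluated above is available off the shelf in imbalanced-learn. A minimal sketch; the feature matrix, labels, and class prevalence are simulated assumptions:
```python
# Minimal sketch: SMOTE oversampling before classifier training, as evaluated
# above. All data are simulated assumptions.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(166, 30))                            # radiomic feature matrix (assumed)
y = rng.binomial(1, 0.15, size=166)                       # imbalanced outcome, e.g. progression
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)   # synthesize minority-class samples
clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
print(f"class balance after SMOTE: {np.bincount(y_res)}")
```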
PET/CT radiomics signature of human papilloma virus association in oropharyngeal squamous cell carcinoma
Haider, S. P.
Mahajan, A.
Zeevi, T.
Baumeister, P.
Reichel, C.
Sharaf, K.
Forghani, R.
Kucukkaya, A. S.
Kann, B. H.
Judson, B. L.
Prasad, M. L.
Burtness, B.
Payabvash, S.
Eur J Nucl Med Mol Imaging2020Journal Article, cited 1 times
Website
Head-Neck-PET-CT
Radiomics
PURPOSE: To devise, validate, and externally test PET/CT radiomics signatures for human papillomavirus (HPV) association in primary tumors and metastatic cervical lymph nodes of oropharyngeal squamous cell carcinoma (OPSCC). METHODS: We analyzed 435 primary tumors (326 for training, 109 for validation) and 741 metastatic cervical lymph nodes (518 for training, 223 for validation) using FDG-PET and non-contrast CT from a multi-institutional and multi-national cohort. Utilizing 1037 radiomics features per imaging modality and per lesion, we trained, optimized, and independently validated machine-learning classifiers for prediction of HPV association in primary tumors, lymph nodes, and combined "virtual" volumes of interest (VOI). PET-based models were additionally validated in an external cohort. RESULTS: Single-modality PET and CT final models yielded similar classification performance without significant difference in independent validation; however, models combining PET and CT features outperformed single-modality PET- or CT-based models, with receiver operating characteristic area under the curve (AUC) of 0.78, and 0.77 for prediction of HPV association using primary tumor lesion features, in cross-validation and independent validation, respectively. In the external PET-only validation dataset, final models achieved an AUC of 0.83 for a virtual VOI combining primary tumor and lymph nodes, and an AUC of 0.73 for a virtual VOI combining all lymph nodes. CONCLUSION: We found that PET-based radiomics signatures yielded similar classification performance to CT-based models, with potential added value from combining PET- and CT-based radiomics for prediction of HPV status. While our results are promising, radiomics signatures may not yet substitute tissue sampling for clinical decision-making.
A convolutional neural network for fully automated blood SUV determination to facilitate SUR computation in oncological FDG-PET
Nikulin, P.
Hofheinz, F.
Maus, J.
Li, Y.
Butof, R.
Lange, C.
Furth, C.
Zschaeck, S.
Kreissl, M. C.
Kotzerke, J.
van den Hoff, J.
Eur J Nucl Med Mol Imaging2021Journal Article, cited 0 times
Website
Head-Neck-PET-CT
Segmentation
Convolutional Neural Network (CNN)
Vasculature
PURPOSE: The standardized uptake value (SUV) is widely used for quantitative evaluation in oncological FDG-PET but has well-known shortcomings as a measure of the tumor's glucose consumption. The standard uptake ratio (SUR) of tumor SUV and arterial blood SUV (BSUV) possesses an increased prognostic value but requires image-based BSUV determination, typically in the aortic lumen. However, accurate manual ROI delineation requires care and imposes an additional workload, which makes the SUR approach less attractive for clinical routine. The goal of the present work was the development of a fully automated method for BSUV determination in whole-body PET/CT. METHODS: Automatic delineation of the aortic lumen was performed with a convolutional neural network (CNN), using the U-Net architecture. A total of 946 FDG PET/CT scans from several sites were used for network training (N = 366) and testing (N = 580). For all scans, the aortic lumen was manually delineated, avoiding areas affected by motion-induced attenuation artifacts or potential spillover from adjacent FDG-avid regions. Performance of the network was assessed using the fractional deviations of automatically and manually derived BSUVs in the test data. RESULTS: The trained U-Net yields BSUVs in close agreement with those obtained from manual delineation. Comparison of manually and automatically derived BSUVs shows excellent concordance: the mean relative BSUV difference was (mean ± SD) = (-0.5 ± 2.2)% with a 95% confidence interval of [-5.1, 3.8]% and a total range of [-10.0, 12.0]%. For four test cases, the derived ROIs were unusable (< 1 ml). CONCLUSION: CNNs are capable of performing robust automatic image-based BSUV determination. Integrating automatic BSUV derivation into PET data processing workflows will significantly facilitate SUR computation without increasing the workload in the clinical setting.
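Once an aortic lumen mask is available, the SUR computation that the pipeline above supports is elementary. A minimal sketch in which the SUV volume, the mask location, and the tumor SUVmax are all simulated assumptions (in the study the mask comes from the U-Net):
```python
# Minimal sketch: blood SUV (BSUV) as the mean SUV inside the aortic lumen
# mask, then SUR = tumor SUV / BSUV. All data are simulated assumptions.
import numpy as np

rng = np.random.default_rng(0)
suv = rng.random((128, 128, 200)) * 10.0        # whole-body SUV volume (assumed)
aorta_mask = np.zeros(suv.shape, dtype=bool)    # lumen mask (assumed location)
aorta_mask[60:68, 60:68, 80:160] = True
tumor_suv_max = 8.4                             # tumor SUVmax from a separate ROI (assumed)

bsuv = suv[aorta_mask].mean()                   # image-derived blood SUV
sur = tumor_suv_max / bsuv                      # standard uptake ratio
print(f"BSUV = {bsuv:.2f}, SUR = {sur:.2f}")
```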
Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning
Shiri, Isaac
Vafaei Sadr, Alireza
Akhavan, Azadeh
Salimi, Yazdan
Sanaat, Amirhossein
Amini, Mehdi
Razeghi, Behrooz
Saberi, Abdollah
Arabi, Hossein
Ferdowsi, Sohrab
Voloshynovskiy, Slava
Gündüz, Deniz
Rahmim, Arman
Zaidi, Habib
European Journal of Nuclear Medicine and Molecular Imaging2023Journal Article, cited 0 times
ACRIN-NSCLC-FDG-PET (ACRIN 6668)
NSCLC Radiogenomics
HNSCC
Deep Learning
PET
Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, which remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy, and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model in a multicenter setting without direct sharing of data using federated learning (FL) for AC/SC of PET images.
Multicenter PET image harmonization using generative adversarial networks
Haberl, D.
Spielvogel, C. P.
Jiang, Z.
Orlhac, F.
Iommi, D.
Carrio, I.
Buvat, I.
Haug, A. R.
Papp, L.
Eur J Nucl Med Mol Imaging2024Journal Article, cited 0 times
Website
FDG-PET-CT-Lesions
Head-Neck-PET-CT
Deep learning
Generative adversarial networks
Harmonization
Multicenter
Quantitative PET
PURPOSE: To improve reproducibility and predictive performance of PET radiomic features in multicentric studies by cycle-consistent generative adversarial network (GAN) harmonization approaches. METHODS: GAN-harmonization was developed to harmonize whole-body PET scans to perform image style and texture translation between different centers and scanners. GAN-harmonization was evaluated by application to two retrospectively collected open datasets and different tasks. First, GAN-harmonization was performed on a dual-center lung cancer cohort (127 female, 138 male) where the reproducibility of radiomic features in healthy liver tissue was evaluated. Second, GAN-harmonization was applied to a head and neck cancer cohort (43 female, 154 male) acquired from three centers. Here, the clinical impact of GAN-harmonization was analyzed by predicting the development of distant metastases using a logistic regression model incorporating first-order statistics and texture features from baseline 18F-FDG PET before and after harmonization. RESULTS: Image quality remained high (structural similarity: left kidney ≥ 0.800, right kidney ≥ 0.806, liver ≥ 0.780, lung ≥ 0.838, spleen ≥ 0.793, whole-body ≥ 0.832) after image harmonization across all utilized datasets. Using GAN-harmonization, inter-site reproducibility of radiomic features in healthy liver tissue increased by at least 5 ± 14% (first-order), 16 ± 7% (GLCM), 19 ± 5% (GLRLM), 16 ± 8% (GLSZM), 17 ± 6% (GLDM), and 23 ± 14% (NGTDM). In the head and neck cancer cohort, the outcome prediction improved from AUC 0.68 (95% CI 0.66-0.71) to AUC 0.73 (0.71-0.75) by application of GAN-harmonization. CONCLUSIONS: GANs are capable of performing image harmonization and increase reproducibility and predictive performance of radiomic features derived from different centers and scanners.
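The structural-similarity check reported above can be reproduced with scikit-image. A minimal sketch comparing a PET slice before and after a (here, trivial) harmonization step; the slices and the perturbation are simulated assumptions:
```python
# Minimal sketch: structural similarity (SSIM) between a PET slice before and
# after harmonization, via scikit-image. Data are simulated assumptions.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
original = rng.random((256, 256)).astype(np.float32)
harmonized = original + rng.normal(0.0, 0.02, original.shape).astype(np.float32)
ssim = structural_similarity(original, harmonized,
                             data_range=float(harmonized.max() - harmonized.min()))
print(f"SSIM = {ssim:.3f}")
```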
Quantitative evaluation of lesion response heterogeneity for superior prognostication of clinical outcome
Lokre, O.
Perk, T. G.
Weisman, A. J.
Govindan, R. M.
Chen, S.
Chen, M.
Eickhoff, J.
Liu, G.
Jeraj, R.
Eur J Nucl Med Mol Imaging2024Journal Article, cited 0 times
Website
CALGB50303
ACRIN-NSCLC-FDG-PET
Clinical imaging
Computational methods
Fdg pet/ct
Lung cancer
Lymphoma
Prognostication of clinical outcome
Tumor heterogeneity
PURPOSE: Standardized reporting of treatment response in oncology patients has traditionally relied on methods like RECIST, PERCIST and Deauville score. These endpoints assess only a few lesions, potentially overlooking the response heterogeneity of all disease. This study hypothesizes that comprehensive spatial-temporal evaluation of all individual lesions is necessary for superior prognostication of clinical outcome. METHODS: [18F]FDG PET/CT scans from 241 patients (127 diffuse large B-cell lymphoma (DLBCL) and 114 non-small cell lung cancer (NSCLC)) were retrospectively obtained at baseline and either during chemotherapy or post-chemoradiotherapy. The automated TRAQinform IQ software (AIQ Solutions) analyzed the images, performing quantification of change in regions of interest suspicious of cancer (lesion-ROI). Multivariable Cox proportional hazards (CoxPH) models were trained to predict overall survival (OS) with varied sets of quantitative features and lesion-ROI, compared by bootstrapping with C-index and t-tests. The best-fit model was compared to automated versions of previously established methods like RECIST, PERCIST and Deauville score. RESULTS: Multivariable CoxPH models demonstrated superior prognostic power when trained with features quantifying response heterogeneity in all individual lesion-ROI in DLBCL (C-index = 0.84, p < 0.001) and NSCLC (C-index = 0.71, p < 0.001). Prognostic power significantly deteriorated (p < 0.001) when using subsets of lesion-ROI (C-index = 0.78 and 0.67 for DLBCL and NSCLC, respectively) or excluding response heterogeneity (C-index = 0.67 and 0.70). RECIST, PERCIST, and Deauville score were not significantly associated with OS (C-index < 0.65 and p > 0.1), performing significantly worse than the multivariable models (p < 0.001). CONCLUSIONS: Quantitative evaluation of response heterogeneity of all individual lesions is necessary for the superior prognostication of clinical outcome.
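Harrell's C-index, used above to compare the CoxPH models, is available in lifelines. A minimal sketch on simulated survival data and risk scores (all values are assumptions):
```python
# Minimal sketch: concordance index (C-index) of a risk score against
# survival data, via lifelines. Data are simulated assumptions.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
time = rng.exponential(500.0, size=241)                 # follow-up times (assumed)
event = rng.binomial(1, 0.7, size=241)                  # event indicators (assumed)
risk = -np.log(time) + rng.normal(0.0, 0.5, size=241)   # higher risk => shorter survival

# concordance_index expects scores that are larger for longer survival,
# so the risk score is passed with its sign flipped.
cindex = concordance_index(time, -risk, event_observed=event)
print(f"C-index = {cindex:.2f}")
```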
Radiogenomics of clear cell renal cell carcinoma: preliminary findings of The Cancer Genome Atlas–Renal Cell Carcinoma (TCGA–RCC) Imaging Research Group
Shinagare, Atul B
Vikram, Raghu
Jaffe, Carl
Akin, Oguz
Kirby, Justin
Huang, Erich
Freymann, John
Sainani, Nisha I
Sadow, Cheryl A
Bathala, Tharakeswara K
Rubin, D. L.
Oto, A.
Heller, M. T.
Surabhi, V. R.
Katabathina, V.
Silverman, S. G.
Abdominal imaging2015Journal Article, cited 47 times
Website
TCGA-RCC
Radiogenomics
TCGA-KIRC
Clear cell renal cell carcinoma (ccRCC)
PURPOSE: To investigate associations between imaging features and mutational status of clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: This multi-institutional, multi-reader study included 103 patients (77 men; median age 59 years, range 34-79) with ccRCC examined with CT in 81 patients, MRI in 19, and both CT and MRI in three; images were downloaded from The Cancer Imaging Archive, an NCI-funded project for genome-mapping and analyses. Imaging features [size (mm), margin (well-defined or ill-defined), composition (solid or cystic), necrosis (for solid tumors: 0%, 1%-33%, 34%-66% or >66%), growth pattern (endophytic, <50% exophytic, or ≥50% exophytic), and calcification (present, absent, or indeterminate)] were reviewed independently by three readers blinded to mutational data. The association of imaging features with mutational status (VHL, BAP1, PBRM1, SETD2, KDM5C, and MUC4) was assessed. RESULTS: Median tumor size was 49 mm (range 14-162 mm), 73 (71%) tumors had well-defined margins, 98 (95%) tumors were solid, 95 (92%) showed presence of necrosis, 46 (45%) had ≥50% exophytic component, and 18 (19.8%) had calcification. VHL (n = 52) and PBRM1 (n = 24) were the most common mutations. BAP1 mutation was associated with ill-defined margin and presence of calcification (p = 0.02 and 0.002, respectively, Pearson's χ² test); MUC4 mutation was associated with an exophytic growth pattern (p = 0.002, Mann-Whitney U test). CONCLUSIONS: BAP1 mutation was associated with ill-defined tumor margins and presence of calcification; MUC4 mutation was associated with exophytic growth. Given the known prognostic implications of BAP1 and MUC4 mutations, these results support using radiogenomics to aid in prognostication and management.
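The two association tests named above are one-liners in SciPy. A minimal sketch in which the contingency counts and measurements are illustrative assumptions, not the study's data:
```python
# Minimal sketch: Pearson's chi-square test for a categorical imaging feature
# vs. mutation status, and a Mann-Whitney U test for a continuous one.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# rows: margin (ill-/well-defined); cols: BAP1 (mutant/wild-type) - assumed counts
table = np.array([[7, 23],
                  [3, 70]])
chi2, p_chi2, dof, _ = chi2_contingency(table)

# exophytic fraction by MUC4 status - assumed measurements
muc4_mut = np.array([0.60, 0.70, 0.80, 0.55, 0.90])
muc4_wt = np.array([0.20, 0.40, 0.30, 0.50, 0.10, 0.35])
u, p_u = mannwhitneyu(muc4_mut, muc4_wt)
print(f"chi-square p = {p_chi2:.3f}; Mann-Whitney p = {p_u:.3f}")
```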
Are we at a crossroads or a plateau? Radiomics and machine learning in abdominal oncology imaging
Summers, Ronald M.
Abdominal Radiology2018Journal Article, cited 0 times
CT Lymph Nodes
Advances in radiomics and machine learning have driven a technology boom in the automated analysis of radiology images. For the past several years, expectations have been nearly boundless for these new technologies to revolutionize radiology image analysis and interpretation. In this editorial, I compare the expectations with the realities with particular attention to applications in abdominal oncology imaging. I explore whether these technologies will leave us at a crossroads to an exciting future or to a sustained plateau and disillusionment.
Radiogenomics in renal cell carcinoma
Alessandrino, Francesco
Shinagare, Atul B
Bossé, Dominick
Choueiri, Toni K
Krajewski, Katherine M
Abdominal Radiology2018Journal Article, cited 0 times
Website
TCGA_RCC
renal cancer
radiogenomics
Deep learning-based artificial intelligence for prostate cancer detection at biparametric MRI
Mehralivand, S.
Yang, D.
Harmon, S. A.
Xu, D.
Xu, Z.
Roth, H.
Masoudi, S.
Kesani, D.
Lay, N.
Merino, M. J.
Wood, B. J.
Pinto, P. A.
Choyke, P. L.
Turkbey, B.
Abdom Radiol (NY)2022Journal Article, cited 0 times
PROSTATEx
Artificial Intelligence
Deep Learning
Prostate/pathology
*Prostatic Neoplasms/diagnostic imaging/pathology
Magnetic Resonance Imaging (MRI)
Prostate cancer
PURPOSE: To present a fully automated DL-based prostate cancer detection system for prostate MRI. METHODS: MRI scans from two institutions were used for algorithm training, validation, and testing. MRI-visible lesions were contoured by an experienced radiologist. All lesions were biopsied using MRI-TRUS guidance. Lesion masks and histopathological results were used as ground truth labels to train UNet and AH-Net architectures for prostate cancer lesion detection and segmentation. The algorithm was trained to detect any prostate cancer ≥ ISUP grade 1. Detection sensitivity, positive predictive value (PPV), and the mean number of false positive lesions per patient were used as performance metrics. RESULTS: 525 patients were included for training, validation, and testing of the algorithm. The dataset was split into training (n = 368, 70%), validation (n = 79, 15%), and test (n = 78, 15%) cohorts. Dice coefficients in the training and validation sets were 0.403 and 0.307, respectively, for the AH-Net model compared to 0.372 and 0.287, respectively, for the UNet model. In the validation set, detection sensitivity was 70.9%, PPV was 35.5%, and the mean number of false positive lesions per patient was 1.41 (range 0-6) for the UNet model, compared to 74.4% detection sensitivity, 47.8% PPV, and 0.87 (range 0-5) false positive lesions per patient for the AH-Net model. In the test set, detection sensitivity for UNet was 72.8% compared to 63.0% for AH-Net, and the mean number of false positive lesions per patient was 1.90 (range 0-7) and 1.40 (range 0-6) for the UNet and AH-Net models, respectively. CONCLUSION: We developed a DL-based AI approach which predicts prostate cancer lesions at biparametric MRI with reasonable performance metrics. While false positive lesion calls remain a challenge of AI-assisted detection algorithms, this system can be utilized as an adjunct tool by radiologists.
Anatomical study and meta-analysis of the episternal ossicles
Pongruengkiat, W.
Pitaksinagorn, W.
Yurasakpong, L.
Taradolpisut, N.
Kruepunga, N.
Chaiyamoon, A.
Suwannakhan, A.
Surg Radiol Anat2024Journal Article, cited 0 times
Pediatric-CT-SEG
Anatomical variation
Computed Tomography (CT)
Episternal ossicles
Meta-analysis
Sternum
Systematic review
Bone
Segmentation
Episternal ossicles (EO) are accessory bones located superior and posterior to the manubrium, representing an anatomical variation in the thoracic region. This study aimed to investigate the prevalence and developmental aspects of EO in global populations. The prevalence of EO in pediatric populations was assessed using the "Pediatric-CT-SEG" open-access dataset obtained from The Cancer Imaging Archive, revealing a single case of EO among 233 subjects, occurring in a 14-year-old patient. A meta-analysis was conducted using data from 16 studies (from 14 publications) retrieved through three electronic databases (Google Scholar, PubMed, and Journal Storage), encompassing 7997 subjects. The overall EO prevalence was 2.1% (95% CI 1.1-3.0%, I² = 93.75%). Subgroup analyses by continent and diagnostic method were carried out. Asia exhibited the highest prevalence of EO at 3.8% (95% CI 0.3-7.5%, I² = 96.83%), and X-ray yielded the highest prevalence of 0.7% (95% CI 0.5-8.9%, I² = 0.00%) compared with other modalities. A small-study effect was indicated by asymmetric funnel plots (Egger's z = 4.78, p < 0.01; Begg's z = 2.30, p = 0.02). Understanding the prevalence and developmental aspects of EO is crucial for clinical practitioners' awareness of this anatomical variation.
Simulation of glioblastoma growth using a 3D multispecies tumor model with mass effect
Subramanian, Shashank
Gholami, Amir
Biros, George
2019Journal Article, cited 0 times
BraTS-TCGA-GBM
Glioblastoma
MRI
In this article, we present a multispecies reaction–advection–diffusion partial differential equation coupled with linear elasticity for modeling tumor growth. The model aims to capture the phenomenological features of glioblastoma multiforme observed in magnetic resonance imaging (MRI) scans. These include enhancing and necrotic tumor structures, brain edema and the so-called “mass effect”, a term-of-art that refers to the deformation of brain tissue due to the presence of the tumor. The multispecies model accounts for proliferating, invasive and necrotic tumor cells as well as a simple model for nutrition consumption and tumor-induced brain edema. The coupling of the model with linear elasticity equations with variable coefficients allows us to capture the mechanical deformations due to the tumor growth on surrounding tissues. We present the overall formulation along with a novel operator-splitting scheme with components that include linearly-implicit preconditioned elliptic solvers, and a semi-Lagrangian method for advection. We also present results showing simulated MRI images which highlight the capability of our method to capture the overall structure of glioblastomas in MRIs.
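A single-species logistic reaction-diffusion model is the simplest relative of the multispecies system described above. A minimal forward-Euler sketch; the grid, diffusivity, growth rate, and time step are illustrative assumptions:
```python
# Minimal sketch: forward-Euler steps of c_t = D∇²c + ρc(1−c), a
# much-simplified relative of the multispecies tumor model above.
import numpy as np

def laplacian(c: np.ndarray) -> np.ndarray:
    """Five-point stencil with zero-flux (edge-replicated) boundaries."""
    p = np.pad(c, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * c

c = np.zeros((128, 128))
c[60:68, 60:68] = 0.5                      # initial tumor cell density seed
D, rho, dt = 0.1, 0.025, 1.0               # diffusivity, growth rate, time step (assumed)
for _ in range(500):
    c += dt * (D * laplacian(c) + rho * c * (1.0 - c))
print(f"total tumor burden: {c.sum():.1f}")
```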
Simulating the behaviour of glioblastoma multiforme based on patient MRI during treatments
Alonzo, Flavien
Serandour, Aurelien A.
Saad, Mazen
2022Journal Article, cited 0 times
CPTAC-GBM
Glioblastoma multiforme is a brain cancer that still carries a poor prognosis despite active research into new treatments. In this work, the goal is to model and simulate the evolution of tumour-associated angiogenesis and the therapeutic response to glioblastoma multiforme. Multiple phenomena are modelled in order to fit different biological pathways, such as the cellular cycle, apoptosis, hypoxia or angiogenesis. This leads to a nonlinear system with 4 equations and 4 unknowns: the density of tumour cells, the O2 concentration, the density of endothelial cells and the vascular endothelial growth factor concentration. This system is solved numerically on a mesh fitting the geometry of the brain and the tumour of a patient based on a 2D slice of MRI. We show that our numerical scheme is positive, and we give the energy estimates on the discrete solution to ensure its existence. The numerical scheme uses nonlinear control volume finite elements in space and is implicit in time. Numerical simulations have been done using the different standard treatments: surgery, chemotherapy and radiotherapy, in order to conform to the empirically known clinical behaviour of a tumour in response to treatments. We find that our theoretical model exhibits realistic behaviours.
Radiomic features from the peritumoral brain parenchyma on treatment-naïve multi-parametric MR imaging predict long versus short-term survival in glioblastoma multiforme: Preliminary findings
Prasanna, Prateek
Patel, Jay
Partovi, Sasan
Madabhushi, Anant
Tiwari, Pallavi
European Radiology2016Journal Article, cited 45 times
Website
TCGA-GBM
Radiomics
Volume of high-risk intratumoral subregions at multi-parametric MR imaging predicts overall survival and complements molecular analysis of glioblastoma
Cui, Yi
Ren, Shangjie
Tha, Khin Khin
Wu, Jia
Shirato, Hiroki
Li, Ruijiang
European Radiology2017Journal Article, cited 10 times
Website
GBM
Genotype prediction of ATRX mutation in lower-grade gliomas using an MRI radiomics signature
Li, Y.
Liu, X.
Qian, Z.
Sun, Z.
Xu, K.
Wang, K.
Fan, X.
Zhang, Z.
Li, S.
Wang, Y.
Jiang, T.
Eur Radiol2018Journal Article, cited 2 times
Website
Radiogenomics
TCGA-LGG
Biomarkers
Genetics
Glioma
Machine learning
Magnetic resonance imaging
OBJECTIVES: To predict ATRX mutation status in patients with lower-grade gliomas using radiomic analysis. METHODS: Cancer Genome Atlas (TCGA) patients with lower-grade gliomas were randomly allocated into training (n = 63) and validation (n = 32) sets. An independent external-validation set (n = 91) was built based on the Chinese Genome Atlas (CGGA) database. After feature extraction, an ATRX-related signature was constructed. Subsequently, the radiomic signature was combined with a support vector machine to predict ATRX mutation status in training, validation and external-validation sets. Predictive performance was assessed by receiver operating characteristic curve analysis. Correlations between the selected features were also evaluated. RESULTS: Nine radiomic features were screened as an ATRX-associated radiomic signature of lower-grade gliomas based on the LASSO regression model. All nine radiomic features were texture-associated (e.g. sum average and variance). The predictive efficiencies measured by the area under the curve were 94.0 %, 92.5 % and 72.5 % in the training, validation and external-validation sets, respectively. The overall correlations between the nine radiomic features were low in both TCGA and CGGA databases. CONCLUSIONS: Using radiomic analysis, we achieved efficient prediction of ATRX genotype in lower-grade gliomas, and our model was effective in two independent databases. KEY POINTS: * ATRX in lower-grade gliomas could be predicted using radiomic analysis. * The LASSO regression algorithm and SVM performed well in radiomic analysis. * Nine radiomic features were screened as an ATRX-predictive radiomic signature. * The machine-learning model for ATRX-prediction was validated by an independent database.
Development of a radiomics nomogram based on the 2D and 3D CT features to predict the survival of non-small cell lung cancer patients
Yang, Lifeng
Yang, Jingbo
Zhou, Xiaobo
Huang, Liyu
Zhao, Weiling
Wang, Tao
Zhuang, Jian
Tian, Jie
European Radiology2018Journal Article, cited 0 times
Website
NSCLC-Radiomics
Radiomics
Influence of segmentation margin on machine learning–based high-dimensional quantitative CT texture analysis: a reproducibility study on renal clear cell carcinomas
Kocak, Burak
Ates, Ece
Durmaz, Emine Sebnem
Ulusan, Melis Baykara
Kilickesmez, Ozgur
European Radiology2019Journal Article, cited 0 times
Website
TCGA-KIRC
Segmentation
CT
Renal cell carcinoma: predicting RUNX3 methylation level and its consequences on survival with CT features
Dongzhi Cen
Li Xu
Siwei Zhang
Zhiguang Chen
Yan Huang
Ziqi Li
Bo Liang
European Radiology2019Journal Article, cited 0 times
Website
TCGA-KIRC
clear cell renal cell carcinoma (ccRCC)
Computed Tomography (CT)
Radiogenomics
Cox regression
PURPOSE: To investigate associations between CT imaging features, RUNX3 methylation level, and survival in clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: Patients were divided into high and low RUNX3 methylation groups according to RUNX3 methylation levels (the threshold was identified by using X-tile). The CT scanning data from 106 ccRCC patients were retrospectively analyzed. The relationship between RUNX3 methylation level and overall survival was evaluated using Kaplan-Meier analysis and Cox regression analysis (univariate and multivariate). The relationship between RUNX3 methylation level and CT features was evaluated using the chi-square test and logistic regression analysis (univariate and multivariate). RESULTS: A beta-value cutoff of 0.53 distinguished high-methylation (N = 44) from low-methylation tumors (N = 62). Patients with lower levels of methylation had longer median overall survival (49.3 vs. 28.4 months; low vs. high, adjusted hazard ratio [HR] 4.933, 95% CI 2.054-11.852, p < 0.001). On univariate logistic regression analysis, four risk factors (margin, side, long diameter, and intratumoral vascularity) were associated with RUNX3 methylation level (all p < 0.05). Multivariate logistic regression analysis found that three risk factors (side: left vs. right, odds ratio [OR] 2.696; p = 0.024; 95% CI 1.138-6.386; margin: ill-defined vs. well-defined, OR 2.685; p = 0.038; 95% CI 1.057-6.820; and intratumoral vascularity: yes vs. no, OR 3.286; p = 0.008; 95% CI 1.367-7.898) were significant independent predictors of high-methylation tumors. This model had an area under the receiver operating characteristic curve (AUC) of 0.725 (95% CI 0.623-0.827). CONCLUSIONS: Higher levels of RUNX3 methylation are associated with shorter survival in ccRCC patients. The presence of intratumoral vascularity, an ill-defined margin, and left-side location were significant independent predictors of a high methylation level of the RUNX3 gene. KEY POINTS: * RUNX3 methylation level is negatively associated with overall survival in ccRCC patients. * Presence of intratumoral vascularity, ill-defined margin, and left-side tumor location were significant independent predictors of a high methylation level of the RUNX3 gene.
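Survival stratification by a biomarker cutoff, as above, is commonly done with lifelines. A minimal sketch; the 0.53 beta-value cutoff follows the abstract, but the survival data and beta values are simulated assumptions:
```python
# Minimal sketch: Kaplan-Meier estimate and log-rank test for overall survival
# stratified by RUNX3 methylation. Data are simulated assumptions.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
beta = rng.uniform(0.2, 0.9, size=106)                  # RUNX3 beta values (assumed)
high = beta >= 0.53                                     # high-methylation group
time = rng.exponential(49.0, size=106) * np.where(high, 0.6, 1.0)  # months (assumed)
event = rng.binomial(1, 0.7, size=106).astype(bool)

kmf = KaplanMeierFitter().fit(time[~high], event[~high], label="low methylation")
print(f"median OS (low methylation): {kmf.median_survival_time_:.1f} months")
result = logrank_test(time[high], time[~high], event[high], event[~high])
print(f"log-rank p = {result.p_value:.3f}")
```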
A radiomics nomogram based on multiparametric MRI might stratify glioblastoma patients according to survival
Zhang, Xi
Lu, Hongbing
Tian, Qiang
Feng, Na
Yin, Lulu
Xu, Xiaopan
Du, Peng
Liu, Yang
European Radiology2019Journal Article, cited 0 times
TCGA-GBM
machine learning
nomograms
A radiogenomics signature for predicting the clinical outcome of bladder urothelial carcinoma
Lin, Peng
Wen, Dong-Yue
Chen, Ling
Li, Xin
Li, Sheng-Hua
Yan, Hai-Biao
He, Rong-Quan
Chen, Gang
He, Yun
Yang, Hong
Eur Radiol2019Journal Article, cited 0 times
TCGA-BLCA
Bladder
Radiomics
Radiogenomics
Computed Tomography (CT)
OBJECTIVES: To determine the integrative value of contrast-enhanced computed tomography (CECT), transcriptomics data and clinicopathological data for predicting the survival of bladder urothelial carcinoma (BLCA) patients. METHODS: RNA sequencing data, radiomics features and clinical parameters of 62 BLCA patients were included in the study. Then, prognostic signatures based on radiomics features and gene expression profile were constructed by using least absolute shrinkage and selection operator (LASSO) Cox analysis. A multi-omics nomogram was developed by integrating radiomics, transcriptomics and clinicopathological data. More importantly, radiomics risk score-related genes were identified via weighted correlation network analysis and submitted to functional enrichment analysis. RESULTS: The radiomics and transcriptomics signatures significantly stratified BLCA patients into high- and low-risk groups in terms of the progression-free interval (PFI). The two risk models remained independent prognostic factors in multivariate analyses after adjusting for clinical parameters. A nomogram was developed and showed an excellent predictive ability for the PFI in BLCA patients. Functional enrichment analysis suggested that the radiomics signature we developed could reflect the angiogenesis status of BLCA patients. CONCLUSIONS: The integrative nomogram incorporating CECT radiomics, transcriptomics and clinical features improved the PFI prediction in BLCA patients and is a feasible and practical reference for oncological precision medicine. KEY POINTS: * Our radiomics and transcriptomics models proved robust for survival prediction in bladder urothelial carcinoma patients. * A multi-omics nomogram model which integrates radiomics, transcriptomics and clinical features for prediction of progression-free interval in bladder urothelial carcinoma is established. * Molecular functional enrichment analysis is used to reveal the potential molecular function of the radiomics signature.
Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network
Aldoj, Nader
Lukas, Steffen
Dewey, Marc
Penzkofer, Tobias
Eur Radiol2020Journal Article, cited 1 times
Website
PROSTATEx
Convolutional Neural Network (CNN)
Deep learning
Multi-parametric MRI
Prostate
OBJECTIVE: To present a deep learning-based approach for semi-automatic prostate cancer classification based on multi-parametric magnetic resonance (MR) imaging using a 3D convolutional neural network (CNN). METHODS: Two hundred patients with a total of 318 lesions for which histological correlation was available were analyzed. A novel CNN was designed, trained, and validated using different combinations of distinct MRI sequences as input (e.g., T2-weighted, apparent diffusion coefficient (ADC), diffusion-weighted images, and K-trans) and the effect of different sequences on the network's performance was tested and discussed. The particular choice of modeling approach was justified by testing all relevant data combinations. The model was trained and validated using eightfold cross-validation. RESULTS: In terms of detection of significant prostate cancer defined by biopsy results as the reference standard, the 3D CNN achieved an area under the curve (AUC) of the receiver operating characteristics ranging from 0.89 (88.6% and 90.0% for sensitivity and specificity respectively) to 0.91 (81.2% and 90.5% for sensitivity and specificity respectively) with an average AUC of 0.897 for the ADC, DWI, and K-trans input combination. The other combinations scored less in terms of overall performance and average AUC, where the difference in performance was significant with a p value of 0.02 when using T2w and K-trans; and 0.00025 when using T2w, ADC, and DWI. Prostate cancer classification performance is thus comparable to that reported for experienced radiologists using the prostate imaging reporting and data system (PI-RADS). Lesion size and largest diameter had no effect on the network's performance. CONCLUSION: The diagnostic performance of the 3D CNN in detecting clinically significant prostate cancer is characterized by a good AUC and sensitivity and high specificity. KEY POINTS: * Prostate cancer classification using a deep learning model is feasible and it allows direct processing of MR sequences without prior lesion segmentation. * Prostate cancer classification performance as measured by AUC is comparable to that of an experienced radiologist. * Perfusion MR images (K-trans), followed by DWI and ADC, have the highest effect on the overall performance; whereas T2w images show hardly any improvement.
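A multi-channel 3D CNN of the kind described above can be prototyped in a few lines of PyTorch. A minimal sketch, not the authors' architecture: the stacked mpMRI volumes (e.g. ADC, DWI, K-trans) form the input channels, and the layer sizes are illustrative assumptions.
```python
# Minimal sketch (PyTorch assumed): a small multi-channel 3D CNN classifier
# for stacked mpMRI volumes. Illustrative architecture, not the study's model.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))        # logit for "significant cancer"

model = Small3DCNN(in_channels=3)
logits = model(torch.randn(2, 3, 32, 32, 16))     # (batch, channels, D, H, W)
print(logits.shape)                               # torch.Size([2, 1])
```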
Multiparametric MRI and auto-fixed volume of interest-based radiomics signature for clinically significant peripheral zone prostate cancer
Bleker, J.
Kwee, T. C.
Dierckx, Rajo
de Jong, I. J.
Huisman, H.
Yakar, D.
Eur Radiol2020Journal Article, cited 2 times
Website
Machine Learning
Magnetic Resonance Imaging (MRI)
PROSTATEx
OBJECTIVES: To create a radiomics approach based on multiparametric magnetic resonance imaging (mpMRI) features extracted from an auto-fixed volume of interest (VOI) that quantifies the phenotype of clinically significant (CS) peripheral zone (PZ) prostate cancer (PCa). METHODS: This study included 206 patients with 262 prospectively called mpMRI prostate imaging reporting and data system 3-5 PZ lesions. Gleason scores > 6 were defined as CS PCa. Features were extracted with an auto-fixed 12-mm spherical VOI placed around a pin point in each lesion. The value of dynamic contrast-enhanced imaging (DCE), multivariate feature selection and extreme gradient boosting (XGB) vs. univariate feature selection and random forest (RF), expert-based feature pre-selection, and the addition of image filters was investigated using the training (171 lesions) and test (91 lesions) datasets. RESULTS: The best model with features from T2-weighted (T2-w) + diffusion-weighted imaging (DWI) + DCE had an area under the curve (AUC) of 0.870 (95% CI 0.754-0.980). Removal of DCE features decreased the AUC to 0.816 (95% CI 0.710-0.920), although not significantly (p = 0.119). Multivariate feature selection and XGB outperformed univariate feature selection and RF (p = 0.028). Expert-based feature pre-selection and image filters had no significant contribution. CONCLUSIONS: The phenotype of CS PZ PCa lesions can be quantified using a radiomics approach based on features extracted from T2-w + DWI using an auto-fixed VOI. Although DCE features improve diagnostic performance, this is not statistically significant. Multivariate feature selection and XGB should be preferred over univariate feature selection and RF. The developed model may be a valuable addition to traditional visual assessment in diagnosing CS PZ PCa. KEY POINTS: * T2-weighted and diffusion-weighted imaging features are essential components of a radiomics model for clinically significant prostate cancer; addition of dynamic contrast-enhanced imaging does not significantly improve diagnostic performance. * Multivariate feature selection and extreme gradient boosting outperform univariate feature selection and random forest. * The developed radiomics model that extracts multiparametric MRI features with an auto-fixed volume of interest may be a valuable addition to visual assessment in diagnosing clinically significant prostate cancer.
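The auto-fixed spherical VOI above can be generated directly from a pin-point coordinate. A minimal sketch; the voxel spacing, image shape, and the reading of "12-mm spherical VOI" as a 12-mm diameter (6-mm radius) are assumptions:
```python
# Minimal sketch: an auto-fixed spherical VOI around a pin-point coordinate,
# from which radiomic features would then be extracted.
import numpy as np

def sphere_mask(shape, center_vox, radius_mm, spacing_mm):
    """Boolean mask of voxels within radius_mm of center_vox (z, y, x)."""
    zz, yy, xx = np.indices(shape)
    d2 = (((zz - center_vox[0]) * spacing_mm[0]) ** 2
          + ((yy - center_vox[1]) * spacing_mm[1]) ** 2
          + ((xx - center_vox[2]) * spacing_mm[2]) ** 2)
    return d2 <= radius_mm ** 2

voi = sphere_mask(shape=(32, 256, 256), center_vox=(16, 120, 130),
                  radius_mm=6.0, spacing_mm=(3.0, 0.5, 0.5))  # assumed geometry
print(f"VOI contains {voi.sum()} voxels")
```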
Radiogenomics of lower-grade gliomas: machine learning-based MRI texture analysis for predicting 1p/19q codeletion status
Kocak, B.
Durmaz, E. S.
Ates, E.
Sel, I.
Turgut Gunes, S.
Kaya, O. K.
Zeynalova, A.
Kilickesmez, O.
Eur Radiol2019Journal Article, cited 0 times
LGG-1p19qDeletion
Radiogenomics
1p/19q codeletion
Machine learning
Radiomics
OBJECTIVE: To evaluate the potential value of machine learning (ML)-based MRI texture analysis for predicting the 1p/19q codeletion status of lower-grade gliomas (LGG), using various state-of-the-art ML algorithms. MATERIALS AND METHODS: For this retrospective study, 107 patients with LGG were included from a public database. Texture features were extracted from conventional T2-weighted and contrast-enhanced T1-weighted MRI images, using LIFEx software. Training and unseen validation splits were created using a stratified 10-fold cross-validation technique along with minority over-sampling. Dimension reduction was done using collinearity analysis and feature selection (ReliefF). Classifications were done using adaptive boosting, k-nearest neighbours, naive Bayes, neural network, random forest, stochastic gradient descent, and support vector machine. The Friedman test and pairwise post hoc analyses were used for comparison of classification performances based on the area under the curve (AUC). RESULTS: Overall, the predictive performances of the ML algorithms were statistically significantly different, χ²(6) = 26.7, p < 0.001. There was no statistically significant difference among the performance of the neural network, naive Bayes, support vector machine, random forest, and stochastic gradient descent, adjusted p > 0.05. The mean AUC and accuracy values of these five algorithms ranged from 0.769 to 0.869 and from 80.1% to 84%, respectively. The neural network had the highest mean rank with mean AUC and accuracy values of 0.869 and 83.8%, respectively. CONCLUSIONS: ML-based MRI texture analysis might be a promising non-invasive technique for predicting the 1p/19q codeletion status of LGGs. Using this technique along with various ML algorithms, more than four-fifths of the LGGs can be correctly classified. KEY POINTS: * More than four-fifths of the lower-grade gliomas can be correctly classified with machine learning-based MRI texture analysis. Satisfying classification outcomes are not limited to a single algorithm. * A few-slice-based volumetric segmentation technique would be a valid approach, providing satisfactory predictive textural information and avoiding excessive segmentation duration in clinical practice. * Feature selection is sensitive to different patient data set samples so that each sampling leads to the selection of different feature subsets, which needs to be considered in future works.
Predicting the ISUP grade of clear cell renal cell carcinoma with multiparametric MR and multiphase CT radiomics
Cui, Enming
Li, Zhuoyong
Ma, Changyi
Li, Qing
Lei, Yi
Lan, Yong
Yu, Juan
Zhou, Zhipeng
Li, Ronggang
Long, Wansheng
Lin, Fan
Eur Radiol2020Journal Article, cited 0 times
Website
Clear cell renal cell carcinoma (ccRCC)
Machine Learning
Radiomics
TCGA-KIRC
OBJECTIVE: To investigate externally validated magnetic resonance (MR)-based and computed tomography (CT)-based machine learning (ML) models for grading clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: Patients with pathologically proven ccRCC in 2009-2018 were retrospectively included for model development and internal validation; patients from another independent institution and The Cancer Imaging Archive dataset were included for external validation. Features were extracted from T1-weighted, T2-weighted, corticomedullary-phase (CMP), and nephrographic-phase (NP) MR as well as precontrast-phase (PCP), CMP, and NP CT. CatBoost was used for ML-model investigation. The reproducibility of texture features was assessed using the intraclass correlation coefficient (ICC). Accuracy (ACC) was used for ML-model performance evaluation. RESULTS: Twenty external and 440 internal cases were included. Among 368 and 276 texture features from MR and CT, 322 and 250 features with good to excellent reproducibility (ICC ≥ 0.75) were included for ML-model development. The best MR- and CT-based ML models satisfactorily distinguished high- from low-grade ccRCCs in internal (MR-ACC = 73% and CT-ACC = 79%) and external (MR-ACC = 74% and CT-ACC = 69%) validation. Compared to single-sequence or single-phase images, the classifiers based on all-sequence MR (71% to 73% in internal and 64% to 74% in external validation) and all-phase CT (77% to 79% in internal and 61% to 69% in external validation) images had significant increases in ACC. CONCLUSIONS: MR- and CT-based ML models are valuable noninvasive techniques for discriminating high- from low-grade ccRCCs, and multiparameter MR- and multiphase CT-based classifiers are potentially superior to those based on single-sequence or single-phase imaging. KEY POINTS: * Both the MR- and CT-based machine learning models are reliable predictors for differentiating high- from low-grade ccRCCs. * ML models based on multiparameter MR sequences and multiphase CT images potentially outperform those based on single-sequence or single-phase images in ccRCC grading.
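A compact sketch of a reproducibility screen followed by CatBoost training, in the spirit of the workflow above. Pearson correlation between two repeated feature measurements stands in for the ICC computation, all arrays are synthetic placeholders, and the catboost package is assumed to be installed.

```python
import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Placeholder matrices: texture features from two repeated segmentations
feats_run1 = rng.normal(size=(440, 368))
feats_run2 = feats_run1 + rng.normal(scale=0.3, size=(440, 368))
grade = rng.integers(0, 2, size=440)  # 0 = low-, 1 = high-grade ccRCC

# Simple per-feature reproducibility screen: keep features whose repeated
# measurements agree well (Pearson r as a cheap stand-in for ICC >= 0.75)
r = np.array([np.corrcoef(feats_run1[:, j], feats_run2[:, j])[0, 1]
              for j in range(feats_run1.shape[1])])
X = feats_run1[:, r >= 0.75]

X_tr, X_te, y_tr, y_te = train_test_split(X, grade, stratify=grade,
                                          random_state=0)
model = CatBoostClassifier(iterations=300, depth=4, verbose=False,
                           random_seed=0)
model.fit(X_tr, y_tr)
print("ACC:", accuracy_score(y_te, model.predict(X_te)))
```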
A quantitative model based on clinically relevant MRI features differentiates lower grade gliomas and glioblastoma
Cao, H.
Erson-Omay, E. Z.
Li, X.
Gunel, M.
Moliterno, J.
Fulbright, R. K.
Eur Radiol2020Journal Article, cited 0 times
Website
TCGA-LGG
TCGA-GBM
Radiomics
Radiogenomics
Magnetic Resonance Imaging (MRI)
Machine Learning
OBJECTIVES: To establish a quantitative MR model that uses clinically relevant features of tumor location and tumor volume to differentiate lower grade glioma (LRGG, grades II and III) and glioblastoma (GBM, grade IV). METHODS: We extracted tumor location and tumor volume (enhancing tumor, non-enhancing tumor, peritumoral edema) features from 229 The Cancer Genome Atlas (TCGA)-LGG and TCGA-GBM cases. Through two sampling strategies, i.e., institution-based sampling and repeated random sampling (10 times, 70% training set vs 30% validation set), LASSO (least absolute shrinkage and selection operator) regression models and nine machine-learning-based models were established and evaluated. RESULTS: Principal component analysis of the 229 TCGA-LGG and TCGA-GBM cases suggested that the LRGG and GBM cases could be differentiated by the extracted features. Of the nine machine learning methods, stack modeling and support vector machine achieved the highest performance (institution-based sampling validation set, AUC > 0.900, classifier accuracy > 0.790; repeated random sampling, average validation set AUC > 0.930, classifier accuracy > 0.850). For the LASSO method, the regression model based on tumor frontal lobe percentage and enhancing and non-enhancing tumor volume achieved the highest performance (institution-based sampling validation set, AUC 0.909, classifier accuracy 0.830). A formula was established for the best-performing LASSO model. CONCLUSIONS: Computer-generated, clinically meaningful MRI features of tumor location and component volumes resulted in models with high performance (validation set AUC > 0.900, classifier accuracy > 0.790) to differentiate lower grade glioma and glioblastoma. KEY POINTS: * Lower grade glioma and glioblastoma have significantly different location and component volume distributions. * We built machine learning prediction models that could help accurately differentiate lower grade glioma and GBM cases. * We introduced a fast evaluation model for possible clinical differentiation and further analysis.
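An L1-penalized logistic regression gives LASSO-style selection inside a classifier, as used above. This sketch uses randomly generated placeholder features (frontal-lobe percentage and component volumes are only column labels here), so the fitted coefficients and AUC are meaningless; it shows the modeling pattern only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Illustrative feature table: frontal-lobe percentage plus component volumes
n = 229
X = np.column_stack([
    rng.uniform(0, 1, n),        # tumor frontal-lobe percentage
    rng.gamma(2.0, 10.0, n),     # enhancing tumor volume (mL)
    rng.gamma(2.0, 8.0, n),      # non-enhancing tumor volume (mL)
    rng.gamma(2.0, 15.0, n),     # peritumoral edema volume (mL)
])
y = rng.integers(0, 2, n)        # 0 = LRGG, 1 = GBM (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.3,
                                          random_state=0)
# The L1 penalty zeroes out uninformative coefficients (LASSO-style selection)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X_tr, y_tr)
print("coefficients:", clf.coef_.ravel())
print("validation AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```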
Machine learning and radiomic phenotyping of lower grade gliomas: improving survival prediction
Choi, Yoon Seong
Ahn, Sung Soo
Chang, Jong Hee
Kang, Seok-Gu
Kim, Eui Hyun
Kim, Se Hoon
Jain, Rajan
Lee, Seung-Koo
Eur Radiol2020Journal Article, cited 0 times
Website
TCGA-LGG
Radiomics
Radiogenomics
Glioma
Machine learning
BACKGROUND AND PURPOSE: Recent studies have highlighted the importance of isocitrate dehydrogenase (IDH) mutational status in stratifying biologically distinct subgroups of gliomas. This study aimed to evaluate whether MRI-based radiomic features could improve the accuracy of survival predictions for lower grade gliomas over clinical and IDH status. MATERIALS AND METHODS: Radiomic features (n = 250) were extracted from preoperative MRI data of 296 lower grade glioma patients from our institutional (n = 205) and The Cancer Genome Atlas (TCGA)/The Cancer Imaging Archive (TCIA) (n = 91) databases. For predicting overall survival, random survival forest (RSF) models were trained with radiomic features and non-imaging prognostic factors (age, resection extent, WHO grade, and IDH status) on the institutional dataset, and validated on the TCGA/TCIA dataset. The performance of the RSF model and the incremental value of radiomic features were assessed by time-dependent receiver operating characteristics. RESULTS: The radiomics RSF model identified 71 radiomic features to predict overall survival, which were successfully validated on the TCGA/TCIA dataset (iAUC, 0.620; 95% CI, 0.501-0.756). Relative to the RSF model built from the non-imaging prognostic parameters, the addition of radiomic features significantly improved the overall survival prediction accuracy (iAUC, 0.627 vs. 0.709; difference, 0.097; 95% CI, 0.003-0.209). CONCLUSION: Radiomic phenotyping with machine learning can improve survival prediction over clinical profile and genomic data for lower grade gliomas. KEY POINTS: * Radiomics analysis with machine learning can improve survival prediction over the non-imaging factors (clinical and molecular profiles) for lower grade gliomas, across different institutions.
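A minimal random survival forest sketch using the scikit-survival package (assumed installed). The feature matrix and censored survival outcomes are synthetic, and the 205/91 split simply mimics the institutional-vs-TCGA/TCIA design described above.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(0)
# Placeholder design matrix: radiomic features plus clinical columns
X = rng.normal(size=(296, 30))
time = rng.exponential(scale=40.0, size=296)          # months
event = rng.integers(0, 2, size=296).astype(bool)     # death observed?
y = Surv.from_arrays(event=event, time=time)

rsf = RandomSurvivalForest(n_estimators=500, min_samples_leaf=10,
                           random_state=0)
rsf.fit(X[:205], y[:205])                  # "institutional" training split
risk = rsf.predict(X[205:])                # higher = worse predicted survival
cindex = concordance_index_censored(event[205:], time[205:], risk)[0]
print("external c-index:", round(cindex, 3))
```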
Integration of proteomics with CT-based qualitative and radiomic features in high-grade serous ovarian cancer patients: an exploratory analysis
Beer, Lucian
Sahin, Hilal
Bateman, Nicholas W
Blazic, Ivana
Vargas, Hebert Alberto
Veeraraghavan, Harini
Kirby, Justin
Fevrier-Sullivan, Brenda
Freymann, John B
Jaffe, C Carl
European Radiology2020Journal Article, cited 1 times
Website
TCGA-OV
CPTAC
OVARY
radiomics
CT
Radiomics risk score may be a potential imaging biomarker for predicting survival in isocitrate dehydrogenase wild-type lower-grade gliomas
Park, C. J.
Han, K.
Kim, H.
Ahn, S. S.
Choi, Y. S.
Park, Y. W.
Chang, J. H.
Kim, S. H.
Jain, R.
Lee, S. K.
Eur Radiol2020Journal Article, cited 0 times
Website
TCGA-LGG
Radiogenomics
Computer Aided Diagnosis (CADx)
Least absolute shrinkage and selection operator (LASSO)
OBJECTIVES: Isocitrate dehydrogenase wild-type (IDHwt) lower-grade gliomas of histologic grades II and III follow heterogeneous clinical outcomes, which necessitates risk stratification. We aimed to evaluate whether radiomics from MRI would allow prediction of overall survival in patients with IDHwt lower-grade gliomas and to investigate the added prognostic value of radiomics over clinical features. METHODS: Preoperative MRIs of 117 patients with IDHwt lower-grade gliomas from January 2007 to February 2018 were retrospectively analyzed. The external validation cohort consisted of 33 patients from The Cancer Genome Atlas. A total of 182 radiomic features were extracted. Radiomics risk scores (RRSs) for overall survival were derived from the least absolute shrinkage and selection operator (LASSO) and elastic net. Multivariable Cox regression analyses, including clinical features and RRSs, were performed. The integrated areas under the receiver operating characteristic curves (iAUCs) from models with and without RRSs were calculated for comparisons. The prognostic value of RRS was assessed in the validation cohort. RESULTS: The RRS derived from LASSO and elastic net independently predicted survival with hazard ratios of 9.479 (95% confidence interval [CI], 3.220-27.847) and 6.148 (95% CI, 3.009-12.563), respectively. Those RRSs enhanced model performance for predicting overall survival (iAUC increased to 0.780-0.797 from 0.726), which was externally validated. The RRSs stratified IDHwt lower-grade gliomas in the validation cohort with significantly different survival. CONCLUSION: Radiomics has the potential for noninvasive risk stratification and can improve prediction of overall survival in patients with IDHwt lower-grade gliomas when integrated with clinical features. KEY POINTS: * Isocitrate dehydrogenase wild-type lower-grade gliomas with histologic grades II and III follow heterogeneous clinical outcomes, which necessitates further risk stratification. * Radiomics risk scores derived from MRI independently predict survival even after incorporating strong clinical prognostic features (hazard ratios 6.148-9.479). * Radiomics risk scores derived from MRI have the potential to improve survival prediction when added to clinical features (integrated areas under the receiver operating characteristic curves increased from 0.726 to 0.780-0.797).
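A radiomics risk score of the kind described above can be sketched with a penalized Cox model; the linear predictor of the fitted model serves as the RRS. This uses the lifelines package (assumed installed) with l1_ratio=1.0 for a LASSO-type penalty; all data are synthetic placeholders.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
# Placeholder radiomics table for IDH-wildtype lower-grade gliomas
df = pd.DataFrame(rng.normal(size=(117, 10)),
                  columns=[f"feat_{i}" for i in range(10)])
df["time"] = rng.exponential(scale=30.0, size=117)   # months
df["event"] = rng.integers(0, 2, size=117)

# l1_ratio=1.0 gives a LASSO-type penalty; an elastic net uses 0 < l1_ratio < 1
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="event")

# The radiomics risk score is the linear predictor of the fitted Cox model
df["RRS"] = cph.predict_log_partial_hazard(df.drop(columns=["time", "event"]))
print(df["RRS"].describe())
```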
CT-based radiomics stratification of tumor grade and TNM stage of clear cell renal cell carcinoma
Demirjian, Natalie L
Varghese, Bino A
Cen, Steven Y
Hwang, Darryl H
Aron, Manju
Siddiqui, Imran
Fields, Brandon K K
Lei, Xiaomeng
Yap, Felix Y
Rivas, Marielena
Reddy, Sharath S
Zahoor, Haris
Liu, Derek H
Desai, Mihir
Rhie, Suhn K
Gill, Inderbir S
Duddalwar, Vinay
Eur Radiol2021Journal Article, cited 0 times
Website
TCGA-KIRC
Radiomics
Machine learning
Manual segmentation
KIDNEY
OBJECTIVES: To evaluate the utility of CT-based radiomics signatures in discriminating low-grade (grades 1-2) clear cell renal cell carcinomas (ccRCC) from high-grade (grades 3-4) and low TNM stage (stages I-II) ccRCC from high TNM stage (stages III-IV). METHODS: A total of 587 subjects (mean age 60.2 ± 12.2 years; range 22-88.7 years) with ccRCC were included. A total of 255 tumors were high grade and 153 were high stage. For each subject, one dominant tumor was delineated as the region of interest (ROI). Our institutional radiomics pipeline was then used to extract 2824 radiomics features across 12 texture families from the manually segmented volumes of interest. Separate iterations of the machine learning models using all extracted features (full model) as well as only a subset of previously identified robust metrics (robust model) were developed. Variable of importance (VOI) analysis was performed using the out-of-bag Gini index to identify the top 10 radiomics metrics driving each classifier. Model performance was reported using the area under the receiver operating curve (AUC). RESULTS: The highest AUC to distinguish between low- and high-grade ccRCC was 0.70 (95% CI 0.62-0.78), and the highest AUC to distinguish between low- and high-stage ccRCC was 0.80 (95% CI 0.74-0.86). Comparable AUCs of 0.73 (95% CI 0.65-0.8) and 0.77 (95% CI 0.7-0.84) were reported using the robust model for grade and stage classification, respectively. VOI analysis revealed the importance of neighborhood operation-based methods, including GLCM, GLDM, and GLRLM, in driving the performance of the robust models for both grade and stage classification. CONCLUSION: Post-validation, CT-based radiomics signatures may prove to be useful tools to assess ccRCC grade and stage and could potentially add to current prognostic models. Multiphase CT-based radiomics signatures have potential to serve as a non-invasive stratification schema for distinguishing between low- and high-grade as well as low- and high-stage ccRCC. KEY POINTS: * Radiomics signatures derived from clinical multiphase CT images were able to stratify low- from high-grade ccRCC, with an AUC of 0.70 (95% CI 0.62-0.78). * Radiomics signatures derived from multiphase CT images yielded discriminative power to stratify low from high TNM stage in ccRCC, with an AUC of 0.80 (95% CI 0.74-0.86). * Models created using only robust radiomics features achieved comparable AUCs of 0.73 (95% CI 0.65-0.80) and 0.77 (95% CI 0.70-0.84) to the model with all radiomics features in classifying ccRCC grade and stage, respectively.
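A sketch of ranking the top 10 features driving a random forest classifier. Note that scikit-learn's feature_importances_ is the impurity-based (mean decrease in Gini) importance, which here stands in for the out-of-bag Gini analysis named above; the feature matrix is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Placeholder: 2824 texture features for 587 ccRCC tumors
X = rng.normal(size=(587, 2824))
y = rng.integers(0, 2, size=587)          # 0 = low stage, 1 = high stage
names = np.array([f"tex_{i}" for i in range(X.shape[1])])

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
top10 = np.argsort(rf.feature_importances_)[::-1][:10]
for idx in top10:
    print(f"{names[idx]:>10}  importance = {rf.feature_importances_[idx]:.5f}")
```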
A comprehensive texture feature analysis framework of renal cell carcinoma: pathological, prognostic, and genomic evaluation based on CT images
Wu, K.
Wu, P.
Yang, K.
Li, Z.
Kong, S.
Yu, L.
Zhang, E.
Liu, H.
Guo, Q.
Wu, S.
Eur Radiol2022Journal Article, cited 14 times
Website
OBJECTIVES: To determine whether CT texture features enable accurate pathological classification, assessment of prognosis, and evaluation of genomic characteristics in renal cell carcinoma. METHODS: Patients with renal cell carcinoma from five open-source cohorts were analyzed retrospectively in this study. These data were randomly split to train and test machine learning algorithms to segment the lesion and predict the histological subtype, tumor stage, and pathological grade. The Dice coefficient and performance metrics such as accuracy and AUC were calculated to evaluate the segmentation and classification models. Quantitative decomposition of the predictive model was conducted to explore the contribution of each feature. Besides, survival analysis and the statistical correlation between CT texture features, pathological, and genomic signatures were investigated. RESULTS: A total of 569 enhanced CT images of 443 patients (mean age 59.4 years, 278 males) were included in the analysis. In the segmentation task, the mean Dice coefficient was 0.96 for the kidney and 0.88 for the cancer region. For classification of histologic subtype, tumor stage, and pathological grade, the model was on a par with radiologists, and the AUC was 0.83 ± 0.1, 0.80 ± 0.1, and 0.77 ± 0.1 at 95% confidence intervals, respectively. Moreover, specific quantitative CT features related to clinical prognosis were identified. A strong statistical correlation (R² = 0.83) between the feature crosses and genomic characteristics was shown. Structural equation modeling confirmed significant associations between CT features and pathological (β = -0.75) and molecular subtype (β = -0.30). CONCLUSIONS: The framework achieves high performance in the pathological classification of renal cell carcinoma. Prognosis and genomic characteristics can be inferred by quantitative image analysis. KEY POINTS: * The analytical framework exhibits high-performance pathological classification of renal cell carcinoma and is on a par with human radiologists. * Quantitative decomposition of the predictive model shows that specific texture features contribute to histologic subtype and tumor stage classification. * Structural equation modeling shows the associations of genomic characteristics to CT texture features. Overall survival and molecular characteristics can be inferred by quantitative CT texture analysis in renal cell carcinoma.
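The Dice coefficient used to score the segmentation task is a two-line computation. A self-contained sketch on toy binary masks:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
ref = np.zeros((64, 64), bool); ref[22:42, 22:42] = True
print(f"Dice = {dice(pred, ref):.3f}")
```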
Accuracy of fractal analysis and PI-RADS assessment of prostate magnetic resonance imaging for prediction of cancer grade groups: a clinical validation study
Michallek, F.
Huisman, H.
Hamm, B.
Elezkurtaj, S.
Maxeiner, A.
Dewey, M.
Eur Radiol2021Journal Article, cited 1 times
Website
PROSTATEx-2 2017 challenge
Fractals
Multiparametric magnetic resonance imaging
Neoplasm grading
Perfusion
Prostatic neoplasms
OBJECTIVES: Multiparametric MRI with Prostate Imaging Reporting and Data System (PI-RADS) assessment is sensitive but not specific for detecting clinically significant prostate cancer. This study validates the diagnostic accuracy of the recently suggested fractal dimension (FD) of perfusion for detecting clinically significant cancer. MATERIALS AND METHODS: Routine clinical MR imaging data, acquired at 3 T without an endorectal coil including dynamic contrast-enhanced sequences, of 72 prostate cancer foci in 64 patients were analyzed. In-bore MRI-guided biopsy with International Society of Urological Pathology (ISUP) grading served as the reference standard. Previously established FD cutoffs for predicting tumor grade were compared to measurements of the apparent diffusion coefficient (25th percentile, ADC25) and PI-RADS assessment with and without inclusion of the FD as a separate criterion. RESULTS: Fractal analysis allowed prediction of ISUP grade groups 1 to 4 but not 5, with high agreement with the reference standard (kappa_FD = 0.88 [CI: 0.79-0.98]). Integrating fractal analysis into PI-RADS allowed a strong improvement in specificity and overall accuracy while maintaining high sensitivity for significant cancer detection (ISUP > 1; PI-RADS alone: sensitivity = 96%, specificity = 20%, area under the receiver operating curve [AUC] = 0.65; versus PI-RADS with fractal analysis: sensitivity = 95%, specificity = 88%, AUC = 0.92, p < 0.001). ADC25 only differentiated low-grade group 1 from pooled higher-grade groups 2-5 (kappa_ADC = 0.36 [CI: 0.12-0.59]). Importantly, fractal analysis was significantly more reliable than ADC25 in predicting non-significant and clinically significant cancer (AUC_FD = 0.96 versus AUC_ADC = 0.75, p < 0.001). Diagnostic accuracy was not significantly affected by zone location. CONCLUSIONS: Fractal analysis is accurate in noninvasively predicting tumor grades in prostate cancer and adds independent information when implemented into PI-RADS assessment. This opens the opportunity to adjust biopsy priority and method in individual patients. KEY POINTS: * Fractal analysis of perfusion is accurate in noninvasively predicting tumor grades in prostate cancer using dynamic contrast-enhanced sequences (kappa_FD = 0.88). * Including the fractal dimension into PI-RADS as a separate criterion improved specificity (from 20 to 88%) and overall accuracy (AUC from 0.86 to 0.96) while maintaining high sensitivity (96% versus 95%) for predicting clinically significant cancer. * Fractal analysis was significantly more reliable than ADC25 in predicting clinically significant cancer (AUC_FD = 0.96 versus AUC_ADC = 0.75).
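The agreement statistic reported above (Cohen's kappa against the biopsy reference) is directly available in scikit-learn; the grade-group lists below are illustrative, not study data.

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative ISUP grade-group predictions vs. in-bore biopsy reference
reference = [1, 1, 2, 2, 3, 3, 4, 4, 1, 2, 3, 4]
fd_based = [1, 1, 2, 3, 3, 3, 4, 4, 1, 2, 3, 4]
print("kappa:", round(cohen_kappa_score(reference, fd_based), 3))
```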
Prediction of prostate cancer grade using fractal analysis of perfusion MRI: retrospective proof-of-principle study
Michallek, F.
Huisman, H.
Hamm, B.
Elezkurtaj, S.
Maxeiner, A.
Dewey, M.
Eur Radiol2021Journal Article, cited 1 times
Website
PROSTATEx
Fractals
Multiparametric magnetic resonance imaging
Neoplasm grading
Perfusion
Prostatic neoplasms
OBJECTIVES: Multiparametric MRI has high diagnostic accuracy for detecting prostate cancer, but non-invasive prediction of tumor grade remains challenging. Characterizing tumor perfusion by exploiting the fractal nature of vascular anatomy might elucidate the aggressive potential of a tumor. This study introduces the concept of fractal analysis for characterizing prostate cancer perfusion and reports on its usefulness for non-invasive prediction of tumor grade. METHODS: We retrospectively analyzed the openly available PROSTATEx dataset with 112 cancer foci in 99 patients. In all patients, histological grading groups specified by the International Society of Urological Pathology (ISUP) were obtained from in-bore MRI-guided biopsy. Fractal analysis of dynamic contrast-enhanced perfusion MRI sequences was performed, yielding the fractal dimension (FD) as a quantitative descriptor. Two-class and multiclass diagnostic accuracy was analyzed using area under the curve (AUC) receiver operating characteristic analysis, and optimal FD cutoffs were established. Additionally, we compared fractal analysis to conventional apparent diffusion coefficient (ADC) measurements. RESULTS: Fractal analysis of perfusion allowed accurate differentiation of non-significant (group 1) and clinically significant (groups 2-5) cancer with a sensitivity of 91% (confidence interval [CI]: 83-96%) and a specificity of 86% (CI: 73-94%). FD correlated linearly with ISUP groups (r² = 0.874, p < 0.001). Significant groupwise differences were obtained between low, intermediate, and high ISUP groups 1-4 (p ≤ 0.001) but not group 5 tumors. Fractal analysis of perfusion was significantly more reliable than ADC in predicting non-significant and clinically significant cancer (AUC_FD = 0.97 versus AUC_ADC = 0.77, p < 0.001). CONCLUSION: Fractal analysis of perfusion MRI accurately predicts prostate cancer grading in low-, intermediate-, and high-, but not highest-grade, tumors. KEY POINTS: * In 112 prostate carcinomas, fractal analysis of MR perfusion imaging accurately differentiated low-, intermediate-, and high-grade cancer (ISUP grade groups 1-4). * Fractal analysis detected clinically significant prostate cancer with a sensitivity of 91% (83-96%) and a specificity of 86% (73-94%). * The fractal dimension of perfusion at the tumor margin may provide an imaging biomarker to predict prostate cancer grading.
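The fractal dimension at the heart of this method can be illustrated with the classic box-counting estimator. The study applies fractal analysis to DCE perfusion maps; the sketch below instead uses a binary mask (a filled disc, whose dimension should be close to 2) purely to show the principle.

```python
import numpy as np

def box_counting_fd(mask: np.ndarray) -> float:
    """Minkowski-Bouligand (box-counting) dimension of a 2-D binary mask."""
    sizes = [2, 4, 8, 16, 32]
    counts = []
    for s in sizes:
        # Trim so the grid tiles the image exactly, then count occupied boxes
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # FD is the negative slope of log(count) vs. log(box size)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

yy, xx = np.indices((256, 256))
disc = (yy - 128) ** 2 + (xx - 128) ** 2 < 100 ** 2
print(f"FD ~= {box_counting_fd(disc):.2f}")   # close to 2 for a filled disc
```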
Deep learning in CT colonography: differentiating premalignant from benign colorectal polyps
Wesp, P.
Grosu, S.
Graser, A.
Maurus, S.
Schulz, C.
Knosel, T.
Fabritius, M. P.
Schachtner, B.
Yeh, B. M.
Cyran, C. C.
Ricke, J.
Kazmierczak, P. M.
Ingrisch, M.
Eur Radiol2022Journal Article, cited 0 times
Website
CT COLONOGRAPHY
Colonic polyp
Colonography
Computed Tomography (CT)
Deep learning
Computer Aided Detection (CADe)
OBJECTIVES: To investigate the differentiation of premalignant from benign colorectal polyps detected by CT colonography using deep learning. METHODS: In this retrospective analysis of an average-risk colorectal cancer screening sample, polyps of all size categories and morphologies were manually segmented on supine and prone CT colonography images and classified as premalignant (adenoma) or benign (hyperplastic polyp or regular mucosa) according to histopathology. Two deep learning models, SEG and noSEG, were trained on 3D CT colonography image subvolumes to predict polyp class, and model SEG was additionally trained with polyp segmentation masks. Diagnostic performance was validated in an independent external multicentre test sample. Predictions were analysed with the visualisation technique Grad-CAM++. RESULTS: The training set consisted of 107 colorectal polyps in 63 patients (mean age: 63 ± 8 years, 40 men) comprising 169 polyp segmentations. The external test set included 77 polyps in 59 patients comprising 118 polyp segmentations. Model SEG achieved a ROC-AUC of 0.83 and 80% sensitivity at 69% specificity for differentiating premalignant from benign polyps. Model noSEG yielded a ROC-AUC of 0.75, 80% sensitivity at 44% specificity, and an average Grad-CAM++ heatmap score of ≥ 0.25 in 90% of polyp tissue. CONCLUSIONS: In this proof-of-concept study, deep learning enabled the differentiation of premalignant from benign colorectal polyps detected with CT colonography and the visualisation of image regions important for predictions. The approach did not require polyp segmentation and thus has the potential to facilitate the identification of high-risk polyps as an automated second reader. KEY POINTS: * Non-invasive deep learning image analysis may differentiate premalignant from benign colorectal polyps found in CT colonography scans. * Deep learning autonomously learned to focus on polyp tissue for predictions without the need for prior polyp segmentation by experts. * Deep learning potentially improves the diagnostic accuracy of CT colonography in colorectal cancer screening by allowing for a more precise selection of patients who would benefit from endoscopic polypectomy, especially for patients with polyps of 6-9 mm size.
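A tiny PyTorch sketch of a 3D CNN that maps a CT subvolume to a single premalignant-vs-benign logit. This is an illustrative stand-in, not the paper's SEG/noSEG architecture; the input size and channel widths are assumptions.

```python
import torch
import torch.nn as nn

class PolypNet3D(nn.Module):
    """Minimal 3-D CNN: subvolume in, one classification logit out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global pooling over z, y, x
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PolypNet3D()
subvolume = torch.randn(2, 1, 32, 32, 32)   # batch of two 32^3 CT crops
print(model(subvolume).shape)                # -> torch.Size([2, 1])
```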
Multi-center validation of an artificial intelligence system for detection of COVID-19 on chest radiographs in symptomatic patients
Kuo, M. D.
Chiu, K. W. H.
Wang, D. S.
Larici, A. R.
Poplavskiy, D.
Valentini, A.
Napoli, A.
Borghesi, A.
Ligabue, G.
Fang, X. H. B.
Wong, H. K. C.
Zhang, S.
Hunter, J. R.
Mousa, A.
Infante, A.
Elia, L.
Golemi, S.
Yu, L. H. P.
Hui, C. K. M.
Erickson, B. J.
Eur Radiol2022Journal Article, cited 0 times
Website
COVID-19-NY-SBU
COVID-19
Public health
Radiology
Thoracic
OBJECTIVES: While chest radiograph (CXR) is the first-line imaging investigation in patients with respiratory symptoms, differentiating COVID-19 from other respiratory infections on CXR remains challenging. We developed and validated an AI system for COVID-19 detection on presenting CXR. METHODS: A deep learning model (RadGenX), trained on 168,850 CXRs, was validated on a large international test set of presenting CXRs of symptomatic patients from 9 study sites (US, Italy, and Hong Kong SAR) and 2 public datasets from the US and Europe. Performance was measured by the area under the receiver operator characteristic curve (AUC). Bootstrapped simulations were performed to assess performance across a range of potential COVID-19 disease prevalence values (3.33% to 33.3%). Comparison against international radiologists was performed on an independent test set of 852 cases. RESULTS: RadGenX achieved an AUC of 0.89 on 4-fold cross-validation and an AUC of 0.79 (95% CI 0.78-0.80) on an independent test cohort of 5,894 patients. DeLong's test showed statistical differences in model performance across patients from different regions (p < 0.01), disease severity (p < 0.001), gender (p < 0.001), and age (p = 0.03). Prevalence simulations showed the negative predictive value increases from 86.1% at 33.3% prevalence to greater than 98.5% at any prevalence below 4.5%. Compared with radiologists, McNemar's test showed the model had higher sensitivity (p < 0.001) but lower specificity (p < 0.001). CONCLUSION: An AI model that predicts COVID-19 infection on CXR in symptomatic patients was validated on a large international cohort, providing valuable context on testing and performance expectations for AI systems that perform COVID-19 prediction on CXR. KEY POINTS: * An AI model developed using CXRs to detect COVID-19 was validated in a large multi-center cohort of 5,894 patients from 9 prospectively recruited sites and 2 public datasets. * Differences in AI model performance were seen across region, disease severity, gender, and age. * Prevalence simulations on the international test set demonstrate the model's NPV is greater than 98.5% at any prevalence below 4.5%.
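The prevalence dependence of NPV follows directly from Bayes' rule, which is all the simulation sweep needs. The sensitivity and specificity values below are illustrative assumptions, not the paper's operating point.

```python
def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative predictive value from Bayes' rule."""
    tn = specificity * (1.0 - prevalence)          # true-negative mass
    fn = (1.0 - sensitivity) * prevalence          # false-negative mass
    return tn / (tn + fn)

# Illustrative operating point; sweep the prevalence range from the paper
sens, spec = 0.85, 0.75
for prev in (0.0333, 0.045, 0.10, 0.20, 0.333):
    print(f"prevalence {prev:5.1%} -> NPV {npv(sens, spec, prev):.1%}")
```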
ITHscore: comprehensive quantification of intra-tumor heterogeneity in NSCLC by multi-scale radiomic features
Li, J.
Qiu, Z.
Zhang, C.
Chen, S.
Wang, M.
Meng, Q.
Lu, H.
Wei, L.
Lv, H.
Zhong, W.
Zhang, X.
Eur Radiol2022Journal Article, cited 0 times
Website
NSCLC-Radiomics
NSCLC Radiogenomics
Head-Neck-Radiomics-HN1
RIDER LUNG CT
Non-small cell lung cancer
Radiomics
Computed Tomography (CT)
Tumor heterogeneity
OBJECTIVES: To quantify intra-tumor heterogeneity (ITH) in non-small cell lung cancer (NSCLC) from computed tomography (CT) images. METHODS: We developed a quantitative ITH measurement, ITHscore, by integrating local radiomic features and global pixel distribution patterns. The associations of ITHscore with tumor phenotypes, genotypes, and patient prognosis were examined on six patient cohorts (n = 1399) to validate its effectiveness in characterizing ITH. RESULTS: For stage I NSCLC, ITHscore was consistent with tumor progression from stage IA1 to IA3 (p < 0.001) and captured key pathological change in terms of malignancy (p < 0.001). ITHscore distinguished the presence of lymphovascular invasion (p = 0.003) and pleural invasion (p = 0.001) in tumors. ITHscore also separated patient groups with different overall survival (p = 0.004) and disease-free survival conditions (p = 0.005). Radiogenomic analysis showed that the level of ITHscore in stage I and stage II NSCLC is correlated with heterogeneity-related pathways. In addition, ITHscore proved to be a stable measurement and can be applied to ITH quantification in head-and-neck cancer (HNC). CONCLUSIONS: ITH in NSCLC can be quantified from CT images by ITHscore, which is an indicator of tumor phenotypes and patient prognosis. KEY POINTS: * ITHscore provides a radiomic quantification of intra-tumor heterogeneity in NSCLC. * ITHscore is an indicator of tumor phenotypes and patient prognosis. * ITHscore has the potential to be generalized to other cancer types such as HNC.
Algorithmic transparency and interpretability measures improve radiologists' performance in BI-RADS 4 classification
Jungmann, F.
Ziegelmayer, S.
Lohoefer, F. K.
Metz, S.
Muller-Leisse, C.
Englmaier, M.
Makowski, M. R.
Kaissis, G. A.
Braren, R. F.
Eur Radiol2022Journal Article, cited 0 times
CBIS-DDSM
Algorithms
Artificial intelligence
Perception
Radiologists
Trust
OBJECTIVE: To evaluate the perception of different types of AI-based assistance and the interaction of radiologists with the algorithm's predictions and certainty measures. METHODS: In this retrospective observer study, four radiologists were asked to classify Breast Imaging-Reporting and Data System 4 (BI-RADS 4) lesions (n = 101 benign, n = 99 malignant). The effects of different types of AI-based assistance (occlusion-based interpretability map, classification, and certainty) on the radiologists' performance (sensitivity, specificity, questionnaire) were measured. The influence of the Big Five personality traits was analyzed using the Pearson correlation. RESULTS: Diagnostic accuracy was significantly improved by AI-based assistance (an increase of 2.8% ± 2.3%, 95% CI 1.5-4.0%, p = 0.045), and trust in the algorithm was generated primarily by the certainty of the prediction (100% of participants). Different human-AI interactions were observed, ranging from nearly no interaction to humanization of the algorithm. High scores in neuroticism were correlated with higher persuasibility (Pearson's r = 0.98, p = 0.02), while higher conscientiousness and change of accuracy showed an inverse correlation (Pearson's r = -0.96, p = 0.04). CONCLUSION: Trust in the algorithm's performance was mostly dependent on the certainty of the predictions in combination with a plausible heatmap. Human-AI interaction varied widely and was influenced by personality traits. KEY POINTS: * AI-based assistance significantly improved the diagnostic accuracy of radiologists in classifying BI-RADS 4 mammography lesions. * Trust in the algorithm's performance was mostly dependent on the certainty of the prediction in combination with a reasonable heatmap. * Personality traits seem to influence human-AI collaboration. Radiologists with specific personality traits were more likely to change their classification according to the algorithm's prediction than others.
Free-breathing and instantaneous abdominal T2 mapping via single-shot multiple overlapping-echo acquisition and deep learning reconstruction
Lin, X.
Dai, L.
Yang, Q.
Yang, Q.
He, H.
Ma, L.
Liu, J.
Cheng, J.
Cai, C.
Bao, J.
Chen, Z.
Cai, S.
Zhong, J.
Eur Radiol2023Journal Article, cited 0 times
TCGA-LIHC
Abdomen
Deep learning
Magnetic Resonance Imaging (MRI)
LIVER
KIDNEY
GALLBLADDER
SPLEEN
Segmentation
OBJECTIVES: To develop a real-time abdominal T2 mapping method without requiring breath-holding or respiratory gating. METHODS: The single-shot multiple overlapping-echo detachment (MOLED) pulse sequence was employed to achieve free-breathing T2 mapping of the abdomen. Deep learning was used to untangle the non-linear relationship between the MOLED signal and the T2 map. A synthetic data generation flow based on Bloch simulation, modality synthesis, and randomization was proposed to overcome the inadequacy of real-world training sets. RESULTS: The results from simulation and in vivo experiments demonstrated that our method could deliver high-quality T2 mapping. The average NMSE and R² values of linear regression in the digital phantom experiments were 0.0178 and 0.9751. Pearson's correlation coefficient between our predicted T2 and reference T2 in the phantom experiments was 0.9996. In the measurements for the patients, real-time capture of the T2 value changes of various abdominal organs before and after contrast agent injection was realized. A total of 33 focal liver lesions were detected in the group, and the mean and standard deviation of T2 values were 141.1 ± 50.0 ms for benign and 63.3 ± 16.0 ms for malignant lesions. The coefficients of variation in a test-retest experiment were 2.9%, 1.2%, 0.9%, 3.1%, and 1.8% for the liver, kidney, gallbladder, spleen, and skeletal muscle, respectively. CONCLUSIONS: Free-breathing abdominal T2 mapping is achieved in about 100 ms on a clinical MRI scanner. This work paved the way for the development of real-time dynamic T2 mapping in the abdomen. KEY POINTS: * MOLED achieves free-breathing abdominal T2 mapping in about 100 ms, enabling real-time capture of T2 value changes due to contrast agent injection in abdominal organs. * The synthetic data generation flow mitigates the lack of sizable abdominal training datasets.
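The synthetic-data idea can be illustrated with a toy generator. The paper's pipeline uses full Bloch simulation of the MOLED sequence; the sketch below only produces randomized mono-exponential T2 decays with noise, i.e., signal/parameter training pairs under much simpler assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_t2_decay(n_samples: int, echo_times_ms: np.ndarray):
    """Toy mono-exponential T2 signal generator for training-data synthesis."""
    t2 = rng.uniform(20.0, 300.0, size=(n_samples, 1))     # ms, randomized
    m0 = rng.uniform(0.5, 1.0, size=(n_samples, 1))        # proton density
    signal = m0 * np.exp(-echo_times_ms / t2)
    signal += rng.normal(scale=0.01, size=signal.shape)    # acquisition noise
    return signal.astype(np.float32), t2.astype(np.float32)

TE = np.arange(10.0, 170.0, 10.0)            # 16 echo times in ms
X, y = synthetic_t2_decay(10000, TE)
print(X.shape, y.shape)                      # training pairs for a regressor
```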
Integrative radiomics and transcriptomics analyses reveal subtype characterization of non-small cell lung cancer
Lin, P.
Lin, Y. Q.
Gao, R. Z.
Wan, W. J.
He, Y.
Yang, H.
Eur Radiol2023Journal Article, cited 0 times
Website
NSCLC-Radiomics-Genomics
NSCLC Radiogenomics
Heterogeneity
Non-small cell lung cancer
Radiomics
Transcriptomics
Radiomic features
Clustering
OBJECTIVES: To assess whether integrative radiomics and transcriptomics analyses could provide novel insights for radiomic features' molecular annotation and effective risk stratification in non-small cell lung cancer (NSCLC). METHODS: A total of 627 NSCLC patients from three datasets were included. Radiomics features were extracted from segmented 3-dimensional tumour volumes and were z-score normalized for further analysis. In transcriptomics level, 186 pathways and 28 types of immune cells were assessed by using the Gene Set Variation Analysis (GSVA) algorithm. NSCLC patients were categorized into subgroups based on their radiomic features and pathways enrichment scores using consensus clustering. Subgroup-specific radiomics features were used to validate clustering performance and prognostic value. Kaplan-Meier survival analysis with the log-rank test and univariable and multivariable Cox analyses were conducted to explore survival differences among the subgroups. RESULTS: Three radiotranscriptomics subtypes (RTSs) were identified based on the radiomics and pathways enrichment profiles. The three RTSs were characterized as having specific molecular hallmarks: RTS1 (proliferation subtype), RTS2 (metabolism subtype), and RTS3 (immune activation subtype). RTS3 showed increased infiltration of most immune cells. The RTS stratification strategy was validated in a validation cohort and showed significant prognostic value. Survival analysis demonstrated that the RTS strategy could stratify NSCLC patients according to prognosis (p = 0.009), and the RTS strategy remained an independent prognostic indicator after adjusting for other clinical parameters. CONCLUSIONS: This radiotranscriptomics study provides a stratification strategy for NSCLC that could provide information for radiomics feature molecular annotation and prognostic prediction. KEY POINTS: * Radiotranscriptomics subtypes (RTSs) could be used to stratify molecularly heterogeneous patients. * RTSs showed relationships between molecular phenotypes and radiomics features. * The RTS algorithm could be used to identify patients with poor prognosis.
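The subtype-wise survival comparison described above boils down to a Kaplan-Meier analysis with a log-rank test across the three RTS groups. A sketch with the lifelines package (assumed installed) on synthetic labels and times:

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(0)
# Placeholder: subtype labels (RTS1-3) with survival times in months
df = pd.DataFrame({
    "rts": rng.integers(1, 4, size=300),
    "time": rng.exponential(30.0, size=300),
    "event": rng.integers(0, 2, size=300),
})

res = multivariate_logrank_test(df["time"], df["rts"], df["event"])
print("log-rank p =", round(res.p_value, 4))

kmf = KaplanMeierFitter()
for g, sub in df.groupby("rts"):
    kmf.fit(sub["time"], sub["event"], label=f"RTS{g}")
    print(f"RTS{g} median OS: {kmf.median_survival_time_:.1f} months")
```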
Efficacy of exponentiation method with a convolutional neural network for classifying lung nodules on CT images by malignancy level
Usuzaki, Takuma
Takahashi, Kengo
Takagi, Hidenobu
Ishikuro, Mami
Obara, Taku
Yamaura, Takumi
Kamimoto, Masahiro
Majima, Kazuhiro
European Radiology2023Journal Article, cited 0 times
Website
LIDC-IDRI
Classification
OBJECTIVES: The aim of this study was to examine the performance of a convolutional neural network (CNN) combined with exponentiating each pixel value in classifying benign and malignant lung nodules on computed tomography (CT) images. MATERIALS AND METHODS: Images in the Lung Image Database Consortium-Image Database Resource Initiative (LIDC-IDRI) were analyzed. Four CNN models were then constructed to classify the lung nodules by malignancy level (malignancy level 1 vs. 2, malignancy level 1 vs. 3, malignancy level 1 vs. 4, and malignancy level 1 vs. 5). The exponentiation method was applied for exponent values of 1.0 to 10.0 in increments of 0.5. Accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC) were calculated. These statistics were compared between an exponent value of 1.0 and all other exponent values in each model by the Mann-Whitney U-test. RESULTS: In malignancy 1 vs. 4, maximum test accuracy (MTA; exponent value = 2.0, 3.0, 3.5, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, and 10.0) and specificity (6.5, 7.0, and 9.0) were improved by up to 0.012 and 0.037, respectively. In malignancy 1 vs. 5, MTA (6.5 and 7.0) and sensitivity (1.5) were improved by up to 0.030 and 0.0040, respectively. CONCLUSIONS: The exponentiation method improved the performance of the CNN in the task of classifying lung nodules on CT images as benign or malignant. The exponentiation method demonstrated two advantages: improved accuracy, and the ability to adjust sensitivity and specificity by selecting an appropriate exponent value. CLINICAL RELEVANCE STATEMENT: Adjustment of sensitivity and specificity by selecting an exponent value enables the construction of proper CNN models for screening, diagnosis, and treatment processes among patients with lung nodules. KEY POINTS: * The exponentiation method improved the performance of the convolutional neural network. * Contrast accentuation by the exponentiation method may derive features of lung nodules. * Sensitivity and specificity can be adjusted by selecting an exponent value.
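The exponentiation step itself is a one-line intensity transform. A minimal sketch, assuming intensities are first rescaled to [0, 1] (the paper may apply the exponent to raw pixel values instead):

```python
import numpy as np

def exponentiate(image: np.ndarray, exponent: float) -> np.ndarray:
    """Rescale to [0, 1] and raise each pixel to the given exponent.

    Exponents above 1 darken mid-range intensities and stretch contrast at
    the bright end: the contrast-accentuation effect discussed above.
    """
    lo, hi = image.min(), image.max()
    norm = (image - lo) / (hi - lo + 1e-8)
    return norm ** exponent

ct_slice = np.random.default_rng(0).uniform(-1000, 400, size=(128, 128))
for k in (1.0, 2.0, 6.5):                    # exponents examined in the study
    out = exponentiate(ct_slice, k)
    print(f"exponent {k}: mean intensity {out.mean():.3f}")
```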
Impact of signal intensity normalization of MRI on the generalizability of radiomic-based prediction of molecular glioma subtypes
Foltyn-Dumitru, Martha
Schell, Marianne
Rastogi, Aditya
Sahm, Felix
Kessler, Tobias
Wick, Wolfgang
Bendszus, Martin
Brugnara, Gianluca
Vollmuth, Philipp
European Radiology2023Journal Article, cited 0 times
UCSF-PDGM
glioma
radiomics
MRI
IDH genotype
Radiomic features have demonstrated encouraging results for non-invasive detection of molecular biomarkers, but the lack of guidelines for pre-processing MRI data has led to poor generalizability. Here, we assessed the influence of different MRI intensity normalization techniques on the performance of radiomics-based models for predicting molecular glioma subtypes.
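One of the normalization schemes typically compared in such studies is the z-score. A minimal sketch, computing the statistics inside a brain mask so that background air does not bias them; the volume and mask are synthetic.

```python
import numpy as np

def zscore_normalize(volume: np.ndarray, brain_mask: np.ndarray) -> np.ndarray:
    """Z-score intensity normalization restricted to brain voxels."""
    vals = volume[brain_mask]
    return (volume - vals.mean()) / (vals.std() + 1e-8)

rng = np.random.default_rng(0)
t1 = rng.normal(600.0, 150.0, size=(32, 64, 64))       # arbitrary scanner units
mask = np.zeros_like(t1, dtype=bool)
mask[:, 16:48, 16:48] = True
t1n = zscore_normalize(t1, mask)
print(t1n[mask].mean().round(3), t1n[mask].std().round(3))   # ~0 and ~1
```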
Automated, fast, robust brain extraction on contrast-enhanced T1-weighted MRI in presence of brain tumors: an optimized model based on multi-center datasets
Teng, Y.
Chen, C.
Shu, X.
Zhao, F.
Zhang, L.
Xu, J.
Eur Radiol2023Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Vestibular-Schwannoma-SEG
ACRIN-FMISO-Brain
ACRIN 6684
Segmentation
Brain extraction
Brain mask
Brain tumor
Deep learning
Magnetic Resonance Imaging (MRI)
OBJECTIVES: Existing brain extraction models should be further optimized to provide more information for oncological analysis. We aimed to develop an nnU-Net-based deep learning model for automated brain extraction on contrast-enhanced T1-weighted (T1CE) images in the presence of brain tumors. METHODS: This is a multi-center, retrospective study involving 920 patients. A total of 720 cases with four types of intracranial tumors from private institutions were collected and set as the training group and the internal test group. The Mann-Whitney U test (U test) was used to investigate whether model performance was associated with pathological type and tumor characteristics. Then, the generalization of the model was independently tested on public datasets consisting of 100 glioma and 100 vestibular schwannoma cases. RESULTS: In the internal test, the model achieved promising performance, with a median Dice similarity coefficient (DSC) of 0.989 (interquartile range (IQR), 0.988-0.991) and a Hausdorff distance (HD) of 6.403 mm (IQR, 5.099-8.426 mm). The U test suggested slightly lower performance in the meningioma and vestibular schwannoma groups. The results of the U test also suggested a significant difference in the peritumoral edema group, with a median DSC of 0.990 (IQR, 0.989-0.991, p = 0.002) and a median HD of 5.916 mm (IQR, 5.000-8.000 mm, p = 0.049). In the external test, our model also showed robust performance, with a median DSC of 0.991 (IQR, 0.983-0.998) and an HD of 8.972 mm (IQR, 6.164-13.710 mm). CONCLUSIONS: For automated processing of MRI neuroimaging data in the presence of brain tumors, the proposed model can perform brain extraction including important superficial structures for oncological analysis. CLINICAL RELEVANCE STATEMENT: The proposed model serves as a radiological tool for image preprocessing in tumor cases, focusing on superficial brain structures, which could streamline the workflow and enhance the efficiency of subsequent radiological assessments. KEY POINTS: * The nnU-Net-based model is capable of segmenting significant superficial structures in brain extraction. * The proposed model showed feasible performance, regardless of pathological type or tumor characteristics. * The model showed generalization in the public datasets.
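The two evaluation metrics above (DSC and Hausdorff distance) are easy to compute from binary masks. A sketch using SciPy; note it reports HD in voxel units, whereas mm would require multiplying coordinates by the voxel spacing first.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dsc(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two binary masks (voxel units)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

pred = np.zeros((32, 32, 32), bool); pred[8:24, 8:24, 8:24] = True
ref = np.zeros((32, 32, 32), bool); ref[9:25, 9:25, 9:25] = True
print(f"DSC = {dsc(pred, ref):.3f}, HD = {hausdorff(pred, ref):.2f} voxels")
```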
Shape matters: unsupervised exploration of IDH-wildtype glioma imaging survival predictors
Foltyn-Dumitru, M.
Mahmutoglu, M. A.
Brugnara, G.
Kessler, T.
Sahm, F.
Wick, W.
Heiland, S.
Bendszus, M.
Vollmuth, P.
Schell, M.
Eur Radiol2024Journal Article, cited 0 times
Website
UCSF-PDGM
Cluster analysis
Glioma
Radiogenomics
Magnetic Resonance Imaging (MRI)
Radiomics
OBJECTIVES: This study examines clustering based on shape radiomic features and tumor volume to identify IDH-wildtype glioma phenotypes and assess their impact on overall survival (OS). MATERIALS AND METHODS: This retrospective study included 436 consecutive patients diagnosed with IDH-wt glioma who underwent preoperative MR imaging. Alongside the total tumor volume, nine distinct shape radiomic features were extracted using the PyRadiomics framework. Different imaging phenotypes were identified using partition around medoids (PAM) clustering on the training dataset (348/436). The prognostic efficacy of these phenotypes in predicting OS was evaluated on the test dataset (88/436). External validation was performed using the public UCSF glioma dataset (n = 397). A decision-tree algorithm was employed to determine the relevance of features associated with cluster affiliation. RESULTS: PAM clustering identified two clusters in the training dataset: Cluster 1 (n = 233) had a higher proportion of patients with higher sphericity and elongation, while Cluster 2 (n = 115) had a higher proportion of patients with higher maximum 3D diameter, surface area, axis lengths, and tumor volume (p < 0.001 for each). OS differed significantly between clusters: Cluster 1 showed a median OS of 23.8 months, compared with 11.4 months for Cluster 2, in the holdout test dataset (p = 0.002). Multivariate Cox regression showed improved performance with cluster affiliation over clinical data alone (C-index 0.67 vs 0.59, p = 0.003). Cluster-based models outperformed models with tumor volume alone (evidence ratio: 5.16-5.37). CONCLUSION: Data-driven clustering reveals imaging phenotypes, highlighting the improved prognostic power of combining shape radiomics with tumor volume, thereby outperforming predictions based on tumor volume alone in high-grade glioma survival outcomes. CLINICAL RELEVANCE STATEMENT: Shape-radiomics and volume-based cluster analyses of preoperative MRI scans can reveal imaging phenotypes that improve the prediction of OS in patients with IDH-wildtype gliomas, outperforming currently known models based on tumor size alone or clinical parameters. KEY POINTS: * Shape radiomics and tumor volume clustering in IDH-wildtype gliomas are investigated for enhanced prognostic accuracy. * Two distinct phenotypic clusters were identified with different median OS. * Integrating shape radiomics and volume-based clustering enhances OS prediction in IDH-wildtype glioma patients.
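PAM clustering is available as KMedoids in the third-party scikit-learn-extra package. A sketch on a synthetic shape-feature table; the column semantics and the downstream survival comparison are only indicated in comments.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn_extra.cluster import KMedoids   # from scikit-learn-extra

rng = np.random.default_rng(0)
# Placeholder shape table: sphericity, elongation, max 3D diameter,
# surface area, axis lengths, total volume, ... (columns illustrative)
shape_features = rng.normal(size=(348, 10))

X = StandardScaler().fit_transform(shape_features)
pam = KMedoids(n_clusters=2, method="pam", random_state=0).fit(X)
labels = pam.labels_
print("cluster sizes:", np.bincount(labels))
# A survival comparison between the two clusters would follow, e.g. a
# log-rank test on OS stratified by `labels`.
```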
Unpaired low-dose CT denoising via an improved cycle-consistent adversarial network with attention ensemble
Yin, Zhixian
Xia, Kewen
Wang, Sijie
He, Ziping
Zhang, Jiangnan
Zu, Baokai
The Visual Computer2022Journal Article, cited 0 times
Lung-PET-CT-Dx
Many deep learning-based approaches have been shown to perform well for low-dose computed tomography (LDCT) image post-processing. Unfortunately, most of them depend heavily on well-paired datasets, which are difficult to acquire in clinical practice. Therefore, we propose an improved cycle-consistent adversarial network (CycleGAN) to improve the quality of LDCT images. We employ a UNet-based network with attention gates as the generator, which can adaptively emphasize salient features useful for the denoising task. By doing so, the proposed network enables the decoder to acquire semantic features from the encoder with emphasis, thereby improving its performance. Then, a perceptual loss based on the visual geometry group (VGG) network is added to the cycle-consistency loss to bring the visual quality of denoised images as close as possible to that of standard-dose computed tomography images. Moreover, we propose an improved adversarial loss based on the least-squares loss. In particular, a Lipschitz constraint is added to the objective function of the discriminator, while total variation is added to that of the generator, to further enhance the denoising capability of the network. The proposed method is trained and tested on a public dataset named 'Lung-PET-CT-Dx' and a real clinical dataset. Results show that the proposed method outperforms the comparative methods and even performs comparably to an approach based on paired datasets in terms of quantitative scores and visual quality.
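Two ingredients of the objective above, the least-squares adversarial loss and the total-variation regularizer, are compact enough to sketch in PyTorch. The Lipschitz constraint on the discriminator (e.g., via gradient penalty or spectral normalization) is omitted, and the TV weight is an assumption.

```python
import torch
import torch.nn.functional as F

def lsgan_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Least-squares discriminator loss: push real outputs to 1, fake to 0."""
    return 0.5 * (F.mse_loss(d_real, torch.ones_like(d_real)) +
                  F.mse_loss(d_fake, torch.zeros_like(d_fake)))

def total_variation(img: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation, encouraging piecewise-smooth denoised CT."""
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    return dh + dw

# Toy tensors standing in for discriminator outputs and a generated image
d_real, d_fake = torch.rand(4, 1), torch.rand(4, 1)
fake_ct = torch.rand(4, 1, 64, 64)
loss_d = lsgan_d_loss(d_real, d_fake)
loss_g_reg = 1e-4 * total_variation(fake_ct)   # TV weight is an assumption
print(loss_d.item(), loss_g_reg.item())
```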
The Visual Computer2023Journal Article, cited 0 times
HER2 tumor ROIs
TCGA-BRCA
TCGA-KIRC
Pathomics
Whole Slide Imaging (WSI)
Classification
Genomic Data Commons
Cancer is one of the most common diseases around the world, and pathological examination is the most effective method for cancer diagnosis, but its heavy and time-consuming workflow has increased pathologists' workload. With the advent of whole slide image (WSI) scanners, tissue on a glass slide can be saved as a high-definition digital image, making computer-aided disease diagnosis possible. However, the extreme size of WSIs and the lack of pixel-level annotations pose great challenges for machine learning in pathology image diagnosis. To solve this problem, we propose a metric learning-based two-stage MIL framework (TSMIL) for WSI classification, which combines two stages of supervised clustering and metric-based classification. The training samples (WSIs) are first clustered into different clusters based on their labels in supervised clustering. Then, building on the previous step, we propose four different strategies to measure the distance of a test sample to each class cluster to classify the test samples: MaxS, AvgS, DenS, and HybS. Our model is evaluated on three pathology datasets: TCGA-NSCLC, TCGA-RCC, and HER2. The average AUC scores reach up to 0.9895 and 0.9988 on TCGA-NSCLC and TCGA-RCC, and 0.9265 on HER2, respectively. The results showed that our method outperformed state-of-the-art methods. The excellent performance on different kinds of cancer datasets verifies the feasibility of our method as a general architecture.
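Two of the four distance strategies named above (MaxS and AvgS) can be sketched as nearest-member and mean distances from a test embedding to each class cluster. The embeddings below are synthetic stand-ins for WSI representations, and the exact definitions of the paper's strategies may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
# Embeddings of training WSIs after supervised clustering, grouped by class
clusters = {0: rng.normal(0.0, 1.0, size=(50, 128)),
            1: rng.normal(1.5, 1.0, size=(50, 128))}
test_emb = rng.normal(1.4, 1.0, size=128)

def dists(x, members):
    return np.linalg.norm(members - x, axis=1)

# MaxS-style: distance to the closest cluster member; AvgS-style: mean distance
scores_max = {c: dists(test_emb, m).min() for c, m in clusters.items()}
scores_avg = {c: dists(test_emb, m).mean() for c, m in clusters.items()}
print("MaxS prediction:", min(scores_max, key=scores_max.get))
print("AvgS prediction:", min(scores_avg, key=scores_avg.get))
```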
DNA-methylome-assisted classification of patients with poor prognostic subventricular zone associated IDH-wildtype glioblastoma
Adeberg, S.
Knoll, M.
Koelsche, C.
Bernhardt, D.
Schrimpf, D.
Sahm, F.
Konig, L.
Harrabi, S. B.
Horner-Rieber, J.
Verma, V.
Bewerunge-Hudler, M.
Unterberg, A.
Sturm, D.
Jungk, C.
Herold-Mende, C.
Wick, W.
von Deimling, A.
Debus, J.
Rieken, S.
Abdollahi, A.
Acta Neuropathol2022Journal Article, cited 0 times
Website
TCGA-GBM
Radiomics
Radiogenomics
Methylation markers
Glioblastoma (GBM) derived from the "stem cell" rich subventricular zone (SVZ) may constitute a therapy-refractory subgroup of tumors associated with poor prognosis. Risk stratification for these cases is necessary but is curtailed by error-prone imaging-based evaluation. Therefore, we aimed to establish a robust DNA methylome-based classification of SVZ GBM and subsequently decipher underlying molecular characteristics. MRI assessment of SVZ association was performed in a retrospective training set of IDH-wildtype GBM patients (n = 54) uniformly treated with postoperative chemoradiotherapy. DNA isolated from FFPE samples was subject to methylome and copy number variation (CNV) analysis using the Illumina platform and the cnAnalysis450k package. Deep next-generation sequencing (NGS) of a panel of 130 GBM-related genes was conducted (Agilent SureSelect/Illumina). Methylome, transcriptome, CNV, MRI, and mutational profiles of SVZ GBM were further evaluated in a confirmatory cohort of 132 patients (TCGA/TCIA). A 15-CpG SVZ methylation signature (SVZM) was discovered based on clustering and random forest analysis. One third of the CpGs in the SVZM were associated with MAB21L2/LRBA. There was a 14.8% (n = 8) discordance between SVZM and MRI classification. Re-analysis of these patients favored SVZM classification, with a hazard ratio (HR) for OS of 2.48 [95% CI 1.35-4.58], p = 0.004, vs. 1.83 [1.0-3.35], p = 0.049 for MRI classification. In the validation cohort, consensus MRI-based assignment was achieved in 62% of patients with an intraclass correlation (ICC) of 0.51 and a non-significant HR for OS (2.03 [0.81-5.09], p = 0.133). In contrast, SVZM identified two prognostically distinct subgroups (HR 3.08 [1.24-7.66], p = 0.016). CNV alterations revealed loss of chromosome 10 in SVZM- and gains on chromosome 19 in SVZM- tumors. SVZM- tumors were also enriched for differentially mutated genes (p < 0.001). In summary, SVZM classification provides a novel means for stratifying GBM patients with poor prognosis and deciphering molecular mechanisms governing aggressive tumor phenotypes.
Are shape morphologies associated with survival? A potential shape-based biomarker predicting survival in lung cancer
Saad, Maliazurina
Lee, Ik Hyun
Choi, Tae-Sun
J Cancer Res Clin Oncol2019Journal Article, cited 0 times
Website
Radiomics
LUNG
Classification
PURPOSE: Imaging biomarkers (IBMs) are increasingly investigated as prognostic indicators. IBMs might be capable of assisting treatment selection by providing useful insights into tumor-specific factors in a non-invasive manner. METHODS: We investigated six three-dimensional shape-based IBMs: eccentricities between (I) intermediate-major axis (Eimaj), (II) intermediate-minor axis (Eimin), (III) major-minor axis (Emj-mn) and volumetric index of (I) sphericity (VioS), (II) flattening (VioF), (III) elongating (VioE). Additionally, we investigated previously established two-dimensional shape IBMs: eccentricity (E), index of sphericity (IoS), and minor-to-major axis length (Mn_Mj). IBMs were compared in terms of their predictive performance for 5-year overall survival in two independent cohorts of patients with lung cancer. Cohort 1 received surgical excision, while cohort 2 received radiation therapy alone or chemo-radiation therapy. Univariate and multivariate survival analyses were performed. Correlations with clinical parameters were evaluated using analysis of variance. IBM reproducibility was assessed using concordance correlation coefficients (CCCs). RESULTS: E was associated with reduced survival in cohort 1 (hazard ratio [HR]: 0.664). Eimin and VioF were associated with reduced survival in cohort 2 (HR 1.477 and 1.701). VioS was associated with reduced survival in cohorts 1 and 2 (HR 1.758 and 1.472). Spherical tumors correlated with shorter survival durations than did irregular tumors (median survival difference: 1.21 and 0.35 years in cohorts 1 and 2, respectively). VioS was a significant predictor of survival in multivariate analyses of both cohorts. All IBMs showed good reproducibility (CCC ranged between 0.86-0.98). CONCLUSIONS: In both investigated cohorts, VioS successfully linked shape morphology to patient survival.
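Sphericity-type shape biomarkers like VioS relate a tumor's volume to its surface area. The study's exact VioS definition may differ; the sketch below uses the classic sphericity formula, which equals 1.0 for a perfect sphere and decreases for irregular shapes.

```python
import numpy as np

def sphericity(volume_mm3: float, surface_mm2: float) -> float:
    """Surface area of the equal-volume sphere divided by the actual surface
    area: pi^(1/3) * (6V)^(2/3) / A, equal to 1.0 for a perfect sphere."""
    return (np.pi ** (1 / 3)) * ((6.0 * volume_mm3) ** (2 / 3)) / surface_mm2

# A sphere of radius 10 mm scores ~1.0
r = 10.0
print(round(sphericity(4 / 3 * np.pi * r**3, 4 * np.pi * r**2), 3))
# A prolate ellipsoid (semi-axes 20, 5, 5 mm; surface ~1012 mm^2) scores lower
print(round(sphericity(4 / 3 * np.pi * 20 * 5 * 5, 1012.0), 3))
```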
A survival prediction model via interpretable machine learning for patients with oropharyngeal cancer following radiotherapy
Pan, Xiaoying
Feng, Tianhao
Liu, Chen
Savjani, Ricky R.
Chin, Robert K.
Sharon Qi, X.
Journal of Cancer Research and Clinical Oncology2023Journal Article, cited 0 times
OPC-Radiomics
Oropharyngeal cancer
Head and neck cancer
Machine Learning
Algorithm Development
PyRadiomics
PURPOSE: To explore interpretable machine learning (ML) methods, with the hope of adding prognostic value, for predicting survival in patients with oropharyngeal cancer (OPC). METHODS: A cohort of 427 OPC patients (341 training, 86 test) from the TCIA database was analyzed. Radiomic features of the gross tumor volume (GTV) extracted from planning CT using PyRadiomics, HPV p16 status, and other patient characteristics were considered as potential predictors. A multi-level dimension reduction algorithm consisting of the least absolute shrinkage and selection operator (LASSO) and sequential floating backward selection (SFBS) was proposed to effectively remove redundant/irrelevant features. The interpretable model was constructed by quantifying the contribution of each feature to the extreme gradient boosting (XGBoost) decision with the Shapley Additive exPlanations (SHAP) algorithm. RESULTS: The LASSO-SFBS algorithm proposed in this study selected 14 features, and our prediction model achieved an area under the ROC curve (AUC) of 0.85 on the test dataset based on this feature set. The ranking of contribution values calculated by SHAP shows that the top predictors most correlated with survival were ECOG performance status, wavelet-LLH_firstorder_Mean, chemotherapy, wavelet-LHL_glcm_InverseVariance, and tumor size. Patients who had chemotherapy, positive HPV p16 status, and lower ECOG performance status tended to have higher SHAP scores and longer survival; those with an older age at diagnosis and heavy drinking and smoking pack-year histories tended to have lower SHAP scores and shorter survival. CONCLUSION: We demonstrated the predictive value of combined patient characteristics and imaging features for the overall survival of OPC patients. The multi-level dimension reduction algorithm can reliably identify the most plausible predictors that are mostly associated with overall survival. The interpretable patient-specific survival prediction model, capturing correlations of each predictor and clinical outcome, was developed to facilitate clinical decision-making for personalized treatment.
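The XGBoost-plus-SHAP pattern described above is a standard combination. A sketch using the xgboost and shap packages (both assumed installed) on a synthetic feature table; the feature indices printed are of course meaningless here.

```python
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Placeholder table: GTV radiomics plus clinical covariates for OPC patients
X = rng.normal(size=(341, 14))
y = rng.integers(0, 2, size=341)          # 1 = survived past the horizon

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# TreeExplainer decomposes each prediction into per-feature contributions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]
print("top-5 features by mean |SHAP|:", ranking[:5])
```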
Enhanced breast mass mammography classification approach based on pre-processing and hybridization of transfer learning models
Boudouh, S. S.
Bouakkaz, M.
J Cancer Res Clin Oncol2023Journal Article, cited 0 times
Website
CBIS-DDSM
Humans
*Neural Networks
Computer
Mammography/methods
Breast/diagnostic imaging
*Breast Neoplasms/diagnostic imaging
Machine Learning
Tumor Microenvironment
Breast cancer
Breast mass detection
Deep learning
Mammography processing
Image denoising
BACKGROUND AND OBJECTIVE: Breast cancer is now the second most prevalent cause of death among women, surpassing heart disease. Breast masses must be accurately identified in mammography images to diagnose breast cancer early, which can significantly increase the patient's survival percentage. However, this remains difficult due to the diversity of breast masses and the complexity of their microenvironment. Hence, establishing a reliable and effective breast mass detection approach to increase patient survival remains an open research issue. Even though several machine and deep learning-based approaches have been proposed to address these issues, pre-processing strategies and network architectures were insufficient for breast mass detection in mammogram scans, which directly influences the accuracy of the proposed models. METHODS: Aiming to resolve these issues, we propose a two-stage classification method for breast mass mammography scans. First, we introduce a pre-processing stage divided into three sub-strategies, which include several filters for region of interest (ROI) extraction, noise removal, and image enhancement. Second, we propose a classification stage based on transfer learning techniques for feature extraction and global pooling for classification, instead of standard machine learning algorithms or fully connected layers. Moreover, instead of using a traditional fine-tuned single network for feature extraction, we propose a hybrid model in which two recent pre-trained CNNs are concatenated to assist the feature extraction phase. RESULTS: Using the CBIS-DDSM dataset, we increased accuracy, sensitivity, and specificity, reaching the highest accuracy of 98.1% using the median filter for noise removal, followed by the Gaussian filter trial with 96% accuracy, while the Wiener filter attained the lowest accuracy of 94.13%. Moreover, global average pooling proved more suitable as a classifier in our case than global max pooling. CONCLUSION: The experimental findings demonstrate that the suggested breast mass detection strategy for mammography can outperform the currently top-ranked methods in terms of classification performance.
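The hybridization idea, concatenating globally pooled features from two pretrained CNNs, can be sketched in PyTorch. The specific backbone pair (ResNet-50 + DenseNet-121) is an assumption; the abstract does not name the networks used.

```python
import torch
import torch.nn as nn
from torchvision import models

class HybridExtractor(nn.Module):
    """Concatenates globally pooled features from two pretrained CNNs."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        r = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        d = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        self.branch_a = nn.Sequential(*list(r.children())[:-2])  # conv trunk
        self.branch_b = d.features
        self.pool = nn.AdaptiveAvgPool2d(1)      # global average pooling
        self.head = nn.Linear(2048 + 1024, n_classes)

    def forward(self, x):
        fa = self.pool(self.branch_a(x)).flatten(1)
        fb = self.pool(self.branch_b(x)).flatten(1)
        return self.head(torch.cat([fa, fb], dim=1))

x = torch.randn(2, 3, 224, 224)     # pre-processed mammogram ROIs
print(HybridExtractor()(x).shape)   # -> torch.Size([2, 2])
```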
Fuzzy volumetric delineation of brain tumor and survival prediction
Bhadani, Saumya
Mitra, Sushmita
Banerjee, Subhashis
Soft Computing2020Journal Article, cited 0 times
Website
BRATS datasets
A novel three-dimensional detailed delineation algorithm is introduced for Glioblastoma multiforme tumors in MRI. It efficiently delineates the whole tumor, enhancing core, edema and necrosis volumes using fuzzy connectivity and multi-thresholding, based on a single seed voxel. While the whole tumor volume delineation uses FLAIR and T2 MRI channels, the outlining of the enhancing core, necrosis and edema volumes employs the T1C channel. Discrete curve evolution is initially applied for multi-thresholding, to determine intervals around significant (visually critical) points, and a threshold is determined in each interval using bi-level Otsu’s method or Li and Lee’s entropy. This is followed by an interactive whole tumor volume delineation using FLAIR and T2 MRI sequences, requiring a single user-defined seed. An efficient and robust whole tumor extraction is executed using fuzzy connectedness and dynamic thresholding. Finally, the segmented whole tumor volume in T1C MRI channel is again subjected to multi-level segmentation, to delineate its sub-parts, encompassing enhancing core, necrosis and edema. This was followed by survival prediction of patients using the concept of habitats. Qualitative and quantitative evaluation, on FLAIR, T2 and T1C MR sequences of 29 GBM patients, establish its superiority over related methods, visually as well as in terms of Dice scores, Sensitivity and Hausdorff distance.
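The multi-thresholding step has an off-the-shelf analogue in scikit-image; the minimal sketch below applies multi-level Otsu to a T1C slice, while the discrete-curve-evolution interval selection and the fuzzy-connectedness growing from the seed voxel are omitted.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def multilevel_labels(t1c_slice, classes=4):
    # threshold_multiotsu returns classes-1 cut points;
    # digitize then assigns each voxel an intensity-class label.
    thresholds = threshold_multiotsu(t1c_slice, classes=classes)
    return np.digitize(t1c_slice, bins=thresholds)
```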
Volumetric analysis framework for accurate segmentation and classification (VAF-ASC) of lung tumor from CT images
A lung tumor can typically be described as abnormal cell growth in the lungs that may pose a severe threat to patient health, since the lung is a vital organ comprising an associated network of blood vessels and lymphatic canals. Earlier detection and classification of lung tumors has a great impact on increasing the survival rate of patients. Computed Tomography (CT) lung images are broadly used for analysis, since they give information about the various lung regions. Prediction of tumor contour, position, and volume plays an imperative role in accurate segmentation and classification of tumor cells, which aids successful tumor stage detection and treatment. With that concern, this paper develops a Volumetric Analysis Framework for Accurate Segmentation and Classification (VAF-ASC) of lung tumors. The volumetric analysis framework comprises the estimation of the length, thickness, and height of the detected tumor for achieving precise results. Though there are many models for tumor detection from 2D CT inputs, it is very important to develop a method for separating lung nodules from a noisy background. To that end, this paper exploits connectivity and locality features of the lung image pixels. Moreover, morphological processing techniques are incorporated for removing additional noise and airways. Tumor segmentation is accomplished by the k-means clustering approach, and Tumor-Nodule-Metastasis classification based on volumetric analysis is performed for accurate results. The VAF-ASC provides better results with respect to factors such as tumor diagnosis accuracy rate, reduced computation time, and appropriate tumor stage classification.
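A minimal sketch of the k-means segmentation stage might look as follows; the cluster count, the brightest-cluster heuristic, and the morphological clean-up are illustrative assumptions, not the paper's tuned pipeline.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def kmeans_tumor_mask(ct_volume, n_clusters=3):
    # Cluster voxels by intensity alone (a common baseline).
    X = ct_volume.reshape(-1, 1).astype(np.float32)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    labels = labels.reshape(ct_volume.shape)
    # Assume the brightest cluster corresponds to soft-tissue nodules.
    bright = np.argmax([ct_volume[labels == k].mean()
                        for k in range(n_clusters)])
    mask = labels == bright
    # Morphological opening removes airway and vessel fragments.
    return ndimage.binary_opening(mask, iterations=2)
```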
Lung cancer diagnosis using Hessian adaptive learning optimization in generative adversarial networks
Thirumagal, E.
Saruladha, K.
Soft Computing2023Journal Article, cited 0 times
SPIE-AAPM Lung CT Challenge
Lung cancer is the most frequent cancer and a leading cause of cancer death, with high morbidity and mortality. Computed tomography is one of the efficient medical imaging tools for lung cancer diagnosis, offering internal lung details. However, as datasets are of limited availability and interpretation requires a large number of images, it is hard for radiologists to diagnose lung cancer. The generative adversarial network (GAN) is a significant generative model employed for data augmentation, with the benefit of simulating the data distribution without explicit modeling of the underlying probability density function. Despite the benefits of GANs, the training process remains challenging due to high convergence time and mode-collapse problems. To resolve these issues, a Hessian Adaptive Learning (HAL) optimization technique is proposed in this paper. The proposed HAL technique uses gradient and curvature information to eliminate the mode-collapse problem and to enlarge the dataset via the generation of diverse images. The experiments were conducted on Vanilla GAN, Wasserstein Generative Adversarial Network (WGAN), Conditional Generative Adversarial Network (CGAN), and Deep Convolutional Generative Adversarial Network (DCGAN). Each GAN was tested using stochastic gradient descent (SGD), Gauss–Newton (GN) second-order learning, and the proposed HAL optimization technique. The experimental outcomes prove that the GANs with the HAL optimization technique yield better performance compared to the SGD and GN models. The experimental results assured that the GANs converge fast and eliminate mode-collapse problems using HAL optimization.
A joint intensity and edge magnitude-based multilevel thresholding algorithm for the automatic segmentation of pathological MR brain images
Kaur, Taranjit
Saini, Barjinder Singh
Gupta, Savita
Neural Computing and Applications2016Journal Article, cited 1 times
Website
Radiomics
BraTS
Performance analysis of various machine learning-based approaches for detection and classification of lung cancer in humans
Singh, Gur Amrit Pal
Gupta, PK
Neural Computing and Applications2018Journal Article, cited 0 times
Website
Lung cancer is one of the most common causes of death among all cancer-related diseases (Cancer Research UK in Cancer mortality for common cancers. http://www.cancerresearchuk.org/health-professional/cancer-statistics/mortality/common-cancers-compared, 2017). It is primarily diagnosed by performing a scan analysis of the patient's lungs; this scan analysis may be X-ray, CT scan, or MRI. Automated classification of lung cancer is a difficult task, owing to the varying mechanisms used for imaging patients' lungs. Image processing and machine learning approaches have shown great potential for detection and classification of lung cancer. In this paper, we demonstrate an effective approach for detection and classification of lung-cancer-related CT scan images into benign and malignant categories. The proposed approach first processes these images using image processing techniques, and then supervised learning algorithms are used for their classification. We extracted texture features along with statistical features and supplied the various extracted features to the classifiers. We used seven different classifiers: the k-nearest neighbors classifier, support vector machine classifier, decision tree classifier, multinomial naive Bayes classifier, stochastic gradient descent classifier, random forest classifier, and multi-layer perceptron (MLP) classifier. We used a dataset of 15,750 clinical images, consisting of 6,910 benign and 8,840 malignant lung-cancer-related images, to train and test these classifiers. In the obtained results, the accuracy of the MLP classifier was the highest, at 88.55%.
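The seven classifiers named in the abstract are all available in scikit-learn; the following is a hedged sketch of such a comparison, with default hyperparameters standing in for whatever the authors tuned, and with feature extraction not shown.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

CLASSIFIERS = {
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Decision tree": DecisionTreeClassifier(),
    "Multinomial NB": MultinomialNB(),  # requires non-negative features
    "SGD": SGDClassifier(),
    "Random forest": RandomForestClassifier(),
    "MLP": MLPClassifier(max_iter=500),
}

def benchmark(X, y):
    # 5-fold cross-validated accuracy for each model on the same features.
    return {name: cross_val_score(clf, X, y, cv=5).mean()
            for name, clf in CLASSIFIERS.items()}
```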
Automatic lung segmentation in low-dose chest CT scans using convolutional deep and wide network (CDWN)
Agnes, S Akila
Anitha, J
Peter, J Dinesh
Neural Computing and Applications2018Journal Article, cited 0 times
Website
LIDC-IDRI
fuzzy C-means clustering (FCM)
Deep learning
Convolutional Neural Network (CNN)
Lung segmentation
ROI-based feature learning for efficient true positive prediction using convolutional neural network for lung cancer diagnosis
Suresh, Supriya
Mohan, Subaji
Neural Computing and Applications2020Journal Article, cited 0 times
LIDC-IDRI
Convolutional Neural Network (CNN)
LUNG
Automatic lung cancer detection from CT image using improved deep neural network and ensemble classifier
Shakeel, P. Mohamed
Burhanuddin, M. A.
Desa, Mohammad Ishak
Neural Computing and Applications2020Journal Article, cited 0 times
CPTAC-LSCC
The development of computer-aided detection systems plays an important role in clinical analysis for making decisions about human disease. Among the various disease examination processes, lung cancer needs particular attention because it affects both men and women and drives up the mortality rate. Traditional lung cancer prediction techniques fail to maintain accuracy because of low-quality images that affect the segmentation process. So, in this paper, a new optimized image processing and machine learning technique is introduced to predict lung cancer. For recognizing lung cancer, non-small cell lung cancer CT scan dataset images were collected. The gathered images are examined by applying a multilevel brightness-preserving approach, which effectively examines each pixel, eliminates noise, and increases the quality of the lung image. From the noise-removed lung CT image, the affected region is segmented using an improved deep neural network, which segments the region through its network layers, and various features are extracted. The effective features are then selected with the help of a hybrid spiral-optimization intelligent-generalized rough set approach, and those features are classified using an ensemble classifier. The discussed method increases the lung cancer prediction rate, which is examined using MATLAB-based results such as logarithmic loss, mean absolute error, precision, recall, and F-score.
“SPOCU”: scaled polynomial constant unit activation function
Kiseľák, Jozef
Lu, Ying
Švihra, Ján
Szépe, Peter
Stehlík, Milan
Neural Computing and Applications2020Journal Article, cited 0 times
Pancreas-CT
We address the following problem: given a set of complex images or a large database, the numerical and computational complexity and quality of approximation for a neural network may drastically differ from one activation function to another. A general novel methodology, the scaled polynomial constant unit activation function “SPOCU,” is introduced and shown to work satisfactorily on a variety of problems. Moreover, we show that SPOCU can outperform already-introduced activation functions with good properties, e.g., SELU and ReLU, on generic problems. In order to explain the good properties of SPOCU, we provide several theoretical and practical motivations, including a tissue growth model and memristive cellular nonlinear networks. We also provide an estimation strategy for SPOCU parameters and relate it to the generation of a random type of Sierpinski carpet, related to the [pppq] model. One of the attractive properties of SPOCU is its genuine normalization of the output of layers. We illustrate the SPOCU methodology on cancer discrimination, including mammary and prostate cancer, and on data from the Wisconsin Diagnostic Breast Cancer dataset. Moreover, we compared SPOCU with SELU and ReLU on the large MNIST dataset, which justifies the usefulness of SPOCU by its very good performance.
Selective information passing for MR/CT image segmentation
Zhu, Qikui
Li, Liang
Hao, Jiangnan
Zha, Yunfei
Zhang, Yan
Cheng, Yanxiang
Liao, Fei
Li, Pingxiang
Neural Computing and Applications2020Journal Article, cited 0 times
Pancreas-CT
Automated medical image segmentation plays an important role in many clinical applications, which however is a very challenging task, due to complex background texture, lack of clear boundary and significant shape and texture variation between images. Many researchers proposed an encoder–decoder architecture with skip connections to combine low-level feature maps from the encoder path with high-level feature maps from the decoder path for automatically segmenting medical images. The skip connections have been shown to be effective in recovering fine-grained details of the target objects and may facilitate the gradient back-propagation. However, not all the feature maps transmitted by those connections contribute positively to the network performance. In this paper, to adaptively select useful information to pass through those skip connections, we propose a novel 3D network with self-supervised function, named selective information passing network. We evaluate our proposed model on the MICCAI Prostate MR Image Segmentation 2012 Grant Challenge dataset, TCIA Pancreas CT-82 and MICCAI 2017 Liver Tumor Segmentation Challenge dataset. The experimental results across these datasets show that our model achieved improved segmentation results and outperformed other state-of-the-art methods. The source code of this work is available at https://github.com/ahukui/SIPNet.
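A hedged sketch of the core idea, gating what a skip connection passes from encoder to decoder, is given below in PyTorch; the module is an illustrative interpretation of selective information passing, not SIPNet's actual architecture (see the linked repository for that).

```python
import torch
import torch.nn as nn

class GatedSkip(nn.Module):
    """Adaptively weight encoder features before a skip-connection merge."""

    def __init__(self, channels):
        super().__init__()
        # A 1x1x1 conv + sigmoid learns a per-voxel pass-through weight.
        self.gate = nn.Sequential(nn.Conv3d(channels, channels, 1),
                                  nn.Sigmoid())

    def forward(self, encoder_feat, decoder_feat):
        weight = self.gate(encoder_feat)
        # Only the gated fraction of encoder detail reaches the decoder.
        return torch.cat([encoder_feat * weight, decoder_feat], dim=1)
```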
A novel prostate segmentation method: triple fusion model with hybrid loss
Ocal, Hakan
Barisci, Necaattin
Neural Computing and Applications2022Journal Article, cited 0 times
Website
NCI-ISBI-Prostate-2013
Algorithm Development
Segmentation
Early and rapid diagnosis of prostate cancer, one of the most common cancers among men, has become increasingly important. Nowadays, many methods are used in the early diagnosis of prostate cancer. Compared to other imaging methods, magnetic resonance imaging (MRI) of the prostate gland is preferred because angular imaging (axial, sagittal, and coronal) provides precise information. However, diagnosing the disease from these MR images is time-consuming: imaging differences between MR devices and the inhomogeneous and inconsistent appearance of the prostate are significant challenges for segmentation, and because of these difficulties, manual segmentation of prostate images is demanding. In recent years, computer-aided intelligent architectures (deep learning-based architectures) have been used to replace manual segmentation of prostate images. Thanks to their end-to-end automatic deep convolutional neural networks (DCNNs), these architectures can now perform in seconds prostate segmentation that used to take days. Inspired by the studies mentioned above, this study proposes a novel DCNN approach for prostate segmentation by combining a ResUnet 2D with residual blocks and an Edge Attention Vnet 3D architecture. In addition, the weighted focal Tversky loss function, proposed here for the first time, significantly increased the architecture's performance. Evaluation experiments were performed on the MICCAI 2012 Prostate Segmentation Challenge dataset (PROMISE12) and the NCI-ISBI 2013 (NCI_ISBI-13) Prostate Segmentation Challenge dataset. As a result of the tests performed, Dice scores of 91.92% and 91.15% on the whole prostate volume were obtained on the PROMISE12 and NCI_ISBI-13 datasets, respectively. Comparative analyses show that the advantages and robustness of our method are superior to state-of-the-art approaches.
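The focal Tversky loss combines the Tversky index TI = TP / (TP + αFN + βFP) with a focal exponent, loss = (1 − TI)^γ. Below is a minimal PyTorch sketch; α, β, and γ are common placeholder values, not the paper's tuned weights.

```python
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3,
                       gamma=0.75, eps=1e-6):
    # pred: predicted probabilities; target: binary ground-truth mask.
    p, t = pred.reshape(-1), target.reshape(-1)
    tp = (p * t).sum()
    fn = ((1 - p) * t).sum()
    fp = (p * (1 - t)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    # The focal exponent emphasizes hard examples with a low Tversky index.
    return (1 - tversky) ** gamma
```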
A survey on deep learning applied to medical images: from simple artificial neural networks to generative models
Celard, P.
Iglesias, E. L.
Sorribes-Fdez, J. M.
Romero, R.
Vieira, A. Seara
Borrajo, L.
Neural Computing and Applications2022Journal Article, cited 0 times
Breast-Cancer-Screening-DBT
Pancreas-CT
Deep learning techniques, in particular generative models, have taken on great importance in medical image analysis. This paper surveys fundamental deep learning concepts related to medical image generation. It provides concise overviews of studies that use some of the latest state-of-the-art models of recent years, applied to medical images of different injured body areas or organs associated with a disease (e.g., brain tumors and COVID-19 lung pneumonia). The motivation for this study is to offer a comprehensive overview of artificial neural networks (NNs) and deep generative models in medical imaging, so that more groups and authors who are not familiar with deep learning consider its use in medical work. We review the use of generative models, such as generative adversarial networks and variational autoencoders, as techniques to achieve semantic segmentation, data augmentation, and better classification algorithms, among other purposes. In addition, a collection of widely used public medical datasets containing magnetic resonance (MR) images, computed tomography (CT) scans, and common pictures is presented. Finally, we feature a summary of the current state of generative models in medical imaging, including key features, current challenges, and future research paths.
Refinement of ensemble strategy for acute lymphoblastic leukemia microscopic images using hybrid CNN-GRU-BiLSTM and MSVM classifier
Mohammed, Kamel K.
Hassanien, Aboul Ella
Afify, Heba M.
Neural Computing and Applications2023Journal Article, cited 0 times
Website
C-NMC 2019
Long Short-Term Memory (LSTM)
Imaging features
Computer Aided Detection (CADe)
Acute lymphocytic leukemia (ALL) is a common, serious cancer of white blood cells (WBCs) that advances quickly and produces abnormal cells in the bone marrow. Cancerous cells associated with ALL lead to impairment of body systems. Microscopic examination of ALL in a blood sample is performed manually by hematologists, with many shortcomings. Computer-aided leukemia image detection is used to avoid reliance on human visual recognition and to provide a more accurate diagnosis. This paper employs an ensemble strategy to automatically detect ALL cells versus normal WBCs in three stages. Firstly, image pre-processing is applied to handle the unbalanced database through an oversampling process. Secondly, deep spatial features are generated using a convolutional neural network (CNN). At the same time, the gated recurrent unit (GRU)-bidirectional long short-term memory (BiLSTM) architecture is utilized to extract long-distance dependent or temporal features to obtain active feature learning. Thirdly, a softmax function and the multiclass support vector machine (MSVM) classifier are used for the classification mission. The proposed strategy classifies the C-NMC 2019 database into two categories by splitting the entire dataset into 90% training and 10% testing. The main motivation of this paper is the novelty of the proposed framework for the purposeful and accurate diagnosis of ALL images. The proposed CNN-GRU-BiLSTM-MSVM is simply stacked from existing tools. However, the empirical results on the C-NMC 2019 database show that the proposed framework is useful for the ALL image recognition problem compared to previous works. The DenseNet-201 model yielded an F1-score of 96.23% and an accuracy of 96.29% using the MSVM classifier on the test dataset. The findings exhibited that the proposed strategy can be employed as a complementary diagnostic tool for ALL cells. Further, this strategy should encourage researchers to augment rare databases, such as blood microscopic images, by creating powerful applications that combine machine learning with deep learning algorithms.
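As a sketch of how spatial CNN features can feed GRU and BiLSTM layers before classification, consider the Keras outline below; the layer sizes, the row-as-sequence reshape, and the softmax head (standing in for the MSVM) are illustrative assumptions, not the paper's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, activation="relu")(inp)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
# Treat the feature-map rows as a sequence for the recurrent branches.
seq = layers.Reshape((x.shape[1], x.shape[2] * x.shape[3]))(x)
seq = layers.GRU(128, return_sequences=True)(seq)       # temporal features
seq = layers.Bidirectional(layers.LSTM(64))(seq)        # long-range context
out = layers.Dense(2, activation="softmax")(seq)        # ALL vs normal WBC
model = Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```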
A2M-LEUK: attention-augmented algorithm for blood cancer detection in children
Talaat, Fatma M.
Gamel, Samah A.
Neural Computing and Applications2023Journal Article, cited 0 times
Website
C-NMC 2019
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
Algorithm Development
Blood cancer
Leukemia is a malignancy that affects the blood and bone marrow. Its detection and classification are conventionally done through labor-intensive and specialized methods. The diagnosis of blood cancer in children is a critical task that requires high precision and accuracy. This study proposes a novel approach utilizing attention mechanism-based machine learning in conjunction with image processing techniques for the precise detection and classification of leukemia cells. The proposed attention-augmented algorithm for blood cancer detection in children (A2M-LEUK) is an innovative algorithm that leverages attention mechanisms to improve the detection of blood cancer in children. A2M-LEUK was evaluated on a dataset of blood cell images and achieved remarkable performance metrics: Precision = 99.97%, Recall = 100.00%, F1-score = 99.98%, and Accuracy = 99.98%. These results indicate the high accuracy and sensitivity of the proposed approach in identifying and categorizing leukemia, and its potential to reduce the workload of medical professionals and improve the diagnosis of leukemia. The proposed method provides a promising approach for accurate and efficient detection and classification of leukemia cells, which could potentially improve the diagnosis and treatment of leukemia. Overall, A2M-LEUK improves the diagnosis of leukemia in children and reduces the workload of medical professionals.
LeukoCapsNet: a resource-efficient modified CapsNet model to identify leukemia from blood smear images
Dhalla, Sabrina
Mittal, Ajay
Gupta, Savita
Neural Computing and Applications2023Journal Article, cited 0 times
Website
C-NMC 2019
Algorithm Development
Deep convolutional neural network (DCNN)
Computer Aided Detection (CADe)
Leukemia
Leukemia is a deadly cancer that spreads at an exponential rate and has a detrimental impact on leukocytes in the human blood. To automate the process of leukemia detection, researchers have utilized deep learning networks to analyze blood smear images. In our research, we propose the use of networks that mimic the actual working of the human brain. These models are fed features from numerous convolution layers, each having its own set of additional skip connections; the features are then stored and processed as vectors, making them rotationally invariant as well, a characteristic not found in other deep learning networks, specifically convolutional neural networks (CNNs). The network is then pruned by 20% to make it more deployable in resource-constrained environments. This research also compares the model's performance through four ablation experiments and concludes that the proposed model is optimal. It has also been tested on three different types of datasets to highlight its robustness. The average values across all three datasets correspond to specificity: 96.97%, sensitivity: 96.81%, precision: 96.79%, and accuracy: 97.44%. In a nutshell, the outcomes of the proposed model, PrunedResCapsNet, make it more dynamic and effective compared with other existing methods.
An integrated convolutional neural network with attention guidance for improved performance of medical image classification
Öksüz, Coşku
Urhan, Oğuzhan
Güllü, Mehmet Kemal
Neural Computing and Applications2023Journal Article, cited 0 times
COVID-19-AR
Algorithm Development
Feature Extraction
Brain cancer
COVID-19
Classification
Today, it becomes essential to develop computer vision algorithms that are both highly effective and cost-effective for supporting physicians' decisions. Convolutional Neural Network (CNN) is a deep learning architecture that enables learning relevant imaging features by simultaneously optimizing feature extraction and classification phases and has a high potential to meet this need. On the other hand, the lack of low- and high-level local details in a CNN is an issue that can reduce the task performance and prevent the network from focusing on the region of interest. To tackle this issue, we propose an attention-guided CNN architecture, which combines three lightweight encoders (the ensembled encoder) at the feature level to consolidate the feature maps with local details in this study. The proposed model is validated on the publicly available data sets for two commonly studied classification tasks, i.e., the brain tumor and COVID-19 disease classification. Performance improvements of 2.21% and 1.32%, respectively, achieved for brain tumor and COVID-19 classification tasks confirm our assumption that combining encoders recovers local details missed in a deeper encoder. In addition, the attention mechanism used after the ensembled encoder further improves performance by 2.29% for the brain tumor and 6.13% for the COVID-19 classification tasks. Besides that, our ensembled encoder with the attention mechanism enhances the focus on the region of interest by 4.4% in terms of the IoU score. Competitive performance scores accomplished for each classification task against state-of-the-art methods indicate that the proposed model can be an effective tool for medical image classification.
Integrating expert guidance with gradual moment approximation (GMAp)-enhanced transfer learning for improved pancreatic cancer classification
Chhikara, Jasmine
Goel, Nidhi
Rathee, Neeru
Neural Computing and Applications2024Journal Article, cited 0 times
Website
CPTAC-PDA
Pancreas-CT
Despite significant research efforts, pancreatic cancer remains a formidable foe. To address the critical need for improved diagnostics, this study presents a novel approach that integrates expert guidance with computer-aided imaging for fine needle aspiration (FNA). A meticulously curated computed tomography (CT) dataset of ground truth images, focusing on key subregions of the pancreas, was established in collaboration with medical professionals. The images provided the training ground for a novel diagnostic model equipped with the gradual moment approximation (GMAp) optimization algorithm, designed to enhance the precision of cancer detection. By efficiently transferring knowledge from pre-trained models, the proposed model achieved remarkable accuracy (98.16%) in classifying CT images across distinct cancerous pancreatic subregions (head, body, and tail) and healthy pancreas. Extensive evaluations against diverse pre-trained models and benchmark medical databases: medical segmentation decathlon, clinical proteomic tumor analysis consortium pancreatic ductal adenocarcinoma, and pancreas-computed tomography proved the model's robustness and superior F1-scores compared to existing approaches. The experiment demonstrates that the deep learning-based 4-class classification outperforms state-of-the-art machine learning-based method by 3.66% in terms of accuracy. This efficiency, coupled with rigorous testing, paves the way for seamless integration into clinical workflows, potentially enabling earlier and more accurate pancreatic cancer diagnoses.
Spatiotemporal context feedback bidirectional attention network for breast cancer segmentation based on DCE-MRI
Pan, Xiang
Lv, Tianxu
Liu, Yuan
Li, Ningjun
Li, Lihua
Zhang, Yan
Ni, Jianming
Jiang, Chunjuan
Neural Computing and Applications2024Journal Article, cited 0 times
Website
Breast-MRI-NACT-Pilot
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Segmentation
BREAST
Breast cancer is highly heterogeneous, both between patients (inter-tumor) and within individual tumors (intra-tumor), which leads to indistinct boundaries and varying tumor sizes, shapes, appearances, and densities. Current 3D structural-imaging-based methods face challenges in segmenting breast cancer. The key to the problem is how to exploit the temporal correlations of 4D functional imaging, which depict the heterogeneity of vascular permeability in cancer, for accurate tumor segmentation. In this paper, we propose a unique spatiotemporal context feedback bidirectional attention network, which segments breast cancer by modeling dynamic contrast-enhanced dependency to exploit pharmacokinetics feature representations. Specifically, we design a temporal context feedback encoder to learn pharmacokinetics feature representations, which embeds bidirectional temporal attention for bidirectionally propagating contextual semantics across time sequences. Additionally, the learned representations are fed into a temporal context feedback decoder to obtain a voxel-level classification of breast tumors. Experimental results demonstrated that the proposed method outperforms recent tumor segmentation methods. Furthermore, our approach achieves competitive results on small training data and avoids over-fitting, owing to its model-driven ability to capture dynamic contrast-enhanced temporal correlations.
A novel study for automatic two-class COVID-19 diagnosis (between COVID-19 and Healthy, Pneumonia) on X-ray images using texture analysis and 2-D/3-D convolutional neural networks
Yasar, H.
Ceylan, M.
Multimed Syst2022Journal Article, cited 0 times
Website
The pandemic caused by the COVID-19 virus affects the world widely and heavily. When examining CT, X-ray, and ultrasound images, radiologists must first determine whether there are signs of COVID-19 in the images; that is, COVID-19/Healthy detection is made. The second determination is the separation of pneumonia caused by the COVID-19 virus from pneumonia caused by a bacterium or a virus other than COVID-19. This distinction is key in determining the treatment and isolation procedure to be applied to the patient. In this study, which aims to diagnose COVID-19 early using X-ray images, automatic two-class classification was carried out under four different headings: COVID-19/Healthy, COVID-19 Pneumonia/Bacterial Pneumonia, COVID-19 Pneumonia/Viral Pneumonia, and COVID-19 Pneumonia/Other Pneumonia. For this study, 3405 COVID-19, 2780 Bacterial Pneumonia, 1493 Viral Pneumonia, and 1989 Healthy images, obtained by combining eight different open-access data sets, were used. In the study, besides using the original X-ray images alone, classification results were obtained using images derived with Local Binary Pattern (LBP) and Local Entropy (LE). The classification procedures were repeated for images that combined the original images, LBP images, and LE images in various combinations. 2-D CNN (Two-Dimensional Convolutional Neural Network) and 3-D CNN (Three-Dimensional Convolutional Neural Network) architectures were used as classifiers within the scope of the study. Mobilenetv2, Resnet101, and Googlenet architectures were used as 2-D CNNs. A 24-layer 3-D CNN architecture was also designed and used. Our study is the first to analyze the effect of diversifying the input data type on the classification results of 2-D/3-D CNN architectures. The results obtained within the scope of the study indicate that diversifying X-ray images with texture analysis methods in the diagnosis of COVID-19 and including them as CNN input provides significant improvements in the results. It is also understood that the 3-D CNN architecture can be an important alternative for achieving a high classification result.
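Both texture maps used to diversify the CNN input are available in scikit-image; below is a minimal sketch that stacks the original X-ray with its LBP and local-entropy images as input channels. The LBP parameters and entropy neighborhood are assumptions, not the study's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def texture_channels(xray):
    # xray: 2-D array scaled to [0, 1].
    u8 = img_as_ubyte(xray)
    lbp = local_binary_pattern(u8, P=8, R=1, method="uniform")
    le = entropy(u8, disk(5))  # local entropy over a 5-pixel disk
    # Stack original + texture maps as a 3-channel CNN input.
    return np.stack([xray, lbp / lbp.max(), le / le.max()], axis=-1)
```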
Brain tumor classification from multi-modality MRI using wavelets and machine learning
Usman, Khalid
Rajpoot, Kashif
Pattern Analysis and Applications2017Journal Article, cited 17 times
Website
MICCAI BraTS challenge
Radiomics
Machine Learning
BRAIN
Segmentation
Low grade glioma
high grade glioma
Segmentation of colon and removal of opacified fluid for virtual colonoscopy
Gayathri, Devi K
Radhakrishnan, R
Rajamani, Kumar
Pattern Analysis and Applications2017Journal Article, cited 0 times
Website
CT COLONOGRAPHY
Computed Tomography (CT)
Segmentation
Image denoising
Colorectal cancer (CRC) is the third most common type of cancer. The use of techniques such as flexible sigmoidoscopy and capsule endoscopy for colorectal cancer screening causes physical pain and hardship to patients. Hence, to overcome these disadvantages, computed tomography (CT) can be employed for the identification of polyps or growths while screening for CRC. The proposed approach was implemented to improve the accuracy and reduce the computation time of segmenting the colon from abdominal CT images, which contain anatomical structures such as the lungs, small bowel, large bowel (colon), ribs, opacified fluid, and bones. The segmentation is performed in two major steps. The first step segments the air-filled colon portions by placing suitable seed points and applying modified 3D seeded region growing, which identifies and matches similar voxels using a 6-neighborhood connectivity technique. The segmentation of the opacified fluid portions is done using a fuzzy connectedness approach enhanced with interval thresholding. The membership classes are defined and the voxels are categorized based on the class value. Interval thresholding is performed so that the bones and opacified fluid parts can be extracted. The bones are removed by the placement of seed points, since the bone region is more continuous across the axial slices. The resulting image containing bones is subtracted from the thresholded output to segment the opacified fluid in all axial slices of a dataset. Finally, the opacified fluid is concatenated with the segmented colon for 3D rendering of the segmented colon. This method was implemented on 15 datasets downloaded from TCIA and on a real-time dataset, in both supine and prone positions, and the accuracy achieved was 98.73%.
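The air-filled colon step corresponds closely to SimpleITK's connected-threshold filter, which grows a region from a seed using face (6-neighborhood) connectivity. In the sketch below, the HU window and the seed index are placeholders, not the paper's tuned values.

```python
import SimpleITK as sitk

def grow_colon(ct_path, seed=(256, 256, 100)):
    img = sitk.ReadImage(ct_path)
    # Air in the insufflated colon sits near -1000 HU.
    mask = sitk.ConnectedThreshold(img, seedList=[seed],
                                   lower=-1024, upper=-800)
    # Morphological closing smooths the haustral folds in the grown region.
    return sitk.BinaryMorphologicalClosing(mask, [2, 2, 2])
```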
Batch and online variational learning of hierarchical Dirichlet process mixtures of multivariate Beta distributions in medical applications
Manouchehri, Narges
Bouguila, Nizar
Fan, Wentao
Pattern Analysis and Applications2021Journal Article, cited 1 times
Website
Osteosarcoma-Tumor-Assessment
H&E-stained slides
Machine Learning
Thanks to significant developments in healthcare industries, various types of medical data are generated. Analysing such valuable resources aids healthcare experts in understanding illnesses more precisely and providing better clinical services. Machine learning, as one of the capable tools, could assist healthcare experts in achieving expressive interpretation and making proper decisions. As annotation of medical data is a costly and sensitive task that can be performed only by healthcare professionals, label-free methods could be significantly promising. Interpretability and evidence-based decisions are other concerns in medicine. These needs motivated us to propose a novel clustering method based on hierarchical Dirichlet process mixtures of multivariate Beta distributions. To learn it, we applied batch and online variational methods for finding the proper number of clusters as well as estimating model parameters at the same time. The effectiveness of the proposed models is evaluated on three real medical applications, namely oropharyngeal carcinoma diagnosis, osteosarcoma analysis, and white blood cell counting.
2D MRI registration using glowworm swarm optimization with partial opposition-based learning for brain tumor progression
Si, Tapas
Pattern Analysis and Applications2023Journal Article, cited 0 times
Brain-Tumor-Progression
Magnetic Resonance Imaging (MRI)
Image Registration
Swarm
Optimization
Magnetic resonance imaging (MRI) registration is important in detection, diagnosis, treatment planning, determining radiographic progression, functional studies, computer-guided surgeries, and computer-guided therapies. The registration process solves the correspondence problem between features on MRI scans acquired at different time points in order to study changes while analyzing brain tumor progression. A registration method generally requires a search strategy (optimizer) to find the transformation parameters that optimize some similarity metric between the images. Metaheuristic algorithms have recently become more popular for image registration. In this paper, a metaheuristic algorithm, namely glowworm swarm optimization (GSO), is first improved by incorporating a partial opposition-based learning (POBL) strategy. The improved GSO is applied to register pre- and post-treatment MR images for brain tumor progression. A comparative study has been made with basic GSO, GSO with generalized opposition-based learning (GOBL-GSO), and an existing particle swarm optimization (PSO)-based registration method. The experimental results demonstrate that the proposed method performs significantly better than the others in brain MRI registration.
Controlling camera movement in VR colonography
Paulo, Soraia F.
Medeiros, Daniel
Lopes, Daniel
Jorge, Joaquim
Virtual Reality2022Journal Article, cited 1 times
Website
CT COLONOGRAPHY
Deep Learning
Classification
Computer Aided Diagnosis (CADx)
3D segmentation
Objectives: To investigate the differentiation of premalignant from benign colorectal polyps detected by CT colonography using deep learning.
Methods: In this retrospective analysis of an average-risk colorectal cancer screening sample, polyps of all size categories and morphologies were manually segmented on supine and prone CT colonography images and classified as premalignant (adenoma) or benign (hyperplastic polyp or regular mucosa) according to histopathology. Two deep learning models, SEG and noSEG, were trained on 3D CT colonography image subvolumes to predict polyp class, and model SEG was additionally trained with polyp segmentation masks. Diagnostic performance was validated in an independent external multicentre test sample. Predictions were analysed with the visualisation technique Grad-CAM++.
Results: The training set consisted of 107 colorectal polyps in 63 patients (mean age: 63 ± 8 years, 40 men) comprising 169 polyp segmentations. The external test set included 77 polyps in 59 patients comprising 118 polyp segmentations. Model SEG achieved a ROC-AUC of 0.83 and 80% sensitivity at 69% specificity for differentiating premalignant from benign polyps. Model noSEG yielded a ROC-AUC of 0.75, 80% sensitivity at 44% specificity, and an average Grad-CAM++ heatmap score of ≥ 0.25 in 90% of polyp tissue.
Conclusions: In this proof-of-concept study, deep learning enabled the differentiation of premalignant from benign colorectal polyps detected with CT colonography and the visualisation of image regions important for predictions. The approach did not require polyp segmentation and thus has the potential to facilitate the identification of high-risk polyps as an automated second reader.
Key points: • Non-invasive deep learning image analysis may differentiate premalignant from benign colorectal polyps found in CT colonography scans. • Deep learning autonomously learned to focus on polyp tissue for predictions without the need for prior polyp segmentation by experts. • Deep learning potentially improves the diagnostic accuracy of CT colonography in colorectal cancer screening by allowing for a more precise selection of patients who would benefit from endoscopic polypectomy, especially for patients with polyps of 6-9 mm size.
Keywords: Colonic polyp; Colonography; Computed tomographic; Deep learning; Early detection of cancer.
Generation of hemipelvis surface geometry based on statistical shape modelling and contralateral mirroring
Krishna, Praveen
Robinson, Dale L.
Bucknill, Andrew
Lee, Peter Vee Sin
Biomechanics and Modeling in Mechanobiology2022Journal Article, cited 0 times
Website
CT Lymph Nodes
Model
Personalised fracture plates manufactured using 3D printing offer an improved treatment option for unstable pelvic ring fractures that may not be adequately secured using off-the-shelf components. To design fracture plates that secure the bone fragments in their pre-fracture positions, the fractures must be reduced virtually using medical imaging-based reconstructions, a time-consuming process involving segmentation and repositioning of fragments until surface congruency is achieved. This study compared statistical shape models (SSMs) and contralateral mirroring as automated methods to reconstruct the hemipelvis using varying amounts of bone surface geometry. The training set for the geometries was obtained from pelvis CT scans of 33 females. The root-mean-squared error (RMSE) was quantified across the entire surface of the hemipelvis and within specific regions, and deviations of pelvic landmarks were computed from their positions in the intact hemipelvis. The reconstruction of the entire hemipelvis surface based on contralateral mirroring had an RMSE of 1.21 ± 0.29 mm, whereas for SSMs based on the entire hemipelvis surface, the RMSE was 1.11 ± 0.29 mm, a difference that was not significant (p = 0.32). Moreover, all hemipelvis reconstructions based on the full or partial bone geometries had RMSEs and landmark deviations from contralateral mirroring that were significantly lower (p < 0.05) or statistically equivalent to the SSMs. These results indicate that contralateral mirroring tends to be more accurate than SSMs for reconstructing unilateral pelvic fractures. SSMs may still be a viable method for hemipelvis fracture reconstruction in situations where contralateral geometries are not available, such as bilateral pelvic fractures, or for highly asymmetric pelvic anatomies.
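A landmark-based statistical shape model reduces to PCA over corresponding point coordinates. The sketch below builds the model and fits its modes to partial geometry by least squares, a simplified stand-in for the full reconstruction pipeline; alignment of the training shapes is assumed to have been done beforehand.

```python
import numpy as np

def build_ssm(shapes):
    # shapes: (n_subjects, n_points*3) corresponding landmark coordinates,
    # already rigidly aligned to a common frame.
    mean = shapes.mean(axis=0)
    _, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, vt  # mean shape and principal modes of variation

def fit_partial(mean, modes, idx, partial, n_modes=10):
    # Solve for mode weights using only the observed coordinates (idx),
    # then reconstruct the full shape from those weights.
    A = modes[:n_modes, idx].T
    b, *_ = np.linalg.lstsq(A, partial - mean[idx], rcond=None)
    return mean + b @ modes[:n_modes]
```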
Quantitative Imaging Biomarker Ontology (QIBO) for Knowledge Representation of Biomedical Imaging Biomarkers
Buckler, AndrewJ
Ouellette, M.
Danagoulian, J.
Wernsing, G.
Liu, TiffanyTing
Savig, Erica
Suzek, BarisE
Rubin, DanielL
Paik, David
Journal of Digital Imaging2013Journal Article, cited 17 times
Website
Imaging biomarker
Ontology development
Quantitative imaging
Content-Based Medical Image Retrieval: A Survey of Applications to Multidimensional and Multimodality Data
Kumar, Ashnil
Kim, Jinman
Cai, Weidong
Fulham, Michael
Feng, Dagan
Journal of Digital Imaging2013Journal Article, cited 109 times
Website
Content based image retrieval (CBIR)
Interoperability
Review
Medical imaging is fundamental to modern healthcare, and its widespread use has resulted in the creation of image databases, as well as picture archiving and communication systems. These repositories now contain images from a diverse range of modalities, multidimensional (three-dimensional or time-varying) images, as well as co-aligned multimodality images. These image collections offer the opportunity for evidence-based diagnosis, teaching, and research; for these applications, there is a requirement for appropriate methods to search the collections for images that have characteristics similar to the case(s) of interest. Content-based image retrieval (CBIR) is an image search technique that complements the conventional text-based retrieval of images by using visual features, such as color, texture, and shape, as search criteria. Medical CBIR is an established field of study that is beginning to realize promise when applied to multidimensional and multimodality medical data. In this paper, we present a review of state-of-the-art medical CBIR approaches in five main categories: two-dimensional image retrieval, retrieval of images with three or more dimensions, the use of nonimage data to enhance the retrieval, multimodality image retrieval, and retrieval from diverse datasets. We use these categories as a framework for discussing the state of the art, focusing on the characteristics and modalities of the information used during medical image retrieval.
Visual Interpretation with Three-Dimensional Annotations (VITA): Three-Dimensional Image Interpretation Tool for Radiological Reporting
Roy, Sharmili
Brown, Michael S
Shih, George L
Journal of Digital Imaging2014Journal Article, cited 5 times
Website
Algorithm Development
This paper introduces a software framework called Visual Interpretation with Three-Dimensional Annotations (VITA) that is able to automatically generate three-dimensional (3D) visual summaries based on radiological annotations made during routine exam reporting. VITA summaries are in the form of rotating 3D volumes where radiological annotations are highlighted to place important clinical observations into a 3D context. The rendered volume is produced as a Digital Imaging and Communications in Medicine (DICOM) object and is automatically added to the study for archival in the Picture Archiving and Communication System (PACS). In addition, a video summary (e.g., MPEG4) can be generated for sharing with patients and for situations where DICOM viewers are not readily available to referring physicians. The current version of VITA is compatible with ClearCanvas; however, VITA can work with any PACS workstation that has a structured annotation implementation (e.g., Extendible Markup Language, Health Level 7, Annotation and Image Markup) and is able to seamlessly integrate into the existing reporting workflow. In a survey with referring physicians, the vast majority strongly agreed that 3D visual summaries improve the communication of the radiologists' reports and aid communication with patients.
Test–Retest Reproducibility Analysis of Lung CT Image Features
Balagurunathan, Yoganand
Kumar, Virendra
Gu, Yuhua
Kim, Jongphil
Wang, Hua
Liu, Ying
Goldgof, Dmitry B
Hall, Lawrence O
Korn, Rene
Zhao, Binsheng
Journal of Digital Imaging2014Journal Article, cited 85 times
Website
RIDER Lung CT
Non Small Cell Lung Cancer (NSCLC)
Quantitative size, shape, and texture features derived from computed tomographic (CT) images may be useful as predictive, prognostic, or response biomarkers in non-small cell lung cancer (NSCLC). However, to be useful, such features must be reproducible, non-redundant, and have a large dynamic range. We developed a set of quantitative three-dimensional (3D) features to describe segmented tumors and evaluated their reproducibility to select features with high potential for prognostic utility. Thirty-two patients with NSCLC were subjected to unenhanced thoracic CT scans acquired within 15 min of each other under an approved protocol. Primary lung cancer lesions were segmented using semi-automatic 3D region growing algorithms. Following segmentation, 219 quantitative 3D features were extracted from each lesion, corresponding to size, shape, and texture, including features in transformed spaces (laws, wavelets). The most informative features were selected using the concordance correlation coefficient across test–retest, the biological range, and a feature independence measure. There were 66 (30.14%) features with a concordance correlation coefficient ≥ 0.90 across test–retest and an acceptable dynamic range. Of these, 42 features were non-redundant after grouping features with R2Bet ≥ 0.95. These reproducible features were found to be predictive of radiological prognosis. The area under the curve (AUC) was 91% for a size-based feature and 92% for the texture features (runlength, laws). We tested the ability of image features to predict a radiological prognostic score on an independent NSCLC sample (39 adenocarcinomas); the AUC for texture features (runlength emphasis, energy) was 0.84, while that for the conventional size-based features (volume, longest diameter) was 0.80. Test–retest and correlation analyses have identified non-redundant CT image features with both high intra-patient reproducibility and wide inter-patient biological range, making the case that quantitative image features are informative and prognostic biomarkers for NSCLC.
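The test–retest screen relies on Lin's concordance correlation coefficient, CCC = 2·cov(x, y) / (σx² + σy² + (μx − μy)²), which penalizes both poor correlation and systematic offset between the two scans. A minimal NumPy implementation:

```python
import numpy as np

def concordance_cc(x, y):
    # x, y: a feature's values on the test and retest scans, respectively.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Features with concordance_cc(test, retest) >= 0.90 were retained.
```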
A Comparison of Lung Nodule Segmentation Algorithms: Methods and Results from a Multi-institutional Study
Kalpathy-Cramer, Jayashree
Zhao, Binsheng
Goldgof, Dmitry
Gu, Yuhua
Wang, Xingwei
Yang, Hao
Tan, Yongqiang
Gillies, Robert
Napel, Sandy
Journal of Digital Imaging2016Journal Article, cited 18 times
Website
LUNG
Computed Tomography (CT)
Tumor volume estimation, as well as accurate and reproducible borders segmentation in medical images, are important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05), underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies.
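The two headline measurements, spatial overlap and volume-estimate bias, reduce to a few lines of NumPy; this sketch assumes boolean segmentation masks and a known per-voxel volume.

```python
import numpy as np

def dice(a, b):
    # Dice overlap between two boolean masks of the same shape.
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def volume_bias_percent(mask, voxel_volume_ml, true_volume_ml):
    # Percent bias of the measured volume against a known phantom volume.
    measured = mask.sum() * voxel_volume_ml
    return 100 * (measured - true_volume_ml) / true_volume_ml
```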
Cloud-based NoSQL open database of pulmonary nodules for computer-aided lung cancer diagnosis and reproducible research
Junior, José Raniery Ferreira
Oliveira, Marcelo Costa
de Azevedo-Marques, Paulo Mazzoncini
Journal of Digital Imaging2016Journal Article, cited 14 times
Website
Journal of Digital Imaging2016Journal Article, cited 3 times
Website
Biometric Identification
Confidentiality
Face Recognition
Privacy
Computed tomography (CT)
The recognizability of facial images extracted from publicly available medical scans raises patient privacy concerns. This study examined how accurately facial images extracted from computed tomography (CT) scans are objectively matched with corresponding photographs of the scanned individuals. The test subjects were 128 adult Americans ranging in age from 18 to 60 years, representing both sexes and three self-identified population (ancestral descent) groups (African, European, and Hispanic). Using facial recognition software, the 2D images of the extracted facial models were compared for matches against five differently sized photo galleries. Depending on the scanning protocol and gallery size, in 6-61% of cases a correct life photo match for a CT-derived facial image was the top-ranked image in the generated candidate lists, even when blind searching in excess of 100,000 images. In 31-91% of cases, a correct match was located within the top 50 images. Few significant differences (p > 0.05) in match rates were observed between the sexes or across the three age cohorts. Highly significant differences (p < 0.01) were, however, observed across the three ancestral cohorts and between the two CT scanning protocols. Results suggest that the probability of a match between a facial image extracted from a medical scan and a photograph of the individual is moderately high. The facial image data inherent in commonly employed medical imaging modalities may need to be considered a potentially identifiable form of "comparable" facial imagery and protected as such under patient privacy legislation.
Creation and curation of the society of imaging informatics in Medicine Hackathon Dataset
Kohli, Marc
Morrison, James J
Wawira, Judy
Morgan, Matthew B
Hostetter, Jason
Genereaux, Brad
Hussain, Mohannad
Langer, Steve G
Journal of Digital Imaging2018Journal Article, cited 4 times
Website
SIIM hackathon dataset
FHIR
HL7
DICOM
DICOMweb
Radiology and Enterprise Medical Imaging Extensions (REMIX)
Erdal, Barbaros S
Prevedello, Luciano M
Qian, Songyue
Demirer, Mutlu
Little, Kevin
Ryu, John
O’Donnell, Thomas
White, Richard D
Journal of Digital Imaging2017Journal Article, cited 1 times
Website
Algorithm Development
QIN
enterprise medical imaging
Image reconstruction
quantitative imaging
business intelligence
artificial intelligence
Characterization of Pulmonary Nodules Based on Features of Margin Sharpness and Texture
Ferreira, José Raniery
Oliveira, Marcelo Costa
de Azevedo-Marques, Paulo Mazzoncini
Journal of Digital Imaging2017Journal Article, cited 1 times
Website
LIDC-IDRI
lung cancer
pulmonary nodule
image classification
pattern recognition
An Efficient Pipeline for Abdomen Segmentation in CT Images
Koyuncu, H.
Ceylan, R.
Sivri, M.
Erdogan, H.
J Digit Imaging2018Journal Article, cited 4 times
Website
TCGA-LUAD
Segmentation
Classification
Computed tomography (CT) scans usually include some disadvantages due to the nature of the imaging procedure, and these handicaps prevent accurate abdomen segmentation. Discontinuous abdomen edges, the bed section of the CT, patient information, closeness between the edges of the abdomen and the CT, poor contrast, and a narrow histogram can be regarded as the most important handicaps that occur in abdominal CT scans. Currently, one or more handicaps can arise and prevent technicians from obtaining abdomen images through simple segmentation techniques. In other words, CT scans can include the bed section of the CT, a patient's diagnostic information, low-quality abdomen edges, low-level contrast, and a narrow histogram, all in one scan. These phenomena constitute a challenge, and an efficient pipeline that is unaffected by the handicaps is required. In addition, analyses such as segmentation, feature selection, and classification are meaningful for a real-time diagnosis system in cases where the abdomen section is used directly at a specific size. A statistical pipeline is designed in this study that is unaffected by the handicaps mentioned above. Intensity-based approaches, morphological processes, and histogram-based procedures are utilized to design an efficient structure. Performance evaluation is realized in experiments on 58 CT images (16 training, 16 test, and 26 validation) that include the abdomen and one or more disadvantage(s). The first part of the data (16 training images) is used to detect the pipeline's optimum parameters, while the second and third parts are utilized to evaluate and to confirm the segmentation performance. The segmentation results are presented as the means of six performance metrics. Thus, the proposed method achieves remarkable average rates for training/test/validation of 98.95/99.36/99.57% (jaccard), 99.47/99.67/99.79% (dice), 100/99.91/99.91% (sensitivity), 98.47/99.23/99.85% (specificity), 99.38/99.63/99.87% (classification accuracy), and 98.98/99.45/99.66% (precision). In summary, a statistical pipeline performing the task of abdomen segmentation is achieved that is not affected by the disadvantages, and the most detailed abdomen segmentation study is performed for use before organ and tumor segmentation, feature extraction, and classification.
Application of Super-Resolution Convolutional Neural Network for Enhancing Image Resolution in Chest CT
Umehara, Kensuke
Ota, Junko
Ishida, Takayuki
Journal of Digital Imaging2017Journal Article, cited 182 times
Website
NSCLC-Radiomics-Genomics
Analysis of Variance
Non-Small-Cell Lung
In this study, the super-resolution convolutional neural network (SRCNN) scheme, which is the emerging deep-learning-based super-resolution method for enhancing image resolution in chest CT images, was applied and evaluated using the post-processing approach. For evaluation, 89 chest CT cases were sampled from The Cancer Imaging Archive. The 89 CT cases were divided randomly into 45 training cases and 44 external test cases. The SRCNN was trained using the training dataset. With the trained SRCNN, a high-resolution image was reconstructed from a low-resolution image, which was down-sampled from an original test image. For quantitative evaluation, two image quality metrics were measured and compared to those of the conventional linear interpolation methods. The image restoration quality of the SRCNN scheme was significantly higher than that of the linear interpolation methods (p < 0.001 or p < 0.05). The high-resolution image reconstructed by the SRCNN scheme was highly restored and comparable to the original reference image, in particular, for a ×2 magnification. These results indicate that the SRCNN scheme significantly outperforms the linear interpolation methods for enhancing image resolution in chest CT images. The results also suggest that SRCNN may become a potential solution for generating high-resolution CT images from standard CT images.
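The SRCNN referenced here is the canonical three-layer network (patch extraction, non-linear mapping, reconstruction) with 9-1-5 filter sizes; the Keras sketch below follows that design and would be trained on pairs of bicubic-upsampled low-resolution slices and the original CT slices. The optimizer and loss are common defaults, not necessarily this study's settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_srcnn():
    inp = layers.Input(shape=(None, None, 1))  # single-channel CT slice
    x = layers.Conv2D(64, 9, padding="same", activation="relu")(inp)
    x = layers.Conv2D(32, 1, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 5, padding="same")(x)  # reconstructed slice
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# model.fit(low_res_upsampled, originals, ...) learns the residual mapping.
```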
Collaborative and Reproducible Research: Goals, Challenges, and Strategies
Langer, S. G.
Shih, G.
Nagy, P.
Landman, B. A.
J Digit Imaging2018Journal Article, cited 1 times
Website
TCIA General
imaging biomarker
Genomics
Electronic Medical Record (EMR)
Computer analytics
Computers in medicine
Machine learning
Combining imaging biomarkers with genomic and clinical phenotype data is the foundation of precision medicine research efforts. Yet, biomedical imaging research requires unique infrastructure compared with principally text-driven clinical electronic medical record (EMR) data. The issues are related to the binary nature of the file format and transport mechanism for medical images as well as the post-processing image segmentation and registration needed to combine anatomical and physiological imaging data sources. The SIIM Machine Learning Committee was formed to analyze the gaps and challenges surrounding research into machine learning in medical imaging and to find ways to mitigate these issues. At the 2017 annual meeting, a whiteboard session was held to rank the most pressing issues and develop strategies to meet them. The results, and further reflections, are summarized in this paper.
Restoration of Full Data from Sparse Data in Low-Dose Chest Digital Tomosynthesis Using Deep Convolutional Neural Networks
Lee, Donghoon
Kim, Hee-Joung
Journal of Digital Imaging2018Journal Article, cited 0 times
Website
SPIE-AAPM Lung CT Challenge
model-based iterative reconstruction (MBIR)
structure similarity index measure (SSIM)
Content-Based Image Retrieval System for Pulmonary Nodules Using Optimal Feature Sets and Class Membership-Based Retrieval
Mehre, Shrikant A
Dhara, Ashis Kumar
Garg, Mandeep
Kalra, Naveen
Khandelwal, Niranjan
Mukhopadhyay, Sudipta
Journal of Digital Imaging2018Journal Article, cited 0 times
Website
LIDC-IDRI
content-based image retrieval (CBIR)
Bone-Cancer Assessment and Destruction Pattern Analysis in Long-Bone X-ray Image
Bandyopadhyay, Oishila
Biswas, Arindam
Bhattacharya, Bhargab B
J Digit Imaging2018Journal Article, cited 0 times
Website
Algorithm Development
Support Vector Machine (SVM)
Bone cancer originates in bone and rapidly spreads to the rest of the body. A quick, preliminary diagnosis of bone cancer begins with the analysis of a bone X-ray or MRI image. Compared to MRI, an X-ray image provides a low-cost tool for the diagnosis and visualization of bone cancer. In this paper, a novel technique for the assessment of cancer stage and grade in long bones based on X-ray image analysis is proposed. Cancer-affected bone images usually appear with a variation in bone texture in the affected region. A fusion of different methodologies is used for the purpose of our analysis. In the proposed approach, we extract certain features from bone X-ray images and use a support vector machine (SVM) to discriminate healthy and cancerous bones. A technique based on digital geometry is deployed for localizing cancer-affected regions. Characterization of the present stage and grade of the disease and identification of the underlying bone-destruction pattern are performed using a decision tree classifier. Furthermore, the method leads to the development of a computer-aided diagnostic tool that can readily be used by paramedics and doctors. Experimental results on a number of test cases reveal satisfactory diagnostic inferences when compared with ground truth known from clinical findings.
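To make the feature-plus-SVM step concrete, here is a hedged sketch: texture features extracted from X-ray ROIs feed a standard scaler and an SVM classifier. The feature count, toy data, and labels are illustrative assumptions, not the authors' actual features.

```python
# Sketch of the SVM discrimination step on toy texture features;
# 0 = healthy, 1 = cancerous. Real features would come from the X-rays.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 12))     # 12 texture features per ROI (toy data)
y = rng.integers(0, 2, size=120)   # toy labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```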
Automatic Labeling of Special Diagnostic Mammography Views from Images and DICOM Headers
Lituiev, D. S.
Trivedi, H.
Panahiazar, M.
Norgeot, B.
Seo, Y.
Franc, B.
Harnish, R.
Kawczynski, M.
Hadley, D.
J Digit Imaging2019Journal Article, cited 0 times
CBIS-DDSM
BREAST
Computer Aided Diagnosis (CADx)
Automation
Breast Neoplasms/*diagnostic imaging
Datasets as Topic
Female
Humans
*Machine Learning
Mammography/*classification/*methods
Radiology Information Systems
Sensitivity and Specificity
Convolutional Neural Network (CNN)
DICOM
Machine learning
Mammography
Applying state-of-the-art machine learning techniques to medical images requires a thorough selection and normalization of input data. One such step in digital mammography screening for breast cancer is the labeling and removal of special diagnostic views, in which diagnostic tools or magnification are applied to assist in the assessment of suspicious initial findings. As a common task in medical informatics is the prediction of disease and its stage, these special diagnostic views, which are enriched only among the cohort of diseased cases, will bias machine learning disease predictions. To automate this process, we develop a machine learning pipeline that utilizes both DICOM headers and images to predict such views automatically, allowing for their removal and the generation of unbiased datasets. We achieve an AUC of 99.72% in predicting special mammogram views when combining both types of models. Finally, we apply these models to clean up a dataset of about 772,000 images with an expected sensitivity of 99.0%. The pipeline presented in this paper can be applied to other datasets to obtain high-quality image sets suitable to train algorithms for disease detection.
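An illustrative sketch of the header side of such a pipeline, not the authors' code: pydicom reads a mammogram's header and derives simple features (view position, magnification or spot-compression markers in the series description) that a downstream classifier could combine with image features. The tag choices and keyword heuristics are assumptions.

```python
# Hedged sketch: derive header features that could flag special
# diagnostic views; the keywords checked here are assumptions.
import pydicom

def header_features(path):
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    desc = str(ds.get("SeriesDescription", "")).upper()
    return {
        "view_position": str(ds.get("ViewPosition", "")),  # e.g. CC, MLO
        "is_magnified": int("MAG" in desc),
        "is_spot": int("SPOT" in desc),
    }
```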
Levels Propagation Approach to Image Segmentation: Application to Breast MR Images
Bouchebbah, Fatah
Slimani, Hachem
Journal of Digital Imaging2019Journal Article, cited 0 times
RIDER Breast MRI
Breast
MRI
Accurate segmentation of a breast tumor region is fundamental for treatment. Magnetic resonance imaging (MRI) is a widely used diagnostic tool. In this paper, a new semi-automatic segmentation approach for MRI breast tumor segmentation called the Levels Propagation Approach (LPA) is introduced. The introduced segmentation approach takes inspiration from tumor propagation and relies on a finite set of nested and non-overlapping levels. LPA has several features: it is highly suitable for parallelization and offers a simple and dynamic way to automate threshold selection. Furthermore, it allows stopping the segmentation at any desired limit. In particular, it allows avoiding the breast skin-line region, which is known as a significant issue that reduces the precision and effectiveness of breast tumor segmentation. The proposed approach has been tested on two clinical datasets, namely the RIDER breast tumor dataset and the CMH-LIMED breast tumor dataset. The experimental evaluations have shown that LPA produces results competitive with some state-of-the-art methods and has acceptable computational complexity.
Advancing Semantic Interoperability of Image Annotations: Automated Conversion of Non-standard Image Annotations in a Commercial PACS to the Annotation and Image Markup
Swinburne, Nathaniel C
Mendelson, David
Rubin, Daniel L
J Digit Imaging2019Journal Article, cited 0 times
Website
Algorithm Development
DICOM
Interoperability
Sharing radiologic image annotations among multiple institutions is important in many clinical scenarios; however, interoperability is prevented because different vendors’ PACS store annotations in non-standardized formats that lack semantic interoperability. Our goal was to develop software to automate the conversion of image annotations in a commercial PACS to the Annotation and Image Markup (AIM) standardized format and demonstrate the utility of this conversion for automated matching of lesion measurements across time points for cancer lesion tracking. We created a software module in Java to parse the DICOM presentation state (DICOM-PS) objects (that contain the image annotations) for imaging studies exported from a commercial PACS (GE Centricity v3.x). Our software identifies line annotations encoded within the DICOM-PS objects and exports the annotations in the AIM format. A separate Python script processes the AIM annotation files to match line measurements (on lesions) across time points by tracking the 3D coordinates of annotated lesions. To validate the interoperability of our approach, we exported annotations from Centricity PACS into ePAD (http://epad.stanford.edu) (Rubin et al., Transl Oncol 7(1):23–35, 2014), a freely available AIM-compliant workstation, and the lesion measurement annotations were correctly linked by ePAD across sequential imaging studies. As quantitative imaging becomes more prevalent in radiology, interoperability of image annotations gains increasing importance. Our work demonstrates that image annotations in a vendor system lacking standard semantics can be automatically converted to a standardized metadata format such as AIM, enabling interoperability and potentially facilitating large-scale analysis of image annotations and the generation of high-quality labels for deep learning initiatives. This effort could be extended for use with other vendors’ PACS.
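For readers unfamiliar with DICOM presentation states, the sketch below shows how line-type annotations can be read with pydicom; the sequence and attribute names are standard DICOM, but the AIM serialization step and any vendor-specific encoding details are omitted.

```python
# Minimal sketch: collect POLYLINE graphic annotations from a DICOM
# presentation state object. AIM export is not shown.
import pydicom

def extract_polylines(ps_path):
    ps = pydicom.dcmread(ps_path)
    lines = []
    for ann in ps.get("GraphicAnnotationSequence", []):
        for obj in ann.get("GraphicObjectSequence", []):
            if obj.GraphicType == "POLYLINE":
                pts = obj.GraphicData  # flat list: x1, y1, x2, y2, ...
                lines.append(list(zip(pts[0::2], pts[1::2])))
    return lines
```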
Deep Learning for Low-Dose CT Denoising Using Perceptual Loss and Edge Detection Layer
Gholizadeh-Ansari, M.
Alirezaie, J.
Babyn, P.
J Digit Imaging2019Journal Article, cited 1 times
Website
TCGA-BRCA
Low-dose CT denoising is a challenging task that has been studied by many researchers. Some studies have used deep neural networks to improve the quality of low-dose CT images and achieved fruitful results. In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolution, helping to capture more contextual information in fewer layers. Also, we have employed residual learning by creating shortcut connections to transmit image information from the early layers to later ones. To further improve the performance of the network, we have introduced a non-trainable edge detection layer that extracts edges in horizontal, vertical, and diagonal directions. Finally, we demonstrate that optimizing the network by a combination of mean-square error loss and perceptual loss preserves many structural details in the CT image. This objective function does not suffer from the over-smoothing and blurring effects caused by per-pixel loss, nor from the grid-like artifacts resulting from perceptual loss. The experiments show that each modification to the network improves the outcome while changing the complexity of the network only minimally.
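A non-trainable edge-detection layer of the kind described can be built from fixed convolution kernels; a hedged PyTorch sketch with Sobel-style kernels for the horizontal, vertical, and two diagonal directions follows (the exact kernels and their wiring into the full network are this paper's design and are not reproduced here).

```python
# Frozen edge-extraction layer: four fixed 3x3 kernels, no gradients.
import torch
import torch.nn as nn

class EdgeLayer(nn.Module):
    def __init__(self):
        super().__init__()
        k = torch.tensor([
            [[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]],   # horizontal
            [[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],   # vertical
            [[0., 1., 2.], [-1., 0., 1.], [-2., -1., 0.]],   # diagonal /
            [[-2., -1., 0.], [-1., 0., 1.], [0., 1., 2.]],   # diagonal \
        ]).unsqueeze(1)                                       # (4, 1, 3, 3)
        self.conv = nn.Conv2d(1, 4, kernel_size=3, padding=1, bias=False)
        self.conv.weight = nn.Parameter(k, requires_grad=False)  # frozen

    def forward(self, x):
        return self.conv(x)

edges = EdgeLayer()(torch.randn(1, 1, 64, 64))
```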
A Block Adaptive Near-Lossless Compression Algorithm for Medical Image Sequences and Diagnostic Quality Assessment
Sharma, Urvashi
Sood, Meenakshi
Puthooran, Emjee
J Digit Imaging2019Journal Article, cited 0 times
LungCT-Diagnosis
RIDER NEURO MRI
RIDER Breast MRI
JPEG2000
DICOM
The near-lossless compression technique has a better compression ratio than lossless compression while maintaining a maximum error limit for each pixel. It takes advantage of both lossy and lossless compression methods, providing a high compression ratio that can be used for medical images while preserving diagnostic information. The proposed algorithm uses a resolution- and modality-independent threshold-based predictor, an optimal quantization (q) level, and adaptive block size encoding. The proposed method employs a resolution independent gradient edge detector (RIGED) for removing inter-pixel redundancy, and block adaptive arithmetic encoding (BAAE) is used after quantization to remove coding redundancy. A quantizer with an optimum q level is used to implement the proposed method for high compression efficiency and better quality of the recovered images. The proposed method is implemented on volumetric 8-bit and 16-bit standard medical images and also validated on real-time 16-bit-depth images collected from government hospitals. The results show the proposed algorithm yields high coding performance, with a BPP of 1.37 and a high peak signal-to-noise ratio (PSNR) of 51.35 dB for the 8-bit-depth image dataset compared with other near-lossless compression methods. Average BPP values of 3.411 and 2.609 are obtained by the proposed technique for the 16-bit standard medical image dataset and the real-time medical dataset, respectively, with maintained image quality. The improved near-lossless predictive coding technique achieves a high compression ratio without losing diagnostic information from the image.
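The error-bound idea behind near-lossless coding can be illustrated in a few lines: quantizing each integer prediction residual to the nearest multiple of (2q + 1) guarantees a per-pixel reconstruction error of at most q. This toy sketch shows the principle only, not the paper's RIGED predictor or BAAE coder.

```python
# Near-lossless residual quantization with per-pixel error bound q.
import numpy as np

def quantize(residual, q):
    return np.round(residual / (2 * q + 1)).astype(np.int64)

def dequantize(index, q):
    return index * (2 * q + 1)

r = np.array([-7, -3, 0, 2, 5, 11])   # integer prediction residuals
q = 2
rec = dequantize(quantize(r, q), q)
assert np.all(np.abs(rec - r) <= q)   # error never exceeds q
```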
A Comparison of the Efficiency of Using a Deep CNN Approach with Other Common Regression Methods for the Prediction of EGFR Expression in Glioblastoma Patients
Hedyehzadeh, Mohammadreza
Maghooli, Keivan
MomenGharibvand, Mohammad
Pistorius, Stephen
J Digit Imaging2020Journal Article, cited 0 times
Website
TCGA-GBM
Radiogenomics
Glioblastoma
Deep convolution neural network
To estimate the epidermal growth factor receptor (EGFR) expression level in glioblastoma (GBM) patients using radiogenomic analysis of magnetic resonance images (MRI). A comparative study using a deep convolutional neural network (CNN)-based regression, a deep neural network, least absolute shrinkage and selection operator (LASSO) regression, elastic net regression, and linear regression with no regularization was carried out to estimate EGFR expression of 166 GBM patients. Except for the deep CNN case, overfitting was prevented by using feature selection, and loss values for each method were compared. The loss values in the training phase for the deep CNN, deep neural network, elastic net, LASSO, and linear regression with no regularization were 2.90, 8.69, 7.13, 14.63, and 21.76, respectively, while in the test phase, the loss values were 5.94, 10.28, 13.61, 17.32, and 24.19, respectively. These results illustrate that the efficiency of the deep CNN approach is better than that of the other methods, including LASSO regression, which is known for its advantage in high-dimensional cases. A comparison between the deep CNN, a deep neural network, and three other common regression methods was carried out, and the efficiency of the deep CNN approach, in comparison with the other regression models, was demonstrated.
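The non-CNN baselines named here are straightforward to reproduce in outline; the sketch below compares LASSO, elastic net, and unregularized linear regression on toy high-dimensional data. The real radiomic features, feature selection step, and CNN branch are not reproduced, and all shapes are assumptions.

```python
# Toy comparison of the study's regression baselines with scikit-learn.
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(166, 500))              # 166 patients, 500 features
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=166)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for name, model in [("lasso", Lasso(alpha=0.1)),
                    ("elastic net", ElasticNet(alpha=0.1, l1_ratio=0.5)),
                    ("linear", LinearRegression())]:
    model.fit(Xtr, ytr)
    print(name, mean_squared_error(yte, model.predict(Xte)))
```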
An Embedded Multi-branch 3D Convolution Neural Network for False Positive Reduction in Lung Nodule Detection
Zuo, Wangxia
Zhou, Fuqiang
He, Yuzhu
Journal of Digital Imaging2020Journal Article, cited 0 times
LIDC-IDRI
Lung
CT
Numerous lung nodule candidates can be produced by an automated lung nodule detection system. Classifying these candidates to reduce false positives is an important step in the detection process. The objective of this paper is to predict real nodules from a large number of pulmonary nodule candidates. Facing the challenge of this classification task, we propose a novel 3D convolutional neural network (CNN) to reduce false positives in lung nodule detection. The novel 3D CNN includes multiple branches embedded in its structure. Each branch processes a feature map from a layer at a different depth. All of these branches are cascaded at their ends; thus, features from layers of different depths are combined to predict the categories of candidates. The proposed method obtains a competitive score in lung nodule candidate classification on the LUNA16 dataset, with an accuracy of 0.9783, a sensitivity of 0.8771, a precision of 0.9426, and a specificity of 0.9925. Moreover, a good performance on the competition performance metric (CPM) is also obtained, with a score of 0.830. As a 3D CNN, the proposed model can learn complete three-dimensional discriminative information about nodules and non-nodules, avoiding misidentification problems caused by the lack of spatial correlation information in traditional methods or 2D networks. As an embedded multi-branch structure, the model is also more effective in recognizing nodules of various shapes and sizes. As a result, the proposed method achieves a competitive score on false positive reduction in lung nodule detection and can be used as a reference for classifying nodule candidates.
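A compact sketch of the embedded multi-branch idea follows: feature maps taken at two depths of a 3D CNN are globally pooled and concatenated before the final nodule/non-nodule prediction. Channel sizes, depth count, and patch size are assumptions, not the paper's architecture.

```python
# Hedged multi-branch 3D CNN sketch for candidate classification.
import torch
import torch.nn as nn

class MultiBranch3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1),
                                    nn.ReLU(), nn.MaxPool3d(2))
        self.block2 = nn.Sequential(nn.Conv3d(16, 32, 3, padding=1),
                                    nn.ReLU(), nn.MaxPool3d(2))
        self.pool = nn.AdaptiveAvgPool3d(1)   # one branch per depth
        self.head = nn.Linear(16 + 32, 2)     # branches cascaded at the end

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        b = torch.cat([self.pool(f1).flatten(1),
                       self.pool(f2).flatten(1)], dim=1)
        return self.head(b)

logits = MultiBranch3D()(torch.randn(2, 1, 32, 32, 32))  # candidate patches
```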
Prediction of Non-small Cell Lung Cancer Histology by a Deep Ensemble of Convolutional and Bidirectional Recurrent Neural Network
Moitra, Dipanjan
Mandal, Rakesh Kumar
Journal of Digital Imaging2020Journal Article, cited 0 times
NSCLC Radiogenomics
Lung
Deep convolutional neural network (DCNN)
deep learning
Improving the Subtype Classification of Non-small Cell Lung Cancer by Elastic Deformation Based Machine Learning
Gao, Yang
Song, Fan
Zhang, Peng
Liu, Jian
Cui, Jingjing
Ma, Yingying
Zhang, Guanglei
Luo, Jianwen
J Digit Imaging2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
Machine learning
Non-small cell lung cancer (NSCLC)
Radiomics
Subtype classification
Non-invasive image-based machine learning models have been used to classify subtypes of non-small cell lung cancer (NSCLC). However, classification performance is limited by dataset size, because insufficient data cannot fully represent the characteristics of the tumor lesions. In this work, a data augmentation method named elastic deformation is proposed to artificially enlarge an image dataset of NSCLC patients with two subtypes (squamous cell carcinoma and large cell carcinoma) comprising 3158 images. Elastic deformation effectively expanded the dataset by generating new images in which tumor lesions undergo elastic shape transformation. To evaluate the proposed method, two classification models were trained on the original and augmented datasets, respectively. Using the augmented dataset for training significantly increased classification metrics, including the area under the curve (AUC) of receiver operating characteristic (ROC) curves, accuracy, sensitivity, specificity, and F1-score, thus improving NSCLC subtype classification performance. These results suggest that elastic deformation can be an effective data augmentation method for NSCLC tumor lesion images, and building classification models with the help of elastic deformation has the potential to serve clinical lung cancer diagnosis and treatment design.
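Elastic deformation itself is a standard augmentation; a common Simard-style recipe smooths random displacement fields with a Gaussian and warps the image grid accordingly. The sketch below uses SciPy, with illustrative alpha/sigma values that are assumptions rather than the paper's settings.

```python
# Elastic-deformation augmentation sketch (2D, bilinear interpolation).
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=34.0, sigma=4.0, seed=None):
    rng = np.random.default_rng(seed)
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    yy, xx = np.meshgrid(np.arange(image.shape[0]),
                         np.arange(image.shape[1]), indexing="ij")
    coords = np.array([yy + dy, xx + dx])
    return map_coordinates(image, coords, order=1, mode="reflect")

augmented = elastic_deform(np.random.rand(64, 64), seed=0)
```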
A New General Maximum Intensity Projection Technology via the Hybrid of U-Net and Radial Basis Function Neural Network
Chao, Zhen
Xu, Wenting
Journal of Digital Imaging2021Journal Article, cited 0 times
Website
LIDC-IDRI
U-Net
Robustifying Deep Networks for Medical Image Segmentation
Liu, Zheng
Zhang, Jinnian
Jog, Varun
Loh, Po-Ling
McMillan, Alan B
J Digit Imaging2021Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Deep Learning
Magnetic Resonance Imaging (MRI)
Segmentation
The purpose of this study is to investigate the robustness of a commonly used convolutional neural network for image segmentation with respect to nearly unnoticeable adversarial perturbations, and suggest new methods to make these networks more robust to such perturbations. In this retrospective study, the accuracy of brain tumor segmentation was studied in subjects with low- and high-grade gliomas. Two representative UNets were implemented to segment four different MR series (T1-weighted, post-contrast T1-weighted, T2-weighted, and T2-weighted FLAIR) into four pixelwise labels (Gd-enhancing tumor, peritumoral edema, necrotic and non-enhancing tumor, and background). We developed attack strategies based on the fast gradient sign method (FGSM), iterative FGSM (i-FGSM), and targeted iterative FGSM (ti-FGSM) to produce effective but imperceptible attacks. Additionally, we explored the effectiveness of distillation and adversarial training via data augmentation to counteract these adversarial attacks. Robustness was measured by comparing the Dice coefficients for the attacks using Wilcoxon signed-rank tests. The experimental results show that attacks based on FGSM, i-FGSM, and ti-FGSM were effective in reducing the quality of image segmentation by up to 65% in the Dice coefficient. For attack defenses, distillation performed significantly better than adversarial training approaches. However, all defense approaches performed worse compared to unperturbed test images. Therefore, segmentation networks can be adversely affected by targeted attacks that introduce visually minor (and potentially undetectable) modifications to existing images. With an increasing interest in applying deep learning techniques to medical imaging data, it is important to quantify the ramifications of adversarial inputs (either intentional or unintentional).
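The single-step attack at the core of these experiments is easy to state; a minimal untargeted FGSM sketch in PyTorch follows, where `net` and the pixelwise loss are placeholders for the paper's U-Nets and segmentation loss, and epsilon controls the (visually minor) perturbation. i-FGSM and ti-FGSM iterate variants of this step.

```python
# Minimal FGSM sketch: one signed-gradient step that increases the loss.
import torch

def fgsm(net, x, target, loss_fn, epsilon=0.01):
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(net(x), target)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()  # perturbed input
```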
3D Isotropic Super-resolution Prostate MRI Using Generative Adversarial Networks and Unpaired Multiplane Slices
Liu, Y.
Liu, Y.
Vanguri, R.
Litwiller, D.
Liu, M.
Hsu, H. Y.
Ha, R.
Shaish, H.
Jambawalikar, S.
J Digit Imaging2021Journal Article, cited 0 times
Website
PROSTATEx
Magnetic Resonance Imaging (MRI)
Generative Adversarial Network (GAN)
Image Enhancement/methods
PROSTATE
Super-resolution
We developed a deep learning-based super-resolution model for prostate MRI. 2D T2-weighted turbo spin echo (T2w-TSE) images are the core anatomical sequences in a multiparametric MRI (mpMRI) protocol. These images have coarse through-plane resolution, are non-isotropic, and have long acquisition times (approximately 10-15 min). The model we developed aims to preserve high-frequency details that are normally lost after 3D reconstruction. We propose a novel framework for generating isotropic volumes using generative adversarial networks (GAN) from anisotropic 2D T2w-TSE and single-shot fast spin echo (ssFSE) images. The CycleGAN model used in this study allows unpaired dataset mapping to reconstruct super-resolution (SR) volumes. Fivefold cross-validation was performed. The improvements from patch-to-volume reconstruction (PVR) to SR are 80.17%, 63.77%, and 186% for perceptual index (PI), RMSE, and SSIM, respectively; the improvements from slice-to-volume reconstruction (SVR) to SR are 72.41%, 17.44%, and 7.5% for PI, RMSE, and SSIM, respectively. Five ssFSE cases were used to test generalizability; the perceptual quality of SR images surpasses that of the in-plane ssFSE images by 37.5%, with a 3.26% improvement in SSIM and a 7.92% higher RMSE. SR images were quantitatively assessed with radiologist Likert scores. Our isotropic SR volumes are able to reproduce high-frequency detail, maintaining image quality comparable to in-plane TSE images in all planes without sacrificing perceptual accuracy. The SR reconstruction networks were also successfully applied to the ssFSE images, demonstrating that high-quality isotropic volumes achieved from ultra-fast acquisition are feasible.
Multi-scale Selection and Multi-channel Fusion Model for Pancreas Segmentation Using Adversarial Deep Convolutional Nets
Li, M.
Lian, F.
Guo, S.
J Digit Imaging2021Journal Article, cited 0 times
Website
Pancreas-CT
Segmentation
Deep convolutional neural network (DCNN)
Organ segmentation from existing imaging is vital to medical image analysis and disease diagnosis. However, the boundary shapes and area sizes of the target region tend to be diverse and flexible, and the frequent application of pooling operations in traditional segmentation networks results in the loss of spatial information that is advantageous to segmentation. All these issues pose challenges and difficulties for accurate organ segmentation from medical imaging, particularly for organs with small volumes and variable shapes such as the pancreas. To offset the aforesaid information loss, we propose a deep convolutional neural network (DCNN) named the multi-scale selection and multi-channel fusion segmentation model (MSC-DUnet) for pancreas segmentation. The proposed model contains three stages to collect detailed cues for accurate segmentation: (1) increasing the consistency between the distributions of the output probability maps from the segmentor and the original samples by involving an adversarial mechanism that can capture spatial distributions, (2) gathering global spatial features from several receptive fields via multi-scale field selection (MSFS), and (3) integrating multi-level features located at varying network positions through the multi-channel fusion module (MCFM). Experimental results on the NIH Pancreas-CT dataset show that our proposed MSC-DUnet outperforms the baseline network, achieving an improvement of 5.1% in the Dice similarity coefficient (DSC), which indicates that MSC-DUnet has great potential for pancreas segmentation.
Deep Learning-Based Time-to-Death Prediction Model for COVID-19 Patients Using Clinical Data and Chest Radiographs
Matsumoto, T.
Walston, S. L.
Walston, M.
Kabata, D.
Miki, Y.
Shiba, M.
Ueda, D.
J Digit Imaging2022Journal Article, cited 0 times
Website
COVID-19-NY-SBU
Covid-19
Chest radiography
Deep learning
Prognosis
Accurate estimation of mortality and time to death at admission for COVID-19 patients is important, and several deep learning models have been created for this task. However, there are currently no prognostic models that use end-to-end deep learning to predict time to event for admitted COVID-19 patients using chest radiographs and clinical data. We retrospectively implemented a new artificial intelligence model combining DeepSurv (a multilayer-perceptron implementation of the Cox proportional hazards model) and a convolutional neural network (CNN) using 1356 COVID-19 inpatients. For comparison, we also prepared DeepSurv with clinical data only, DeepSurv with images only (CNNSurv), and Cox proportional hazards models. Clinical data and chest radiographs at admission were used to estimate patient outcome (death or discharge) and duration to the outcome. Harrell's concordance index (c-index) of the DeepSurv with CNN model was 0.82 (0.75-0.88), significantly higher than that of the DeepSurv model with clinical data only (c-index = 0.77 (0.69-0.84), p = 0.011), CNNSurv (c-index = 0.70 (0.63-0.79), p = 0.001), and the Cox proportional hazards model (c-index = 0.71 (0.63-0.79), p = 0.001). These results suggest that the time-to-event prognosis model became more accurate when chest radiographs and clinical data were used together.
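The evaluation metric is reproducible with standard tools; a hedged sketch using the lifelines package on toy survival data follows, where the risk scores stand in for the DeepSurv/CNN outputs.

```python
# Harrell's c-index on toy time-to-event data via lifelines.
from lifelines.utils import concordance_index

durations = [5, 12, 30, 45, 80]          # days to death or discharge
predicted = [-2.0, -1.1, 0.3, 0.8, 1.5]  # higher = longer predicted survival
observed = [1, 1, 0, 1, 0]               # 1 = event observed, 0 = censored

print(concordance_index(durations, predicted, observed))
```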
A Comparison of Three Different Deep Learning-Based Models to Predict the MGMT Promoter Methylation Status in Glioblastoma Using Brain MRI
Faghani, S.
Khosravi, B.
Moassefi, M.
Conte, G. M.
Erickson, B. J.
J Digit Imaging2023Journal Article, cited 0 times
Website
BraTS 2021
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Brain tumor
Classification
Deep learning
MGMT methylation status
Glioblastoma (GBM) is the most common primary malignant brain tumor in adults. The standard treatment for GBM consists of surgical resection followed by concurrent chemoradiotherapy and adjuvant temozolomide. O-6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status is an important prognostic biomarker that predicts the response to temozolomide and guides treatment decisions. At present, the only reliable way to determine MGMT promoter methylation status is through the analysis of tumor tissue. Considering the complications of tissue-based methods, an imaging-based approach is preferred. This study aimed to compare three different deep learning-based approaches for predicting MGMT promoter methylation status. We obtained 576 T2WI with their corresponding tumor masks and MGMT promoter methylation status from the Brain Tumor Segmentation (BraTS) 2021 dataset. We developed three different models: voxel-wise, slice-wise, and whole-brain. For voxel-wise classification, methylated and unmethylated MGMT tumor masks were labeled 1 and 2, respectively, with 0 as background. We converted each T2WI into 32 x 32 x 32 patches and trained a 3D-Vnet model for tumor segmentation. After inference, we constructed the whole brain volume based on the patch coordinates. The final prediction of MGMT methylation status was made by majority voting among the predicted voxel values of the biggest connected component. For slice-wise classification, we trained an object detection model for tumor detection and MGMT methylation status prediction, then used majority voting for the final prediction. For the whole-brain approach, we trained a 3D DenseNet121 for prediction. Whole-brain, slice-wise, and voxel-wise accuracy was 65.42% (SD 3.97%), 61.37% (SD 1.48%), and 56.84% (SD 4.38%), respectively.
Tensor-RT-Based Transfer Learning Model for Lung Cancer Classification
Bishnoi, V.
Goel, N.
J Digit Imaging2023Journal Article, cited 0 times
Website
LIDC-IDRI
Computed Tomography (CT)
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
DICOM
Lung cancer
Nvidia tensor-RT
Transfer learning
Cancer is a leading cause of death across the globe, among which lung cancer has the highest mortality rate. Early diagnosis through computed tomography scan imaging helps to identify the stages of lung cancer. Several deep learning-based classification methods have been employed for developing automatic systems for the diagnosis and detection of computed tomography scan lung slices. However, diagnosis based on nodule detection is a challenging task as it requires manual annotation of nodule regions, and these computer-aided systems have not yet achieved the desired performance in real-time lung cancer classification. In the present paper, a high-speed real-time transfer learning-based framework is proposed for the classification of computed tomography lung cancer slices into benign and malignant. The proposed framework comprises three modules: (i) pre-processing and segmentation of lung images using K-means clustering based on cosine distance and morphological operations; (ii) tuning and regularization of the proposed model, named weighted VGG deep network (WVDN); and (iii) model inference in Nvidia TensorRT during post-processing for deployment in real-time applications. In this study, two pre-trained CNN models were evaluated and compared with the proposed model. All the models were trained on 19,419 computed tomography scan lung slices obtained from the publicly available Lung Image Database Consortium and Image Database Resource Initiative dataset. The proposed model achieved the best classification metrics: an accuracy of 0.932, precision, recall, and F1 score of 0.93, and a Cohen's kappa score of 0.85. A statistical evaluation was also performed on the classification parameters and achieved a p-value < 0.0001 for the proposed model. The quantitative and statistical results validate the improved performance of the proposed model compared to state-of-the-art methods. The proposed framework is based on complete computed tomography slices rather than marked annotations and may help improve clinical diagnosis.
Evaluation of Semiautomatic and Deep Learning-Based Fully Automatic Segmentation Methods on [18F]FDG PET/CT Images from Patients with Lymphoma: Influence on Tumor Characterization
Constantino, C. S.
Leocadio, S.
Oliveira, F. P. M.
Silva, M.
Oliveira, C.
Castanheira, J. C.
Silva, A.
Vaz, S.
Teixeira, R.
Neves, M.
Lucio, P.
Joao, C.
Costa, D. C.
J Digit Imaging2023Journal Article, cited 0 times
Website
FDG-PET-CT-Lesions
AutoPET
Artificial intelligence
Computer-assisted image analysis
Lymphoma
Reproducibility of results
[18F]FDG PET/CT
Semi-automatic segmentation
Automatic segmentation
The objective is to assess the performance of seven semiautomatic and two fully automatic segmentation methods on [18F]FDG PET/CT lymphoma images and evaluate their influence on tumor quantification. All lymphoma lesions identified in 65 whole-body [18F]FDG PET/CT staging images were segmented by two experienced observers using manual and semiautomatic methods. Semiautomatic segmentation using absolute and relative thresholds, k-means and Bayesian clustering, and a self-adaptive configuration (SAC) of k-means and Bayesian was applied. Three state-of-the-art deep learning-based segmentation methods using a 3D U-Net architecture were also applied. One was semiautomatic and two were fully automatic, of which one is publicly available. The Dice coefficient (DC) measured segmentation overlap, considering manual segmentation the ground truth. Lymphoma lesions were characterized by 31 features. The intraclass correlation coefficient (ICC) assessed feature agreement between different segmentation methods. Nine hundred twenty [18F]FDG-avid lesions were identified. The SAC Bayesian method achieved the highest median intra-observer DC (0.87). Inter-observer DC was higher for SAC Bayesian than manual segmentation (0.94 vs 0.84, p < 0.001). Semiautomatic deep learning-based median DC was promising (0.83 (Obs1), 0.79 (Obs2)). Threshold-based methods and the publicly available 3D U-Net gave poorer results (0.56 ≤ DC ≤ 0.68). Maximum, mean, and peak standardized uptake values, metabolic tumor volume, and total lesion glycolysis showed excellent agreement (ICC ≥ 0.92) between the manual and SAC Bayesian segmentation methods. The SAC Bayesian classifier is more reproducible and produces lesion features similar to manual segmentation, giving the best concordant results of all the other methods. Deep learning-based segmentation can achieve overall good segmentation results but failed in a few patients, impacting patients' clinical evaluation.
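The overlap metric used throughout this comparison is the Dice coefficient, which takes only a few lines of NumPy; the toy masks below are illustrative.

```python
# Dice coefficient between two boolean segmentation masks.
import numpy as np

def dice(a, b, eps=1e-8):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

m1 = np.zeros((16, 16), bool); m1[4:12, 4:12] = True
m2 = np.zeros((16, 16), bool); m2[6:14, 6:14] = True
print(dice(m1, m2))
```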
External Validation of Robust Radiomic Signature to Predict 2-Year Overall Survival in Non-Small-Cell Lung Cancer
Jha, A. K.
Sherkhane, U. B.
Mthun, S.
Jaiswar, V.
Purandare, N.
Prabhash, K.
Wee, L.
Rangarajan, V.
Dekker, A.
J Digit Imaging2023Journal Article, cited 0 times
NSCLC-Radiomics
Computed Tomography (CT)
Radiomic feature
LUNG
Classification
Random Forest
Lung cancer is the second most fatal disease worldwide. In the last few years, radiomics has been explored to develop prediction models for various clinical endpoints in lung cancer. However, the robustness of radiomic features is under question and has been identified as one of the roadblocks to implementing a radiomic-based prediction model in the clinic. Many past studies have suggested identifying robust radiomic features for developing a prediction model. In our earlier study, we identified robust radiomic features for prediction model development. The objective of this study was to develop and validate robust radiomic signatures for predicting 2-year overall survival in non-small cell lung cancer (NSCLC). This retrospective study included a cohort of 300 stage I-IV NSCLC patients. Data from 200 institutional patients were included for training and internal validation, and data from 100 patients from The Cancer Imaging Archive (TCIA) open-source image repository were used for external validation. Radiomic features were extracted from the CT images of both cohorts. Feature selection was performed using hierarchical clustering, a chi-squared test, and recursive feature elimination (RFE). In total, six prediction models were developed using random forest (RF-Model-O, RF-Model-B), gradient boosting (GB-Model-O, GB-Model-B), and support vector (SV-Model-O, SV-Model-B) classifiers to predict 2-year overall survival (OS) on the original as well as balanced data. Model validation was performed using 10-fold cross-validation, internal validation, and external validation. Using a multistep feature selection method, the overall top 10 features were chosen. On internal validation, the two random forest models (RF-Model-O, RF-Model-B) displayed the highest accuracy; their scores on the original and balanced datasets were 0.81 and 0.77, respectively. During external validation, both random forest models' accuracy was 0.68. In our study, robust radiomic features showed promising predictive performance for 2-year overall survival in NSCLC.
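One stage of the multistep feature selection, recursive feature elimination, can be sketched directly with scikit-learn; the data shapes and the choice of a random forest estimator mirror the abstract but are otherwise illustrative assumptions.

```python
# RFE sketch: keep the top 10 of 100 toy radiomic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))       # 200 patients, 100 radiomic features
y = rng.integers(0, 2, size=200)      # 2-year overall survival label (toy)

selector = RFE(RandomForestClassifier(random_state=0),
               n_features_to_select=10)
selector.fit(X, y)
print(np.flatnonzero(selector.support_))  # indices of retained features
```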
Brain Tumor Segmentation for Multi-Modal MRI with Missing Information
Feng, X.
Ghimire, K.
Kim, D. D.
Chandra, R. S.
Zhang, H.
Peng, J.
Han, B.
Huang, G.
Chen, Q.
Patel, S.
Bettagowda, C.
Sair, H. I.
Jones, C.
Jiao, Z.
Yang, L.
Bai, H.
J Digit Imaging2023Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
BraTS 2021
3D U-Net
Brain tumor segmentation
Deep learning
Multi-contrast MRI
Sequence dropout
Deep convolutional neural networks (DCNNs) have shown promise in brain tumor segmentation from multi-modal MRI sequences, accommodating heterogeneity in tumor shape and appearance. The fusion of multiple MRI sequences allows networks to explore complementary tumor information for segmentation. However, developing a network that maintains clinical relevance in situations where certain MRI sequence(s) might be unavailable or unusable poses a significant challenge. While one solution is to train multiple models with different MRI sequence combinations, it is impractical to train a model for every possible sequence combination. In this paper, we propose a DCNN-based brain tumor segmentation framework incorporating a novel sequence dropout technique in which networks are trained to be robust to missing MRI sequences while employing all other available sequences. Experiments were performed on the RSNA-ASNR-MICCAI BraTS 2021 Challenge dataset. When all MRI sequences were available, there were no significant differences in performance between the model with and without dropout for enhancing tumor (ET), tumor core (TC), and whole tumor (WT) (p-values 1.000, 1.000, 0.799, respectively), demonstrating that the addition of dropout improves robustness without hindering overall performance. When key sequences were unavailable, the network with sequence dropout performed significantly better. For example, when tested on only T1, T2, and FLAIR sequences together, DSC for ET, TC, and WT increased from 0.143 to 0.486, 0.431 to 0.680, and 0.854 to 0.901, respectively. Sequence dropout represents a relatively simple yet effective approach for brain tumor segmentation with missing MRI sequences.
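One plausible reading of the sequence dropout technique is channel-level dropout during training: whole MRI-sequence channels are randomly zeroed so the network learns to cope with missing inputs. The drop probability and the never-drop-everything guard in this sketch are assumptions.

```python
# Hedged sketch of sequence dropout over stacked MRI channels.
import torch

def sequence_dropout(x, p=0.25):
    # x: (N, C, D, H, W) with C stacked sequences (e.g. T1, T1c, T2, FLAIR)
    n, c = x.shape[0], x.shape[1]
    keep = (torch.rand(n, c) > p).to(x.dtype)
    keep[keep.sum(dim=1) == 0] = 1.0   # never drop every sequence at once
    return x * keep.view(n, c, 1, 1, 1)
```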
Development of End-to-End AI–Based MRI Image Analysis System for Predicting IDH Mutation Status of Patients with Gliomas: Multicentric Validation
Santinha, João
Katsaros, Vasileios
Stranjalis, George
Liouta, Evangelia
Boskos, Christos
Matos, Celso
Viegas, Catarina
Papanikolaou, Nickolas
Journal of Imaging Informatics in Medicine2024Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Radiogenomics
Isocitrate dehydrogenase (IDH) mutation
BRAIN
Classification
Radiogenomics has shown potential to predict genomic phenotypes from medical images. The development of models using standard-of-care pre-operative MRI images, as opposed to advanced MRI images, enables a broader reach of such models. In this work, a radiogenomics model for IDH mutation status prediction from standard-of-care MRIs in patients with glioma was developed and validated using multicentric data. A cohort of 142 (wild-type: 32.4%) patients with glioma retrieved from the TCIA/TCGA was used to train a logistic regression model to predict the IDH mutation status. The model was evaluated using retrospective data collected in two distinct hospitals, comprising 36 (wild-type: 63.9%) and 53 (wild-type: 75.5%) patients. Model development utilized ROC analysis. Model discrimination and calibration were used for validation. The model yielded an AUC of 0.741 vs. 0.716 vs. 0.938, a sensitivity of 0.784 vs. 0.739 vs. 0.875, and a specificity of 0.657 vs. 0.692 vs. 1.000 on the training, test cohort 1, and test cohort 2, respectively. The assessment of model fairness suggested an unbiased model for age and sex, and calibration tests showed a p < 0.05. These results indicate that the developed model allows the prediction of the IDH mutation status in gliomas using standard-of-care MRI images and does not appear to hold sex and age biases.
Robustness of Deep Networks for Mammography: Replication Across Public Datasets
Velarde, Osvaldo M.
Lin, Clarissa
Eskreis-Winkler, Sarah
Parra, Lucas C.
Journal of Imaging Informatics in Medicine2024Journal Article, cited 0 times
CBIS-DDSM
CMMD
Deep Learning
Computer Aided Diagnosis (CADx)
BREAST
Mammography
Deep neural networks have demonstrated promising performance in screening mammography with recent studies reporting performance at or above the level of trained radiologists on internal datasets. However, it remains unclear whether the performance of these trained models is robust and replicates across external datasets. In this study, we evaluate four state-of-the-art publicly available models using four publicly available mammography datasets (CBIS-DDSM, INbreast, CMMD, OMI-DB). Where test data was available, published results were replicated. The best-performing model, which achieved an area under the ROC curve (AUC) of 0.88 on internal data from NYU, achieved here an AUC of 0.9 on the external CMMD dataset (N = 826 exams). On the larger OMI-DB dataset (N = 11,440 exams), it achieved an AUC of 0.84 but did not match the performance of individual radiologists (at a specificity of 0.92, the sensitivity was 0.97 for the radiologist and 0.53 for the network for a 1-year follow-up). The network showed higher performance for in situ cancers, as opposed to invasive cancers. Among invasive cancers, it was relatively weaker at identifying asymmetries and was relatively stronger at identifying masses. The three other trained models that we evaluated all performed poorly on external datasets. Independent validation of trained models is an essential step to ensure safe and reliable use. Future progress in AI for mammography may depend on a concerted effort to make larger datasets publicly available that span multiple clinical sites.
Generative Adversarial Networks for Brain MRI Synthesis: Impact of Training Set Size on Clinical Application
Zoghby, M. M.
Erickson, B. J.
Conte, G. M.
J Imaging Inform Med2024Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
Generative Adversarial Network (GAN)
Glioma
Image-to-image translation
Magnetic Resonance Imaging (MRI)
BRAIN
Synthetic MRI
We evaluated the impact of training set size on generative adversarial networks (GANs) to synthesize brain MRI sequences. We compared three sets of GANs trained to generate pre-contrast T1 (gT1) from post-contrast T1 and FLAIR (gFLAIR) from T2. The baseline models were trained on 135 cases; for this study, we used the same model architecture but a larger cohort of 1251 cases and two stopping rules, an early checkpoint (early models) and one after 50 epochs (late models). We tested all models on an independent dataset of 485 newly diagnosed gliomas. We compared the generated MRIs with the original ones using the structural similarity index (SSI) and mean squared error (MSE). We simulated scenarios where either the original T1, FLAIR, or both were missing and used their synthesized version as inputs for a segmentation model with the original post-contrast T1 and T2. We compared the segmentations using the dice similarity coefficient (DSC) for the contrast-enhancing area, non-enhancing area, and the whole lesion. For the baseline, early, and late models on the test set, for the gT1, median SSI was .957, .918, and .947; median MSE was .006, .014, and .008. For the gFLAIR, median SSI was .924, .908, and .915; median MSE was .016, .016, and .019. The range DSC was .625-.955, .420-.952, and .610-.954. Overall, GANs trained on a relatively small cohort performed similarly to those trained on a cohort ten times larger, making them a viable option for rare diseases or institutions with limited resources.
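The two comparison metrics are available in scikit-image; a short sketch on toy slices follows (the `data_range` argument must match the image scaling).

```python
# SSIM and MSE between an original and a generated slice.
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

rng = np.random.default_rng(0)
original = rng.random((128, 128))
generated = np.clip(original + rng.normal(scale=0.05, size=(128, 128)), 0, 1)

print(structural_similarity(original, generated, data_range=1.0))
print(mean_squared_error(original, generated))
```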
Low-Dose CT Image Super-resolution Network with Noise Inhibition Based on Feedback Feature Distillation Mechanism
Chi, J.
Wei, X.
Sun, Z.
Yang, Y.
Yang, B.
J Imaging Inform Med2024Journal Article, cited 0 times
Pancreas-CT
Attention mechanism
Deep learning
Image super-resolution
Low-dose computed tomography
Low-dose computed tomography (LDCT) has been widely used in medical diagnosis. In practice, doctors often zoom in on LDCT slices to see lesions more clearly, but a simple zooming operation fails to suppress low-dose artifacts, leading to distorted details. Therefore, numerous LDCT super-resolution (SR) methods have been proposed to improve the quality of zooming without increasing the dose in CT scanning. However, some drawbacks in existing methods still need to be addressed. First, the region of interest (ROI) is not emphasized due to the lack of guidance in the reconstruction process. Second, convolutional blocks extracting fixed-resolution features fail to concentrate on the essential multi-scale features. Third, a single SR head cannot suppress the residual artifacts. To address these issues, we propose an LDCT joint SR and denoising reconstruction network. Our proposed network consists of global dual-guidance attention fusion modules (GDAFMs) and multi-scale anastomosis blocks (MABs). The GDAFM directs the network to focus on the ROI by fusing extra mask guidance and average CT image guidance, while the MAB introduces hierarchical features through anastomosis connections to leverage multi-scale features and improve feature representation ability. To suppress radial residual artifacts, we optimize our network using the feedback feature distillation mechanism (FFDM), which shares the backbone to learn features corresponding to the denoising task. We apply the proposed method to the 3D-IRCADB and PANCREAS datasets to evaluate its ability on LDCT image SR reconstruction. The experimental results, compared with state-of-the-art methods, illustrate the superiority of our approach with respect to peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and qualitative observations. Our proposed LDCT joint SR and denoising reconstruction network has been extensively evaluated through ablation, quantitative, and qualitative experiments. The results demonstrate that our method can recover noise-free and detail-sharp images, producing better reconstructions. Code is available at https://github.com/neu-szy/ldct_sr_dn_w_ffdm .
Synthesis of Hybrid Data Consisting of Chest Radiographs and Tabular Clinical Records Using Dual Generative Models for COVID-19 Positive Cases
Kikuchi, T.
Hanaoka, S.
Nakao, T.
Takenaga, T.
Nomura, Y.
Mori, H.
Yoshikawa, T.
J Imaging Inform Med2024Journal Article, cited 0 times
Website
COVID-19-NY-SBU
Auto-encoding GAN
Covid-19
Generative Adversarial Network (GAN)
Data sharing
Synthetic data generation
The aim was to generate synthetic medical data incorporating image-tabular hybrid data by merging an image encoding/decoding model with a table-compatible generative model, and to assess its utility. We used 1342 cases from the Stony Brook University COVID-19-positive cases, comprising chest X-ray radiographs (CXRs) and tabular clinical data, as a private dataset (pDS). We generated a synthetic dataset (sDS) through the following steps: (I) dimensionally reducing CXRs in the pDS using a pretrained encoder of the auto-encoding generative adversarial network (alphaGAN) and integrating them with the corresponding tabular clinical data; (II) training the conditional tabular GAN (CTGAN) on this combined data to generate synthetic records, encompassing encoded image features and clinical data; and (III) reconstructing synthetic images from these encoded image features in the sDS using a pretrained decoder of the alphaGAN. The utility of the sDS was assessed by the performance of prediction models for patient outcomes (deceased or discharged). For the pDS test set, the area under the receiver operating characteristic (AUC) curve was calculated to compare the performance of prediction models trained separately with the pDS, the sDS, or a combination of both. We created an sDS comprising CXRs with a resolution of 256 x 256 pixels and tabular data containing 13 variables. The AUC for the outcome was 0.83 when the model was trained with the pDS, 0.74 with the sDS, and 0.87 when combining the pDS and sDS for training. Our method is effective for generating synthetic records consisting of both images and tabular clinical data.
An AI-Based Low-Risk Lung Health Image Visualization Framework Using LR-ULDCT
Rai, S.
Bhatt, J. S.
Patra, S. K.
J Imaging Inform Med2024Journal Article, cited 0 times
Website
Phantom FDA
Image Enhancement/methods
LUNG
Artificial intelligence
Deep learning
Lung infection
Reconstruction
Ultra-low-dose computed tomography
Visualization system
COVID-19
Pneumonia
In this article, we propose an AI-based low-risk visualization framework for lung health monitoring using low-resolution ultra-low-dose CT (LR-ULDCT). We present a novel deep cascade processing workflow to achieve diagnostic visualization on LR-ULDCT (<0.3 mSv) on par with high-resolution CT (HRCT) acquired at 100 mSv. To this end, we build a low-risk and affordable deep cascade network comprising three sequential deep processes: restoration, super-resolution (SR), and segmentation. Given a degraded LR-ULDCT, the first novel network learns a restoration function in an unsupervised manner from augmented patch-based dictionaries and residuals. The restored version is then super-resolved to the target (sensor) resolution. Here, we combine perceptual and adversarial losses in a novel GAN to establish closeness between the probability distributions of the generated SR-ULDCT and the restored LR-ULDCT. The SR-ULDCT is then presented to the segmentation network, which first separates the chest portion from the SR-ULDCT, followed by lobe-wise colorization. Finally, we extract the five lobes to account for the presence of ground glass opacity (GGO) in the lung. Hence, our AI-based system provides low-risk visualization of the input degraded LR-ULDCT at various stages, i.e., restored LR-ULDCT, restored SR-ULDCT, and segmented SR-ULDCT, and achieves the diagnostic power of HRCT. We perform case studies by experimenting on real datasets of COVID-19, pneumonia, and pulmonary edema/congestion while comparing our results with the state of the art. Ablation experiments are conducted to better visualize the different operating pipelines. Finally, we present a verification report by fourteen (14) experienced radiologists and pulmonologists.
A Novel Structure Fusion Attention Model to Detect Architectural Distortion on Mammography
Ou, T. W.
Weng, T. C.
Chang, R. F.
J Imaging Inform Med2024Journal Article, cited 0 times
Website
CBIS-DDSM
Architectural distortion
Architecture enhancement
Attention mechanism
Computer Aided Detection (CADe)
Convergence map
Deep learning
Mammography
Structure fusion attention model
Architectural distortion (AD) is one of the most common findings on mammograms, and it may represent not only cancer but also a lesion such as a radial scar that may have an associated cancer. AD accounts for 18-45% of missed cancers, and the positive predictive value of AD is approximately 74.5%. Early detection of AD leads to early diagnosis and treatment of the cancer and improves the overall prognosis. However, detection of AD is a challenging task. In this work, we propose a new approach for detecting architectural distortion in mammography images by combining preprocessing methods and a novel structure fusion attention model. The proposed structure-focused weighted orientation preprocessing method is composed of the original image, the architecture enhancement map, and the weighted orientation map, highlighting suspicious AD locations. The proposed structure fusion attention model captures information from different channels and outperforms other models in terms of false positives and top sensitivity, which refers to the maximum sensitivity a model can achieve while accepting the highest number of false positives, reaching a top sensitivity of 0.92 with only 0.6590 false positives per image. The findings suggest that the combination of preprocessing methods and a novel network architecture can lead to more accurate and reliable AD detection. Overall, the proposed approach offers a novel perspective on detecting ADs, and we believe that our method can be applied in clinical settings in the future, assisting radiologists in the early detection of ADs from mammography and ultimately leading to earlier treatment of breast cancer patients.
Uncertainty Estimation for Dual View X-ray Mammographic Image Registration Using Deep Ensembles
Walton, W. C.
Kim, S. J.
J Imaging Inform Med2024Journal Article, cited 0 times
Website
Breast-Cancer-Screening-DBT
CBIS-DDSM
Breast cancer
Image registration
Lesion correspondence
Mammography
Neural network
Uncertainty
Techniques are developed for generating uncertainty estimates for convolutional neural network (CNN)-based methods for registering the locations of lesions between the craniocaudal (CC) and mediolateral oblique (MLO) mammographic X-ray image views. Multi-view lesion correspondence is an important task that clinicians perform for characterizing lesions during routine mammographic exams. Automated registration tools can aid in this task, yet if the tools also provide confidence estimates, they can be of greater value to clinicians, especially in cases involving dense tissue where lesions may be difficult to see. A set of deep ensemble-based techniques, which leverage a negative log-likelihood (NLL)-based cost function, are implemented for estimating uncertainties. The ensemble architectures involve significant modifications to an existing CNN dual-view lesion registration algorithm. Three architectural designs are evaluated, and different ensemble sizes are compared using various performance metrics. The techniques are tested on synthetic X-ray data, real 2D X-ray data, and slices from real 3D X-ray data. The ensembles generate covariance-based uncertainty ellipses that are correlated with registration accuracy, such that the ellipse sizes can give a clinician an indication of confidence in the mapping between the CC and MLO views. The results also show that the ellipse sizes can aid in improving computer-aided detection (CAD) results by matching CC/MLO lesion detects and reducing false alarms from both views, adding to clinical utility. The uncertainty estimation techniques show promise as a means for aiding clinicians in confidently establishing multi-view lesion correspondence, thereby improving diagnostic capability.
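The deep-ensemble recipe referenced here (each member outputs a mean and a variance, trained with a Gaussian NLL, then combined into a predictive mean and total variance) can be sketched briefly; the member networks stand in for the paper's registration CNNs, and this follows the generic deep-ensembles formulation rather than the authors' exact implementation.

```python
# Gaussian NLL loss and ensemble combination for mean/variance heads.
import torch

def gaussian_nll(mean, log_var, target):
    # negative log-likelihood of target under N(mean, exp(log_var))
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()

def ensemble_predict(means, log_vars):
    # means, log_vars: (M, ...) stacked over M ensemble members
    mu = means.mean(dim=0)
    var = (log_vars.exp() + means ** 2).mean(dim=0) - mu ** 2
    return mu, var  # predictive mean and total (data + model) variance
```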
Brain structural disorders detection and classification approaches: a review
Bhatele, Kirti Raj
Bhadauria, Sarita Singh
2019Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Machine Learning
This paper encapsulates the various developments in unsupervised, supervised, and semi-supervised brain anomaly detection approaches proposed by researchers working in medical image segmentation and classification. Researchers in image segmentation, interpretation, and computer vision are constantly working to automate tumour segmentation, anomaly detection, classification, and other structural disorder prediction at an early stage with the aid of computers. Doctors use different medical imaging modalities to diagnose brain tumours and other structural brain disorders, an integral part of the diagnosis and prognosis process. When these medical image modalities are used along with various image segmentation methods and machine learning approaches, brain structural disorder detection and classification can be performed in a semi-automated or fully automated manner with high accuracy. This paper presents such approaches using various medical image modalities for the accurate detection and classification of brain tumours and other brain structural disorders. All the major phases of a brain tumour or brain structural disorder detection and classification approach are covered, beginning with a comparison of various medical image pre-processing techniques, then major segmentation approaches, followed by approaches based on machine learning. This paper also presents an evaluation and comparison of the popular texture- and shape-based feature extraction methods used in combination with different machine learning classifiers on the BRATS 2013 dataset. The fusion of MRI modalities used along with hybrid feature extraction methods and an ensemble model delivers the best result in terms of accuracy.
URO-GAN: An untrustworthy region optimization approach for adipose tissue segmentation based on adversarial learning
Shen, Kaifei
Quan, Hongyan
Han, Jun
Wu, Min
Applied Intelligence2022Journal Article, cited 0 times
Website
CT Lymph Nodes
Segmentation
Computed Tomography (CT)
Automatic segmentation of adipose tissue from CT images is an essential module of computer-assisted medical diagnosis. Large numbers of abdominal cross-section CT images can be used to segment subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) with deep learning methods. However, the CT images still need to be professionally and accurately annotated to improve segmentation quality. This paper proposes a semi-supervised segmentation network based on adversarial learning. The model, called URO-GAN, consists of two paths used to segment SAT and VAT, respectively. An SAT-to-VAT transmission mechanism is set up between these two paths, in which several inverse-SAT excitation blocks help the SAT segmentation network guide the VAT segmentation network. An untrustworthy region optimization mechanism is proposed to improve segmentation quality and keep the adversarial learning stable. With the confidence map output from the discriminator network, an optimizer network is used to fix errors in the masks predicted by the segmentation network. URO-GAN achieves good results when trained with 84 annotated images and 3969 unannotated images. Experimental results demonstrate the effectiveness of our approach on the segmentation of adipose tissue in medical images.
Joint model- and immunohistochemistry-driven few-shot learning scheme for breast cancer segmentation on 4D DCE-MRI
Wu, Youqing
Wang, Yihang
Sun, Heng
Jiang, Chunjuan
Li, Bo
Li, Lihua
Pan, Xiang
Applied Intelligence2022Journal Article, cited 0 times
Website
Breast-MRI-NACT-Pilot
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
BREAST
Segmentation
Automatic segmentation of breast cancer on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), which reveals both temporal and spatial profiles of the foundational anatomy, plays a crucial role in the clinical diagnosis and treatment of breast cancer. Recently, deep learning has witnessed great advances in tumour segmentation tasks. However, most of those high-performing models require a large number of annotated gold-standard samples, which remains a challenge in the accurate segmentation of 4D DCE-MRI breast cancer with high heterogeneity. To address this problem, we propose a joint immunohistochemistry- (IHC) and model-driven few-shot learning scheme for 4D DCE-MRI breast cancer segmentation. Specifically, a unique bidirectional convolutional recurrent graph attention autoencoder (BiCRGADer) is developed to exploit the spatiotemporal pharmacokinetic characteristics contained in 4D DCE-MRI sequences. Moreover, the IHC-driven strategy that employs a few-shot learning scenario optimizes BiCRGADer by learning the features of MR imaging phenotypes of specific molecular subtypes during training. In particular, a parameter-free module (PFM) is designed to adaptively enrich query features with support features and masks. The combined model- and IHC-driven scheme boosts performance with only a small training sample size. We conduct methodological analyses and empirical evaluations on datasets from The Cancer Imaging Archive (TCIA) to justify the effectiveness and adaptability of our scheme. Extensive experiments show that the proposed scheme outperforms state-of-the-art segmentation models and provides a potential and powerful noninvasive approach for the artificial intelligence community dealing with oncological applications.
Scalable and flexible management of medical image big data
Teng, Dejun
Kong, Jun
Wang, Fusheng
Distributed and Parallel Databases2018Journal Article, cited 0 times
Website
Algorithm Development
Brain Tumour Segmentation with a Multi-Pathway ResNet Based UNet
Saha, Aheli
Zhang, Yu-Dong
Satapathy, Suresh Chandra
Journal of Grid Computing2021Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
Segmentation
Classification
Automatic segmentation of brain tumour regions is essential for proper diagnosis and treatment of the disease. Gliomas can appear in any region and can be of any shape and size, which makes automatic detection challenging. With the availability of high-quality MRI scans, however, various strides have been made in this field. In this paper, we propose a novel multi-pathway UNet incorporating residual networks and skip connections to segment multimodal Magnetic Resonance images into three hierarchical glioma sub-regions. The multi-pathway design decomposes the multiclass segmentation problem into subsequent binary segmentation tasks, where each pathway is responsible for segmenting one class from the background. Instead of a cascaded architecture for the hierarchical regions, we propose a shared encoder followed by separate decoders for each category. Residual connections employed in the model help improve performance. Experiments have been carried out on the BraTS 2020 dataset and have achieved promising results.
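A minimal sketch of the shared-encoder, per-class-decoder idea (PyTorch; the layer widths and depths are placeholders, not the paper's architecture):

```python
import torch
import torch.nn as nn

class SharedEncoderMultiDecoder(nn.Module):
    """One shared encoder, one binary decoder per hierarchical tumour
    region (e.g. whole tumour, core, enhancing)."""
    def __init__(self, in_ch=4, n_regions=3, width=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.decoders = nn.ModuleList(
            nn.Conv3d(width, 1, 1) for _ in range(n_regions)
        )

    def forward(self, x):
        feats = self.encoder(x)
        # each pathway segments one class against the background
        return [torch.sigmoid(d(feats)) for d in self.decoders]
```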
Total Variation for Image Denoising Based on a Novel Smart Edge Detector: An Application to Medical Images
Said, Ahmed Ben
Hadjidj, Rachid
Foufou, Sebti
Journal of Mathematical Imaging and Vision2018Journal Article, cited 0 times
Website
Algorithm Development
Image denoising
Rudin-Osher-Fatemi denoising model
Segmentation of three-dimensional images with parametric active surfaces and topology changes
Benninghoff, Heike
Garcke, Harald
Journal of Scientific Computing2017Journal Article, cited 1 times
Website
Algorithm Development
Segmentation
In this paper, we introduce a novel parametric finite element method for segmentation of three-dimensional images. We consider a piecewise constant version of the Mumford-Shah and the Chan-Vese functionals and perform a region-based segmentation of 3D image data. An evolution law that pushes the surfaces toward the boundaries of 3D objects in the image is derived from energy minimization problems. We propose a parametric scheme that describes the evolution of parametric surfaces, and an efficient finite element scheme for a numerical approximation of the evolution equations. Since standard parametric methods cannot handle topology changes automatically, an efficient method is presented to detect, identify and perform changes in the topology of the surfaces. One main focus of this paper is the algorithmic details needed to handle topology changes such as splitting and merging of surfaces and change of the genus of a surface. Different artificial images are studied to demonstrate the ability to detect the different types of topology changes. Finally, the parametric method is applied to segmentation of medical 3D images.
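For reference, the classical two-phase piecewise constant (Chan-Vese) energy has the standard form below; the paper minimises a 3D surface analogue of this functional, and the exact weights may differ:

```latex
% u_0 is the image, \Gamma the evolving interface, c_1, c_2 the region means:
E(\Gamma, c_1, c_2) = \mu\,\mathrm{Area}(\Gamma)
  + \lambda_1 \int_{\mathrm{inside}(\Gamma)} \bigl( u_0(x) - c_1 \bigr)^2 \, dx
  + \lambda_2 \int_{\mathrm{outside}(\Gamma)} \bigl( u_0(x) - c_2 \bigr)^2 \, dx
```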
Medical Image Retrieval Using Vector Quantization and Fuzzy S-tree
Nowaková, Jana
Prílepok, Michal
Snášel, Václav
Journal of Medical Systems2017Journal Article, cited 33 times
Website
QIN Breast DCE-MRI
Classification
Content based image retrieval (CBIR)
The aim of this article is to present a novel method for fuzzy medical image retrieval (FMIR) using vector quantization (VQ) with fuzzy signatures in conjunction with fuzzy S-trees. In the past, searching for similar images was based not on image content (e.g. shapes, colour) but on the image name. Methods for content-based retrieval exist, but there is still room for more efficient approaches. The proposed image retrieval system is used for finding similar images, in our case in the medical area of mammography, in addition to creating a list of similar cases. The created list is used to assess the nature of the finding, i.e., whether it is malignant or benign. The suggested method is compared to a method using Normalized Compression Distance (NCD) instead of fuzzy signatures and the fuzzy S-tree. The NCD method is useful for creating the list of similar cases for malignancy assessment, but it is not able to capture the area of interest in the image. The proposed method will be added to a complex decision support system to help determine appropriate healthcare according to the experience of similar previous cases.
ECM-CSD: An Efficient Classification Model for Cancer Stage Diagnosis in CT Lung Images Using FCM and SVM Techniques
Kavitha, MS
Shanthini, J
Sabitha, R
Journal of Medical Systems2019Journal Article, cited 0 times
Website
LIDC-IDRI
Radiomics
Versatile Convolutional Networks Applied to Computed Tomography and Magnetic Resonance Image Segmentation
Almeida, Gonçalo
Tavares, João Manuel R. S.
Journal of Medical Systems2021Journal Article, cited 0 times
LCTSC
Segmentation
Deep Learning
Medical image segmentation has seen positive developments in recent years but remains challenging with many practical obstacles to overcome. The applications of this task are wide-ranging in many fields of medicine, and used in several imaging modalities which usually require tailored solutions. Deep learning models have gained much attention and have been lately recognized as the most successful for automated segmentation. In this work we show the versatility of this technique by means of a single deep learning architecture capable of successfully performing segmentation on two very different types of imaging: computed tomography and magnetic resonance. The developed model is fully convolutional with an encoder-decoder structure and high-resolution pathways which can process whole three-dimensional volumes at once, and learn directly from the data to find which voxels belong to the regions of interest and localize those against the background. The model was applied to two publicly available datasets achieving equivalent results for both imaging modalities, as well as performing segmentation of different organs in different anatomic regions with comparable success.
A simple texture feature for retrieval of medical images
Lan, Rushi
Zhong, Si
Liu, Zhenbing
Shi, Zhuo
Luo, Xiaonan
Multimedia Tools and Applications2017Journal Article, cited 2 times
Website
Imaging features
Classification
Algorithm Development
Texture is an important attribute of medical images and has been applied in many medical image applications. This paper proposes a simple approach that employs the texture features of medical images for retrieval. The developed approach first filters medical images using different Gabor and Schmid filters, and then uniformly partitions the filtered images into non-overlapping patches. These operations provide extensive local texture information. The bag-of-words model is finally used to obtain feature representations of the images. Compared with several existing features, the proposed one is more discriminative and efficient. Experiments on two benchmark medical CT image databases demonstrate the effectiveness of the proposed approach.
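A minimal sketch of the pipeline as described (Python with scikit-image and scikit-learn; the filter-bank parameters, the mean-response patch descriptor, and the per-image codebook are simplifying assumptions, since in practice the codebook would be learned over a whole corpus):

```python
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans

def bow_texture_feature(image, patch=16, n_words=64):
    """Filter with a small Gabor bank, split the responses into
    non-overlapping patches, and build a bag-of-words histogram."""
    responses = []
    for freq in (0.1, 0.2, 0.3):
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, _ = gabor(image, frequency=freq, theta=theta)
            responses.append(real)
    stack = np.stack(responses, axis=-1)          # H x W x n_filters
    h, w, _ = stack.shape
    patches = [
        stack[i:i + patch, j:j + patch].mean(axis=(0, 1))
        for i in range(0, h - patch + 1, patch)
        for j in range(0, w - patch + 1, patch)
    ]                                             # one descriptor per patch
    words = KMeans(n_clusters=n_words, n_init=4).fit_predict(np.array(patches))
    hist, _ = np.histogram(words, bins=np.arange(n_words + 1))
    return hist / hist.sum()                      # normalised BoW signature
```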
An efficient low-dose CT reconstruction technique using partial derivatives based guided image filter
Pathak, Yadunath
Arya, KV
Tiwari, Shailendra
Multimedia Tools and Applications2018Journal Article, cited 0 times
Website
Multi-orientation geometric medical volumes segmentation using 3D multiresolution analysis
AlZu’bi, Shadi
Jararweh, Yaser
Al-Zoubi, Hassan
Elbes, Mohammed
Kanan, Tarek
Gupta, Brij
Multimedia Tools and Applications2018Journal Article, cited 40 times
Website
Lung Phantom
QIN-LungCT-Seg
Medical images have a very significant impact on the diagnosis and treatment of patient ailments and on radiology applications. For many reasons, processing medical images can greatly improve the quality of radiologists' work. While 2D models have been in use in medical applications for decades, widespread utilization of 3D models appeared only in recent years. The work proposed in this paper aims to segment medical volumes under various conditions and in different axial representations. We propose an algorithm for segmenting medical volumes based on multiresolution analysis. Different reconstructed 3D volume versions have been considered to arrive at robust and accurate segmentation results. The proposed algorithm is validated using real medical and phantom data. Processing time, segmentation accuracy on predefined data sets, and radiologists' opinions were the key factors for method validation.
An improved computer based diagnosis system for early detection of abnormal lesions in the brain tissues with using magnetic resonance and computerized tomography images
Ural, Berkan
Özışık, Pınar
Hardalaç, Fırat
Multimedia Tools and Applications2019Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Detection of masses can be a challenging task for radiologists and physicians. Manual tumor diagnosis in the brain is a time-consuming process and can be insufficient for fast and accurate detection and interpretation. This study introduces an improved, interface-supported early diagnosis system intended to increase the speed and accuracy of traditional methods. The first stage of the system collects information from the brain tissue and assesses whether it is normal or abnormal by processing Magnetic Resonance Imaging (MRI) and Computerized Tomography (CT) images. The next stage gathers results from the image(s) after single/multiple, volumetric, and multiscale image analysis. The last stage performs feature extraction for some cases and interprets the abnormal Region of Interest (ROI) via deep learning and conventional artificial intelligence methods. The output of the system is mainly the name of the mass type presented to the network. Results were obtained for a total of 300 images covering High-Grade Glioma (HGG), Low-Grade Glioma (LGG), Glioblastoma (GBM), and Meningioma, as well as ischemic and hemorrhagic stroke. A DICE score of 0.927 was obtained, and normal/abnormal differentiation of brain tissue was also achieved successfully. This system can support doctors' findings, speed up the diagnostic process, and decrease the rate of possible misdiagnosis.
Breast cancer masses classification using deep convolutional neural networks and transfer learning
Hassan, Shayma’a A.
Sayed, Mohammed S.
Abdalla, Mahmoud I.
Rashwan, Mohsen A.
Multimedia Tools and Applications2020Journal Article, cited 0 times
Website
CBIS-DDSM
Deep Learning
Deep convolutional neural network (DCNN)
With recent advances in the deep learning field, the use of deep convolutional neural networks (DCNNs) in biomedical image processing has become very encouraging. This paper presents a new classification model for breast cancer masses based on DCNNs. We investigated the use of transfer learning from AlexNet and GoogleNet pre-trained models to suit this task. We experimentally determined the best DCNN model for accurate classification by comparing different models, which vary according to design and hyper-parameters. The effectiveness of these models was demonstrated using four mammogram databases. All models were trained and tested using a mammographic dataset from the CBIS-DDSM and INbreast databases to select the best AlexNet and GoogleNet models. The performance of the two proposed models was further verified using images from the Egyptian National Cancer Institute (NCI) and the MIAS database. When tested on the CBIS-DDSM and INbreast databases, the proposed AlexNet model achieved an accuracy of 100% for both databases, while the proposed GoogleNet model achieved accuracies of 98.46% and 92.5%, respectively. When tested on NCI images and the MIAS database, AlexNet achieved an accuracy of 97.89% with an AUC of 98.32%, and an accuracy of 98.53% with an AUC of 98.95%, respectively. GoogleNet achieved an accuracy of 91.58% with an AUC of 96.5%, and an accuracy of 88.24% with an AUC of 94.65%, respectively. These results suggest that AlexNet has better performance and more robustness than GoogleNet. To the best of our knowledge, the proposed AlexNet model outperformed the latest methods, achieving the highest accuracy and AUC score and the lowest testing time reported on the CBIS-DDSM, INbreast and MIAS databases.
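A minimal sketch of the kind of transfer-learning setup described (PyTorch/torchvision; the freezing policy and the two-class head are illustrative assumptions, not the paper's exact training recipe):

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained AlexNet and replace the final classifier layer
# with a 2-way head (benign vs. malignant). Freezing the convolutional
# features is one common choice when fine-tuning on small mammogram sets.
model = models.alexnet(weights="IMAGENET1K_V1")
for p in model.features.parameters():
    p.requires_grad = False                 # freeze convolutional features
model.classifier[6] = nn.Linear(4096, 2)    # new benign/malignant head
```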
A novel comparative study for detection of Covid-19 on CT lung images using texture analysis, machine learning, and deep learning methods
Yasar, Huseyin
Ceylan, Murat
Multimedia Tools and Applications2020Journal Article, cited 0 times
LIDC-IDRI
The Covid-19 virus outbreak that emerged in China at the end of 2019 had a huge and devastating effect worldwide. In patients with severe symptoms of the disease, pneumonia develops due to the Covid-19 virus, causing intense involvement and damage in the lungs. Although the disease emerged only a short time ago, many literature studies have revealed these effects on the lungs with the help of lung CT imaging. In this study, 1,396 lung CT images in total (386 Covid-19 and 1,010 non-Covid-19) were subjected to automatic classification. A Convolutional Neural Network (CNN), one of the deep learning methods, was used for automatic classification of lung CT images for early diagnosis of Covid-19 disease. In addition, k-Nearest Neighbors (k-NN) and Support Vector Machine (SVM) classifiers were used to compare the classification success of deep learning with machine learning. Within the scope of the study, a 23-layer CNN architecture was designed and used as a classifier, and training and testing were also performed for the AlexNet and MobileNetV2 CNN architectures. Classification results were also calculated for the case of increasing the number of training images for the 23-layer CNN by 5, 10, and 20 times using data augmentation methods. To reveal the effect of the training/test split on the results, two different procedures, 2-fold and 10-fold cross-validation, were performed. These detailed calculations enabled a comprehensive comparison of the success of texture analysis, machine learning, and deep learning methods in Covid-19 classification from CT images. The highest mean sensitivity, specificity, accuracy, F-1 score, and AUC values obtained were 0.9197, 0.9891, 0.9473, 0.9058, and 0.9888, respectively, for 2-fold cross-validation, and 0.9404, 0.9901, 0.9599, 0.9284, and 0.9903, respectively, for 10-fold cross-validation.
Region of interest based selective coding technique for volumetric MR image sequence
Urvashi S
Sood, Meenakshi
Puthooran, Emjee
Multimedia Tools and Applications2021Journal Article, cited 0 times
RIDER Breast MRI
Advanced image scanning techniques produce high-resolution medical images such as CT and MRI, which in turn need large storage space and bandwidth for transmission over a network. Lossless compression is preferred for medical images to preserve important diagnostic details. However, it is sufficient to maintain high image quality only in the diagnostically important region, namely the Region of Interest (ROI), for an accurate diagnosis. Compressing the non-ROI portion near-losslessly does not affect diagnostic quality but reduces the file size effectively. We propose a compression technique in which prediction is done by the Resolution Independent Gradient Edge Detector (RIGED) to de-correlate the image pixels, and block-based arithmetic coding is used for encoding. The optimal threshold value, optimal q-level, and block-based coding remove inter-pixel, psycho-visual, and coding redundancy from the non-ROI part to achieve high compression, whereas the ROI part is compressed losslessly by removing inter-pixel and coding redundancy only. In this paper, optimal threshold-based predictive lossless compression in the ROI and optimal quantization (q) based near-lossless compression in the rest of the region is proposed. The proposed method is evaluated on volumetric 8-bit and 16-bit standard MR image datasets and validated on real patients' 16-bit-depth MR images collected from local hospitals. The proposed technique showed improvement over the existing techniques JPEG 2000, JPEG-LS, M-CALIC, JP3D, and CALIC by 40.89%, 34.50%, 32.92%, 22.36%, and 17.25%, respectively, in terms of Bits per Pixel (BPP).
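A toy sketch of the ROI-selective idea (NumPy; a uniform quantizer stands in for the paper's RIGED prediction and arithmetic coding, which are not reproduced here):

```python
import numpy as np

def selective_quantize(img, roi_mask, q=8):
    """Keep ROI pixels exact (lossless); uniformly quantize the rest, so the
    reconstruction error in non-ROI areas is bounded by about q / 2
    (near-lossless)."""
    img = img.astype(np.int32)
    non_roi = np.round(img / q).astype(np.int32) * q
    return np.where(roi_mask, img, non_roi)
```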
A lossless DWT-SVD domain watermarking for medical information security
Zermi, N.
Khaldi, A.
Kafi, M. R.
Kahlessenane, F.
Euschi, S.
Multimedia Tools and Applications2021Journal Article, cited 0 times
Website
TCGA-LUAD
Security
Discrete wavelet transform
Support Vector Machine (SVM)
Ultrasound
Computed Tomography (CT)
The goal of this work is to protect, as much as possible, the images exchanged in telemedicine and to avoid any confusion between patients' radiographs; the images are watermarked with the patient's information as well as the acquisition data. Thus, during extraction, the doctor will be able to affirm with certainty that the images belong to the treated patient. The ultimate goal of this work is to integrate the watermark with as little distortion as possible so as to retain the medical information in the image. In this approach, a DWT decomposition is applied to the image, which allows a satisfactory adjustment during insertion. An SVD is then applied to the three subbands LL, LH and HL, which retains the maximum energy of the image in a guaranteed minimum of singular values. A specific combination of the three resulting singular value matrices is then performed for watermark integration. The proposed approach ensures data integrity, patient confidentiality when sharing data, and robustness to several conventional attacks.
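A minimal sketch of DWT-SVD embedding (Python with PyWavelets and NumPy; only the LL subband is shown, and the additive rule with strength alpha is an illustrative assumption, whereas the paper combines LL, LH and HL):

```python
import numpy as np
import pywt

def embed_watermark(image, watermark, alpha=0.05):
    """One-level Haar DWT, then additive embedding in the singular values
    of the LL subband; watermark is a 1-D signal at least as long as the
    number of singular values."""
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), "haar")
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    S_marked = S + alpha * watermark[: S.size]
    LL_marked = (U * S_marked) @ Vt            # U @ diag(S_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")
```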
RD2A: densely connected residual networks using ASPP for brain tumor segmentation
Ahmad, Parvez
Jin, Hai
Qamar, Saqib
Zheng, Ran
Saeed, Adnan
Multimedia Tools and Applications2021Journal Article, cited 2 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BRAIN
Automatic segmentation
Machine Learning
The variations among tumor shapes, sizes, and locations are obstacles to accurate automatic segmentation. U-Net is a simplified approach for automatic segmentation, in which convolutional or dilated convolutional layers are generally used for brain tumor segmentation. However, existing segmentation methods that use large dilation rates degrade the final accuracy, and parameter tuning and the imbalance between different tumor classes are further issues. The proposed model, known as Residual-Dilated Dense Atrous-Spatial Pyramid Pooling (RD2A) 3D U-Net, is found adequate to solve these issues. The RD2A combines residual connections, dilation, and dense ASPP to preserve more contextual information about small tumors at each level of the encoder path. The multi-scale contextual information minimizes the ambiguities among the tissues of the white matter (WM) and gray matter (GM) of the infant brain MRI. The BRATS 2018, BRATS 2019, and iSeg-2019 datasets are used with different evaluation metrics to validate the RD2A. On the BRATS 2018 validation dataset, the proposed model achieves average dice scores of 90.88, 84.46, and 78.18 for the whole tumor, the tumor core, and the enhancing tumor, respectively. We also evaluated on the iSeg-2019 testing set, where the proposed approach achieves average dice scores of 79.804, 77.925, and 80.569 for the cerebrospinal fluid (CSF), the gray matter (GM), and the white matter (WM), respectively. Furthermore, the presented work also obtains mean dice scores of 90.35, 82.34, and 71.93 for the whole tumor, the tumor core, and the enhancing tumor, respectively, on the BRATS 2019 validation dataset. Experimentally, the proposed approach is found to be well suited for exploiting the full contextual information of 3D brain MRI datasets.
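A minimal sketch of a 3D ASPP block of the kind the abstract describes (PyTorch; the dilation rates and the plain parallel structure are illustrative, not the paper's dense ASPP with residual connections):

```python
import torch
import torch.nn as nn

class ASPP3D(nn.Module):
    """Parallel dilated 3D convolutions capture multi-scale context over
    the same feature map; a 1x1x1 convolution fuses the branches."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv3d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv3d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```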
Classification of malignant tumors by a non-sequential recurrent ensemble of deep neural network model
Moitra, D.
Mandal, R. K.
Multimedia Tools and Applications2022Journal Article, cited 0 times
Website
TCGA-BLCA
TCGA-BRCA
Head-Neck-Radiomics-HN1
TCGA-KIRP
TCGA-LIHC
TCGA-THCA
TCGA-UCEC
NSCLC Radiogenomics
BLADDER
LIVER
KIDNEY
BREAST
LUNG
THYROID
UTERUS
Classification
Deep learning
Many significant efforts have so far been made to classify malignant tumors using various machine learning methods. Most studies have considered a particular tumor genre categorized according to its originating organ. This has enriched the domain-specific knowledge of malignant tumor prediction, but we still lack an efficient model that can predict the stages of tumors irrespective of their origin. Thus, there is ample opportunity to study whether a heterogeneous collection of tumor images can be classified according to their respective stages. The present research work prepared a heterogeneous tumor dataset comprising eight different datasets from The Cancer Imaging Archive and classified them according to their respective stages, as suggested by the American Joint Committee on Cancer. The proposed model was used for classifying 717 subjects spanning different imaging modalities and varied Tumor-Node-Metastasis stages. A new non-sequential deep hybrid model ensemble was developed by exploiting branched and re-injected layers, followed by bidirectional recurrent layers, to classify tumor images. Results were compared with standard sequential deep learning models and notable recent studies. The training and validation accuracy, along with the ROC-AUC scores, were found satisfactory relative to existing models. No model or method in the literature has previously classified such a diversified mix of tumor images with such high accuracy. The proposed model may help radiologists by acting as an auxiliary decision support system and speeding up the tumor diagnosis process.
De-noising low dose CT images of the ovarian region using modified discrete wavelet transform
Maria, H. Heartlin
Jossy, A. Maria
Malarvizhi, G.
Jenitta, A.
Multimedia Tools and Applications2022Journal Article, cited 0 times
TCGA-OV-Proteogenomics
Computed Tomography (CT) is a medical imaging technique prominently used in the healthcare domain to obtain a detailed view of the body for the diagnosis of various diseases. This form of medical imaging involves ionizing radiation powerful enough to penetrate the body and create images on the computer screen. Multiple exposures to such high-dose ionizing radiation can raise the chances of cancer and can be dangerous for patients already diagnosed with cancer. To reduce the radiation dosage associated with CT, low-dose CT (LDCT) has recently been used for medical screening. The United States Preventive Services Task Force guidelines indicate that LDCT screening reduces patient mortality and can be considered comparatively safe. However, as the radiation dose is reduced, LDCT images are corrupted with noise and artifacts that affect the visibility of the medical image, which in turn can affect the radiologists' decisions. Therefore, LDCT images must be de-noised before being used for diagnosis to improve image quality and elevate visibility. This work presents one such method, in which a combination of a modified Discrete Wavelet Transform (DWT) and a Goodness-of-Fit shrinkage (GoFShrink) thresholding technique is used to denoise LDCT images for diagnostic purposes. The modified DWT uses a lifting scheme that provides in-place arithmetic operations, flexible factorization of the 2-channel filter banks, and robustness with exact reversible reconstruction, enhancing the performance of the wavelet transform (WT). The PSNR, MSE, SSIM, and SNR values of the de-noised images are calculated, and a comparative analysis is performed against conventional de-noising techniques as well as between the various shrinkage techniques. The simulation results show an increase in PSNR and SNR values compared with conventional methods.
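A minimal sketch of wavelet-shrinkage denoising (Python with PyWavelets; plain soft thresholding with a MAD noise estimate stands in for the paper's modified lifting-based DWT and GoFShrink rule):

```python
import numpy as np
import pywt

def dwt_denoise(img, wavelet="db4", level=2, k=3.0):
    """Decompose, soft-threshold the detail coefficients, reconstruct.
    The noise level sigma is estimated from the finest diagonal band
    with the robust median absolute deviation."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = k * sigma
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```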
Neuro-evolutional based computer aided detection system on computed tomography for the early detection of lung cancer.
Huidrom, R.
Chanu, Y. J.
Singh, K. M.
Multimedia Tools and Applications2022Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
LUNG
regularized discriminant features
cuckoo search algorithm
particle swarm optimization
nodule detection
pulmonary nodules
automatic detection
ct images
chest ct
algorithms
Lung cancer is one of the deadliest diseases, but it can be treated effectively in its early stage. Computer-aided detection (CADe) can detect pulmonary nodules of lung cancer more accurately and faster than manual detection. This paper presents a new CADe system using a neuro-evolutionary approach. The proposed method focuses on the machine learning algorithm, which is a crucial part of the system. The CADe system extracts lung regions from computed tomography images and detects pulmonary nodules within the lung regions. False positive reduction is performed using a new neuro-evolutionary approach consisting of a feed-forward neural network and a combination of the cuckoo search algorithm and particle swarm optimization. The performance of the proposed method is further improved by using regularized discriminant features, achieving 95.8% sensitivity, 95.3% specificity, and 95.5% accuracy.
Achieving enhanced accuracy and strength performance with parallel programming for invariant affine point cloud registration
Khan, Usman
Yasin, Amanullah
Jalal, Ahmed
Abid, Muhammad
Multimedia Tools and Applications2022Journal Article, cited 0 times
RIDER PHANTOM PET-CT
The affine transform of tomographic images maps pixels from image to world coordinates. However, applying the affine transform to each pixel is time-consuming, and extracting the point cloud of interest from the background is another challenge. Benchmark algorithms use approximations and therefore compromise accuracy, hence the need for accurate affine registration for 3D reconstruction. In this work, we present a computationally efficient affine registration of Digital Imaging and Communications in Medicine (DICOM) images. We introduce a novel GPU-accelerated hierarchical clustering algorithm that uses Gaussian thresholding of inter-coordinate distances followed by maximal mutual-information score merging for clutter removal. We also show that 3D models reconstructed using our methodology have a best-case minimum error of 0.18 cm against physical measurements and have higher structural strength. This algorithm should apply to reconstruction, 3D printing, virtual reality, and 3D visualization.
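For context, the standard DICOM pixel-to-world mapping the abstract refers to can be sketched as follows (NumPy; this is the usual single-slice form built from ImagePositionPatient, ImageOrientationPatient, and PixelSpacing, not the paper's GPU implementation):

```python
import numpy as np

def pixel_to_world(ipp, iop, spacing, r, c):
    """Map pixel (row r, column c) to patient coordinates.
    ipp: ImagePositionPatient (x, y, z of the first pixel, in mm).
    iop: ImageOrientationPatient (six direction cosines).
    spacing: PixelSpacing as (row spacing, column spacing) in mm."""
    col_dir = np.asarray(iop[:3], float)   # direction cosines along a row
    row_dir = np.asarray(iop[3:], float)   # direction cosines down a column
    dr, dc = spacing
    return np.asarray(ipp, float) + r * dr * row_dir + c * dc * col_dir
```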
Breast cancer: toward an accurate breast tumor detection model in mammography using transfer learning techniques
Boudouh, Saida Sarra
Bouakkaz, Mustapha
Multimedia Tools and Applications2023Journal Article, cited 0 times
CMMD
Female breast cancer has now surpassed lung cancer as the most common form of cancer globally. Although several methods exist for breast cancer detection and diagnosis, mammography is the most effective and widely used technique. In this study, our purpose is to propose an accurate breast tumor detection model as a first step toward cancer detection. To guarantee diversity and a larger amount of data, we collected samples from three different databases: the Mammographic Image Analysis Society MiniMammographic database (MiniMIAS), the Digital Database for Screening Mammography (DDSM), and the Chinese Mammography Database (CMMD). Several filters were used in the pre-processing phase to extract the Region Of Interest (ROI), remove noise, and enhance the images. Next, transfer learning, data augmentation, and global pooling (GAP/GMP) techniques were used to avoid overfitting and to increase accuracy. To do so, seven pre-trained Convolutional Neural Networks (CNNs) were modified in several trials with different hyper-parameters to determine which are most suitable for our situation and which criteria influenced our results. The selected pre-trained CNNs were Xception, InceptionV3, ResNet101V2, ResNet50V2, AlexNet, VGG16, and VGG19. The obtained results were satisfying, especially for ResNet50V2 followed by InceptionV3, which reached the highest accuracies of 99.9% and 99.54%, respectively. The remaining models achieved good results as well, proving that our approach, from the chosen filters, databases, and pre-trained models to the fine-tuning phase and the global pooling technique, is effective for breast tumor detection. Furthermore, we also determined the most suitable hyper-parameters for each model using our collected dataset.
Accurate segmentation of lung nodule with low contrast boundaries by least weight navigation
Beula, R. Janefer
Wesley, A. Boyed
Multimedia Tools and Applications2023Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
LIDC-IDRI
Computed Tomography (CT)
Otsu's thresholding method
LUNG
Segmentation
Accurate segmentation of lung nodules with low-contrast boundaries in CT images is a challenging task, since the intensities of nodules and non-nodules overlap with each other. This work proposes a lung nodule segmentation scheme based on least weight navigation (LWN) that segments lung nodules accurately despite such low-contrast boundaries. The complete lung nodule segmentation is categorized into three stages: (i) lung segmentation, (ii) coarse segmentation of the nodule, and (iii) fine segmentation of the nodule. Lung segmentation aims to eliminate the background other than the lung, whereas coarse segmentation eliminates the lung, leaving the nodules. Lung segmentation and coarse segmentation can be achieved using traditional algorithms, namely dilation, erosion, and Otsu's thresholding. The proposed work focuses on fine segmentation, where the boundaries are accurately detected by the LWN algorithm. The LWN algorithm estimates the edge points, and navigation is then performed based on the least weight until final termination is reached, which results in accurate segmentation. Experimental validation was done on the LIDC and Cancer Imaging datasets with three different nodule types: juxta-vascular, juxta-pleural, and solitary. Evaluation used the metrics dice similarity coefficient (DSC), sensitivity (SEN), positive prediction value (PPV), Hausdorff distance (HD), and probability Rand index (PRI). The proposed approach provides a DSC, SEN, and PPV of 84.27%, 89.92%, and 80.12%, respectively. The results reveal that the proposed work outperforms traditional lung nodule segmentation algorithms.
A prediction error based reversible data hiding scheme in encrypted image using block marking and cover image pre-processing
Panchikkil, Shaiju
Manikandan, V. M.
Multimedia Tools and Applications2023Journal Article, cited 0 times
Website
TCGA-LUAD
Algorithm Development
Reversibility
Security
A drastic change in communication is happening with digitization, and technological advancements will escalate its pace further. Human health care systems have improved with technology, remodeling the traditional way of treatment. There was a sharp increase in telehealth and e-health care services during the coronavirus disease 2019 (COVID-19) pandemic. These implications make reversible data hiding (RDH) a hot research topic, especially for medical image transmission. Recovering the transmitted medical image (MI) at the receiver side is challenging, as an incorrect MI can lead to a wrong diagnosis. Hence, in this paper, we propose an MSB prediction-error-based RDH scheme in encrypted images with high embedding capacity, which recovers the original image with a peak signal-to-noise ratio (PSNR) of ∞ dB and a structural similarity index (SSIM) value of 1. We scan the MI from the first pixel in the top-left corner using the snake scan approach in dual modes: i) performing a rightward direction scan, and ii) performing a downward direction scan, to identify the best embedding rate for an image. Building on the prediction-error strategy, multiple MSBs are utilized for embedding the encrypted PHR data. Experimental studies on test images show a high embedding rate, with more than 3 bpp for 16-bit high-quality DICOM images and more than 1 bpp for most natural images. The outcomes are promising compared with other similar state-of-the-art RDH methods.
A novel deep learning-based technique for detecting prostate cancer in MRI images
Singh, Sanjay Kumar
Sinha, Amit
Singh, Harikesh
Mahanti, Aniket
Patel, Abhishek
Mahajan, Shubham
Pandit, Amit Kant
Varadarajan, Vijayakumar
Multimedia Tools and Applications2023Journal Article, cited 0 times
Website
PROSTATEx-2 2017 challenge
SPIE-AAPM PROSTATEx Challenge
PROSTATEx
Computer Aided Detection (CADe)
Algorithm Development
Challenge
Magnetic Resonance Imaging (MRI)
TensorFlow
Deep Learning
Prostate-specific antigen (PSA)
In the Western world, prostate cancer is a major cause of death in males. Magnetic Resonance Imaging (MRI) is widely used for the detection of prostate cancer, which makes it an open area of research. The proposed method uses a deep learning framework for the detection of prostate cancer based on Gleason grading of histological images. A 3D convolutional neural network is used to observe the affected region and predict it with the help of epithelial and Gleason grading networks. The proposed model achieves state-of-the-art performance in detecting epithelium and the Gleason score simultaneously. Performance was measured over all MRI slices and volumes in the test fold, with prostate cancer segmented on prostate MRI images collected with an endorectal coil. Experimentally, the proposed deep learning approach achieved an overall specificity of 85%, an accuracy of 87%, and a sensitivity of 89% at the patient level on the SPIE-AAPM-NCI PROSTATEx challenge dataset.
A quantitative analysis of imaging features in lung CT images using the RW-T hybrid segmentation model
Adiraju, RamaVasantha
Elias, Susan
Multimedia Tools and Applications2023Journal Article, cited 0 times
Website
LungCT-Diagnosis
Segmentation
LUNG
Automatic Segmentation
Ground truth
Radiomic features
Lung cancer is the leading cause of cancer death worldwide, and a lung nodule is its most common sign. The analysis of lung cancer relies heavily on the segmentation of nodules, which aids optimal treatment planning. However, because there are several types of lung nodules, accurate segmentation remains challenging. We propose an RW-T hybrid approach capable of segmenting all types of nodules, primarily externally attached nodules (juxta-pleural and juxta-vascular), and estimate the effect of nodule segmentation techniques on quantitative Computed Tomography (CT) imaging features in lung adenocarcinoma. On 301 lung CT images from 40 patients with lung adenocarcinoma from the LungCT-Diagnosis dataset, publicly available in The Cancer Imaging Archive (TCIA), we used a random-walk strategy and a thresholding method to implement nodule segmentation. We extracted two quantitative CT features from the segmented nodules using morphological techniques: convexity and entropy scores. The segmented nodules resulting from the proposed method are compared with the single-click ensemble segmentation method and validated against ground-truth segmented nodules. Our proposed segmentation approach had a high level of agreement with ground-truth delineations, with a dice similarity coefficient of 0.7884, compared with single-click ensemble segmentation at a dice similarity metric of 0.6407.
STRAMPN: Histopathological image dataset for ovarian cancer detection incorporating AI-based methods
Singh, Samridhi
Maurya, Malti Kumari
Singh, Nagendra Pratap
Multimedia Tools and Applications2023Journal Article, cited 0 times
Ovarian Bevacizumab Response
Ovarian cancer, characterized by uncontrolled cell growth in the ovaries, poses a significant threat to women’s reproductive health. Often referred to as the “silent killer,” it is notorious for its elusive nature, as symptoms do not manifest until the disease has advanced to critical stages. Recognizing the urgent need for early detection, this research paper aimed to enhance the identification of ovarian cancer during its initial phases. To bolster the dataset and improve the chances of accurate classification, a comprehensive approach was undertaken. Leveraging available online images, an extensive pre-processing and data augmentation methodology was employed to enrich the dataset. By expanding the dataset size and ensuring its diversity, the research sought to capture a broader range of cancerous manifestations and mitigate potential biases. Utilizing MATLAB, a suite of six state-of-the-art classifiers were employed to categorize the augmented images. To assess the efficacy of the classifiers, a holdout method was adopted for cross-validation. Remarkably, the results showcased an exceptional accuracy rate of 99%, underscoring the effectiveness of the methodology in detecting ovarian cancer at its incipient stages. The implications of this research are far-reaching, as the early identification of ovarian cancer holds immense potential for improved prognosis and treatment outcomes. By shedding light on the significance of expanding and diversifying datasets and leveraging advanced classification techniques, this study contributes to the growing body of knowledge aimed at combating ovarian cancer and underscores the importance of early intervention in reducing mortality rates associated with this insidious disease.
Visual attention condenser model for multiple disease detection from heterogeneous medical image modalities
Kotei, Evans
Thirunavukarasu, Ramkumar
Multimedia Tools and Applications2023Journal Article, cited 0 times
CBIS-DDSM
BREAST
Computer Aided Detection (CADe)
Algorithm Development
The World Health Organization (WHO) has identified breast cancer and tuberculosis (TB) as major global health issues. While breast cancer is a top killer of women, TB is an infectious disease caused by a single bacterium with a high mortality rate. Since both TB and breast cancer are curable, early screening ensures treatment. Medical imaging modalities, such as chest X-ray radiography and ultrasound, are widely used for diagnosing TB and breast cancer. Artificial intelligence (AI) techniques are applied to supplement the screening process for effective and early treatment, given the global shortage of radiologists and oncologists. These techniques fast-track the screening process, leading to early detection and treatment. Deep learning (DL) is the most used technique and produces outstanding results. Despite the success of DL models in the automatic detection of TB and breast cancer, the suggested models are task-specific, meaning they are disease-oriented. Moreover, the complexity and weight of DL applications make it difficult to run the models on edge devices. Motivated by this, a Multi-Disease Visual Attention Condenser Network (MD-VACNet) is proposed for multiple disease identification from different medical image modalities. The network architecture was designed automatically through machine-driven design exploration with generative synthesis. The proposed MD-VACNet is a lightweight, stand-alone visual recognition deep neural network based on visual attention condensers (VACs) with a self-attention mechanism, designed to run on edge devices. In the experiments, TB was identified from chest X-ray images and breast cancer from ultrasound images. The suggested model achieved a 98.99% accuracy score, a 99.85% sensitivity score, and a 98.20% specificity score on the X-ray radiographs for TB diagnosis. The model also produced cutting-edge performance on breast cancer classification into benign and malignant, with accuracy, sensitivity, and specificity scores of 98.47%, 98.42%, and 98.31%, respectively. Regarding architectural complexity, MD-VACNet is simple and lightweight enough for edge-device implementation.
Hybrid optimized MRF based lung lobe segmentation and lung cancer classification using Shufflenet
B, Spoorthi
Mahesh, Shanthi
Multimedia Tools and Applications2023Journal Article, cited 1 times
Website
LIDC-IDRI
Radiomics
Lung cancer is a harmful cancer type that originates in the lungs. In this research, lung lobe segmentation is carried out using a Markov Random Field (MRF)-based Artificial Hummingbird Cuckoo algorithm (AHCA). The AHCA algorithm is modelled by combining the benefits of the Artificial Hummingbird algorithm (AHA) and the Cuckoo Search (CS) algorithm. Lung cancer classification is done with ShuffleNet, trained by the Artificial Hummingbird Firefly optimization algorithm (AHFO), which is the integration of AHA and the Firefly algorithm (FA). Two algorithms are thus devised, one each for segmentation and classification, and in both the AHA algorithm is used for updating the location. The AHA algorithm has three phases, namely foraging, guided foraging, and migration foraging, of which the guided foraging stage is selected to update the location for both segmentation and classification. The developed AHFO-based ShuffleNet scheme attained superior performance, with a testing accuracy of 0.9071, sensitivity of 0.9137, and specificity of 0.9039. The improvement of the proposed method in testing accuracy is 6.615%, 3.197%, 2.756%, and 1.764% over the existing methods. In future work, performance could be boosted by advanced schemes for identifying the grade of disease.
Empirical evaluation of filter pruning methods for acceleration of convolutional neural network
Kumar, Dheeraj
Mehta, Mayuri A.
Joshi, Vivek C.
Oza, Rachana S.
Kotecha, Ketan
Lin, Jerry Chun-Wei
Multimedia Tools and Applications2023Journal Article, cited 0 times
C_NMC_2019
Classification
Deep convolutional neural network (DCNN)
Algorithm Development
Histopathology imaging features
Training and inference of deep convolutional neural networks are usually slow due to the depth of the network and the number of parameters in the network. Although high-performance processors usually accelerate the training of these networks, their use on resource-constrained devices is still limited. Several compression-based acceleration methods have been presented to optimize the performance of neural networks. However, their use and adaptation are still limited due to their adverse effects on the network structure. Therefore, different filter pruning methods have been proposed to keep the network structure intact. To better address the above limitations, we first propose a detailed classification of model acceleration methods to explain the different ways of enhancing the inference performance of convolutional neural networks. Second, we present a broad classification of filter pruning methods, including a comparison of these methods. Third, we present an empirical evaluation of four filter pruning methods to understand the effects of filter pruning on model accuracy and parameter reduction. Fourth, we perform several experiments with ResNet20, a pre-trained CNN, and with the proposed custom CNN to show the effect of filter pruning on them. ResNet20 is used to address multiclass classification on the CIFAR-10 dataset, and the custom CNN is used to address binary classification on the Leukaemia image classification dataset, which includes low-information medical images. The experimental results show that among the four filter pruning methods, the soft filter pruning method best preserves the accuracy of the original model for both ResNet20 and the custom CNN. In addition, the sampling-based filter pruning method shows the highest reduction of 99.8% in parameters on the custom CNN. The overall results show a reasonable pruning ratio within five training epochs for both the pre-trained CNN and the custom CNN. In addition, our results show that pruning redundant filters significantly reduces the model size and the number of floating point operations.
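A minimal sketch of norm-based filter pruning, one of the families the paper evaluates (PyTorch; zeroing the weakest filters in place, in the spirit of soft filter pruning, is shown rather than any of the paper's exact four methods):

```python
import torch
import torch.nn as nn

def prune_filters_l1(conv: nn.Conv2d, ratio=0.3):
    """Rank the output filters of a conv layer by L1 norm and zero the
    weakest fraction. Soft pruning keeps them in place, so they may
    recover during further training."""
    with torch.no_grad():
        norms = conv.weight.abs().sum(dim=(1, 2, 3))   # one L1 norm per filter
        n_prune = int(ratio * norms.numel())
        idx = torch.argsort(norms)[:n_prune]           # weakest filters
        conv.weight[idx] = 0.0
        if conv.bias is not None:
            conv.bias[idx] = 0.0
```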
Ensemble coupled convolution network for three-class brain tumor grade classification
Isunuri, Bala Venkateswarlu
Kakarla, Jagadeesh
Multimedia Tools and Applications2023Journal Article, cited 2 times
Website
REMBRANDT
Convolutional Neural Network (CNN)
Transfer learning
Feature Extraction
Classification
Brain tumor grade classification is one of the prevalent tasks in brain tumor image classification. Existing models have employed transfer learning and are unable to preserve semantic features; moreover, their results are reported on small datasets with pre-trained models. Thus, there is a need for an optimized model that exhibits superior performance on larger datasets. We propose an EfficientNet and coupled convolution network for the grade classification of brain magnetic resonance images. Feature extraction is performed using a pre-trained EfficientNetB0; a coupled convolution network is then proposed for feature enhancement, and the enhanced features are finally classified using a fully connected dense network. We utilize global average pooling and dropout layers to avoid model overfitting. We evaluated the proposed model on the REMBRANDT dataset and achieved 96.95% accuracy. The proposed model outperforms existing pre-trained models and state-of-the-art models in vital metrics.
Novel convolutional neural network architecture for improved pulmonary nodule classification on computed tomography
Wang, Yi
Zhang, Hao
Chae, Kum Ju
Choi, Younhee
Jin, Gong Yong
Ko, Seok-Bum
Multidimensional Systems and Signal Processing2020Journal Article, cited 0 times
Website
Computed tomography (CT) is widely used to locate pulmonary nodules for preliminary diagnosis of lung cancer. However, due to high visual similarities between malignant (cancer) and benign (non-cancer) nodules, distinguishing malignant from benign nodules is not an easy task for a thoracic radiologist. In this paper, a novel convolutional neural network (ConvNet) architecture is proposed to classify pulmonary nodules as either benign or malignant. Due to the high variance of nodule characteristics in CT scans, such as size and shape, a multi-path, multi-scale architecture is proposed and applied in the proposed ConvNet to improve the classification performance. The multi-scale method utilizes filters with different sizes to more effectively extract nodule features from local regions, and the multi-path architecture combines features extracted from different ConvNet layers, thereby enhancing the nodule features with respect to global regions. The proposed ConvNet is trained and evaluated on the LUNGx Challenge database, and achieves a sensitivity of 0.887 and a specificity of 0.924 with an area under the curve (AUC) of 0.948. The proposed ConvNet achieves a 14% AUC improvement compared with the state-of-the-art unsupervised learning approach, and also outperforms other state-of-the-art ConvNets explicitly designed for pulmonary nodule classification. For clinical usage, the proposed ConvNet could potentially assist radiologists in making diagnostic decisions in CT screening.
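A minimal sketch of the multi-scale idea (PyTorch; the kernel sizes and channel counts are placeholders, and the multi-path combination of different layers is omitted):

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel convolutions with different kernel sizes over the same
    input, concatenated channel-wise, so features from local regions of
    several extents are combined."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)
        )

    def forward(self, x):
        return torch.cat([p(x) for p in self.paths], dim=1)
```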
Three-dimensional steerable discrete cosine transform with application to 3D image compression
Lima, Verusca S.
Madeiro, Francisco
Lima, Juliano B.
Multidimensional Systems and Signal Processing2020Journal Article, cited 0 times
Website
Algorithm Development
RIDER NEURO MRI
Mouse-Mammary
QIN Breast
PROSTATEx
TCGA-CESC
Image compression
This work introduces the three-dimensional steerable discrete cosine transform (3D-SDCT), which is obtained from the relationship between the discrete cosine transform (DCT) and the graph Fourier transform of a signal on a path graph. It employs the fact that the basis vectors of the 3D-DCT constitute a possible eigenbasis for the Laplacian of the product of such graphs. The proposed transform employs a rotated version of the 3D-DCT basis. We then evaluate the applicability of the 3D-SDCT to 3D medical image compression, considering the case where we have only one pair of rotation angles per block, rotating all the 3D-DCT basis vectors by the same pair. The obtained results show that the 3D-SDCT can be efficiently used in the referred application scenario and that it outperforms the classical 3D-DCT.
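For reference, the separable 3D-DCT (DCT-II) basis whose vectors the 3D-SDCT rotates has the standard form below (our notation, not the paper's; N is the block side length):

```latex
% alpha(k) = sqrt(1/N) for k = 0, and sqrt(2/N) otherwise:
\phi_{k_1,k_2,k_3}(n_1,n_2,n_3)
  = \prod_{i=1}^{3} \alpha(k_i)\,
    \cos\!\left[ \frac{\pi\,(2 n_i + 1)\,k_i}{2N} \right],
  \qquad 0 \le n_i, k_i \le N-1
```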
Computer-extracted MR imaging features are associated with survival in glioblastoma patients
Mazurowski, Maciej A
Zhang, Jing
Peters, Katherine B
Hobbs, Hasan
Journal of Neuro-Oncology2014Journal Article, cited 33 times
Website
Segmentation
Cox regression
MRI
Automatic survival prognosis in glioblastoma (GBM) could result in improved treatment planning for the patient. The purpose of this research is to investigate the association of survival in GBM patients with tumor features in pre-operative magnetic resonance (MR) images assessed using a fully automatic computer algorithm. MR imaging data for 68 patients from two US institutions were used in this study. The images were obtained from the Cancer Imaging Archive. A fully automatic computer vision algorithm was applied to segment the images and extract eight imaging features from the MRI studies. The features included tumor side, proportion of enhancing tumor, proportion of necrosis, T1/FLAIR ratio, major axis length, minor axis length, tumor volume, and thickness of enhancing margin. We constructed a multivariate Cox proportional hazards regression model and used a likelihood ratio test to establish whether the imaging features are prognostic of survival. We also evaluated the individual prognostic value of each feature through multivariate analysis using the multivariate Cox model and univariate analysis using univariate Cox models for each feature. We found that the automatically extracted imaging features were predictive of survival (p = 0.031). Multivariate analysis of individual features showed that two individual features were predictive of survival: proportion of enhancing tumor (p = 0.013), and major axis length (p = 0.026). Univariate analysis indicated the same two features as significant (p = 0.021, and p = 0.017 respectively). We conclude that computer-extracted MR imaging features can be used for survival prognosis in GBM patients.
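A minimal sketch of the kind of analysis described (Python with lifelines and pandas; the file and column names are hypothetical):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical frame: one row per patient, the eight imaging features plus
# survival time in days and an event indicator (1 = death observed).
df = pd.read_csv("gbm_imaging_features.csv")     # hypothetical file name

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_days", event_col="event")
print(cph.summary)                               # per-feature hazard ratios
print(cph.log_likelihood_ratio_test())           # overall model significance
```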
Integrative analysis of diffusion-weighted MRI and genomic data to inform treatment of glioblastoma
Jajamovich, Guido H
Valiathan, Chandni R
Cristescu, Razvan
Somayajula, Sangeetha
Journal of Neuro-Oncology2016Journal Article, cited 4 times
Website
TCGA-GBM
Radiogenomics
Classification
Gene expression profiling from glioblastoma (GBM) patients enables characterization of cancer into subtypes that can be predictive of response to therapy. An integrative analysis of imaging and gene expression data can potentially be used to obtain novel biomarkers that are closely associated with the genetic subtype and gene signatures and thus provide a noninvasive approach to stratify GBM patients. In this retrospective study, we analyzed the expression of 12,042 genes for 558 patients from The Cancer Genome Atlas (TCGA). Among these patients, 50 patients had magnetic resonance imaging (MRI) studies including diffusion weighted (DW) MRI in The Cancer Imaging Archive (TCIA). We identified the contrast enhancing region of the tumors using the pre- and post-contrast T1-weighted MRI images and computed the apparent diffusion coefficient (ADC) histograms from the DW-MRI images. Using the gene expression data, we classified patients into four molecular subtypes, determined the number and composition of gene modules using the gap statistic, and computed gene signature scores. We used logistic regression to find significant predictors of GBM subtypes. We compared the predictors for different subtypes using Mann-Whitney U tests. We assessed detection power using area under the receiver operating characteristic (ROC) analysis. We computed Spearman correlations to determine the associations between ADC and each of the gene signatures. We performed gene enrichment analysis using Ingenuity Pathway Analysis (IPA). We adjusted all p values using the Benjamini and Hochberg method. The mean ADC was a significant predictor for the neural subtype. Neural tumors had a significantly lower mean ADC compared to non-neural tumors ([Formula: see text]), with mean ADC of [Formula: see text] and [Formula: see text] for neural and non-neural tumors, respectively. Mean ADC showed an area under the ROC of 0.75 for detecting neural tumors. We found eight gene modules in the GBM cohort. The mean ADC was significantly correlated with the gene signature related to dendritic cell maturation ([Formula: see text], [Formula: see text]). Mean ADC could be used as a biomarker of a gene signature associated with dendritic cell maturation and to assist in identifying patients with neural GBMs, known to be resistant to aggressive standard of care.
Algorithmic three-dimensional analysis of tumor shape in MRI improves prognosis of survival in glioblastoma: a multi-institutional study
Czarnek, Nicholas
Clark, Kal
Peters, Katherine B
Mazurowski, Maciej A
Journal of Neuro-Oncology2017Journal Article, cited 15 times
Website
TCGA-GBM
Radiomics
BRAIN
Glioblastoma Multiforme (GBM)
In this retrospective, IRB-exempt study, we analyzed data from 68 patients diagnosed with glioblastoma (GBM) in two institutions and investigated the relationship between tumor shape, quantified using algorithmic analysis of magnetic resonance images, and survival. Each patient's Fluid Attenuated Inversion Recovery (FLAIR) abnormality and enhancing tumor were manually delineated, and tumor shape was analyzed by automatic computer algorithms. Five features were automatically extracted from the images to quantify the extent of irregularity in tumor shape in two and three dimensions. Univariate Cox proportional hazard regression analysis was performed to determine how prognostic each feature was of survival. Kaplan-Meier analysis was performed to illustrate the prognostic value of each feature. To determine whether the proposed quantitative shape features have additional prognostic value compared with standard clinical features, we controlled for tumor volume, patient age, and Karnofsky Performance Score (KPS). The FLAIR-based bounding ellipsoid volume ratio (BEVR), a 3D complexity measure, was strongly prognostic of survival, with a hazard ratio of 0.36 (95% CI 0.20-0.65), and remained significant in regression analysis after controlling for other clinical factors (P = 0.0061). Three enhancing-tumor based shape features were prognostic of survival independently of clinical factors: BEVR (P = 0.0008), margin fluctuation (P = 0.0013), and angular standard deviation (P = 0.0078). Algorithmically assessed tumor shape is statistically significantly prognostic of survival for patients with GBM independently of patient age, KPS, and tumor volume. This shows promise for extending the utility of MR imaging in treatment of GBM patients.
Radiogenomics of lower-grade glioma: algorithmically-assessed tumor shape is associated with tumor genomic subtypes and patient outcomes in a multi-institutional study with The Cancer Genome Atlas data
Mazurowski, Maciej A
Clark, Kal
Czarnek, Nicholas M
Shamsesfandabadi, Parisa
Peters, Katherine B
Saha, Ashirbani
Journal of Neuro-Oncology2017Journal Article, cited 8 times
Website
TCGA-LGG
Radiogenomics
Imaging features
Recent studies identified distinct genomic subtypes of lower-grade gliomas that could potentially be used to guide patient treatment. This study aims to determine whether there is an association between genomics of lower-grade glioma tumors and patient outcomes using algorithmic measurements of tumor shape in magnetic resonance imaging (MRI). We analyzed preoperative imaging and genomic subtype data from 110 patients with lower-grade gliomas (WHO grade II and III) from The Cancer Genome Atlas. Computer algorithms were applied to analyze the imaging data and provided five quantitative measurements of tumor shape in two and three dimensions. Genomic data for the analyzed cohort of patients consisted of previously identified genomic clusters based on IDH mutation and 1p/19q co-deletion, DNA methylation, gene expression, DNA copy number, and microRNA expression. Patient outcomes were quantified by overall survival. We found that there is a strong association between angular standard deviation (ASD), which measures irregularity of the tumor boundary, and the IDH-1p/19q subtype (p < 0.0017), RNASeq cluster (p < 0.0002), DNA copy number cluster (p < 0.001), and the cluster of clusters (p < 0.0002). The RNASeq cluster was also associated with bounding ellipsoid volume ratio (p < 0.0005). Tumors in the IDH wild type cluster and R2 RNASeq cluster which are associated with much poorer outcomes generally had higher ASD reflecting more irregular shape. ASD also showed association with patient overall survival (p = 0.006). Shape features in MRI were strongly associated with genomic subtypes and patient outcomes in lower-grade glioma.
Machine learning: a useful radiological adjunct in determination of a newly diagnosed glioma’s grade and IDH status
De Looze, Céline
Beausang, Alan
Cryan, Jane
Loftus, Teresa
Buckley, Patrick G
Farrell, Michael
Looby, Seamus
Reilly, Richard
Brett, Francesca
Kearney, Hugh
Journal of Neuro-Oncology2018Journal Article, cited 0 times
REMBRANDT
glioma
machine learning
Radiographic assessment of contrast enhancement and T2/FLAIR mismatch sign in lower grade gliomas: correlation with molecular groups
Juratli, Tareq A
Tummala, Shilpa S
Riedl, Angelika
Daubner, Dirk
Hennig, Silke
Penson, Tristan
Zolal, Amir
Thiede, Christian
Schackert, Gabriele
Krex, Dietmar
Journal of Neuro-Oncology2018Journal Article, cited 0 times
Website
TCGA-LGG
IDH mutation
MRI
Radiogenomics
1p/19q co-deletion
Machine learning reveals multimodal MRI patterns predictive of isocitrate dehydrogenase and 1p/19q status in diffuse low- and high-grade gliomas
Zhou, H.
Chang, K.
Bai, H. X.
Xiao, B.
Su, C.
Bi, W. L.
Zhang, P. J.
Senders, J. T.
Vallieres, M.
Kavouridis, V. K.
Boaro, A.
Arnaout, O.
Yang, L.
Huang, R. Y.
Journal of Neuro-Oncology2019Journal Article, cited 0 times
Website
TCGA-LGG
1p/19q codeletion
Magnetic Resonance Imaging (MRI)
Random forest
machine learning
PURPOSE: Isocitrate dehydrogenase (IDH) and 1p19q codeletion status are important in providing prognostic information as well as prediction of treatment response in gliomas. Accurate determination of the IDH mutation status and 1p19q co-deletion prior to surgery may complement invasive tissue sampling and guide treatment decisions. METHODS: Preoperative MRIs of 538 glioma patients from three institutions were used as a training cohort. Histogram, shape, and texture features were extracted from preoperative MRIs of T1 contrast-enhanced and T2-FLAIR sequences. The extracted features were then integrated with age using a random forest algorithm to generate a model predictive of IDH mutation status and 1p19q codeletion. The model was then validated using MRIs from glioma patients in The Cancer Imaging Archive. RESULTS: Our model predictive of IDH achieved an area under the receiver operating characteristic curve (AUC) of 0.921 in the training cohort and 0.919 in the validation cohort. Age offered the highest predictive value, followed by shape features. Based on the top 15 features, the AUC was 0.917 and 0.916 for the training and validation cohorts, respectively. The overall accuracy for 3-group prediction (IDH-wild type, IDH-mutant and 1p19q co-deletion, IDH-mutant and 1p19q non-codeletion) was 78.2% (155 correctly predicted out of 198). CONCLUSION: Using machine-learning algorithms, high accuracy was achieved in the prediction of IDH genotype in gliomas and moderate accuracy in a three-group prediction including IDH genotype and 1p19q codeletion.
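A minimal sketch of the random-forest step described above, assuming scikit-learn and hypothetical feature/label files; it illustrates the AUC evaluation and top-15 feature ranking, not the authors' pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical matrix of radiomic features (histogram, shape, texture)
# with patient age appended as an extra column, and binary IDH labels.
X = np.load("radiomic_features_with_age.npy")  # assumed file
y = np.load("idh_status.npy")                  # assumed file

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)

# AUC on the held-out set, the metric reported in the study.
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"IDH prediction AUC: {auc:.3f}")

# Rank features by importance, e.g., to select a top-15 subset.
top15 = np.argsort(rf.feature_importances_)[::-1][:15]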
Glioblastomas located in proximity to the subventricular zone (SVZ) exhibited enrichment of gene expression profiles associated with the cancer stem cell state
Steed, T. C.
Treiber, J. M.
Taha, B.
Engin, H. B.
Carter, H.
Patel, K. S.
Dale, A. M.
Carter, B. S.
Chen, C. C.
J Neurooncol2020Journal Article, cited 2 times
Website
BRAIN
Glioblastoma Multiforme (GBM)
Magnetic Resonance Imaging (MRI)
Radiogenomics
INTRODUCTION: Conflicting results have been reported in the association between glioblastoma proximity to the subventricular zone (SVZ) and enrichment of cancer stem cell properties. Here, we examined this hypothesis using magnetic resonance (MR) images derived from 217 The Cancer Imaging Archive (TCIA) glioblastoma subjects. METHODS: Pre-operative MR images were segmented automatically into contrast-enhancing (CE) tumor volumes using Iterative Probabilistic Voxel Labeling (IPVL). Distances were calculated from the centroid of CE tumor volumes to the SVZ and correlated with gene expression profiles of the corresponding glioblastomas. Correlative analyses were performed between SVZ distance, gene expression patterns, and clinical survival. RESULTS: Glioblastomas located in proximity to the SVZ showed increased mRNA expression patterns associated with the cancer stem-cell state, including CD133 (P = 0.006). Consistent with previous observations suggesting that glioblastoma stem cells exhibit increased DNA repair capacity, glioblastomas in proximity to the SVZ also showed increased expression of DNA repair genes, including MGMT (P = 0.018). Reflecting this enhanced DNA repair capacity, the genomes of glioblastomas in SVZ proximity harbored fewer single nucleotide polymorphisms relative to those located distant to the SVZ (P = 0.003). Concordant with the notion that glioblastoma stem cells are more aggressive and refractory to therapy, patients with glioblastoma in proximity to the SVZ exhibited poorer progression-free and overall survival (P < 0.01). CONCLUSION: An unbiased analysis of TCIA suggests that glioblastomas located in proximity to the SVZ exhibit mRNA expression profiles associated with stem cell properties and increased DNA repair capacity, and are associated with poor clinical survival.
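The distance measurement at the heart of this analysis can be sketched in a few lines of NumPy; the function below computes the distance from a tumor-mask centroid to a reference point, with the mask, landmark coordinates, and voxel size all hypothetical stand-ins for the study's IPVL-derived volumes.

import numpy as np

def centroid_to_point_mm(mask, point_mm, voxel_size_mm):
    """Distance from the centroid of a binary tumor mask to a reference
    point (e.g., a landmark on the SVZ), both expressed in millimeters."""
    idx = np.argwhere(mask)                      # voxel coordinates of CE tumor
    centroid_mm = idx.mean(axis=0) * np.asarray(voxel_size_mm)
    return float(np.linalg.norm(centroid_mm - np.asarray(point_mm)))

# Hypothetical usage: 1 mm isotropic voxels, SVZ landmark at (90, 110, 72) mm.
# mask = <binary 3D array from a CE tumor segmentation>
# print(centroid_to_point_mm(mask, (90, 110, 72), (1.0, 1.0, 1.0)))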
Automated apparent diffusion coefficient analysis for genotype prediction in lower grade glioma: association with the T2-FLAIR mismatch sign
Aliotta, E.
Dutta, S. W.
Feng, X.
Tustison, N. J.
Batchala, P. P.
Schiff, D.
Lopes, M. B.
Jain, R.
Druzgal, T. J.
Mukherjee, S.
Patel, S. H.
J Neurooncol2020Journal Article, cited 0 times
Website
TCGA-LGG
Radiomics
Radiogenomics
BRAIN
PURPOSE: The prognosis of lower grade glioma (LGG) patients depends (in large part) on both isocitrate dehydrogenase (IDH) gene mutation and chromosome 1p/19q codeletion status. IDH-mutant LGG without 1p/19q codeletion (IDHmut-Noncodel) often exhibit a unique imaging appearance that includes high apparent diffusion coefficient (ADC) values not observed in other subtypes. The purpose of this study was to develop an ADC analysis-based approach that can automatically identify IDHmut-Noncodel LGG. METHODS: Whole-tumor ADC metrics, including fractional tumor volume with ADC > 1.5 × 10^-3 mm^2/s (VADC>1.5), were used to identify IDHmut-Noncodel LGG in a cohort of N = 134 patients. Optimal threshold values determined in this dataset were then validated using an external dataset containing N = 93 cases collected from The Cancer Imaging Archive. Classifications were also compared with radiologist-identified T2-FLAIR mismatch sign and evaluated concurrently to identify added value from a combined approach. RESULTS: VADC>1.5 classified IDHmut-Noncodel LGG in the internal cohort with an area under the curve (AUC) of 0.80. An optimal threshold value of 0.35 led to sensitivity/specificity = 0.57/0.93. Classification performance was similar in the validation cohort, with VADC>1.5 ≥ 0.35 achieving sensitivity/specificity = 0.57/0.91 (AUC = 0.81). Across both groups, 37 cases exhibited positive T2-FLAIR mismatch sign, all of which were IDHmut-Noncodel. Of these, 32/37 (86%) also exhibited VADC>1.5 ≥ 0.35, as did 23 additional IDHmut-Noncodel cases which were negative for T2-FLAIR mismatch sign. CONCLUSION: Tumor subregions with high ADC were a robust indicator of IDHmut-Noncodel LGG, with VADC>1.5 achieving > 90% classification specificity in both internal and validation cohorts. VADC>1.5 exhibited strong concordance with the T2-FLAIR mismatch sign and the combination of both parameters improved sensitivity in detecting IDHmut-Noncodel LGG.
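The VADC>1.5 metric lends itself to a compact NumPy sketch; the array names below are hypothetical, with only the 1.5 × 10^-3 mm^2/s threshold and the 0.35 cutoff taken from the abstract.

import numpy as np

def fractional_volume_above(adc_map, tumor_mask, threshold=1.5e-3):
    """Fraction of tumor voxels with ADC above a threshold (mm^2/s),
    i.e., the VADC>1.5 metric described above."""
    tumor_adc = adc_map[tumor_mask > 0]
    return float(np.mean(tumor_adc > threshold))

# Hypothetical usage: classify as IDHmut-Noncodel when VADC>1.5 >= 0.35.
# v = fractional_volume_above(adc, mask)
# predicted_idhmut_noncodel = v >= 0.35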
Non-navigated 2D intraoperative ultrasound: An unsophisticated surgical tool to achieve high standards of care in glioma surgery
Cepeda, S.
Garcia-Garcia, S.
Arrese, I.
Sarabia, R.
J Neurooncol2024Journal Article, cited 0 times
Website
RHUH-GBM
Glioma
Intraoperative imaging
Intraoperative ultrasound
Surgery
PURPOSE: In an era characterized by rapid progression in neurosurgical technologies, traditional tools such as the non-navigated two-dimensional intraoperative ultrasound (nn-2D-IOUS) risk being overshadowed. Against this backdrop, this study endeavors to provide a comprehensive assessment of the clinical efficacy and surgical relevance of nn-2D-IOUS, specifically in the context of glioma resections. METHODS: This retrospective study undertaken at a single center evaluated 99 consecutive, non-selected patients diagnosed with both high-grade and low-grade gliomas. The primary objective was to assess the proficiency of nn-2D-IOUS in generating satisfactory image quality, identifying residual tumor tissue, and its influence on the extent of resection. To validate these results, early postoperative MRI data served as the reference standard. RESULTS: The nn-2D-IOUS exhibited a high level of effectiveness, successfully generating good quality images in 79% of the patients evaluated. With a sensitivity rate of 68% and a perfect specificity of 100%, nn-2D-IOUS unequivocally demonstrated its utility in intraoperative residual tumor detection. Notably, when total tumor removal was the surgical objective, a resection exceeding 95% of the initial tumor volume was achieved in 86% of patients. Additionally, in patients in whom residual tumor was not detected by nn-2D-IOUS, the mean volume of undetected tumor tissue was minimal, averaging 0.29 cm^3. CONCLUSION: Our study supports nn-2D-IOUS's invaluable role in glioma surgery. The results highlight the utility of traditional technologies for enhanced surgical outcomes, even when compared to advanced alternatives. This is particularly relevant for resource-constrained settings and emphasizes optimizing existing tools for efficient patient care. NCT05873946 - 24/05/2023 - Retrospectively registered.
Evaluation of Feature Robustness Against Technical Parameters in CT Radiomics: Verification of Phantom Study with Patient Dataset
Jin, Hyeongmin
Kim, Jong Hyo
Journal of Signal Processing Systems2020Journal Article, cited 1 times
Website
RIDER Lung PET-CT
National Lung Screening Trial (NLST)
Radiomics
PHANTOM
Computed Tomography (CT)
Recent advances in radiomics have shown promising results in prognostic and diagnostic studies with high dimensional imaging feature analysis. However, radiomic features are known to be affected by technical parameters and feature extraction methodology. We evaluate the robustness of CT radiomic features against the technical parameters involved in CT acquisition and feature extraction procedures using a standardized phantom and verify the feature robustness by using patient cases. The ACR phantom was scanned with two tube currents, two reconstruction kernels, and two field-of-view sizes. A total of 47 radiomic features of textures and first-order statistics were extracted on the homogeneous region from all scans. Intrinsic variability was measured to identify unstable features vulnerable to inherent CT noise and texture. A susceptibility index was defined to represent the susceptibility to the variation of a given technical parameter. Eighteen radiomic features were shown to be intrinsically unstable under the reference condition. The features were more susceptible to the reconstruction kernel variation than to other sources of variation. The feature robustness evaluated on the phantom CT correlated with that evaluated on clinical CT scans. We revealed that a number of scan parameters could significantly affect the radiomic features. These characteristics should be considered in a radiomic study when different scan parameters are used in a clinical dataset.
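A generic robustness screen in the spirit of this study can be sketched with pandas; the long-format table and the 10% coefficient-of-variation cutoff are assumptions, and the paper's intrinsic-variability and susceptibility-index definitions should be consulted for the exact formulas.

import pandas as pd

# Hypothetical long-format table: one row per (feature, scan condition)
# with the feature value measured on the homogeneous phantom region.
df = pd.read_csv("phantom_features.csv")  # assumed columns: feature, condition, value

# A simple robustness proxy: coefficient of variation of each feature
# across conditions (a generic stand-in, not the paper's exact index).
cv = (df.groupby("feature")["value"]
        .agg(lambda v: v.std(ddof=1) / abs(v.mean())))
unstable = cv[cv > 0.10].index.tolist()  # assumed 10% cutoff
print(f"{len(unstable)} features flagged as unstable")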
Next-generation radiogenomics sequencing for prediction of EGFR and KRAS mutation status in NSCLC patients using multimodal imaging and machine learning algorithms
Shiri, Isaac
Maleki, Hasan
Hajianfar, Ghasem
Abdollahi, Hamid
Ashrafinia, Saeed
Hatt, Mathieu
Zaidi, Habib
Oveisi, Mehrdad
Rahmim, Arman
Molecular Imaging and Biology2020Journal Article, cited 60 times
Website
NSCLC Radiogenomics
Radiogenomics
Non Small Cell Lung Cancer (NSCLC)
PET
3D-Printed Tumor Phantoms for Assessment of In Vivo Fluorescence Imaging Analysis Methods
LaRochelle, E. P. M.
Streeter, S. S.
Littler, E. A.
Ruiz, A. J.
Mol Imaging Biol2022Journal Article, cited 0 times
Website
Soft-Tissue-Sarcoma
Fluorescence guided surgery
Optical phantom
Standards
Surgical navigation
Fluoroscopy
Contrast enhancement
Model
PURPOSE: Interventional fluorescence imaging is increasingly being utilized to quantify cancer biomarkers in both clinical and preclinical models, yet absolute quantification is complicated by many factors. The use of optical phantoms has been suggested by multiple professional organizations for quantitative performance assessment of fluorescence guidance imaging systems. This concept can be further extended to provide standardized tools to compare and assess image analysis metrics. PROCEDURES: 3D-printed fluorescence phantoms based on solid tumor models were developed with representative bio-mimicking optical properties. Phantoms were produced with discrete tumors embedded with an NIR fluorophore of fixed concentration and either zero or 3% non-specific fluorophore in the surrounding material. These phantoms were first imaged by two fluorescence imaging systems using two methods of image segmentation, and four assessment metrics were calculated to demonstrate variability in the quantitative assessment of system performance. The same analysis techniques were then applied to one tumor model with decreasing tumor fluorophore concentrations. RESULTS: These anatomical phantom models demonstrate the ability to use 3D printing to manufacture anthropomorphic shapes with a wide range of reduced scattering (μs': 0.24-1.06 mm^-1) and absorption (μa: 0.005-0.14 mm^-1) properties. The phantom imaging and analysis highlight variability in the measured sensitivity metrics associated with tumor visualization. CONCLUSIONS: 3D printing techniques provide a platform for demonstrating complex biological models that introduce real-world complexities for quantifying fluorescence image data. Controlled iterative development of these phantom designs can be used as a tool to advance the field and provide context for consensus-building beyond performance assessment of fluorescence imaging platforms, and extend support for standardizing how quantitative metrics are extracted from imaging data and reported in literature.
A CADe system for nodule detection in thoracic CT images based on artificial neural network
Liu, Xinglong
Hou, Fei
Qin, Hong
Hao, Aimin
Science China Information Sciences2017Journal Article, cited 11 times
Website
LIDC-IDRI
Artificial neural network (ANN)
LUNG
Computed Tomography (CT)
computer aided detection (CADe)
Extracted magnetic resonance texture features discriminate between phenotypes and are associated with overall survival in glioblastoma multiforme patients
Chaddad, Ahmad
Tanougast, Camel
Medical & Biological Engineering & Computing2016Journal Article, cited 16 times
Website
Algorithm Development
Radiomics
Glioblastoma Multiforme (GBM)
Image registration
3D Slicer
Classification
GBM is a markedly heterogeneous brain tumor consisting of three main volumetric phenotypes identifiable on magnetic resonance imaging: necrosis (vN), active tumor (vAT), and edema/invasion (vE). The goal of this study is to identify the three glioblastoma multiforme (GBM) phenotypes using a texture-based gray-level co-occurrence matrix (GLCM) approach and determine whether the texture features of phenotypes are related to patient survival. MR imaging data in 40 GBM patients were analyzed. Phenotypes vN, vAT, and vE were segmented in a preprocessing step using 3D Slicer for rigid registration by T1-weighted imaging and corresponding fluid attenuation inversion recovery images. The GBM phenotypes were segmented using 3D Slicer tools. Texture features were extracted from the GLCM of GBM phenotypes. Thereafter, the Kruskal-Wallis test was employed to select the significant features. Robust predictive GBM features were identified and underwent numerous classifier analyses to distinguish phenotypes. Kaplan-Meier analysis was also performed to determine the relationship, if any, between phenotype texture features and survival rate. The simulation results showed that 22 texture features were significant with p value < 0.05. GBM phenotype discrimination based on texture features showed the best accuracy, sensitivity, and specificity of 79.31, 91.67, and 98.75%, respectively. Three texture features derived from active tumor parts, difference entropy, information measure of correlation, and inverse difference, were statistically significant in the prediction of survival, with log-rank p values of 0.001, 0.001, and 0.008, respectively. Among the 22 features examined, three texture features have the ability to predict overall survival for GBM patients, demonstrating the utility of GLCM analyses in both the diagnosis and prognosis of this patient population.
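For readers wanting to reproduce GLCM texture extraction, a minimal sketch with scikit-image follows (version >= 0.19 assumed, where the functions are named graycomatrix/graycoprops); the ROI preparation is hypothetical and the feature list is only a subset of those used in the study.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_u8):
    """A few GLCM texture features for an 8-bit grayscale ROI, in the
    spirit of the GLCM analysis described above (not the authors' code)."""
    glcm = graycomatrix(roi_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {
        "contrast": graycoprops(glcm, "contrast").mean(),
        "homogeneity": graycoprops(glcm, "homogeneity").mean(),
        "correlation": graycoprops(glcm, "correlation").mean(),
        "energy": graycoprops(glcm, "energy").mean(),
    }

# Hypothetical usage on a segmented phenotype region (e.g., active tumor):
# feats = glcm_features((roi_normalized * 255).astype(np.uint8))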
A novel fused convolutional neural network for biomedical image classification
Pang, Shuchao
Du, Anan
Orgun, Mehmet A
Yu, Zhezhou
Medical & Biological Engineering & Computing2018Journal Article, cited 0 times
Website
Algorithm Development
CNN
image classification
Prediction of survival with multi-scale radiomic analysis in glioblastoma patients
Chaddad, Ahmad
Sabri, Siham
Niazi, Tamim
Abdulkarim, Bassam
Medical & Biological Engineering & Computing2018Journal Article, cited 1 times
Website
Radiomics
GBM
We propose multiscale texture features based on a Laplacian-of-Gaussian (LoG) filter to predict progression-free survival (PFS) and overall survival (OS) in patients newly diagnosed with glioblastoma (GBM). Experiments use the extracted features derived from 40 GBM patients with T1-weighted imaging (T1-WI) and fluid-attenuated inversion recovery (FLAIR) images that were segmented manually into areas of active tumor, necrosis, and edema. Multiscale texture features were extracted locally from each of these areas of interest using a LoG filter, and the relation of the features to OS and PFS was investigated using univariate (i.e., Spearman's rank correlation coefficient, log-rank test and Kaplan-Meier estimator) and multivariate analyses (i.e., Random Forest classifier). Three and seven features were statistically correlated with PFS and OS, respectively, with absolute correlation values between 0.32 and 0.36 and p < 0.05. Three features derived from active tumor regions only were associated with OS (p < 0.05) with hazard ratios (HR) of 2.9, 3, and 3.24, respectively. Combined features showed an AUC value of 85.37 and 85.54% for predicting the PFS and OS of GBM patients, respectively, using the random forest (RF) classifier. We presented multiscale texture features to characterize the GBM regions and predict the PFS and OS. The efficiency achievable suggests that this technique can be developed into a GBM MR analysis system suitable for clinical use after a thorough validation involving more patients.
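The LoG filtering step translates directly to SciPy; the sketch below computes multiscale responses with scipy.ndimage.gaussian_laplace, with sigma values and the ROI statistics chosen for illustration rather than taken from the paper.

import numpy as np
from scipy import ndimage

def multiscale_log(image, sigmas=(1.0, 2.0, 4.0)):
    """Laplacian-of-Gaussian responses at several scales; texture
    statistics can then be computed within each segmented region."""
    return {s: ndimage.gaussian_laplace(image.astype(float), sigma=s)
            for s in sigmas}

# Hypothetical usage: mean and standard deviation of the LoG response
# inside an ROI mask, per scale.
# responses = multiscale_log(flair_volume)
# stats = {s: (r[mask > 0].mean(), r[mask > 0].std())
#          for s, r in responses.items()}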
Regression based overall survival prediction of glioblastoma multiforme patients using a single discovery cohort of multi-institutional multi-channel MR images
Sanghani, Parita
Ang, Beng Ti
King, Nicolas Kon Kam
Ren, Hongliang
Med Biol Eng Comput2019Journal Article, cited 0 times
Website
BraTS
Glioblastoma multiforme (GBM) tumors are malignant brain tumors associated with poor overall survival (OS). This study aims to predict the OS of GBM patients (in days) using a regression framework and assess the impact of tumor shape features on OS prediction. Multi-channel MR image derived texture features, tumor shape and volumetric features, and patient age were obtained for 163 GBM patients. In order to assess the impact of tumor shape features on OS prediction, two feature sets, with and without tumor shape features, were created. For the feature set with tumor shape features, the mean prediction error (MPE) was 14.6 days and its 95% confidence interval (CI) was 195.8 days. For the feature set excluding shape features, the MPE was 17.1 days and its 95% CI was observed to be 212.7 days. The coefficient of determination (R2) value obtained for the feature set with shape features was 0.92, while it was 0.90 for the feature set excluding shape features. Although marginal, inclusion of shape features improves OS prediction in GBM patients. The proposed OS prediction method using regression provides good accuracy and overcomes the limitations of GBM OS classification, like choosing data-derived or pre-decided thresholds to define the OS groups.
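A small sketch of the reported evaluation metrics, using scikit-learn and invented toy arrays in place of the study's predictions:

import numpy as np
from sklearn.metrics import r2_score

# Toy arrays standing in for observed and predicted OS in days.
y_true = np.array([310, 452, 128, 690, 255])
y_pred = np.array([290, 470, 150, 640, 300])

mpe = np.mean(y_pred - y_true)   # mean prediction error, in days
r2 = r2_score(y_true, y_pred)    # coefficient of determination
print(f"MPE: {mpe:.1f} days, R^2: {r2:.2f}")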
Design and evaluation of an accurate CNR-guided small region iterative restoration-based tumor segmentation scheme for PET using both simulated and real heterogeneous tumors
Koç, Alpaslan
Güveniş, Albert
Med Biol Eng Comput2020Journal Article, cited 0 times
Website
RIDER PHANTOM PET-CT
Segmentation
Positron Emission Tomography (PET)
Tumor delineation accuracy directly affects the effectiveness of radiotherapy. This study presents a methodology that minimizes potential errors during the automated segmentation of tumors in PET images. Iterative blind deconvolution was implemented in a region of interest encompassing the tumor with the number of iterations determined from contrast-to-noise ratios. The active contour and random forest classification-based segmentation method was evaluated using three distinct image databases that included both synthetic and real heterogeneous tumors. Ground truths about tumor volumes were known precisely. The volumes of the tumors were in the range of 0.49-26.34 cm^3, 0.64-1.52 cm^3, and 40.38-203.84 cm^3, respectively. Widely available software tools, namely MATLAB, MIPAV, and ITK-SNAP, were utilized. When using the active contour method, image restoration reduced mean errors in volume estimation from 95.85 to 3.37%, from 815.63 to 17.45%, and from 32.61 to 6.80% for the three datasets. The accuracy gains were higher using datasets that include smaller tumors, for which PVE is known to be more predominant. Computation time was reduced by a factor of about 10 in the smaller deconvolution region. Contrast-to-noise ratios were improved for all tumors in all data. The presented methodology has the potential to improve delineation accuracy, in particular for smaller tumors, at practically feasible computational times. Graphical abstract: Evaluation of accurate lesion volumes using the CNR-guided and ROI-based restoration method for PET images.
Automated lung cancer diagnosis using three-dimensional convolutional neural networks
Perez, Gustavo
Arbelaez, Pablo
Med Biol Eng Comput2020Journal Article, cited 0 times
Website
LIDC-IDRI
National Lung Screening Trial (NLST)
Computed Tomography (CT)
Computer Aided Diagnosis (CADx)
Deep Learning
LUNG
Lung cancer is the deadliest cancer worldwide. It has been shown that early detection using low-dose computer tomography (LDCT) scans can reduce deaths caused by this disease. We present a general framework for the detection of lung cancer in chest LDCT images. Our method consists of a nodule detector trained on the LIDC-IDRI dataset followed by a cancer predictor trained on the Kaggle DSB 2017 dataset and evaluated on the IEEE International Symposium on Biomedical Imaging (ISBI) 2018 Lung Nodule Malignancy Prediction test set. Our candidate extraction approach effectively produces accurate candidates with a recall of 99.6%. In addition, our false positive reduction stage successfully classifies the candidates and increases precision by a factor of 2000. Our cancer predictor obtained a ROC AUC of 0.913 and was ranked 1st place at the ISBI 2018 Lung Nodule Malignancy Prediction challenge.
Superpixel-based deep convolutional neural networks and active contour model for automatic prostate segmentation on 3D MRI scans
da Silva, Giovanni L F
Diniz, Petterson S
Ferreira, Jonnison L
Franca, Joao V F
Silva, Aristofanes C
de Paiva, Anselmo C
de Cavalcanti, Elton A A
Med Biol Eng Comput2020Journal Article, cited 0 times
Website
Prostate-3T
Deep convolutional neural network (DCNN)
Segmentation
PROSTATE
Automatic and reliable prostate segmentation is an essential prerequisite for assisting diagnosis and treatment, such as guiding biopsy procedures and radiation therapy. Nonetheless, automatic segmentation is challenging due to the lack of clear prostate boundaries, owing to the similar appearance of prostate and surrounding tissues, and due to the wide variation in size and shape among different patients, ascribed to pathological changes or different resolutions of images. In this regard, the state-of-the-art includes methods based on a probabilistic atlas, active contour models, and deep learning techniques. However, these techniques have limitations that need to be addressed, such as requiring MRI scans with the same spatial resolution, initialization of the prostate region with well-defined contours, and manually determined hyperparameters of deep learning techniques, respectively. Therefore, this paper proposes an automatic and novel coarse-to-fine segmentation method for prostate 3D MRI scans. The coarse segmentation step combines local texture and spatial information using the Intrinsic Manifold Simple Linear Iterative Clustering algorithm and a probabilistic atlas in a deep convolutional neural networks model jointly with the particle swarm optimization algorithm to classify prostate and non-prostate tissues. Then, the fine segmentation uses the 3D Chan-Vese active contour model to obtain the final prostate surface. The proposed method has been evaluated on the Prostate 3T and PROMISE12 databases, presenting a Dice similarity coefficient of 84.86%, relative volume difference of 14.53%, sensitivity of 90.73%, specificity of 99.46%, and accuracy of 99.11%. Experimental results demonstrate the high performance potential of the proposed method compared to those previously published.
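The coarse-to-fine idea can be approximated in 2D with scikit-image (>= 0.19 assumed for the argument names below); this is an illustrative stand-in for the paper's manifold SLIC variant and 3D Chan-Vese model, with a hypothetical input file.

from skimage import io
from skimage.segmentation import slic, chan_vese

# Hypothetical 2D MRI slice, intensity-normalized to [0, 1].
slice_2d = io.imread("prostate_slice.png", as_gray=True)  # assumed file

# Coarse step: superpixels group pixels by local intensity, standing in
# for the paper's Intrinsic Manifold SLIC variant.
superpixels = slic(slice_2d, n_segments=300, compactness=0.1,
                   channel_axis=None)

# Fine step: a Chan-Vese active contour refines the boundary (the paper
# uses a 3D Chan-Vese model; this 2D call is illustrative).
refined = chan_vese(slice_2d, mu=0.25, max_num_iter=200)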
An efficient brain tumor image classifier by combining multi-pathway cascaded deep neural network and handcrafted features in MR images
Bal, A.
Banerjee, M.
Chaki, R.
Sharma, P.
Med Biol Eng Comput2021Journal Article, cited 0 times
Website
BraTS-TCGA-LGG
Magnetic Resonance Imaging (MRI)
Neural Networks
Computer
Brain tumor
Segmentation
Deep convolution neural network
Manual segmentation
Radiomic features
Two-pathway CNN
Accurate segmentation and delineation of the sub-tumor regions are very challenging tasks due to the nature of the tumor. Traditionally, convolutional neural networks (CNNs) have succeeded in achieving the most promising performance for brain tumor segmentation; however, handcrafted features remain very important for accurately identifying the tumor's boundary regions. The present work proposes a robust deep learning-based model with three different CNN architectures along with pre-defined handcrafted features for brain tumor segmentation, mainly to find more prominent boundaries of the core and enhanced tumor regions. Generally, an automatic CNN architecture does not use pre-defined handcrafted features because it extracts features automatically. In the present work, several pre-defined handcrafted features are computed from four MRI modalities (T2, FLAIR, T1c, and T1) with the help of additional handcrafted masks according to user interest and fed to the convolutional features (automatic features) to improve the overall performance of the proposed CNN model for tumor segmentation. A multi-pathway CNN is explored in this work along with a single-pathway CNN; it simultaneously extracts both local and global features to identify the accurate sub-regions of the tumor with the help of handcrafted features. The present work uses a cascaded CNN architecture, where the outcome of one CNN is considered as additional input information to the subsequent CNNs. To extract the handcrafted features, a convolutional operation was applied on the four MRI modalities with the help of several pre-defined masks to produce a predefined set of handcrafted features. The present work also investigates the usefulness of intensity normalization and data augmentation in the pre-processing stage in order to handle the difficulties related to the imbalance of tumor labels. The proposed method was experimented on the BraTS 2018 dataset and achieved more promising results than existing (currently published) methods with respect to different metrics such as specificity, sensitivity, and Dice similarity coefficient (DSC) for complete, core, and enhanced tumor regions. Quantitatively, a notable gain is achieved around the boundaries of the sub-tumor regions using the proposed two-pathway CNN along with the handcrafted features.
A real use case of semi-supervised learning for mammogram classification in a local clinic of Costa Rica
Calderon-Ramirez, S.
Murillo-Hernandez, D.
Rojas-Salazar, K.
Elizondo, D.
Yang, S.
Moemeni, A.
Molina-Cabello, M.
Med Biol Eng Comput2022Journal Article, cited 13 times
Website
CBIS-DDSM
Costa Rica
Diagnosis, Computer-Assisted/methods
Humans
Mammography
Reproducibility of Results
Supervised Machine Learning
Breast cancer
Data imbalance
Mammogram
Semi-supervised deep learning
Transfer learning
The implementation of deep learning-based computer-aided diagnosis systems for the classification of mammogram images can help in improving the accuracy, reliability, and cost of diagnosing patients. However, training a deep learning model requires a considerable amount of labelled images, which can be expensive to obtain as time and effort from clinical practitioners are required. To address this, a number of publicly available datasets have been built with data from different hospitals and clinics, which can be used to pre-train the model. However, using models trained on these datasets for later transfer learning and model fine-tuning with images sampled from a different hospital or clinic might result in lower performance. This is due to the distribution mismatch of the datasets, which include different patient populations and image acquisition protocols. In this work, a real-world scenario is evaluated where a novel target dataset sampled from a private Costa Rican clinic is used, with few labels and heavily imbalanced data. The use of two popular and publicly available datasets (INbreast and CBIS-DDSM) as source data, to train and test the models on the novel target dataset, is evaluated. A common approach to further improve the model's performance under such a small labelled target dataset setting is data augmentation. However, often cheaper unlabelled data is available from the target clinic. Therefore, semi-supervised deep learning, which leverages both labelled and unlabelled data, can be used in such conditions. In this work, we evaluate the semi-supervised deep learning approach known as MixMatch, to take advantage of unlabelled data from the target dataset, for whole mammogram image classification. We compare the usage of semi-supervised learning on its own, and combined with transfer learning (from a source mammogram dataset) with data augmentation, as well as against regular supervised learning with transfer learning and data augmentation from source datasets. It is shown that the use of semi-supervised deep learning combined with transfer learning and data augmentation can provide a meaningful advantage when using scarce labelled observations. Also, we found a strong influence of the source dataset, which suggests a more data-centric approach is needed to tackle the challenge of scarcely labelled data. We used several different metrics to assess the performance gain of using semi-supervised learning when dealing with very imbalanced test datasets (such as the G-mean and the F2-score), as mammogram datasets are often very imbalanced. Graphical Abstract: Description of the test-bed implemented in this work. Two different source data distributions were used to fine-tune the different models tested in this work. The target dataset is the in-house CR-Chavarria-2020 dataset.
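The imbalance-aware metrics named above (G-mean and F2-score) are easy to compute with scikit-learn; the function below is a sketch with hypothetical usage, not the authors' evaluation code.

import numpy as np
from sklearn.metrics import fbeta_score, recall_score

def gmean_and_f2(y_true, y_pred):
    """G-mean (geometric mean of class-wise recalls) and F2-score,
    the imbalance-aware metrics used in the study."""
    sens = recall_score(y_true, y_pred, pos_label=1)  # sensitivity
    spec = recall_score(y_true, y_pred, pos_label=0)  # specificity
    gmean = np.sqrt(sens * spec)
    f2 = fbeta_score(y_true, y_pred, beta=2)          # recall-weighted F-score
    return gmean, f2

# Hypothetical usage on a heavily imbalanced test set:
# g, f2 = gmean_and_f2(y_test, model.predict(x_test))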
Secure medical image encryption with Walsh-Hadamard transform and lightweight cryptography algorithm
Kasim, Ömer
Med Biol Eng Comput2022Journal Article, cited 0 times
Website
REMBRANDT
Algorithms
Computer Security
Privacy
Medical image encryption
It is important to ensure the privacy and security of the medical images that are produced with electronic health records. Security is ensured by encrypting and transmitting the electronic health records, and privacy is provided according to the integrity of the data and the decryption of data with the user role. Both the security and privacy of medical images are provided with the innovative use of lightweight cryptology (LWC) and Walsh-Hadamard transform (WHT) in this study. Unlike the light cryptology algorithm used in encryption, the hex key in the algorithm is obtained in two parts. The first part is used as the public key and the second part as the user-specific private key. This eliminated the disadvantage of the symmetric encryption algorithm. After the encryption was performed with a two-part hex key, the Walsh-Hadamard transform was applied to the encrypted image. In the Walsh-Hadamard transform, the Hadamard matrix was rotated with certain angles according to the user role. This allowed the encoded medical image to be obtained as a vector. The proposed method was verified with the results of the number of pixel change rates and unified average changing intensity measurement parameters and histogram analysis. The results showed that the method is more successful than the lightweight cryptology method and the proposed methods in the literature to solve security and privacy of the data in medical applications with user roles.
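A basic 2D Walsh-Hadamard transform can be sketched with SciPy; the role-keyed rotation of the Hadamard matrix described above is omitted, so this is only the unkeyed transform with a hypothetical input block.

import numpy as np
from scipy.linalg import hadamard

def wht_2d(block):
    """2D Walsh-Hadamard transform of a square block whose side is a
    power of two; in a real system, a rotated/permuted Hadamard matrix
    keyed to the user role would replace H."""
    n = block.shape[0]
    H = hadamard(n).astype(float)
    return H @ block @ H.T / n  # H @ H.T = n*I, so divide by n to normalize

# Hypothetical usage on an encrypted 8x8 image block:
# coeffs = wht_2d(encrypted_block.astype(float))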
Automatic lung tumor segmentation from CT images using improved 3D densely connected UNet
Zhang, G.
Yang, Z.
Jiang, S.
Med Biol Eng Comput2022Journal Article, cited 0 times
Website
LIDC-IDRI
Algorithms
Deep learning
Segmentation
Accurate lung tumor segmentation has great significance in the treatment planning of lung cancer. However, robust lung tumor segmentation becomes challenging due to the heterogeneity of tumors and the similar visual characteristics between tumors and surrounding tissues. Hence, we developed an improved 3D dense connected UNet (I-3D DenseUNet) to segment various lung tumors from CT images. The nested dense skip connection adopted in the I-3D DenseUNet aims to contribute similar feature maps between encoder and decoder sub-networks. The dense connection used in encoder-decoder blocks also encourages feature propagation and reuse. A robust data augmentation strategy was employed to alleviate over-fitting based on a 3D thin plate spline (TPS) algorithm. We evaluated our method on 938 lung tumors from three datasets consisting of 421 tumors from the Cancer Imaging Archive (TCIA), 450 malignant tumors from the Lung Image Database Consortium (LIDC), and 67 tumors from the private dataset. Experiment results showed excellent Dice similarity coefficients (DSC) of 0.8316 for the TCIA and LIDC datasets and 0.8167 for the private dataset. The proposed method presents a strong ability in lung tumor segmentation, and it has the potential to help radiologists in lung cancer treatment planning. Graphical abstract: Framework of the proposed lung tumor segmentation method.
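The Dice similarity coefficient used to report these results is nearly a one-liner in NumPy; the mask names below are hypothetical.

import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks, the metric
    reported above (e.g., 0.8316 on the TCIA and LIDC datasets)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

# Hypothetical usage:
# print(dice(predicted_mask, ground_truth_mask))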
mResU-Net: multi-scale residual U-Net-based brain tumor segmentation from multimodal MRI
Li, P.
Li, Z.
Wang, Z.
Li, C.
Wang, M.
Med Biol Eng Comput2023Journal Article, cited 0 times
BraTS 2021
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Segmentation
Brain tumor segmentation
Multi-scale Residual U-Net
Multimodal MRI
Automatic Segmentation
Brain tumor segmentation is an important direction in medical image processing, and its main goal is to accurately mark the tumor part in brain MRI. This study proposes a new end-to-end model for brain tumor segmentation, a multi-scale deep residual convolutional neural network called mResU-Net. The semantic gap between the encoder and decoder is bridged by using skip connections in the U-Net structure. The residual structure is used to alleviate the vanishing gradient problem during training and ensure sufficient information in deep networks. On this basis, multi-scale convolution kernels are used to improve the segmentation accuracy of targets of different sizes. At the same time, we also integrate channel attention modules into the network to improve its accuracy. The proposed model achieves average Dice scores of 0.9289, 0.9277, and 0.8965 for tumor core (TC), whole tumor (WT), and enhanced tumor (ET) on the BraTS 2021 dataset, respectively. Comparing the segmentation results of this method with existing techniques shows that mResU-Net can significantly improve the segmentation performance of brain tumor subregions.
ResDAC-Net: a novel pancreas segmentation model utilizing residual double asymmetric spatial kernels
Ji, Z.
Liu, J.
Mu, J.
Zhang, H.
Dai, C.
Yuan, N.
Ganchev, I.
Med Biol Eng Comput2024Journal Article, cited 0 times
Pancreas-CT
Medical Decathlon
Image segmentation
Medical image processing
Pancreatic segmentation
ResDAC-Net
Adjacent layer feature fusion block
Convolutional Neural Network (CNN)
The pancreas not only is situated in a complex abdominal background but is also surrounded by other abdominal organs and adipose tissue, resulting in blurred organ boundaries. Accurate segmentation of pancreatic tissue is crucial for computer-aided diagnosis systems, as it can be used for surgical planning, navigation, and assessment of organs. In light of this, the current paper proposes a novel Residual Double Asymmetric Convolution Network (ResDAC-Net) model. Firstly, newly designed ResDAC blocks are used to highlight pancreatic features. Secondly, the feature fusion between adjacent encoding layers fully utilizes the low-level and deep-level features extracted by the ResDAC blocks. Finally, parallel dilated convolutions are employed to increase the receptive field to capture multiscale spatial information. ResDAC-Net compares favorably with existing state-of-the-art models on three (out of four) evaluation metrics, including the two main ones used for segmentation performance evaluation (i.e., DSC and Jaccard index).
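The parallel dilated convolutions mentioned above can be sketched as a small PyTorch module; the channel counts and dilation rates are illustrative assumptions, not the actual ResDAC-Net design.

import torch
import torch.nn as nn

class ParallelDilated(nn.Module):
    """Minimal sketch of parallel dilated convolutions that enlarge the
    receptive field at multiple scales (hypothetical, not ResDAC-Net)."""
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(ch, ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)  # assumed dilation rates
        ])
        self.fuse = nn.Conv2d(3 * ch, ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different receptive field; concatenate and fuse.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Hypothetical usage:
# features = ParallelDilated(64)(torch.randn(1, 64, 96, 96))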
Impact of harmonization on the reproducibility of MRI radiomic features when using different scanners, acquisition parameters, and image pre-processing techniques: a phantom study
Hajianfar, G.
Hosseini, S. A.
Bagherieh, S.
Oveisi, M.
Shiri, I.
Zaidi, H.
Med Biol Eng Comput2024Journal Article, cited 0 times
RIDER PHANTOM MRI
Harmonization
Magnetic Resonance Imaging (MRI)
Pre-processing
Radiomics
Robustness
Reproducibility
This study investigated the impact of ComBat harmonization on the reproducibility of radiomic features extracted from magnetic resonance images (MRI) acquired on different scanners, using various data acquisition parameters and multiple image pre-processing techniques, using a dedicated MRI phantom. Four scanners were used to acquire an MRI of a nonanatomic phantom as part of the TCIA RIDER database. In fast spin-echo inversion recovery (IR) sequences, several inversion durations were employed, including 50, 100, 250, 500, 750, 1000, 1500, 2000, 2500, and 3000 ms. In addition, a 3D fast spoiled gradient recalled echo (FSPGR) sequence was used to investigate several flip angles (FA): 2, 5, 10, 15, 20, 25, and 30 degrees. Nineteen phantom compartments were manually segmented. Different approaches were used to pre-process each image: bin discretization, wavelet filter, Laplacian of Gaussian, logarithm, square, square root, and gradient. Overall, 92 first-, second-, and higher-order statistical radiomic features were extracted. ComBat harmonization was also applied to the extracted radiomic features. Finally, the intraclass correlation coefficient (ICC) and Kruskal-Wallis (KW) tests were implemented to assess the robustness of radiomic features. The number of non-significant features in the KW test ranged between 0-5 and 29-74 for the various scanners, 31-91 and 37-92 for the three repeated tests, 0-33 to 34-90 for FAs, and 3-68 to 65-89 for IRs before and after ComBat harmonization, with different image pre-processing techniques, respectively. The number of features with ICC over 90% ranged between 0-8 and 6-60 for the various scanners, 11-75 and 17-80 for the three repeated tests, 3-83 to 9-84 for FAs, and 3-49 to 3-63 for IRs before and after ComBat harmonization, with different image pre-processing techniques, respectively. The use of various scanners, IRs, and FAs has a great impact on radiomic features. However, the majority of scanner-robust features are also robust to IR and FA. Repeated tests on a single scanner have a negligible impact on radiomic features. Different scanners and acquisition parameters combined with various image pre-processing techniques might affect radiomic features to a large extent. ComBat harmonization might significantly impact the reproducibility of MRI radiomic features.
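The Kruskal-Wallis robustness screen can be sketched with SciPy; the per-condition arrays and the alpha level are hypothetical, and the ComBat step itself is omitted here.

from scipy.stats import kruskal

def kw_robust(feature_values_by_condition, alpha=0.05):
    """Kruskal-Wallis test across acquisition conditions (e.g., scanners
    or flip angles); a non-significant result is treated as evidence of
    robustness, as in the study. Input: one 1D array per condition,
    holding a feature's values over the segmented compartments."""
    stat, p = kruskal(*feature_values_by_condition)
    return p >= alpha  # True -> feature not significantly affected

# Hypothetical usage for one radiomic feature across four scanners:
# robust = kw_robust([vals_scanner1, vals_scanner2, vals_scanner3, vals_scanner4])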
Relationship between visceral adipose tissue and genetic mutations (VHL and KDM5C) in clear cell renal cell carcinoma
Greco, Federico
Mallio, Carlo Augusto
La radiologia medica2021Journal Article, cited 0 times
Website
TCGA-KIRC
renal cancer
Molecular hallmarks of breast multiparametric magnetic resonance imaging during neoadjuvant chemotherapy
Lin, P.
Wan, W. J.
Kang, T.
Qin, L. F.
Meng, Q. X.
Wu, X. X.
Qin, H. Y.
Lin, Y. Q.
He, Y.
Yang, H.
Radiol Med2023Journal Article, cited 0 times
Website
ACRIN 6698
ACRIN 6698/I-SPY2 Breast DWI
BMMR2 Challenge
TCGA-BRCA
Radiomics
Radiogenomics
Multiparametric Magnetic Resonance Imaging (mpMRI)
Neoadjuvant Therapy/methods
Magnetic Resonance Imaging/methods
Prognosis
Retrospective Studies
Contrast Media
Treatment Outcome
Breast cancer
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Neoadjuvant chemotherapy
Radiogenomics
PURPOSE: To identify the molecular basis of four parameters obtained from dynamic contrast-enhanced magnetic resonance imaging: functional tumor volume (FTV), longest diameter (LD), sphericity, and contralateral background parenchymal enhancement (BPE). MATERIAL AND METHODS: Gene expression profiles obtained before treatment and MRI features from different treatment timepoints were integrated for Spearman correlation analysis. MRI feature-related genes were submitted to hypergeometric distribution-based gene functional enrichment analysis to identify related Kyoto Encyclopedia of Genes and Genomes annotation. Gene set variation analysis was utilized to assess the infiltration of distinct immune cells, which were used to determine relationships between immune phenotypes and medical imaging phenotypes. The clinical significance of MRI and relevant molecular features was analyzed to identify their performance in predicting response to neoadjuvant chemotherapy (NAC) and their prognostic impact. RESULTS: Three hundred and eighty-three patients were included for integrative analysis of MRI features and molecular information. FTV, LD, and sphericity measurements were most significantly positively correlated with proliferation-, signal transmission-, and immune-related pathways, respectively. However, BPE showed no marked correlation with gene expression alterations. FTV, LD, and sphericity all showed significant positive or negative correlations with some immune-related processes and immune cell infiltration levels. The decrease in sphericity at 3 cycles after treatment initiation was also markedly negatively related to baseline sphericity measurements and immune signatures. The decreased status could act as a predictor of response to NAC. CONCLUSION: Different MRI features capture different tumor molecular characteristics that could explain their corresponding clinical significance.
Time-to-event overall survival prediction in glioblastoma multiforme patients using magnetic resonance imaging radiomics
Hajianfar, G.
Haddadi Avval, A.
Hosseini, S. A.
Nazari, M.
Oveisi, M.
Shiri, I.
Zaidi, H.
Radiol Med2023Journal Article, cited 0 times
TCGA-GBM
Glioblastoma
Magnetic Resonance Imaging (MRI)
Machine learning
Overall survival
Radiomics
PURPOSE: Glioblastoma Multiforme (GBM) represents the predominant aggressive primary tumor of the brain with short overall survival (OS) time. We aim to assess the potential of radiomic features in predicting the time-to-event OS of patients with GBM using machine learning (ML) algorithms. MATERIALS AND METHODS: One hundred nineteen patients with GBM, who had T1-weighted contrast-enhanced and T2-FLAIR MRI sequences, along with clinical data and survival time, were enrolled. Image preprocessing methods included 64 bin discretization, Laplacian of Gaussian (LOG) filters with three Sigma values and eight variations of Wavelet Transform. Images were then segmented, followed by the extraction of 1212 radiomic features. Seven feature selection (FS) methods and six time-to-event ML algorithms were utilized. The combination of preprocessing, FS, and ML algorithms (12 x 7 x 6 = 504 models) was evaluated by multivariate analysis. RESULTS: Our multivariate analysis showed that the best prognostic FS/ML combinations are the Mutual Information (MI)/Cox Boost, MI/Generalized Linear Model Boosting (GLMB) and MI/Generalized Linear Model Network (GLMN), all of which were done via the LOG (Sigma = 1 mm) preprocessing method (C-index = 0.77). The LOG filter with Sigma = 1 mm preprocessing method, MI, GLMB and GLMN achieved significantly higher C-indices than other preprocessing, FS, and ML methods (all p values < 0.05, mean C-indices of 0.65, 0.70, and 0.64, respectively). CONCLUSION: ML algorithms are capable of predicting the time-to-event OS of patients using MRI-based radiomic and clinical features. MRI-based radiomics analysis in combination with clinical variables might appear promising in assisting clinicians in the survival prediction of patients with GBM. Further research is needed to establish the applicability of radiomics in the management of GBM in the clinic.
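The C-index used throughout this study can be computed with lifelines; the toy arrays below stand in for a model's risk scores and observed survival, not the study's data.

import numpy as np
from lifelines.utils import concordance_index

# Toy data standing in for model output: higher risk -> shorter survival.
event_times = np.array([100, 250, 400, 90, 310])
risk_scores = np.array([0.9, 0.4, 0.2, 0.8, 0.3])
events = np.array([1, 1, 0, 1, 1])  # 1 = death observed

# concordance_index expects scores where larger = longer survival,
# so the risk score is negated.
c = concordance_index(event_times, -risk_scores, events)
print(f"C-index: {c:.2f}")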
Improved prognostication of overall survival after radiotherapy in lung cancer patients by an interpretable machine learning model integrating lung and tumor radiomics and clinical parameters
Luo, T.
Yan, M.
Zhou, M.
Dekker, A.
Appelt, A. L.
Ji, Y.
Zhu, J.
de Ruysscher, D.
Wee, L.
Zhao, L.
Zhang, Z.
Radiol Med2024Journal Article, cited 0 times
Website
NSCLC-Cetuximab
NSCLC-Radiomics
Explainability
Lung cancer
Prognosis
Radiomics
Shap
BACKGROUND: Accurate prognostication of overall survival (OS) for non-small cell lung cancer (NSCLC) patients receiving definitive radiotherapy (RT) is crucial for developing personalized treatment strategies. This study aims to construct an interpretable prognostic model that combines radiomic features extracted from normal lung and from primary tumor with clinical parameters. Our model aimed to clarify the complex, nonlinear interactions between these variables and enhance prognostic accuracy. METHODS: We included 661 stage III NSCLC patients from three multi-national datasets: a training set (N = 349), test-set-1 (N = 229), and test-set-2 (N = 83), all undergoing definitive RT. A total of 104 distinct radiomic features were separately extracted from the regions of interest in the lung and the tumor. We developed four predictive models using eXtreme gradient boosting and selected the top 10 features based on the Shapley additive explanations (SHAP) values. These models were the tumor radiomic model (Model-T), lung radiomic model (Model-L), a combined radiomic model (Model-LT), and an integrated model incorporating clinical parameters (Model-LTC). Model performance was evaluated through Harrell's concordance index, Kaplan-Meier survival curves, time-dependent area under the receiver operating characteristic curve (AUC), calibration curves, and decision curve analysis. Interpretability was assessed using the SHAP framework. RESULTS: Model-LTC exhibited superior performance, with notable predictive accuracy (C-index: training set, 0.87; test-set-2, 0.76) and time-dependent AUC above 0.75. Complex nonlinear relationships and interactions were evident among the model's variables. CONCLUSION: The integration of radiomic and clinical factors within an interpretable framework significantly improved OS prediction. The SHAP analysis provided insightful interpretability, enhancing the model's clinical applicability and potential for aiding personalized treatment decisions.
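A compact sketch of the gradient-boosting-plus-SHAP recipe, assuming the xgboost and shap packages and synthetic stand-in data; the model settings are illustrative, not those of the study.

import numpy as np
import xgboost as xgb
import shap

# Synthetic stand-in for the radiomic + clinical feature table and target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = X[:, 0] * 2 + rng.normal(size=200)

model = xgb.XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X, y)

# SHAP values explain each feature's contribution per patient and can be
# used to pick a top-10 subset, as done in the study.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)  # global importance and direction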
A comparison of ground truth estimation methods
Biancardi, Alberto M
Jirapatnakul, Artit C
Reeves, Anthony P
International Journal of Computer Assisted Radiology and Surgery2010Journal Article, cited 17 times
Website
LIDC-IDRI
Algorithm Development
LUNG
PURPOSE: Knowledge of the exact shape of a lesion, or ground truth (GT), is necessary for the development of diagnostic tools by means of algorithm validation, measurement metric analysis, and accurate size estimation. Four methods that estimate GTs from multiple readers' documentations by considering the spatial location of voxels were compared: thresholded Probability-Map at 0.50 (TPM(0.50)) and at 0.75 (TPM(0.75)), simultaneous truth and performance level estimation (STAPLE), and truth estimate from self distances (TESD). METHODS: A subset of the publicly available Lung Image Database Consortium archive was used, selecting pulmonary nodules documented by all four radiologists. The pair-wise similarities between the estimated GTs were analyzed by computing the respective Jaccard coefficients. Then, with respect to the readers' marking volumes, the estimated volumes were ranked and the sign test of the differences between them was performed. RESULTS: (a) the rank variations among the four methods and the volume differences between STAPLE and TESD are not statistically significant, (b) TPM(0.50) estimates are statistically larger, (c) TPM(0.75) estimates are statistically smaller, and (d) there is some spatial disagreement in the estimates as the one-sided 90% confidence intervals between TPM(0.75) and TPM(0.50), TPM(0.75) and STAPLE, TPM(0.75) and TESD, TPM(0.50) and STAPLE, TPM(0.50) and TESD, STAPLE and TESD, respectively, show: [0.67, 1.00], [0.67, 1.00], [0.77, 1.00], [0.93, 1.00], [0.85, 1.00], [0.85, 1.00]. CONCLUSIONS: The method used to estimate the GT is important: the differences highlighted that STAPLE and TESD, notwithstanding a few weaknesses, appear to be equally viable as GT estimators, while the increased availability of computing power is decreasing the appeal afforded to TPMs. Ultimately, the choice of which GT estimation method, between the two, should be preferred depends on the specific characteristics of the marked data that is used with respect to the two elements that differentiate the method approaches: relative reliabilities of the readers and the reliability of the region boundaries.
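Of the four estimators, the thresholded probability map is the simplest to sketch; the NumPy function below implements TPM at a chosen level, with hypothetical reader masks.

import numpy as np

def threshold_probability_map(reader_masks, level=0.50):
    """TPM ground-truth estimate: average the binary masks from multiple
    readers into a per-voxel probability map and threshold it (0.50 or
    0.75 in the study)."""
    pmap = np.mean(np.stack(reader_masks).astype(float), axis=0)
    return pmap >= level

# Hypothetical usage with four radiologists' nodule masks:
# gt_050 = threshold_probability_map([m1, m2, m3, m4], level=0.50)
# gt_075 = threshold_probability_map([m1, m2, m3, m4], level=0.75)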
Collaborative projects
Armato, S
McNitt-Gray, M
Meyer, C
Reeves, A
Clarke, L
Int J CARS2012Journal Article, cited 307 times
Website
LIDC-IDRI
vPSNR: a visualization-aware image fidelity metric tailored for diagnostic imaging
Lundström, Claes
International Journal of Computer Assisted Radiology and Surgery2013Journal Article, cited 0 times
Website
Algorithm Development
Image compression
PURPOSE: The large amounts of data generated in diagnostic imaging often cause overload problems for IT systems and radiologists. This entails a need for effective data reduction beyond lossless levels, which, in turn, underlines the need to measure and control image fidelity. Existing image fidelity metrics, however, fail to fully support important requirements of a modern clinical context: support for high-dimensional data, visualization awareness, and independence from the original data. METHODS: We propose an image fidelity metric, called the visual peak signal-to-noise ratio (vPSNR), fulfilling the three main requirements. A series of image fidelity tests on CT data sets is employed. The impact of the visualization transform (grayscale window) on the diagnostic quality of irreversibly compressed data sets is evaluated through an observer-based study. In addition, several tests were performed demonstrating the benefits, limitations, and characteristics of vPSNR in different data reduction scenarios. RESULTS: The visualization transform has a significant impact on diagnostic quality, and the vPSNR is capable of representing this effect. Moreover, the tests establish that the vPSNR is broadly applicable. CONCLUSIONS: vPSNR fills a gap not served by existing image fidelity metrics, relevant for the clinical context. While vPSNR alone cannot fulfill all image fidelity needs, it can be a useful complement in a wide range of scenarios.
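The idea behind vPSNR, computing PSNR after the visualization transform, can be sketched in NumPy; the window settings below are hypothetical, and the paper should be consulted for the exact vPSNR definition.

import numpy as np

def window(img, center, width):
    """Apply a grayscale window (the visualization transform) and map the
    result to the display range [0, 255]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    out = np.clip(img, lo, hi)
    return (out - lo) / (hi - lo) * 255.0

def windowed_psnr(original, degraded, center=40, width=400, peak=255.0):
    """PSNR computed after the visualization transform; this captures the
    idea behind vPSNR, not its exact definition."""
    a, b = window(original, center, width), window(degraded, center, width)
    mse = np.mean((a - b) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical usage on CT data in Hounsfield units:
# print(windowed_psnr(ct_ref, ct_compressed, center=40, width=400))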
Primary lung tumor segmentation from PET–CT volumes with spatial–topological constraint
Cui, Hui
Wang, Xiuying
Lin, Weiran
Zhou, Jianlong
Eberl, Stefan
Feng, Dagan
Fulham, Michael
International Journal of Computer Assisted Radiology and Surgery2016Journal Article, cited 14 times
Website
RIDER Phantom PET–CT
LUNG
Pulmonary nodule classification with deep residual networks
Nibali, Aiden
He, Zhen
Wollersheim, Dennis
International Journal of Computer Assisted Radiology and Surgery2017Journal Article, cited 19 times
Website
LIDC-IDRI
Segmentation
Lung cancer has the highest death rate among all cancers in the USA. In this work we focus on improving the ability of computer-aided diagnosis (CAD) systems to predict the malignancy of lung nodules from cropped CT images.
Measurement of smaller colon polyp in CT colonography images using morphological image processing
Manjunath, KN
Siddalingaswamy, PC
Prabhu, GK
International Journal of Computer Assisted Radiology and Surgery2017Journal Article, cited 1 times
Website
CT COLONOGRAPHY
ACRIN 6664
radiomics
Colon polyp
Shape descriptor
Feature fusion for lung nodule classification
Farag, Amal A
Ali, Asem
Elshazly, Salwa
Farag, Aly A
International Journal of Computer Assisted Radiology and Surgery2017Journal Article, cited 3 times
Website
LIDC-IDRI
LUNG
Computed tomography (CT)
Features extraction
Gabor filter
Classification
K Nearest Neighbor (KNN)
support vector machine (SVM)
Agile convolutional neural network for pulmonary nodule classification using CT images
Zhao, X.
Liu, L.
Qi, S.
Teng, Y.
Li, J.
Qian, W.
Int J Comput Assist Radiol Surg2018Journal Article, cited 6 times
Website
LIDC-IDRI
Convolutional Neural Network (CNN)
Deep learning
Lung cancer
Nodule classification
OBJECTIVE: Distinguishing benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to conquer the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. METHODS: A hybrid CNN of LeNet and AlexNet is constructed through combining the layer settings of LeNet and the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built up based on the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. Through adjusting the parameters of the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is obtained finally. RESULTS: After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve can reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initializations. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. CONCLUSIONS: This competitive performance demonstrates that our proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
3D/2D model-to-image registration by imitation learning for cardiac procedures
Toth, Daniel
Miao, Shun
Kurzendorfer, Tanja
Rinaldi, Christopher A
Liao, Rui
Mansi, Tommaso
Rhode, Kawal
Mountney, Peter
International Journal of Computer Assisted Radiology and Surgery2018Journal Article, cited 1 times
Website
LIDC-IDRI
cardiac resynchronization therapy (CRT)
Evolutionary image simplification for lung nodule classification with convolutional neural networks
Lückehe, Daniel
von Voigt, Gabriele
International Journal of Computer Assisted Radiology and Surgery2018Journal Article, cited 0 times
Website
LIDC-IDRI
lung cancer
image simplification
evolutionary algorithm
Classification of lung adenocarcinoma transcriptome subtypes from pathological images using deep convolutional networks
Antonio, Victor Andrew A
Ono, Naoaki
Saito, Akira
Sato, Tetsuo
Altaf-Ul-Amin, Md
Kanaya, Shigehiko
International Journal of Computer Assisted Radiology and Surgery2018Journal Article, cited 0 times
Website
TCGA-LUAD
Machine learning
histopathology imaging features
PURPOSE: Convolutional neural networks have become rapidly popular for image recognition and image analysis because of their powerful potential. In this paper, we developed a method for classifying subtypes of lung adenocarcinoma from pathological images using a neural network that can evaluate phenotypic features over a wider area to take cellular distributions into account. METHODS: In order to recognize the types of tumors, we need not only detailed features of cells but also the statistical distribution of the different types of cells. Variants of autoencoders are implemented as building blocks of the pre-trained convolutional layers of the neural networks. A sparse deep autoencoder which minimizes local information entropy on the encoding layer is then proposed and applied to images of size [Formula: see text]. We applied this model for feature extraction from pathological images of lung adenocarcinoma, which comprises three transcriptome subtypes previously defined by the Cancer Genome Atlas network. Since the tumor tissue is composed of heterogeneous cell populations, recognition of tumor transcriptome subtypes requires more information than the local pattern of cells. The parameters extracted using this approach are then used in multiple reduction stages to perform classification on larger images. RESULTS: We were able to demonstrate that these networks successfully recognize morphological features of lung adenocarcinoma. We also performed classification and reconstruction experiments to compare the outputs of the variants. The results showed that a larger input image covering a certain area of the tissue is required to recognize transcriptome subtypes. The sparse autoencoder network with [Formula: see text] input provides a 98.9% classification accuracy. CONCLUSION: This study shows the potential of autoencoders as a feature extraction paradigm and paves the way for a whole slide image analysis tool to predict molecular subtypes of tumors from pathological features.
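To make the sparse-autoencoder idea concrete, here is an illustrative PyTorch sketch that penalizes hidden-layer activity with a conventional KL-divergence sparsity term; this is a common stand-in for the paper's local-entropy objective, which is not reproduced here, and the 64 x 64 patch size is a placeholder.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAE(nn.Module):
    def __init__(self, n_in=64 * 64, n_hidden=256):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))   # encoding layer activations in (0, 1)
        return self.dec(h), h

def sparsity_penalty(h, rho=0.05, eps=1e-8):
    # KL(rho || rho_hat) summed over hidden units; rho_hat is the mean activation.
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

model = SparseAE()
x = torch.rand(8, 64 * 64)               # a batch of flattened image patches
recon, h = model(x)
loss = F.mse_loss(recon, x) + 1e-3 * sparsity_penalty(h)  # weight 1e-3 is illustrative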
Automatic estimation of the aortic lumen geometry by ellipse tracking
Tahoces, Pablo G
Alvarez, Luis
González, Esther
Cuenca, Carmelo
Trujillo, Agustín
Santana-Cedrés, Daniel
Esclarín, Julio
Gomez, Luis
Mazorra, Luis
Alemán-Flores, Miguel
International Journal of Computer Assisted Radiology and Surgery2019Journal Article, cited 0 times
LIDC-IDRI
Prostate cancer detection using residual networks
Xu, Helen
Baxter, John S H
Akin, Oguz
Cantor-Rivera, Diego
Int J Comput Assist Radiol Surg2019Journal Article, cited 0 times
PROSTATEx
Machine Learning
Deep Learning
Segmentation
PURPOSE: To automatically identify regions where prostate cancer is suspected on multi-parametric magnetic resonance images (mp-MRI). METHODS: A residual network was implemented based on segmentations from an expert radiologist on T2-weighted, apparent diffusion coefficient map, and high b-value diffusion-weighted images. Mp-MRIs from 346 patients were used in this study. RESULTS: The residual network achieved a hit or miss accuracy of 93% for lesion detection, with an average Jaccard score of 71% that compared the agreement between network and radiologist segmentations. CONCLUSION: This paper demonstrated the ability for residual networks to learn features for prostate lesion segmentation.
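The reported 71% Jaccard score measures the overlap between network and radiologist segmentations; a minimal NumPy implementation of that metric, assuming binary masks, looks like this:

import numpy as np

def jaccard(pred: np.ndarray, ref: np.ndarray) -> float:
    # Intersection over union of two binary segmentation masks.
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return float(inter) / union if union else 1.0  # both empty counts as perfect agreement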
Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views
Bier, B.
Goldmann, F.
Zaech, J. N.
Fotouhi, J.
Hegeman, R.
Grupp, R.
Armand, M.
Osgood, G.
Navab, N.
Maier, A.
Unberath, M.
Int J Comput Assist Radiol Surg2019Journal Article, cited 0 times
Website
CT Lymph Nodes
Image registration
PURPOSE: Minimally invasive alternatives are now available for many complex surgeries. These approaches are enabled by the increasing availability of intra-operative image guidance. Yet, fluoroscopic X-rays suffer from projective transformation and thus cannot provide direct views onto anatomy. Surgeons could highly benefit from additional information, such as the anatomical landmark locations in the projections, to support intra-operative decision making. However, detecting landmarks is challenging since the viewing direction changes substantially between views, leading to varying appearance of the same landmark. Therefore, and to the best of our knowledge, view-independent anatomical landmark detection has not been investigated yet. METHODS: In this work, we propose a novel approach to detect multiple anatomical landmarks in X-ray images from arbitrary viewing directions. To this end, a sequential prediction framework based on convolutional neural networks is employed to simultaneously regress all landmark locations. For training, synthetic X-rays are generated with a physically accurate forward model that allows direct application of the trained model to real X-ray images of the pelvis. View invariance is achieved via data augmentation by sampling viewing angles on a spherical segment of 120° × 90°. RESULTS: On synthetic data, a mean prediction error of 5.6 ± 4.5 mm is achieved. Further, we demonstrate that the trained model can be directly applied to real X-rays and show that these detections define correspondences to a respective CT volume, which allows for analytic estimation of the 11-degree-of-freedom projective mapping. CONCLUSION: We present the first tool to detect anatomical landmarks in X-ray images independent of their viewing direction. Access to this information during surgery may benefit decision making and constitutes a first step toward global initialization of 2D/3D registration without the need of calibration. As such, the proposed concept has a strong prospect to facilitate and enhance applications and methods in the realm of image-guided surgery.
Endoscopic navigation in the clinic: registration in the absence of preoperative imaging
Sinha, A.
Ishii, M.
Hager, G. D.
Taylor, R. H.
Int J Comput Assist Radiol Surg2019Journal Article, cited 0 times
QIN-HEADNECK
PURPOSE: Clinical examinations that involve endoscopic exploration of the nasal cavity and sinuses often do not have a reference preoperative image, like a computed tomography (CT) scan, to provide structural context to the clinician. The aim of this work is to provide structural context during clinical exploration without requiring additional CT acquisition. METHODS: We present a method for registration during clinical endoscopy in the absence of CT scans by making use of shape statistics from past CT scans. Using a deformable registration algorithm that uses these shape statistics along with dense point clouds from video, we simultaneously achieve two goals: (1) register the statistically mean shape of the target anatomy with the video point cloud, and (2) estimate patient shape by deforming the mean shape to fit the video point cloud. Finally, we use statistical tests to assign confidence to the computed registration. RESULTS: We are able to achieve submillimeter errors in registrations and patient shape reconstructions using simulated data. We establish and evaluate the confidence criteria for our registrations using simulated data. Finally, we evaluate our registration method on in vivo clinical data and assign confidence to these registrations using the criteria established in simulation. All registrations that are not rejected by our criteria produce submillimeter residual errors. CONCLUSION: Our deformable registration method can produce submillimeter registrations and reconstructions as well as statistical scores that can be used to assign confidence to the registrations.
Enabling machine learning in X-ray-based procedures via realistic simulation of image formation
Unberath, Mathias
Zaech, Jan-Nico
Gao, Cong
Bier, Bastian
Goldmann, Florian
Lee, Sing Chun
Fotouhi, Javad
Taylor, Russell
Armand, Mehran
Navab, Nassir
International Journal of Computer Assisted Radiology and Surgery2019Journal Article, cited 0 times
machine learning
image reconstruction
A manifold learning regularization approach to enhance 3D CT image-based lung nodule classification
Ren, Y.
Tsai, M. Y.
Chen, L.
Wang, J.
Li, S.
Liu, Y.
Jia, X.
Shen, C.
Int J Comput Assist Radiol Surg2020Journal Article, cited 2 times
Website
LIDC-IDRI
PURPOSE: Diagnosis of lung cancer requires radiologists to review every lung nodule in CT images. Such a process can be very time-consuming, and the accuracy is affected by many factors, such as the experience of radiologists and the available diagnosis time. To address this problem, we proposed to develop a deep learning-based system to automatically classify benign and malignant lung nodules. METHODS: The proposed method automatically determines benignity or malignancy given the 3D CT image patch of a lung nodule to assist the diagnosis process. Motivated by the fact that the real structure among data is often embedded on a low-dimensional manifold, we developed a novel manifold regularized classification deep neural network (MRC-DNN) to perform classification directly based on the manifold representation of lung nodule images. The concise manifold representation revealing important data structure is expected to benefit the classification, while the manifold regularization enforces strong, but natural constraints on network training, preventing over-fitting. RESULTS: The proposed method achieves accurate manifold learning with a reconstruction error of ~ 30 HU on real lung nodule CT image data. In addition, the classification accuracy on testing data is 0.90, with a sensitivity of 0.81 and a specificity of 0.95, which outperforms state-of-the-art deep learning methods. CONCLUSION: The proposed MRC-DNN facilitates an accurate manifold learning approach for lung nodule classification based on 3D CT images. More importantly, MRC-DNN suggests a new and effective idea of enforcing regularization for network training, with potential impact on a broad range of applications.
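A schematic of the general idea in PyTorch, assuming the joint objective combines classification on a low-dimensional encoding with a reconstruction term acting as the manifold regularizer; this is an illustrative sketch, not the authors' MRC-DNN architecture or exact loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MRCSketch(nn.Module):
    # Classify from a low-dimensional encoding while a decoder forces that
    # encoding to reconstruct the input (the regularization constraint).
    def __init__(self, n_in=32**3, n_latent=64, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 512), nn.ReLU(), nn.Linear(512, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 512), nn.ReLU(), nn.Linear(512, n_in))
        self.head = nn.Linear(n_latent, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.head(z), self.decoder(z)

model = MRCSketch()
x = torch.rand(4, 32**3)                 # flattened 3D CT patches (size is a placeholder)
y = torch.randint(0, 2, (4,))            # benign / malignant labels
logits, recon = model(x)
loss = F.cross_entropy(logits, y) + 0.1 * F.mse_loss(recon, x)  # weight 0.1 is illustrative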
Multimodal mixed reality visualisation for intraoperative surgical guidance
Cartucho, João
Shapira, David
Ashrafian, Hutan
Giannarou, Stamatia
International Journal of Computer Assisted Radiology and Surgery2020Journal Article, cited 0 times
Website
TCGA-LIHC
Visualization
surgical guidance
Multimodal 3D ultrasound and CT in image-guided spinal surgery: public database and new registration algorithms
Masoumi, N.
Belasso, C. J.
Ahmad, M. O.
Benali, H.
Xiao, Y.
Rivaz, H.
Int J Comput Assist Radiol Surg2021Journal Article, cited 0 times
Website
TCGA-SARC
Registration
Ultrasound
Computed Tomography (CT)
BONE
PURPOSE: Accurate multimodal registration of intraoperative ultrasound (US) and preoperative computed tomography (CT) is a challenging problem. Construction of public datasets of US and CT images can accelerate the development of such image registration techniques. This can help ensure the accuracy and safety of spinal surgeries using image-guided surgery systems where an image registration is employed. In addition, we present two algorithms to register US and CT images. METHODS: We present three different datasets of vertebrae with corresponding CT, US, and simulated US images. For each of the two latter datasets, we also provide 16 landmark pairs of matching structures between the CT and US images and performed fiducial registration to acquire a silver standard for assessing image registration. Besides, we proposed two patch-based rigid image registration algorithms, one based on normalized cross-correlation (NCC) and the other based on correlation ratio (CR) to register misaligned CT and US images. RESULTS: The CT and corresponding US images of the proposed database were pre-processed and misaligned with different error intervals, resulting in 6000 registration problems solved using both NCC and CR methods. Our results show that the methods were successful in aligning the pre-processed CT and US images by decreasing the warping index. CONCLUSIONS: The database provides a resource for evaluating image registration techniques. The simulated data have two applications. First, they provide the gold standard ground-truth which is difficult to obtain with ex vivo and in vivo data for validating US-CT registration methods. Second, the simulated US images can be used to validate real-time US simulation methods. Besides, the proposed image registration techniques can be useful for developing methods in clinical application.
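The two similarity metrics named in the abstract can be sketched in a few lines of NumPy; these are textbook formulations of normalized cross-correlation and the correlation ratio, not the authors' patch-based pipeline, and the bin count is an illustrative choice.

import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    # Normalized cross-correlation between two same-size image patches.
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def correlation_ratio(fixed: np.ndarray, moving: np.ndarray, bins: int = 32) -> float:
    # CR = 1 - E[Var(moving | fixed-intensity bin)] / Var(moving).
    idx = np.digitize(fixed.ravel(), np.histogram_bin_edges(fixed, bins))
    m = moving.ravel()
    within = sum(m[idx == k].var() * (idx == k).sum() for k in np.unique(idx))
    return 1.0 - within / (m.var() * m.size + 1e-8)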
LiverNet: efficient and robust deep learning model for automatic diagnosis of sub-types of liver hepatocellular carcinoma cancer from H&E stained liver histopathology images
Aatresh, A. A.
Alabhya, K.
Lal, S.
Kini, J.
Saxena, P. U. P.
Int J Comput Assist Radiol Surg2021Journal Article, cited 0 times
TCGA-LIHC
Convolutional Neural Network (CNN)
Deep learning
H&E-stained slides
Classification
Algorithm Development
PURPOSE: Liver cancer is one of the most common types of cancers in Asia with a high mortality rate. A common method for liver cancer diagnosis is the manual examination of histopathology images. Due to its laborious nature, we focus on alternate deep learning methods for automatic diagnosis, providing significant advantages over manual methods. In this paper, we propose a novel deep learning framework to perform multi-class cancer classification of liver hepatocellular carcinoma (HCC) tumor histopathology images which shows improvements in inference speed and classification quality over other competitive methods. METHOD: The BreastNet architecture proposed by Togacar et al. shows great promise in using convolutional block attention modules (CBAM) for effective cancer classification in H&E stained breast histopathology images. As part of our experiments with this framework, we have studied the addition of atrous spatial pyramid pooling (ASPP) blocks to effectively capture multi-scale features in H&E stained liver histopathology data. We classify liver histopathology data into four classes, namely the non-cancerous class, low sub-type liver HCC tumor, medium sub-type liver HCC tumor, and high sub-type liver HCC tumor. To prove the robustness and efficacy of our models, we have shown results for two liver histopathology datasets: a novel KMC dataset and the TCGA dataset. RESULTS: Our proposed architecture outperforms state-of-the-art architectures for multi-class cancer classification of HCC histopathology images, not just in terms of quality of classification, but also in computational efficiency on the novel proposed KMC liver data and the publicly available TCGA-LIHC dataset. We have considered precision, recall, F1-score, intersection over union (IoU), accuracy, number of parameters, and FLOPs as metrics for comparison. The results of our meticulous experiments have shown improved classification performance along with added efficiency. LiverNet has been observed to outperform all other frameworks in all metrics under comparison with an approximate improvement of [Formula: see text] in accuracy and F1-score on the KMC and TCGA-LIHC datasets. CONCLUSION: To the best of our knowledge, our work is among the first to provide concrete proof and demonstrate results for a successful deep learning architecture to handle multi-class HCC histopathology image classification among various sub-types of liver HCC tumor. Our method shows a high accuracy of [Formula: see text] on the proposed KMC liver dataset requiring only 0.5739 million parameters and 1.1934 million floating point operations per second.
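For readers unfamiliar with CBAM, the channel-then-spatial attention it applies can be rendered in a few lines of PyTorch; this is a minimal generic CBAM-style module, with illustrative layer sizes, not the LiverNet design itself.

import torch
import torch.nn as nn

class CBAMSketch(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Shared MLP for channel attention over average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        # 7x7 conv over stacked channel-wise mean/max maps for spatial attention.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)            # channel attention
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))                   # spatial attention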
Comparison of methods for sensitivity correction in Talbot-Lau computed tomography
Felsner, L.
Roser, P.
Maier, A.
Riess, C.
Int J Comput Assist Radiol Surg2021Journal Article, cited 0 times
Website
CT Lymph Nodes
Algorithms
Image Processing
Image reconstruction
Phantom
Talbot-Lau interferometer
X-ray phase contrast imaging
Computed Tomography (CT)
PURPOSE: In Talbot-Lau X-ray phase contrast imaging, the measured phase value depends on the position of the object in the measurement setup. When imaging large objects, this may lead to inhomogeneous phase contributions within the object. These inhomogeneities introduce artifacts in tomographic reconstructions of the object. METHODS: In this work, we compare recently proposed approaches to correct such reconstruction artifacts. We compare an iterative reconstruction algorithm, a known operator network and a U-net. The methods are qualitatively and quantitatively compared on the Shepp-Logan phantom and on the anatomy of a human abdomen. We also perform a dedicated experiment on the noise behavior of the methods. RESULTS: All methods were able to reduce the specific artifacts in the reconstructions for the simulated and virtual real anatomy data. The results show method-specific residual errors that are indicative for the inherently different correction approaches. While all methods were able to correct the artifacts, we report a different noise behavior. CONCLUSION: The iterative reconstruction performs very well, but at the cost of a high runtime. The known operator network shows consistently a very competitive performance. The U-net performs slightly worse, but has the benefit that it is a general-purpose network that does not require special application knowledge.
3D spatial priors for semi-supervised organ segmentation with deep convolutional neural networks
Petit, O.
Thome, N.
Soler, L.
Int J Comput Assist Radiol Surg2022Journal Article, cited 0 times
Website
Pancreas-CT
Convolutional Neural Network (CNN)
*Image Processing
Computer-Assisted
*Neural Networks
Computer
Pancreas/diagnostic imaging
Tomography
X-Ray Computed
3D spatial prior
Deep Learning
Medical image segmentation
Pseudo-labeling
Semi-supervised learning
PURPOSE: Fully Convolutional neural Networks (FCNs) are the most popular models for medical image segmentation. However, they do not explicitly integrate spatial organ positions, which can be crucial for proper labeling in challenging contexts. METHODS: In this work, we propose a method that combines a model representing prior probabilities of an organ position in 3D with visual FCN predictions by means of a generalized prior-driven prediction function. The prior is also used in a self-labeling process to handle low-data regimes, in order to improve the quality of the pseudo-label selection. RESULTS: Experiments carried out on CT scans from the public TCIA pancreas segmentation dataset reveal that the resulting STIPPLE model can significantly increase performances compared to the FCN baseline, especially with few training images. We also show that STIPPLE outperforms state-of-the-art semi-supervised segmentation methods by leveraging the spatial prior information. CONCLUSIONS: STIPPLE provides a segmentation method effective with few labeled examples, which is crucial in the medical domain. It offers an intuitive way to incorporate absolute position information by mimicking expert annotators.
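A schematic of combining a 3D spatial prior with FCN outputs, reduced here to a simple product followed by renormalization; the paper's generalized prior-driven prediction function may differ, so treat this NumPy sketch as illustrative only.

import numpy as np

def combine_with_prior(fcn_prob: np.ndarray, prior: np.ndarray) -> np.ndarray:
    # fcn_prob, prior: (num_classes, D, H, W) probability volumes.
    # Voxel-wise product of visual prediction and positional prior,
    # renormalized so class probabilities sum to one at each voxel.
    fused = fcn_prob * prior
    return fused / np.clip(fused.sum(axis=0, keepdims=True), 1e-8, None)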
Quantification of pulmonary involvement in COVID-19 pneumonia by means of a cascade of two U-nets: training and assessment on multiple datasets using different annotation criteria
Lizzi, Francesca
Agosti, Abramo
Brero, Francesca
Cabini, Raffaella Fiamma
Fantacci, Maria Evelina
Figini, Silvia
Lascialfari, Alessandro
Laruina, Francesco
Oliva, Piernicola
Piffer, Stefano
Postuma, Ian
Rinaldi, Lisa
Talamonti, Cinzia
Retico, Alessandra
International Journal of Computer Assisted Radiology and Surgery2021Journal Article, cited 0 times
CT Images in COVID-19
LCTSC
PURPOSE: This study aims at exploiting artificial intelligence (AI) for the identification, segmentation and quantification of COVID-19 pulmonary lesions. The limited data availability and the annotation quality are relevant factors in training AI methods. We investigated the effects of using multiple datasets, heterogeneously populated and annotated according to different criteria. METHODS: We developed an automated analysis pipeline, the LungQuant system, based on a cascade of two U-nets. The first one (U-net 1) is devoted to the identification of the lung parenchyma; the second one (U-net 2) acts on a bounding box enclosing the segmented lungs to identify the areas affected by COVID-19 lesions. Different public datasets were used to train the U-nets and to evaluate their segmentation performances, which have been quantified in terms of the Dice similarity coefficients. The accuracy in predicting the CT Severity Score (CT-SS) of the LungQuant system has also been evaluated. RESULTS: Both the volumetric DSC (vDSC) and the accuracy showed a dependency on the annotation quality of the released data samples. On an independent dataset (COVID-19-CT-Seg), both the vDSC and the surface DSC (sDSC) were measured between the masks predicted by the LungQuant system and the reference ones. vDSC (sDSC) values of 0.95 ± 0.01 and 0.66 ± 0.13 (0.95 ± 0.02 and 0.76 ± 0.18, with 5 mm tolerance) were obtained for the segmentation of lungs and COVID-19 lesions, respectively. The system achieved an accuracy of 90% in CT-SS identification on this benchmark dataset. CONCLUSION: We analysed the impact of using data samples with different annotation criteria in training an AI-based quantification system for pulmonary involvement in COVID-19 pneumonia. In terms of vDSC measures, the U-net segmentation strongly depends on the quality of the lesion annotations. Nevertheless, the CT-SS can be accurately predicted on independent test sets, demonstrating the satisfactory generalization ability of the LungQuant system.
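The hand-off between the two U-nets amounts to cropping the CT to the bounding box of the first network's lung mask; a minimal NumPy sketch of that step, assuming a non-empty binary mask and an illustrative margin:

import numpy as np

def bounding_box_crop(ct: np.ndarray, lung_mask: np.ndarray, margin: int = 5) -> np.ndarray:
    # Crop the CT volume to the (padded) bounding box of the lung mask,
    # producing the input for the lesion-segmentation U-net.
    zs, ys, xs = np.nonzero(lung_mask)
    lo = np.maximum(np.array([zs.min(), ys.min(), xs.min()]) - margin, 0)
    hi = np.minimum(np.array([zs.max(), ys.max(), xs.max()]) + margin + 1, ct.shape)
    return ct[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]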
A cascaded fully convolutional network framework for dilated pancreatic duct segmentation
Shen, C.
Roth, H. R.
Hayashi, Y.
Oda, M.
Miyamoto, T.
Sato, G.
Mori, K.
Int J Comput Assist Radiol Surg2022Journal Article, cited 1 times
Website
Pancreas-CT
Computed Tomography (CT)
Segmentation
PURPOSE: Pancreatic duct dilation can be considered an early sign of pancreatic ductal adenocarcinoma (PDAC). However, there is little existing research focused on dilated pancreatic duct segmentation as a potential screening tool for people without PDAC. Dilated pancreatic duct segmentation is difficult due to the lack of readily available labeled data and strong voxel imbalance between the pancreatic duct region and other regions. To overcome these challenges, we propose a two-step approach for dilated pancreatic duct segmentation from abdominal computed tomography (CT) volumes using fully convolutional networks (FCNs). METHODS: Our framework segments the pancreatic duct in a cascaded manner. The pancreatic duct occupies a tiny portion of abdominal CT volumes. Therefore, to concentrate on the pancreas regions, we use a public pancreas dataset to train an FCN to generate an ROI covering the pancreas and use a 3D U-Net-like FCN for coarse pancreas segmentation. To further improve the dilated pancreatic duct segmentation, we deploy a skip connection on each corresponding resolution level and an attention mechanism in the bottleneck layer. Moreover, we introduce a combined loss function based on Dice loss and Focal loss. Random data augmentation is adopted throughout the experiments to improve the generalizability of the model. RESULTS: We manually created a dilated pancreatic duct dataset with semi-automated annotation tools. Experimental results showed that our proposed framework is practical for dilated pancreatic duct segmentation. The average Dice score and sensitivity were 49.9% and 51.9%, respectively. These results show the potential of our approach as a clinical screening tool. CONCLUSIONS: We investigate an automated framework for dilated pancreatic duct segmentation. The cascade strategy effectively improved the segmentation performance of the pancreatic duct. Our modifications to the FCNs together with random data augmentation and the proposed combined loss function facilitate automated segmentation.
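The combined Dice-plus-Focal objective mentioned in the abstract can be written compactly for a binary segmentation map; gamma and the mixing weight below are illustrative choices, not the paper's values, and the target is assumed to be a float mask.

import torch
import torch.nn.functional as F

def dice_focal_loss(logits, target, gamma=2.0, w=0.5, eps=1e-6):
    # Soft Dice term: counters the strong voxel imbalance of the tiny duct.
    p = torch.sigmoid(logits)
    inter = (p * target).sum()
    dice = 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)
    # Focal term: down-weights easy voxels via the true-class probability pt.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    pt = torch.exp(-bce)
    focal = ((1 - pt) ** gamma * bce).mean()
    return w * dice + (1 - w) * focal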
Brain tumor segmentation in MRI images using nonparametric localization and enhancement methods with U-net
Ilhan, A.
Sekeroglu, B.
Abiyev, R.
Int J Comput Assist Radiol Surg2022Journal Article, cited 2 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BRAIN
Image Enhancement/methods
Magnetic Resonance Imaging (MRI)
Segmentation
U-net
PURPOSE: Segmentation is one of the critical steps in analyzing medical images since it provides meaningful information for the diagnosis, monitoring, and treatment of brain tumors. In recent years, several artificial intelligence-based systems have been developed to perform this task accurately. However, the unobtrusive or low-contrast appearance of some tumors and their similarity to healthy brain tissues make the segmentation task challenging. This has led researchers to develop new methods for preprocessing the images and improving their segmentation abilities. METHODS: This study proposes an efficient system for the segmentation of complete brain tumors from MRI images based on tumor localization and enhancement methods with a deep learning architecture named U-net. Initially, a histogram-based nonparametric tumor localization method is applied to localize the tumorous regions, and the proposed tumor enhancement method is used to modify the localized regions to increase the visual appearance of indistinct or low-contrast tumors. The resultant images are fed to the original U-net architecture to segment the complete brain tumors. RESULTS: The performance of the proposed tumor localization and enhancement methods with the U-net was tested on the benchmark datasets BRATS 2012, BRATS 2019, and BRATS 2020, achieving superior results with dice scores of 0.94, 0.85, 0.87, and 0.88 for the BRATS 2012 HGG-LGG, BRATS 2019, and BRATS 2020 datasets, respectively. CONCLUSION: The results and comparisons showed how the proposed methods improve the segmentation ability of deep learning models and provide high-accuracy, low-cost segmentation of complete brain tumors in MRI images. The results may support the implementation of the proposed methods in segmentation tasks of different medical fields.
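To illustrate histogram-driven localization in general, the sketch below thresholds a slice with Otsu's nonparametric criterion and keeps the bounding box of the above-threshold region; Otsu is a stand-in here, not the paper's specific histogram method.

import numpy as np
from skimage import filters

def localize(slice2d: np.ndarray):
    # Nonparametric, histogram-derived threshold; no distribution assumed.
    mask = slice2d > filters.threshold_otsu(slice2d)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                       # nothing above threshold
    return (ys.min(), ys.max(), xs.min(), xs.max())  # candidate region box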
Texture synthesis for generating realistic-looking bronchoscopic videos
Guo, L.
Nahm, W.
Int J Comput Assist Radiol Surg2023Journal Article, cited 2 times
Website
CPTAC-LSCC
Bronchoscopy
Endoscopy
Generative Adversarial Network (GAN)
Synthetic data generation
Synthetic images
Texture synthesis
Video augmentation
PURPOSE: Synthetic realistic-looking bronchoscopic videos are needed to develop and evaluate depth estimation methods as part of investigating a vision-based bronchoscopic navigation system. To generate these synthetic videos when access to real bronchoscopic images and image sequences is limited, we need to create various realistic-looking, large-size image textures of the airway inner surface from a small number of real bronchoscopic image texture patches. METHODS: A generative adversarial network-based method is applied to create realistic-looking textures of the airway inner surface by learning from a limited number of small texture patches from real bronchoscopic images. By applying a purely convolutional architecture without any fully connected layers, this method allows the production of textures of arbitrary size. RESULTS: Authentic image textures of the airway inner surface were created. An example of the synthesized textures and two frames of the thereby generated bronchoscopic video are shown. The necessity and sufficiency of the generated textures as image features for further depth estimation methods are demonstrated. CONCLUSIONS: The method can generate textures of the airway inner surface that meet the requirements for the texture itself and for the thereby generated bronchoscopic videos, including realistic appearance, long-term temporal consistency, sufficient image features for depth estimation, and large size and variety of the synthesized textures. Besides, it also shows advantages with respect to easy access to the required data source. A further validation of this approach is planned by using the realistic-looking bronchoscopic videos with textures generated by this method as training and test data for depth estimation networks.
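Why a purely convolutional generator yields textures of arbitrary size: with no fully connected layer, the output spatial extent simply follows the extent of the input noise tensor. A toy PyTorch generator demonstrating the property (channel counts are illustrative, not the paper's network):

import torch
import torch.nn as nn

gen = nn.Sequential(
    nn.Conv2d(16, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)
# Same weights, different output sizes, driven purely by the noise extent:
small = gen(torch.randn(1, 16, 32, 32)).shape    # (1, 3, 64, 64)
large = gen(torch.randn(1, 16, 128, 96)).shape   # (1, 3, 256, 192)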
A semantic fidelity interpretable-assisted decision model for lung nodule classification
Zhan, X.
Long, H.
Gou, F.
Wu, J.
Int J Comput Assist Radiol Surg2023Journal Article, cited 1 times
Website
LIDC-IDRI
Algorithm Development
Capsule networks
Interpretability
Lung nodule
Multi-class classification
Semantic fidelity
PURPOSE: Early diagnosis of lung nodules is important for the treatment of lung cancer patients. Existing capsule network-based assisted diagnostic models for lung nodule classification have shown promising prospects in terms of interpretability. However, these models lack the ability to extract features robustly in shallow network layers, which in turn limits their performance. Therefore, we propose a semantic fidelity capsule encoding and interpretable (SFCEI)-assisted decision model for lung nodule multi-class classification. METHODS: First, we propose a multilevel receptive field feature encoding block to capture multi-scale features of lung nodules of different sizes. Second, we embed multilevel receptive field feature encoding blocks in the residual code-and-decode attention layer to extract fine-grained context features. Integrating multi-scale features and contextual features forms semantic fidelity lung nodule attribute capsule representations, which consequently enhances the performance of the model. RESULTS: We implemented comprehensive experiments on the dataset (LIDC-IDRI) to validate the superiority of the model. The stratified fivefold cross-validation results show that the accuracy (94.17%) of our method exceeds existing advanced approaches in the multi-class classification of malignancy scores for lung nodules. CONCLUSION: The experiments confirm that the proposed methodology can effectively capture the multi-scale and contextual features of lung nodules. It strengthens the feature extraction capability of the shallow structures in capsule networks, which in turn improves the classification performance for malignancy scores. The interpretable model can support physicians' confidence in clinical decision-making.
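One plausible reading of a multilevel receptive-field block is a set of parallel dilated convolutions whose outputs are concatenated; the PyTorch sketch below shows that pattern only, with dilation rates and channel counts as placeholders rather than the paper's design.

import torch
import torch.nn as nn

class MultiRF(nn.Module):
    # Parallel 3x3 convolutions with growing dilation see the nodule at
    # several effective receptive-field sizes; padding keeps spatial size.
    def __init__(self, c_in=1, c_out=8):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(c_in, c_out, 3, padding=d, dilation=d) for d in (1, 2, 4)])

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

features = MultiRF()(torch.rand(2, 1, 48, 48))   # (2, 24, 48, 48)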
Anatomical attention can help to segment the dilated pancreatic duct in abdominal CT
Shen, C.
Roth, H. R.
Hayashi, Y.
Oda, M.
Sato, G.
Miyamoto, T.
Rueckert, D.
Mori, K.
Int J Comput Assist Radiol Surg2024Journal Article, cited 0 times
Website
Pancreas-CT
Humans
*Image Processing
Computer-Assisted/methods
*Abdomen
Pancreas
Tomography
X-Ray Computed
Pancreatic Ducts/diagnostic imaging
Anatomical attention
Dilated pancreatic duct
Pancreatic duct segmentation
Tubular structure enhancement
PURPOSE: Pancreatic duct dilation is associated with an increased risk of pancreatic cancer, the most lethal malignancy with the lowest 5-year relative survival rate. Automatic segmentation of the dilated pancreatic duct from contrast-enhanced CT scans would facilitate early diagnosis. However, pancreatic duct segmentation poses challenges due to its small anatomical structure and poor contrast in abdominal CT. In this work, we investigate an anatomical attention strategy to address this issue. METHODS: Our proposed anatomical attention strategy consists of two steps: pancreas localization and pancreatic duct segmentation. The coarse pancreatic mask segmentation is used to guide the fully convolutional networks (FCNs) to concentrate on the pancreas' anatomy and disregard unnecessary features. We further apply a multi-scale aggregation scheme to leverage the information from different scales. Moreover, we integrate the tubular structure enhancement as an additional input channel of FCN. RESULTS: We performed extensive experiments on 30 cases of contrast-enhanced abdominal CT volumes. To evaluate the pancreatic duct segmentation performance, we employed four measurements, including the Dice similarity coefficient (DSC), sensitivity, normalized surface distance, and 95 percentile Hausdorff distance. The average DSC achieves 55.7%, surpassing other pancreatic duct segmentation methods on single-phase CT scans only. CONCLUSIONS: We proposed an anatomical attention-based strategy for the dilated pancreatic duct segmentation. Our proposed strategy significantly outperforms earlier approaches. The attention mechanism helps to focus on the pancreas region, while the enhancement of the tubular structure enables FCNs to capture the vessel-like structure. The proposed technique might be applied to other tube-like structure segmentation tasks within targeted anatomies.
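At the input level, the strategy amounts to stacking the CT volume, the coarse pancreas mask from step one, and the tubular-structure enhancement map as channels of the duct-segmentation FCN; a minimal PyTorch sketch of that wiring, with placeholder patch sizes:

import torch
import torch.nn as nn

ct = torch.rand(1, 1, 64, 128, 128)             # CT patch
pancreas_mask = torch.rand(1, 1, 64, 128, 128)  # coarse mask guiding attention
tubular_map = torch.rand(1, 1, 64, 128, 128)    # vessel-like structure enhancement
x = torch.cat([ct, pancreas_mask, tubular_map], dim=1)  # (1, 3, D, H, W)

# First layer of the duct FCN now consumes three anatomical channels.
first_layer = nn.Conv3d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
features = first_layer(x)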
Optimized convolutional neural network by firefly algorithm for magnetic resonance image classification of glioma brain tumor grade
Bacanin, Nebojsa
Bezdan, Timea
Venkatachalam, K.
Al-Turjman, Fadi
Journal of Real-Time Image Processing2021Journal Article, cited 0 times
Website
TCGA-GBM
Classification
REMBRANDT
Magnetic Resonance Imaging (MRI)
Convolutional Neural Network (CNN)
Computer Aided Diagnosis (CADx)
The most frequent brain tumor types are gliomas. The magnetic resonance imaging technique helps in the diagnosis of brain tumors. Even for an experienced specialist, it is hard to diagnose a glioma brain tumor in its early stages. Therefore, a reliable and efficient system for magnetic resonance imaging interpretation is required to help the doctor make the diagnosis in early stages. Convolutional neural networks, which have proved capable of excellent performance in image classification tasks, can be used to classify the images according to the grade of the glioma. Tuning the hyperparameters of a convolutional network is a very important issue in this domain for achieving high classification accuracy; however, this task takes a lot of computational time. Approaching this issue, in this manuscript we propose a metaheuristic method to automatically find near-optimal values of convolutional neural network hyperparameters based on a modified firefly algorithm, and develop a system for automatic image classification of glioma brain tumor grades from magnetic resonance imaging. First, we tested the proposed modified algorithm on a set of standard unconstrained benchmark functions and compared its performance to the original algorithm and other modified variants. Upon verifying the efficiency of the proposed approach in general, it was applied to hyperparameter optimization of the convolutional neural network. The IXI dataset and the cancer imaging archive with more collections of data were used for evaluation purposes, and additionally, the method was evaluated on axial brain tumor images. The obtained experimental results and a comparative analysis with other state-of-the-art algorithms tested under the same conditions show the robustness and efficiency of the proposed method.
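The core firefly update that such metaheuristics build on is short enough to sketch: each firefly moves toward every brighter one with an attractiveness that decays with distance, plus a random walk. Parameters (beta0, gamma, alpha) below are typical textbook values, not the paper's modified variant.

import numpy as np

def firefly_step(pos, brightness, beta0=1.0, gamma=1.0, alpha=0.2, rng=np.random):
    # pos: (n, d) firefly positions, e.g. candidate hyperparameter vectors.
    # brightness: (n,) fitness values, e.g. validation accuracy per candidate.
    n, d = pos.shape
    new = pos.copy()
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:
                r2 = np.sum((pos[j] - pos[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)      # attractiveness decays with distance
                new[i] += beta * (pos[j] - pos[i]) + alpha * (rng.rand(d) - 0.5)
    return new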
Validation of a convolutional neural network for the automated creation of curved planar reconstruction images along the main pancreatic duct
Koretsune, Y.
Sone, M.
Sugawara, S.
Wakatsuki, Y.
Ishihara, T.
Hattori, C.
Fujisawa, Y.
Kusumoto, M.
Jpn J Radiol2022Journal Article, cited 0 times
Website
Pancreas-CT
Curved planar reconstruction
3d convolutional neural network (CNN)
Image Processing
Segmentation
Algorithm Development
Contrast enhancement
Computed Tomography (CT)
Deep learning
Imaging algorithm
Main pancreatic duct
Pancreatic cancer
PANCREAS
PURPOSE: To evaluate the accuracy and time-efficiency of newly developed software in automatically creating curved planar reconstruction (CPR) images along the main pancreatic duct (MPD), which was developed based on a 3-dimensional convolutional neural network, and compare them with those of conventional manually generated CPR ones. MATERIALS AND METHODS: A total of 100 consecutive patients with MPD dilatation (>/= 3 mm) who underwent contrast-enhanced computed tomography between February 2021 and July 2021 were included in the study. Two radiologists independently performed blinded qualitative analysis of automated and manually created CPR images. They rated overall image quality based on a four-point scale and weighted kappa analysis was employed to compare between manually created and automated CPR images. A quantitative analysis of the time required to create CPR images and the total length of the MPD measured from CPR images was performed. RESULTS: The kappa value was 0.796, and a good correlation was found between the manually created and automated CPR images. The average time to create automated and manually created CPR images was 61.7 s and 174.6 s, respectively (P < 0.001). The total MPD length of the automated and manually created CPR images was 110.5 and 115.6 mm, respectively (P = 0.059). CONCLUSION: The automated CPR software significantly reduced reconstruction time without compromising image quality.
Memory-efficient 3D connected component labeling with parallel computing
Ohira, Norihiro
Signal, Image and Video Processing2017Journal Article, cited 0 times
Website
Phantom FDA
Algorithm Development
Image processing
labeling
memory-efficient
parallel computing
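For reference, the behavior such memory-efficient labeling algorithms target is what SciPy provides out of the box: labeling 26-connected foreground components in a binary 3D volume. A minimal baseline, not the paper's algorithm:

import numpy as np
from scipy import ndimage

volume = np.random.rand(64, 64, 64) > 0.8            # toy binary volume
structure = np.ones((3, 3, 3), dtype=bool)           # 26-connectivity
labels, num = ndimage.label(volume, structure=structure)
print(num, "connected components")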
Pulmonary nodule detection on computed tomography using neuro-evolutionary scheme
Huidrom, Ratishchandra
Chanu, Yambem Jina
Singh, Khumanthem Manglem
Signal, Image and Video Processing2018Journal Article, cited 0 times
Website
LIDC-IDRI
lung cancer
particle swarm optimization
A principal component fusion-based thresholded bin-stretching for CT image enhancement
Kumar, Sonu
Bhandari, Ashish Kumar
Signal, Image and Video Processing2023Journal Article, cited 0 times
Lung-PET-CT-Dx
Algorithm Development
Image denoising
Computed Tomography (CT)
Computed tomography (CT) images play an important role in the medical field for diagnosing unhealthy organs, visualizing the structure of the inner body, and detecting other diseases. Acquiring CT images is challenging because a sufficient amount of electromagnetic radiation is required to capture images with good contrast, yet for unavoidable reasons the CT machine can produce degraded images that are low-contrast, dark, or noisy. Enhancement of such CT images is therefore required to visualize the internal body structure. For enhancing degraded CT images, a novel enhancement technique is proposed based on multilevel thresholding (MLT)-based bin-stretching with a power-law transform (PLT). Initially, the distorted CT image is processed using the MLT-based bin-stretching approach to improve its contrast. After that, a median filter is applied to the processed image to eliminate impulse noise. Next, an adaptive PLT is applied to the filtered image to improve its overall contrast. Finally, the contrast-improved image and the image processed by histogram equalization are fused using principal component analysis to control the over-enhanced portions of the image introduced by the PLT. The enhanced image is obtained as this fused image. The qualitative and quantitative results are much better than those of other recently introduced enhancement methods.
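Two of the pipeline's building blocks, impulse-noise removal with a median filter and contrast adjustment with a power-law (gamma) transform, are easy to sketch; the gamma value below is illustrative, whereas the paper adapts the transform per image.

import numpy as np
from scipy import ndimage

def power_law(img: np.ndarray, gamma: float = 0.7) -> np.ndarray:
    # Normalize to [0, 1], then apply the gamma curve: out = in ** gamma.
    lo, hi = img.min(), img.max()
    x = (img - lo) / (hi - lo + 1e-8)
    return x ** gamma

den = ndimage.median_filter(np.random.rand(256, 256), size=3)  # impulse-noise removal
enhanced = power_law(den)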
Advanced MRI Techniques in the Monitoring of Treatment of Gliomas
Hyare, Harpreet
Thust, Steffi
Rees, Jeremy
Current treatment options in neurology2017Journal Article, cited 11 times
Website
TCGA-GBM
glioma
OPINION STATEMENT: With advances in treatments and survival of patients with glioblastoma (GBM), it has become apparent that conventional imaging sequences have significant limitations both in terms of assessing response to treatment and monitoring disease progression. Both 'pseudoprogression' after chemoradiation for newly diagnosed GBM and 'pseudoresponse' after anti-angiogenesis treatment for relapsed GBM are well-recognised radiological entities. This in turn has led to revision of response criteria away from the standard MacDonald criteria, which depend on the two-dimensional measurement of contrast-enhancing tumour, and which have been the primary measure of radiological response for over three decades. A working party of experts published RANO (Response Assessment in Neuro-oncology Working Group) criteria in 2010 which take into account signal change on T2/FLAIR sequences as well as the contrast-enhancing component of the tumour. These have recently been modified for immune therapies, which are associated with specific issues related to the timing of radiological response. There has been increasing interest in quantification and validation of physiological and metabolic parameters in GBM over the last 10 years utilising the wide range of advanced imaging techniques available on standard MRI platforms. Previously, MRI would provide structural information only on the anatomical location of the tumour and the presence or absence of a disrupted blood-brain barrier. Advanced MRI sequences include proton magnetic resonance spectroscopy (MRS), vascular imaging (perfusion/permeability) and diffusion imaging (diffusion weighted imaging/diffusion tensor imaging) and are now routinely available. They provide biologically relevant functional, haemodynamic, cellular, metabolic and cytoarchitectural information and are being evaluated in clinical trials to determine whether they offer superior biomarkers of early treatment response than conventional imaging, when correlated with hard survival endpoints. Multiparametric imaging, incorporating different combinations of these modalities, improves accuracy over single imaging modalities but has not been widely adopted due to the amount of post-processing analysis required, lack of clinical trial data, lack of radiology training and wide variations in threshold values. New techniques including diffusion kurtosis and radiomics will offer a higher level of quantification but will require validation in clinical trial settings. Given all these considerations, it is clear that there is an urgent need to incorporate advanced techniques into clinical trial design to avoid the problems of under- or over-assessment of treatment response.
Neurosense: deep sensing of full or near-full coverage head/brain scans in human magnetic resonance imaging
Kanber, B.
Ruffle, J.
Cardoso, J.
Ourselin, S.
Ciccarelli, O.
Neuroinformatics2019Journal Article, cited 0 times
BRAIN
Magnetic Resonance Imaging (MRI)
Classification
The application of automated algorithms to imaging requires knowledge of its content, a curatorial task, for which we ordinarily rely on the Digital Imaging and Communications in Medicine (DICOM) header as the only source of image meta-data. However, identifying brain MRI scans that have full or near-full coverage among a large number (e.g. >5000) of scans comprising both head/brain and other body parts is a time-consuming task that cannot be automated with the use of the information stored in the DICOM header attributes alone. Depending on the clinical scenario, an entire set of scans acquired in a single visit may often be labelled “BRAIN” in the DICOM field 0018,0015 (Body Part Examined), while the individual scans will often not only include brain scans with full coverage, but also others with partial brain coverage, scans of the spinal cord, and in some cases other body parts.
DeepDicomSort: An Automatic Sorting Algorithm for Brain Magnetic Resonance Imaging Data
van der Voort, Sebastian R.
Smits, Marion
Klein, Stefan
Neuroinformatics2020Journal Article, cited 0 times
CPTAC-GBM
IvyGAP
LGG-1p19qDeletion
REMBRANDT
RIDER NEURO MRI
TCGA-GBM
TCGA-LGG
With the increasing size of datasets used in medical imaging research, the need for automated data curation is arising. One important data curation task is the structured organization of a dataset for preserving integrity and ensuring reusability. Therefore, we investigated whether this data organization step can be automated. To this end, we designed a convolutional neural network (CNN) that automatically recognizes eight different brain magnetic resonance imaging (MRI) scan types based on visual appearance. Thus, our method is unaffected by inconsistent or missing scan metadata. It can recognize pre-contrast T1-weighted (T1w), post-contrast T1-weighted (T1wC), T2-weighted (T2w), proton density-weighted (PDw) and derived maps (e.g. apparent diffusion coefficient and cerebral blood flow). In a first experiment, we used scans of subjects with brain tumors: 11065 scans of 719 subjects for training, and 2369 scans of 192 subjects for testing. The CNN achieved an overall accuracy of 98.7%. In a second experiment, we trained the CNN on all 13434 scans from the first experiment and tested it on 7227 scans of 1318 Alzheimer's subjects. Here, the CNN achieved an overall accuracy of 98.5%. In conclusion, our method can accurately predict scan type, and can quickly and automatically sort a brain MRI dataset virtually without the need for manual verification. In this way, our method can assist with properly organizing a dataset, which maximizes the shareability and integrity of the data.
Evaluation of brain tumor using brain MRI with modified-moth-flame algorithm and Kapur’s thresholding: a study
Kadry, Seifedine
Rajinikanth, V
Raja, N Sri Madhava
Hemanth, D Jude
Hannon, Naeem MS
Raj, Alex Noel Joseph
Evolutionary Intelligence2021Journal Article, cited 0 times
TCGA-GBM
Segmentation
MTF1 has the potential as a diagnostic and prognostic marker for gastric cancer and is associated with good prognosis
He, J.
Jiang, X.
Yu, M.
Wang, P.
Fu, L.
Zhang, G.
Cai, H.
Clin Transl Oncol2023Journal Article, cited 0 times
Website
TCGA-STAD
Radiogenomics
Biomarker
Gastric cancer
Mtf1
Survival
PURPOSE: Metal Regulatory Transcription Factor 1 (MTF1) can be an essential transcription factor for heavy metal response in cells and can also reduce oxidative and hypoxic stresses in cells. However, the current research on MTF1 in gastric cancer is lacking. METHODS: Bioinformatics techniques were used to perform expression analysis, prognostic analysis, enrichment analysis, tumor microenvironment correlation analysis, immunotherapy Immune cell Proportion Score (IPS) correlation and drug sensitivity correlation analysis of MTF1 in gastric cancer. And qRT-PCR was used to verify MTF1 expression in gastric cancer cells and tissues. RESULTS: MTF1 showed low expression in gastric cancer cells and tissues, and low expression in T3 stage compared with T1 stage. KM prognostic analysis showed that high expression of MTF1 was significantly associated with longer overall survival (OS), FP (first progression) and PPS (post-progression survival) in gastric cancer patients. Cox regression analysis showed that MTF1 was an independent prognostic factor and a protective factor in gastric cancer patients. MTF1 is involved in pathways in cancer, and the high expression of MTF1 is negatively correlated with the half maximal inhibitory concentration (IC50) of common chemotherapeutic drugs. CONCLUSION: MTF1 is relatively lowly expressed in gastric cancer. MTF1 is also an independent prognostic factor for gastric cancer patients and is associated with good prognosis. It has the potential to be a diagnostic and prognostic marker for gastric cancer.
Usefulness of gradient tree boosting for predicting histological subtype and EGFR mutation status of non-small cell lung cancer on (18)F FDG-PET/CT
Koyasu, S.
Nishio, M.
Isoda, H.
Nakamoto, Y.
Togashi, K.
Ann Nucl Med2020Journal Article, cited 3 times
Website
NSCLC Radiogenomics
LUNG
Non Small Cell Lung Cancer (NSCLC)
OBJECTIVE: To develop and evaluate a radiomics approach for classifying histological subtypes and epidermal growth factor receptor (EGFR) mutation status in lung cancer on PET/CT images. METHODS: PET/CT images of lung cancer patients were obtained from public databases and used to establish two datasets, respectively to classify histological subtypes (156 adenocarcinomas and 32 squamous cell carcinomas) and EGFR mutation status (38 mutant and 100 wild-type samples). Seven types of imaging features were obtained from PET/CT images of lung cancer. Two types of machine learning algorithms were used to predict histological subtypes and EGFR mutation status: random forest (RF) and gradient tree boosting (XGB). The classifiers used either a single type or multiple types of imaging features. In the latter case, the optimal combination of the seven types of imaging features was selected by Bayesian optimization. Receiver operating characteristic analysis, area under the curve (AUC), and tenfold cross validation were used to assess the performance of the approach. RESULTS: In the classification of histological subtypes, the AUC values of the various classifiers were as follows: RF, single type: 0.759; XGB, single type: 0.760; RF, multiple types: 0.720; XGB, multiple types: 0.843. In the classification of EGFR mutation status, the AUC values were: RF, single type: 0.625; XGB, single type: 0.617; RF, multiple types: 0.577; XGB, multiple types: 0.659. CONCLUSIONS: The radiomics approach to PET/CT images, together with XGB and Bayesian optimization, is useful for classifying histological subtypes and EGFR mutation status in lung cancer.
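The classification stage maps naturally onto gradient tree boosting with cross-validated ROC-AUC; a sketch assuming the xgboost and scikit-learn packages, with default hyperparameters where the paper instead tunes them via Bayesian optimization, and placeholder data shaped like the histology task (156 + 32 = 188 patients).

import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(188, 100)        # 188 patients x 100 radiomic features (placeholder)
y = np.random.randint(0, 2, 188)    # e.g., adenocarcinoma vs squamous cell carcinoma

# 10-fold cross-validated AUC, mirroring the ROC-based evaluation protocol.
auc = cross_val_score(XGBClassifier(), X, y, cv=10, scoring="roc_auc")
print(auc.mean())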
Prognostic value of tumor metabolic imaging phenotype by FDG PET radiomics in HNSCC
Yoon, H.
Ha, S.
Kwon, S. J.
Park, S. Y.
Kim, J.
O, J. H.
Yoo, I. R.
Ann Nucl Med2021Journal Article, cited 1 times
Website
Head-Neck-CT-Atlas
HNSCC
Machine Learning
Radiomic feature
Fluorodeoxyglucose F18
Image Processing
Computer-Assisted
*Phenotype
Positron Emission Tomography (PET)
HEADNECK
Prognosis
Squamous Cell Carcinoma of Head and Neck/*diagnostic imaging/*metabolism
OBJECTIVE: Tumor metabolic phenotype can be assessed with integrated image pattern analysis of 18F-fluoro-deoxy-glucose (FDG) Positron Emission Tomography/Computed Tomography (PET/CT), called radiomics. This study was performed to assess the prognostic value of radiomics PET parameters in head and neck squamous cell carcinoma (HNSCC) patients. METHODS: 18F-fluoro-deoxy-glucose (FDG) PET/CT data of 215 patients from the HNSCC collection, a free database in The Cancer Imaging Archive (TCIA), and of 122 patients in Seoul St. Mary's Hospital with baseline FDG PET/CT for locally advanced HNSCC were reviewed. Data from the TCIA database were used as a training cohort, and data from Seoul St. Mary's Hospital as a validation cohort. In the training cohort, primary tumors were segmented by Nestles' adaptive thresholding method. Segmented tumors in PET images were preprocessed using relative resampling of 64 bins. Forty-two PET parameters, including conventional parameters and texture parameters, were measured. Binary groups of homogeneous imaging phenotypes, clustered by the K-means method, were compared for overall survival (OS) and disease-free survival (DFS) by log-rank test. Selected individual radiomics parameters were tested along with clinical factors, including age and sex, by Cox regression for OS and DFS, and the significant parameters were tested with multivariate analysis. Significant parameters on multivariate analysis were again tested with multivariate analysis in the validation cohort. RESULTS: A total of 119 patients, 70 from the training and 49 from the validation cohort, were included in the study. The median follow-up period was 62 and 52 months for the training and the validation cohort, respectively. In the training cohort, binary groups with different metabolic radiomics phenotypes showed a significant difference in OS (p = 0.036) and a borderline difference in DFS (p = 0.086). Gray-Level Non-Uniformity for zone (GLNUGLZLM) was the most significant prognostic factor for both OS (hazard ratio [HR] 3.1, 95% confidence interval [CI] 1.4-7.3, p = 0.008) and DFS (HR 4.5, CI 1.3-16, p = 0.020). Multivariate analysis revealed GLNUGLZLM as an independent prognostic factor for OS (HR 3.7, 95% CI 1.1-7.5, p = 0.032). GLNUGLZLM remained an independent prognostic factor in the validation cohort (HR 14.8, 95% CI 3.3-66, p < 0.001). CONCLUSIONS: Baseline FDG PET radiomics contain risk information for survival prognosis in HNSCC patients. The metabolic heterogeneity parameter GLNUGLZLM may assist clinicians in patient risk assessment as a feasible prognostic factor.
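The phenotype analysis described here, K-means clustering of PET radiomics followed by a log-rank survival comparison, can be sketched with scikit-learn and lifelines (both assumed available); all arrays below are random placeholders standing in for real cohort data.

import numpy as np
from sklearn.cluster import KMeans
from lifelines.statistics import logrank_test

feats = np.random.rand(70, 42)                    # 42 PET parameters per training patient
groups = KMeans(n_clusters=2, n_init=10).fit_predict(feats)  # two metabolic phenotypes

time = np.random.exponential(40, 70)              # follow-up duration (months), placeholder
event = np.random.randint(0, 2, 70)               # event indicator (death/progression)

res = logrank_test(time[groups == 0], time[groups == 1],
                   event_observed_A=event[groups == 0],
                   event_observed_B=event[groups == 1])
print(res.p_value)                                # analogous to the reported p = 0.036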
Potentials of radiomics for cancer diagnosis and treatment in comparison with computer-aided diagnosis
Arimura, Hidetaka
Soufi, Mazen
Ninomiya, Kenta
Kamezawa, Hidemi
Yamada, Masahiro
Radiological Physics and Technology2018Journal Article, cited 0 times
Website
RIDER
non-small cell lung cancer (NSCLC)
Computer Aided Diagnosis (CADx)
Radiomics
Segmentation
Computer-aided diagnosis (CAD) is a field that is essentially based on pattern recognition that improves the accuracy of a diagnosis made by a physician who takes into account the computer’s “opinion” derived from the quantitative analysis of radiological images. Radiomics is a field based on data science that massively and comprehensively analyzes a large number of medical images to extract a large number of phenotypic features reflecting disease traits, and explores the associations between the features and patients’ prognoses for precision medicine. According to the definitions for both, you may think that radiomics is not a paraphrase of CAD, but you may also think that these definitions are “image manipulation”. However, there are common and different features between the two fields. This review paper elaborates on these common and different features and introduces the potential of radiomics for cancer diagnosis and treatment by comparing it with CAD.
Comparison of performances of conventional and deep learning-based methods in segmentation of lung vessels and registration of chest radiographs
Guo, W.
Gu, X.
Fang, Q.
Li, Q.
Radiol Phys Technol2020Journal Article, cited 0 times
Website
LIDC-IDRI
Convolutional Neural Network (CNN)
Deep learning
Image registration
Segmentation
LUNG
Vasculature
Conventional machine learning-based methods have been effective in assisting physicians in making accurate decisions and utilized in computer-aided diagnosis for more than 30 years. Recently, deep learning-based methods, and convolutional neural networks in particular, have rapidly become preferred options in medical image analysis because of their state-of-the-art performance. However, the performances of conventional and deep learning-based methods cannot be compared reliably because of their evaluations on different datasets. Hence, we developed both conventional and deep learning-based methods for lung vessel segmentation and chest radiograph registration, and subsequently compared their performances on the same datasets. The results strongly indicated the superiority of deep learning-based methods over their conventional counterparts.
RadiomicsJ: a library to compute radiomic features
Kobayashi, T.
Radiol Phys Technol2022Journal Article, cited 0 times
Website
LGG-1p19qDeletion
Computer Aided Diagnosis (CADx)
Imaging biomarker
Machine learning
Radiomic features
Radiomics
Despite the widely recognized need for radiomics research, the development and use of full-scale radiomics-based predictive models in clinical practice remains scarce. This is because of the lack of well-established methodologies for radiomic research and the need to develop systems to support radiomic feature calculations and predictive model use. Several excellent programs for calculating radiomic features have been developed. However, there are still issues such as the types of image features, variations in the calculated results, and the limited system environment in which to run the program. Against this background, we developed RadiomicsJ, an open-source radiomic feature computation library. RadiomicsJ will not only be a new research tool to enhance the efficiency of radiomics research but will also become a knowledge resource for medical imaging feature studies through its release as an open-source program.
Automatic Detection of Lung Nodules Using 3D Deep Convolutional Neural Networks
Fu, Ling
Ma, Jingchen
Chen, Yizhi
Larsson, Rasmus
Zhao, Jun
Journal of Shanghai Jiaotong University (Science)2019Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
Convolutional Neural Network (CNN)
Lung cancer is the leading cause of cancer deaths worldwide. Accurate early diagnosis is critical in increasing the 5-year survival rate of lung cancer, so the efficient and accurate detection of lung nodules, the potential precursors to lung cancer, is paramount. In this paper, a computer-aided lung nodule detection system using 3D deep convolutional neural networks (CNNs) is developed. The first multi-scale 11-layer 3D fully convolutional neural network (FCN) is used for screening all lung nodule candidates. Considering relative small sizes of lung nodules and limited memory, the input of the FCN consists of 3D image patches rather than of whole images. The candidates are further classified in the second CNN to get the final result. The proposed method achieves high performance in the LUNA16 challenge and demonstrates the effectiveness of using 3D deep CNNs for lung nodule detection.
Expression Profiling of the MAP Kinase Phosphatase Family Reveals a Role for DUSP1 in the Glioblastoma Stem Cell Niche
The dual specificity phosphatases (DUSPs) constitute a family of stress-induced enzymes that provide feedback inhibition on mitogen-activated protein kinases (MAPKs) critical in key aspects of oncogenic signaling. While described in other tumor types, the landscape of DUSP mRNA expression in glioblastoma (GB) remains largely unexplored. Interrogation of the REpository for Molecular BRAin Neoplasia DaTa (REMBRANDT) revealed induced (DUSP4, DUSP6), repressed (DUSP2, DUSP7–9), or mixed (DUSP1, DUSP5, DUSP10, DUSP15) transcription of select DUSPs in bulk tumor specimens. To resolve features specific to the tumor microenvironment, we searched the Ivy Glioblastoma Atlas Project (Ivy GAP) repository, which highlights DUSP1, DUSP5, and DUSP6 as the predominant family members induced within pseudopalisading and perinecrotic regions. The inducibility of DUSP1 in response to hypoxia, dexamethasone, or the chemotherapeutic agent camptothecin was confirmed in GB cell lines and tumor-derived stem cells (TSCs). Moreover, we show that loss of DUSP1 expression is a characteristic of TSCs and correlates with expression of tumor stem cell markers in situ (ABCG2, PROM1, L1CAM, NANOG, SOX2). This work reveals a dynamic pattern of DUSP expression within the tumor microenvironment that reflects the cumulative effects of factors including regional ischemia and chemotherapeutic exposure, among others. Moreover, our observation regarding DUSP1 dysregulation within the stem cell niche argues for its importance in the survival and proliferation of this therapeutically resistant population.
Lung tumor cell classification with lightweight mobileNetV2 and attention-based SCAM enhanced faster R-CNN
Jenipher, V. Nisha
Radhika, S.
Evolving Systems2024Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
LIDC-IDRI
U-Net
Classification
MobileNetV2
Segmentation
Algorithm Development
Early and precise detection of lung tumor cells is paramount for providing adequate medication and increasing patient survival. To achieve this, an Enhanced Faster R-CNN with MobileNetV2 and SCAM framework is presented for improving the diagnostic accuracy of lung tumor cell classification. The U-Net architecture, optimized by Stochastic Gradient Descent (SGD), is employed to carry out clinical image segmentation. The approach leverages the lightweight MobileNetV2 backbone network to derive valuable features from the input clinical images while reducing the complexity of the network architecture, and the attention mechanism called the Spatial and Channel Attention Module (SCAM) to create spatially and channel-wise informative features, enhancing lung tumor cell feature representation and localization so that the network concentrates on important locations. To assess the efficacy of the method, several high-performance lung tumor cell classification techniques (ECNN, Lung-Retina Net, CNN-SVM, CCDC-HNN, and MTL-MGAN) and datasets including the Lung-PET-CT-Dx dataset, the LIDC-IDRI dataset, and the Chest CT-Scan images dataset were used for experimental evaluation. In a comprehensive comparative analysis across different metrics and methods, the proposed method achieves an accuracy of 98.6%, specificity of 96.8%, sensitivity of 97.5%, and precision of 98.2%. Furthermore, the experimental outcomes reveal that the proposed method reduces network complexity and obtains improved diagnostic outcomes with the available annotated data.
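The SCAM attention module is specific to this paper and is not sketched here, but attaching a lightweight MobileNetV2 backbone to a Faster R-CNN detector follows a standard torchvision pattern. The class count of 2 (tumor vs. background) is an assumption for illustration:

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

# MobileNetV2 feature extractor as a lightweight detection backbone.
backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
backbone.out_channels = 1280  # FasterRCNN needs to know the feature depth

anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = MultiScaleRoIAlign(featmap_names=["0"],
                                output_size=7, sampling_ratio=2)

# num_classes = 2 (tumor vs. background) is assumed for illustration.
model = FasterRCNN(backbone, num_classes=2,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
model.eval()
```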
Prostate cancer prediction from multiple pretrained computer vision model
John, Jisha
Ravikumar, Aswathy
Abraham, Bejoy
Health and Technology2021Journal Article, cited 0 times
Website
PROSTATEx
PROSTATE
Deep Learning
DenseNet
Computer Aided Detection (CADe)
Radiomics
Classification
The prostate is a male reproductive gland that secretes a thin alkaline fluid forming a major portion of the ejaculate. The gland has the shape of a small walnut, and cancer arising in it is called prostate cancer, which has the second-highest mortality rate according to studies. Therefore, its detection at an early stage, while it is still confined to the prostate gland, is life-saving and ensures a better chance of successful treatment. Existing preliminary screening approaches for its detection include the prostate-specific antigen (PSA) blood test and the digital rectal exam (DRE). In the proposed method we use two popular pretrained models for feature extraction, MobileNet and DenseNet. The extracted features are stacked and augmented and fed to a two-stage classifier that provides the prediction. The proposed system achieves an accuracy of 93.3% and outperforms other traditional approaches.
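The abstract does not detail the two-stage classifier or the augmentation step, so the sketch below only shows the feature-stacking idea: embeddings from pretrained MobileNetV2 and DenseNet121 are concatenated and fed to a stand-in logistic regression classifier (images and labels are placeholders):

```python
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

mobilenet = models.mobilenet_v2(weights="DEFAULT")
densenet = models.densenet121(weights="DEFAULT")
mobilenet.classifier = torch.nn.Identity()  # expose 1280-d embeddings
densenet.classifier = torch.nn.Identity()   # expose 1024-d embeddings
mobilenet.eval(); densenet.eval()

@torch.no_grad()
def stacked_features(batch):
    """Concatenate MobileNet and DenseNet embeddings for each image."""
    return torch.cat([mobilenet(batch), densenet(batch)], dim=1).numpy()

images = torch.randn(8, 3, 224, 224)   # placeholder image batch
labels = [0, 1, 0, 1, 0, 1, 0, 1]      # placeholder labels
clf = LogisticRegression(max_iter=1000).fit(stacked_features(images), labels)
```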
Social group optimization–assisted Kapur’s entropy and morphological segmentation for automated detection of COVID-19 infection from computed tomography images
Dey, Nilanjan
Rajinikanth, V
Fong, Simon James
Kaiser, M Shamim
Mahmud, Mufti
Cognitive Computation2020Journal Article, cited 0 times
LIDC-IDRI
RIDER
COVID-19
Segmentation
Machine Learning
Deep Learning–Based Approaches to Improve Classification Parameters for Diagnosing COVID-19 from CT Images
Yasar, H.
Ceylan, M.
Cognit Comput2021Journal Article, cited 0 times
Website
LIDC-IDRI
Deep Learning
Computed Tomography (CT)
Classification
Algorithm Development
Patients infected with the COVID-19 virus develop severe pneumonia, which generally leads to death. Radiological evidence has demonstrated that the disease causes interstitial involvement in the lungs and lung opacities, as well as bilateral ground-glass opacities and patchy opacities. In this study, new pipeline suggestions are presented, and their performance is tested to decrease the number of false-negative (FN), false-positive (FP), and total misclassified images (FN + FP) in the diagnosis of COVID-19 (COVID-19/non-COVID-19 and COVID-19 pneumonia/other pneumonia) from CT lung images. A total of 4320 CT lung images, of which 2554 were related to COVID-19 and 1766 to non-COVID-19, were used for the test procedures in COVID-19 and non-COVID-19 classifications. Similarly, a total of 3801 CT lung images, of which 2554 were related to COVID-19 pneumonia and 1247 to other pneumonia, were used for the test procedures in COVID-19 pneumonia and other pneumonia classifications. A 24-layer convolutional neural network (CNN) architecture was used for the classification processes. Within the scope of this study, the results of two experiments were obtained by using CT lung images with and without local binary pattern (LBP) application, and sub-band images were obtained by applying dual-tree complex wavelet transform (DT-CWT) to these images. Next, new classification results were calculated from these two results by using the five pipeline approaches presented in this study. For COVID-19 and non-COVID-19 classification, the highest sensitivity, specificity, accuracy, F-1, and AUC values obtained without using pipeline approaches were 0.9676, 0.9181, 0.9456, 0.9545, and 0.9890, respectively; using pipeline approaches, the values were 0.9832, 0.9622, 0.9577, 0.9642, and 0.9923, respectively. For COVID-19 pneumonia/other pneumonia classification, the highest sensitivity, specificity, accuracy, F-1, and AUC values obtained without using pipeline approaches were 0.9615, 0.7270, 0.8846, 0.9180, and 0.9370, respectively; using pipeline approaches, the values were 0.9915, 0.8140, 0.9071, 0.9327, and 0.9615, respectively. The results of this study show that classification success can be increased by reducing the time to obtain per-image results through using the proposed pipeline approaches.
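The LBP preprocessing step described above can be reproduced with scikit-image; the DT-CWT sub-band decomposition would require an additional package (e.g. dtcwt) and is not shown. Parameter values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_image(ct_slice: np.ndarray, points: int = 8,
              radius: float = 1.0) -> np.ndarray:
    """Apply a local binary pattern transform to a 2D CT slice."""
    return local_binary_pattern(ct_slice, P=points, R=radius, method="uniform")

slice_2d = (np.random.rand(512, 512) * 255).astype(np.uint8)  # placeholder
textured = lbp_image(slice_2d)
```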
Multi image super resolution of MRI images using generative adversarial network
Nimitha, U.
Ameer, P. M.
Journal of Ambient Intelligence and Humanized Computing2024Journal Article, cited 0 times
Website
PROSTATE-DIAGNOSIS
Convolutional Neural Network (CNN)
Generative Adversarial Network (GAN)
Classification
Image Enhancement/methods
In recent decades, computer-aided medical image analysis has become a popular technique for disease detection and diagnosis. Deep learning-based image processing techniques have gained popularity over conventional techniques in areas such as remote sensing, computer vision, and healthcare. However, hardware limitations, acquisition time, low radiation dose, and patient motion can limit the quality of medical images and result in low-resolution (LR) images. High-resolution medical images localize disease regions more accurately than low-resolution ones. To enhance the quality of LR medical images, we propose a multi-image super-resolution architecture using a generative adversarial network (GAN), with a generator that employs multi-stage feature extraction incorporating both residual blocks and an attention network, and a discriminator with fewer convolutional layers to reduce computational complexity. The method enhances the resolution of LR prostate cancer MRI images by combining multiple MRI slices with slight spatial shifts, utilizing shared weights for feature extraction from each MRI image. Unlike super-resolution techniques in the literature, the network uses a perceptual loss computed by fine-tuning the VGG19 network with sparse categorical cross-entropy loss; the features for the perceptual loss are extracted from the final dense layer rather than from a convolutional block, as is done in the literature. Our experiments were conducted on MRI images with resolutions of 80x80 (low resolution) and 320x320 (high resolution), achieving x4 upscaling. The experimental analysis shows that the proposed model outperforms existing deep learning architectures for super-resolution, with an average peak signal-to-noise ratio (PSNR) of 30.58 ± 0.76 dB and an average structural similarity index measure (SSIM) of 0.8105 ± 0.0656 for prostate MRI images. The application of a CNN-based SVM classifier confirmed that enhancing the resolution of normal LR brain MRI images using super-resolution techniques did not produce any false positive cases. The same architecture has the potential to be extended to other medical imaging modalities as well.
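The paper computes its perceptual loss from a fine-tuned VGG19 dense layer; the sketch below instead uses frozen ImageNet convolutional features, which is the more common variant, so it should be read as an illustration of the general idea rather than the paper's exact loss:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class VGGPerceptualLoss(nn.Module):
    """Perceptual loss: L2 distance between VGG19 feature maps of two images.
    Uses frozen ImageNet convolutional features (an assumption; the paper
    fine-tunes VGG19 and taps its final dense layer instead)."""
    def __init__(self, layers: int = 16):
        super().__init__()
        vgg = models.vgg19(weights="DEFAULT").features[:layers]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()

    def forward(self, sr, hr):
        return nn.functional.mse_loss(self.vgg(sr), self.vgg(hr))

loss_fn = VGGPerceptualLoss()
sr = torch.randn(1, 3, 320, 320)  # super-resolved output (placeholder)
hr = torch.randn(1, 3, 320, 320)  # ground-truth high-resolution image
print(loss_fn(sr, hr).item())
```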
An intelligent lung tumor diagnosis system using whale optimization algorithm and support vector machine
Vijh, Surbhi
Gaur, Deepak
Kumar, Sushil
International Journal of System Assurance Engineering and Management2019Journal Article, cited 0 times
LIDC-IDRI
Medical image processing techniques are widely used for the detection of tumors to increase the survival rate of patients. The development of computer-aided diagnosis systems shows improvement in observing the medical image and determining the treatment stages. Earlier detection of tumors reduces the mortality of lung cancer by increasing the probability of successful treatment. In this paper, an intelligent lung tumor diagnosis system is developed using various image processing techniques. The simulated steps involve image enhancement, image segmentation, post-processing, feature extraction, feature selection, and classification using support vector machine (SVM) kernels. The gray level co-occurrence matrix (GLCM) method is used for extracting 19 texture and statistical features from lung computed tomography (CT) images. The whale optimization algorithm (WOA) is used to select the most prominent feature subset. The contribution of this paper is the development of WOA_SVM to automate the aided diagnosis system for determining whether a lung CT image is normal or abnormal. An improved technique is developed using the whale optimization algorithm for optimal feature selection to obtain accurate results and construct a robust model. The performance of the proposed methodology is evaluated using accuracy, sensitivity, and specificity, obtained as 95%, 100%, and 92%, respectively, using the radial basis function support vector kernel.
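GLCM texture descriptors of the kind used here can be computed with scikit-image. The four properties below are only a subset of the paper's 19 features, and the WOA selection step is not shown:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

ct_slice = (np.random.rand(128, 128) * 255).astype(np.uint8)  # placeholder

# Co-occurrence matrix for one pixel distance and four directions.
glcm = graycomatrix(ct_slice, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```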
Convolutional neural networks: an overview and application in radiology
Yamashita, Rikiya
Nishio, Mizuho
Do, Richard Kinh Gian
Togashi, Kaori
2018Journal Article, cited 0 times
TCGA-CESC
Convolutional neural network (CNN), a class of artificial neural networks that has become dominant in various computer vision tasks, is attracting interest across a variety of domains, including radiology. CNN is designed to automatically and adaptively learn spatial hierarchies of features through backpropagation by using multiple building blocks, such as convolution layers, pooling layers, and fully connected layers. This review article offers a perspective on the basic concepts of CNN and its application to various radiological tasks, and discusses its challenges and future directions in the field of radiology. Two challenges in applying CNN to radiological tasks, small dataset and overfitting, will also be covered in this article, as well as techniques to minimize them. Being familiar with the concepts and advantages, as well as limitations, of CNN is essential to leverage its potential in diagnostic radiology, with the goal of augmenting the performance of radiologists and improving patient care.
Key Points
• Convolutional neural network is a class of deep learning methods which has become dominant in various computer vision tasks and is attracting interest across a variety of domains, including radiology.
• Convolutional neural network is composed of multiple building blocks, such as convolution layers, pooling layers, and fully connected layers, and is designed to automatically and adaptively learn spatial hierarchies of features through a backpropagation algorithm.
• Familiarity with the concepts and advantages, as well as limitations, of convolutional neural network is essential to leverage its potential to improve radiologist performance and, eventually, patient care.
Bone suppression for chest X-ray image using a convolutional neural filter
Matsubara, N.
Teramoto, A.
Saito, K.
Fujita, H.
Australas Phys Eng Sci Med2019Journal Article, cited 0 times
LIDC-IDRI
Image Enhancement/methods
Chest X-rays are used for mass screening for the early detection of lung cancer. However, lung nodules are often overlooked because of bones overlapping the lung fields. Bone suppression techniques based on artificial intelligence have been developed to solve this problem, but bone suppression accuracy still needs improvement. In this study, we propose a convolutional neural filter (CNF) for bone suppression based on a convolutional neural network, which is frequently used in the medical field and has excellent performance in image processing. The CNF outputs a value for the bone component of the target pixel from the pixel values in its neighborhood. By processing all positions in the input image, a bone-extracted image is generated. Finally, the bone-suppressed image is obtained by subtracting the bone-extracted image from the original chest X-ray image. Bone suppression was most accurate when using a CNF with six convolutional layers, yielding a bone suppression rate of 89.2%. In addition, abnormalities, if present, were effectively imaged by suppressing only bone components and maintaining soft tissue. These results suggest that the chances of missing abnormalities may be reduced by using the proposed method. The proposed method is useful for bone suppression in chest X-ray images.
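A minimal sketch of the predict-and-subtract idea follows. The six-convolution depth matches the paper's best configuration, but the kernel sizes and channel widths are assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class ConvNeuralFilter(nn.Module):
    """Sketch: predict the bone component of each pixel from its
    neighbourhood with six conv layers (channel widths are assumptions)."""
    def __init__(self):
        super().__init__()
        layers, ch = [], 1
        for out_ch in (16, 32, 32, 32, 16, 1):
            layers += [nn.Conv2d(ch, out_ch, kernel_size=3, padding=1),
                       nn.ReLU()]
            ch = out_ch
        self.net = nn.Sequential(*layers[:-1])  # no ReLU on the output layer

    def forward(self, x):
        return self.net(x)

cnf = ConvNeuralFilter()
chest_xray = torch.rand(1, 1, 256, 256)  # placeholder chest X-ray
bone = cnf(chest_xray)                   # estimated bone component
soft_tissue = chest_xray - bone          # bone-suppressed image
```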
Generating missing patient anatomy from partially acquired cone-beam computed tomography images using deep learning: a proof of concept
Shields, B.
Ramachandran, P.
Phys Eng Sci Med2023Journal Article, cited 0 times
Website
NLST
Cone-beam computed tomography
Machine learning
Radiotherapy
The patient setup technique currently in practice in most radiotherapy departments utilises on-couch cone-beam computed tomography (CBCT) imaging. Patients are positioned on the treatment couch using visual markers, followed by fine adjustments to the treatment couch position depending on the shift observed between the computed tomography (CT) image acquired for treatment planning and the CBCT image acquired immediately before commencing treatment. The field of view of CBCT images is limited to the size of the kV imager which leads to the acquisition of partial CBCT scans for lateralised tumors. The cone-beam geometry results in high amounts of streaking artifacts and in conjunction with limited anatomical information reduces the registration accuracy between planning CT and the CBCT image. This study proposes a methodology that can improve radiotherapy patient setup CBCT images by removing streaking artifacts and generating the missing patient anatomy with patient-specific precision. This research was split into two separate studies. In Study A, synthetic CBCT (sCBCT) data was created and used to train two machine learning models, one for removing streaking artifacts and the other for generating the missing patient anatomy. In Study B, planning CT and on-couch CBCT data from several patients was used to train a base model, from which a transfer of learning was performed using imagery from a single patient, producing a patient-specific model. The models developed for Study A performed well at removing streaking artifacts and generating the missing anatomy. The outputs yielded in Study B show that the model understands the individual patient and can generate the missing anatomy from partial CBCT datasets. The outputs generated demonstrate that there is utility in the proposed methodology which could improve the patient setup and ultimately lead to improving overall treatment quality.
Improving Brain Tumor Diagnosis Using MRI Segmentation Based on Collaboration of Beta Mixture Model and Learning Automata
Edalati-rad, Akram
Mosleh, Mohammad
Arabian Journal for Science and Engineering2018Journal Article, cited 0 times
Website
brain cancer
segmentation
beta mixture
learning automata (LA)
dice similarity score (DSS)
Jaccard similarity Index (JSI)
GBM
Prostate Segmentation via Dynamic Fusion Model
Ocal, Hakan
Barisci, Necaattin
Arabian Journal for Science and Engineering2022Journal Article, cited 0 times
Website
NCI-ISBI 2013 Challenge: Automated Segmentation of Prostate Structures
Segmentation
PROSTATE
Nowadays, many different methods are used in diagnosing prostate cancer. Among these, MRI-based imaging methods provide more precise information than others by imaging the prostate in different planes (axial, sagittal, coronal). However, manually segmenting these images is very time-consuming and laborious. Another challenge is the inhomogeneous and inconsistent appearance around the prostate borders, which is essential for cancer diagnosis. Scientists are therefore working intensively on deep learning-based techniques to identify prostate boundaries more efficiently and with high accuracy. In this study, a dynamic fusion architecture is proposed, fusing Unet + Resnet3D and Unet + Resnet2D models. Evaluation experiments were performed on the MICCAI 2012 Prostate Segmentation Challenge Dataset (PROMISE12) and the NCI-ISBI 2013 (NCI_ISBI-13) Prostate Segmentation Challenge Dataset. Comparative analyses show that the advantages and robustness of our method are superior to state-of-the-art approaches.
Automated segmentation of the larynx on computed tomography images: a review
Rao, Divya
K, Prakashini
Singh, Rohit
J, Vijayananda
2022Journal Article, cited 0 times
TCGA-HNSC
The larynx, or voice-box, is a common site of occurrence of Head and Neck cancers, yet automated segmentation of the larynx has received very little attention. Segmentation of organs is an essential step in cancer treatment planning. Computed Tomography scans are routinely used to assess the extent of tumor spread in the Head and Neck as they are fast to acquire and tolerant to some movement. This paper reviews various automated detection and segmentation methods used for the larynx on Computed Tomography images. Image registration and deep learning approaches to segmenting the laryngeal anatomy are compared, highlighting their strengths and shortcomings. A list of available annotated laryngeal computed tomography datasets is compiled to encourage further research, and commercial software currently available for larynx contouring is briefly reviewed. We conclude that the lack of standardisation on larynx boundaries and the complexity of the relatively small structure make automated segmentation of the larynx on computed tomography images a challenge. Reliable computer-aided intervention in the contouring and segmentation process will help clinicians easily verify their findings and look for oversights in diagnosis. This review is useful for research that applies artificial intelligence to Head and Neck cancer, specifically work dealing with the segmentation of laryngeal anatomy.
Automated grading of non-small cell lung cancer by fuzzy rough nearest neighbour method
Moitra, Dipanjan
Mandal, Rakesh Kr
Network Modeling Analysis in Health Informatics and Bioinformatics2019Journal Article, cited 0 times
NSCLC Radiogenomics
Classification
Lung cancer is one of the most lethal diseases across the world. Most lung cancers belong to the category of non-small cell lung cancer (NSCLC). Many studies have so far been carried out to avoid the hazards and bias of manual classification of NSCLC tumors. A few of these were aimed at automated nodal staging using standard machine learning algorithms; many others tried to classify tumors as either benign or malignant. None of these studies considered the pathological grading of NSCLC. Automated grading may precisely depict the dissimilarity between normal tissue and cancer-affected tissue. Such automation may save patients from undergoing a painful biopsy and may also help radiologists or oncologists in grading the tumor or lesion correctly. The present study aims at the automated grading of NSCLC tumors using the fuzzy rough nearest neighbour (FRNN) method. The dataset was extracted from The Cancer Imaging Archive and comprised PET/CT images of NSCLC tumors of 211 patients. The features-from-accelerated-segment-test (FAST) and histogram of oriented gradients (HOG) methods were used to detect and extract features from the segmented images. Gray level co-occurrence matrix (GLCM) features were also considered in the study. The features, along with the clinical grading information, were fed into four machine learning algorithms: FRNN, logistic regression, multi-layer perceptron, and support vector machine. The results were thoroughly compared in the light of various evaluation metrics. The confusion matrix was balanced, and the outcome was the most cost-effective for FRNN. Results were also compared with various other leading studies done earlier in this field. The proposed FRNN model performed satisfactorily during the experiment. Further exploration of FRNN may be very helpful for radiologists and oncologists in planning the treatment for NSCLC. More varieties of cancers may be considered while conducting similar studies.
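The FAST and HOG steps can be sketched with scikit-image; the FRNN classifier itself is not shown, and all parameter values below are illustrative assumptions:

```python
import numpy as np
from skimage.feature import hog, corner_fast, corner_peaks

image = np.random.rand(128, 128)  # placeholder segmented PET/CT slice

# FAST corner response, then peak coordinates of candidate keypoints.
keypoints = corner_peaks(corner_fast(image), min_distance=5)

# HOG descriptor for the whole slice (per-keypoint patches would also work).
descriptor = hog(image, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2))
print(keypoints.shape, descriptor.shape)
```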
Imaging Biomarker Ontology (IBO): A Biomedical Ontology to Annotate and Share Imaging Biomarker Data
Amdouni, Emna
Gibaud, Bernard
Journal on Data Semantics2018Journal Article, cited 0 times
Website
TCGA-GBM
dicom
Biomarker Retrieval and Knowledge Reasoning System (BiomRKRS)
ontology
Automated AJCC (7th edition) staging of non-small cell lung cancer (NSCLC) using deep convolutional neural network (CNN) and recurrent neural network (RNN)
Moitra, Dipanjan
Mandal, Rakesh Kr
Health Inf Sci Syst2019Journal Article, cited 0 times
NSCLC Radiogenomics
LUNG
Classification
Deep Learning
Machine Learning
Deep convolutional neural network (DCNN)
Purpose: A large proportion of lung cancers are of the type non-small cell lung cancer (NSCLC). Both treatment planning and patient prognosis depend greatly on factors like AJCC staging, which is an abstraction over TNM staging. Many significant efforts have so far been made towards automated staging of NSCLC, but a groundbreaking application of deep neural networks (DNNs) is yet to be observed in this domain of study. A DNN is capable of achieving a higher level of accuracy than traditional artificial neural networks (ANNs) as it uses deeper convolutional neural network (CNN) layers. The objective of the present study is to propose a simple yet fast CNN model combined with a recurrent neural network (RNN) for automated AJCC staging of NSCLC and to compare the outcome with a few standard machine learning algorithms along with a few similar studies. Methods: The NSCLC radiogenomics collection from The Cancer Imaging Archive (TCIA) was considered for the study. The tumor images were refined and filtered by resizing, enhancing, de-noising, etc. The initial image processing phase was followed by texture-based image segmentation. The segmented images were fed into a hybrid feature detection and extraction model comprising two sequential phases: maximally stable extremal regions (MSER) and speeded up robust features (SURF). After prolonged experimentation, the desired CNN-RNN model was derived and the extracted features were fed into the model. Results: The proposed CNN-RNN model almost outperformed the other machine learning algorithms under consideration. The accuracy remained steadily higher than that of other contemporary studies. Conclusion: The proposed CNN-RNN model performed commendably during the study. Further studies may be carried out to refine the model and develop an improved auxiliary decision support system for oncologists and radiologists.
Computer-aided diagnosis of hepatocellular carcinoma fusing imaging and structured health data
Menegotto, A. B.
Becker, C. D. L.
Cazella, S. C.
Health Inf Sci Syst2021Journal Article, cited 0 times
Website
Algorithm Development
TCGA-STAD
TCGA-LIHC
TCGA-KIRP
CPTAC-PDA
Deep Learning
Classification
Computer Aided Diagnosis (CADx)
Introduction: Hepatocellular carcinoma is the most prevalent primary liver cancer, a silent disease that killed 782,000 people worldwide in 2018. Multimodal deep learning is the application of deep learning techniques fusing more than one data modality as the model's input. Purpose: A computer-aided diagnosis system for hepatocellular carcinoma developed with multimodal deep learning approaches could use multiple data modalities, as recommended by clinical guidelines, and enhance the robustness and value of the second opinion given to physicians. This article describes the creation and evaluation of an algorithm for computer-aided diagnosis of hepatocellular carcinoma developed with multimodal deep learning techniques, fusing preprocessed computed tomography images with structured data from patient Electronic Health Records. Results: The classification performance achieved by the proposed algorithm on the test dataset was: accuracy = 86.9%, precision = 89.6%, recall = 86.9%, and F-score = 86.7%. These classification performance metrics are close to the state of the art in this area and were achieved with data modalities that are cheaper than traditional Magnetic Resonance Imaging approaches, enabling the use of the proposed algorithm by small and mid-sized healthcare institutions. Conclusion: The classification performance achieved with the multimodal deep learning algorithm is higher than human specialists' diagnostic performance using only CT for diagnosis. Even though the results are promising, the multimodal deep learning architecture used for hepatocellular carcinoma prediction needs more training and testing with different datasets before physicians can use the proposed algorithm in real healthcare routines. The additional training aims to confirm the classification performance achieved and enhance the model's robustness.
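A common way to fuse imaging with structured EHR data is concatenating a CNN image embedding with the tabular features before a shared classification head. The sketch below illustrates that pattern only; the backbone choice, feature counts, and class count are assumptions, not the paper's exact model:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MultimodalNet(nn.Module):
    """Sketch: fuse CT image embeddings with structured EHR features by
    concatenation before a shared classification head."""
    def __init__(self, n_ehr_features: int = 20, n_classes: int = 4):
        super().__init__()
        backbone = models.resnet18(weights="DEFAULT")
        backbone.fc = nn.Identity()  # expose 512-d image embedding
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + n_ehr_features, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, image, ehr):
        fused = torch.cat([self.backbone(image), ehr], dim=1)
        return self.head(fused)

model = MultimodalNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 20))
```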
“Radiotranscriptomics”: A synergy of imaging and transcriptomics in clinical assessment
Katrib, Amal
Hsu, William
Bui, Alex
Xing, Yi
Quantitative Biology2016Journal Article, cited 0 times
Radiogenomics
Quantitative evaluation of robust skull stripping and tumor detection applied to axial MR images
Chaddad, Ahmad
Tanougast, Camel
Brain Informatics2016Journal Article, cited 28 times
Website
TCGA-GBM
Fully automatic isolation of the brain from non-brain tissues may be affected by the presence of radio-frequency non-homogeneity in MR images (MRI), regional anatomy, MR sequences, and the subjects of the study. To automate brain tumor (glioblastoma) detection, we propose a novel skull-stripping approach for axial slices derived from MRI. The brain tumor is then detected using multi-level threshold segmentation based on histogram analysis. Skull stripping is performed with an adaptive morphological operations approach that iteratively computes an empirical threshold from the area of brain tissue; it was employed on the registration of non-contrast T1-weighted (T1-WI) images and their corresponding fluid-attenuated inversion recovery sequences. We then used the multi-threshold segmentation (MTS) method proposed by Otsu. We calculated performance metrics based on similarity coefficients for patients (n = 120) with tumors. The adaptive skull-stripping algorithm and the MTS of segmented tumors achieved efficient preliminary results, with Dice similarity coefficients of 92% and 80% and false-negative rates of 0.3% and 25.8%, respectively. The adaptive skull-stripping algorithm provides robust results, and the tumor area for medical diagnosis was determined by MTS.
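Otsu-style multi-level thresholding is available in scikit-image. A minimal sketch follows; the adaptive skull-stripping step is omitted, and the three-class split (e.g. background, normal tissue, tumor) is an assumption:

```python
import numpy as np
from skimage.filters import threshold_multiotsu

slice_t1 = (np.random.rand(256, 256) * 255).astype(np.uint8)  # placeholder

# Three intensity classes -> two thresholds from the histogram.
thresholds = threshold_multiotsu(slice_t1, classes=3)
regions = np.digitize(slice_t1, bins=thresholds)
print(thresholds, np.unique(regions))
```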
Nerve optic segmentation in CT images using a deep learning model and a texture descriptor
Ranjbarzadeh, Ramin
Dorosti, Shadi
Jafarzadeh Ghoushchi, Saeid
Safavi, Sadaf
Razmjooy, Navid
Tataei Sarshar, Nazanin
Anari, Shokofeh
Bendechache, Malika
Complex & Intelligent Systems2022Journal Article, cited 1 times
Website
Deep convolution neural network
Deep Learning
Computed Tomography (CT)
Brain
Multimodal Retrieval Framework for Brain Volumes in 3D MR Volumes
Sarathi, Mangipudi Partha
Ansari, Mohammad Ahmad
Journal of Medical and Biological Engineering2017Journal Article, cited 1 times
Website
The paper presents a retrieval framework for extracting similar 3D tumor volumes from magnetic resonance brain volumes in response to a query tumor volume. Similar volumes correspond to closeness in the spatial location of the brain structures. The query slice pertains to a new tumor volume of a patient, and the output slices belong to tumor volumes from previous case histories stored in the database. The framework could be of immense help to medical practitioners: it might prove to be a useful diagnostic aid for the medical expert and also serve as a teaching aid for researchers.
Morphological Changes of Liver Among Post-Fontan Surgery Patients
Nainamalai, Varatharajan
Jenssen, Håvard Bjørke
Tun Suha, Khadiza
Rezaeitaleshmahalleh, Mostafa
Wang, Min
Khan, Sarah
Haw, Marcus
Jiang, Jingfeng
Vettukattil, Joseph
Journal of Medical and Biological Engineering2024Journal Article, cited 0 times
Website
Healthy-Total-Body-CTs
LIVER
Radiomics
Computed Tomography (CT)
Purpose
Liver screening and longitudinal study of Fontan Associated Liver Disease (FALD) are essential for identifying hepatomegaly and understanding how it relates to various stages of liver fibrosis. In this study, we investigated longitudinal liver shape changes and liver stiffness in a cohort of patients with Fontan Associated Liver Disease.
Methods
We used 170 image volumes of 40 Fontan stage 3 completion patients. We also used 65 computed tomography images of healthy individuals from three datasets for comparison. Thirteen radiomic shape features of Fontan patients and individuals with a healthy liver were extracted and analyzed longitudinally. We studied correlations among features, liver spleen ratio, and liver stiffness with shape features.
Results
The enlargement of the liver, along with all shape features, was observed in all post-surgery intervals related to hepatomegaly and fibrosis. The shape features of healthy individuals and Fontan cases differ significantly in the longitudinal analysis and in the liver-spleen ratio. There is a positive correlation among body mass index, body surface area, age, Fontan surgery years, and liver stiffness.
Conclusion
The changes in shape features between Fontan patients and healthy subjects are statistically significant, reflecting their relation to hepatomegaly and liver fibrosis. Accurate delineation of these features with artificial intelligence-based segmentation could serve as a valuable adjunct for the clinical follow-up of Fontan patients.
Multi-Classification of Brain Tumor MRI Images Using Deep Convolutional Neural Network with Fully Optimized Framework
Irmak, Emrah
Iranian Journal of Science and Technology, Transactions of Electrical Engineering2021Journal Article, cited 0 times
Website
RIDER NEURO MRI
REMBRANDT
TCGA-LGG
Computer Aided Detection (CADe)
Convolutional Neural Network (CNN)
Classification
BRAIN
Brain tumor diagnosis and classification still rely on histopathological analysis of biopsy specimens today. The current method is invasive, time-consuming, and prone to manual errors. These disadvantages show how essential it is to develop a fully automated deep learning method for multi-classification of brain tumors. This paper aims at multi-classification of brain tumors for early diagnosis purposes using convolutional neural networks (CNNs). Three different CNN models are proposed for three different classification tasks. Brain tumor detection is achieved with 99.33% accuracy using the first CNN model. The second CNN model can classify the brain tumor into five types (normal, glioma, meningioma, pituitary, and metastatic) with an accuracy of 92.66%. The third CNN model can classify the brain tumors into three grades (Grade II, Grade III, and Grade IV) with an accuracy of 98.14%. All the important hyper-parameters of the CNN models are automatically designated using the grid search optimization algorithm. To the best of the author's knowledge, this is the first study for multi-classification of brain tumor MRI images using CNNs in which almost all hyper-parameters are tuned by the grid search optimizer. The proposed CNN models are compared with other popular state-of-the-art CNN models such as AlexNet, Inceptionv3, ResNet-50, VGG-16, and GoogleNet. Satisfactory classification results are obtained using large and publicly available clinical datasets. The proposed CNN models can be employed to assist physicians and radiologists in validating their initial screening for brain tumor multi-classification purposes.
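Grid search over CNN hyper-parameters amounts to exhaustively training one model per grid point and keeping the best validation score. A minimal framework-neutral sketch follows; the grid values and the placeholder training function are assumptions, not the paper's search space:

```python
import random
from itertools import product

# Hypothetical hyper-parameter grid; the paper's exact search space is
# not reproduced in the abstract.
grid = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [16, 32],
    "dropout": [0.25, 0.5],
}

def train_and_validate(**params) -> float:
    """Placeholder: train a CNN with these settings, return val accuracy."""
    return random.random()  # stand-in for a real training run

best_score, best_params = -1.0, None
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    score = train_and_validate(**params)
    if score > best_score:
        best_score, best_params = score, params
print(best_params, best_score)
```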
A novel decentralized model for storing and sharing neuroimaging data using ethereum blockchain and the interplanetary file system
Batchu, Sai
Henry, Owen S.
Hakim, Abraham A.
International Journal of Information Technology2021Journal Article, cited 0 times
Website
REMBRANDT
Information Storage and Retrieval
Current methods to store and transfer medical neuroimaging data raise issues with security and transparency, and novel protocols are needed. Ethereum smart contracts present an encouraging new option. Ethereum is an open-source platform that allows users to construct smart contracts—self-executable packages of code that exist in the Ethereum state and allow transactions under programmed conditions. The present study developed a proof-of-concept smart contract that stores patient brain tumor data such as patient identifier, disease, grade, chemotherapy drugs, and Karnofsky score. The InterPlanetary file system was used to efficiently store the image files, and the corresponding content identifier hashes were stored within the smart contracts. Testing with a private, proof-of-authority network required only 889 MB of memory per insertion to insert 350 patient records, while retrieval required 910 MB. Inserting 350 patient records required 907 ms. The concept presented in this study exemplifies the use of smart contracts and off chain data storage for efficient retrieval/insertion of medical neuroimaging data.
Cuckoo search based multi-objective algorithm with decomposition for detection of masses in mammogram images
Bhalerao, Pramod B.
Bonde, Sanjiv V.
International Journal of Information Technology2021Journal Article, cited 0 times
Website
CBIS-DDSM
mini-MIAS
Computer Aided Detection (CADe)
BREAST
Mammography
Machine Learning
Breast cancer is the most common cancer in the United States after skin cancer. Early detection of masses in mammograms helps reduce the death rate. This paper provides a hybrid approach based on a multiobjective evolutionary algorithm (MOEA) and cuckoo search, using cuckoo search to decompose the problem into a single objective (one nest) for each Pareto-optimal solution. The proposed method, CS-MOEA/DE, is evaluated using the MIAS and DDSM datasets. The novel hybrid approach combines nature-inspired cuckoo search and multiobjective optimization with differential evolution, which is unique, and includes detection of masses in a mammogram. The proposed work is evaluated on 110 (50 + 60) images; the overall accuracy of the proposed hybrid method is 96.74%. The experimental outcome shows that our proposed method provides better results than other state-of-the-art methods such as the Otsu method, Kapur's entropy, and cuckoo search-based modified BHE.
Applications of Deep Neural Networks with Fractal Structure and Attention Blocks for 2D and 3D Brain Tumor Segmentation
In this paper, we propose a novel deep neural network (DNN) architecture with fractal structure and attention blocks. The new method is tested on identifying and segmenting 2D and 3D brain tumor masks in normal and pathological neuroimaging data. To circumvent the problem of limited 3D volumetric datasets with raw and ground truth tumor masks, we utilized data augmentation using affine transformations to significantly expand the training data prior to estimating the network model parameters. The proposed Attention-based Fractal Unet (AFUnet) technique combines the benefits of fractal convolutional networks, attention blocks, and the encoder-decoder structure of Unet. The AFUnet models are fit on training data and their performance is assessed on independent validation and testing datasets. The Dice score is used to measure and contrast the performance of AFUnet against alternative methods, such as Unet, attention Unet, and several other DNN models with a comparable number of parameters. In addition, we explore the effect of network depth on AFUnet prediction accuracy. The results suggest that with a few network structure iterations, the attention-based fractal Unet achieves good performance. Although a deeper nested network structure certainly improves prediction accuracy, this comes at a very substantial computational cost, so the benefits of fitting deeper AFUnet models must be weighed against the extra time and computational demands. Some of the AFUnet networks outperform current state-of-the-art models and achieve highly accurate and realistic brain-tumor boundary segmentation (contours in 2D and surfaces in 3D). In our experiments, the sensitivity of the Dice score to capture significant inter-model differences is marginal. However, there is improved validation loss during long periods of AFUnet training. The lower binary cross-entropy loss suggests that AFUnet is superior at finding true negative voxels (i.e., identifying normal tissue), which suggests the new method is more conservative. This approach may be generalized to higher-dimensional data, e.g., 4D fMRI hypervolumes, and applied to a wide range of signal, image, volume, and hypervolume segmentation tasks.
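Affine augmentation of the kind described can be sketched with torchvision applied slice-wise; the rotation, translation, scale, and shear ranges below are illustrative assumptions, not the paper's settings:

```python
import torch
from torchvision import transforms

# Random affine transforms expand limited training data.
augment = transforms.RandomAffine(degrees=15, translate=(0.1, 0.1),
                                  scale=(0.9, 1.1), shear=5)

mri_slice = torch.rand(1, 240, 240)  # placeholder MRI slice (C, H, W)
augmented = augment(mri_slice)
```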
A survey on lung CT datasets and research trends
Adiraju, Rama Vasantha
Elias, Susan
Research on Biomedical Engineering2021Journal Article, cited 0 times
NSCLC-Radiomics
SPIE-AAPM Lung CT Challenge
Purpose
Lung cancer is the most dangerous of all forms of cancer and has the highest occurrence rate worldwide. Early detection of lung cancer is a difficult task. Medical images generated by computed tomography (CT) are being used extensively for lung cancer analysis and research. However, it is essential to have a well-organized image database in order to design a reliable computer-aided diagnosis (CAD) tool, and identifying the most appropriate dataset for the research is another big challenge.
Literature review
The objective of this paper is to present a review of literature related to lung CT datasets. The Cancer Imaging Archive (TCIA) consortium collates different types of cancer datasets and permits public access through an integrated search engine. This survey summarizes the research work done using lung CT datasets maintained by TCIA. The motivation for this survey was to help the research community select the right lung dataset and to provide a comprehensive summary of research developments in the field.
Fusing clinical and image data for detecting the severity level of hospitalized symptomatic COVID-19 patients using hierarchical model
Ershadi, Mohammad Mahdi
Rise, Zeinab Rahimi
Research on Biomedical Engineering2023Journal Article, cited 0 times
Website
COVID-19-AR
Radiomic features
Deep Learning
Clustering
MATLAB
Purpose
Based on medical reports, it is hard to determine the levels of different hospitalized symptomatic COVID-19 patients according to their features in a short time. Besides, there are common and special features for COVID-19 patients at different levels, based on physicians' knowledge, that make diagnosis difficult. For this purpose, a hierarchical model is proposed in this paper based on experts' knowledge, fuzzy C-means (FCM) clustering, and an adaptive neuro-fuzzy inference system (ANFIS) classifier.
Methods
Experts considered a special set of features for different groups of COVID-19 patients to find their treatment plans. Accordingly, the structure of the proposed hierarchical model is designed based on experts' knowledge. In the proposed model, we applied clustering methods to patients' data to determine some clusters. Then, we learn classifiers for each cluster in a hierarchical model. Regarding the different common and special features of patients, FCM is considered for the clustering method. Besides, ANFIS had better performance than other classification methods. Therefore, FCM and ANFIS were considered to design the proposed hierarchical model. FCM finds the membership degree of each patient's data based on common and special features of different clusters to reinforce the ANFIS classifier. Next, ANFIS identifies the need of hospitalized symptomatic COVID-19 patients for the ICU and finds whether or not they are in the end stage (mortality target class). Two real datasets about COVID-19 patients are analyzed in this paper using the proposed model. One of these datasets had only clinical features and the other had both clinical and image features. Therefore, appropriate features are extracted using image processing and deep learning methods.
Results
According to the results and a statistical test, the proposed model has the best performance among the other classifiers considered. Its accuracies based on the clinical features of the first and second datasets are 92% and 90% for finding the ICU target class. Extracted features of the image data increase the accuracy to 94%.
Conclusion
The accuracy of this model is even better for detecting the mortality target class among the different classifiers in this paper and the literature review. Besides, this model is compatible with the utilized datasets about COVID-19 patients based on clinical data and on both clinical and image data.
Highlights
• A new hierarchical model is proposed using ANFIS classifiers and the FCM clustering method. Its structure is designed based on experts' knowledge and the real medical process. FCM reinforces the ANFIS classification learning phase based on the features of COVID-19 patients.
• Two real datasets about COVID-19 patients are studied. One of these datasets has both clinical and image data; appropriate features are extracted from its image data and considered alongside the available meaningful clinical data.
• Different levels of hospitalized symptomatic COVID-19 patients are considered, including patients' need for the ICU and whether or not they are in the end stage.
• Well-known classification methods, including case-based reasoning (CBR), decision tree, convolutional neural networks (CNN), K-nearest neighbors (KNN), learning vector quantization (LVQ), multi-layer perceptron (MLP), Naive Bayes (NB), radial basis function network (RBF), support vector machine (SVM), recurrent neural networks (RNN), fuzzy type-I inference system (FIS), and adaptive neuro-fuzzy inference system (ANFIS), are designed for these datasets and their results are analyzed for different random groups of the train and test data.
• Given the unbalanced datasets, different performance measures, including accuracy, sensitivity, specificity, precision, F-score, and G-mean, are compared to find the best classifier. ANFIS classifiers have the best results for both datasets.
• To reduce computational time, the effects of the Principal Component Analysis (PCA) feature reduction method on the performance of the proposed model and classifiers are studied. According to the results and a statistical test, the proposed hierarchical model has the best performance among the classifiers considered.
Glioma Classification Using Deep Radiomics
Banerjee, Subhashis
Mitra, Sushmita
Masulli, Francesco
Rovetta, Stefano
SN Computer Science2020Journal Article, cited 1 times
Website
TCGA-GBM
LGG-1p19qDeletion
Convolutional Neural Network (CNN)
Glioma constitutes 80% of malignant primary brain tumors in adults, and is usually classified as high-grade glioma (HGG) and low-grade glioma (LGG). The LGG tumors are less aggressive, with slower growth rate as compared to HGG, and are responsive to therapy. Tumor biopsy being challenging for brain tumor patients, noninvasive imaging techniques like magnetic resonance imaging (MRI) have been extensively employed in diagnosing brain tumors. Therefore, development of automated systems for the detection and prediction of the grade of tumors based on MRI data becomes necessary for assisting doctors in the framework of augmented intelligence. In this paper, we thoroughly investigate the power of deep convolutional neural networks (ConvNets) for classification of brain tumors using multi-sequence MR images. We propose novel ConvNet models, which are trained from scratch, on MRI patches, slices, and multi-planar volumetric slices. The suitability of transfer learning for the task is next studied by applying two existing ConvNet models (VGGNet and ResNet) trained on the ImageNet dataset, through fine-tuning of the last few layers. Leave-one-patient-out testing, and testing on the holdout dataset, are used to evaluate the performance of the ConvNets. The results demonstrate that the proposed ConvNets achieve better accuracy in all cases where the model is trained on the multi-planar volumetric dataset. Unlike conventional models, it obtains a testing accuracy of 95% for the low/high grade glioma classification problem. A score of 97% is generated for classification of LGG with/without 1p/19q codeletion, without any additional effort toward extraction and selection of features. We study the properties of self-learned kernels/filters in different layers, through visualization of the intermediate layer outputs. We also compare the results with those of state-of-the-art methods, demonstrating a maximum improvement of 7% on the grading performance of ConvNets and 9% on the prediction of 1p/19q codeletion status.
Breast Cancer Mass Detection in Mammograms Using Gray Difference Weight and MSER Detector
Divyashree, B. V.
Kumar, G. Hemantha
SN Computer Science2021Journal Article, cited 0 times
Website
CBIS-DDSM
BREAST
Mammography
fast marching method
Computer Aided Detection (CADe)
Algorithm Development
Breast cancer is a deadly and one of the most prevalent cancers in women across the globe. Mammography is a widely used imaging modality for diagnosis and screening of breast cancer. Segmentation of the breast region and mass detection are crucial steps in automatic breast cancer detection. Due to the non-uniform distribution of various tissues, it is a challenging task to analyze mammographic images with high accuracy. In this paper, background suppression and pectoral muscle removal are performed using a gradient weight map followed by gray difference weight and the fast marching method. Enhancement of the breast region is performed using contrast limited adaptive histogram equalization (CLAHE) and de-correlation stretch. Detection of breast masses is accomplished by gray difference weight and a maximally stable extremal regions (MSER) detector. Experimentation on the Mammographic Image Analysis Society (MIAS) and curated breast imaging subset of the digital database for screening mammography (CBIS-DDSM) datasets shows that the proposed method performs breast boundary segmentation and mass detection with high accuracy. Mass detection achieved accuracies of about 97.64% and 94.66% for the MIAS and CBIS-DDSM datasets, respectively. The method is simple, robust, and less affected by noise, density, shape, and size, and could provide reasonable support for mammographic analysis.
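CLAHE enhancement and MSER detection are both available in OpenCV; the sketch below shows only those two steps (the gray difference weight and fast marching preprocessing are omitted, and parameter values are illustrative assumptions):

```python
import cv2
import numpy as np

mammogram = (np.random.rand(512, 512) * 255).astype(np.uint8)  # placeholder

# Contrast limited adaptive histogram equalization, as in the paper.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(mammogram)

# Maximally stable extremal regions as mass candidates.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(enhanced)
print(f"{len(regions)} candidate regions")
```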
Hybridized Deep Convolutional Neural Network and Fuzzy Support Vector Machines for Breast Cancer Detection
Oyetade, Idowu Sunday
Ayeni, Joshua Ojo
Ogunde, Adewale Opeoluwa
Oguntunde, Bosede Oyenike
Olowookere, Toluwase Ayobami
SN Computer Science2021Journal Article, cited 0 times
Website
CBIS-DDSM
Deep convolutional neural network (DCNN)
breast cancer
Support Vector Machine (SVM)
A cancerous development that originates from breast tissue is known as breast cancer, and it is reported to be the leading cause of women's deaths globally. Previous research has proved that the application of Computer-Aided Detection (CADe) in screening mammography can assist the radiologist in avoiding missed breast cancer cases. However, many of the existing systems are prone to false detections or misclassifications and are mostly tailored towards either binary classification or three-class classification. Therefore, this study seeks to develop both two-class and three-class models for breast cancer detection and classification employing a deep convolutional neural network (DCNN) with fuzzy support vector machines. The models were developed using mammograms downloaded from the digital database for screening mammography (DDSM) and curated breast imaging subset of DDSM (CBIS-DDSM) data repositories. The datasets were pre-processed, and features were extracted for classification with the DCNN and fuzzy support vector machines (SVMs). The system was evaluated using accuracy, sensitivity, AUC, F1-score, and the confusion matrix. The 3-class model gave an accuracy of 81.43% for the DCNN and 85.00% for the fuzzy SVM. The first layer of the serial 2-layer DCNN with fuzzy SVM for binary prediction yielded 99.61% and 100.00% accuracy, respectively, while the second layer gave 86.60% and 91.65%. This study's contribution to knowledge is the hybridization of a deep convolutional neural network with fuzzy support vector machines to improve the detection and classification of cancerous and non-cancerous breast tumours in both binary and three-class classification scenarios.
Lung Cancer Detection: A Classification Approach Utilizing Oversampling and Support Vector Machines
Jara-Gavilanes, Adolfo
Robles-Bykbaev, Vladimir
SN Computer Science2023Journal Article, cited 0 times
Lung-PET-CT-Dx
Algorithm Development
Support Vector Machine (SVM)
Random Forest
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
Positron Emission Tomography (PET)
Lung cancer is the type of cancer that causes the most deaths each year, and it is also the cancer with the lowest survival rate. This represents a health problem worldwide. Lung cancer has two subtypes, Non-Small Cell Lung Cancer (NSCLC) and Small Cell Lung Cancer (SCLC), which can be hard for doctors to detect and differentiate. Therefore, in this work, we present a method to help doctors with this issue. It consists of three phases. The first phase is image preprocessing: the data are gathered, PET scans are selected, all the scans are converted to grayscale images, and the images are joined to create a video of each patient's scan. Next, the data extraction phase begins: frames are extracted from each video, then flattened and blended to create a row of information per frame. A dataframe is thus created where each row represents a patient and each column is a pixel value. To obtain better results, an oversampling technique is applied so that the classes are balanced. Following this, a dimensionality reduction technique is applied to reduce the number of columns produced by the previous steps and to check whether it improves the results yielded by each model. Subsequently, the model evaluation phase begins. At this stage, two models are created: a Support Vector Machine (SVM) and a Random Forest. Ultimately, the findings reveal that the SVM was the top-performing model, with 97% accuracy, 98% precision, and 97% sensitivity. Eventually, this method could be applied to detect and classify other diseases imaged with PET scans.
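The abstract does not name its oversampling technique; SMOTE is a common choice and is used below purely as an assumption, with placeholder data standing in for the flattened frame rows:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.svm import SVC

X = np.random.rand(100, 4096)          # flattened frame rows (placeholder)
y = np.array([0] * 80 + [1] * 20)      # imbalanced NSCLC/SCLC labels

# Balance the classes by synthesizing minority samples, then fit the SVM.
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
clf = SVC(kernel="rbf").fit(X_res, y_res)
```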
Machine Learning Applied to COVID-19: A Review of the Initial Pandemic Period
Mano, Leandro Y.
Torres, Alesson M.
Morales, Andres Giraldo
Cruz, Carla Cristina P.
Cardoso, Fabio H.
Alves, Sarah Hannah
Faria, Cristiane O.
Lanzillotti, Regina
Cerceau, Renato
da Costa, Rosa Maria E. M.
Figueiredo, Karla
Werneck, Vera Maria B.
2023Journal Article, cited 0 times
RIDER Lung CT
Diagnostic and decision-making processes in the 2019 Coronavirus treatment have combined new standards using patient chest images and clinical and laboratory data. This work presents a systematic review of Artificial Intelligence (AI) approaches to the diagnosis or disease evolution of patients with Coronavirus 2019. Five electronic databases were searched, from December 2019 to October 2020, covering the beginning of the pandemic when there was no vaccine influencing the exploration of Artificial Intelligence-based techniques. The first search collected 839 papers. Next, the abstracts were reviewed, and 138 remained after the inclusion/exclusion criteria were applied. After thorough reading and review by a second group of reviewers, 64 met the study objectives. These papers were carefully analyzed to identify the AI techniques used to interpret the images and clinical and laboratory data, considering a distribution regarding two variables: (i) diagnosis or outcome and (ii) the type of data: clinical, laboratory, or imaging (chest computed tomography, chest X-ray, or ultrasound). The data type most used was chest CT scans, followed by chest X-ray. The chest CT scan was the only data type used for diagnosis, outcome, or both. A few works combine clinical and laboratory data, and the most used laboratory test was C-reactive protein. AI techniques have been increasingly explored in medical image annotation to overcome the need for specialized manual work. In this context, the 25 machine learning (ML) techniques with the highest frequency of usage were identified, ranging from the most classic, such as logistic regression, to the most current, such as those that explore deep learning. Most imaging works explored convolutional neural networks (CNNs), such as VGG and ResNet. Transfer learning, which stands out among deep learning-related techniques, has the second-highest frequency of use. In general, classification tasks adopted two or three datasets. COVID-19-related data are present in all papers, while pneumonia is the most common non-COVID-19 class among them.
Prognostic value and molecular correlates of a CT image-based quantitative pleural contact index in early stage NSCLC
Lee, Juheon
Cui, Yi
Sun, Xiaoli
Li, Bailiang
Wu, Jia
Li, Dengwang
Gensheimer, Michael F
Loo, Billy W
Diehn, Maximilian
Li, Ruijiang
European Radiology2018Journal Article, cited 3 times
Website
NSCLC-Radiomics
LUNG
Radiogenomics
Radiomics
PURPOSE: To evaluate the prognostic value and molecular basis of a CT-derived pleural contact index (PCI) in early stage non-small cell lung cancer (NSCLC). EXPERIMENTAL DESIGN: We retrospectively analysed seven NSCLC cohorts. A quantitative PCI was defined on CT as the length of the tumour-pleura interface normalised by tumour diameter. We evaluated the prognostic value of PCI in a discovery cohort (n = 117) and tested it in an external cohort (n = 88) of stage I NSCLC. Additionally, we identified the molecular correlates and built a gene expression-based surrogate of PCI using another cohort of 89 patients. To further evaluate the prognostic relevance, we used four datasets totalling 775 stage I patients with publicly available gene expression data and linked survival information. RESULTS: At a cutoff of 0.8, PCI stratified patients for overall survival in both imaging cohorts (log-rank p = 0.0076, 0.0304). Extracellular matrix (ECM) remodelling was enriched among genes associated with PCI (p = 0.0003). The genomic surrogate of PCI remained an independent predictor of overall survival in the gene expression cohorts (hazard ratio: 1.46, p = 0.0007) adjusting for age, gender, and tumour stage. CONCLUSIONS: The CT-derived pleural contact index is associated with ECM remodelling and may serve as a noninvasive prognostic marker in early stage NSCLC. KEY POINTS: * A quantitative pleural contact index (PCI) predicts survival in early stage NSCLC. * PCI is associated with extracellular matrix organisation and collagen catabolic process. * A multi-gene surrogate of PCI is an independent predictor of survival. * PCI can be used to noninvasively identify patients with poor prognosis.
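The index itself is a simple ratio, so it can be sketched directly from the definition in the abstract; the function and variable names below are illustrative assumptions, not the study's code.

```python
# Minimal sketch of the pleural contact index (PCI) and its 0.8 cutoff.
def pleural_contact_index(interface_length_mm: float, tumour_diameter_mm: float) -> float:
    """Length of the tumour-pleura interface normalised by tumour diameter."""
    return interface_length_mm / tumour_diameter_mm

def risk_group(pci: float, cutoff: float = 0.8) -> str:
    return "high-PCI (poorer prognosis)" if pci >= cutoff else "low-PCI"

pci = pleural_contact_index(interface_length_mm=18.0, tumour_diameter_mm=20.0)
print(pci, risk_group(pci))   # 0.9 -> high-PCI group
```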
Breast cancer detection from mammograms using artificial intelligence
Subasi, Abdulhamit
Kandpal, Aayush Dinesh
Raj, Kolla Anant
Bagci, Ulas
2023Book Section, cited 0 times
CBIS-DDSM
Algorithm Development
Transfer learning
Convolutional Neural Network (CNN)
Computer Aided Detection (CADe)
Breast cancer is one of the fastest-growing forms of cancer in the world today. Breast cancer is primarily found in women, and its incidence has risen significantly in the last few years. The key to tackling the rising cases of breast cancer is early detection. Many studies have shown that early detection significantly reduces the mortality rate of those affected. Machine learning and deep learning techniques have been adopted in the present scenario to help detect breast cancer at an early stage. Deep learning models such as convolutional neural networks (CNNs) are explicitly suited to image data and overcome the drawbacks of machine learning models. To improve upon conventional approaches, we apply deep CNNs for automatic feature extraction and classifier building. In this chapter, we have thoroughly demonstrated the use of deep learning models through transfer learning, deep feature extraction, and machine learning models. Computer-aided detection or diagnosis systems have recently been developed to help health-care professionals increase diagnosis accuracy. This chapter presents early breast cancer detection from mammograms using artificial intelligence (AI). Various models are presented along with an in-depth comparative analysis of the different state-of-the-art architectures, custom CNN networks, and classifiers trained on features extracted from pretrained networks. Our findings indicate that deep learning models can achieve training accuracies of up to 99%, with both validation and test accuracies of up to 96%. We conclude by suggesting various improvements that could be made to existing architectures and how AI techniques could help further improve early detection of breast cancer.
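The deep-feature-extraction strategy mentioned in the chapter, where a pretrained CNN supplies features to a classical classifier, can be sketched as follows. The choice of VGG16 and the synthetic mammogram patches are assumptions for illustration only.

```python
# Minimal sketch: pretrained-CNN features feeding a classical classifier.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

base = VGG16(weights="imagenet", include_top=False, pooling="avg")  # 512-d features

patches = np.random.rand(16, 224, 224, 3).astype("float32") * 255   # stand-in mammogram ROIs
labels = np.random.randint(0, 2, size=16)                           # benign vs. malignant

features = base.predict(preprocess_input(patches), verbose=0)
clf = SVC().fit(features, labels)        # classifier trained on the extracted deep features
print(clf.predict(features[:4]))
```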
Benefit of overlapping reconstruction for improving the quantitative assessment of CT lung nodule volume
Gavrielides, Marios A
Zeng, Rongping
Myers, Kyle J
Sahiner, Berkman
Petrick, Nicholas
Academic Radiology2013Journal Article, cited 23 times
Website
Phantom FDA
RATIONALE AND OBJECTIVES: The aim of this study was to quantify the effect of overlapping reconstruction on the precision and accuracy of lung nodule volume estimates in a phantom computed tomographic (CT) study. MATERIALS AND METHODS: An anthropomorphic phantom was used with a vasculature insert on which synthetic lung nodules were attached. Repeated scans of the phantom were acquired using a 64-slice CT scanner. Overlapping and contiguous reconstructions were performed for a range of CT imaging parameters (exposure, slice thickness, pitch, reconstruction kernel) and a range of nodule characteristics (size, density). Nodule volume was estimated with a previously developed matched-filter algorithm. RESULTS: Absolute percentage bias across all nodule sizes (n = 2880) was significantly lower when overlapping reconstruction was used, with an absolute percentage bias of 6.6% (95% confidence interval [CI], 6.4-6.9), compared to 13.2% (95% CI, 12.7-13.8) for contiguous reconstruction. Overlapping reconstruction also showed a precision benefit, with a lower standard percentage error of 7.1% (95% CI, 6.9-7.2) compared with 15.3% (95% CI, 14.9-15.7) for contiguous reconstructions across all nodules. Both effects were more pronounced for the smaller, subcentimeter nodules. CONCLUSIONS: These results support the use of overlapping reconstruction to improve the quantitative assessment of nodule size with CT imaging.
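The study's accuracy and precision metrics, percentage bias and standard percentage error against a known phantom volume, can be sketched directly; the numbers below are made up for illustration.

```python
# Minimal sketch of volume-estimation accuracy (bias) and precision metrics.
import numpy as np

true_volume = 500.0                                    # mm^3, known phantom nodule volume
estimates = np.array([512.0, 488.0, 530.0, 495.0])     # repeated-scan volume estimates

percent_errors = 100.0 * (estimates - true_volume) / true_volume
abs_percent_bias = abs(percent_errors.mean())          # accuracy
std_percent_error = percent_errors.std(ddof=1)         # precision
print(f"bias {abs_percent_bias:.1f}%, precision {std_percent_error:.1f}%")
```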
Data Analysis of the Lung Imaging Database Consortium and Image Database Resource Initiative
Wang, Weisheng
Luo, Jiawei
Yang, Xuedong
Lin, Hongli
Academic Radiology2015Journal Article, cited 5 times
Website
LIDC-IDRI
RATIONALE AND OBJECTIVES: The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) is the largest publicly available computed tomography (CT) image reference data set of lung nodules. In this article, a comprehensive data analysis of the data set and a uniform data model are presented with the purpose of enabling potential researchers to have an in-depth understanding of, and make efficient use of, the data set in their lung cancer-related investigations. MATERIALS AND METHODS: A uniform data model was designed for representation and organization of the various types of information contained in different source data files. A software tool was developed for the processing and analysis of the database, which 1) automatically aligns and graphically displays the nodule outlines marked manually by radiologists onto the corresponding CT images; 2) extracts diagnostic nodule characteristics annotated by radiologists; 3) calculates a variety of nodule image features based on the outlines of nodules, including diameter, volume, degree of roundness, and so forth; 4) integrates all the extracted nodule information into the uniform data model and stores it in a common and easy-to-access data format; and 5) analyzes and summarizes various feature distributions of nodules in several different categories. Using this data processing and analysis tool, all 1018 CT scans from the data set were processed and analyzed for their statistical distribution. RESULTS: The information contained in different source data files with different formats was extracted and integrated into a new and uniform data model. Based on the new data model, the statistical distributions of nodules in terms of nodule geometric features and diagnostic characteristics were summarized. The LIDC/IDRI data set contains 2655 nodules ≥3 mm, 5875 nodules <3 mm, and 7411 non-nodules. Among the 2655 nodules, 1) 775, 488, 481, and 911 were marked by one, two, three, or four radiologists, respectively; 2) most nodules ≥3 mm (85.7%) have a diameter <10.0 mm, with a mean of 6.72 mm; and 3) 10.87%, 31.4%, 38.8%, 16.4%, and 2.6% of nodules were assessed with a malignancy score of 1, 2, 3, 4, and 5, respectively. CONCLUSIONS: This study demonstrates the usefulness of the proposed software tool for giving potential users an in-depth understanding of the LIDC/IDRI data set, and is therefore likely to benefit their future investigations. The analysis results also demonstrate the distribution diversity of nodule characteristics, making them useful as a reference resource for assessing the performance of new and existing nodule detection and/or segmentation schemes.
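The geometric nodule features the tool computes (diameter, volume, a roundness-style measure) can be sketched from a binary nodule mask; the synthetic sphere and voxel spacing below are assumptions for illustration.

```python
# Minimal sketch of nodule geometric features from a binary 3D mask.
import numpy as np
from skimage.measure import label, regionprops

zz, yy, xx = np.mgrid[:32, :32, :32]
mask = ((zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2) < 8 ** 2   # toy spherical nodule

props = regionprops(label(mask.astype(int)))[0]
voxel_volume_mm3 = 0.7 * 0.7 * 1.25          # assumed voxel spacing (mm)
volume_mm3 = props.area * voxel_volume_mm3   # 'area' counts voxels for a 3D region
eq_diameter_vox = props.equivalent_diameter  # diameter of an equal-volume sphere (voxels)
roundness_proxy = props.extent               # fraction of bounding box filled (crude roundness)
print(volume_mm3, eq_diameter_vox, roundness_proxy)
```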
Computer-aided Diagnosis for Lung Cancer: Usefulness of Nodule Heterogeneity
Nishio, Mizuho
Nagashima, Chihiro
Academic Radiology2017Journal Article, cited 12 times
Website
SPIE LungX Challenge
Computed tomography (CT)
LUNG
Computer Aided Diagnosis (CADx)
Principal component analysis (PCA)
RATIONALE AND OBJECTIVES: To develop a computer-aided diagnosis system to differentiate between malignant and benign nodules. MATERIALS AND METHODS: Seventy-three lung nodules revealed on 60 sets of computed tomography (CT) images were analyzed. Contrast-enhanced CT was performed in 46 CT examinations. The images were provided by the LUNGx Challenge, and the ground truth of the lung nodules was unavailable; a surrogate ground truth was, therefore, constructed by radiological evaluation. Our proposed method involved novel patch-based feature extraction using principal component analysis, image convolution, and pooling operations. This method was compared to three other systems for the extraction of nodule features: histogram of CT density, local binary pattern on three orthogonal planes, and three-dimensional random local binary pattern. The probabilistic outputs of the systems and surrogate ground truth were analyzed using receiver operating characteristic analysis and area under the curve. The LUNGx Challenge team also calculated the area under the curve of our proposed method based on the actual ground truth of their dataset. RESULTS: Based on the surrogate ground truth, the areas under the curve were as follows: histogram of CT density, 0.640; local binary pattern on three orthogonal planes, 0.688; three-dimensional random local binary pattern, 0.725; and the proposed method, 0.837. Based on the actual ground truth, the area under the curve of the proposed method was 0.81. CONCLUSIONS: The proposed method could capture discriminative characteristics of lung nodules and was useful for the differentiation between malignant and benign nodules.
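Patch-based PCA feature extraction of this kind, in which PCA components learned from image patches act as convolution filters followed by pooling, can be sketched as follows. This is in the spirit of the method, not the authors' implementation; patch size, component count, and the stand-in image are assumptions.

```python
# Minimal sketch: PCA components as convolution filters, then max-pooling.
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import extract_patches_2d

image = np.random.rand(64, 64)                          # stand-in nodule ROI
patches = extract_patches_2d(image, (7, 7), max_patches=500, random_state=0)
pca = PCA(n_components=8).fit(patches.reshape(len(patches), -1))

features = []
for comp in pca.components_:                            # each PCA component = one filter
    response = convolve2d(image, comp.reshape(7, 7), mode="valid")
    features.append(response.max())                     # max-pooling over the response map
print(np.array(features))                               # 8-dimensional nodule descriptor
```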
Role of Imaging in the Era of Precision Medicine
Giardino, Angela
Gupta, Supriya
Olson, Emmi
Sepulveda, Karla
Lenchik, Leon
Ivanidze, Jana
Rakow-Penner, Rebecca
Patel, Midhir J
Subramaniam, Rathan M
Ganeshan, Dhakshinamoorthy
Academic Radiology2017Journal Article, cited 12 times
Website
Radiomics
TCGA-BRCA
TCGA-RCC
Precision medicine is an emerging approach for treating medical disorders, which takes into account individual variability in genetic and environmental factors. Preventive or therapeutic interventions can then be directed to those who will benefit most from targeted interventions, thereby maximizing benefits and minimizing costs and complications. Precision medicine is gaining increasing recognition by clinicians, healthcare systems, pharmaceutical companies, patients, and the government. Imaging plays a critical role in precision medicine including screening, early diagnosis, guiding treatment, evaluating response to therapy, and assessing likelihood of disease recurrence. The Association of University Radiologists Radiology Research Alliance Precision Imaging Task Force convened to explore the current and future role of imaging in the era of precision medicine and summarized its finding in this article. We review the increasingly important role of imaging in various oncological and non-oncological disorders. We also highlight the challenges for radiology in the era of precision medicine.
Radiogenomics of Clear Cell Renal Cell Carcinoma: Associations Between mRNA-Based Subtyping and CT Imaging Features
Bowen, Lan
Xiaojing, Li
Academic Radiology2018Journal Article, cited 0 times
Website
TCGA-RCC
clear cell renal cell carcinoma
PBRM1
BAP1
SETD2
JARID1C
PTEN
Conventional MR-based Preoperative Nomograms for Prediction of IDH/1p19q Subtype in Low-Grade Glioma
Liu, Zhenyin
Zhang, Tao
Jiang, Hua
Xu, Wenchan
Zhang, Jing
Academic Radiology2018Journal Article, cited 0 times
Website
IDH/1p19q
TCGA-LGG
nomograms
oligoastrocytoma
oligodendrogliomas
Quantitative and Qualitative Evaluation of Convolutional Neural Networks with a Deeper U-Net for Sparse-View Computed Tomography Reconstruction
RATIONALE AND OBJECTIVES: To evaluate the utility of a convolutional neural network (CNN) with an increased number of contracting and expanding paths of U-net for sparse-view CT reconstruction. MATERIALS AND METHODS: This study used 60 anonymized chest CT cases from a public database called "The Cancer Imaging Archive". Eight thousand images from 40 cases were used for training. Eight hundred images and 80 images from another 20 cases were used for quantitative and qualitative evaluation, respectively. Sparse-view CT images subsampled by a factor of 20 were simulated, and two CNNs were trained to create denoised images from the sparse-view CT. A CNN based on U-net with residual learning with four contracting and expanding paths (the preceding CNN) was compared with another CNN with eight contracting and expanding paths (the proposed CNN) both quantitatively (peak signal to noise ratio, structural similarity index) and qualitatively (the scores given by two radiologists for anatomical visibility, artifact and noise, and overall image quality) using the Wilcoxon signed-rank test. Nodule and emphysema appearance were also evaluated qualitatively. RESULTS: The proposed CNN was significantly better than the preceding CNN both quantitatively and qualitatively (overall image quality interquartile range, 3.0-3.5 versus 1.0-1.0 for the preceding CNN; p < 0.001). However, only 2 of 22 cases used for emphysematous evaluation (2 CNNs for each of 11 cases with emphysema) had an average score of ≥2 (on a 3-point scale). CONCLUSION: Increasing the contracting and expanding paths may be useful for sparse-view CT reconstruction with a CNN. However, poor reproducibility of emphysema appearance should also be noted. Key Words: Convolutional neural network; CNN; Sparse-view CT; Deep learning.
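The paper's quantitative comparison rests on PSNR and SSIM, both available in scikit-image; the sketch below uses synthetic stand-in slices rather than the study's CT data.

```python
# Minimal sketch of PSNR/SSIM evaluation between a reference and a reconstruction.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(512, 512).astype("float32")                  # full-view slice
reconstruction = reference + 0.05 * np.random.randn(512, 512).astype("float32")

psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=1.0)
ssim = structural_similarity(reference, reconstruction, data_range=1.0)
print(f"PSNR {psnr:.1f} dB, SSIM {ssim:.3f}")
```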
Technical and Clinical Factors Affecting Success Rate of a Deep Learning Method for Pancreas Segmentation on CT
Bagheri, Mohammad Hadi
Roth, Holger
Kovacs, William
Yao, Jianhua
Farhadi, Faraz
Li, Xiaobai
Summers, Ronald M
Acad Radiol2019Journal Article, cited 0 times
Website
Pancreas-CT
PURPOSE: Accurate pancreas segmentation has application in surgical planning, assessment of diabetes, and detection and analysis of pancreatic tumors. Factors that affect pancreas segmentation accuracy have not been previously reported. The purpose of this study is to identify technical and clinical factors that adversely affect the accuracy of pancreas segmentation on CT. METHOD AND MATERIALS: In this IRB and HIPAA compliant study, a deep convolutional neural network was used for pancreas segmentation in a publicly available archive of 82 portal-venous phase abdominal CT scans of 53 men and 29 women. The accuracies of the segmentations were evaluated by the Dice similarity coefficient (DSC). The DSC was then correlated with demographic and clinical data (age, gender, height, weight, body mass index), CT technical factors (image pixel size, slice thickness, presence or absence of oral contrast), and CT imaging findings (volume and attenuation of pancreas, visceral abdominal fat, and CT attenuation of the structures within a 5 mm neighborhood of the pancreas). RESULTS: The average DSC was 78% +/- 8%. Factors that were statistically significantly correlated with DSC included body mass index (r=0.34, p < 0.01), visceral abdominal fat (r=0.51, p < 0.0001), volume of the pancreas (r=0.41, p=0.001), standard deviation of CT attenuation within the pancreas (r=0.30, p=0.01), and median and average CT attenuation in the immediate neighborhood of the pancreas (r = -0.53, p < 0.0001 and r=-0.52, p < 0.0001). There were no significant correlations between the DSC and the height, gender, or mean CT attenuation of the pancreas. CONCLUSION: Increased visceral abdominal fat and accumulation of fat within or around the pancreas are major factors associated with more accurate segmentation of the pancreas. Potential applications of our findings include assessment of pancreas segmentation difficulty of a particular scan or dataset and identification of methods that work better for more challenging pancreas segmentations.
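The study's analysis pairs a per-case Dice similarity coefficient with correlations against clinical covariates; a minimal sketch of that evaluation, on synthetic per-patient values, follows.

```python
# Minimal sketch: Dice coefficient and its correlation with a covariate (e.g., BMI).
import numpy as np
from scipy.stats import pearsonr

def dice(seg: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(seg, gt).sum()
    return 2.0 * intersection / (seg.sum() + gt.sum())

rng = np.random.default_rng(0)
dscs = rng.normal(0.78, 0.08, size=82)     # per-patient DSC; mean/SD taken from the abstract
bmi = rng.normal(27, 4, size=82)           # stand-in covariate values
r, p = pearsonr(bmi, dscs)
print(f"r = {r:.2f}, p = {p:.3f}")
```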
Development and Validation of a Modified Three-Dimensional U-Net Deep-Learning Model for Automated Detection of Lung Nodules on Chest CT Images From the Lung Image Database Consortium and Japanese Datasets
Suzuki, K.
Otsuka, Y.
Nomura, Y.
Kumamaru, K. K.
Kuwatsuru, R.
Aoki, S.
Acad Radiol2020Journal Article, cited 0 times
Website
LIDC-IDRI
RATIONALE AND OBJECTIVES: A more accurate lung nodule detection algorithm is needed. We developed a modified three-dimensional (3D) U-net deep-learning model for the automated detection of lung nodules on chest CT images. The purpose of this study was to evaluate the accuracy of the developed modified 3D U-net deep-learning model. MATERIALS AND METHODS: In this Health Insurance Portability and Accountability Act-compliant, Institutional Review Board-approved retrospective study, the 3D U-net based deep-learning model was trained using the Lung Image Database Consortium and Image Database Resource Initiative dataset. For internal model validation, we used 89 chest CT scans that were not used for model training. For external model validation, we used 450 chest CT scans taken at an urban university hospital in Japan. Each case included at least one nodule of >5 mm identified by an experienced radiologist. We evaluated model accuracy using the competition performance metric (CPM) (average sensitivity at 1/8, 1/4, 1/2, 1, 2, 4, and 8 false-positives per scan). The 95% confidence interval (CI) was computed by bootstrapping 1000 times. RESULTS: In the internal validation, the CPM was 94.7% (95% CI: 89.1%-98.6%). In the external validation, the CPM was 83.3% (95% CI: 79.4%-86.1%). CONCLUSION: The modified 3D U-net deep-learning model showed high performance in both internal and external validation.
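The competition performance metric (CPM), average sensitivity at 1/8 to 8 false positives per scan, with a bootstrap confidence interval can be sketched as below; the per-scan sensitivity matrix is a synthetic stand-in for real FROC results.

```python
# Minimal sketch of CPM with a 1000-resample bootstrap confidence interval.
import numpy as np

FP_RATES = [1/8, 1/4, 1/2, 1, 2, 4, 8]

def cpm(sens_at_fp: np.ndarray) -> float:
    """sens_at_fp: (n_scans, 7) sensitivities at the seven FP/scan operating points."""
    return sens_at_fp.mean(axis=0).mean()

rng = np.random.default_rng(0)
sens = np.clip(rng.normal(0.9, 0.1, size=(89, len(FP_RATES))), 0, 1)   # toy FROC data

boot = [cpm(sens[rng.integers(0, len(sens), len(sens))]) for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"CPM {cpm(sens):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```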
A Cascaded Deep Learning-Based Artificial Intelligence Algorithm for Automated Lesion Detection and Classification on Biparametric Prostate Magnetic Resonance Imaging
Mehralivand, Sherif
Yang, Dong
Harmon, Stephanie A
Xu, Daguang
Xu, Ziyue
Roth, Holger
Masoudi, Samira
Sanford, Thomas H
Kesani, Deepak
Lay, Nathan S
Merino, Maria J
Wood, Bradford J
Pinto, Peter A
Choyke, Peter L
Turkbey, Baris
Acad Radiol2021Journal Article, cited 0 times
Website
PROSTATEx
Machine Learning
Algorithm Development
Computer Aided Detection (CADe)
Magnetic Resonance Imaging (MRI)
RATIONALE AND OBJECTIVES: Prostate MRI improves detection of clinically significant prostate cancer; however, its diagnostic performance varies widely. Artificial intelligence (AI) has the potential to assist radiologists in the detection and classification of prostatic lesions. Herein, we aimed to develop and test a cascaded deep learning detection and classification system, trained on biparametric prostate MRI using PI-RADS, for assisting radiologists during prostate MRI read-out. MATERIALS AND METHODS: T2-weighted, diffusion-weighted (ADC maps, high b value DWI) MRI scans obtained at 3 Tesla from two institutions (n = 1043 in-house and n = 347 Prostate-X, respectively), acquired between 2015 and 2019, were used for model training, validation, and testing. All scans were retrospectively reevaluated by one radiologist. Suspicious lesions were contoured and assigned a PI-RADS category. A 3D U-Net-based deep neural network was used to train an algorithm for automated detection and segmentation of prostate MRI lesions. Two 3D residual neural networks were used for a 4-class classification task to predict PI-RADS categories 2 to 5 and BPH. Training and validation used 89% (n = 1290 scans) of the data with 5-fold cross-validation; the remaining 11% (n = 150 scans) were used for independent testing. Algorithm performance at the lesion level was assessed using sensitivity, positive predictive value (PPV), false discovery rate (FDR), classification accuracy, and Dice similarity coefficient (DSC). Additional analysis was conducted to compare the AI algorithm's lesion detection performance with targeted biopsy results. RESULTS: Median age was 66 years (IQR = 60-71) and PSA 6.7 ng/ml (IQR = 4.7-9.9) in the in-house cohort. In the independent test set, the algorithm correctly detected 111 of 198 lesions, yielding 56.1% (49.3%-62.6%) sensitivity. PPV was 62.7% (95% CI 54.7%-70.7%) with an FDR of 37.3% (95% CI 29.3%-45.3%). Of 79 true positive lesions, 82.3% were tumor positive at targeted biopsy, whereas of 57 false negative lesions, 50.9% were benign at targeted biopsy. Median DSC for lesion segmentation was 0.359. Overall PI-RADS classification accuracy was 30.8% (95% CI 24.6%-37.8%). CONCLUSION: Our cascaded U-Net and residual network architecture can detect and classify cancer-suspicious lesions on prostate MRI with good detection and reasonable classification performance metrics.
Development of a 3D CNN-based AI Model for Automated Segmentation of the Prostatic Urethra
Belue, M. J.
Harmon, S. A.
Patel, K.
Daryanani, A.
Yilmaz, E. C.
Pinto, P. A.
Wood, B. J.
Citrin, D. E.
Choyke, P. L.
Turkbey, B.
Acad Radiol2022Journal Article, cited 0 times
Website
PROSTATEx
radiation therapy
PROSTATE
urethra
RATIONALE AND OBJECTIVES: The combined use of radiotherapy and MRI planning is increasingly common in the treatment of clinically significant prostate cancers. Radiotherapy dose is limited by toxicity in surrounding organs, yet de novo genitourinary toxicity still occurs. Estimation of the urethral radiation dose via anatomical contouring may improve our understanding of genitourinary toxicity and its related symptoms. Yet, urethral delineation remains an expert-dependent and time-consuming procedure. In this study, we aim to develop a fully automated segmentation tool for the prostatic urethra. MATERIALS AND METHODS: This study incorporated 939 patients' T2-weighted MRI scans (train/validation/test/excluded: 657/141/140/1 patients), including in-house and public PROSTATE-x datasets, and their corresponding ground truth urethral contours from an expert genitourinary radiologist. The AI model was developed using the MONAI framework and was based on a 3D U-Net. AI model performance was determined by the Dice score (volume-based) and the centerline distance (CLD) between the prediction and ground truth centers (slice-based). All predictions were compared to ground truth in a systematic failure analysis to elucidate the model's strengths and weaknesses. The Wilcoxon rank-sum test was used for pair-wise comparison of group differences. RESULTS: The overall organ-adjusted Dice score for this model was 0.61 and the overall CLD was 2.56 mm. When comparing prostates with symmetrical (n = 117) and asymmetrical (n = 23) benign prostate hyperplasia (BPH), the AI model performed better on symmetrical prostates than on asymmetrical ones in both Dice score (0.64 vs. 0.51, p < 0.05) and mean CLD (2.3 mm vs. 3.8 mm, p < 0.05). When calculating location-specific performance, performance was highest at the apex and lowest at the base of the prostate for both Dice and CLD. Dice location dependence: symmetrical (Apex, Mid, Base: 0.69 vs. 0.67 vs. 0.54, p < 0.05) and asymmetrical (Apex, Mid, Base: 0.68 vs. 0.52 vs. 0.39, p < 0.05). CLD location dependence: symmetrical (Apex, Mid, Base: 1.43 mm vs. 2.15 mm vs. 3.28 mm, p < 0.05) and asymmetrical (Apex, Mid, Base: 1.83 mm vs. 3.1 mm vs. 6.24 mm, p < 0.05). CONCLUSION: We developed a fully automated prostatic urethra segmentation AI tool that yields its best performance in prostate glands with symmetric BPH features. This system can potentially be used to assist treatment planning in patients who can undergo whole-gland radiation therapy or ablative focal therapy.
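The slice-based centerline distance used above can be sketched as the in-plane distance between predicted and ground-truth centers, averaged over slices; the centroid arrays and pixel spacing below are assumptions, not the study's implementation.

```python
# Minimal sketch of a slice-based centerline distance (CLD) in millimetres.
import numpy as np

def centerline_distance(pred_centres: np.ndarray, gt_centres: np.ndarray,
                        pixel_spacing_mm: float = 0.5) -> float:
    """pred_centres, gt_centres: (n_slices, 2) in-plane centroid coordinates."""
    d = np.linalg.norm(pred_centres - gt_centres, axis=1) * pixel_spacing_mm
    return float(d.mean())

rng = np.random.default_rng(0)
gt = rng.random((40, 2)) * 256                      # toy ground-truth centres per slice
pred = gt + rng.normal(0, 4, size=gt.shape)         # toy predicted centres
print(f"CLD = {centerline_distance(pred, gt):.2f} mm")
```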
Pulse Sequence Dependence of a Simple and Interpretable Deep Learning Method for Detection of Clinically Significant Prostate Cancer Using Multiparametric MRI
Kim, H.
Margolis, D. J. A.
Nagar, H.
Sabuncu, M. R.
Acad Radiol2022Journal Article, cited 0 times
PROSTATEx
Deep Learning
Magnetic Resonance Imaging (MRI)
Computer Aided Detection (CADe)
multi-parametric magnetic resonance imaging (multi-parametric MRI)
Prostate Cancer
RATIONALE AND OBJECTIVES: Multiparametric magnetic resonance imaging (mpMRI) is increasingly used for risk stratification and localization of prostate cancer (PCa). Thanks to the great success of deep learning models in computer vision, the potential application for early detection of PCa using mpMRI is imminent. MATERIALS AND METHODS: Deep learning analysis of the PROSTATEx dataset. RESULTS: In this study, we show a simple convolutional neural network (CNN) with mpMRI can achieve high performance for detection of clinically significant PCa (csPCa), depending on the pulse sequences used. The mpMRI model with T2-ADC-DWI achieved 0.90 AUC score in the held-out test set, not significantly better than the model using K(trans) instead of DWI (AUC 0.89). Interestingly, the model incorporating T2-ADC- K(trans) better estimates grade. We also describe a saliency "heat" map. Our results show that csPCa detection models with mpMRI may be leveraged to guide clinical management strategies. CONCLUSION: Convolutional neural networks incorporating multiple pulse sequences show high performance for detection of clinically-significant prostate cancer, and the model including dynamic contrast-enhanced information correlates best with grade.
New Perspectives for Estimating Body Composition From Computed Tomography: Clothing Associated Artifacts
Rentz, L. E.
Malone, B. M.
Vettiyil, B.
Sillaste, E. A.
Mizener, A. D.
Clayton, S. A.
Pistilli, E. E.
Acad Radiol2024Journal Article, cited 0 times
Website
ACRIN-FLT-Breast
BREAST-DIAGNOSIS
TCGA-THCA
TCGA-LIHC
CPTAC-PDA
CPTAC-UCEC
CC-Tumor-Heterogeneity
TCGA-UCEC
ACRIN 6668
ACRIN-NSCLC-FDG-PET
ANTI-PD-1_LUNG
CPTAC-LSCC
CPTAC-LUAD
NSCLC Radiogenomics
TCGA-LUAD
TCGA-LUSC
COVID-19-NY-SBU
NaF PROSTATE
TCGA-PRAD
CPTAC-CM
CPTAC-SAR
Soft-tissue-Sarcoma
TCGA-BLCA
TCGA-KIRP
Computed Tomography (CT)
Adiposity
Clinical imaging
Clothing
Data quality
Retrospective analysis
Skeletal muscle
Retrospective analysis of computed tomography (CT) imaging has been widely utilized in clinical populations as the "gold standard" method for quantifying body composition and tissue volumes (1). Thousands of published studies across the last 30 years suggest a concerningly high heterogeneity of statistical associations involving skeletal muscle and adiposity across patient populations representing all types of cancer, COPD, and recently COVID-19 (2,3,4,5,6,7,8). As with most clinical datasets, the extensive presence of confounds, inconsistencies, and missing data tends to complicate post hoc imaging analyses (9). In addition to obvious data artifact, ample threats to study validity can be well concealed by lengthy patient charts, co-occurring factors, and methodological limitations. In the absence of a highly controlled environment, we neglect to consider the multiplicity of factors that can influence naturally occurring data, and thus, the real-world utility of findings (9,10). Most importantly, we often fail to rehumanize collections of datapoints to understand patterns, compound clinical effects, and limitations experienced by both the clinical team and patient that maximize the value of post hoc conclusions.
Deep Learning Model for Pathological Grading and Prognostic Assessment of Lung Cancer Using CT Imaging: A Study on NLST and External Validation Cohorts
Yang, R.
Li, W.
Yu, S.
Wu, Z.
Zhang, H.
Liu, X.
Tao, L.
Li, X.
Huang, J.
Guo, X.
Acad Radiol2024Journal Article, cited 0 times
Website
NLST
Computed Tomography (CT)
Deep learning
Lung cancer
Pathological grading
Prognostic assessment
RATIONALE AND OBJECTIVES: To develop and validate a deep learning model for automated pathological grading and prognostic assessment of lung cancer using CT imaging, thereby providing surgeons with a non-invasive tool to guide surgical planning. MATERIAL AND METHODS: This study utilized 572 cases from the National Lung Screening Trial cohort, dividing them randomly into training (461 cases) and internal validation (111 cases) sets in an 8:2 ratio. Additionally, 224 cases from four cohorts obtained from the Cancer Imaging Archive, all diagnosed with non-small cell lung cancer, were included for external validation. The deep learning model, built on the MobileNetV3 architecture, was assessed in both internal and external validation sets using metrics such as accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The model's prognostic value was further analyzed using Cox proportional hazards models. RESULTS: The model achieved high accuracy, sensitivity, specificity, and AUC in the internal validation set (accuracy: 0.888, macro AUC: 0.968, macro sensitivity: 0.798, macro specificity: 0.956). External validation demonstrated comparable performance (accuracy: 0.807, macro AUC: 0.920, macro sensitivity: 0.799, macro specificity: 0.896). The model's predicted signatures correlated significantly with patient mortality and provided valuable insights for prognostic assessment (adjusted HR 2.016 [95% CI: 1.010, 4.022]). CONCLUSIONS: This study successfully developed and validated a deep learning model for the preoperative grading of lung cancer pathology. The model's accurate predictions could serve as a useful adjunct in treatment planning for lung cancer patients, enabling more effective and customized interventions to improve patient outcomes.
Segmentation of the Prostatic Gland and the Intraprostatic Lesions on Multiparametic Magnetic Resonance Imaging Using Mask Region-Based Convolutional Neural Networks
Dai, Zhenzhen
Carver, Eric
Liu, Chang
Lee, Joon
Feldman, Aharon
Zong, Weiwei
Pantelic, Milan
Elshaikh, Mohamed
Wen, Ning
2020Journal Article, cited 0 times
PROSTATEx
PURPOSE: Accurate delineation of the prostate gland and intraprostatic lesions (ILs) is essential for prostate cancer dose-escalated radiation therapy. The aim of this study was to develop a sophisticated deep neural network approach to magnetic resonance image analysis that will help IL detection and delineation for clinicians.
METHODS AND MATERIALS: We trained and evaluated mask region-based convolutional neural networks to perform the prostate gland and IL segmentation. There were 2 cohorts in this study: 78 public patients (cohort 1) and 42 private patients from our institution (cohort 2). Prostate gland segmentation was performed using T2-weighted images (T2WIs), although IL segmentation was performed using T2WIs and coregistered apparent diffusion coefficient maps with prostate patches cropped out. The IL segmentation model was extended to select 5 highly suspicious volumetric lesions within the entire prostate.
RESULTS: The mask region-based convolutional neural networks model was able to segment the prostate with dice similarity coefficient (DSC) of 0.88 ± 0.04, 0.86 ± 0.04, and 0.82 ± 0.05; sensitivity (Sens.) of 0.93, 0.95, and 0.95; and specificity (Spec.) of 0.98, 0.85, and 0.90. However, ILs were segmented with DSC of 0.62 ± 0.17, 0.59 ± 0.14, and 0.38 ± 0.19; Sens. of 0.55 ± 0.30, 0.63 ± 0.28, and 0.22 ± 0.24; and Spec. of 0.974 ± 0.010, 0.964 ± 0.015, and 0.972 ± 0.015 in public validation/public testing/private testing patients when trained with patients from cohort 1 only. When trained with patients from both cohorts, the values were as follows: DSC of 0.64 ± 0.11, 0.56 ± 0.15, and 0.46 ± 0.15; Sens. of 0.57 ± 0.23, 0.50 ± 0.28, and 0.33 ± 0.17; and Spec. of 0.980 ± 0.009, 0.969 ± 0.016, and 0.977 ± 0.013.
CONCLUSIONS: Our research framework is able to perform as an end-to-end system that automatically segmented the prostate gland and identified and delineated highly suspicious ILs within the entire prostate. Therefore, this system demonstrated the potential for assisting the clinicians in tumor delineation.
Automated nuclear segmentation in head and neck squamous cell carcinoma (HNSCC) pathology reveals relationships between cytometric features and ESTIMATE stromal and immune scores
Blocker, Stephanie J.
Cook, James
Everitt, Jeffrey I.
Austin, Wyatt M.
Watts, Tammara L.
Mowery, Yvonne M.
The American Journal of Pathology2022Journal Article, cited 0 times
Website
CPTAC-HNSCC
Algorithm Development
pathomics
Image classification
Segmentation
Imaging features
The tumor microenvironment (TME) plays an important role in the progression of head and neck squamous cell carcinoma (HNSCC). Currently, pathological assessment of TME is non-standardized and subject to observer bias. Genome-wide transcriptomic approaches to understanding the TME, while less subject to bias, are expensive and not currently part of standard of care for HNSCC. To identify pathology-based biomarkers that correlate with genomic and transcriptomic signatures of TME in HNSCC, cytometric feature maps were generated in a publicly available cohort of patients with HNSCC with available whole-slide tissue images and genomic and transcriptomic phenotyping (N=49). Cytometric feature maps were generated based on whole-slide nuclear detection, using a deep learning algorithm trained for StarDist nuclear segmentation. Cytometric features were measured for each patient and compared to transcriptomic measurements, including Estimation of STromal and Immune cells in MAlignant Tumor tissues using Expression data (ESTIMATE) scores, as well as stemness scores. When corrected for multiple comparisons, one feature (nuclear circularity) demonstrated a significant linear correlation with ESTIMATE stromal score. Two features (nuclear maximum and minimum diameter) correlated significantly with ESTIMATE immune score. Three features (nuclear solidity, nuclear minimum diameter, and nuclear circularity) correlated significantly with transcriptomic stemness score. This study provides preliminary evidence that observer-independent, automated tissue slide analysis can provide insights into the HNSCC TME which correlate with genomic and transcriptomic assessments.
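The cytometric features named in the abstract (nuclear circularity, minimum and maximum diameters) can be sketched as region properties over a labelled nucleus mask. The paper used a StarDist-trained model for nuclear segmentation; the toy ellipse below merely stands in for that output.

```python
# Minimal sketch of nuclear shape features from a labelled nucleus mask.
import numpy as np
from skimage.draw import ellipse
from skimage.measure import label, regionprops

mask = np.zeros((128, 128), dtype=int)
rr, cc = ellipse(64, 64, 20, 12)
mask[rr, cc] = 1                                        # one toy "nucleus"

for p in regionprops(label(mask)):
    circularity = 4 * np.pi * p.area / p.perimeter ** 2  # 1.0 for a perfect circle
    print(circularity, p.axis_major_length, p.axis_minor_length)
```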
Graph Perceiver Network for Lung Tumor and Bronchial Premalignant Lesion Stratification from Histopathology
Gindra, R. H.
Zheng, Y.
Green, E. J.
Reid, M. E.
Mazzilli, S. A.
Merrick, D. T.
Burks, E. J.
Kolachalama, V. B.
Beane, J. E.
Am J Pathol2024Journal Article, cited 0 times
Website
CPTAC-LSCC
CPTAC-LUAD
H&E-stained slides
Whole Slide Imaging (WSI)
Algorithm Development
Multilayer Perceptron (MLP)
Pathomics
Imaging features
Bronchial premalignant lesions (PMLs) precede the development of invasive lung squamous cell carcinoma (LUSC), posing a significant challenge in distinguishing those likely to advance to LUSC from those that might regress without intervention. In this context, we present a novel computational approach, the Graph Perceiver Network, leveraging hematoxylin and eosin-stained whole slide images to stratify endobronchial biopsies of PMLs across a spectrum from normal to tumor lung tissues. The Graph Perceiver Network outperforms existing frameworks in classification accuracy predicting LUSC, lung adenocarcinoma, and nontumor (normal) lung tissue on The Cancer Genome Atlas and Clinical Proteomic Tumor Analysis Consortium datasets containing lung resection tissues while efficiently generating pathologist-aligned, class-specific heat maps. The network was further tested using endobronchial biopsies from two data cohorts, containing normal to carcinoma in situ histology, and it demonstrated a unique capability to differentiate carcinoma in situ lung squamous PMLs based on their progression status to invasive carcinoma. The network may have utility in stratifying PMLs for chemoprevention trials or more aggressive follow-up.
Association between increased Subcutaneous Adipose Tissue Radiodensity and cancer mortality: Automated computation, comparison of cancer types, gender, and scanner bias
Machado, Marcos A D
Moraes, Thauan F
Anjos, Bruno H L
Alencar, Nadja R G
Chang, Tien-Man C
Santana, Bruno C R F
Menezes, Vinicius O
Vieira, Lucas O
Brandão, Simone C S
Salvino, Marco A
Netto, Eduardo M
2024Journal Article, cited 0 times
CPTAC-PDA
HCC-TACE-Seg
TCGA-BLCA
TCGA-KIRC
TCGA-OV
CT
PURPOSE: Body composition analysis using computed tomography (CT) is proposed as a predictor of cancer mortality. An association between subcutaneous adipose tissue radiodensity (SATr) and cancer-specific mortality was established, while gender effects and equipment bias were estimated.
METHODS: 7,475 CT studies were selected from 17 cohorts containing CT images of untreated cancer patients who underwent follow-up for a period of 2.1-118.8 months. SATr measures were collected from published data (n = 6,718) or calculated according to CT images using a deep-learning network (n = 757). The association between SATr and mortality was ascertained for each cohort and gender using the p-value from either logistic regression or ROC analysis. The Kruskal-Wallis test was used to analyze differences between gender distributions, and automatic segmentation was evaluated using the Dice score and five-point Likert quality scale. Gender effect, scanner bias and changes in the Hounsfield unit (HU) to detect hazards were also estimated.
RESULTS: Higher SATr was associated with mortality in eight cancer types (p < 0.05). Automatic segmentation produced a score of 0.949 while the quality scale measurement was good to excellent. The extent of gender effect was 5.2 HU while the scanner bias was 10.3 HU. The minimum proposed HU change to detect a patient at risk of death was between 5.6 and 8.3 HU.
CONCLUSIONS: CT imaging provides valuable assessments of body composition as part of the staging process for several cancer types, saving both time and cost. Gender specific scales and scanner bias adjustments should be carried out to successfully implement SATr measures in clinical practice.
Prediction of lung cancer incidence on the low-dose computed tomography arm of the National Lung Screening Trial: A dynamic Bayesian network
Petousis, Panayiotis
Han, Simon X
Aberle, Denise
Bui, Alex AT
Artificial intelligence in medicine2016Journal Article, cited 13 times
Website
NLST
Dynamic Bayesian Network
LDCT
Segmentation of breast MR images using a generalised 2D mathematical model with inflation and deflation forces of active contours
Rampun, Andrik
Scotney, Bryan W
Morrow, Philip J
Wang, Hui
Winder, John
Artificial intelligence in medicine2018Journal Article, cited 0 times
QIN Breast DCE-MRI
In medical computer aided diagnosis systems, image segmentation is one of the major pre-processing steps used to ensure only the region of interest, such as the breast region, will be processed in subsequent steps. Nevertheless, breast segmentation is a difficult task due to low contrast and inhomogeneity, especially when estimating the chest wall in magnetic resonance (MR) images. In fact, the chest wall comprises fat, skin, muscles, and the thoracic skeleton, which can misguide automatic methods when attempting to estimate its location. The objective of the study is to develop a fully automated method for breast and pectoral muscle boundary estimation in MR images. Firstly, we develop a 2D breast mathematical model based on 30 MRI slices (from a patient) and identify important landmarks to obtain a model for the general shape of the breast in an axial plane. Subsequently, we use Otsu's thresholding approach and Canny edge detection to estimate the breast boundary. The active contour method is then employed using both inflation and deflation forces to estimate the pectoral muscle boundary by taking account of information obtained from the proposed 2D model. Finally, the estimated boundary is smoothed using a median filter to remove outliers. Our two datasets contain 60 patients in total and the proposed method is evaluated based on 59 patients (one patient is used to develop the 2D breast model). On the first dataset (9 patients) the proposed method achieved Jaccard = 81.1% ± 6.1% and Dice coefficient = 89.4% ± 4.1%, and on the second dataset (50 patients) Jaccard = 84.9% ± 5.8% and Dice coefficient = 92.3% ± 3.6%. These results are qualitatively comparable with the existing methods in the literature.
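An active contour with inflation/deflation forces of this kind is available in scikit-image as the morphological geodesic active contour, whose balloon term pushes the contour outward (positive) or inward (negative). The sketch below is an illustrative stand-in, not the paper's 2D-model-guided implementation; the image and seed placement are assumptions.

```python
# Minimal sketch of a balloon-force active contour with scikit-image.
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient, disk_level_set)

image = np.random.rand(128, 128)                    # stand-in MR slice
gimage = inverse_gaussian_gradient(image)           # edge-stopping map
init = disk_level_set(image.shape, center=(64, 64), radius=10)

# balloon=+1 inflates the contour outward; balloon=-1 would deflate it.
seg = morphological_geodesic_active_contour(gimage, 100,
                                            init_level_set=init, balloon=1)
print(seg.sum(), "pixels inside the final contour")
```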
FDSR: A new fuzzy discriminative sparse representation method for medical image classification
Ghasemi, Majid
Kelarestaghi, Manoochehr
Eshghi, Farshad
Sharifi, Arash
Artif Intell Med2020Journal Article, cited 10 times
Website
REMBRANDT
TCGA-LGG
Algorithm Development
Databases
Factual
Humans
Magnetic Resonance Imaging (MRI)
Discriminative sparse representation
Fuzzy dictionary learning
Inter-class difference
Intra-class similarity
Medical image classification
Recent developments in medical image analysis techniques make them essential tools in medical diagnosis. Medical imaging is always involved with different kinds of uncertainties. Managing these uncertainties has motivated extensive research on medical image classification methods, particularly for the past decade. Despite being a powerful classification tool, sparse representation suffers from a lack of sufficient discrimination and robustness, which are required to manage the uncertainty and noisiness in medical image classification problems. We try to overcome this deficiency by introducing a new fuzzy discriminative robust sparse representation classifier, which benefits from fuzzy terms in the optimization function of its dictionary learning process. In this work, we present a new medical image classification approach, fuzzy discriminative sparse representation (FDSR). The proposed fuzzy terms increase the inter-class representation difference and the intra-class representation similarity. Also, an adaptive fuzzy dictionary learning approach is used to learn dictionary atoms. FDSR is applied to Magnetic Resonance Images (MRI) from three medical image databases. The comprehensive experimental results clearly show that our approach outperforms a series of rival techniques in terms of accuracy, sensitivity, specificity, and convergence speed.
Uncertainty-aware temporal self-learning (UATS): Semi-supervised learning for segmentation of prostate zones and beyond
Meyer, Anneke
Ghosh, Suhita
Schindele, Daniel
Schostak, Martin
Stober, Sebastian
Hansen, Christian
Rak, Marko
Artificial intelligence in medicine2021Journal Article, cited 0 times
PROSTATEx
Various convolutional neural network (CNN) based concepts have been introduced for the prostate's automatic segmentation and its coarse subdivision into transition zone (TZ) and peripheral zone (PZ). However, when targeting a fine-grained segmentation of TZ, PZ, distal prostatic urethra (DPU) and the anterior fibromuscular stroma (AFS), the task becomes more challenging and has not yet been solved at the level of human performance. One reason might be the insufficient amount of labeled data for supervised training. Therefore, we propose to apply a semi-supervised learning (SSL) technique named uncertainty-aware temporal self-learning (UATS) to overcome the expensive and time-consuming manual ground truth labeling. We combine the SSL techniques temporal ensembling and uncertainty-guided self-learning to benefit from unlabeled images, which are often readily available. Our method significantly outperforms the supervised baseline and obtained a Dice coefficient (DC) of up to 78.9%, 87.3%, 75.3%, 50.6% for TZ, PZ, DPU and AFS, respectively. The obtained results are in the range of human inter-rater performance for all structures. Moreover, we investigate the method's robustness against noise and demonstrate the generalization capability for varying ratios of labeled data and on other challenging tasks, namely the hippocampus and skin lesion segmentation. UATS achieved superior segmentation quality compared to the supervised baseline, particularly for minimal amounts of labeled data.
Evaluating the clinical utility of artificial intelligence assistance and its explanation on the glioma grading task
Jin, Weina
Fatehi, Mostafa
Guo, Ru
Hamarneh, Ghassan
Artificial intelligence in medicine2024Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Artificial Intelligence
Glioma
Clinical evaluation evidence and model explainability are key gatekeepers to ensure the safe, accountable, and effective use of artificial intelligence (AI) in clinical settings. We conducted a clinical user-centered evaluation with 35 neurosurgeons to assess the utility of AI assistance and its explanation on the glioma grading task. Each participant read 25 brain MRI scans of patients with gliomas, and gave their judgment on the glioma grading without and with the assistance of AI prediction and explanation. The AI model was trained on the BraTS dataset with 88.0% accuracy. The AI explanation was generated using the explainable AI algorithm of SmoothGrad, which was selected from 16 algorithms based on the criterion of being truthful to the AI decision process. Results showed that compared to the average accuracy of 82.5±8.7% when physicians performed the task alone, physicians' task performance increased to 87.7±7.3% with statistical significance (p-value = 0.002) when assisted by AI prediction, and remained at almost the same level of 88.5±7.0% (p-value = 0.35) with the additional assistance of AI explanation. Based on quantitative and qualitative results, the observed improvement in physicians' task performance assisted by AI prediction was mainly because physicians' decision patterns converged to be similar to AI, as physicians only switched their decisions when disagreeing with AI. The insignificant change in physicians' performance with the additional assistance of AI explanation was because the AI explanations did not provide explicit reasons, contexts, or descriptions of clinical features to help doctors discern potentially incorrect AI predictions. The evaluation showed the clinical utility of AI to assist physicians on the glioma grading task, and identified the limitations and clinical usage gaps of existing explainable AI techniques for future improvement.
A Quantum-Inspired Self-Supervised Network model for automatic segmentation of brain MR images
Konar, Debanjan
Bhattacharyya, Siddhartha
Gandhi, Tapan Kr
Panigrahi, Bijaya Ketan
Applied Soft Computing2020Journal Article, cited 1 times
Website
QIN-BRAIN-DSC-MRI
Segmentation
Magnetic Resonance Imaging (MRI)
Fuzzy C-means clustering (FCM)
The classical self-supervised neural network architectures suffer from a slow convergence problem, and the incorporation of quantum computing in classical self-supervised networks is a potential solution to it. In this article, a fully self-supervised novel quantum-inspired neural network model, referred to as the Quantum-Inspired Self-Supervised Network (QIS-Net), is proposed and tailored for fully automatic segmentation of brain MR images to obviate the challenges faced by deeply supervised Convolutional Neural Network (CNN) architectures. The proposed QIS-Net architecture is composed of three layers of quantum neurons (input, intermediate and output) expressed as qbits. The intermediate and output layers of the QIS-Net architecture are inter-linked through bi-directional propagation of quantum states, wherein the image pixel intensities (quantum bits) are self-organized between these two layers without any external supervision or training. Quantum observation allows the true output to be obtained once the superimposed quantum states interact with the external environment. The proposed self-supervised quantum-inspired network model has been tailored for and tested on Dynamic Susceptibility Contrast (DSC) brain MR images from Nature data sets for detecting complete tumor, and reports promising accuracy and reasonable Dice similarity scores in comparison with the unsupervised Fuzzy C-Means clustering, self-trained QIBDS Net, Opti-QIBDS Net, deeply supervised U-Net, and Fully Convolutional Neural Networks (FCNNs).
Effective full connection neural network updating using a quantized full FORCE algorithm
Heidarian, Mehdi
Karimi, Gholamreza
Applied Soft Computing2023Journal Article, cited 0 times
QIN Breast DCE-MRI
This paper presents a new training algorithm that can update the situation of a network's layers, and therefore its connections, neurons, and neuron firing rates, based on the FORCE (first-order reduced and controlled error) training algorithm. The Quantized Full FORCE (QFF) algorithm also updates the number of neurons and connections between different layers per iteration, selecting the best neurons and combining strong features so that the whole firing rate of each layer is updated. The update method is sequential, so that with each instance passing through the network, the network structure is updated with the Full FORCE algorithm. The algorithm updates the structure of networks with a multiple/single middle layer of the supervised version of feed-forward networks such as the Multilayer Perceptron (MLP), changing them into partially connected networks. A combination of principal component analysis (PCA) and Linear Discriminant Analysis (LDA) algorithms has been used to cluster the network input features. The paper focuses on the deep supervised MLP network with backpropagation (BP) and various datasets, and its comparison with other MLP-based state-of-the-art methods and hybrid evolutionary algorithms. We achieved 98.15 percent accuracy for facial expression, and 98.6 and 97.7 percent for Wisconsin Breast Cancer and Iris Flower, respectively. The training algorithm employed in the study enjoys lower computational complexity while yielding faster and more accurate convergence, starting with a very low error level of 0.009 in comparison with the fully connected network, and it solves the challenges of getting stuck in local minima and the poor convergence of Gradient Descent with BP.
3D medical image segmentation based on semi-supervised learning using deep co-training
Yang, Jingdong
Li, Haoqiu
Wang, Han
Han, Man
Applied Soft Computing2024Journal Article, cited 0 times
Website
CT Images in COVID-19
Semi-supervised learning
3D segmentation
In recent years, artificial intelligence has been applied to 3D COVID-19 medical image diagnosis, which reduces detection costs and missed diagnosis rates with higher predictive accuracy and diagnostic efficiency. However, the limited size and low quality of clinical 3D medical image samples have hindered the segmentation performance of 3D models. Therefore, we propose a 3D medical image segmentation model based on semi-supervised learning using co-training. Multi-view and multi-modal images are generated using spatial flipping and windowing techniques to enhance the spatial diversity of 3D image samples. A pseudo-label generation module based on confidence weights is employed to generate reliable pseudo labels for non-annotated data, thereby increasing the sample size and reducing overfitting. The proposed approach utilizes a three-stage training process: firstly, training a single network based on annotated data; secondly, incorporating non-annotated data to train a dual-modal network and generate pseudo labels; finally, jointly training six models in three dimensions using both annotated data and pseudo labels generated from multi-view and multi-modal images, aiming to enhance segmentation accuracy and generalization performance. Additionally, a consistency regularization loss is applied to reduce noise and accelerate training convergence. Moreover, a heatmap visualization method is employed to focus attention on the features at each stage of training, providing an effective reference for clinical diagnosis. Experiments were conducted on an open dataset of 3D COVID-19 CT samples and a non-annotated dataset from TCIA, including 771 NIFTI-format CT images from 661 COVID-19 patients. The results of 5-fold cross-validation show that the proposed model achieves a segmentation accuracy of Dice = 73.30%, ASD = 10.633, Sensitivity = 63.00%, and Specificity = 99.60%. Compared to various typical semi-supervised learning 3D segmentation models, it demonstrates better segmentation accuracy and generalization performance.
An improved feature based image fusion technique for enhancement of liver lesions
Sreeja, P.
Hariharan, S.
2018Journal Article, cited 0 times
TCGA-LIHC
This paper describes two methods for enhancement of the edges and texture of medical images. In the first method, the optimal kernel size of a range filter suitable for enhancement of the liver and lesions is deduced, and the results are compared with conventional edge detection algorithms. In the second method, the feasibility of feature-based pixel-wise image fusion for enhancing abdominal images is investigated. Among the different algorithms developed for medical image fusion, pixel-level fusion is capable of retaining the maximum relevant information with better implementation and computational efficiency. Conventional image fusion includes multi-modal fusion and multi-resolution fusion. The present work attempts to fuse together texture-enhanced and edge-enhanced versions of the input image in order to obtain significant enhancement in the output image. The algorithm is tested on low-contrast medical images. The result shows an improvement in the contrast and sharpness of the output image, which provides a basis for better visual interpretation leading to more accurate diagnosis. Qualitative and quantitative performance evaluation is done by calculating information entropy, MSE, PSNR, SSIM, and Tenengrad values.
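A range filter of the kind whose kernel size the paper tunes is simply the local intensity range (local maximum minus local minimum); the sketch below illustrates it with SciPy, where the kernel size of 5 and the stand-in image are assumptions.

```python
# Minimal sketch of a range filter (local max minus local min) for edge/texture enhancement.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def range_filter(image: np.ndarray, size: int = 5) -> np.ndarray:
    """Local intensity range within a size x size neighbourhood."""
    return maximum_filter(image, size=size) - minimum_filter(image, size=size)

ct_slice = np.random.rand(256, 256)        # stand-in CT slice
enhanced = range_filter(ct_slice, size=5)
print(enhanced.max(), enhanced.mean())
```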
Computer-aided diagnosis of clinically significant prostate cancer from MRI images using sparse autoencoder and random forest classifier
Abraham, Bejoy
Nair, Madhu S
Biocybernetics and Biomedical Engineering2018Journal Article, cited 0 times
Website
PROSTATEx
Prostate Cancer
machine learning
Computer Aided Diagnosis (CADx)
Detection of lung cancer on chest CT images using minimum redundancy maximum relevance feature selection method with convolutional neural networks
Toğaçar, Mesut
Ergen, Burhan
Cömert, Zafer
Biocybernetics and Biomedical Engineering2019Journal Article, cited 0 times
TCGA-LUAD
Computer Aided Detection (CADe)
Algorithm Development
Lung cancer is a disease caused by the involuntary increase of cells in the lung tissue. Early detection of cancerous cells is of vital importance for the lungs, which provide oxygen to the human body and excrete carbon dioxide as a result of vital activities. In this study, the detection of lung cancers is realized using the LeNet, AlexNet, and VGG-16 deep learning models. The experiments were carried out on an open dataset composed of Computed Tomography (CT) images. Convolutional neural networks (CNNs) were used for feature extraction and classification purposes. In order to increase the success rate of the classification, image augmentation techniques such as cutting, zooming, horizontal turning, and filling were applied to the dataset during the training of the models. Because of the outstanding success of the AlexNet model, the features obtained from the last fully-connected layer of the model were separately applied as the input to linear regression (LR), linear discriminant analysis (LDA), decision tree (DT), support vector machine (SVM), k-nearest neighbor (kNN), and softmax classifiers. A combination of the AlexNet model and the kNN classifier achieved the highest classification accuracy, 98.74%. Then, the minimum redundancy maximum relevance (mRMR) feature selection method was applied to the deep feature set to choose the most efficient features. Consequently, a success rate of 99.51% was achieved by reclassifying the dataset with the selected features and the kNN model. The proposed model is a consistent diagnosis model for lung cancer detection using chest CT images.
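mRMR selection over a deep feature set can be sketched as a greedy loop that trades off relevance (mutual information with the label) against redundancy (mean absolute correlation with already-selected features). This is an illustrative variant under those assumptions, not the paper's exact implementation.

```python
# Minimal sketch of greedy mRMR feature selection over a deep-feature matrix.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr(X: np.ndarray, y: np.ndarray, k: int) -> list:
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]           # start from the most relevant feature
    while len(selected) < k:
        scores = []
        for j in range(X.shape[1]):
            if j in selected:
                scores.append(-np.inf)
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            scores.append(relevance[j] - redundancy)  # relevance minus redundancy
        selected.append(int(np.argmax(scores)))
    return selected

X = np.random.rand(200, 50)                 # stand-in deep-feature matrix
y = np.random.randint(0, 2, size=200)
print(mrmr(X, y, k=10))                     # indices of the 10 selected features
```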
Two-stage multi-scale breast mass segmentation for full mammogram analysis without user intervention
Yan, Yutong
Conze, Pierre-Henri
Quellec, Gwenolé
Lamard, Mathieu
Cochener, Beatrice
Coatrieux, Gouenou
2021Journal Article, cited 0 times
CBIS-DDSM
Mammography is the primary imaging modality used for early detection and diagnosis of breast cancer. X-ray mammogram analysis mainly refers to the localization of suspicious regions of interest followed by segmentation, towards further lesion classification into benign versus malignant. Among the diverse types of breast abnormalities, masses are the most important clinical findings of breast carcinomas. However, manually segmenting breast masses from native mammograms is time-consuming and error-prone. Therefore, an integrated computer-aided diagnosis system is required to assist clinicians in automatic and precise breast mass delineation. In this work, we present a two-stage multi-scale pipeline that provides accurate mass contours from high-resolution full mammograms. First, we propose an extended deep detector integrating a multi-scale fusion strategy for automated mass localization. Second, a convolutional encoder-decoder network using nested and dense skip connections is employed to finely delineate candidate masses. Unlike most previous studies based on segmentation from regions, our framework handles mass segmentation from native full mammograms without any user intervention. Trained on the INbreast and CBIS-DDSM public datasets, the pipeline achieves an overall average Dice of 80.44% on INbreast test images, outperforming the state of the art. Our system shows promising accuracy as an automatic full-image mass segmentation system. Extensive experiments reveal robustness against the diversity of size, shape and appearance of breast masses, towards better interaction-free computer-aided diagnosis.
Classification of lung nodule malignancy in computed tomography imaging utilising generative adversarial networks and semi-supervised transfer learning
Apostolopoulos, Ioannis D.
Papathanasiou, Nikolaos D.
Panayiotakis, George S.
Biocybernetics and Biomedical Engineering2021Journal Article, cited 2 times
Website
LIDC-IDRI
LUNG
Convolutional Neural Network (CNN)
Deep Learning
The pulmonary nodules' malignancy rating is commonly confined to patient follow-up; the nodule's activity is estimated with a Positron Emission Tomography (PET) system or biopsy. However, these strategies usually follow the initial detection of malignant nodules on the Computed Tomography (CT) scan. In this study, a Deep Learning methodology is proposed to address the challenge of automatically characterising Solitary Pulmonary Nodules (SPN) detected in CT scans. The methodology is based on Convolutional Neural Networks (CNNs), which have proven to be excellent automatic feature extractors for medical images. The publicly available CT dataset, the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), together with a small CT scan dataset derived from a PET/CT system, is considered the classification target. New, realistic nodule representations are generated employing Deep Convolutional Generative Adversarial Networks (DC-GANs) to circumvent the shortage of large-scale data needed to train robust CNNs. Besides, a hierarchical CNN called Feature Fusion VGG19 (FF-VGG19) was developed to enhance the feature extraction of the CNN proposed by the Visual Geometry Group (VGG). Moreover, the generated nodule images are separated into two classes using a semi-supervised approach, called self-training, to tackle weak labelling due to DC-GAN inefficiencies. The DC-GAN can generate realistic SPNs: the experts could distinguish only 23% of the synthetic nodule images. As a result, the classification accuracy of FF-VGG19 on the LIDC-IDRI dataset increases by 7%, reaching 92.07%, while the classification accuracy on the CT dataset increases by 5%, reaching 84.3%.
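The self-training step can be summarized as iteratively promoting confident predictions on unlabeled (here, generated) samples to pseudo-labels. A minimal sketch, assuming a scikit-learn-style classifier with predict_proba; the confidence threshold of 0.9 and the round count are illustrative assumptions.

    import numpy as np

    def self_train(model, X_lab, y_lab, X_unl, threshold=0.9, rounds=3):
        # threshold and rounds are assumed values for illustration only.
        for _ in range(rounds):
            model.fit(X_lab, y_lab)
            proba = model.predict_proba(X_unl)
            confident = proba.max(axis=1) >= threshold
            if not confident.any():
                break
            # Promote confident predictions to pseudo-labels.
            X_lab = np.vstack([X_lab, X_unl[confident]])
            y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
            X_unl = X_unl[~confident]
        return model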
Implementing multiphysics models in FEniCS: Viscoelastic flows, poroelasticity, and tumor growth
Tunç, Birkan
Rodin, Gregory J.
Yankeelov, Thomas E.
Biomedical Engineering Advances2023Journal Article, cited 1 times
Website
BraTS-TCGA-GBM
Tumor growth dynamics
Algorithm Development
Finite element model
The open-source finite element code FEniCS is considered as an alternative to commercial finite element codes for evaluating complex constitutive models of multiphysics phenomena. FEniCS deserves this consideration because it is well-suited for encoding weak forms corresponding to partial differential equations arising from the fundamental balance laws and constitutive equations. It is shown how FEniCS can be adopted for solving boundary-value problems describing viscoelastic flows, poroelasticity, and tumor growth. Those problems span a wide range of models of continuum mechanics, and involve Eulerian, Lagrangian, and combined Eulerian-Lagrangian descriptions. Thus it is demonstrated that FEniCS is a viable computational tool capable of transcending traditional barriers between computational fluid and solid mechanics. Furthermore, it is shown that FEniCS implementations are straightforward, and do not require advanced knowledge of finite element methods and/or coding skills.
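To illustrate how directly FEniCS encodes weak forms, the sketch below solves a plain Poisson problem in the legacy FEniCS (dolfin) API; this is a deliberately simple stand-in, not the paper's viscoelastic, poroelastic, or tumor-growth models.

    from fenics import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                        DirichletBC, Constant, Function, dot, grad, dx, solve)

    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "P", 1)
    u, v = TrialFunction(V), TestFunction(V)
    bc = DirichletBC(V, Constant(0.0), "on_boundary")
    a = dot(grad(u), grad(v)) * dx   # bilinear form, read off the weak statement
    L = Constant(1.0) * v * dx       # linear form with a unit source term
    u_h = Function(V)
    solve(a == L, u_h, bc)

The near one-to-one correspondence between the mathematical weak form and the lines defining a and L is what makes the framework attractive for the more elaborate balance laws treated in the paper.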
Artificial immune system features added to breast cancer clinical data for machine learning (ML) applications
Nave, Ophir
Elbaz, Miriam
2021Journal Article, cited 0 times
Breast-MRI-NACT-Pilot
ISPY1
We here propose a new method of combining a mathematical model that describes a chemotherapy treatment for breast cancer with a machine-learning (ML) algorithm to increase performance in predicting tumor size using a five-step procedure. The first step involves modeling the chemotherapy treatment protocol using an analytical function. In the second step, the ML algorithm is trained to predict the tumor size based on clinico-pathological data and data obtained from magnetic resonance imaging results at different time points of treatment. In the third step, the model is solved according to adjustments made at the individual patient level based on the initial tumor size. In the fourth step, the important variables are extracted from the mathematical model solutions and inserted as added features. In the final step, we applied various ML algorithms on the merged data. Performance comparison among algorithms showed that the root mean square error of the linear regression decreased with the addition of the mathematical results, and the accuracy of prediction as well as the F1-scores increased with the addition of the mathematical model to the neural network. We established these results for four different cohorts of women at different ages with breast cancer who received chemotherapy treatment.
A quantitative validation of segmented colon in virtual colonoscopy using image moments
Manjunath, K. N.
Prabhu, G. K.
Siddalingaswamy, P. C.
Biomedical Journal2020Journal Article, cited 1 times
Website
CT-COLONOGRAPHY
Segmentation
Background: Evaluation of the segmented colon is one of the challenges in Computed Tomography Colonography (CTC). The objective of the study was to measure the segmented colon accurately using image processing techniques. Methods: This was a retrospective study, and Institutional Ethical clearance was obtained for the secondary dataset. The technique was tested on 85 CTC datasets acquired at 100-120 kVp, 100 mA, and slice thicknesses (ST) of 1.25 and 2.5 mm. The initial results of the work appear in the conference proceedings. After colon segmentation, three distance measurement techniques and one volumetric overlap computation were applied in Euclidean space: the distances were measured on MPR views of the segmented and unsegmented colons, and the volumetric overlap was calculated between the two volumes. Results: The key finding was that the measurements on the segmented and the unsegmented volumes remained essentially the same, which was proved statistically. The results were validated quantitatively on 2D MPR images. An accuracy of 95.265 ± 0.4551% was achieved through volumetric overlap computation. A paired t-test at alpha = 5% gave p = 0.6769 and t = 0.4169, indicating no significant difference. Conclusion: A combination of validation techniques was applied to check the robustness of the colon segmentation method, and good results were achieved with this approach. Through quantitative validation, the results were accepted at alpha = 5%.
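The abstract does not spell out the overlap formula; a common choice for volumetric overlap between a segmented and a reference volume is the Dice coefficient, sketched here under that assumption.

    import numpy as np

    def volumetric_overlap_percent(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
        # Dice overlap of two binary volumes, expressed in percent.
        # NOTE: the paper's exact overlap definition may differ.
        a, b = seg_a.astype(bool), seg_b.astype(bool)
        return 200.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())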
A low cost approach for brain tumor segmentation based on intensity modeling and 3D Random Walker
Kanas, Vasileios G
Zacharaki, Evangelia I
Davatzikos, Christos
Sgarbas, Kyriakos N
Megalooikonomou, Vasileios
Biomedical Signal Processing and Control2015Journal Article, cited 15 times
Website
Algorithm Development
BRAIN
Objective: Magnetic resonance imaging (MRI) is the primary imaging technique for evaluation of brain tumor progression before and after radiotherapy or surgery. The purpose of the current study is to exploit conventional MR modalities in order to identify and segment brain images with neoplasms. Methods: Four conventional MR sequences, namely T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid attenuation inversion recovery, are combined with machine learning techniques to extract global and local information of brain tissues and model the healthy and neoplastic imaging profiles. Healthy-tissue clustering, outlier detection, and geometric and spatial constraints are applied to perform a first segmentation, which is further improved by a modified multiparametric Random Walker segmentation method. The proposed framework is applied to clinical data from 57 brain tumor patients (acquired with different scanners and acquisition parameters) and to 25 synthetic MR images with tumors. Assessment is performed against expert-defined tissue masks and is based on sensitivity analysis and the Dice coefficient. Results: The results demonstrate that the proposed multiparametric framework differentiates neoplastic tissues with accuracy similar to most current approaches while achieving lower computational cost and a higher degree of automation. Conclusion: This study might provide a decision-support tool for neoplastic tissue segmentation, which can assist in treatment planning for tumor resection or focused radiotherapy.
Segmentation of lung from CT using various active contour models
Nithila, Ezhil E
Kumar, SS
Biomedical Signal Processing and Control2018Journal Article, cited 0 times
Website
SPIE-AAPM
lung cancer
Computer Aided Diagnosis (CADx)
active contour model (ACM)
Brain tumor segmentation using neutrosophic expert maximum fuzzy-sure entropy and other approaches
Sert, Eser
Avci, Derya
Biomedical Signal Processing and Control2019Journal Article, cited 0 times
BraTS-TCGA-GBM
Glioblastoma is the most aggressive and most common primary brain tumor in adults. Magnetic resonance imaging (MRI) is widely used in brain tumor diagnosis. This study proposes an edge detection approach called neutrosophic set – expert maximum fuzzy-sure entropy (NS-EMFSE), which combines two powerful approaches, the neutrosophic set (NS) and expert maximum fuzzy-sure entropy (EMFSE). The result is a high-performance approach for glioblastoma, the most difficult brain tumor segmentation and edge-finding problem. The proposed NS-EMFSE approach was designed to detect the enhancing part of the tumor in brain MRI images. Using maximum fuzzy entropy and fuzzy c-partition methods, EMFSE determines the threshold value needed to convert images into binary format. NS has recently been proposed as an efficient approach based on neutrosophy theory and yields remarkably successful results in indeterminate situations. The proposed algorithm was compared to NS with Otsu thresholding (NS-Otsu), support vector machine (SVM), fuzzy c-means (FCM), and Darwinian particle swarm optimization (DPSO), all of which have been used for edge detection and segmentation in various fields. In this study, figure of merit (FOM) and Jaccard index (JI) tests were carried out to evaluate the performance of these five edge detection approaches on 100 MRI images, indicating which approach best detects the enhancing part of the tumor. Analysis of variance (ANOVA) was performed on the FOM and JI data. For NS-EMFSE, the maximum FOM and JI values are 0.984000 and 0.965000, the mean values 0.933440 and 0.912000, and the minimum values 0.699000 and 0.671000, respectively. Compared with the statistical results of the other four approaches, the proposed method yields higher FOM and JI results, and the remaining statistical analyses likewise show that NS-EMFSE performed better than the other four methods.
Integrating imaging and omics data: A review
Antonelli, Laura
Guarracino, Mario Rosario
Maddalena, Lucia
Sangiovanni, Mara
Biomedical Signal Processing and Control2019Journal Article, cited 0 times
NSCLC Radiogenomics-Stanford
We refer to omics imaging as an emerging interdisciplinary field concerned with the integration of data collected from biomedical images and omics analyses. Bringing together information coming from different sources, it permits to reveal hidden genotype–phenotype relationships, with the aim of better understanding the onset and progression of many diseases, and identifying new diagnostic and prognostic biomarkers. More in detail, biomedical images, generated by anatomical or functional techniques, are processed to extract hundreds of numerical features describing visual aspects – as in solid cancer imaging – or functional elements – as in neuroimaging. These imaging features are then complemented and integrated with genotypic and phenotypic information, such as DNA mutations, RNA expression levels, and protein abundances. Apart from the difficulties arising from imaging and omics analyses alone, the process of integrating, combining, processing, and making sense of the omics imaging data is quite challenging, owed to the heterogeneity of the sources, the high dimensionality of the resulting feature space, and the reduced availability of freely accessible, large, and well-curated datasets containing both images and omics data for each sample. In this review, we present the state of the art of omics imaging, with the aim of providing the interested reader a unique source of information, with links for further detailed information. Based on the existing literature, we describe both the omics and imaging data that have been adopted, provide a list of curated databases of integrated resources, discuss the types of adopted features, give hints on the used data analysis methods, and overview current research in this field.
An efficient denoising of impulse noise from MRI using adaptive switching modified decision based unsymmetric trimmed median filter
Sheela, C. Jaspin Jeba
Suganthi, G.
Biomedical Signal Processing and Control2020Journal Article, cited 0 times
Brain
Algorithm Development
Classification
Multiscale receptive field based on residual network for pancreas segmentation in CT images
Li, Feiyan
Li, Weisheng
Shu, Yucheng
Qin, Sheng
Xiao, Bin
Zhan, Ziwei
Biomedical Signal Processing and Control2020Journal Article, cited 0 times
Pancreas-CT
Medical image segmentation has made great achievements, yet the pancreas remains a challenging abdominal organ to segment due to the high inter-patient anatomical variability in both shape and volume. The UNet often suffers from pancreas over-segmentation, under-segmentation, and shape inconsistency between the predicted result and the ground truth. We consider that the UNet cannot extract deep enough features and rich semantic information to distinguish the pancreas from the background. From this point, we propose three cross-domain information fusion strategies to solve the above three problems. The first strategy, named skip network, efficiently restrains over-segmentation through cross-domain connection. The second strategy, named residual network, mainly seeks to solve the under- and over-segmentation problem by cross-domain connection on a small scale. The third multiscale cross-domain information fusion strategy, named multiscale residual network, adds multiscale convolution to the second strategy, learning a more accurate pancreas shape and restraining over- and under-segmentation. We performed experiments on a dataset of 82 abdominal contrast-enhanced three-dimensional computed tomography (3D CT) scans from the National Institutes of Health Clinical Center using 4-fold cross-validation. We report a mean Dice score of 87.57 ± 3.26%, which outperforms the state-of-the-art method, a 7.87% improvement over the result of the original UNet. Our method is not only superior to the other established methods in terms of accuracy and robustness but also effectively restrains pancreas over-segmentation, under-segmentation and shape inconsistency between the predicted result and the ground truth. These strategies lend themselves to clinical application.
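The cross-domain residual connections described above follow the familiar pattern of adding a block's input back to its output. A minimal 2D PyTorch residual block as a generic sketch, not the paper's multiscale cross-domain variant:

    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # Two 3x3 conv-BN stages with an identity shortcut; a generic sketch,
        # not the paper's exact architecture.
        def __init__(self, channels: int):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
            )
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.body(x) + x)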
Weighted Schatten p-norm minimization for impulse noise removal with TV regularization and its application to medical images
Wang, Li
Xiao, Di
Hou, Wen S.
Wu, Xiao Y.
Chen, Lin
Biomedical Signal Processing and Control2021Journal Article, cited 1 times
Website
LungCT-Diagnosis
Image denoising
Machine Learning
Principal component analysis (PCA)
Noise of impulse type is common in medical images. In this paper, we model the denoising problem for impulse noise by Weighted Schatten p-norm minimization (WSNM) with Robust Principal Component Analysis (RPCA). Anisotropic Total Variation (TV) regularization is incorporated to preserve edge information, which is important for clinical detection and diagnosis. The alternating direction method of multipliers (ADMM) algorithm is adopted for solving the formulated nonconvex optimization problem. We tested the performance on both standard natural images and medical images with additive impulse noise at different levels. Experimental results imply its competitiveness compared to traditional denoising algorithms that have been validated as state-of-the-art. The proposed algorithm restores images with better preservation of structural information and outperforms conventional techniques in terms of visual appearance. Quantitative metrics (PSNR, SSIM and FSIM) further objectively demonstrate the effectiveness of the proposed algorithm for impulse noise removal, superior to existing methods.
Breast DCE-MRI segmentation for lesion detection by multi-level thresholding using student psychological based optimization
Patra, Dipak Kumar
Si, Tapas
Mondal, Sukumar
Mukherjee, Prakash
Biomedical Signal Processing and Control2021Journal Article, cited 0 times
Website
TCGA-BRCA
Algorithm Development
Radiomics
Machine Learning
BREAST
Computer Aided Detection (CADe)
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
In recent years, the prevalence of breast cancer in women has risen dramatically. Therefore, segmentation of breast Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is a necessary task to assist the radiologist in accurate diagnosis and detection of breast cancer in breast DCE-MRI. For image segmentation, thresholding is a simple and effective method, and radiologists agree that optimization via multi-level thresholding is important to differentiate breast lesions in dynamic DCE-MRI. In this paper, multi-level thresholding using the Student Psychology-Based Optimizer (SPBO) is proposed to segment breast DCE-MR images for lesion detection. First, MR images are denoised using an anisotropic diffusion filter and Intensity Inhomogeneities (IIHs) are corrected in the preprocessing step. The preprocessed MR images are segmented using the SPBO algorithm. Finally, the lesions are extracted from the segmented images and localized in the original MR images. The proposed method is applied to 300 sagittal T2-weighted DCE-MRI slices of 50 patients, histologically proven, and analyzed. The proposed method is compared with the Particle Swarm Optimizer (PSO), Dragonfly Algorithm (DA), Slime Mould Algorithm (SMA), Multi-Verse Optimizer (MVO), Grasshopper Optimization Algorithm (GOA), Hidden Markov Random Field (HMRF), Improved Markov Random Field (IMRF), and Conventional Markov Random Field (CMRF) methods. A high accuracy of 99.44%, sensitivity of 96.84%, and Dice Similarity Coefficient (DSC) of 93.41% are achieved with the proposed automatic segmentation method. Both quantitative and qualitative results demonstrate that the proposed method performs better than the eight compared methods.
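For orientation, multi-level thresholding partitions the intensity histogram with several cut points. The classical multi-Otsu criterion below is a baseline sketch of the operation that the SPBO metaheuristic optimizes in the paper, not the SPBO algorithm itself; the choice of three classes is illustrative.

    import numpy as np
    from skimage.filters import threshold_multiotsu

    def segment_multilevel(slice_2d: np.ndarray, classes: int = 3) -> np.ndarray:
        # classes=3 is an assumed choice; the paper tunes thresholds via SPBO.
        thresholds = threshold_multiotsu(slice_2d, classes=classes)
        return np.digitize(slice_2d, bins=thresholds)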
A deep learning study on osteosarcoma detection from histological images
Anisuzzaman, D.M.
Barzekar, Hosein
Tong, Ling
Luo, Jake
Yu, Zeyun
Biomedical Signal Processing and Control2021Journal Article, cited 0 times
Osteosarcoma-Tumor-Assessment
In the U.S., 5–10% of new pediatric cancer cases are primary bone tumors, and the most common type of primary malignant bone tumor is osteosarcoma. The intention of the present work is to improve the detection and diagnosis of osteosarcoma using computer-aided detection (CAD) and diagnosis (CADx). Tools such as convolutional neural networks (CNNs) can significantly decrease the surgeon's workload and improve the prognosis of patient conditions. CNNs need to be trained on a large amount of data to achieve trustworthy performance. In this study, transfer learning techniques with pre-trained CNNs are adapted to a public dataset of osteosarcoma histological images to distinguish necrotic images from non-necrotic and healthy tissues. First, the dataset was preprocessed and different classifications were applied. Then, transfer learning models including VGG19 and Inception V3 were trained on Whole Slide Images (WSI) without patching to improve the accuracy of the outputs. Finally, the models were applied to different classification problems, including binary and multi-class classifiers. Experimental results show that VGG19 achieved the highest accuracy, 96%, across both binary and multi-class classification. Our fine-tuned model demonstrates state-of-the-art performance in detecting the malignancy of osteosarcoma from histologic images.
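A typical transfer-learning setup of the kind described freezes the pretrained convolutional trunk and retrains a new head. A minimal torchvision sketch for a two-class (necrotic vs. non-necrotic) problem, assuming a recent torchvision with ImageNet weights and not claiming to match the paper's exact fine-tuning schedule:

    import torch.nn as nn
    from torchvision import models

    model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():
        p.requires_grad = False          # freeze the convolutional trunk
    # Replace the 1000-way ImageNet head with a 2-class head (assumed setup).
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)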
Semi-automated and interactive segmentation of contrast-enhancing masses on breast DCE-MRI using spatial fuzzy clustering
Militello, Carmelo
Rundo, Leonardo
Dimarco, Mariangela
Orlando, Alessia
Conti, Vincenzo
Woitek, Ramona
D’Angelo, Ildebrando
Bartolotta, Tommaso Vincenzo
Russo, Giorgio
Biomedical Signal Processing and Control2022Journal Article, cited 0 times
QIN Breast DCE-MRI
RIDER Breast MRI
Multiparametric Magnetic Resonance Imaging (MRI) is the most sensitive imaging modality for breast cancer detection and is increasingly playing a key role in lesion characterization. In this context, accurate and reliable quantification of the shape and extent of breast cancer is crucial in clinical research environments. Since conventional lesion delineation procedures are still mostly manual, automated segmentation approaches can improve this time-consuming and operator-dependent task by annotating the regions of interest in a reproducible manner. In this work, a semi-automated and interactive approach based on the spatial Fuzzy C-Means (sFCM) algorithm is proposed, used to segment masses on dynamic contrast-enhanced (DCE) MRI of the breast. Our method was compared against existing approaches based on classic image processing, namely (i) Otsu's method for thresholding-based segmentation, and (ii) the traditional FCM algorithm. A further comparison was performed against state-of-the-art Convolutional Neural Networks (CNNs) for medical image segmentation, namely SegNet and U-Net, in a 5-fold cross-validation scheme. The results showed the validity of the proposed approach, significantly outperforming the competing methods in terms of the Dice similarity coefficient (84.47 ± 4.75). Moreover, a Pearson's coefficient of ρ = 0.993 showed a high correlation between the segmented volume and the gold standard provided by clinicians. Overall, the proposed method was confirmed to outperform the competing literature methods. The proposed computer-assisted approach could be deployed in clinical research environments as a reliable tool for volumetric and radiomics analyses.
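For context, plain fuzzy c-means alternates between updating cluster centers and soft memberships; the spatial variant (sFCM) used in the paper additionally smooths memberships over a pixel neighborhood. A sketch of the plain algorithm on a 1-D intensity vector, with illustrative defaults:

    import numpy as np

    def fcm(x: np.ndarray, c: int = 2, m: float = 2.0, iters: int = 100, seed: int = 0):
        # Plain FCM; c, m, and iters are assumed illustrative values, and the
        # spatial neighborhood term of sFCM is intentionally omitted.
        rng = np.random.default_rng(seed)
        u = rng.dirichlet(np.ones(c), size=x.size)        # memberships, shape (N, c)
        p = 2.0 / (m - 1.0)
        for _ in range(iters):
            um = u ** m
            centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
            d = np.abs(x[:, None] - centers[None, :]) + 1e-12
            u = 1.0 / (d ** p * (d ** -p).sum(axis=1, keepdims=True))
        return centers, u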
Automated classification of acute leukemia on a heterogeneous dataset using machine learning and deep learning techniques
Abhishek, Arjun
Jha, Rajib Kumar
Sinha, Ruchi
Jha, Kamlesh
Biomedical Signal Processing and Control2022Journal Article, cited 2 times
Website
SN-AM
Pathomics
Acute myeloid leukemia
Acute lymphoblastic leukemia
Computer Aided Detection (CADe)
Classification
Machine learning
Deep learning
Support Vector Machine (SVM)
TIFF
Today, artificial intelligence and deep learning techniques constitute a prominent part of medical science. These techniques help doctors detect diseases early and reduce their burden as well as the chance of errors. However, experiments based on deep learning techniques require large and well-annotated datasets. This paper introduces a novel dataset of 500 peripheral blood smear images, containing normal, Acute Myeloid Leukemia and Acute Lymphoblastic Leukemia images and comprising almost 1700 cancerous blood cells. The size of the dataset is increased by adding images from a publicly available dataset, forming a heterogeneous dataset. The heterogeneous dataset is used for the automated binary classification task, one of the major tasks of the proposed work. The proposed work performs binary as well as three-class classification using state-of-the-art machine learning and deep learning techniques. For binary classification, an accuracy of 97% is achieved when the fully connected layers along with the last three convolutional layers of VGG16 are fine-tuned, and 98% for DenseNet121 with a support vector machine. For the three-class classification task, an accuracy of 95% is obtained for ResNet50 with a support vector machine. The novel dataset was prepared under the supervision of various experts and will help the scientific community in medical research supported by machine learning models.
Brain tumor classification using the fused features extracted from expanded tumor region
Öksüz, Coşku
Urhan, Oğuzhan
Güllü, Mehmet Kemal
Biomedical Signal Processing and Control2022Journal Article, cited 0 times
Website
LGG-1p19qDeletion
Radiomic feature
Computer aided diagnosis
Lung cancer diagnosis in CT images based on Alexnet optimized by modified Bowerbird optimization algorithm
Xu, Yeguo
Wang, Yuhang
Razmjooy, Navid
Biomedical Signal Processing and Control2022Journal Article, cited 0 times
LungCT-Diagnosis
AlexNet
Radiomic feature
Wiener Filtering
Computed Tomography (CT)
Computer Aided Diagnosis (CADx)
Gabor filter
LUNG
Objective: Cancer is the uncontrolled growth of abnormal cells that do not function as normal cells. Lung cancer is the leading cause of cancer death in the world, so early detection of lung disease has a major impact on the likelihood of a definitive cure. Computed Tomography (CT) has been identified as one of the best imaging techniques, and various tools are available for medical image processing, including data collection in the form of images and algorithms for image analysis and system testing. Methods: This study proposes a new diagnosis system for lung cancer based on image processing and artificial intelligence applied to CT scan images. After noise reduction based on Wiener filtering, AlexNet is utilized for diagnosing healthy and cancerous cases. The system also uses an optimal set of features, including Gabor wavelet transform, GLCM, and GLRM features, in place of the network's feature extraction part. The study further uses a new modified version of the Satin Bowerbird Optimization algorithm for optimal design of the AlexNet architecture and optimal selection of the features. Results: Simulation results of the proposed method on the RIDER Lung CT collection database and comparison with other state-of-the-art methods show that the proposed method provides a satisfying tool for lung cancer diagnosis. With 95.96% accuracy, it achieves the highest value among the compared methods, along with a higher harmonic mean (F1-score). Its test recall of 98.06% likewise indicates the highest rate of relevant instances retrieved for the images. Conclusion: The proposed method provides an efficient tool for optimal diagnosis of lung cancer from CT images. Significance: As a new deep-learning-based methodology, the proposed method provides higher accuracy and addresses the difficult problem of optimal hyperparameter selection for deep-learning techniques.
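As a small illustration of the hand-crafted texture features mentioned above, Gabor responses at a few frequencies can be summarized by simple statistics; the frequencies below are illustrative assumptions, not the paper's settings.

    import numpy as np
    from skimage.filters import gabor

    def gabor_features(img: np.ndarray, frequencies=(0.1, 0.2, 0.3)) -> np.ndarray:
        # Mean and variance of the Gabor response magnitude per frequency.
        feats = []
        for f in frequencies:
            real, imag = gabor(img, frequency=f)
            magnitude = np.hypot(real, imag)
            feats += [magnitude.mean(), magnitude.var()]
        return np.asarray(feats)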
Spatial feature fusion in 3D convolutional autoencoders for lung tumor segmentation from 3D CT images
Najeeb, Suhail
Bhuiyan, Mohammed Imamul Hassan
Biomedical Signal Processing and Control2022Journal Article, cited 0 times
NSCLC-Radiomics
Accurate detection and segmentation of lung tumors from volumetric CT scans is a critical area of research for the development of computer aided diagnosis systems for lung cancer. Several existing methods of 2D biomedical image segmentation based on convolutional autoencoders show decent performance for the task. However, it is imperative to make use of volumetric data for 3D segmentation tasks. Existing 3D segmentation networks are computationally expensive and have several limitations. In this paper, we introduce a novel approach which makes use of the spatial features learned at different levels of a 2D convolutional autoencoder to create a 3D segmentation network capable of more efficiently utilizing spatial and volumetric information. Our studies show that without any major changes to the underlying architecture and minimum computational overhead, our proposed approach can improve lung tumor segmentation performance by 1.61%, 2.25%, and 2.42% respectively for the 3D-UNet, 3D-MultiResUNet, and Recurrent-3D-DenseUNet networks on the LOTUS dataset in terms of mean 2D dice coefficient. Our proposed models also respectively report 7.58%, 2.32%, and 4.28% improvement in terms of 3D dice coefficient. The proposed modified version of the 3D-MultiResUNet network outperforms existing segmentation architectures on the dataset with a mean 2D dice coefficient of 0.8669. A key feature of our proposed method is that it can be applied to different convolutional autoencoder based segmentation networks to improve segmentation performance.
Performance enhancement of MRI-based brain tumor classification using suitable segmentation method and deep learning-based ensemble algorithm
Tandel, Gopal S.
Tiwari, Ashish
Kakde, O.G.
Biomedical Signal Processing and Control2022Journal Article, cited 0 times
REMBRANDT
Glioma is the most common brain tumor in humans, and accurate stage estimation of the tumor is essential for treatment planning. Biopsy is the gold-standard method for this purpose; however, it is an invasive procedure that can prove fatal for patients if the tumor is deep inside the brain. Therefore, a magnetic resonance imaging (MRI)-based non-invasive method is proposed in this paper for low-grade glioma (LGG) versus high-grade glioma (HGG) classification. To maximize classification performance, five pre-trained convolutional neural networks (CNNs), namely AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50, are assembled using a majority voting mechanism. Segmentation methods require human intervention and additional computational effort, which makes computer-aided diagnosis tools semi-automated. To analyze the performance effect of segmentation, three kinds of data were compared using the above algorithm: region-of-interest MRI segmentation (RSM), skull-stripped MRI segmentation (SSM), and whole-brain MRI (WBM, without segmentation). The highest classification accuracy of 99.06 ± 0.55% was observed on the RSM data and the lowest accuracy of 98.43 ± 0.89% on the WBM data. However, the RSM data improved accuracy by only 0.63% over the WBM data, which shows that deep learning models have an incredible ability to extract appropriate features from images. Furthermore, the proposed algorithm showed 2.85%, 1.39%, 1.26%, 2.66%, and 2.33% improvements in the average accuracy over the three datasets relative to the AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50 models, respectively.
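The ensemble step reduces to a per-sample majority vote over the five CNNs' predicted labels; a minimal NumPy sketch:

    import numpy as np

    def majority_vote(predictions: np.ndarray) -> np.ndarray:
        # predictions: (n_models, n_samples) array of integer class labels.
        n_classes = int(predictions.max()) + 1
        counts = np.apply_along_axis(np.bincount, 0, predictions,
                                     minlength=n_classes)
        return counts.argmax(axis=0)   # most frequent label per sample

With five voters and two classes (LGG vs. HGG), ties cannot occur, which is one practical reason to ensemble an odd number of models.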
Joint few-shot registration and segmentation self-training of 3D medical images
Shi, Huabang
Lu, Liyun
Yin, Mengxiao
Zhong, Cheng
Yang, Feng
Biomedical Signal Processing and Control2023Journal Article, cited 0 times
Website
Pancreas-CT
Image Registration
Semi-automatic segmentation
Medical image segmentation and registration are closely related steps in clinical medical diagnosis. In the past few years, deep learning techniques for joint segmentation and registration have achieved good results in both tasks through one-way assisted learning or mutual utilization. However, they often rely on large labeled datasets for supervised training or directly use pseudo-labels without quality estimation. We propose a joint registration and segmentation self-training framework (JRSS), which aims to use segmentation pseudo-labels to promote shared learning between segmentation and registration in scenarios with few manually labeled samples, while improving the performance of both tasks. JRSS combines weakly supervised registration and semi-supervised segmentation learning in a self-training framework. Segmentation self-training generates high-quality pseudo-labels for unlabeled data via noise injection, pseudo-label screening, and uncertainty correction. Registration utilizes the pseudo-labels to facilitate weakly supervised learning and serves as input noise as well as data augmentation for segmentation self-training. Experiments on two public 3D medical image datasets, abdominal CT and brain MRI, demonstrate that our proposed method achieves simultaneous improvements in segmentation and registration accuracy in few-shot scenarios, and outperforms single-task fully-supervised state-of-the-art models in the Dice similarity coefficient and the standard deviation of the Jacobian determinant.
LCSCNet: A multi-level approach for lung cancer stage classification using 3D dense convolutional neural networks with concurrent squeeze-and-excitation module
Tyagi, Shweta
Talbar, Sanjay N.
Biomedical Signal Processing and Control2023Journal Article, cited 0 times
NSCLC Radiogenomics
Lung cancer, the deadliest disease worldwide, poses a massive threat to humankind. Various researchers have designed Computer-Aided-Diagnosis systems for the early-stage detection of lung cancer. However, patients are primarily diagnosed in advanced stages when treatment becomes complicated and dependent on multiple factors like size, nature, location of the tumor, and proper cancer staging. TNM (Tumor, Node, and Metastasis) staging provides all this information. This study aims to develop a novel and efficient approach to classify lung cancer stages based on TNM standards. We propose a multi-level 3D deep convolutional neural network, LCSCNet (Lung Cancer Stage Classification Network). The proposed network architecture consists of three similar classifier networks to classify three labels, T, N, and M-labels. First, we pre-process the data, in which the CT images are augmented, and the label files are processed to get the corresponding TNM labels. For the classification network, we implement a dense convolutional neural network with a concurrent squeeze & excitation module and asymmetric convolutions for classifying each label separately. The overall stage is determined by combining all three labels. The concurrent squeeze & excitation module helps the network focus on the essential information of the image, due to which the classification performance is enhanced. The asymmetric convolutions are introduced to reduce the computation complexity of the network. Two publicly available datasets are used for this study. We achieved average accuracies of 96.23% for T-Stage, 97.63% for N-Stage, and 96.92% for M-Stage classification. Furthermore, an overall stage classification accuracy of 97% is achieved.
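The squeeze-and-excitation idea used above re-weights feature channels by a learned gate computed from globally pooled activations. A minimal 3D channel-SE block in PyTorch, as a generic sketch rather than the paper's concurrent (channel plus spatial) variant:

    import torch.nn as nn

    class SEBlock3D(nn.Module):
        # Channel squeeze-and-excitation for 3D feature maps (B, C, D, H, W);
        # reduction=16 is a conventional, assumed value.
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            w = x.mean(dim=(2, 3, 4))                 # squeeze: global average pool
            w = self.fc(w)[:, :, None, None, None]    # excitation: channel gates
            return x * w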
Towards fully automated deep-learning-based brain tumor segmentation: Is brain extraction still necessary?
Pacheco, Bruno Machado
de Souza e Cassia, Guilherme
Silva, Danilo
Biomedical Signal Processing and Control2023Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
DICOM-Glioma-SEG
State-of-the-art brain tumor segmentation is based on deep learning models applied to multi-modal MRIs. Currently, these models are trained on images after a preprocessing stage that involves registration, interpolation, brain extraction (BE, also known as skull-stripping) and manual correction by an expert. However, for clinical practice, this last step is tedious and time-consuming and, therefore, not always feasible, resulting in skull-stripping faults that can negatively impact the tumor segmentation quality. Still, the extent of this impact has never been measured for any of the many different BE methods available. In this work, we propose an automatic brain tumor segmentation pipeline and evaluate its performance with multiple BE methods. Our experiments show that the choice of a BE method can compromise up to 15.7% of the tumor segmentation performance. Moreover, we propose training and testing tumor segmentation models on non-skull-stripped images, effectively discarding the BE step from the pipeline. Our results show that this approach leads to a competitive performance at a fraction of the time. We conclude that, in contrast to the current paradigm, training tumor segmentation models on non-skull-stripped images can be the best option when high performance in clinical practice is desired.
An efficient reversible data hiding using SVD over a novel weighted iterative anisotropic total variation based denoised medical images
Diwakar, Manoj
Kumar, Pardeep
Singh, Prabhishek
Tripathi, Amrendra
Singh, Laxman
Biomedical Signal Processing and Control2023Journal Article, cited 0 times
Website
LDCT-and-Projection-data
LIDC-IDRI
Algorithm Development
Watermarking
Image denoising
Computed Tomography (CT)
Computed tomography (CT) advancement and extensive usage have raised public concern regarding the radiation dose to the patient. Reducing the radiation dose may lead to more noise and artifacts, which may harm the reputation of radiologists. The instability of low-dose CT reconstruction necessitates better image reconstruction to increase diagnostic performance, and recent low-dose CT studies have demonstrated outstanding results. Such denoised low-dose medical images, along with associated medical information, often need to be transmitted over a network. Hence, in this article, a novel denoising method is first proposed to improve the quality of low-dose CT images, based on the total variation method and the whale optimization algorithm (WHA); WHA is used to obtain the best possible weighting function. Noise is reduced by comparing a given output to the ground truth, while total variation tends to statistically migrate the data noise distribution from strong to weak. Following denoising, a reversible watermarking approach based on SVD and multi-local extrema (MLE) techniques is provided. According to the comparative experimental investigation, the individual denoising and watermarking results are excellent in terms of visual quality and performance metrics, and the watermarking results are particularly impressive when the watermark is embedded in the denoised CT images. The resulting image thus allows noise to be cut while vital information is kept secure.
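In its simplest (non-reversible) form, SVD watermarking perturbs the singular values of the image; the sketch below shows that core idea only and omits the multi-local-extrema bookkeeping that makes the paper's scheme reversible. The embedding strength alpha is an assumed value.

    import numpy as np

    def embed_watermark(image: np.ndarray, watermark: np.ndarray,
                        alpha: float = 0.05) -> np.ndarray:
        # alpha is an assumed embedding strength for illustration only.
        U, S, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
        S_marked = S + alpha * watermark[: S.size]
        return (U * S_marked) @ Vt   # reconstruct with marked singular values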
Brain tumor diagnosis using a step-by-step methodology based on courtship learning-based water strider algorithm
Ren, Weiguo
Bashkandi, Aysa Hasanzade
Jahanshahi, Javad Afshar
AlHamad, Ahmad Qasim Mohammad
Javaheri, Danial
Mohammadi, Morteza
Biomedical Signal Processing and Control2023Journal Article, cited 0 times
Brain-Tumor-Progression
Medical imaging plays an essential role in the diagnosis and management of brain tumors. Today, MRI images are used to diagnose brain tumors because they show the structure of a normal brain in great detail. Brain tumor segmentation from MRI is a challenging procedure: the tumor to be diagnosed has a flexible and complex structure in the image, differing completely in size and location from patient to patient, and various algorithms have accordingly been suggested. In this paper, an automated method is introduced to achieve higher speed and appropriate accuracy in the diagnosis of brain tumors. The present study proposes a new pipeline technique for automatic diagnosis of brain cancer from MRI. Features are extracted from the input images, with preprocessing used to reduce the complexity of the system. The features are then fed into an optimal ANN to provide an efficient diagnosis system. Both feature selection and classification are performed by an improved metaheuristic, named the courtship learning-based water strider algorithm. The proposed method is then applied to the "Brain-Tumor-Progression" database, and the outcomes are validated by comparison with some previously published methods. Simulation results indicate the higher efficiency of the suggested method against the other analyzed procedures.
An EffcientNet-encoder U-Net Joint Residual Refinement Module with Tversky–Kahneman Baroni–Urbani–Buser loss for biomedical image Segmentation
Nham, Do-Hai-Ninh
Trinh, Minh-Nhat
Nguyen, Viet-Dung
Pham, Van-Truong
Tran, Thi-Thao
Biomedical Signal Processing and Control2023Journal Article, cited 0 times
TCGA-LGG
BraTS
Radiomics
Quantitative analysis of biomedical images is in increasing demand for modern computer vision approaches. While advanced procedures have recently been enforced, there is still a need to optimize network architectures and loss functions. Inspired by the pretrained EfficientNet-B4 and the refinement module in boundary-aware problems, we propose a new two-stage network called EfficientNet-encoder U-Net Joint Residual Refinement Module, and we create a novel loss function called the Tversky–Kahneman Baroni–Urbani–Buser loss. The loss function is built on the basis of the Baroni–Urbani–Buser coefficient and the Jaccard–Tanimoto coefficient and reformulated with the Tversky–Kahneman probability-weighting function. We evaluated our algorithm on four popular datasets: the 2018 Data Science Bowl Cell Nucleus Segmentation dataset, the Brain Tumor LGG Segmentation dataset, the Skin Lesion ISIC 2018 dataset and the MRI cardiac ACDC dataset. Several comparisons show that our proposed approach is noticeably promising, and some of the segmentation results set new state-of-the-art results. The code is available at https://github.com/tswizzle141/An-EffcientNet-encoder-U-Net-Joint-Residual-Refinement-Module-with-TK-BUB-Loss.
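For reference, the standard Tversky loss that the paper's formulation builds on trades off false negatives against false positives through alpha and beta. A PyTorch sketch of that baseline; the paper additionally rescales it with a Kahneman-style probability-weighting function, which is not reproduced here.

    import torch

    def tversky_loss(pred: torch.Tensor, target: torch.Tensor,
                     alpha: float = 0.7, beta: float = 0.3,
                     eps: float = 1e-6) -> torch.Tensor:
        # pred: probabilities in [0, 1]; alpha/beta are common illustrative values.
        tp = (pred * target).sum()
        fn = ((1 - pred) * target).sum()
        fp = (pred * (1 - target)).sum()
        return 1 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)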
Disagreement attention: Let us agree to disagree on computed tomography segmentation
Molina, Edgar Giussepi Lopez
Huang, Xingru
Zhang, Qianni
Biomedical Signal Processing and Control2023Journal Article, cited 0 times
Pancreas-CT
Semantic segmentation is a popular technique successfully applied to various fields such as self-driving cars and natural, medical, and satellite images, among others. On the one hand, the well-known concept of disagreement, where two models help each other learn better discriminative features, comes from co-training. On the other hand, attention mechanisms are proven to improve segmentation results; nevertheless, they focus solely on signals with some kind of alignment. This research leverages both concepts in a new kind of disagreement-based attention (Pure, Embedded, and Mixed-Embedded disagreement attention) that improves model generalisation. Furthermore, we introduce an innovative deep supervision approach (alternating deep supervision), which trains the model following the sequence of supervision branches. Extensive experiments on the segmentation benchmark datasets LiTS17 and CT-82 verify the effectiveness of the proposed approaches. The code is available at https://github.com/giussepi/disagreement-attention.
2.75D: Boosting learning by representing 3D Medical imaging to 2D features for small data
Wang, Xin
Su, Ruisheng
Xie, Weiyi
Wang, Wenjin
Xu, Yi
Mann, Ritse
Han, Jungong
Tan, Tao
Biomedical Signal Processing and Control2023Journal Article, cited 0 times
Duke-Breast-Cancer-MRI
CNN
In medical-data driven learning, 3D convolutional neural networks (CNNs) have started to show superior performance to 2D CNNs in numerous deep learning tasks, proving the added value of 3D spatial information in feature representation. However, the difficulty in collecting more training samples to converge, more computational resources and longer execution time make this approach less applied. Also, applying transfer learning on 3D CNN is challenging due to a lack of publicly available pre-trained 3D models. To tackle these issues, we proposed a novel 2D strategical representation of volumetric data, namely 2.75D. In this work, the spatial information of 3D images is captured in a single 2D view by a spiral-spinning technique. As a result, 2D CNN networks can also be used to learn volumetric information. Besides, we can fully leverage pre-trained 2D CNNs for downstream vision problems. We also explore a multi-view 2.75D strategy, 2.75D 3 channels (2.75D × 3), to boost the advantage of 2.75D. We evaluated the proposed methods on three public datasets with different modalities or organs (Lung CT, Breast MRI, and Prostate MRI), against their 2D, 2.5D, and 3D counterparts in classification tasks. Results show that the proposed methods significantly outperform other counterparts when all methods were trained from scratch on the lung dataset. Such performance gain is more pronounced with transfer learning or in the case of limited training data. Our methods also achieved comparable performance on other datasets. In addition, our methods achieved a substantial reduction in time consumption of training and inference compared with the 2.5D or 3D method.
MFUnetr: A transformer-based multi-task learning network for multi-organ segmentation from partially labeled datasets
Hao, Qin
Tian, Shengwei
Yu, Long
Wang, Junwen
Biomedical Signal Processing and Control2023Journal Article, cited 0 times
Pancreas-CT
As multi-organ segmentation of CT images is crucial for clinical applications, most state-of-the-art models rely on a fully annotated dataset with strong supervision to achieve high accuracy for particular organs. However, these models generalize weakly when applied to varied CT images due to the small scale and single source of the training data. To utilize existing partially labeled datasets to obtain segmentations containing more organs with higher accuracy and robustness, we create a multi-task learning network called MFUnetr. Fed directly with a union of datasets, MFUnetr trains an encoder-decoder network on two tasks in parallel: the main task produces full organ segmentation using a specific training strategy, and the auxiliary task segments the labeled organs of each dataset using label priors. Additionally, we offer a new weighted combined loss function to optimize the model. Compared to the base model UNETR trained on the fully annotated BTCV dataset, our network, trained on a combination of three partially labeled datasets, improved the mean Dice on overlapping organs: spleen +0.35%, esophagus +15.28%, and aorta +8.31%. Importantly, without fine-tuning, the mean Dice over the 13 BTCV organs remained 1.91% higher even when all 15 organs were segmented. The experimental results show that our proposed method can effectively use existing large partially annotated datasets to alleviate data scarcity in multi-organ segmentation.
Transformer based multiple instance learning for WSI breast cancer classification
Gao, Chengyang
Sun, Qiule
Zhu, Wen
Zhang, Lizhi
Zhang, Jianxin
Liu, Bin
Zhang, Junxing
Biomedical Signal Processing and Control2024Journal Article, cited 0 times
SLN-Breast
Pathomics
Whole Slide Imaging (WSI)
Computer Aided Diagnosis (CADx)
Classification
Algorithm Development
The computer-aided diagnosis method based on deep learning provides pathologists with preliminary diagnostic opinions and improves their work efficiency. Inspired by the widespread use of transformers in computer vision, we explore their effectiveness and potential for classifying breast cancer tissues in WSIs and propose a hybrid multiple instance learning method called HTransMIL. Its first stage selects informative instances based on a hierarchical Swin Transformer, which captures global and local information of pathological images and is beneficial for obtaining accurate discriminative instances. The second stage strengthens the correlation between the selected instances via another transformer encoder and produces powerful bag-level features for classification by aggregating the interacted instances. Besides, visualization analysis is utilized to better understand the weakly supervised classification model for WSIs. Extensive evaluation on a private and two public WSI breast cancer datasets demonstrates the effectiveness and competitiveness of HTransMIL. The code and models are publicly available at https://github.com/Chengyang852/Transformer-for-WSI-classification.
BGSNet: A cascaded framework of boundary guided semantic for COVID-19 infection segmentation
Chen, Ying
Feng, Longfeng
Lin, Hongping
Zhang, Wei
Chen, Wang
Zhou, Zonglai
Xu, Guohui
Biomedical Signal Processing and Control2024Journal Article, cited 0 times
CT Images in COVID-19
Veterinary and Food Sciences
Coronavirus disease 2019 (COVID-19) spread globally in early 2020, leading to a new health crisis. Automatic segmentation of lung infections from computed tomography (CT) images provides an important basis for rapid diagnosis of COVID-19. This paper proposes a cascaded boundary guided semantic network (BGSNet) architecture based on boundary supervision, multi-scale atrous convolution and a dual attention mechanism. The BGSNet cascaded architecture includes a boundary supervision module (BSM), a multi-scale atrous convolution module (MACM), and a dual attention guidance module (DAGM). BSM provides boundary-supervised features through explicit modeling to guide precise localization of target regions. MACM introduces atrous convolutions with different dilation rates to obtain multi-scale receptive fields, thus enhancing the segmentation of targets at different scales. DAGM combines channel and spatial attention to filter irrelevant information and enhance feature learning. Experimental results on the publicly available CO-Seg and CLSC datasets show that the BGSNet cascaded architecture achieves DSC of 0.806 and 0.677, respectively, which is superior to advanced COVID-19 infection segmentation models. The effectiveness of the main components of BGSNet has been demonstrated through ablation experiments.
Deep learning-based tumor segmentation and classification in breast MRI with 3TP method
Carvalho, Edson Damasceno
da Silva Neto, Otilio Paulo
de Carvalho Filho, Antônio Oseas
Biomedical Signal Processing and Control2024Journal Article, cited 0 times
QIN Breast DCE-MRI
Breast cancer
Magnetic Resonance Imaging (MRI)
Tumor segmentation
Classification
Automatic Segmentation
Background and Objective: Timely diagnosis of early breast cancer plays a critical role in improving patient outcome and increasing treatment effectiveness. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a minimally invasive test widely used in the analysis of breast cancer. Manual analysis of DCE-MRI images by the specialist is extremely complex, exhaustive, and can lead to misunderstandings. Thus, the development of automated methods for analyzing DCE-MRI images of the breast is increasing. In this research, we propose an automatic methodology capable of detecting tumors and classifying their malignancy in DCE-MRI breast images. Methodology: The proposed method uses two deep learning architectures, SegNet and UNet, for breast tumor segmentation, and the three-time-point (3TP) method for classifying the malignancy of segmented tumors. Results: The proposed methodology was tested on the public Quantitative Imaging Network (QIN) Breast DCE-MRI image set; the best segmentation result was a Dice of 0.9332 and an IoU of 0.9799. For the classification of tumor malignancy, the methodology achieved an accuracy of 100%. Conclusions: We demonstrate that breast tumor segmentation in DCE-MRI images can be solved efficiently with deep learning architectures, and that tumor malignancy classification can be done through the 3TP method. The method can be integrated as a support system for the specialist treating patients with breast cancer.
Enhancing brain MRI data visualization accuracy with UNET and FPN networks
Yeboah, Derrick
Dequan, Li
Agordzo, George K.
Biomedical Signal Processing and Control2024Journal Article, cited 0 times
BraTS 2020
TCGA-LGG
Medical imaging holds significant value in the identification and monitoring of neurological conditions, and visualization of brain MRI data assists medical practitioners in understanding complex brain structures and identifying irregularities. Transfer learning augments the efficacy of convolutional neural networks (CNNs), which are renowned for their exceptional capability in medical image interpretation. Using UNet and FPN neural networks together with transfer learning techniques, the current study evaluates the precision of brain MRI data visualization. The prevalent UNet and FPN architectures for medical image analysis are described: the UNet model employs skip connections in its encoder-decoder architecture, while the FPN model collects multi-scale information using a feature pyramid; the effectiveness of both segmentation and classification schemes is demonstrated. To apply transfer learning, the UNet and FPN networks are first trained to learn generic image representations on a massive dataset such as ImageNet and are then fine-tuned on brain MRI images as the target data. Pre-training equips the networks with comprehensive source-domain features that facilitate brain MRI visualization, improving their ability to discern nuanced attributes and depict them faithfully. The model framework implemented in this study is predicated on a histogram-based threshold: by examining the image intensity histogram, we computed a suitable threshold and evaluated the model's proficiency in segmenting brain lesions in MRI scans. Correlations between the numerous modalities were exploited, as each modality produced differing degrees of accuracy, peaking at 91%; a combination of bottom-up and top-down approaches was utilized to integrate the modalities into the model. By capitalizing on the complex interrelation among modalities, our methodology improves the precision of segmentation. The findings emphasize the importance of transfer learning methodologies for medical image processing in brain MRI data visualization: by employing pre-trained networks, medical practitioners can evaluate brain architecture and diseases more precisely. This article contributes to the expanding body of research exploring the use of transfer learning to improve the performance of CNNs in medical imaging.
Descriptions and evaluations of methods for determining surface curvature in volumetric data
Hauenstein, Jacob D.
Newman, Timothy S.
Computers & Graphics2020Journal Article, cited 0 times
Website
FDA-Phantom
RIDER PHANTOM PET-CT
Highlights
• Methods using convolution or fitting are often the most accurate.
• The existing TE method is fast and accurate on noise-free data.
• The OP method is faster than existing, similarly accurate methods on real data.
• Even modest errors in curvature notably impact curvature-based renderings.
• On real data, GSTH, GSTI, and OP produce the best curvature-based renderings.
Abstract
Three methods developed for determining surface curvature in volumetric data are described, including one convolution-based method, one fitting-based method, and one method that uses normal estimates to directly determine curvature. Additionally, a study of the accuracy and computational performance of these methods and prior methods is presented. The study considers synthetic data, noise-added synthetic data, and real data. Sample volume renderings using curvature-based transfer functions, where curvatures were determined with the methods, are also exhibited.
An overview on Meta-learning approaches for Few-shot Weakly-supervised Segmentation
Gama, Pedro Henrique Targino
Oliveira, Hugo
dos Santos, Jefersson A.
Cesar, Roberto M.
Computers & Graphics2023Journal Article, cited 0 times
ISBI-MR-Prostate-2013
Semantic segmentation is a difficult task in computer vision, with applications in many scenarios, often as a preprocessing step within a larger tool. Current solutions are based on deep neural networks, which typically require large amounts of data to learn a task. Aiming to alleviate the strenuous labor of collecting and annotating data, several research fields have emerged in recent years. One of them is Meta-Learning, which seeks to improve the ability of models to generalize when learning from a restricted amount of data. In this work, we extend a previous paper with a more extensive overview of the still under-explored problem of Few-Shot Weakly-Supervised Semantic Segmentation. We refine the previous taxonomy and review additional methods, including Few-Shot Segmentation methods that could be adapted to weak supervision. The goal is to provide a simple organization of the literature, highlight aspects of the current state of the field, and serve as a starting point to foster research on this problem, with applications in areas such as medical imaging, remote sensing, and video segmentation.
Proteogenomic and metabolomic characterization of human glioblastoma
Wang, Liang-Bo
Karpova, Alla
Gritsenko, Marina A
Kyle, Jennifer E
Cao, Song
Li, Yize
Rykunov, Dmitry
Colaprico, Antonio
Rothstein, Joseph H
Hong, Runyu
Cancer Cell2021Journal Article, cited 0 times
Website
CPTAC-GBM
GBM
Pathology
Histopathologic and proteogenomic heterogeneity reveals features of clear cell renal cell carcinoma aggressiveness
Li, Y.
Lih, T. M.
Dhanasekaran, S. M.
Mannan, R.
Chen, L.
Cieslik, M.
Wu, Y.
Lu, R. J.
Clark, D. J.
Kolodziejczak, I.
Hong, R.
Chen, S.
Zhao, Y.
Chugh, S.
Caravan, W.
Naser Al Deen, N.
Hosseini, N.
Newton, C. J.
Krug, K.
Xu, Y.
Cho, K. C.
Hu, Y.
Zhang, Y.
Kumar-Sinha, C.
Ma, W.
Calinawan, A.
Wyczalkowski, M. A.
Wendl, M. C.
Wang, Y.
Guo, S.
Zhang, C.
Le, A.
Dagar, A.
Hopkins, A.
Cho, H.
Leprevost, F. D. V.
Jing, X.
Teo, G. C.
Liu, W.
Reimers, M. A.
Pachynski, R.
Lazar, A. J.
Chinnaiyan, A. M.
Van Tine, B. A.
Zhang, B.
Rodland, K. D.
Getz, G.
Mani, D. R.
Wang, P.
Chen, F.
Hostetter, G.
Thiagarajan, M.
Linehan, W. M.
Fenyo, D.
Jewell, S. D.
Omenn, G. S.
Mehra, R.
Wiznerowicz, M.
Robles, A. I.
Mesri, M.
Hiltke, T.
An, E.
Rodriguez, H.
Chan, D. W.
Ricketts, C. J.
Nesvizhskii, A. I.
Zhang, H.
Ding, L.
Clinical Proteomic Tumor Analysis Consortium
Cancer Cell2022Journal Article, cited 0 times
CPTAC-CCRCC
TCGA-KIRC
Pathomics
histopathology imaging features
Uchl1
clear cell renal cell carcinoma (ccRCC)
glycoproteomics
histology
metabolome
phosphoproteomics
proteogenomics
single-nuclei RNA-seq
tumor heterogeneity
Clear cell renal cell carcinomas (ccRCCs) represent approximately 75% of RCC cases and account for most RCC-associated deaths. Inter- and intratumoral heterogeneity (ITH) results in varying prognosis and treatment outcomes. To obtain the most comprehensive profile of ccRCC, we perform integrative histopathologic, proteogenomic, and metabolomic analyses on 305 ccRCC tumor segments and 166 paired adjacent normal tissues from 213 cases. Combining histologic and molecular profiles reveals ITH in 90% of ccRCCs, with 50% demonstrating immune signature heterogeneity. High tumor grade, along with BAP1 mutation, genome instability, increased hypermethylation, and a specific protein glycosylation signature, defines a high-risk disease subset, where UCHL1 expression displays prognostic value. Single-nuclei RNA sequencing of the adverse sarcomatoid and rhabdoid phenotypes uncovers gene signatures and potential insights into tumor evolution. In vitro cell line studies confirm the potential of inhibiting identified phosphoproteome targets. This study molecularly stratifies aggressive histopathologic subtypes that may inform more effective treatment strategies.
Proteogenomic insights suggest druggable pathways in endometrial carcinoma
Yongchao Dou
Lizabeth Katsnelson
Marina A. Gritsenko
Yingwei Hu
Boris Reva
Runyu Hong
Yi-Ting Wang
Iga Kolodziejczak
Rita Jui-Hsien Lu
Chia-Feng Tsai
Wen Bu
Wenke Liu
Xiaofang Guo
Eunkyung An
Rebecca C. Arend
Jasmin Bavarva
Lijun Chen
Rosalie K. Chu
Andrzej Czekański
Teresa Davoli
Elizabeth G. Demicco
Deborah DeLair
Kelly Devereaux
Saravana M. Dhanasekaran
Peter Dottino
Bailee Dover
Thomas L. Fillmore
McKenzie Foxall
Catherine E. Hermann
Tara Hiltke
Galen Hostetter
Marcin Jędryka
Scott D. Jewell
Isabelle Johnson
Andrea G. Kahn
Amy T. Ku
Chandan Kumar-Sinha
Paweł Kurzawa
Alexander J. Lazar
Rossana Lazcano
Jonathan T. Lei
Yi Li
Yuxing Liao
Tung-Shing M. Lih
Tai-Tu Lin
John A. Martignetti
Ramya P. Masand
Rafał Matkowski
Wilson McKerrow
Mehdi Mesri
Matthew E. Monroe
Jamie Moon
Ronald J. Moore
Michael D. Nestor
Chelsea Newton
Tatiana Omelchenko
Gilbert S. Omenn
Samuel H. Payne
Vladislav A. Petyuk
Ana I. Robles
Henry Rodriguez
Kelly V. Ruggles
Dmitry Rykunov
Sara R. Savage
Athena A. Schepmoes
Tujin Shi
Zhiao Shi
Jimin Tan
Mason Taylor
Mathangi Thiagarajan
Joshua M. Wang
Karl K. Weitz
Bo Wen
C.M. Williams
Yige Wu
Matthew A. Wyczalkowski
Xinpei Yi
Xu Zhang
Rui Zhao
David Mutch
Arul M. Chinnaiyan
Richard D. Smith
Alexey I. Nesvizhskii
Pei Wang
Maciej Wiznerowicz
Li Ding
D.R. Mani
Hui Zhang
Matthew L. Anderson
Karin D. Rodland
Bing Zhang
Tao Liu
David Fenyö
Clinical Proteomic Tumor Analysis Consortium
Andrzej Antczak
Meenakshi Anurag
Thomas Bauer
Chet Birger
Michael J. Birrer
Melissa Borucki
Shuang Cai
Anna Calinawan
Steven A. Carr
Patricia Castro
Sandra Cerda
Daniel W. Chan
David Chesla
Marcin P. Cieslik
Sandra Cottingham
Rajiv Dhir
Marcin J. Domagalski
Brian J. Druker
Elizabeth Duffy
Nathan J. Edwards
Robert Edwards
Matthew J. Ellis
Jennifer Eschbacher
Mina Fam
Brenda Fevrier-Sullivan
Jesse Francis
John Freymann
Stacey Gabriel
Gad Getz
Michael A. Gillette
Andrew K. Godwin
Charles A. Goldthwaite
Pamela Grady
Jason Hafron
Pushpa Hariharan
Barbara Hindenach
Katherine A. Hoadley
Jasmine Huang
Michael M. Ittmann
Ashlie Johnson
Corbin D. Jones
Karen A. Ketchum
Justin Kirby
Toan Le
Avi Ma'ayan
Rashna Madan
Sailaja Mareedu
Peter B. McGarvey
Francesmary Modugno
Rebecca Montgomery
Kristen Nyce
Amanda G. Paulovich
Barbara L. Pruetz
Liqun Qi
Shannon Richey
Eric E. Schadt
Yvonne Shutack
Shilpi Singh
Michael Smith
Darlene Tansil
Ratna R. Thangudu
Matt Tobin
Ki Sung Um
Negin Vatanian
Alex Webster
George D. Wilson
Jason Wright
Kakhaber Zaalishvili
Zhen Zhang
Grace Zhao
Cancer Cell2023Journal Article, cited 0 times
CPTAC-UCEC
TCGA-UCEC
We characterized a prospective endometrial carcinoma (EC) cohort containing 138 tumors and 20 enriched normal tissues using 10 different omics platforms. Targeted quantitation of two peptides can predict antigen processing and presentation machinery activity, and may inform patient selection for immunotherapy. Association analysis between MYC activity and metformin treatment in both patients and cell lines suggests a potential role for metformin treatment in non-diabetic patients with elevated MYC activity. PIK3R1 in-frame indels are associated with elevated AKT phosphorylation and increased sensitivity to AKT inhibitors. CTNNB1 hotspot mutations are concentrated near phosphorylation sites mediating pS45-induced degradation of β-catenin, which may render Wnt-FZD antagonists ineffective. Deep learning accurately predicts EC subtypes and mutations from histopathology images, which may be useful for rapid diagnosis. Overall, this study identified molecular and imaging markers that can be further investigated to guide patient stratification for more precise treatment of EC.
Coordinated Cellular Neighborhoods Orchestrate Antitumoral Immunity at the Colorectal Cancer Invasive Front
Schürch, Christian M
Bhate, Salil S
Barlow, Graham L
Phillips, Darci J
Noti, Luca
Zlobec, Inti
Chu, Pauline
Black, Sarah
Demeter, Janos
McIlwain, David R
Kinoshita, Shigemi
Samusik, Nikolay
Goltsev, Yury
Nolan, Garry P
2020Journal Article, cited 0 times
B7-H1 Antigen
Biomarkers, Tumor
CD4-Positive T-Lymphocytes
Cell Line, Tumor
Colorectal Neoplasms
Female
Humans
Immunotherapy
Male
Neoplasm Invasiveness
Tumor Microenvironment
Biomedical and Clinical Sciences
Immunology
Oncology and Carcinogenesis
CRC_FFPE-CODEX_CellNeighs
Antitumoral immunity requires organized, spatially nuanced interactions between components of the immune tumor microenvironment (iTME). Understanding this coordinated behavior in effective versus ineffective tumor control will advance immunotherapies. We re-engineered co-detection by indexing (CODEX) for paraffin-embedded tissue microarrays, enabling simultaneous profiling of 140 tissue regions from 35 advanced-stage colorectal cancer (CRC) patients with 56 protein markers. We identified nine conserved, distinct cellular neighborhoods (CNs), a collection of components characteristic of the CRC iTME. Enrichment of PD-1+CD4+ T cells only within a granulocyte CN positively correlated with survival in a high-risk patient subset. Coupling of tumor and immune CNs, fragmentation of T cell and macrophage CNs, and disruption of inter-CN communication was associated with inferior outcomes. This study provides a framework for interrogating how complex biological processes, such as antitumoral immunity, occur through concerted actions of cells and spatial domains.
Proteogenomic analysis of chemo-refractory high-grade serous ovarian cancer
Chowdhury, Shrabanti
Kennedy, Jacob J
Ivey, Richard G
Murillo, Oscar D
Hosseini, Noshad
Song, Xiaoyu
Petralia, Francesca
Calinawan, Anna
Savage, Sara R
Berry, Anna B
Reva, Boris
Ozbek, Umut
Krek, Azra
Ma, Weiping
da Veiga Leprevost, Felipe
Ji, Jiayi
Yoo, Seungyeul
Lin, Chenwei
Voytovich, Uliana J
Huang, Yajue
Lee, Sun-Hee
Bergan, Lindsay
Lorentzen, Travis D
Mesri, Mehdi
Rodriguez, Henry
Hoofnagle, Andrew N
Herbert, Zachary T
Nesvizhskii, Alexey I
Zhang, Bing
Whiteaker, Jeffrey R
Fenyo, David
McKerrow, Wilson
Wang, Joshua
Schürer, Stephan C
Stathias, Vasileios
Chen, X Steven
Barcellos-Hoff, Mary Helen
Starr, Timothy K
Winterhoff, Boris J
Nelson, Andrew C
Mok, Samuel C
Kaufmann, Scott H
Drescher, Charles
Cieslik, Marcin
Wang, Pei
Birrer, Michael J
Paulovich, Amanda G
2023Journal Article, cited 0 times
PTRC-HGSOC
To improve the understanding of chemo-refractory high-grade serous ovarian cancers (HGSOCs), we characterized the proteogenomic landscape of 242 (refractory and sensitive) HGSOCs, representing one discovery and two validation cohorts across two biospecimen types (formalin-fixed paraffin-embedded and frozen). We identified a 64-protein signature that predicts with high specificity a subset of HGSOCs refractory to initial platinum-based therapy and is validated in two independent patient cohorts. We detected a significant association between the lack of Ch17 loss of heterozygosity (LOH) and chemo-refractoriness. Based on pathway protein expression, we identified 5 clusters of HGSOC, which were validated across two independent patient cohorts and patient-derived xenograft (PDX) models. These clusters may represent different mechanisms of refractoriness and implicate putative therapeutic vulnerabilities.
Tumor-associated macrophages trigger MAIT cell dysfunction at the HCC invasive margin
Ruf, Benjamin
Bruhns, Matthias
Babaei, Sepideh
Kedei, Noemi
Ma, Lichun
Revsine, Mahler
Benmebarek, Mohamed-Reda
Ma, Chi
Heinrich, Bernd
Subramanyam, Varun
Qi, Jonathan
Wabitsch, Simon
Green, Benjamin L
Bauer, Kylynda C
Myojin, Yuta
Greten, Layla T
McCallen, Justin D
Huang, Patrick
Trehan, Rajiv
Wang, Xin
Nur, Amran
Murphy Soika, Dana Qiang
Pouzolles, Marie
Evans, Christine N
Chari, Raj
Kleiner, David E
Telford, William
Dadkhah, Kimia
Ruchinskas, Allison
Stovroff, Merrill K
Kang, Jiman
Oza, Kesha
Ruchirawat, Mathuros
Kroemer, Alexander
Wang, Xin Wei
Claassen, Manfred
Korangy, Firouzeh
Greten, Tim F
2023Journal Article, cited 0 times
CODEX imaging of HCC
Mucosal-associated invariant T (MAIT) cells represent an abundant innate-like T cell subtype in the human liver. MAIT cells are assigned crucial roles in regulating immunity and inflammation, yet their role in liver cancer remains elusive. Here, we present a MAIT cell-centered profiling of hepatocellular carcinoma (HCC) using scRNA-seq, flow cytometry, and co-detection by indexing (CODEX) imaging of paired patient samples. These analyses highlight the heterogeneity and dysfunctionality of MAIT cells in HCC and their defective capacity to infiltrate liver tumors. Machine-learning tools were used to dissect the spatial cellular interaction network within the MAIT cell neighborhood. Co-localization in the adjacent liver and interaction between niche-occupying CSF1R+PD-L1+ tumor-associated macrophages (TAMs) and MAIT cells were identified as key regulatory elements of MAIT cell dysfunction. Perturbation of this cell-cell interaction in ex vivo co-culture studies using patient samples and murine models reinvigorated MAIT cell cytotoxicity. These studies suggest that anti-PD-1/anti-PD-L1 therapies target MAIT cells in HCC patients.
Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images
Saltz, J.
Gupta, R.
Hou, L.
Kurc, T.
Singh, P.
Nguyen, V.
Samaras, D.
Shroyer, K. R.
Zhao, T.
Batiste, R.
Van Arnam, J.
Cancer Genome Atlas Research Network
Shmulevich, I.
Rao, A. U. K.
Lazar, A. J.
Sharma, A.
Thorsson, V.
Cell Rep2018Journal Article, cited 23 times
Website
TCIA General
artificial intelligence
bioinformatics
computer vision
deep learning
digital pathology
immuno-oncology
lymphocytes
machine learning
tumor microenvironment
tumor-infiltrating lymphocytes
Beyond sample curation and basic pathologic characterization, the digitized H&E-stained images of TCGA samples remain underutilized. To highlight this resource, we present mappings of tumor-infiltrating lymphocytes (TILs) based on H&E images from 13 TCGA tumor types. These TIL maps are derived through computational staining using a convolutional neural network trained to classify patches of images. Affinity propagation revealed local spatial structure in TIL patterns and correlation with overall survival. TIL map structural patterns were grouped using standard histopathological parameters. These patterns are enriched in particular T cell subpopulations derived from molecular measures. TIL densities and spatial structure were differentially enriched among tumor types, immune subtypes, and tumor molecular subtypes, implying that spatial infiltrate state could reflect particular tumor cell aberration states. Obtaining spatial lymphocytic patterns linked to the rich genomic characterization of TCGA samples demonstrates one use for the TCGA image archives with insights into the tumor-immune microenvironment.
MRI volume changes of axillary lymph nodes as predictor of pathological complete responses to neoadjuvant chemotherapy in breast cancer
Cattell, Renee F.
Kang, James J.
Ren, Thomas
Huang, Pauline B.
Muttreja, Ashima
Dacosta, Sarah
Li, Haifang
Baer, Lea
Clouston, Sean
Palermo, Roxanne
Fisher, Paul
Bernstein, Cliff
Cohen, Jules A.
Duong, Tim Q.
Clinical Breast Cancer2019Journal Article, cited 0 times
Website
ISPY1
ACRIN 6657
Breast
Radiomics
Introduction: Longitudinal monitoring of breast tumor volume over the course of chemotherapy is informative of pathological response. This study aims to determine whether axillary lymph node (aLN) volume on MRI could augment the accuracy of predicting treatment response to neoadjuvant chemotherapy (NAC).
Materials and Methods: Level-2a curated data from the I-SPY-1 TRIAL (2002-2006) were used. Patients had stage 2 or 3 breast cancer. MRI was acquired pre-, during, and post-NAC. A subset with visible aLN on MRI was identified (N=132). Pathological complete response (pCR) was predicted using breast tumor volume changes, nodal volume changes, and combined breast tumor and nodal volume changes, with sub-stratification by the presence or absence of large lymph nodes (3 mL, or ∼1.79 cm diameter, cutoff). Receiver operating characteristic (ROC) curve analysis was used to quantify prediction performance.
Results: The rates of change of aLN and breast tumor volume were informative of pathological response, with prediction being most informative early in treatment (AUC: 0.63-0.82) compared to later in treatment (AUC: 0.50-0.73). Larger aLN volume was associated with hormone receptor negativity, with the largest nodal volumes in triple-negative subtypes. Sub-stratification by node size improved predictive performance, with the best predictive model for large nodes having an AUC of 0.82.
Conclusion: Axillary lymph node MRI offers clinically relevant information and has the potential to predict treatment response to neoadjuvant chemotherapy in breast cancer patients.
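The AUC figures above come from ROC analysis of volume-change predictors; a minimal sketch of that analysis with scikit-learn, using fabricated stand-in data (the labels and volume changes below are synthetic, only the cohort size echoes the study):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Hypothetical early-treatment volume changes (%) and pCR labels.
pcr = rng.integers(0, 2, 132)
vol_change = -20 * pcr + rng.normal(0, 15, 132)  # responders shrink more

# Larger shrinkage (more negative change) should predict pCR = 1,
# so score with the negated volume change.
auc = roc_auc_score(pcr, -vol_change)
fpr, tpr, _ = roc_curve(pcr, -vol_change)
print(f"AUC = {auc:.2f}")
```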
Automatic segmentation of prostate magnetic resonance imaging using generative adversarial networks
Wang, Wei
Wang, Gangmin
Wu, Xiaofen
Ding, Xie
Cao, Xuexiang
Wang, Lei
Zhang, Jingyi
Wang, Peijun
2020Journal Article, cited 0 times
ISBI-MR-Prostate-2013
QIN-PROSTATE-Repeatability
BACKGROUND: Automatic and detailed segmentation of the prostate using magnetic resonance imaging (MRI) plays an essential role in prostate imaging diagnosis. Traditionally, the prostate gland was manually delineated by the clinician in a time-consuming process that requires professional experience. Thus, we propose an automatic prostate segmentation method, called SegDGAN, which is based on a classic generative adversarial network model.
MATERIAL AND METHODS: The proposed method comprises a fully convolutional generator network of densely connected blocks and a critic network with multi-scale feature extraction. The objective function is optimized using the mean absolute error and the Dice coefficient, leading to improved segmentation accuracy and better correspondence with the ground truth. The comparable medical image segmentation networks U-Net, FCN, and SegAN were selected for qualitative and quantitative comparison with SegDGAN on a 220-patient dataset and on public datasets. The commonly used segmentation evaluation metrics DSC, VOE, ASD, and HD were used to compare segmentation accuracy between these methods.
RESULTS: SegDGAN achieved the highest DSC value of 91.66%, the lowest VOE value of 15.28%, the lowest ASD value of 0.51 mm and the lowest HD value of 11.58 mm on the clinical dataset. In addition, the highest DSC value and the lowest VOE, ASD and HD values obtained on the public dataset PROMISE12 were 86.24%, 23.60%, 1.02 mm and 7.57 mm, respectively.
CONCLUSIONS: Our experimental results show that the SegDGAN model has the potential to improve the accuracy of MRI-based prostate gland segmentation. Code has been made available at: https://github.com/w3user/SegDGAN.
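The DSC, VOE, and ASD metrics used to benchmark SegDGAN can all be derived from binary masks; a sketch using NumPy/SciPy, with ASD written in the usual distance-transform formulation (the paper's exact implementation may differ):

```python
import numpy as np
from scipy import ndimage

def voe_dsc(a, b):
    """Volumetric overlap error (1 - IoU) and Dice for boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1 - inter / union, 2 * inter / (a.sum() + b.sum())

def asd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance between boolean volumes."""
    sa = a & ~ndimage.binary_erosion(a)  # surface voxels of a
    sb = b & ~ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of a / b.
    da = ndimage.distance_transform_edt(~sa, sampling=spacing)
    db = ndimage.distance_transform_edt(~sb, sampling=spacing)
    return (db[sa].mean() + da[sb].mean()) / 2.0

a = np.zeros((32, 32, 32), bool); a[8:24, 8:24, 8:24] = True
b = np.roll(a, 2, axis=0)  # same cube shifted by two voxels
print(voe_dsc(a, b), asd(a, b))
```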
Identifying overall survival in 98 glioblastomas using VASARI features at 3T
Sacli-Bilmez, B.
Firat, Z.
Topcuoglu, O. M.
Yaltirik, K.
Ture, U.
Ozturk-Isik, E.
Clin Imaging2023Journal Article, cited 0 times
Website
VASARI
REMBRANDT
Adult
Humans
*Glioblastoma/diagnostic imaging
*Brain Neoplasms/diagnostic imaging
Necrosis
Machine Learning
Supervised training
Algorithms
Glioblastoma
Magnetic resonance imaging
Survival analysis
PURPOSE: This study aims to evaluate qualitative and quantitative imaging metrics, along with clinical features, affecting overall survival in glioblastomas, and to classify patients into high- and low-survival groups based on 12-, 19-, and 24-month thresholds using machine learning. METHODS: The cohort consisted of 98 adult glioblastomas. A standard brain tumor magnetic resonance (MR) imaging protocol was performed on a 3T MR scanner. Visually Accessible REMBRANDT Images (VASARI) features were assessed. A Kaplan-Meier survival analysis followed by a log-rank test and multivariate Cox regression analysis were used to investigate the effects of the VASARI features, along with age, gender, extent of resection, pre- and post-operative KPS, Ki-67, and P53 mutation status, on overall survival. Supervised machine learning algorithms were employed to predict the survival of glioblastoma patients based on the 12-, 19-, and 24-month thresholds. RESULTS: Tumor location (p<0.001), the proportion of non-enhancing component (p=0.0482), and the proportion of necrosis (p=0.02) were significantly associated with overall survival in the Kaplan-Meier analysis. Multivariate Cox regression analysis revealed that increases in the proportion of non-enhancing component (p=0.040) and the proportion of necrosis (p=0.039) were significantly associated with overall survival. Machine-learning models successfully differentiated patients living longer than 12 months with 96.40% accuracy (sensitivity=97.22%, specificity=95.55%). The classification accuracies based on the 19- and 24-month survival thresholds were 70.87% (sensitivity=83.02%, specificity=60.11%) and 74.66% (sensitivity=67.58%, specificity=82.08%), respectively. CONCLUSION: Employing clinical and VASARI features together resulted in successful classification of glioblastomas by expected overall survival.
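The survival workflow described above (Kaplan-Meier with a log-rank test, then multivariate Cox regression) maps directly onto the lifelines library; a sketch on fabricated data, with column names chosen only to echo the study's variables:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 98
df = pd.DataFrame({
    "months": rng.exponential(15, n).round(1),  # overall survival time (toy)
    "event": rng.integers(0, 2, n),             # 1 = death observed
    "prop_necrosis": rng.uniform(0, 1, n),      # VASARI-style feature (toy)
    "age": rng.normal(60, 10, n),
})

# Kaplan-Meier for high vs. low necrosis, compared by a log-rank test.
hi = df["prop_necrosis"] > df["prop_necrosis"].median()
KaplanMeierFitter().fit(df.months[hi], df.event[hi], label="high necrosis")
res = logrank_test(df.months[hi], df.months[~hi],
                   event_observed_A=df.event[hi],
                   event_observed_B=df.event[~hi])
print("log-rank p =", res.p_value)

# Multivariate Cox proportional hazards over the remaining covariates.
cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
cph.print_summary()
```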
Comparison Between Radiological Semantic Features and Lung-RADS in Predicting Malignancy of Screen-Detected Lung Nodules in the National Lung Screening Trial
Li, Qian
Balagurunathan, Yoganand
Liu, Ying
Qi, Jin
Schabath, Matthew B
Ye, Zhaoxiang
Gillies, Robert J
Clinical Lung Cancer2017Journal Article, cited 3 times
Website
Lung cancer screening
Lung-RADS
National Lung Screening Trial (NLST)
Predictive
Semantic features
Assessment of a radiomic signature developed in a general NSCLC cohort for predicting overall survival of ALK-positive patients with different treatment types
Huang, Lyu
Chen, Jiayan
Hu, Weigang
Xu, Xinyan
Liu, Di
Wen, Junmiao
Lu, Jiayu
Cao, Jianzhao
Zhang, Junhua
Gu, Yu
Wang, Jiazhou
Fan, Min
Clinical Lung Cancer2019Journal Article, cited 0 times
Website
NSCLC-Radiomics
RIDER Lung CT
Radiomics
Non Small Cell Lung Cancer (NSCLC)
Objectives: To investigate the potential of a radiomic signature developed in a general NSCLC cohort for predicting the overall survival of ALK-positive patients with different treatment types.
Methods: After test-retest analysis in the RIDER dataset, 132 features (ICC>0.9) were selected for a LASSO Cox regression model with leave-one-out cross-validation. The NSCLC Radiomics collection from TCIA was randomly divided into a training set (N=254) and a validation set (N=63) to develop a general radiomic signature for NSCLC. In our ALK-positive set, 35 patients received targeted therapy and 19 patients received non-targeted therapy. The developed signature was then tested in this ALK-positive set. Performance was evaluated with the C-index and stratification analysis.
Results: The general signature performed well (C-index>0.6, log-rank p-value<0.05) in the NSCLC Radiomics collection. It includes five features: Geom_va_ratio, W_GLCM_LH_Std, W_GLCM_LH_DV, W_GLCM_HH_IM2 and W_his_HL_mean (Supplementary Table S2). Its accuracy in predicting overall survival in the ALK-positive set was 0.649 (95%CI=0.640-0.658). Nonetheless, impaired performance was observed in the targeted therapy group (C-index=0.573, 95%CI=0.556-0.589), while significantly improved performance was observed in the non-targeted therapy group (C-index=0.832, 95%CI=0.832-0.852). Stratification analysis also showed that the general signature could identify high- and low-risk patients only in the non-targeted therapy group (log-rank p-value=0.00028).
Conclusions: This preliminary study suggests that the applicability of a general signature to ALK-positive patients is limited. The general radiomic signature seems applicable only to ALK-positive patients who received non-targeted therapy, which indicates that developing dedicated radiomic signatures for patients treated with TKIs might be necessary.
Abbreviations: TCIA, The Cancer Imaging Archive; ALK, anaplastic lymphoma kinase; NSCLC, non-small cell lung cancer; EML4-ALK fusion, echinoderm microtubule-associated protein-like 4-anaplastic lymphoma kinase fusion; C-index, concordance index; CI, confidence interval; ICC, intra-class correlation coefficient; OS, overall survival; LASSO, least absolute shrinkage and selection operator; EGFR, epidermal growth factor receptor; TKI, tyrosine kinase inhibitor.
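A LASSO-penalized Cox model with C-index evaluation, the core of the signature-building step, can be sketched with scikit-survival; the data below are synthetic stand-ins whose dimensions merely echo the study (132 candidate features, 254 training cases), and l1_ratio=1.0 selects the pure LASSO penalty:

```python
import numpy as np
from sksurv.linear_model import CoxnetSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(2)
X = rng.normal(size=(254, 132))            # stable radiomic features (toy)
time = rng.exponential(24, 254)            # survival times in months (toy)
event = rng.integers(0, 2, 254).astype(bool)
y = Surv.from_arrays(event=event, time=time)

# l1_ratio=1.0 gives a pure LASSO penalty on the Cox partial likelihood,
# so uninformative features are driven to exactly zero.
model = CoxnetSurvivalAnalysis(l1_ratio=1.0, alpha_min_ratio=0.01).fit(X, y)
risk = model.predict(X)
cidx = concordance_index_censored(event, time, risk)[0]
print(f"C-index = {cidx:.3f}")
```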
Geometric and Dosimetric Evaluation of a Commercially Available Auto-segmentation Tool for Gross Tumour Volume Delineation in Locally Advanced Non-small Cell Lung Cancer: a Feasibility Study
Barrett, S.
Simpkin, A.J.
Walls, G.M.
Leech, M.
Marignol, L.
2020Journal Article, cited 0 times
4D-Lung
AIMS: To quantify the reliability of a commercially available auto-segmentation tool in locally advanced non-small cell lung cancer using serial four-dimensional computed tomography (4DCT) scans during conventionally fractionated radiotherapy.
MATERIALS AND METHODS: Eight patients with serial 4DCT scans (n = 44) acquired over the course of radiotherapy were assessed. Each 4DCT had a physician-defined primary tumour manual contour (MC). An auto-contour (AC) and a user-adjusted auto-contour (UA-AC) were created for each scan. Geometric agreement of the AC and the UA-AC with the MC was assessed using the Dice similarity coefficient (DSC), the centre of mass (COM) shift from the MC, and the structure volume difference from the MC. Bland-Altman analysis was carried out to assess agreement between contouring methods. Dosimetric reliability was assessed by comparison of planning target volume dose coverage on the MC and the UA-AC. Time trends of the geometric accuracy measures from the initial planning scan through to the final scan of each patient were evaluated using a Wilcoxon signed-rank test to assess the reliability of the UA-AC over the duration of radiotherapy.
RESULTS: User adjustment significantly improved all geometric comparison metrics over the AC alone. Improved agreement was observed in smaller tumours not abutting normal soft tissue, and median values for geometric comparisons to the MC for DSC, tumour volume difference, and COM offset were 0.80 (range 0.49-0.89), 0.8 cm3 (range 0.0-5.9 cm3), and 0.16 cm (range 0.09-0.69 cm), respectively. There were no significant differences in dose metrics measured from the MC and the UA-AC after Bonferroni correction. Variation in geometric agreement between the MC and the UA-AC was observed over the course of radiotherapy, with both the DSC (P = 0.035) and the COM shift from the MC (ns) worsening. The median tumour volume difference from the MC improved at the later time point.
CONCLUSIONS: These findings suggest that the UA-AC can produce geometrically and dosimetrically acceptable contours for appropriately selected patients with non-small cell lung cancer. Larger studies are required to confirm the findings.
Reading the Mind of a Machine: Hopes and Hypes of Artificial Intelligence for Clinical Oncology Imaging
Green, A.
Aznar, M.C.
Muirhead, R.
Osorio, E.M. Vasquez
2021Journal Article, cited 0 times
CT Images in COVID-19
Correlation coefficient based supervised locally linear embedding for pulmonary nodule recognition
Wu, Panpan
Xia, Kewen
Yu, Hengyong
Computer Methods and Programs in Biomedicine2016Journal Article, cited 5 times
Website
Algorithm Development
Classification
LIDC-IDRI
Computer Aided Detection (CADe)
Machine Learning
BACKGROUND AND OBJECTIVE: Dimensionality reduction techniques are developed to suppress the negative effects of the high-dimensional feature space of lung CT images on classification performance in computer aided detection (CAD) systems for pulmonary nodule detection. METHODS: An improved supervised locally linear embedding (SLLE) algorithm is proposed based on the concept of the correlation coefficient. The Spearman's rank correlation coefficient is introduced to adjust the distance metric in the SLLE algorithm to ensure that more suitable neighborhood points are identified, and thus to enhance the discriminating power of the embedded data. The proposed Spearman's rank correlation coefficient based SLLE (SC²SLLE) is implemented and validated in our pilot CAD system using a clinical dataset collected from the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). In particular, a representative CAD system for solitary pulmonary nodule detection is designed and implemented. After a sequence of medical image processing steps, 64 nodules and 140 non-nodules are extracted, and 34 representative features are calculated. SC²SLLE, as well as the SLLE and LLE algorithms, are applied to reduce the dimensionality. Several quantitative measurements are also used to evaluate and compare the performances. RESULTS: Using a 5-fold cross-validation methodology, the proposed algorithm achieves 87.65% accuracy, 79.23% sensitivity, 91.43% specificity, and an 8.57% false positive rate, on average. Experimental results indicate that the proposed algorithm outperforms the original locally linear embedding and SLLE coupled with a support vector machine (SVM) classifier. CONCLUSIONS: Based on the preliminary results from a limited number of nodules in our dataset, this study demonstrates great potential for improving the performance of a CAD system for nodule detection using the proposed SC²SLLE.
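The core idea, adjusting the neighborhood distance metric with Spearman's rank correlation before locally linear embedding, can be sketched as follows; the specific adjustment d'_ij = d_ij * (1 - rho_ij) is an illustrative choice, not necessarily the formula used in SC²SLLE:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def spearman_adjusted_neighbors(X, k):
    """Pick k neighbors per sample from a correlation-weighted distance.

    Shrinking distances between rank-correlated samples biases neighbor
    selection toward samples with similar feature profiles (hypothetical
    adjustment, for illustration only).
    """
    d = squareform(pdist(X))           # pairwise Euclidean distances
    rho, _ = spearmanr(X, axis=1)      # sample-by-sample rank correlation
    adj = d * (1.0 - rho)              # rho in [-1, 1] keeps adj nonnegative
    np.fill_diagonal(adj, np.inf)      # a sample is never its own neighbor
    return np.argsort(adj, axis=1)[:, :k]

X = np.random.default_rng(3).normal(size=(204, 34))  # 34 features, as in the study
print(spearman_adjusted_neighbors(X, k=10).shape)    # (204, 10)
```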
Computer-aided grading of gliomas based on local and global MRI features
Hsieh, Kevin Li-Chun
Lo, Chung-Ming
Hsiao, Chih-Jou
Computer Methods and Programs in Biomedicine2016Journal Article, cited 13 times
Website
TCGA-GBM
TCGA-LGG
Radiomics
BACKGROUND AND OBJECTIVES: A computer-aided diagnosis (CAD) system based on quantitative magnetic resonance imaging (MRI) features was developed to evaluate the malignancy of diffuse gliomas, which are central nervous system tumors. METHODS: The image database for the CAD performance evaluation comprised 34 glioblastomas and 73 diffuse lower-grade gliomas. In each case, tissues enclosed in a delineated tumor area were analyzed according to their gray-scale intensities on MRI scans. Four histogram moment features describing the global gray-scale distributions of glioma tissues and 14 textural features interpreting local correlations between adjacent pixel values were used. With a logistic regression model, each individual feature set and the combination of both feature sets were used to establish malignancy prediction models. RESULTS: The CAD system using the global, local, and combined image feature sets achieved accuracies of 76%, 83%, and 88%, respectively. Compared to the global features, the combined features had significantly better accuracy (p = 0.0213). With respect to the pathology results, the CAD classification showed substantial agreement (kappa = 0.698, p < 0.001). CONCLUSIONS: Numerous proposed image features were significant in distinguishing glioblastomas from lower-grade gliomas. Combining them into a malignancy prediction model is promising for providing diagnostic suggestions in clinical use.
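The two feature families used here, global histogram moments and GLCM texture statistics, are straightforward to extract with SciPy and scikit-image; a sketch assuming the skimage >= 0.19 naming (graycomatrix/graycoprops) and an 8-bit ROI (the study's exact 14 texture features are not reproduced):

```python
import numpy as np
from scipy.stats import kurtosis, skew
from skimage.feature import graycomatrix, graycoprops

def glioma_features(roi: np.ndarray):
    """Global histogram moments plus a few GLCM texture features
    for a gray-scale tumor ROI quantized to 8 bits (uint8)."""
    flat = roi.ravel()
    hist_moments = [flat.mean(), flat.std(), skew(flat), kurtosis(flat)]

    # Co-occurrence of gray levels at distance 1, two directions.
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [float(graycoprops(glcm, p).mean())
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    return hist_moments + texture

roi = np.random.default_rng(4).integers(0, 256, (64, 64), dtype=np.uint8)
print(glioma_features(roi))
```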
Learning MRI-based classification models for MGMT methylation status prediction in glioblastoma
Kanas, Vasileios G
Zacharaki, Evangelia I
Thomas, Ginu A
Zinn, Pascal O
Megalooikonomou, Vasileios
Colen, Rivka R
Computer Methods and Programs in Biomedicine2017Journal Article, cited 16 times
Website
TCGA-GBM
Radiogenomics
BRAIN
Background and objective: The O6-methylguanine-DNA-methyltransferase (MGMT) promoter methylation has been shown to be associated with improved outcomes in patients with glioblastoma (GBM) and may be a predictive marker of sensitivity to chemotherapy. However, determination of the MGMT promoter methylation status requires tissue obtained via surgical resection or biopsy. The aim of this study was to assess the ability of quantitative and qualitative imaging variables to predict MGMT methylation status noninvasively.
Methods: A retrospective analysis of MR images from GBM patients was conducted. Multivariate prediction models were obtained by machine-learning methods and tested on data from The Cancer Genome Atlas (TCGA) database.
Results: The status of MGMT promoter methylation was predicted with an accuracy of up to 73.6%. Experimental analysis showed that the edema/necrosis volume ratio, tumor/necrosis volume ratio, edema volume, and tumor location and enhancement characteristics were the most significant variables with respect to the status of MGMT promoter methylation in GBM.
Conclusions: The obtained results provide further evidence of an association between standard preoperative MRI variables and MGMT methylation status in GBM.
A Novel End-to-End Classifier Using Domain Transferred Deep Convolutional Neural Networks for Biomedical Images
Pang, Shuchao
Yu, Zhezhou
Orgun, Mehmet A
Computer Methods and Programs in Biomedicine2017Journal Article, cited 21 times
Website
Radiomics
CT COLONOGRAPHY
Convolutional Neural Network (CNN)
Transfer learning
BACKGROUND AND OBJECTIVES: Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous diseases identified from those images. Traditional image classification methods, which combine hand-crafted feature descriptors with various classifiers, cannot effectively improve the accuracy rate or meet the high requirements of biomedical image classification. The same holds true for artificial neural network models trained directly on limited biomedical images, or used as black boxes to extract deep features learned on a distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. METHODS: We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture trained with supervision on the raw pixels of the original biomedical images. In our model, we need neither a manually designed feature space, an effective feature-vector classifier, nor segmentation of specific detection objects and image patches, which are the main technological difficulties in traditional image classification methods. Moreover, we need not be concerned with the availability of large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long training times for a perfect deep model, which are the main obstacles to training deep neural networks for biomedical image classification observed in recent works. RESULTS: With a simple data augmentation method and fast convergence, our algorithm achieves the best accuracy rate and outstanding classification ability for biomedical images. We evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. CONCLUSIONS: We propose a robust automated end-to-end classifier for biomedical images, based on a domain-transferred deep convolutional neural network model, that shows highly reliable and accurate performance, confirmed on several public biomedical image datasets.
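Domain-transferred CNN classification of this kind is commonly built by swapping the head of an ImageNet-pretrained backbone; a PyTorch sketch (torchvision >= 0.13 weights API; the paper's architecture and training regime are not reproduced, and freezing the backbone is just one variant, full fine-tuning simply skips the freeze):

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone (the domain transfer) ...
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# ... optionally freeze the transferred feature extractor ...
for p in backbone.parameters():
    p.requires_grad = False

# ... and replace the head for an end-to-end biomedical classifier.
num_classes = 2  # placeholder, e.g. lesion vs. non-lesion
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)   # a toy batch of images
loss = criterion(backbone(x), torch.tensor([0, 1, 0, 1]))
loss.backward()
optimizer.step()
```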
NiftyNet: a deep-learning platform for medical imaging
Gibson, Eli
Li, Wenqi
Sudre, Carole
Fidon, Lucas
Shakir, Dzhoshkun I.
Wang, Guotai
Eaton-Rosen, Zach
Gray, Robert
Doel, Tom
Hu, Yipeng
Whyntie, Tom
Nachev, Parashkev
Modat, Marc
Barratt, Dean C.
Ourselin, Sébastien
Cardoso, M. Jorge
Vercauteren, Tom
Computer Methods and Programs in Biomedicine2018Journal Article, cited 678 times
Website
Pancreas-CT
BACKGROUND AND OBJECTIVES: Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon.
METHODS: The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications, including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default.
RESULTS: We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses.
CONCLUSIONS: The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications.
A Novel Biomedical Image Indexing and Retrieval System via Deep Preference Learning
Pang, Shuchao
Orgun, MA
Yu, Zhezhou
Computer Methods and Programs in Biomedicine2018Journal Article, cited 4 times
Website
CT-COLONOGRAPHY
Deep learning
Convolutional Neural Network (CNN)
BACKGROUND AND OBJECTIVES: Traditional biomedical image retrieval methods, as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images, either consider only pixel-level and low-level features to describe an image or use deep features, still leaving much room for improvement in both accuracy and efficiency. In this work, we propose a new approach that exploits deep learning technology to extract high-level and compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to improved performance in the indexing and retrieval of biomedical images. METHODS: We exploit popular multi-layered deep neural networks, namely stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN), to represent the discriminative features of biomedical images by transferring the feature representations and parameters of pre-trained deep neural networks from another domain. Moreover, in order to index all the images and find similar reference images, we also introduce preference learning technology to train a preference model for the query image, which can output a similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology into biomedical image retrieval for the first time. RESULTS: We evaluate the performance of two powerful algorithms based on our proposed system and compare them with popular biomedical image indexing approaches and existing regular image retrieval methods in detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, experimental results demonstrate that our proposed algorithms outperform the state-of-the-art techniques in indexing biomedical images. CONCLUSIONS: We propose a novel automated indexing system based on deep preference learning to characterize biomedical images for developing computer aided diagnosis (CAD) systems in healthcare. Our proposed system shows outstanding indexing ability and high efficiency for biomedical image retrieval applications, and it can be used to collect and annotate high-resolution images in a biomedical database for further research and applications.
Spinal cord detection in planning CT for radiotherapy through adaptive template matching, IMSLIC and convolutional neural networks
Diniz, J. O. B.
Diniz, P. H. B.
Valente, T. L. A.
Silva, A. C.
Paiva, A. C.
Comput Methods Programs Biomed2019Journal Article, cited 23 times
Website
LCTSC
Computer Aided Detection (CADe)
Convolutional Neural Network (CNN)
Planning CT
Spinal cord
Radiation Therapy
BACKGROUND AND OBJECTIVE: The spinal cord is a very important organ that must be protected in radiotherapy (RT) treatments, where it is considered an organ at risk (OAR). Excess radiation dose to the spinal cord can cause irreversible disease in patients undergoing radiotherapy. For RT treatment planning, computed tomography (CT) scans are commonly used to delimit the OARs and to analyze the dose impact on these organs. Delimiting these OARs takes a lot of time from medical specialists and involves a large team of professionals. Moreover, performed slice by slice, this task becomes exhaustive and consequently prone to errors, especially in organs such as the spinal cord, which extends through several slices of the CT and requires precise segmentation. Thus, we propose in this work a computational methodology capable of detecting the spinal cord in planning CT images. METHODS: The techniques highlighted in this methodology are adaptive template matching for initial segmentation, intrinsic manifold simple linear iterative clustering (IMSLIC) for candidate segmentation, and convolutional neural networks (CNN) for candidate classification, organized in four steps: (1) image acquisition, (2) initial segmentation, (3) candidate segmentation, and (4) candidate classification. RESULTS: The methodology was applied to 36 planning CT images provided by The Cancer Imaging Archive and achieved an accuracy of 92.55%, a specificity of 92.87%, and a sensitivity of 89.23%, with 0.065 false positives per image, without any false-positive reduction technique, in the detection of the spinal cord. CONCLUSIONS: The results demonstrate the feasibility of analyzing planning CT images using IMSLIC and convolutional neural network techniques to successfully detect spinal cord regions.
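The initial-segmentation step rests on template matching; a minimal sketch with scikit-image's normalized cross-correlation, where the slice and template are synthetic stand-ins (the paper's adaptive template update is omitted):

```python
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(5)
ct_slice = rng.normal(size=(512, 512))
# Hypothetical spinal-cord template (e.g., averaged from training slices);
# here just a patch cut from the toy slice so the match is well defined.
template = ct_slice[240:272, 240:272].copy()

# Normalized cross-correlation; the response peak locates the best match.
corr = match_template(ct_slice, template)
row, col = np.unravel_index(np.argmax(corr), corr.shape)
print(f"candidate region at ({row}, {col}), NCC={corr.max():.2f}")
```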
An ensemble learning approach for brain cancer detection exploiting radiomic features
Brunese, Luca
Mercaldo, Francesco
Reginelli, Alfonso
Santone, Antonella
Comput Methods Programs Biomed2019Journal Article, cited 1 times
Website
REMBRANDT
BraTS
Classification
Radiopaedia
Magnetic Resonance Imaging (MRI)
BACKGROUND AND OBJECTIVE: Brain cancer is one of the most aggressive tumours: 70% of patients diagnosed with this malignant cancer will not survive. Early detection of brain tumours can be fundamental to increasing survival rates. Brain cancers are classified into four grades (i.e., I, II, III and IV) according to how normal or abnormal the brain cells look. This work aims to recognize the different brain cancer grades by analysing brain magnetic resonance images. METHODS: A method to identify the components of an ensemble learner is proposed. The ensemble learner is focused on discriminating between different brain cancer grades using non-invasive radiomic features belonging to five groups: First Order, Shape, Gray Level Co-occurrence Matrix, Gray Level Run Length Matrix and Gray Level Size Zone Matrix. We evaluate the effectiveness of the features through hypothesis testing, decision boundaries, performance analysis and calibration plots, and thus select the best candidate classifiers for the ensemble learner. RESULTS: We evaluate the proposed method with 111,205 brain magnetic resonance images belonging to two datasets freely available for research purposes. The results are encouraging: we obtain an accuracy of 99% for detection of benign grade I and malignant grade II, III and IV brain cancers. CONCLUSION: The experimental results confirm that the ensemble learner designed with the proposed method outperforms the current state-of-the-art approaches in brain cancer grade detection from magnetic resonance images.
Integration of convolutional neural networks for pulmonary nodule malignancy assessment in a lung cancer classification pipeline
Bonavita, I.
Rafael-Palou, X.
Ceresa, M.
Piella, G.
Ribas, V.
Gonzalez Ballester, M. A.
Comput Methods Programs Biomed2020Journal Article, cited 3 times
Website
Machine Learning
LIDC-IDRI
LUNG
Convolutional Neural Network (CNN)
BACKGROUND AND OBJECTIVE: The early identification of malignant pulmonary nodules is critical for a better lung cancer prognosis and less invasive chemo- or radiotherapies. Nodule malignancy assessment by radiologists is extremely useful for planning a preventive intervention but is, unfortunately, a complex, time-consuming and error-prone task. This explains the lack of large datasets containing radiologists' malignancy characterizations of nodules. METHODS: In this article, we propose to assess nodule malignancy through 3D convolutional neural networks and to integrate it into an automated end-to-end existing pipeline for lung cancer detection. For training and testing purposes we used independent subsets of the LIDC dataset. RESULTS: Adding the probabilities of nodule malignancy to a baseline lung cancer pipeline improved its F1-weighted score by 14.7%, whereas integrating the malignancy model itself using transfer learning outperformed the baseline prediction by 11.8% in F1-weighted score. CONCLUSIONS: Despite the limited size of the available lung cancer datasets, integrating predictive models of nodule malignancy improves the prediction of lung cancer.
Segmentation of prostate zones using probabilistic atlas-based method with diffusion-weighted MR images
Singh, D.
Kumar, V.
Das, C. J.
Singh, A.
Mehndiratta, A.
Comput Methods Programs Biomed2020Journal Article, cited 10 times
Website
QIN-PROSTATE-Repeatability
Algorithms
Diffusion Magnetic Resonance Imaging
Humans
Image Processing, Computer-Assisted
*Magnetic Resonance Imaging
Male
*Prostatic Neoplasms/diagnostic imaging
3D registration
Diffusion-weighted imaging
Partial volume correction
Probabilistic atlas
Prostate zonal segmentation
BACKGROUND AND OBJECTIVE: Accurate segmentation of the prostate and its zones constitutes an essential preprocessing step for computer-aided diagnosis and detection systems for prostate cancer (PCa) using diffusion-weighted imaging (DWI). However, the low signal-to-noise ratio and the high variability of prostate anatomic structures make segmentation using DWI challenging. We propose a semi-automated framework that segments the prostate gland and its zones simultaneously using DWI. METHODS: In this paper, the Chan-Vese active contour model along with a morphological opening operation was used for segmentation of the prostate gland. Segmentation of the prostate into the peripheral zone (PZ) and transition zone (TZ) was then carried out using an in-house developed probabilistic atlas with a partial volume (PV) correction algorithm. The study cohort included the MRI dataset of 18 patients (n = 18); the methodology was also independently evaluated using 15 MRI scans (n = 15) of the QIN-PROSTATE-Repeatability dataset. The atlas for the zones of the prostate gland was constructed using the datasets of twelve patients from our cohort. Three-fold cross-validation was performed with 10 repetitions, for a total of 30 instances of training and testing on our dataset, followed by independent testing on the QIN-PROSTATE-Repeatability dataset. The Dice similarity coefficient (DSC), Jaccard coefficient (JC), and accuracy were used for quantitative assessment of the segmentation results with respect to boundaries delineated manually by an expert radiologist. A paired t-test was performed to evaluate the improvement in zonal segmentation performance with the proposed PV correction algorithm. RESULTS: For our dataset, the proposed segmentation methodology produced improved segmentation with a DSC of 90.76 +/- 3.68%, JC of 83.00 +/- 5.78%, and accuracy of 99.42 +/- 0.36% for the prostate gland; a DSC of 77.73 +/- 2.76%, JC of 64.46 +/- 3.43%, and accuracy of 82.47 +/- 2.22% for the PZ; and a DSC of 86.05 +/- 1.50%, JC of 75.80 +/- 2.10%, and accuracy of 91.67 +/- 1.56% for the TZ. The segmentation performance for the QIN-PROSTATE-Repeatability dataset was a DSC of 85.50 +/- 4.43%, JC of 75.00 +/- 6.34%, and accuracy of 81.52 +/- 5.55% for the prostate gland; a DSC of 74.40 +/- 1.79%, JC of 59.53 +/- 8.70%, and accuracy of 80.91 +/- 5.16% for the PZ; and a DSC of 85.80 +/- 5.55%, JC of 74.87 +/- 7.90%, and accuracy of 90.59 +/- 3.74% for the TZ. With the implementation of the PV correction algorithm, statistically significant (p<0.05) improvements were observed in all metrics (DSC, JC, and accuracy) for both prostate zones, PZ and TZ. CONCLUSIONS: The proposed segmentation methodology is stable, accurate, and easy to implement for segmentation of the prostate gland and its zones (PZ and TZ). The atlas-based segmentation framework with the PV correction algorithm can be incorporated into a computer-aided diagnostic system for PCa localization and treatment planning.
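The first two stages, a Chan-Vese active contour followed by morphological opening, are available directly in scikit-image; a sketch on a toy image standing in for a DWI slice (parameter values are illustrative, not the study's):

```python
import numpy as np
from skimage import morphology
from skimage.segmentation import chan_vese

rng = np.random.default_rng(6)
# Toy stand-in for a DWI slice: a bright blob on a noisy background.
yy, xx = np.mgrid[:128, :128]
dwi = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 20.0 ** 2))
dwi += rng.normal(0, 0.1, dwi.shape)

# Region-based Chan-Vese active contour (level-set formulation).
# Note: which of the two regions comes back as True depends on the
# initialization, so the label polarity may need checking in practice.
seg = chan_vese(dwi, mu=0.25, lambda1=1, lambda2=1)

# Morphological opening removes small spurious islands, as in the pipeline.
seg = morphology.binary_opening(seg, morphology.disk(3))
print("segmented pixels:", seg.sum())
```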
Style transfer strategy for developing a generalizable deep learning application in digital pathology
Shin, Seo Jeong
You, Seng Chan
Jeon, Hokyun
Jung, Ji Won
An, Min Ho
Park, Rae Woong
Roh, Jin
Computer Methods and Programs in Biomedicine2021Journal Article, cited 1 times
Website
TCGA-OV
deep learning
Generative adversarial network (GAN)
Anisotropic 3D Multi-Stream CNN for Accurate Prostate Segmentation from Multi-Planar MRI
Meyer, Anneke
Chlebus, Grzegorz
Rak, Marko
Schindele, Daniel
Schostak, Martin
van Ginneken, Bram
Schenk, Andrea
Meine, Hans
Hahn, Horst K
Schreiber, Andreas
Hansen, Christian
Computer Methods and Programs in Biomedicine2020Journal Article, cited 0 times
PROSTATEx
PROSTATEx-Seg-HiRes
BACKGROUND AND OBJECTIVE: Accurate and reliable segmentation of the prostate gland in MR images can support the clinical assessment of prostate cancer, as well as the planning and monitoring of focal and loco-regional therapeutic interventions. Despite the availability of multi-planar MR scans due to standardized protocols, the majority of segmentation approaches presented in the literature consider the axial scans only. In this work, we investigate whether a neural network processing anisotropic multi-planar images could work in the context of a semantic segmentation task, and if so, how this additional information would improve the segmentation quality.
METHODS: We propose an anisotropic 3D multi-stream CNN architecture, which processes additional scan directions to produce a high-resolution isotropic prostate segmentation. We investigate two variants of our architecture, which work on two (dual-plane) and three (triple-plane) image orientations, respectively. The influence of additional information used by these models is evaluated by comparing them with a single-plane baseline processing only axial images. To realize a fair comparison, we employ a hyperparameter optimization strategy to select optimal configurations for the individual approaches.
RESULTS: Training and evaluation on two datasets spanning multiple sites show statistically significant improvement over plain axial segmentation (p<0.05 on the Dice similarity coefficient). The improvement can be observed especially at the base (0.898 single-plane vs. 0.906 triple-plane) and the apex (0.888 single-plane vs. 0.901 dual-plane).
CONCLUSION: This study indicates that models employing two or three scan directions are superior to plain axial segmentation. The knowledge of precise boundaries of the prostate is crucial for the conservation of risk structures. Thus, the proposed models have the potential to improve the outcome of prostate cancer diagnosis and therapies.
Reproducible and Interpretable Spiculation Quantification for Lung Cancer Screening
Choi, Wookjin
Nadeem, Saad
Alam, Sadegh R.
Deasy, Joseph O.
Tannenbaum, Allen
Lu, Wei
Computer Methods and Programs in Biomedicine2020Journal Article, cited 0 times
Website
Spiculations, spikes on the surface of pulmonary nodules, are important predictors of lung cancer malignancy. In this study, we proposed an interpretable and parameter-free technique to quantify spiculation using an area distortion metric obtained by conformal (angle-preserving) spherical parameterization. We exploit the insight that for an angle-preserving spherical mapping of a given nodule, the corresponding negative area distortion precisely characterizes the spiculations on that nodule. We introduced novel spiculation scores based on the area distortion metric and spiculation measures. We also semi-automatically segmented the lung nodule (for reproducibility) as well as vessel and wall attachment, to differentiate real spiculations from lobulation and attachment. A simple pathological malignancy prediction model is also introduced. We used pathologist (strong-label) and radiologist (weak-label) ratings from the publicly available LIDC-IDRI dataset to train and test radiomics models containing this feature, and then externally validated the models. We achieved AUC = 0.80 and 0.76, respectively, with models trained on the 811 weakly-labeled LIDC datasets and tested on the 72 strongly-labeled LIDC and 73 LUNGx datasets; the previous best model for LUNGx had AUC = 0.68. The number-of-spiculations feature was found to be highly correlated with the radiologists' spiculation score (Spearman's rank correlation). We developed a reproducible, interpretable, and parameter-free technique for quantifying spiculations on nodules. The spiculation quantification measures were then applied to the radiomics framework for pathological malignancy prediction, with reproducible semi-automatic segmentation of the nodule. Using our interpretable features (size, attachment, spiculation, lobulation), we were able to achieve higher performance than previous models. In the future, we will exhaustively test our model for lung cancer screening in the clinic.
A hybrid approach based on multiple Eigenvalues selection (MES) for the automated grading of a brain tumor using MRI
Al-Saffar, Z. A.
Yildirim, T.
Comput Methods Programs Biomed2021Journal Article, cited 5 times
Website
REMBRANDT
Algorithms
Radiomic features
BRAIN
Magnetic Resonance Imaging (MRI)
Artificial neural network (ANN)
Classification
Segmentation
Clustering
Image processing
Machine learning
Mutual information (MI)
Singular value decomposition (SVD)
Support vector machine (SVM)
BACKGROUND AND OBJECTIVE: Manual segmentation, identification, and classification of brain tumors using magnetic resonance (MR) images are essential for making a correct diagnosis. They are, however, exhausting and time-consuming tasks performed by clinical experts, and the accuracy of the results depends on their judgment. Computer-aided technology has therefore been developed to computerize these procedures. METHODS: In order to improve the outcomes and decrease the complications involved in analysing medical images, this study investigated several methods, including Local Difference in Intensity - Means (LDI-Means) based brain tumor segmentation, mutual information (MI) based feature selection, singular value decomposition (SVD) based dimensionality reduction, and both support vector machine (SVM) and multi-layer perceptron (MLP) based brain tumor classification. Also, this study presents a new method named Multiple Eigenvalues Selection (MES) to choose the most meaningful features as inputs to the classifiers. This combination of unsupervised and supervised techniques forms an effective system for the grading of brain glioma. RESULTS: The experimental results of the proposed method showed excellent performance in terms of accuracy, recall, specificity, precision, and error rate: 91.02%, 86.52%, 94.26%, 87.07%, and 0.0897, respectively. CONCLUSION: The obtained results demonstrate the significance and effectiveness of the proposed method in comparison to other state-of-the-art techniques, and it can contribute to the early diagnosis of brain glioma.
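The supervised half of the pipeline (MI-based feature selection, dimensionality reduction, SVM grading) composes naturally in scikit-learn; a sketch with synthetic data, where keeping the top SVD components stands in for the paper's MES eigenvalue-selection step:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 120))   # radiomic features per MR study (toy)
y = rng.integers(0, 2, 200)       # glioma grade labels (toy)

# MI feature selection -> SVD dimensionality reduction -> SVM grading.
clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=40),
    TruncatedSVD(n_components=10, random_state=0),
    SVC(kernel="rbf"),
).fit(X, y)
print("training accuracy:", clf.score(X, y))
```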
Does non-COVID-19 lung lesion help? Investigating transferability in COVID-19 CT image segmentation
Wang, Yixin
Zhang, Yao
Liu, Yang
Tian, Jiang
Zhong, Cheng
Shi, Zhongchao
Zhang, Yang
He, Zhiqiang
Computer Methods and Programs in Biomedicine2021Journal Article, cited 0 times
NSCLC Radiogenomics-Stanford
BACKGROUND AND OBJECTIVE: Coronavirus disease 2019 (COVID-19) is a highly contagious disease that has spread all around the world. Deep learning has been adopted as an effective technique to aid COVID-19 detection and segmentation from computed tomography (CT) images. The major challenge lies in the inadequacy of public COVID-19 datasets. Recently, transfer learning has become a widely used technique that leverages the knowledge gained while solving one problem and applies it to a different but related problem. However, it remains unclear whether various non-COVID-19 lung lesions could contribute to segmenting COVID-19 infection areas, or how best to conduct this transfer procedure. This paper provides a way to understand the transferability of non-COVID-19 lung lesions and a better strategy to train a robust deep learning model for COVID-19 infection segmentation.
METHODS: Based on a publicly available COVID-19 CT dataset and three public non-COVID-19 datasets, we evaluate four transfer learning methods using a 3D U-Net as the standard encoder-decoder method. i) We introduce a multi-task learning method to obtain a multi-lesion pre-trained model for COVID-19 infection. ii) We propose and compare four transfer learning strategies with various performance gains and training time costs. Our proposed Hybrid-encoder Learning strategy introduces a Dedicated-encoder and an Adapted-encoder to extract COVID-19 infection features and general lung lesion features, respectively. An attention-based Selective Fusion unit is designed for dynamic feature selection and aggregation.
RESULTS: Experiments show that, when trained with limited data, the proposed Hybrid-encoder strategy based on the multi-lesion pre-trained model achieves a mean DSC, NSD, Sensitivity, F1-score, Accuracy and MCC of 0.704, 0.735, 0.682, 0.707, 0.994 and 0.716, respectively, with better generalization and lower over-fitting risk when segmenting COVID-19 infection.
CONCLUSIONS: The results reveal the benefits of transferring knowledge from non-COVID-19 lung lesions: learning from multiple lung lesion datasets can extract more general features, leading to accurate and robust pre-trained models. We further show the capability of the encoder to learn feature representations of lung lesions, which improves segmentation accuracy and facilitates training convergence. In addition, our proposed Hybrid-encoder learning method incorporates transferred lung lesion features from non-COVID-19 datasets effectively and achieves significant improvement. These findings provide new insights into transfer learning for COVID-19 CT image segmentation, which can also be generalized to other medical tasks.
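The transfer procedure studied here, reusing an encoder pre-trained on non-COVID-19 lesions and fine-tuning it for COVID-19 infection segmentation, can be sketched generically in PyTorch. The model class, checkpoint name, and learning rates below are hypothetical placeholders, not the paper's Hybrid-encoder architecture:

```python
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    """Hypothetical stand-in for a 3D U-Net: an encoder trunk plus a
    decoder head, purely for illustrating weight transfer."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv3d(16, 1, 1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyUNet3D()

# Transfer: load only encoder weights from a (hypothetical) multi-lesion
# pre-trained checkpoint; the decoder stays freshly initialized.
# state = torch.load("multi_lesion_pretrained.pt", map_location="cpu")
# encoder_state = {k: v for k, v in state.items() if k.startswith("encoder.")}
# model.load_state_dict(encoder_state, strict=False)

# Fine-tune with a lower learning rate on the transferred encoder.
optimizer = torch.optim.Adam([
    {"params": model.encoder.parameters(), "lr": 1e-5},
    {"params": model.decoder.parameters(), "lr": 1e-4},
])
```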
CLCU-Net: Cross-level connected U-shaped network with selective feature aggregation attention module for brain tumor segmentation
Wang, Y. L.
Zhao, Z. J.
Hu, S. Y.
Chang, F. L.
Comput Methods Programs Biomed2021Journal Article, cited 0 times
Website
BRAIN
Segmentation
Deep Learning
Radiomics
BraTS-TCGA-LGG
BraTS-TCGA-GBM
Multi-scale feature connection
Segmented attention module
Selective feature aggregation
BACKGROUND AND OBJECTIVE: Brain tumors are among the most deadly cancers worldwide. Thanks to the development of deep convolutional neural networks, many brain tumor segmentation methods now help clinicians diagnose and operate. However, most of these methods use multi-scale features insufficiently, reducing their ability to extract brain tumors' features and details. To assist clinicians in the accurate automatic segmentation of brain tumors, we built a new deep learning network that makes full use of multi-scale features to improve the performance of brain tumor segmentation. METHODS: We propose a novel cross-level connected U-shaped network (CLCU-Net) that connects features at different scales to fully utilize multi-scale features. Besides, we propose a generic attention module (Segmented Attention Module, SAM) on the connections between different scale features for selectively aggregating features, which provides a more efficient connection of features at different scales. Moreover, we employ deep supervision and spatial pyramid pooling (SPP) to further improve the method's performance. RESULTS: We evaluated our method on the BRATS 2018 dataset using five indices and achieved excellent performance, with a Dice score of 88.5%, a precision of 91.98%, a recall of 85.62%, 36.34 M parameters, and an inference time of 8.89 ms for the whole tumor, outperforming six state-of-the-art methods. Moreover, an analysis of the heatmaps of different attention modules proved that the attention module proposed in this study is more suitable for segmentation tasks than other popular attention modules. CONCLUSION: Both the qualitative and quantitative experimental results indicate that our cross-level connected U-shaped network with a selective feature aggregation attention module can achieve accurate brain tumor segmentation and could be instrumental in clinical practice.
Automatic segmentation of organs at risk and tumors in CT images of lung cancer from partially labelled datasets with a semi-supervised conditional nnU-Net
Zhang, G.
Yang, Z.
Huo, B.
Chai, S.
Jiang, S.
Comput Methods Programs Biomed2021Journal Article, cited 0 times
Website
Lung CT Segmentation Challenge 2017
LCTSC
Classification
U-Net
Semi-automatic segmentation
Computed Tomography (CT)
Automatic segmentation
Conditioning strategy
Deep learning
Partially labelled dataset
Semi-supervised learning
BACKGROUND AND OBJECTIVE: Accurately and reliably delineating organs at risk (OARs) and tumors is the cornerstone of radiation therapy (RT) treatment planning for lung cancer. Almost all segmentation networks based on deep learning techniques rely on fully annotated data with strong supervision. However, existing public imaging datasets in the RT domain frequently include singly labelled tumors or partially labelled organs, because annotating full OARs and tumors in CT images is both rigorous and tedious. To utilize labelled data from different sources, we proposed a dual-path semi-supervised conditional nnU-Net for OAR and tumor segmentation that is trained on a union of partially labelled datasets. METHODS: The framework employs the nnU-Net as the base model and introduces a conditioning strategy by incorporating auxiliary information as an additional input layer into the decoder. The conditional nnU-Net efficiently leverages prior conditional information to classify the target class at the pixel level. Specifically, we employ the uncertainty-aware mean teacher (UA-MT) framework to assist in OAR segmentation, which can effectively leverage unlabelled data (images from a tumor-labelled dataset) by encouraging consistent predictions of the same input under different perturbations. Furthermore, we individually design different combinations of loss functions to optimize the segmentation of OARs (Dice loss and cross-entropy loss) and tumors (Dice loss and focal loss) in a dual path. RESULTS: The proposed method was evaluated on two publicly available datasets of the spinal cord, left and right lung, heart, esophagus, and lung tumor, on which satisfactory segmentation performance was achieved in terms of both the region-based Dice similarity coefficient (DSC) and the boundary-based Hausdorff distance (HD). CONCLUSIONS: The proposed semi-supervised conditional nnU-Net breaks down the barriers between non-overlapping labelled datasets and further alleviates the problems of "data hunger" and "data waste" in multi-class segmentation. The method has the potential to help radiologists with RT treatment planning in clinical practice.
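The two loss combinations named in the abstract (Dice plus cross-entropy for OARs, Dice plus focal for tumors) can be written compactly in PyTorch. This is a minimal binary-segmentation sketch under assumed tensor shapes, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss; logits: (B, 1, H, W), target: float mask in {0, 1}."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

def focal_loss(logits, target, gamma=2.0):
    """Binary focal loss: down-weights easy pixels, emphasizing hard ones."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = torch.exp(-bce)  # probability assigned to the true class
    return ((1 - p_t) ** gamma * bce).mean()

def oar_loss(logits, target):    # Dice + cross-entropy for organs at risk
    return dice_loss(logits, target) + F.binary_cross_entropy_with_logits(logits, target)

def tumor_loss(logits, target):  # Dice + focal for small, hard tumor regions
    return dice_loss(logits, target) + focal_loss(logits, target)
```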
An efficient interactive multi-label segmentation tool for 2D and 3D medical images using fully connected conditional random field
Li, R.
Chen, X.
Comput Methods Programs Biomed2022Journal Article, cited 2 times
Website
ISPY1/ACRIN 6657
Algorithm Development
*Image Processing, Computer-Assisted
*Imaging, Three-Dimensional
MATLAB
Magnetic Resonance Imaging (MRI)
Ultrasound
Segmentation
Conditional random field
OBJECTIVE: Image segmentation is a crucial and fundamental step in many medical image analysis tasks, such as tumor measurement, surgery planning, and disease diagnosis. To ensure the quality of image segmentation, most current solutions require a labor-intensive manual process of tracing the boundaries of the objects. The workload increases tremendously for three-dimensional (3D) images with multiple objects to be segmented. METHOD: In this paper, we introduce our interactive image segmentation tool, which provides efficient segmentation of multiple labels for both 2D and 3D medical images. The core segmentation method is based on a fast implementation of the fully connected conditional random field. The software also enables automatic recommendation of the next slice to be annotated in 3D, leading to higher efficiency. RESULTS: We have evaluated the tool on many 2D and 3D medical image modalities (e.g. CT, MRI, ultrasound, X-ray) and different objects of interest (abdominal organs, tumors, bones, etc.) in terms of segmentation accuracy, repeatability, and computational time. CONCLUSION: In contrast to other interactive image segmentation tools, our software produces high-quality image segmentation results without requiring parameter tuning for each application.
WVALE: Weak variational autoencoder for localisation and enhancement of COVID-19 lung infections
Zhou, Q.
Wang, S.
Zhang, X.
Zhang, Y. D.
Comput Methods Programs Biomed2022Journal Article, cited 0 times
*COVID-19/diagnostic imaging
NSCLC-Radiomics
Humans
Image Processing, Computer-Assisted/methods
Lung/diagnostic imaging
Pandemics
*Supervised Machine Learning
Anomaly localisation
Pseudo data
Segmentation
Weak supervision
BACKGROUND AND OBJECTIVE: The COVID-19 pandemic is a major global health crisis of this century. The use of neural networks with CT imaging can potentially improve clinicians' efficiency in diagnosis. Previous studies in this field have primarily focused on classifying the disease on CT images, while few studies have targeted the localisation of disease regions. Developing neural networks to automate the latter task is impeded by the limited number of CT images with pixel-level annotations available to the research community. METHODS: This paper proposes a weakly-supervised framework named "Weak Variational Autoencoder for Localisation and Enhancement" (WVALE) to address this challenge for COVID-19 CT images. This framework includes two components: anomaly localisation with a novel WVAE model and enhancement of supervised segmentation models with WVALE. RESULTS: The WVAE model has been shown to produce high-quality post-hoc attention maps with fine borders around infection regions, while weakly supervised segmentation shows results comparable to conventional supervised segmentation models. The WVALE framework can enhance the performance of a range of supervised segmentation models, including state-of-the-art models for the segmentation of COVID-19 lung infection. CONCLUSIONS: Our study provides a proof of concept for weakly supervised segmentation and an alternative approach to alleviating the lack of annotation, while its independence from classification and segmentation frameworks makes it easily integrable with existing systems.
Multi-Dimensional Cascaded Net with Uncertain Probability Reduction for Abdominal Multi-Organ Segmentation in CT Sequences
Li, C.
Mao, Y.
Guo, Y.
Li, J.
Wang, Y.
Comput Methods Programs Biomed2022Journal Article, cited 0 times
Website
Pancreas-CT
circular inference module
high-resolution multi-view 2.5D net
multi-organ segmentation
shallow-layer-enhanced 3D location net
MATLAB
ITK
BACKGROUND AND OBJECTIVE: Deep learning abdominal multi-organ segmentation provides preoperative guidance for abdominal surgery. However, due to the large volume of 3D CT sequences, existing methods cannot balance complete semantic features against high-resolution detail information, which leads to uncertain, rough, and inaccurate segmentation, especially of small and irregular organs. In this paper, we propose a two-stage algorithm named multi-dimensional cascaded net (MDCNet) to solve these problems and segment multiple organs in CT images, including the spleen, kidney, gallbladder, esophagus, liver, stomach, pancreas, and duodenum. METHODS: MDCNet combines the powerful semantic encoding ability of a 3D net with the rich high-resolution information of a 2.5D net. In stage 1, a prior-guided shallow-layer-enhanced 3D location net extracts entire semantic features from a downsampled CT volume to perform rough segmentation. Additionally, we use circular inference and a parametric Dice loss to alleviate boundary uncertainty. The inputs of stage 2 are high-resolution slices, obtained from the original image and the coarse segmentation of stage 1. Stage 2 compensates for the details lost during downsampling, resulting in smooth and accurate refined contours. The 2.5D net, operating on axial, coronal, and sagittal views, also compensates for the missing spatial information of a single view. RESULTS: The experiments on the two datasets both obtained the best performance, particularly a higher Dice on small gallbladders and irregular duodenums, reaching 0.85+/-0.12 and 0.77+/-0.07 respectively, an increase of 0.02 and 0.03 over the state-of-the-art method. CONCLUSION: Our method can extract complete semantic and high-resolution detail information from a large-volume CT image. It reduces boundary uncertainty while yielding smoother segmentation edges, indicating good prospects for clinical application.
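For readers unfamiliar with 2.5D inputs, the sketch below shows the basic idea of extracting the three orthogonal views of a CT volume that such a net consumes; the toy volume and indices are hypothetical:

```python
import numpy as np

def multiview_slices(volume, z, y, x):
    """Extract the axial, coronal, and sagittal planes through voxel
    (z, y, x); these are the three views a 2.5D network sees per location."""
    return volume[z, :, :], volume[:, y, :], volume[:, :, x]

vol = np.zeros((64, 128, 128))               # toy CT volume, axes (z, y, x)
axial, coronal, sagittal = multiview_slices(vol, 32, 64, 64)
print(axial.shape, coronal.shape, sagittal.shape)
```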
YOLO-LOGO: A transformer-based YOLO segmentation model for breast mass detection and segmentation in digital mammograms
Su, Y.
Liu, Q.
Xie, W.
Hu, P.
Comput Methods Programs Biomed2022Journal Article, cited 4 times
Website
CBIS-DDSM
INBreast
Breast/diagnostic imaging
*Breast Neoplasms/diagnostic imaging
Computer Aided Diagnosis (CADx)
Female
Humans
Mammography/methods
*Neural Networks, Computer
Breast cancer
Deep learning
Mass detection
Mass segmentation
Transformer
BACKGROUND AND OBJECTIVE: Both mass detection and segmentation in digital mammograms play a crucial role in early breast cancer detection and treatment; clinical experience has shown that they are the upstream tasks of the pathological classification of breast lesions. Recent advancements in deep learning have made these analyses faster and more accurate. This study aims to develop a deep learning model architecture for breast cancer mass detection and segmentation using mammography. METHODS: In this work we proposed a double-shot model for simultaneous mass detection and segmentation using a combination of YOLO (You Only Look Once) and LOGO (Local-Global) architectures. First, we adopted YoloV5L6, a state-of-the-art object detection model, to locate and crop the breast mass in high-resolution mammograms. Second, to balance training efficiency and segmentation performance, we modified the LOGO training strategy to train on whole images and cropped images in global and local transformer branches separately. The two branches were then merged to form the final segmentation decision. RESULTS: The proposed YOLO-LOGO model was tested on two independent mammography datasets (CBIS-DDSM and INBreast) and performs significantly better than previous works. It achieves a true positive rate of 95.7% and a mean average precision of 65.0% for mass detection on the CBIS-DDSM dataset. Its performance for mass segmentation on the CBIS-DDSM dataset is F1-score = 74.5% and IoU = 64.0%. A similar performance trend is observed on the independent INBreast dataset. CONCLUSIONS: The proposed model has higher efficiency and better performance, reduces computational requirements, and improves the versatility and accuracy of computer-aided breast cancer diagnosis. Hence it has the potential to provide more assistance to doctors in early breast cancer detection and treatment, thereby reducing mortality.
Deep learning based time-to-event analysis with PET, CT and joint PET/CT for head and neck cancer prognosis
Wang, Y.
Lombardo, E.
Avanzo, M.
Zschaek, S.
Weingartner, J.
Holzgreve, A.
Albert, N. L.
Marschner, S.
Fanetti, G.
Franchin, G.
Stancanello, J.
Walter, F.
Corradini, S.
Niyazi, M.
Lang, J.
Belka, C.
Riboldi, M.
Kurz, C.
Landry, G.
Comput Methods Programs Biomed2022Journal Article, cited 0 times
Head-Neck-PET-CT
NSCLC-Radiomics
Canada
*Deep Learning
Fluorodeoxyglucose F18
*Head and Neck Neoplasms/diagnostic imaging
Humans
PET/CT
Positron Emission Tomography Computed Tomography
Positron-Emission Tomography/methods
Prognosis
Radiopharmaceuticals
Tomography, X-Ray Computed/methods
Deep Learning
Head-and-neck cancer
LUNG
OBJECTIVES: Recent studies have shown that deep learning based on pre-treatment positron emission tomography (PET) or computed tomography (CT) is promising for distant metastasis (DM) and overall survival (OS) prognosis in head and neck cancer (HNC). However, lesion segmentation is typically required, resulting in predictive power susceptible to variations in primary and lymph node gross tumor volume (GTV) segmentation. This study aimed at achieving prognosis without GTV segmentation, and at extending single-modality prognosis to joint PET/CT, allowing investigation of the predictive performance of combined versus single-modality inputs. METHODS: We employed a 3D-Resnet combined with a time-to-event outcome model to incorporate censoring information. We focused on the prognosis of DM and OS for HNC patients. For each clinical endpoint, five models with PET and/or CT images as input were compared: PET-GTV, PET-only, CT-GTV, CT-only, and PET/CT-GTV models, where -GTV indicates that the corresponding images were masked using the GTV contour. Publicly available delineated CT and PET scans from 4 different Canadian hospitals (293) and the MAASTRO clinic (74) were used for training with 3-fold cross-validation (CV). For independent testing, we used 110 patients from a collaborating institution. The predictive performance was evaluated via Harrell's Concordance Index (HCI) and Kaplan-Meier curves. RESULTS: In a 5-year time-to-event analysis, all models produced CV HCIs with median values around 0.8 for DM and 0.7 for OS. The best performance was obtained with the PET-only model, achieving a median testing HCI of 0.82 for DM and 0.69 for OS. Compared with the PET/CT-GTV model, the PET-only model still had an advantage of up to 0.07 in testing HCI. The Kaplan-Meier curves and corresponding log-rank test results also demonstrated the significant stratification capability of our models for the testing cohort. CONCLUSION: Deep learning-based DM and OS time-to-event models showed predictive capability and could provide indications for personalized RT. The best predictive performance being achieved by the PET-only model suggests that GTV segmentation might be less relevant for PET-based prognosis.
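Harrell's concordance index, the evaluation metric used throughout this abstract, can be computed with the lifelines package; whether the authors used lifelines is an assumption, and the numbers below are hypothetical:

```python
from lifelines.utils import concordance_index

# Hypothetical outputs for five patients: follow-up times in months,
# model risk scores (higher = higher risk), and event indicators
# (1 = distant metastasis observed, 0 = censored).
times = [14.0, 30.5, 7.2, 52.0, 25.1]
risks = [0.81, 0.35, 0.90, 0.12, 0.40]
events = [1, 0, 1, 0, 1]

# concordance_index expects predictions that increase with survival time,
# so negate the risk scores before scoring.
hci = concordance_index(times, [-r for r in risks], events)
print(f"Harrell's C-index: {hci:.2f}")
```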
LibHip: An open-access hip joint model repository suitable for finite element method simulation
Moshfeghifar, Faezeh
Gholamalizadeh, Torkan
Ferguson, Zachary
Schneider, Teseo
Nielsen, Michael Bachmann
Panozzo, Daniele
Darkner, Sune
Erleben, Kenny
Computer Methods and Programs in Biomedicine2022Journal Article, cited 1 times
Website
CT COLONOGRAPHY
TCGA-BLCA
CT Lymph Nodes
finite-element model
Hip joint repository
Background and objective: Population-based finite element analysis of hip joints allows us to understand the effect of inter-subject variability on simulation results. Developing large subject-specific population models is challenging and requires extensive manual effort; thus, the anatomical representations are often simplified, and the discretized geometries do not guarantee conformity in shared interfaces, leading to complications in setting up simulations. Additionally, these models are not openly accessible, challenging reproducibility. Our work provides multiple subject-specific hip joint finite element models and a novel semi-automated modeling workflow. Methods: We reconstruct 11 healthy subject-specific models, including the sacrum, the paired pelvic bones, the paired proximal femurs, the paired hip joints, the paired sacroiliac joints, and the pubic symphysis. The bones are derived from CT scans, and the cartilages are generated from the bone geometries. We generate the whole complex's volume mesh with conforming interfaces. Our models are evaluated using both mesh-quality metrics and simulation experiments. Results: The geometries of all the models were inspected by our clinical expert and show high-quality discretization with accurate geometries. The simulations produce smooth stress patterns, and the variance among the subjects highlights the effect of inter-subject variability and asymmetry on the predicted results. Conclusions: Our work is one of the largest model repositories with respect to the number of subjects and regions of interest in the hip joint area. Our detailed research data, including the clinical images, the segmentation label maps, the finite element models, and software tools, are openly accessible on GitHub; the link is provided in Moshfeghifar et al. (2022) [1]. Our aim is to empower clinical researchers with free access to verified and reproducible models. In future work, we aim to add additional structures to our models.
Functional-structural Sub-region Graph Convolutional Network (FSGCN): Application to the Prognosis of Head and Neck Cancer with PET/CT imaging
Lv, Wenbing
Zhou, Zidong
Peng, Junyi
Peng, Lihong
Lin, Guoyu
Wu, Huiqin
Xu, Hui
Lu, Lijun
Computer Methods and Programs in Biomedicine2023Journal Article, cited 0 times
Head-Neck-PET-CT
Head-Neck-Radiomics-HN1
TCGA-HNSC
QIN-HEADNECK
Radiomic features
Graph Convolutional Neural Network
Algorithm Development
Background and objective: Accurate risk stratification is crucial for enabling personalized treatment for head and neck cancer (HNC). Current PET/CT image-based prognostic methods include radiomics analysis and convolutional neural networks (CNN), but extracting radiomics or deep features in grid Euclidean space has inherent limitations for risk stratification. Here, we propose a functional-structural sub-region graph convolutional network (FSGCN) for accurate risk stratification of HNC. Methods: This study collected 642 patients from 8 different centers in The Cancer Imaging Archive (TCIA); 507 patients from 5 centers were used for training, and 135 patients from 3 centers were used for testing. The tumor was first clustered into multiple sub-regions using PET and CT voxel information, and radiomics features were extracted from each sub-region to characterize its functional and structural information. A graph was then constructed to represent the relationships/differences among sub-regions in non-Euclidean space for each patient, followed by a residual gated graph convolutional network; a prognostic score was finally generated to predict progression-free survival (PFS). Results: In the testing cohort, compared with the radiomics, FSGCN, or clinical model alone, the model PETCTFea_CTROI + Cli, which integrates the FSGCN prognostic score and clinical parameters, achieved the highest C-index and AUC of 0.767 (95% CI: 0.759-0.774) and 0.781 (95% CI: 0.774-0.788), respectively, for PFS prediction. It also showed good prognostic performance on the secondary endpoints OS, RFS, and MFS in the testing cohort, with C-indices of 0.786 (95% CI: 0.778-0.795), 0.775 (95% CI: 0.767-0.782), and 0.781 (95% CI: 0.772-0.789), respectively. Conclusions: The proposed FSGCN can better capture the metabolic and anatomic differences/interactions among sub-regions of the whole tumor imaged with PET/CT. Extensive multi-center experiments demonstrated its capability and generalization for prognosis prediction in HNC over conventional radiomics analysis.
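A generic sketch of the graph construction this abstract describes (cluster tumor voxels into sub-regions, summarize each sub-region as a node, connect related sub-regions) is shown below. The clustering method, node features, and edge rule are illustrative stand-ins, not the paper's exact construction:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph

# Hypothetical voxel table for one tumor: [PET SUV, CT HU] per voxel.
voxels = np.random.default_rng(0).random((5000, 2))

# Cluster voxels into functional-structural sub-regions.
n_subregions = 8
labels = KMeans(n_clusters=n_subregions, n_init=10,
                random_state=0).fit_predict(voxels)

# Node features: per-sub-region summary statistics (a radiomics stand-in).
nodes = np.array([voxels[labels == k].mean(axis=0)
                  for k in range(n_subregions)])

# Edges: connect each sub-region to its nearest neighbors in feature space;
# the resulting graph is what a GCN would consume.
adjacency = kneighbors_graph(nodes, n_neighbors=3, mode="connectivity")
print(adjacency.toarray())
```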
PyRaDiSe: A Python package for DICOM-RT-based auto-segmentation pipeline construction and DICOM-RT data conversion
Rüfenacht, Elias
Kamath, Amith
Suter, Yannick
Poel, Robert
Ermiş, Ekin
Scheib, Stefan
Reyes, Mauricio
Computer Methods and Programs in Biomedicine2023Journal Article, cited 0 times
Vestibular-Schwannoma-SEG
BACKGROUND AND OBJECTIVE: Despite fast evolution cycles in deep learning methodologies for medical imaging in radiotherapy, auto-segmentation solutions rarely run in clinics due to the lack of open-source frameworks feasible for processing DICOM RT Structure Sets. Besides this shortage, available open-source DICOM RT Structure Set converters rely exclusively on 2D reconstruction approaches, leading to pixelated contours with potentially low acceptance by healthcare professionals. PyRaDiSe, an open-source, deep learning framework-independent Python package, addresses these issues by providing a framework for building auto-segmentation solutions feasible to operate directly on DICOM data. In addition, PyRaDiSe provides profound DICOM RT Structure Set conversion and processing capabilities; thus, it also applies to auto-segmentation-related tasks, such as dataset construction for deep learning model training.
METHODS: The PyRaDiSe package follows a holistic approach and provides DICOM data handling, deep learning model inference, pre-processing, and post-processing functionalities. The DICOM data handling allows for highly automated and flexible handling of DICOM image series, DICOM RT Structure Sets, and DICOM registrations, including 2D-based and 3D-based conversion from and to DICOM RT Structure Sets. For deep learning model inference, extending given skeleton classes is straightforwardly achieved, allowing for employing any deep learning framework. Furthermore, a profound set of pre-processing and post-processing routines is included that incorporate partial invertibility for restoring spatial properties, such as image origin or orientation.
RESULTS: The PyRaDiSe package, characterized by its flexibility and automated routines, allows for fast deployment and prototyping, reducing the effort of auto-segmentation pipeline implementation. Furthermore, while deep learning model inference is independent of the deep learning framework, it can easily be integrated with popular deep learning frameworks such as PyTorch or TensorFlow. The developed package has successfully demonstrated its capabilities in a research project at our institution for organ-at-risk segmentation in brain tumor patients. Furthermore, PyRaDiSe has shown its conversion performance for dataset construction.
CONCLUSIONS: The PyRaDiSe package closes the gap between data science and clinical radiotherapy by enabling deep learning segmentation models to be easily transferred into clinical research practice. PyRaDiSe is available on https://github.com/ubern-mia/pyradise and can be installed directly from the Python Package Index using pip install pyradise.
Efficient diagnosis of hematologic malignancies using bone marrow microscopic images: A method based on MultiPathGAN and MobileViTv2
Yang, G.
Qin, Z.
Mu, J.
Mao, H.
Mao, H.
Han, M.
Comput Methods Programs Biomed2023Journal Article, cited 0 times
SN-AM
Classification
Computer Aided Diagnosis (CADx)
Algorithm Development
Pathomics
Imaging features
Stain normalisation
Multiple myeloma
Acute lymphocytic leukemia
Lymphoma
Leukemia
BACKGROUND AND OBJECTIVES: Hematologic malignancies, including their many subtypes, are critically threatening to human health, and timely detection is crucial for effective treatment. In this regard, the examination of bone marrow smears constitutes a crucial step; nonetheless, the conventional approach to cell identification and enumeration is laborious and time-intensive. The present study therefore aimed to develop a method for the efficient diagnosis of these malignancies directly from bone marrow microscopic images. METHODS: A deep learning-based framework was developed to facilitate the diagnosis of common hematologic malignancies. First, a total of 2033 microscopic images of bone marrow analysis, covering 6 disease types and 1 healthy control, were collected from two Chinese medical websites. Next, the collected images were split into training, validation, and test datasets in a 7:1:2 ratio. Subsequently, a method of stain normalization to multiple domains (stain domain augmentation) based on the MultiPathGAN model was developed to equalize the stain styles and expand the image datasets. Afterward, a lightweight hybrid model named MobileViTv2, which integrates the strengths of both CNNs and ViTs, was developed for disease classification. The resulting model was trained and used to diagnose patients based on multiple microscopic images of their bone marrow smears, obtained from a cohort of 61 individuals. RESULTS: MobileViTv2 exhibited an average accuracy of 94.28% on the test set, with multiple myeloma, acute lymphocytic leukemia, and lymphoma diagnosed with the highest accuracies of 98%, 96%, and 96%, respectively. For patient-level prediction, the average accuracy of MobileViTv2 was 96.72%. The model outperformed both CNN and ViT models in terms of accuracy, despite using only 9.8 million parameters. When applied to two public datasets, MobileViTv2 achieved accuracies of 99.75% and 99.72%, respectively, outperforming previous methods. CONCLUSIONS: The proposed framework can be applied directly to bone marrow microscopic images with different stain styles to efficiently establish the diagnosis of common hematologic malignancies.
Multi-institutional PET/CT image segmentation using federated deep transformer learning
Shiri, I.
Razeghi, B.
Vafaei Sadr, A.
Amini, M.
Salimi, Y.
Ferdowsi, S.
Boor, P.
Gunduz, D.
Voloshynovskiy, S.
Zaidi, H.
Comput Methods Programs Biomed2023Journal Article, cited 0 times
Website
HNSCC
Deep transformers
Federated learning
PET/CT
Privacy
Segmentation
BACKGROUND AND OBJECTIVE: Generalizable and trustworthy deep learning models for PET/CT image segmentation necessitate large, diverse, multi-institutional datasets. However, legal, ethical, and patient-privacy issues challenge the sharing of datasets between different centers. To overcome these challenges, we developed a federated learning (FL) framework for multi-institutional PET/CT image segmentation. METHODS: A dataset consisting of 328 head and neck (HN) cancer patients who underwent clinical PET/CT examinations, gathered from six different centers, was enrolled. A pure transformer network was implemented as the core segmentation algorithm, using dual-channel PET/CT images. We evaluated different frameworks (single-center-based, centralized baseline, and seven different FL algorithms) using 68 PET/CT images (20% of each center's data). In particular, the implemented FL algorithms include clipping with the quantile estimator (ClQu), zeroing with the quantile estimator (ZeQu), federated averaging (FedAvg), lossy compression (LoCo), robust aggregation (RoAg), secure aggregation (SeAg), and Gaussian differentially private FedAvg with adaptive quantile clipping (GDP-AQuCl). RESULTS: The Dice coefficient was 0.80+/-0.11 for both the centralized and SeAg FL algorithms. All FL approaches achieved the performance of the centralized learning model, with no statistically significant differences. Among the FL algorithms, SeAg and GDP-AQuCl performed better than the other techniques, although the difference was not statistically significant. All algorithms, except the single-center-based approach, resulted in relative errors of less than 5% for SUV(max) and SUV(mean) for all FL and centralized methods. The centralized and FL algorithms significantly outperformed the single-center-based baseline. CONCLUSIONS: The developed FL algorithms, matching the performance of the centralized method, exhibited promising performance for HN tumor segmentation from PET/CT images.
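Among the compared aggregation schemes, FedAvg is the simplest to illustrate: a weighted average of client model weights. The PyTorch sketch below shows only that step, not the secure or robust variants (SeAg, RoAg, GDP-AQuCl), and is an illustrative reading of the abstract rather than the authors' code:

```python
import copy
import torch

def fedavg(client_states, client_sizes):
    """One FedAvg aggregation round: weighted average of client state_dicts.

    client_states: list of state_dicts from the participating centers.
    client_sizes:  number of training cases per center (aggregation weights).
    """
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        weighted = sum(sd[key].float() * (n / total)
                       for sd, n in zip(client_states, client_sizes))
        # Cast back so integer buffers (e.g. batch-norm counters) keep
        # their dtype; a simplification acceptable in a sketch.
        avg[key] = weighted.to(avg[key].dtype)
    return avg

# Each round: broadcast `avg` back to the centers, train locally, re-aggregate.
```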
MOB-CBAM: A dual-channel attention-based deep learning generalizable model for breast cancer molecular subtypes prediction using mammograms
Nissar, I.
Alam, S.
Masood, S.
Kashif, M.
Comput Methods Programs Biomed2024Journal Article, cited 0 times
Website
CBIS-DDSM
*Deep Learning
Mammography
*Calcinosis
Image Processing, Computer-Assisted
*Neoplasms
Attention mechanism
Breast cancer
Deep Learning
Molecular subtypes
BACKGROUND AND OBJECTIVE: Deep learning models have emerged as a significant tool for generating efficient solutions to complex problems, including cancer detection, as they can analyze large amounts of data with high efficiency and performance. Recent medical studies highlight the significance of molecular subtype detection in breast cancer, aiding the development of personalized treatment plans, as different subtypes of cancer respond better to different therapies. METHODS: In this work, we propose MOB-CBAM, a novel lightweight dual-channel attention-based deep learning model that combines a MobileNet-V3 backbone with a Convolutional Block Attention Module (CBAM) to make highly accurate and precise predictions about breast cancer. We used the CMMD mammogram dataset to evaluate the proposed model. Nine distinct data subsets were created from the original dataset to perform coarse- and fine-grained predictions, enabling the model to identify masses, calcifications, benign and malignant tumors, and molecular subtypes of cancer, including Luminal A, Luminal B, HER-2 Positive, and Triple Negative. The pipeline incorporates several image pre-processing techniques, including filtering, enhancement, and normalization, to enhance the model's generalization ability. RESULTS: In identifying benign versus malignant tumors, i.e., coarse-grained classification, the MOB-CBAM model produced exceptional results, with 99% accuracy, precision, recall, and F1-score values of 0.99, and an MCC of 0.98. In fine-grained classification, the MOB-CBAM model proved highly effective at the mass (benign/malignant) and calcification (benign/malignant) classification tasks, with an impressive accuracy of 98%. We also cross-validated the efficiency of the proposed MOB-CBAM architecture on two datasets: MIAS and CBIS-DDSM. On the MIAS dataset, an accuracy of 97% was achieved for the task of classifying benign, malignant, and normal images, while on the CBIS-DDSM dataset, an accuracy of 98% was achieved for classifying masses and calcifications as benign or malignant. CONCLUSION: This study presents the lightweight MOB-CBAM, a novel deep learning framework for breast cancer diagnosis and subtype prediction. The model's incorporation of the CBAM enhances the precision of its predictions. The extensive evaluation on the CMMD dataset and cross-validation on other datasets affirm the model's efficacy.
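The CBAM building block named in the title is a published module (Woo et al., 2018): channel attention followed by spatial attention. Below is a minimal PyTorch version with illustrative layer sizes; it is not the authors' exact configuration on the MobileNet-V3 backbone:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then
    spatial attention. Reduction ratio and kernel size are illustrative."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(                        # shared channel MLP
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: avg- and max-pooled descriptors through the MLP.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: conv over channel-pooled maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 64, 32, 32)     # hypothetical backbone feature map
print(CBAM(64)(feat).shape)           # shape is preserved: (2, 64, 32, 32)
```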
A time-dependent explainable radiomic analysis from the multi-omic cohort of CPTAC-Pancreatic Ductal Adenocarcinoma
Zaccaria, G. M.
Berloco, F.
Buongiorno, D.
Brunetti, A.
Altini, N.
Bevilacqua, V.
Comput Methods Programs Biomed2024Journal Article, cited 0 times
Website
CPTAC-PDA
Explainability
Machine learning
Pancreatic ductal adenocarcinoma
Radiomics
Survival analysis
BACKGROUND AND OBJECTIVE: In Pancreatic Ductal Adenocarcinoma (PDA), multi-omic models are emerging to answer unmet clinical needs by deriving novel quantitative prognostic factors. We realized a pipeline that relies on survival machine-learning (SML) classifiers and explainability based on patients' follow-up (FU) to stratify prognosis from the publicly available multi-omic datasets of the CPTAC-PDA project. MATERIALS AND METHODS: The analyzed datasets included tumor-annotated radiologic images and clinical and mutational data. Feature selection was based on univariate (UV) and multivariate (MV) survival analyses according to Overall Survival (OS) and recurrence (REC). In this study, we considered seven multi-omic datasets and compared four SML classifiers: Cox, survival random forest, generalized boosted, and support vector machines (SVM). For each classifier, we assessed the concordance (C) index on the validation set. The best classifiers on the validation set for both OS and REC underwent explainability analyses using SurvSHAP(t), which extends SHapley Additive exPlanations (SHAP). RESULTS: According to OS, after UV and MV analyses we selected 18/37 and 10/37 multi-omic features, respectively. According to REC, based on UV and MV analyses we selected 10/35 and 5/35 determinants, respectively. In general, SML classifiers including radiomics outperformed those modelled on clinical or mutational predictors. For OS, the Cox model encompassing radiomic, clinical, and mutational features reached a C-index of 75%, outperforming the other classifiers. For REC, the SVM model including only radiomics emerged as the best-performing, with a C-index of 68%. For OS, SurvSHAP(t) identified the first order Median Gray Level (GL) intensities, gender, tumor grade, the Joint Energy GL Co-occurrence Matrix (GLCM), and the GLCM Informational Measure of Correlation of type 1 as the most important features. For REC, the first order Median GL intensities, the GL size zone matrix Small Area Low GL Emphasis, and the first order variance of GL intensities emerged as the most discriminative. CONCLUSIONS: In this work, radiomics showed potential for improving patients' risk stratification in PDA. Furthermore, a time-dependent explainability analysis of the top multi-omic predictors provided a deeper understanding of how radiomics can contribute to prognosis in PDA.
Fuzzy information granulation towards benign and malignant lung nodules classification
Amini, Fatemeh
Amjadifard, Roya
Mansouri, Azadeh
Computer Methods and Programs in Biomedicine Update2024Journal Article, cited 0 times
Website
LIDC-IDRI
SPIE-AAPM Lung CT Challenge
Algorithm Development
Classification
Radiomic features
Lung cancer is the second most common cancer and has the highest death rate in the world. Diagnosis in the early stages is a critical factor in speeding up treatment. This paper proposes a new machine learning method based on a fuzzy approach to detect benign and malignant lung nodules, enabling early diagnosis of lung cancer from computed tomography (CT) images. First, the lung nodule images are pre-processed via the Gabor wavelet transform. Then, texture features are extracted from the transformed domain based on the statistical characteristics and histogram of the local patterns of the images. Finally, based on the fuzzy information granulation (FIG) method, which is widely recognized for its ability to distinguish between similar textures, a FIG-based classifier is introduced to classify benign and malignant lung nodules. The clinical dataset used for this research is a combination of 150 CT scans from the LIDC and SPIE-AAPM datasets; the LIDC dataset is also analyzed alone. The results show that the proposed method can be an innovative alternative for classifying benign and malignant nodules in CT images.
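The Gabor-domain texture features described here can be approximated with scikit-image's gabor filter bank. The sketch below computes simple response statistics over a few frequencies and orientations; the specific filter bank and statistics are assumptions, and the FIG classifier itself is not reproduced:

```python
import numpy as np
from skimage.filters import gabor

def gabor_texture_features(image, frequencies=(0.1, 0.2, 0.4),
                           thetas=(0.0, np.pi / 4)):
    """Statistics of Gabor responses over a small filter bank; a simple
    stand-in for texture features extracted in the Gabor domain."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            mag = np.hypot(real, imag)       # response magnitude
            feats += [mag.mean(), mag.std()]
    return np.array(feats)

roi = np.random.default_rng(0).random((64, 64))  # hypothetical nodule ROI
print(gabor_texture_features(roi).shape)          # (12,) feature vector
```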
Empathy structure in multi-agent system with the mechanism of self-other separation: Design and analysis from a random walk view
Chen, Jize
Liu, Bo
Qu, Zhenshen
Wang, Changhong
Cognitive Systems Research2023Journal Article, cited 0 times
Website
LIDC-IDRI
Random Forest
Semi-supervised learning
In a socialized multi-agent system, the preferences of individuals will inevitably be influenced by others. This paper introduces an extended empathy structure to characterize the coupling process of preferences under specific relations, covering scenarios that include human society, human-machine systems, and even abiotic engineering applications. In this model, empathy is abstracted as a stochastic experience process in the form of a Markov chain, and the coupled empathy utility is defined as the expectation of obtaining preferences under the corresponding probability distribution. Self-other separation is the core concept with which our structure exhibits social attributes, including attraction of implicit states, inhibition of excessive empathy, attention to empathetic targets, and anisotropy of the utility distribution. Compared with previous empirical models, our model performs better on the dataset and can provide a new perspective for designing and analyzing the cognitive layer of human-machine networks, as well as information fusion and semi-supervised clustering methods in engineering.
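The definition of the coupled empathy utility as an expectation under a Markov chain's distribution can be made concrete with a toy computation: find the stationary distribution of a transition matrix and take the expected preference payoff under it. The matrix and payoffs below are hypothetical, and this does not reproduce the paper's self-other separation mechanism:

```python
import numpy as np

# Hypothetical empathy chain: states are experience states the agent
# simulates when taking another's perspective; P is the transition matrix.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
preference = np.array([1.0, 0.2, -0.5])  # preference payoff per state

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

utility = pi @ preference  # empathy utility = expected preference under pi
print(pi.round(3), round(float(utility), 3))
```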
An automated lung segmentation approach using bidirectional chain codes to improve nodule detection accuracy
Shen, Shiwen
Bui, Alex AT
Cong, Jason
Hsu, William
Computers in Biology and Medicine2015Journal Article, cited 31 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
Segmentation
Support Vector Machine (SVM)
Computer-aided detection and diagnosis (CAD) has been widely investigated to improve radiologists' diagnostic accuracy in detecting and characterizing lung disease, as well as to assist with processing increasingly sizable volumes of imaging. Lung segmentation is a requisite preprocessing step for most CAD schemes. This paper proposes a parameter-free lung segmentation algorithm aimed at improving lung nodule detection accuracy, focusing on juxtapleural nodules. A bidirectional chain coding method combined with a support vector machine (SVM) classifier is used to selectively smooth the lung border while minimizing over-segmentation of adjacent regions. This automated method was tested on 233 computed tomography (CT) studies from the Lung Image Database Consortium (LIDC), representing 403 juxtapleural nodules. The approach obtained a 92.6% re-inclusion rate. Segmentation accuracy was further validated on 10 randomly selected CT series, finding a 0.3% average over-segmentation ratio and a 2.4% under-segmentation rate when compared to manually segmented reference standards produced by an expert.
Quantitative glioma grading using transformed gray-scale invariant textures of MRI
Hsieh, Kevin Li-Chun
Chen, Cheng-Yu
Lo, Chung-Ming
Computers in Biology and Medicine2017Journal Article, cited 8 times
Website
Algorithm Development
TCGA-LGG
TCGA-GBM
BRAIN
Computer Aided Diagnosis (CADx)
Background: A computer-aided diagnosis (CAD) system based on intensity-invariant magnetic resonance (MR) imaging features was proposed to grade gliomas, for general application across various scanning systems and settings. Method: In total, 34 glioblastomas and 73 lower-grade gliomas comprised the image database used to evaluate the proposed CAD system. For each case, the local texture on MR images was transformed into a local binary pattern (LBP), which is intensity-invariant. From the LBP, quantitative image features, including the histogram moment and textures, were extracted and combined in a logistic regression classifier to establish a malignancy prediction model. The performance was compared with conventional texture features to demonstrate the improvement. Results: The CAD system based on LBP features achieved an accuracy of 93% (100/107), a sensitivity of 97% (33/34), a negative predictive value of 99% (67/68), and an area under the receiver operating characteristic curve (Az) of 0.94, significantly better than conventional texture features: an accuracy of 84% (90/107), a sensitivity of 76% (26/34), a negative predictive value of 89% (64/72), and an Az of 0.89, with respective p-values of 0.0303, 0.0122, 0.0201, and 0.0334. Conclusions: More robust texture features were extracted from MR images and combined into a significantly better CAD system for distinguishing glioblastomas from lower-grade gliomas. The proposed CAD system would be more practical in clinical use with various imaging systems and settings.
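The LBP-plus-logistic-regression pipeline described here maps directly onto scikit-image and scikit-learn primitives. The sketch below uses random stand-in ROIs and an assumed (P, R) = (8, 1) uniform LBP configuration, not the study's actual settings:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import LogisticRegression

def lbp_histogram(image, P=8, R=1.0):
    """Intensity-invariant texture descriptor: normalized histogram of
    uniform LBP codes (values 0..P+1 under the 'uniform' method)."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Hypothetical tumor ROI slices and grades: 1 = glioblastoma, 0 = lower grade.
rng = np.random.default_rng(0)
rois = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8) for _ in range(40)]
grades = rng.integers(0, 2, size=40)

X = np.array([lbp_histogram(r) for r in rois])
clf = LogisticRegression().fit(X, grades)   # malignancy prediction model
```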
Deciphering unclassified tumors of non-small-cell lung cancer through radiomics
Saad, Maliazurina
Choi, Tae-Sun
Computers in Biology and Medicine2017Journal Article, cited 8 times
Website
NSCLC-Radiomics
non-small-cell lung cancer
Radiomics
Pilot study for supervised target detection applied to spatially registered multiparametric MRI in order to non-invasively score prostate cancer
Mayer, Rulon
Simone, Charles B
Skinner, William
Turkbey, Baris
Choyke, Peter
Computers in Biology and Medicine2018Journal Article, cited 0 times
Website
PROSTATE-MRI
Supervised target detection
Gleason scoring
Prostate cancer
MRI
Multi-parametric MRI
Meta-analysis
BACKGROUND: The Gleason Score (GS) is a validated predictor of prostate cancer (PCa) disease progression and outcomes. GS from invasive needle biopsies suffers from significant inter-observer variability and possible sampling error, leading to underestimation of disease severity ("underscoring") and possible complications. A robust non-invasive image-based approach is therefore needed. PURPOSE: To use spatially registered multi-parametric MRI (MP-MRI), signatures, and supervised target detection algorithms (STDA) to non-invasively determine the GS of PCa at the voxel level. METHODS AND MATERIALS: This study retrospectively analyzed 26 MP-MRI studies from The Cancer Imaging Archive. The MP-MRI (T2, Diffusion Weighted, Dynamic Contrast Enhanced) were spatially registered to each other, combined into stacks, and stitched together to form hypercubes. Multi-parametric (or multi-spectral) signatures derived from a training set of registered MP-MRI were transformed using a statistics-based Whitening-Dewhitening (WD) transform. The transformed signatures were inserted into STDA (having conical decision surfaces) applied to the registered MP-MRI to determine the tumor GS. The MRI-derived GS was quantitatively compared to the pathologist's assessment of the histology of sectioned whole-mount prostates from patients who underwent radical prostatectomy. In addition, a meta-analysis of 17 studies of needle-biopsy-determined GS with confusion matrices was compared to the MRI-determined GS. RESULTS: STDA- and histology-determined GS are highly correlated (R = 0.86, p < 0.02). STDA more accurately determined GS and reduced GS underscoring of PCa relative to needle biopsy, as summarized by the meta-analysis (p < 0.05). CONCLUSION: This pilot study found that registered MP-MRI, STDA, and WD transforms of signatures show promise for non-invasively grading PCa and reducing underscoring with high spatial resolution.
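The statistics-based whitening half of the WD transform can be illustrated with a generic ZCA-style whitening of voxel signatures; this is a standard construction and a stand-in for, not a reproduction of, the authors' transform:

```python
import numpy as np

def whitening_transform(X):
    """ZCA-style whitening: decorrelate features and scale to unit variance.

    X: (n_voxels, n_channels) array of registered MP-MRI signatures.
    Returns the whitened data, the whitening matrix, and the mean.
    """
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-9)) @ evecs.T
    return (X - mu) @ W, W, mu

# Toy voxel signatures: rows = voxels, columns = MP-MRI channels.
X = np.random.default_rng(0).normal(size=(500, 3))
Xw, W, mu = whitening_transform(X)
print(np.cov(Xw, rowvar=False).round(2))  # approximately the identity
```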
Pathophysiological mapping of tumor habitats in the breast in DCE-MRI using molecular texture descriptor
da Silva Neto, Otilio Paulo
Araújo, José Denes Lima
Caldas Oliveira, Ana Gabriela
Cutrim, Mara
Silva, Aristófanes Corrêa
Paiva, Anselmo Cardoso
Gattass, Marcelo
Computers in Biology and Medicine2019Journal Article, cited 0 times
QIN Breast DCE-MRI
Breast
MRI
BACKGROUND: We propose a computational methodology capable of detecting and analyzing breast tumor habitats in images acquired by magnetic resonance imaging with dynamic contrast enhancement (DCE-MRI), based on the pathophysiological behavior of the contrast agent (CA).
METHODS: The proposed methodology comprises three steps. In summary, the first step is the acquisition of images from the Quantitative Imaging Network Breast collection. In the second step, segmentation of the breasts is performed to remove the background, noise, and other unwanted objects from the image. In the third step, habitats are generated by applying two techniques: the molecular texture descriptor (MTD), which highlights CA regions in the breast, and pathophysiological texture mapping (MPT), which generates tumor habitats based on the behavior of the CA. The combined use of these two techniques allows automatic detection of tumors in the breast and analysis of each habitat separately with respect to its malignancy type.
RESULTS: The results found in this study were promising, with 100% of breast tumors being identified. The segmentation results exhibited an accuracy of 99.95%, sensitivity of 71.07%, specificity of 99.98%, and volumetric similarity of 77.75%. Moreover, we were able to classify the malignancy of the tumors, with 6 classified as malignant type III (WashOut) and 14 as malignant type II (Plateau), for a total of 20 cases.
CONCLUSION: We proposed a method for the automatic detection of tumors in the breast in DCE-MRI and performed the pathophysiological mapping of tumor habitats by analyzing the behavior of the CA, combining MTD and MPT, which allowed the mapping of internal tumor habitats.
Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm
Buda, Mateusz
Saha, Ashirbani
Mazurowski, Maciej A
Computers in Biology and Medicine2019Journal Article, cited 1 times
Website
TCGA-LGG
Radiomics
Deep learning
Radiogenomics
Brain
Recent analyses identified distinct genomic subtypes of lower-grade glioma tumors that are associated with shape features. In this study, we propose a fully automatic way to quantify tumor imaging characteristics using deep learning-based segmentation, and test whether these characteristics are predictive of tumor genomic subtypes. We used preoperative imaging and genomic data of 110 patients from 5 institutions with lower-grade gliomas from The Cancer Genome Atlas. Based on automatic deep learning segmentations, we extracted three features quantifying two-dimensional and three-dimensional characteristics of the tumors. Genomic data for the analyzed cohort consisted of previously identified genomic clusters based on IDH mutation and 1p/19q co-deletion, DNA methylation, gene expression, DNA copy number, and microRNA expression. To analyze the relationship between the imaging features and genomic clusters, we conducted the Fisher exact test for 10 hypotheses for each pair of imaging feature and genomic subtype. To account for multiple hypothesis testing, we applied a Bonferroni correction; p-values lower than 0.005 were considered statistically significant. We found the strongest association between RNASeq clusters and the bounding ellipsoid volume ratio (p < 0.0002) and between RNASeq clusters and margin fluctuation (p < 0.005). In addition, we identified associations between the bounding ellipsoid volume ratio and all tested molecular subtypes (p < 0.02), as well as between angular standard deviation and RNASeq clusters (p < 0.02). In terms of the automatic tumor segmentation used to generate the quantitative image characteristics, our deep learning algorithm achieved a mean Dice coefficient of 82%, which is comparable to human performance.
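The statistical procedure described here (Fisher exact tests over 10 hypotheses with a Bonferroni-corrected threshold of 0.005) is straightforward to reproduce with SciPy; the contingency table below is hypothetical:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: a binarized shape feature (high/low bounding
# ellipsoid volume ratio) versus membership in one genomic cluster.
table = [[18, 7],
         [9, 21]]
_, p = fisher_exact(table)

n_tests = 10                 # 10 hypotheses per feature/subtype pair
alpha = 0.05 / n_tests       # Bonferroni-corrected threshold: 0.005
print(p, "significant" if p < alpha else "not significant")
```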
Bone segmentation on whole-body CT using convolutional neural network with novel data augmentation techniques
Noguchi, Shunjiro
Nishio, Mizuho
Yakami, Masahiro
Nakagomi, Keita
Togashi, Kaori
Computers in Biology and Medicine2020Journal Article, cited 0 times
CT Lymph Nodes
BACKGROUND: The purpose of this study was to develop and evaluate an algorithm for bone segmentation on whole-body CT using a convolutional neural network (CNN).
METHODS: Bone segmentation was performed using a network based on U-Net architecture. To evaluate its performance and robustness, we prepared three different datasets: (1) an in-house dataset comprising 16,218 slices of CT images from 32 scans in 16 patients; (2) a secondary dataset comprising 12,529 slices of CT images from 20 scans in 20 patients, which were collected from The Cancer Imaging Archive; and (3) a publicly available labelled dataset comprising 270 slices of CT images from 27 scans in 20 patients. To improve the network's performance and robustness, we evaluated the efficacy of three types of data augmentation technique: conventional method, mixup, and random image cropping and patching (RICAP).
RESULTS: The network trained on the in-house dataset achieved a mean Dice coefficient of 0.983 ± 0.005 on cross-validation with the in-house dataset, and 0.943 ± 0.007 with the secondary dataset. The network trained on the public dataset achieved a mean Dice coefficient of 0.947 ± 0.013 on 10 randomly generated 15-3-9 splits of the public dataset. These results outperform those reported previously. Regarding augmentation techniques, the conventional method, RICAP, and their combination were all effective.
CONCLUSIONS: The CNN-based model achieved accurate bone segmentation on whole-body CT, with generalizability to various scan conditions. Data augmentation techniques enabled construction of an accurate and robust model even with a small dataset.
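Of the augmentation techniques evaluated, mixup is the most compact to illustrate. The sketch below blends image pairs and their label masks, a common way to adapt mixup to segmentation; whether the authors mixed masks in exactly this way is an assumption:

```python
import numpy as np
import torch

def mixup(images, masks, alpha=0.2):
    """Mixup for segmentation: convexly blend image pairs and their masks.

    images, masks: float tensors of shape (B, C, H, W); the mixed masks
    become soft labels in [0, 1].
    """
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(images.size(0))
    mixed_x = lam * images + (1 - lam) * images[perm]
    mixed_y = lam * masks + (1 - lam) * masks[perm]
    return mixed_x, mixed_y

x = torch.rand(4, 1, 64, 64)          # toy CT slices
y = (torch.rand(4, 1, 64, 64) > 0.5).float()  # toy bone masks
mx, my = mixup(x, y)
```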
Encryption of 3D medical images based on a novel multiparameter cosine number transform
Lima, V.S.
Madeiro, F.
Lima, J.B.
Computers in Biology and Medicine2020Journal Article, cited 0 times
Mouse-Astrocytoma
PROSTATEx
QIN-BREAST
RIDER NEURO MRI
TCGA-CESC
In this paper, a multiparameter cosine number transform is proposed. The transform is obtained using the fact that the basis vectors of the three-dimensional cosine number transform (3D-CNT) constitute a possible eigenbasis for the Laplacian of the cubical lattice graph evaluated in a finite field. The proposed transform, identified as the three-dimensional steerable cosine number transform (3D-SCNT), is defined by rotating the 3D-CNT basis vectors using a finite-field rotation operator. We introduce a 3D medical image encryption scheme based on the 3D-SCNT, which uses the rotation angles as secret parameters. By means of computer experiments, we verified that the scheme is resistant to the main cryptographic attacks.
Multiclass magnetic resonance imaging brain tumor classification using artificial intelligence paradigm
Tandel, G. S.
Balestrieri, A.
Jujaray, T.
Khanna, N. N.
Saba, L.
Suri, J. S.
Comput Biol Med2020Journal Article, cited 157 times
Website
REMBRANDT
Artificial Intelligence
Bayes Theorem
*Brain Neoplasms/diagnostic imaging
*Deep Learning
Humans
Magnetic Resonance Imaging (MRI)
Benchmarking
Classification
Convolutional Neural Network (CNN)
Machine learning
Performance
Transfer learning
Tumour grading system
Validation
Verification
Algorithm Development
MOTIVATION: Brain or central nervous system cancer is the tenth leading cause of death in men and women. Even though brain tumours are not considered a primary cause of mortality worldwide, 40% of other types of cancer (such as lung or breast cancers) transform into brain tumours due to metastasis. Although biopsy is considered the gold standard for cancer diagnosis, it poses several challenges, such as low sensitivity/specificity, risk during the biopsy procedure, and relatively long waiting times for results. Due to the increase in the sheer volume of patients with brain tumours, there is a need for a non-invasive, automatic computer-aided diagnosis tool that can accurately diagnose and grade a tumour within a few seconds. METHOD: Five clinically relevant multiclass datasets (two-, three-, four-, five-, and six-class) were designed. A transfer-learning-based Artificial Intelligence paradigm using a Convolutional Neural Network (CNN) was proposed, leading to higher performance in brain tumour grading/classification using magnetic resonance imaging (MRI) data. We benchmarked the transfer-learning-based CNN model against six machine learning (ML) classification methods: Decision Tree, Linear Discriminant, Naive Bayes, Support Vector Machine, K-nearest neighbour, and Ensemble. RESULTS: The CNN-based deep learning (DL) model outperforms the six ML models on all five multiclass tumour datasets. The CNN-based AlexNet transfer learning system yielded mean accuracies, derived from three cross-validation protocols (K2, K5, and K10), of 100, 95.97, 96.65, 87.14, and 93.74%, respectively. The mean areas under the curve of DL and ML were 0.99 and 0.87, respectively (p < 0.0001), with DL showing a 12.12% improvement over ML. The multiclass datasets were also benchmarked against the TT protocol (where training and testing samples are the same). The optimal model was validated using a statistical tumour-separation index and verified on synthetic data consisting of eight classes. CONCLUSION: The transfer-learning-based AI system is useful for multiclass brain tumour grading and shows better performance than ML systems.
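The AlexNet transfer-learning setup described here follows a standard torchvision recipe: load ImageNet weights and replace the final classifier layer to match the number of tumour classes. The sketch below assumes torchvision >= 0.13 and makes an illustrative choice to freeze the convolutional trunk; the paper does not specify these details:

```python
import torch.nn as nn
from torchvision import models

def build_transfer_model(n_classes):
    """AlexNet pretrained on ImageNet, with the last classifier layer
    replaced for an n-class tumour-grading problem."""
    model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():  # freeze the conv trunk (a choice)
        p.requires_grad = False
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, n_classes)
    return model

model = build_transfer_model(n_classes=6)  # e.g. the six-class dataset
```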
Hessian-MRLoG: Hessian information and multi-scale reverse LoG filter for pulmonary nodule detection
Mao, Q.
Zhao, S.
Tong, D.
Su, S.
Li, Z.
Cheng, X.
Comput Biol Med2021Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
LUNA16 Challenge
Algorithm Development
Computer-aided detection (CADe) of pulmonary nodules is an effective approach for early detection of lung cancer. However, due to the low contrast of lung computed tomography (CT) images and the interference of blood vessels and calcifications, CADe suffers from a low detection rate and a high false-positive rate (FPR). To solve these problems, a novel method using Hessian information and a multi-scale reverse Laplacian of Gaussian (LoG) filter (Hessian-MRLoG) is proposed and developed in this work. Since the intensity distribution of the LoG operator and that of lung nodules in CT images are inconsistent, and their shapes are mismatched, a multi-scale reverse Laplacian of Gaussian (MRLoG) is constructed. In addition, in order to enhance the effectiveness of target detection, the second-order partial derivatives of MRLoG are partially adjusted by introducing an adjustment factor. On this basis, the Hessian-MRLoG model is developed, and a novel elliptic filter is designed. Ultimately, the Hessian-MRLoG filtering method is proposed and developed for pulmonary nodule detection. To verify its effectiveness and accuracy, the proposed method was used to analyze the LUNA16 dataset. The experimental results revealed that the proposed method had an accuracy of 93.6% and produced 1.0 false positives per scan (FPs/scan), indicating that the proposed method can improve the detection rate and significantly reduce the FPR. Therefore, the proposed method has the potential for application in the detection, localization and labeling of other lesion areas.
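The multi-scale LoG idea underlying Hessian-MRLoG can be illustrated with a plain scale-normalized LoG response; the authors' reverse-LoG construction and Hessian-based adjustment factor are not reproduced here, and the scale set is an assumption.

    # Hedged sketch: per-voxel maximum of scale-normalized LoG responses.
    import numpy as np
    from scipy import ndimage

    def multiscale_log(volume, sigmas=(1.0, 2.0, 4.0)):
        responses = [
            -(s ** 2) * ndimage.gaussian_laplace(volume.astype(float), sigma=s)
            for s in sigmas  # negated so bright blobs yield positive responses
        ]
        return np.max(np.stack(responses), axis=0)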
Radiomics-based machine learning model for efficiently classifying transcriptome subtypes in glioblastoma patients from MRI
Le, Nguyen Quoc Khanh
Hung, Truong Nguyen Khanh
Do, Duyen Thi
Lam, Luu Ho Thanh
Dang, Luong Huu
Huynh, Tuan-Tu
Comput Biol Med2021Journal Article, cited 0 times
Website
TCGA-GBM
Ivy GAP
Glioblastoma Multiforme (GBM)
BRAIN
Magnetic Resonance Imaging (MRI)
Radiogenomics
Radiomics
BACKGROUND: In the field of glioma, transcriptome subtypes have been considered an important diagnostic and prognostic biomarker that may help improve treatment efficacy. However, existing identification methods of transcriptome subtypes are limited due to the relatively long detection period, the unattainability of tumor specimens via biopsy or surgery, and the fleeting nature of intralesional heterogeneity. In search of a superior model over previous ones, this study evaluated the efficiency of an eXtreme Gradient Boosting (XGBoost)-based radiomics model to classify transcriptome subtypes in glioblastoma patients. METHODS: This retrospective study retrieved patients from the TCGA-GBM and IvyGAP cohorts with pathologically diagnosed glioblastoma, and separated them into different transcriptome subtype groups. GBM patients were then segmented into three different regions of MRI: enhancement of the tumor core (ET), non-enhancing portion of the tumor core (NET), and peritumoral edema (ED). We subsequently used handcrafted radiomics features (n = 704) from multimodality MRI and two-level feature selection techniques (Spearman correlation and F-score tests) in order to find the features that could be relevant. RESULTS: After the feature selection approach, we identified the 13 most meaningful radiomics features for reaching optimal results. With these features, our XGBoost model reached predictive accuracies of 70.9%, 73.3%, 88.4%, and 88.4% for classical, mesenchymal, neural, and proneural subtypes, respectively. Our model's performance improved upon other models as well as previous works on the same dataset. CONCLUSION: The use of XGBoost and two-level feature selection analysis (Spearman correlation and F-score) could be expected as a potential combination for classifying transcriptome subtypes with high performance and might raise public attention for further research on radiomics-based GBM models.
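A minimal sketch of the two-level filtering described above, pruning redundant features by Spearman correlation and then ranking the survivors by F-score before fitting XGBoost; the correlation threshold, the greedy pruning order, and the final feature count are assumptions.

    # Hedged sketch: Spearman redundancy pruning + F-score ranking + XGBoost.
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.feature_selection import f_classif
    from xgboost import XGBClassifier

    def select_features(X, y, rho_max=0.9, top_k=13):
        rho, _ = spearmanr(X)                      # feature-feature correlations
        rho = np.abs(rho)
        keep = []
        for j in range(X.shape[1]):                # greedy redundancy pruning
            if all(rho[j, k] < rho_max for k in keep):
                keep.append(j)
        f_scores, _ = f_classif(X[:, keep], y)     # relevance ranking
        best = np.argsort(f_scores)[::-1][:top_k]
        return [keep[i] for i in best]

    # idx = select_features(X_train, y_train)
    # clf = XGBClassifier().fit(X_train[:, idx], y_train)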
MIL normalization -- prerequisites for accurate MRI radiomics analysis
Hu, Z.
Zhuang, Q.
Xiao, Y.
Wu, G.
Shi, Z.
Chen, L.
Wang, Y.
Yu, J.
Comput Biol Med2021Journal Article, cited 0 times
Website
BraTS-TCGA-LGG
Deep Learning
Radiomics
Magnetic Resonance Imaging (MRI)
The quality of magnetic resonance (MR) images obtained with different instruments and imaging parameters varies greatly. A large number of heterogeneous images are collected, and they suffer from acquisition variation. Such imaging quality differences have a great impact on radiomics analysis. The main differences in MR images include modality mismatch (M), intensity distribution variance (I), and layer-spacing differences (L), which are referred to as MIL differences in this paper for convenience. An MIL normalization system is proposed to reconstruct uneven MR images into high-quality data with complete modality, a uniform intensity distribution and consistent layer spacing. Three radiomics tasks, including tumor segmentation, pathological grading and genetic diagnosis of glioma, were used to verify the effect of MIL normalization on radiomics analysis. Three retrospective glioma datasets were analyzed in this study: BraTS (285 cases), TCGA (112 cases) and HuaShan (403 cases). MIL normalization included three components: multimodal synthesis based on an encoder-decoder network, intensity normalization based on CycleGAN, and layer-spacing unification based on Statistical Parametric Mapping (SPM). The Dice similarity coefficient, areas under the curve (AUC) and six other indicators were calculated and compared after different normalization steps. The MIL normalization system improved the Dice coefficient of segmentation by 9% (P < .001), the AUC of pathological grading by 32% (P < .001), and the AUC of IDH1 status prediction by 25% (P < .001) when compared to non-normalization. The proposed MIL normalization system provides high-quality standardized data, which is a prerequisite for accurate radiomics analysis.
Performance optimisation of deep learning models using majority voting algorithm for brain tumour classification
Tandel, G. S.
Tiwari, A.
Kakde, O. G.
Comput Biol Med2021Journal Article, cited 0 times
Website
REMBRANDT
Computer Aided Diagnosis (CADx)
Convolutional Neural Network (CNN)
Deep learning
Ensemble
Machine learning
Magnetic Resonance Imaging (MRI)
Majority voting
Transfer learning
BACKGROUND: Although biopsy is the gold standard for tumour grading, being invasive, this procedure can also prove fatal to the brain. Thus, non-invasive methods for brain tumour grading are urgently needed. Here, a magnetic resonance imaging (MRI)-based non-invasive brain tumour grading method has been proposed using deep learning (DL) and machine learning (ML) techniques. METHOD: Four clinically applicable datasets were designed. The four datasets were trained and tested on five DL-based models (convolutional neural networks), AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50, and five ML-based models, Support Vector Machine, K-Nearest Neighbours, Naive Bayes, Decision Tree, and Linear Discrimination, using five-fold cross-validation. A majority voting (MajVot)-based ensemble algorithm has been proposed to optimise the overall classification performance of the five DL-based and five ML-based models. RESULTS: The average accuracy improvement across the four datasets using the DL-based MajVot algorithm against the AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50 models was 2.02%, 1.11%, 1.04%, 2.67%, and 1.65%, respectively. Further, a 10.12% improvement was seen in the average accuracy of the four datasets using the DL method against ML. Furthermore, the proposed DL-based MajVot algorithm was validated on synthetic face data and improved male versus female face image classification accuracy by 2.88%, 0.71%, 1.90%, 2.24%, and 0.35% against AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50, respectively. CONCLUSION: The proposed MajVot algorithm achieved promising results for brain tumour classification and is able to utilise the combined potential of multiple models.
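The majority-voting step itself is simple enough to sketch: hard class predictions from the trained models are combined by per-sample plurality. Tie-breaking (lowest label wins via argmax) is an implementation choice, not necessarily the authors'.

    # Hedged sketch: per-sample plurality vote over model predictions.
    import numpy as np

    def majority_vote(predictions):
        """predictions: list of (n_samples,) integer label arrays."""
        votes = np.stack(predictions)              # (n_models, n_samples)
        n_classes = int(votes.max()) + 1
        counts = np.apply_along_axis(
            lambda col: np.bincount(col, minlength=n_classes), 0, votes)
        return counts.argmax(axis=0)               # most frequent label wins

    # y_ens = majority_vote([m.predict(X_test) for m in trained_models])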
Privacy preserving distributed learning classifiers - Sequential learning with small sets of data
Zerka, F.
Urovi, V.
Bottari, F.
Leijenaar, R. T. H.
Walsh, S.
Gabrani-Juma, H.
Gueuning, M.
Vaidyanathan, A.
Vos, W.
Occhipinti, M.
Woodruff, H. C.
Dumontier, M.
Lambin, P.
Comput Biol Med2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
Machine Learning
Distributed learning
Medical data privacy
Rare disease
Sequential learning
BACKGROUND: Artificial intelligence (AI) typically requires a significant amount of high-quality data to build reliable models, where gathering enough data within a single institution can be particularly challenging. In this study we investigated the impact of using sequential learning to exploit very small, siloed sets of clinical and imaging data to train AI models. Furthermore, we evaluated the capacity of such models to achieve equivalent performance when compared to models trained with the same data over a single centralized database. METHODS: We propose a privacy-preserving distributed learning framework, learning sequentially from each dataset. The framework is applied to three machine learning algorithms: Logistic Regression, Support Vector Machines (SVM), and Perceptron. The models were evaluated using four open-source datasets (Breast cancer, Indian liver, NSCLC-Radiomics dataset, and Stage III NSCLC). FINDINGS: The proposed framework ensured a comparable predictive performance against a centralized learning approach. Pairwise DeLong tests showed no significant difference between the compared pairs for each dataset. INTERPRETATION: Distributed learning contributes to preserving medical data privacy. We foresee this technology will increase the number of collaborative opportunities to develop robust AI, becoming the default solution in scenarios where collecting enough data from a single reliable source is logistically impossible. Distributed sequential learning provides privacy-preserving means for institutions with small but clinically valuable datasets to collaboratively train predictive AI while preserving the privacy of their patients. Such models perform similarly to models that are built on a larger central dataset.
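A minimal sketch of the sequential-learning pattern, assuming scikit-learn's incremental API: the model object travels between institutions and is updated locally with partial_fit, so model parameters, never raw patient records, cross institutional boundaries. The site variables are placeholders.

    # Hedged sketch: sequential learning across data silos via partial_fit.
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier(loss="log_loss")  # logistic regression trained by SGD
    classes = [0, 1]                        # binary outcome (assumption)

    for X_local, y_local in [(X_siteA, y_siteA), (X_siteB, y_siteB)]:
        # each call runs inside one institution; only `model` is shared
        model.partial_fit(X_local, y_local, classes=classes)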
Comput Biol Med2021Journal Article, cited 0 times
Website
NSCLC-Radiomics-Genomics
Computed Tomography (CT)
Histopathology
Non-Small Cell Lung Cancer (NSCLC)
Radiomics
PyRadiomics
Wavelet
OBJECTIVE: The aim of this study was to identify the most important features and assess their discriminative power in the classification of the subtypes of NSCLC. METHODS: This study involved 354 pathologically proven NSCLC patients including 134 squamous cell carcinoma (SCC), 110 large cell carcinoma (LCC), 62 not otherwise specified (NOS), and 48 adenocarcinoma (ADC). In total, 1433 radiomics features were extracted from 3D volumes of interest drawn on the malignant lesion identified on CT images. A wrapper algorithm and multivariate adaptive regression splines were implemented to identify the most relevant/discriminative features. A multivariable multinomial logistic regression was employed with 1000 bootstrapping samples based on the selected features to classify the four main subtypes of NSCLC. RESULTS: The results revealed that the texture features, specifically gray level size zone matrix (GLSZM) features, were the significant indicators of NSCLC subtypes. The optimized classifier achieved an average precision, recall, F1-score, and accuracy of 0.710, 0.703, 0.706, and 0.865, respectively, based on the features selected by the wrapper algorithm. CONCLUSIONS: Our CT radiomics approach demonstrated impressive potential for the classification of the four main histological subtypes of NSCLC. It is anticipated that CT radiomics could be useful in treatment planning and precision medicine.
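The modelling step, a multivariable multinomial logistic regression evaluated over bootstrap replicates of the selected features, might look roughly as follows; X_sel, y, and the naive whole-set evaluation are placeholders and assumptions.

    # Hedged sketch: multinomial logistic regression with 1000 bootstrap fits.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.utils import resample

    accs = []
    for b in range(1000):                                # 1000 replicates
        Xb, yb = resample(X_sel, y, random_state=b)      # bootstrap sample
        clf = LogisticRegression(max_iter=1000).fit(Xb, yb)
        accs.append(clf.score(X_sel, y))                 # naive evaluation
    print(np.mean(accs))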
Comput Biol Med2021Journal Article, cited 1 times
Website
LIDC-IDRI
Algorithm Development
LUNG
Computed Tomography (CT)
*Generative adversarial network
Segmentation
Machine Learning
Lung nodule segmentation is an exciting area of research for the effective detection of lung cancer. One of the significant challenges in detecting lung cancer is accuracy, which is affected by visual deviations and heterogeneity in the lung nodules. Hence, to improve the accuracy of the segmentation process, a Salp Shuffled Shepherd Optimization Algorithm-based Generative Adversarial Network (SSSOA-based GAN) model is developed in this research for lung nodule segmentation. The SSSOA is a hybrid optimization algorithm developed by integrating the Salp Swarm Algorithm (SSA) and the shuffled shepherd optimization algorithm (SSOA). The artefacts in the input Computed Tomography (CT) image are removed by pre-processing with a Gaussian filter. The pre-processed image is subjected to lung lobe segmentation, which is done with the help of deep joint segmentation for segmenting the appropriate regions. The lung nodule segmentation is performed using the GAN, which is trained using the SSSOA to effectively segment the lung nodule from the lung lobe image. Metrics such as the Dice coefficient, accuracy, and Jaccard similarity are used to evaluate the performance. The developed SSSOA-based GAN method obtained a maximum accuracy of 0.9387, a maximum Dice coefficient of 0.7986, and a maximum Jaccard similarity of 0.8026 compared with existing lung nodule segmentation methods.
Improving the fidelity of CT image colorization based on pseudo-intensity model and tumor metabolism enhancement
Zhang, Z.
Jiang, H.
Liu, J.
Shi, T.
Comput Biol Med2021Journal Article, cited 0 times
Website
Soft-Tissue Sarcoma
LUNG
Computed Tomography (CT)
Conditional generative adversarial network (cGAN)
Deep learning
PET/CT
BACKGROUND: Subject to the principle of imaging, most medical images are gray-scale images. Human eyes are more sensitive to color images than to gray-scale images. The state-of-the-art medical image colorization results are unnatural and unrealistic, especially in some organs, such as the lung field. METHOD: We propose a CT image colorization network that consists of a pseudo-intensity model, tumor metabolic enhancement, and a MemoPainter-cGAN colorization network. First, the distributions of both the density of CT images and the intensity of anatomical images are analyzed with the aim of building a pseudo-intensity model. Then, the PET images, which are sensitive to tumor metabolism, are used to highlight the tumor regions. Finally, the MemoPainter-cGAN is used to generate colorized anatomical images. RESULTS: Our experiment verified that the mean structural similarity between the colorized images and the original color images is 0.995, which indicates that the colorized images largely preserve the features of the original images. The average image information entropy is 6.62, which is 13.4% higher than that of the images before metabolism enhancement and colorization, indicating that the image fidelity is significantly improved. CONCLUSIONS: Our method can generate vivid anatomical images based on prior knowledge of tissue or organ intensity. The colorized PET/CT images, with abundant anatomical knowledge and high sensitivity to metabolic information, provide radiologists with access to a new modality that offers additional reference information.
S2FLNet: Hepatic steatosis detection network with body shape
Wang, Q.
Xue, W.
Zhang, X.
Jin, F.
Hahn, J.
Comput Biol Med2021Journal Article, cited 0 times
Website
CT Lymph Nodes
Center loss
Dilated residual network
Hepatic steatosis
LIVER
Fat accumulation in the liver cells can increase the risk of cardiac complications and cardiovascular disease mortality. Therefore, a way to quickly and accurately detect hepatic steatosis is critically important. However, current methods, e.g., liver biopsy, magnetic resonance imaging, and computerized tomography scan, are subject to high cost and/or medical complications. In this paper, we propose a deep neural network to estimate the degree of hepatic steatosis (low, mid, high) using only body shapes. The proposed network adopts dilated residual network blocks to extract refined features of input body shape maps by expanding the receptive field. Furthermore, to classify the degree of steatosis more accurately, we create a hybrid of the center loss and cross entropy loss to compact intra-class variations and separate inter-class differences. We performed extensive tests on the public medical dataset with various network parameters. Our experimental results show that the proposed network achieves a total accuracy of over 82% and offers an accurate and accessible assessment for hepatic steatosis.
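The hybrid objective named above (center loss plus cross-entropy) can be sketched as follows; this simplified center loss learns the class centers jointly with the network rather than via the moving-average update of the original center-loss paper, and the weighting factor is an assumption.

    # Hedged sketch: cross-entropy + center loss hybrid objective.
    import torch
    import torch.nn as nn

    class CenterLoss(nn.Module):
        def __init__(self, num_classes, feat_dim):
            super().__init__()
            self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

        def forward(self, feats, labels):
            # squared distance of each feature to its own class center
            return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

    # total_loss = ce(logits, y) + lambda_c * center_loss(feats, y)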
A U-Net Ensemble for breast lesion segmentation in DCE MRI
Khaled, R
Vidal, Joel
Vilanova, Joan C
Martí, Robert
Computers in Biology and Medicine2022Journal Article, cited 0 times
Website
TCGA-BRCA
U-Net
Breast cancer
Segmentation
Dce-mri
Deep learning
Segmentation of male pelvic organs on computed tomography with a deep neural network fine-tuned by a level-set method
Almeida, Gonçalo
Figueira, Ana Rita
Lencart, Joana
Tavares, João Manuel R S
Computers in Biology and Medicine2021Journal Article, cited 0 times
PROSTATEx
Computed Tomography (CT) imaging is used in Radiation Therapy planning, where the treatment is carefully tailored to each patient in order to maximize radiation dose to the target while decreasing adverse effects to nearby healthy tissues. A crucial step in this process is manual organ contouring, which if performed automatically could considerably decrease the time to starting treatment and improve outcomes. Computerized segmentation of male pelvic organs has been studied for decades and deep learning models have brought considerable advances to the field, but improvements are still demanded. A two-step framework for automatic segmentation of the prostate, bladder and rectum is presented: a convolutional neural network enhanced with attention gates performs an initial segmentation, followed by a region-based active contour model to fine-tune the segmentations to each patient's specific anatomy. The framework was evaluated on a large collection of planning CTs of patients who had Radiation Therapy for prostate cancer. Comparing the proposed framework with the baseline convolutional neural network, the Surface Dice Coefficient improved from 79.41% to 81.00% on segmentation of the prostate, from 94.03% to 95.36% on the bladder, and from 82.17% to 83.68% on the rectum. This study shows that traditional image segmentation algorithms can help improve the immense gains that deep learning models have brought to the medical imaging segmentation field.
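The two-step idea (CNN mask, then region-based active-contour refinement) could be sketched with scikit-image's morphological Chan-Vese, a region-based active contour, initialized from the CNN output; ct_slice, cnn_mask, and the iteration and smoothing settings are placeholders, and the paper's exact level-set formulation is not reproduced.

    # Hedged sketch: refine a CNN mask with a region-based active contour.
    from skimage.segmentation import morphological_chan_vese

    refined = morphological_chan_vese(
        ct_slice,                 # 2D CT slice as a float image (placeholder)
        50,                       # number of refinement iterations (assumption)
        init_level_set=cnn_mask,  # CNN prediction used as initialization
        smoothing=2,              # contour regularization strength (assumption)
    )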
Human-level comparable control volume mapping with a deep unsupervised-learning model for image-guided radiation therapy
Liang, X.
Bassenne, M.
Hristov, D. H.
Islam, M. T.
Zhao, W.
Jia, M.
Zhang, Z.
Gensheimer, M.
Beadle, B.
Le, Q.
Xing, L.
Comput Biol Med2022Journal Article, cited 0 times
Website
QIN-HEADNECK
HNSCC-3DCT-RT
Head and neck
Image registration
Image-guided radiation therapy
Patient positioning
Unsupervised learning
PURPOSE: To develop a deep unsupervised learning method with control volume (CV) mapping from patient positioning daily CT (dCT) to planning computed tomography (pCT) for precise patient positioning. METHODS: We propose an unsupervised learning framework, which maps CVs from dCT to pCT to automatically generate the couch shifts, including translation and rotation dimensions. The network inputs are the dCT, the pCT and the CV positions in the pCT. The output is the transformation parameter of the dCT used to set up the head and neck cancer (HNC) patients. The network is trained to maximize image similarity between the CV in the pCT and the CV in the dCT. A total of 554 CT scans from 158 HNC patients were used for the evaluation of the proposed model. Each patient had multiple CT scans acquired at different points in time. Couch shifts are calculated for the testing by averaging the translation and rotation from the CVs. The ground-truth shifts come from bone landmarks determined by an experienced radiation oncologist. RESULTS: The systematic positioning errors of translation and rotation are less than 0.47 mm and 0.17 degrees, respectively. The random positioning errors of translation and rotation are less than 1.13 mm and 0.29 degrees, respectively. The proposed method enhanced the proportion of cases registered within a preset tolerance (2.0 mm/1.0 degrees) from 66.67% to 90.91% as compared to standard registrations. CONCLUSIONS: We proposed a deep unsupervised learning architecture for patient positioning with inclusion of CV mapping, which weights the CV regions differently to mitigate any potential adverse influence of image artifacts on the registration. Our experimental results show that the proposed method achieved efficient and effective HNC patient positioning.
AIR-Net: A novel multi-task learning method with auxiliary image reconstruction for predicting EGFR mutation status on CT images of NSCLC patients
Gui, D.
Song, Q.
Song, B.
Li, H.
Wang, M.
Min, X.
Li, A.
Comput Biol Med2022Journal Article, cited 0 times
Website
NSCLC Radiogenomics
Auxiliary image reconstruction
EGFR mutation status prediction
Multi-task learning
Non-small cell lung cancer
LUNG
Automated and accurate EGFR mutation status prediction using computed tomography (CT) imagery is of great value for tailoring optimal treatments to non-small cell lung cancer (NSCLC) patients. However, existing deep learning based methods usually adopt a single-task learning strategy to design and train EGFR mutation status prediction models with limited training data, which may be insufficient to learn distinguishable representations for promoting prediction performance. In this paper, a novel multi-task learning method named AIR-Net is proposed to precisely predict EGFR mutation status on CT images. First, an auxiliary image reconstruction task is effectively integrated with EGFR mutation status prediction, aiming at providing extra supervision during the training phase. Particularly, we adequately employ multi-level information in a shared encoder to generate more comprehensive representations of tumors. Second, a powerful feature consistency loss is further introduced to constrain semantic consistency of original and reconstructed images, which contributes to enhanced image reconstruction and offers more effective regularization to AIR-Net during training. Performance analysis of AIR-Net indicates that auxiliary image reconstruction plays an essential role in identifying EGFR mutation status. Furthermore, extensive experimental results demonstrate that our method achieves favorable performance against other competitive prediction methods. All the results obtained in this study suggest the effectiveness and superiority of AIR-Net in precisely predicting the EGFR mutation status of NSCLC.
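A hedged sketch of the training objective as the abstract describes it: the classification loss plus an auxiliary reconstruction loss and a feature-consistency term between original and reconstructed images. The helper name, the loss weights, and reusing the shared encoder for the consistency term are assumptions.

    # Hedged sketch: multi-task loss with auxiliary image reconstruction.
    import torch.nn.functional as F

    def air_style_loss(logits, y, recon, image, encoder, w_rec=1.0, w_con=0.1):
        cls = F.cross_entropy(logits, y)                  # EGFR status prediction
        rec = F.mse_loss(recon, image)                    # auxiliary reconstruction
        con = F.mse_loss(encoder(recon), encoder(image))  # feature consistency
        return cls + w_rec * rec + w_con * con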
A 2.5D convolutional neural network for HPV prediction in advanced oropharyngeal cancer
La Greca Saint-Esteven, A.
Bogowicz, M.
Konukoglu, E.
Riesterer, O.
Balermpas, P.
Guckenberger, M.
Tanadini-Lang, S.
van Timmeren, J. E.
Comput Biol Med2022Journal Article, cited 0 times
Website
OPC-Radiomics
Head-Neck-Radiomics-HN1
HNSCC
Computed Tomography (CT)
Convolutional Neural Network (CNN)
Deep Learning
Human papilloma virus
Oropharyngeal cancer
HEADNECK
BACKGROUND: Infection with human papilloma virus (HPV) is one of the most relevant prognostic factors in advanced oropharyngeal cancer (OPC) treatment. In this study we aimed to assess the diagnostic accuracy of a deep learning-based method for HPV status prediction in computed tomography (CT) images of advanced OPC. METHOD: An internal dataset and three public collections were employed (internal: n = 151; HNC1: n = 451; HNC2: n = 80; HNC3: n = 110). The internal and HNC1 datasets were used for training, whereas the HNC2 and HNC3 collections were used as external test cohorts. All CT scans were resampled to a 2 mm³ resolution and a sub-volume of 72x72x72 pixels was cropped on each scan, centered around the tumor. Then, a 2.5D input of size 72x72x3 pixels was assembled by selecting the 2D slice containing the largest tumor area along the axial, sagittal and coronal planes, respectively. The convolutional neural network employed consisted of the first 5 modules of the Xception model and a small classification network. Ten-fold cross-validation was applied to evaluate training performance. At test time, soft majority voting was used to predict HPV status. RESULTS: A final training mean [range] area under the curve (AUC) of 0.84 [0.76-0.89], accuracy of 0.76 [0.64-0.83] and F1-score of 0.74 [0.62-0.83] were achieved. AUC/accuracy/F1-score values of 0.83/0.75/0.69 and 0.88/0.79/0.68 were achieved on the HNC2 and HNC3 test sets, respectively. CONCLUSION: Deep learning was successfully applied and validated in two external cohorts to predict HPV status in CT images of advanced OPC, proving its potential as a support tool in cancer precision medicine.
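The 2.5D input assembly is concrete enough to sketch directly: within the 72x72x72 crop, the slice with the largest tumor area is selected along each of the three anatomical planes and the three slices are stacked as channels. The availability of a tumor mask for measuring slice areas is assumed.

    # Sketch: assemble a 72x72x3 input from a 72x72x72 tumor-centered crop.
    import numpy as np

    def assemble_25d(crop, mask):
        """crop, mask: (72, 72, 72) arrays; returns a (72, 72, 3) array."""
        slices = []
        for axis in range(3):  # axial, sagittal, coronal
            other = tuple(a for a in range(3) if a != axis)
            idx = int(np.argmax(mask.sum(axis=other)))  # largest tumor area
            slices.append(np.take(crop, idx, axis=axis))
        return np.stack(slices, axis=-1)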
Impact of feature harmonization on radiogenomics analysis: Prediction of EGFR and KRAS mutations from non-small cell lung cancer PET/CT images
Shiri, I.
Amini, M.
Nazari, M.
Hajianfar, G.
Haddadi Avval, A.
Abdollahi, H.
Oveisi, M.
Arabi, H.
Rahmim, A.
Zaidi, H.
Comput Biol Med2022Journal Article, cited 19 times
Website
OBJECTIVE: To investigate the impact of harmonization on the performance of CT, PET, and fused PET/CT radiomic features toward the prediction of mutation status, for epidermal growth factor receptor (EGFR) and Kirsten rat sarcoma viral oncogene (KRAS) genes in non-small cell lung cancer (NSCLC) patients. METHODS: Radiomic features were extracted from tumors delineated on CT, PET, and wavelet fused PET/CT images obtained from 136 histologically proven NSCLC patients. Univariate and multivariate predictive models were developed using radiomic features before and after ComBat harmonization to predict EGFR and KRAS mutation statuses. Multivariate models were built using minimum redundancy maximum relevance feature selection and a random forest classifier. We utilized a 70%/30% patient-level split for training/testing, respectively, and repeated the procedure 10 times. The area under the receiver operator characteristic curve (AUC), accuracy, sensitivity, and specificity were used to assess model performance. The performance of the models (univariate and multivariate) before and after ComBat harmonization was compared using statistical analyses. RESULTS: While the performance of most features in univariate modeling was significantly improved for EGFR prediction, most features did not show any significant difference in performance after harmonization in KRAS prediction. Average AUCs of all multivariate predictive models for both EGFR and KRAS were significantly improved (q-value < 0.05) following ComBat harmonization. The mean ranges of AUCs increased following harmonization from 0.87-0.90 to 0.92-0.94 for EGFR, and from 0.85-0.90 to 0.91-0.94 for KRAS. The highest performance was achieved by the harmonized F_R0.66_W0.75 model, with AUCs of 0.94 and 0.93 for EGFR and KRAS, respectively. CONCLUSION: Our results demonstrated that, regarding univariate modelling, while ComBat harmonization had a generally better impact on features for EGFR than for KRAS status prediction, its effect is feature-dependent; hence, no systematic effect was observed. Regarding the multivariate models, ComBat harmonization significantly improved the performance of all radiomics models toward more successful prediction of EGFR and KRAS mutation statuses in lung cancer patients. Thus, by eliminating the batch effect in multi-centric radiomic feature sets, harmonization is a promising tool for developing robust and reproducible radiomics using vast and variant datasets.
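As a heavily simplified illustration of what ComBat-style harmonization does, the sketch below aligns each batch's per-feature location and scale to the pooled statistics; real ComBat additionally applies empirical-Bayes shrinkage and preserves biological covariates, both of which this sketch omits.

    # Hedged sketch: naive location-scale harmonization across batches.
    import numpy as np

    def naive_harmonize(X, batch):
        """X: (n_samples, n_features); batch: (n_samples,) batch labels."""
        Xh = X.astype(float).copy()
        grand_mu, grand_sd = X.mean(axis=0), X.std(axis=0) + 1e-8
        for b in np.unique(batch):
            rows = batch == b
            mu, sd = X[rows].mean(axis=0), X[rows].std(axis=0) + 1e-8
            Xh[rows] = (X[rows] - mu) / sd * grand_sd + grand_mu
        return Xh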
LeuFeatx: Deep learning-based feature extractor for the diagnosis of acute leukemia from microscopic images of peripheral blood smear
Rastogi, P.
Khanna, K.
Singh, V.
Comput Biol Med2022Journal Article, cited 2 times
Website
AML-Cytomorphology_LMU
C-NMC 2019
*Deep Learning
Pathomics
Leukocytes
Image classification
*Feature extraction
*Image classification
*Peripheral blood smear
The abnormal growth of leukocytes causes hematologic malignancies such as leukemia. The clinical assessment methods for the diagnosis of the disease are labor-intensive and time-consuming. Image-based automated diagnostic systems can be of great help in the decision-making process for leukemia detection. A feature-dependent, intrinsic, reliable classifier is a critical component in building such a diagnostic system. However, the identification of vital and relevant features is a challenging task in the classification workflow. The proposed work presents a novel two-step methodology for the robust classification of leukocytes for leukemia diagnosis by building a VGG16-adapted fine-tuned feature-extractor model, termed "LeuFeatx," which plays a critical role in the accurate classification of leukocytes. LeuFeatx was found to be capable of extracting notable leukocyte features from microscopic single-cell leukocyte images. The filters and learned features are visualized and compared with the base VGG16 model's features. Independent classification experiments using three public benchmark leukocyte datasets were conducted to assess the effectiveness of the features extracted with the proposed LeuFeatx model. Multiclass classifiers trained using LeuFeatx deep features achieved higher precision and sensitivity for seven leukocyte subtypes compared to the latest research on the AML Morphological dataset, and achieved higher sensitivity for all cell types vis-a-vis recent work on the peripheral blood cells dataset from the Hospital Clinic of Barcelona. In a binary classification experiment using the ALL_IDB2 dataset, classifiers trained using LeuFeatx deep features achieved an accuracy of 96.15%, which is better than the other state-of-the-art methods reported in the literature. Thus, the higher performance of the classifiers across the observed comparison metrics establishes the relevance of the extracted features and the overall robustness of the proposed model.
ALNett: A cluster layer deep convolutional neural network for acute lymphoblastic leukemia classification
Jawahar, M.
H, S.
L, J. A.
Gandomi, A. H.
Comput Biol Med2022Journal Article, cited 0 times
Website
C-NMC 2019
Computer Aided Diagnosis (CADx)
Convolutional Neural Network (CNN)
Deep learning
Leukemia
Transfer learning
Acute Lymphoblastic Leukemia (ALL) is a cancer in which the bone marrow overproduces undeveloped lymphocytes. Over 6500 cases of ALL are diagnosed every year in the United States in both adults and children, accounting for around 25% of pediatric cancers, and the trend continues to rise. With the advancements of AI and big data analytics, early diagnosis of ALL can be used to aid the clinical decisions of physicians and radiologists. This research proposes a deep neural network-based model (ALNett) that employs depth-wise convolution with different dilation rates to classify microscopic white blood cell images. Specifically, the cluster layers encompass convolution and max-pooling followed by a normalization process that provides enriched structural and contextual details to extract robust local and global features from the microscopic images for the accurate prediction of ALL. The performance of the model was compared with various pre-trained models, including VGG16, ResNet-50, GoogleNet, and AlexNet, based on precision, recall, accuracy, F1 score, loss accuracy, and receiver operating characteristic (ROC) curves. Experimental results showed that the proposed ALNett model yielded the highest classification accuracy of 91.13% and an F1 score of 0.96 with less computational complexity. ALNett demonstrated promising ALL categorization and outperformed the other pre-trained models.
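The core ingredient named in the abstract, depth-wise convolution with different dilation rates, can be sketched as a small PyTorch module; the parallel-branch layout, channel counts, and 1x1 fusion are assumptions, and the full ALNett cluster layer (with max-pooling and normalization) is not reproduced.

    # Hedged sketch: depth-wise convolutions at multiple dilation rates.
    import torch
    import torch.nn as nn

    class MultiDilationDW(nn.Module):
        def __init__(self, channels, dilations=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d,
                          groups=channels)  # groups=channels => depth-wise
                for d in dilations
            ])
            self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

        def forward(self, x):
            return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))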
Towards computational solutions for precision medicine based big data healthcare system using deep learning models: A review
Thirunavukarasu, Ramkumar
C, George Priya Doss
R, Gnanasambandan
Gopikrishnan, Mohanraj
Palanisamy, Venketesh
Computers in Biology and Medicine2022Journal Article, cited 0 times
LIDC-IDRI
The emergence of large-scale human genome projects, advances in DNA sequencing technologies, and the massive volume of electronic medical records (EMR) are shifting healthcare research into the next paradigm, namely 'Precision Medicine.' This new clinical system model uses patients' genomic profiles and disparate healthcare data sources to a greater extent and provides personalized deliverables. As an advanced analytical technique, deep learning models significantly impact precision medicine because they can process voluminous amounts of diversified data with improved accuracy. Two salient features of deep learning models, namely processing a massive volume of multi-modal data at multiple levels of abstraction and the ability to identify inherent features from the input data on their own, motivate the application of deep learning techniques in precision medicine research. The proposed review highlights the importance of deep learning-based analytical models in handling diversified and disparate big data sources of precision medicine. To augment further, state-of-the-art precision medicine research based on a taxonomy of deep learning models has been reviewed along with its research outcomes. The diversified data inputs used in research attempts, their applications, benchmarking data repositories, and the usage of various evaluation measures for accuracy estimation are highlighted in this review. This review also brings out some promising analytical avenues of precision medicine research that give directions for future exploration.
Breast cancer detection using deep learning: Datasets, methods, and challenges ahead
Din, Nusrat Mohi Ud
Dar, Rayees Ahmad
Rasool, Muzafar
Assad, Assif
Computers in Biology and Medicine2022Journal Article, cited 0 times
RIDER Breast MRI
Breast Cancer (BC) is the most commonly diagnosed cancer and the second leading cause of mortality among women. About 1 in 8 US women (about 13%) will develop invasive BC during their lifetime. Early detection of this life-threatening disease not only increases the survival rate but also reduces the treatment cost. Fortunately, advancements in radiographic imaging like "Mammograms", "Computed Tomography (CT)", "Magnetic Resonance Imaging (MRI)", "3D Mammography", and "Histopathological Imaging (HI)" have made it feasible to diagnose this disease at an early stage. However, the analysis of radiographic images and histopathological images is done by experienced radiologists and pathologists, respectively. The process is not only costly but also error-prone. Over the last ten years, Computer Vision and Machine Learning (ML) have transformed the world in every way possible. Deep learning (DL), a subfield of ML, has shown outstanding results in a variety of fields, particularly in the biomedical industry, because of its ability to handle large amounts of data. DL techniques automatically extract features by analyzing high dimensional and correlated data efficiently. The potential and ability of DL models have also been utilized and evaluated in the identification and prognosis of BC, utilizing radiographic and histopathological images, and have performed admirably. However, AI has shown promising results in retrospective studies only; external validation is needed to translate these cutting-edge AI tools into clinical decision-making. The main aim of this research work is to present a critical analysis of the research and findings already done to detect and classify BC using various imaging modalities including "Mammography", "Histopathology", "Ultrasound", "PET/CT", "MRI", and "Thermography". At first, a detailed review of the past research papers using Machine Learning, Deep Learning and Deep Reinforcement Learning for BC classification and detection is carried out. We also review the publicly available datasets for the above-mentioned imaging modalities to make future research more accessible. Finally, a critical discussion section has been included to elaborate on open research difficulties and prospects for future study in this emerging area, demonstrating the limitations of Deep Learning approaches.
A hierarchical machine learning model based on Glioblastoma patients' clinical, biomedical, and image data to analyze their treatment plans
Ershadi, Mohammad Mahdi
Rise, Zeinab Rahimi
Niaki, Seyed Taghi Akhavan
Computers in Biology and Medicine2022Journal Article, cited 0 times
TCGA-GBM
Glioblastoma
Machine Learning
Deep Learning
AIM OF STUDY: Glioblastoma Multiforme (GBM) is an aggressive brain cancer in adults that kills most patients in the first year due to ineffective treatment. Different clinical, biomedical, and image data features are needed to analyze GBM, increasing complexities. Besides, they lead to weak performances for machine learning models due to ignoring physicians' knowledge. Therefore, this paper proposes a hierarchical model based on Fuzzy C-mean (FCM) clustering, Wrapper feature selection, and twelve classifiers to analyze treatment plans.
METHODOLOGY/APPROACH: The proposed method finds the effectiveness of previous and current treatment plans, hierarchically determining the best decision for future treatment plans for GBM patients using clinical data, biomedical data, and different image data. A case study is presented based on the Cancer Genome Atlas Glioblastoma Multiforme dataset to prove the effectiveness of the proposed model. This dataset is analyzed using data preprocessing, experts' knowledge, and a feature reduction method based on the Principal Component Analysis. Then, the FCM clustering method is utilized to reinforce classifier learning.
OUTCOMES OF STUDY: The proposed model finds the best combination of Wrapper feature selection and classifier for each cluster based on different measures, including accuracy, sensitivity, specificity, precision, F-score, and G-mean according to a hierarchical structure. It has the best performance among other reinforced classifiers. Besides, this model is compatible with real-world medical processes for GBM patients based on clinical, biomedical, and image data.
A novel Parametric Flatten-p Mish activation function based deep CNN model for brain tumor classification
Mondal, Ayan
Shrivastava, Vimal K
Computers in Biology and Medicine2022Journal Article, cited 0 times
Brain-Tumor-Progression
The brain tumor is one of the deadliest of all cancers. Influenced by the recent developments of convolutional neural networks (CNNs) in medical imaging, we have formed a CNN-based model called BMRI-Net for brain tumor classification. As the activation function is one of the important modules of a CNN, we have proposed a novel parametric activation function named Parametric Flatten-p Mish (PFpM) to improve performance. PFpM can tackle the significant disadvantages of pre-existing activation functions, like neuron death and the bias shift effect. The parametric approach of PFpM also offers the model some extra flexibility to learn complex patterns more accurately from the data. To validate our proposed methodology, we have used two brain tumor datasets, namely Figshare and Br35H. We have compared the performance of our model with state-of-the-art deep CNN models like DenseNet201, InceptionV3, MobileNetV2, ResNet50 and VGG19. Further, the comparative performance of PFpM has been presented against various activation functions like ReLU, Leaky ReLU, GELU, Swish and Mish. We have performed record-wise and subject-wise (patient-level) experiments for the Figshare dataset, whereas only record-wise experiments have been performed for the Br35H dataset due to the unavailability of subject-wise information. Further, the model has been validated using hold-out and 5-fold cross-validation techniques. On the Figshare dataset, our model achieved 99.57% overall accuracy with hold-out validation and 98.45% overall accuracy with 5-fold cross validation for the record-wise data split, and 97.91% overall accuracy with hold-out validation and 97.26% overall accuracy with 5-fold cross validation for the subject-wise data split. Similarly, for the Br35H dataset, our model attained 99% overall accuracy with hold-out validation and 98.33% overall accuracy with 5-fold cross validation using the record-wise data split. Hence, our findings can introduce a secondary procedure in the clinical diagnosis of brain tumors.
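The abstract does not give PFpM's closed form, so the sketch below only illustrates the general idea of a parametric, Mish-style activation with a parameter learned alongside the network weights; it is not the authors' actual function.

    # Hedged sketch: a Mish variant with a learnable scale parameter p.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ParametricMish(nn.Module):
        def __init__(self, p_init=1.0):
            super().__init__()
            self.p = nn.Parameter(torch.tensor(p_init))  # trained with the net

        def forward(self, x):
            return self.p * x * torch.tanh(F.softplus(x))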
Detecting liver cirrhosis in computed tomography scans using clinically-inspired and radiomic features
Kotowski, K.
Kucharski, D.
Machura, B.
Adamski, S.
Gutierrez Becker, B.
Krason, A.
Zarudzki, L.
Tessier, J.
Nalepa, J.
Comput Biol Med2023Journal Article, cited 1 times
Website
HCC-TACE-Seg
Humans
Reproducibility of Results
*Tomography, X-Ray Computed/methods
*Liver Cirrhosis/diagnostic imaging
Abdomen
Retrospective Studies
Computed Tomography (CT)
Radiomic features
Liver cirrhosis
Machine learning
Hepatic cirrhosis is an increasing cause of mortality in developed countries; it is the pathological sequela of chronic liver diseases, and the final liver fibrosis stage. Since cirrhosis evolves from the asymptomatic phase, it is of paramount importance to detect it as quickly as possible, because entering the symptomatic phase commonly leads to hospitalization and can be fatal. Understanding the state of the liver based on abdominal computed tomography (CT) scans is tedious, user-dependent and lacks reproducibility. We tackle these issues and propose an end-to-end and reproducible approach for detecting cirrhosis from CT. It benefits from the introduced clinically-inspired features that reflect the patient's characteristics which are often investigated by experienced radiologists during the screening process. Such features are coupled with the radiomic ones extracted from the liver, and from the suggested region of interest which captures the liver's boundary. The rigorous experiments, performed over two heterogeneous clinical datasets (two cohorts of 241 and 32 patients), revealed that extracting radiomic features from the liver's rectified contour is pivotal to enhance the classification abilities of the supervised learners. Also, capturing clinically-inspired image features significantly improved the performance of such models, and the proposed features were consistently selected as the important ones. Finally, we showed that selecting the most discriminative features leads to Pareto-optimal models with enhanced feature-level interpretability, as the number of features was dramatically reduced (280x) from thousands to tens.
MRLA-Net: A tumor segmentation network embedded with a multiple receptive-field lesion attention module in PET-CT images
Zhou, Y.
Jiang, H.
Diao, Z.
Tong, G.
Luan, Q.
Li, Y.
Li, X.
Comput Biol Med2023Journal Article, cited 0 times
Soft Tissue Sarcoma
Humans
*Positron Emission Tomography Computed Tomography
PET/CT
HECKTOR Challenge
*Liver Neoplasms
Image Processing, Computer-Assisted
Attention module
Multi-modal learning
Tumor segmentation
Tumor image segmentation is an important basis for doctors to diagnose disease and formulate treatment plans. PET-CT is an extremely important technology for assessing the systemic status of disease, owing to the complementary advantages of its two modalities' information. However, current PET-CT tumor segmentation methods generally focus on the fusion of PET and CT features, and the fusion of features can weaken the characteristics of each modality itself. Therefore, enhancing the modal features of the lesions can produce optimized feature sets, which is extremely necessary to improve the segmentation results. This paper proposes an attention module that integrates the PET-CT diagnostic visual field and the modality characteristics of the lesion: the multiple receptive-field lesion attention module. It makes full use of spatial-domain, frequency-domain, and channel attention, combining a large receptive-field lesion localization module and a small receptive-field lesion enhancement module, which together constitute the multiple receptive-field lesion attention module. In addition, a network embedded with the multiple receptive-field lesion attention module is proposed for tumor segmentation. Experiments were conducted on a private liver tumor dataset as well as two publicly available datasets, the soft tissue sarcoma dataset and the head and neck tumor segmentation dataset. The experimental results showed that the proposed method achieves excellent performance on multiple datasets and a significant improvement over DenseUNet: the tumor segmentation results on the above three PET/CT datasets improved by 7.25%, 6.5%, and 5.29% in Dice per case, respectively. Compared with the latest PET-CT liver tumor segmentation research, the proposed method improves by 8.32%.
AATSN: Anatomy Aware Tumor Segmentation Network for PET-CT volumes and images using a lightweight fusion-attention mechanism
Ahmad, I.
Xia, Y.
Cui, H.
Islam, Z. U.
Comput Biol Med2023Journal Article, cited 1 times
Website
Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) provides metabolic information, while Computed Tomography (CT) provides the anatomical context of the tumors. Combined PET-CT segmentation helps in computer-assisted tumor diagnosis, staging, and treatment planning. Current state-of-the-art models mainly rely on early or late fusion techniques. These methods, however, rarely learn PET-CT complementary features and cannot efficiently correlate anatomical and metabolic features. These drawbacks can be removed by intermediate fusion; however, it produces inaccurate segmentations in the case of heterogeneous textures in the modalities, and it requires massive computation. In this work, we propose AATSN (Anatomy Aware Tumor Segmentation Network), which extracts anatomical CT features and then intermediately fuses them with PET features through a fusion-attention mechanism. Our anatomy-aware fusion-attention mechanism fuses selectively useful CT and PET features instead of the full feature set. This not only improves the network's performance but also requires fewer resources. Furthermore, our model is scalable to 2D images and 3D volumes. The proposed model is rigorously trained, tested, evaluated, and compared to the state-of-the-art through several ablation studies on the largest available datasets. We achieved a 0.8104 Dice score and a 2.11 median HD95 score in the 3D setup, and a 0.6756 Dice score in the 2D setup. We demonstrate that AATSN achieves a significant performance gain while remaining lightweight compared to state-of-the-art methods. The implications of AATSN include improved tumor delineation for diagnosis, analysis, and radiotherapy treatment.
Radiomics-based survival risk stratification of glioblastoma is associated with different genome alteration
Xu, P. F.
Li, C.
Chen, Y. S.
Li, D. P.
Xi, S. Y.
Chen, F. R.
Li, X.
Chen, Z. P.
Comput Biol Med2023Journal Article, cited 0 times
Website
Ivy GAP
IBSI
Glioblastoma
Magnetic Resonance Imaging (MRI)
Prognosis
Radiomics
Unsupervised learning
Radiogenomics
BRAIN
BACKGROUND: Glioblastoma (GBM) is a remarkably heterogeneous tumor with few non-invasive, repeatable, and cost-effective prognostic biomarkers reported. In this study, we aim to explore the association between radiomic features and prognosis and genomic alterations in GBM. METHODS: A total of 180 GBM patients (training cohort: n = 119; validation cohort 1: n = 37; validation cohort 2: n = 24) were enrolled and underwent preoperative MRI scans. From the multiparametric (T1, T1-Gd, T2, and T2-FLAIR) MR images, the radscore was developed to predict overall survival (OS) in a multistep postprocessing workflow and validated in two external validation cohorts. The prognostic accuracy of the radscore was assessed with the concordance index (C-index) and Brier scores. Furthermore, we used hierarchical clustering and enrichment analysis to explore the association between image features and genomic alterations. RESULTS: The MRI-based radscore was significantly correlated with OS in the training cohort (C-index: 0.70), validation cohort 1 (C-index: 0.66), and validation cohort 2 (C-index: 0.74). Multivariate analysis revealed that the radscore was an independent prognostic factor. Cluster analysis and enrichment analysis revealed two distinct phenotypic clusters involved in distinct biological processes and pathways, including the VEGFA-VEGFR2 signaling pathway (q-value = 0.033), the JAK-STAT signaling pathway (q-value = 0.049), and regulation of the MAPK cascade (q-value = 0.0015/0.025). CONCLUSIONS: Radiomic features and radiomics-derived radscores provided important phenotypic and prognostic information with great potential for risk stratification in GBM.
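The concordance index used to assess the radscore can be computed with lifelines, one common implementation; os_months, radscore, and death_observed are hypothetical arrays, and the negation assumes a higher radscore means higher risk (shorter survival).

    # Hedged sketch: C-index of a radscore against overall survival.
    from lifelines.utils import concordance_index

    c = concordance_index(event_times=os_months,
                          predicted_scores=-radscore,  # higher score = higher risk
                          event_observed=death_observed)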
A deep learning-based cancer survival time classifier for small datasets
Shakir, Hina
Aijaz, Bushra
Khan, Tariq Mairaj Rasool
Hussain, Muhammad
Computers in Biology and Medicine2023Journal Article, cited 0 times
Website
NSCLC-Radiomics
Algorithm Development
Radiomic features
Lung cancer survival
Machine learning
Medical image processing
Cancer survival time prediction using Deep Learning (DL) has been an emerging area of research. However, the non-availability of large annotated medical imaging databases affects the training performance of DL models, leading to their arguable usage in many clinical applications. In this research work, a neural network model is customized for a small sample space to avoid data over-fitting during DL training. A set of prognostic radiomic features is selected through an iterative process using the average of multiple dropouts, which results in back-propagated gradients with low variance, thus increasing the network's learning capability, enabling reliable feature selection, and improving training over a small database. The proposed classifier is further compared with the erasing feature selection method proposed in the literature for improved network training, and with other well-known classifiers, on a small sample size. The achieved results, which were statistically validated, show efficient and improved classification of cancer survival time into three intervals (up to 6 months, between 6 months and 2 years, and above 2 years), and the approach has the potential to aid health care professionals in lung tumor evaluation for timely treatment and patient care.
CDDnet: Cross-domain denoising network for low-dose CT image via local and global information alignment
Huang, Jiaxin
Chen, Kecheng
Ren, Yazhou
Sun, Jiayu
Wang, Yanmei
Tao, Tao
Pu, Xiaorong
Computers in Biology and Medicine2023Journal Article, cited 0 times
Website
LDCT-and-Projection-data
LIDC-IDRI
Image denoising
Algorithm Development
Deep Learning
Computed Tomography (CT)
LUNG
Low-dose CT
encoder-decoder
The domain shift problem has emerged as a challenge in cross-domain low-dose CT (LDCT) image denoising task, where the acquisition of a sufficient number of medical images from multiple sources may be constrained by privacy concerns. In this study, we propose a novel cross-domain denoising network (CDDnet) that incorporates both local and global information of CT images. To address the local component, a local information alignment module has been proposed to regularize the similarity between extracted target and source features from selected patches. To align the general information of the semantic structure from a global perspective, an autoencoder is adopted to learn the latent correlation between the source label and the estimated target label generated by the pre-trained denoiser. Experimental results demonstrate that our proposed CDDnet effectively alleviates the domain shift problem, outperforming other deep learning-based and domain adaptation-based methods under cross-domain scenarios.
BOSS: Bones, organs and skin shape model
Shetty, Karthik
Birkhold, Annette
Jaganathan, Srikrishna
Strobel, Norbert
Egger, Bernhard
Kowarschik, Markus
Maier, Andreas
Computers in Biology and Medicine2023Journal Article, cited 0 times
ACRIN-NSCLC-FDG-PET
QIN-HEADNECK
A virtual anatomical model of a patient can be a valuable tool for enhancing clinical tasks such as workflow automation, patient-specific X-ray dose optimization, markerless tracking, positioning, and navigation assistance in image-guided interventions. For these tasks, it is highly desirable that the patient's surface and internal organs are of high quality for any pose and shape estimate. At present, the majority of statistical shape models (SSMs) are restricted to a small number of organs or bones or do not adequately represent the general population. To address this, we propose a deformable human shape and pose model that combines skin, internal organs, and bones, learned from CT images. By modeling the statistical variations in a pose-normalized space using probabilistic PCA while also preserving joint kinematics, our approach offers a holistic representation of the body that can be beneficial for automation in various medical applications. In an interventional setup, our model could, for example, facilitate automatic system/patient positioning, organ-specific iso-centering, automated collimation or collision prediction. We assessed our model's performance on a registered dataset, utilizing the unified shape space, and noted an average error of 3.6 mm for bones and 8.8 mm for organs. By utilizing solely skin surface data or patient metadata like height and weight, we find that the overall combined error for bone-organ measurement is 8.68 mm and 8.11 mm, respectively. To further verify our findings, we conducted additional tests on publicly available datasets with multi-part segmentations, which confirmed the effectiveness of our model. In the diverse TotalSegmentator dataset, the errors for bones and organs are observed to be 5.10mm and 8.72mm, respectively. Our work shows that anatomically parameterized statistical shape models can be created accurately and in a computationally efficient manner. The proposed approach enables the construction of shape models that can be directly integrated into various medical applications.
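A minimal sketch of the shape-model construction, assuming the meshes are already registered and pose-normalized: ordinary PCA (standing in for the probabilistic PCA used in the paper) over flattened vertex coordinates gives a low-dimensional generative shape space.

    # Hedged sketch: PCA shape space over registered, pose-normalized meshes.
    import numpy as np
    from sklearn.decomposition import PCA

    # shapes: (n_subjects, n_vertices * 3) flattened vertex arrays (placeholder)
    pca = PCA(n_components=20).fit(shapes)

    def synthesize(coeffs):
        """Generate a new body shape from low-dimensional coefficients."""
        return pca.mean_ + coeffs @ pca.components_[: len(coeffs)]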
Dual-stream EfficientNet with adversarial sample augmentation for COVID-19 computer aided diagnosis
Xu, Weijie
Nie, Lina
Chen, Beijing
Ding, Weiping
Computers in Biology and Medicine2023Journal Article, cited 0 times
COVID-19-NY-SBU
Computer Aided Diagnosis (CADx)
COVID-19
Computed Tomography (CT)
Deep Learning
Adversarial training
Though a series of computer aided measures have been taken for the rapid and definite diagnosis of 2019 coronavirus disease (COVID-19), they generally fail to achieve high enough accuracy, including the recently popular deep learning-based methods. The main reasons are that: (a) they generally focus on improving the model structures while ignoring important information contained in the medical image itself; (b) the existing small-scale datasets have difficulty in meeting the training requirements of deep learning. In this paper, a dual-stream network based on the EfficientNet is proposed for the COVID-19 diagnosis based on CT scans. The dual-stream network takes into account the important information in both spatial and frequency domains of CT scans. Besides, Adversarial Propagation (AdvProp) technology is used to address the insufficient training data usually faced by the deep learning-based computer aided diagnosis and also the overfitting issue. Feature Pyramid Network (FPN) is utilized to fuse the dual-stream features. Experimental results on the public dataset COVIDx CT-2A demonstrate that the proposed method outperforms the existing 12 deep learning-based methods for COVID-19 diagnosis, achieving an accuracy of 0.9870 for multi-class classification, and 0.9958 for binary classification. The source code is available at https://github.com/imagecbj/covid-efficientnet.
Leveraging different learning styles for improved knowledge distillation in biomedical imaging
Niyaz, Usma
Sambyal, Abhishek Singh
Bathula, Deepti R.
Computers in Biology and Medicine2024Journal Article, cited 0 times
TCGA-LGG
Model
Mutual information
Transfer learning
Classification
Segmentation
Learning style refers to a type of training mechanism adopted by an individual to gain new knowledge. As suggested by the VARK model, humans have different learning preferences, like Visual (V), Auditory (A), Read/Write (R), and Kinesthetic (K), for acquiring and effectively processing information. Our work endeavors to leverage this concept of knowledge diversification to improve the performance of model compression techniques like Knowledge Distillation (KD) and Mutual Learning (ML). Consequently, we use a single-teacher and two-student network in a unified framework that not only allows for the transfer of knowledge from teacher to students (KD) but also encourages collaborative learning between students (ML). Unlike the conventional approach, where the teacher shares the same knowledge in the form of predictions or feature representations with the student network, our proposed approach employs a more diversified strategy by training one student with predictions and the other with feature maps from the teacher. We further extend this knowledge diversification by facilitating the exchange of predictions and feature maps between the two student networks, enriching their learning experiences. We have conducted comprehensive experiments with three benchmark datasets for both classification and segmentation tasks using two different network architecture combinations. These experimental results demonstrate that knowledge diversification in a combined KD and ML framework outperforms conventional KD or ML techniques (with similar network configuration) that only use predictions with an average improvement of 2%. Furthermore, consistent improvement in performance across different tasks, with various network architectures, and over state-of-the-art techniques establishes the robustness and generalizability of the proposed model.
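A hedged sketch of the diversified distillation signals: one student matches the teacher's temperature-softened predictions (standard KD), the other matches the teacher's feature maps; the student-to-student exchange terms and all loss weights are omitted or assumed.

    # Hedged sketch: prediction-based and feature-based distillation losses.
    import torch.nn.functional as F

    def kd_losses(t_logits, t_feats, s1_logits, s2_feats, T=4.0):
        kd = F.kl_div(F.log_softmax(s1_logits / T, dim=1),
                      F.softmax(t_logits / T, dim=1),
                      reduction="batchmean") * T * T   # student 1: predictions
        feat = F.mse_loss(s2_feats, t_feats)           # student 2: feature maps
        return kd, feat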
Joint denoising and interpolating network for low-dose cone-beam CT reconstruction under hybrid dose-reduction strategy
Chao, L.
Wang, Y.
Zhang, T.
Shan, W.
Zhang, H.
Wang, Z.
Li, Q.
Comput Biol Med2023Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Denoising
Image reconstruction
Interpolation
Low-dose CT
Cone-beam computed tomography (CBCT) is generally reconstructed from hundreds of two-dimensional X-ray projections through the FDK algorithm, and its excessive ionizing X-ray radiation may impair patients' health. Two common dose-reduction strategies are to either lower the intensity of the X-ray, i.e., low-intensity CBCT, or reduce the number of projections, i.e., sparse-view CBCT. Existing efforts improve the low-dose CBCT images only under a single dose-reduction strategy. In this paper, we argue that applying the two strategies simultaneously can reduce dose in a gentle manner and avoid the extreme degradation of the projection data seen under a single dose-reduction strategy, especially in ultra-low-dose situations. Therefore, we develop a Joint Denoising and Interpolating Network (JDINet) in the projection domain to improve CBCT quality with hybrid low-intensity and sparse-view projections. Specifically, JDINet mainly includes two important components, i.e., a denoising module and an interpolating module, to respectively suppress the noise caused by the low-intensity strategy and interpolate the missing projections caused by the sparse-view strategy. Because FDK actually utilizes the projection information after ramp-filtering, we develop a filtered structural similarity constraint to help JDINet focus on the reconstruction-required information. Afterward, we employ a Postprocessing Network (PostNet) in the reconstruction domain to refine the CBCT images that are reconstructed with denoised and interpolated projections. In general, a complete CBCT reconstruction framework is built with JDINet, FDK, and PostNet. Experiments demonstrate that our framework decreases RMSE by approximately 8%, 15%, and 17%, respectively, on the 1/8, 1/16, and 1/32 dose data, compared to the latest methods. In conclusion, our learning-based framework can be deeply embedded into CBCT systems to promote the development of CBCT. Source code is available at https://github.com/LianyingChao/FusionLowDoseCBCT.
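The ramp-filtering step behind the filtered structural similarity constraint can be sketched as follows, assuming NumPy; the SSIM computation and the network losses themselves are omitted.

```python
import numpy as np

def ramp_filter(projection: np.ndarray) -> np.ndarray:
    """Apply a ramp (|f|) filter along the detector-row axis in frequency space,
    as FDK does before backprojection."""
    n = projection.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(projection, axis=-1) * ramp, axis=-1))

proj = np.random.rand(128, 256)   # stand-in projection
filtered = ramp_filter(proj)      # compare similarity on this, not on raw data
```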
Comput Biol Med2024Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Humans
*Tomography
X-Ray Computed/methods
Neural Networks
Computer
Convolutional operations
Distillation
Image quality assessment
Low-dose computed tomography
No-reference quality assessment
Self-attention mechanism
Vision transformers
No-reference image quality assessment (IQA) is a critical step in medical image analysis, with the objective of predicting perceptual image quality without the need for a pristine reference image. The application of no-reference IQA to CT scans is valuable in providing an automated and objective approach to assessing scan quality, optimizing radiation dose, and improving overall healthcare efficiency. In this paper, we introduce DistilIQA, a novel distilled Vision Transformer network designed for no-reference CT image quality assessment. DistilIQA integrates convolutional operations and multi-head self-attention mechanisms by incorporating a powerful convolutional stem at the beginning of the traditional ViT network. Additionally, we present a two-step distillation methodology aimed at improving network performance and efficiency. In the initial step, a "teacher ensemble network" is constructed by training five Vision Transformer networks using a five-fold division schema. In the second step, a "student network", comprising a single Vision Transformer, is trained using the original labeled dataset and the predictions generated by the teacher network as new labels. DistilIQA is evaluated in the task of quality score prediction from low-dose chest CT scans obtained from the LDCT and Projection data of the Cancer Imaging Archive, along with low-dose abdominal CT images from the LDCTIQAC2023 Grand Challenge. Our results demonstrate DistilIQA's remarkable performance in both benchmarks, surpassing the capabilities of various CNNs and Transformer architectures. Moreover, our comprehensive experimental analysis demonstrates the effectiveness of incorporating convolutional operations within the ViT architecture and highlights the advantages of our distillation methodology.
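A minimal sketch of the two-step distillation recipe, assuming PyTorch and scikit-learn; `make_model`, the synthetic data, and the 50/50 label blend are stand-ins for the paper's convolutional-stem ViT and its exact labeling scheme.

```python
import torch
import torch.nn as nn
from sklearn.model_selection import KFold

X = torch.randn(100, 64)   # stand-in features for CT slices
y = torch.rand(100, 1)     # quality scores in [0, 1]

def make_model():
    # Stand-in for the convolutional-stem ViT used in the paper.
    return nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

def fit(model, X, y, epochs=10):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()

# Step 1: a five-model "teacher ensemble" trained on a 5-fold division.
teachers = []
for tr, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    m = make_model()
    fit(m, X[tr], y[tr])
    teachers.append(m)

# Step 2: the student learns from original labels plus ensemble predictions.
with torch.no_grad():
    soft = torch.stack([t(X) for t in teachers]).mean(0)
student = make_model()
fit(student, X, 0.5 * y + 0.5 * soft)  # the label blend is an assumption
```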
Depth estimation from monocular endoscopy using simulation and image transfer approach
Jeong, B. H.
Kim, H. K.
Son, Y. D.
Comput Biol Med2024Journal Article, cited 0 times
Website
CT COLONOGRAPHY
Deep learning
Depth estimation
Endoscopy
Generative Adversarial Network (GAN)
Simulation-to-real transfer
Obtaining accurate distance or depth information in endoscopy is crucial for the effective utilization of navigation systems. However, due to space constraints, incorporating depth cameras into endoscopic systems is often impractical. Our goal is to estimate depth images directly from endoscopic images using deep learning. This study presents a three-step methodology for training a depth-estimation network model. Initially, simulated endoscopy images and corresponding depth maps are generated using Unity based on a colon surface model obtained from segmented computed tomography colonography data. Subsequently, a cycle generative adversarial network model is employed to enhance the realism of the simulated endoscopy images. Finally, a deep learning model is trained using the synthesized endoscopy images and depth maps to estimate depths accurately. The performance of the proposed approach is evaluated and compared against prior studies utilizing unsupervised training methods. The results demonstrate the superior precision of the proposed technique in estimating depth images within endoscopy. The proposed depth estimation method holds promise for advancing the field by enabling enhanced navigation, improved lesion marking capabilities, and ultimately leading to better clinical outcomes.
Dual-consistency guidance semi-supervised medical image segmentation with low-level detail feature augmentation
Wang, B.
Ju, M.
Zhang, X.
Yang, Y.
Tian, X.
Comput Biol Med2024Journal Article, cited 0 times
Website
TCGA-LGG
LIDC-IDRI
BraTS 2019
Deep learning
Dual-consistency
Feature enhancement
Medical image segmentation
Semi-supervised learning
In deep-learning-based medical image segmentation tasks, semi-supervised learning can greatly reduce the dependence of the model on labeled data. However, existing semi-supervised medical image segmentation methods face the challenges of object boundary ambiguity and a small amount of available data, which limit the application of segmentation models in clinical practice. To solve these problems, we propose a novel semi-supervised medical image segmentation network based on dual-consistency guidance, which can extract reliable semantic information from unlabeled data over a large spatial and dimensional range in a simple and effective manner. This serves to improve the contribution of unlabeled data to the model accuracy. Specifically, we construct a split weak and strong consistency constraint strategy to capture data-level and feature-level consistencies from unlabeled data to improve the learning efficiency of the model. Furthermore, we design a simple multi-scale low-level detail feature enhancement module to improve the extraction of low-level detail contextual information, which is crucial to accurately locate object contours and avoid omitting small objects in semi-supervised medical image dense prediction tasks. Quantitative and qualitative evaluations on six challenging datasets demonstrate that our model outperforms other semi-supervised segmentation models in terms of segmentation accuracy and presents advantages in terms of generalizability. Code is available at https://github.com/0Jmyy0/SSMIS-DC.
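A minimal sketch of a split weak/strong consistency constraint on unlabeled images, assuming PyTorch; the flip/noise augmentations, the stand-in network returning (logits, features), and the unit loss weights are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_losses(model, x_unlabeled):
    x_weak = torch.flip(x_unlabeled, dims=[-1])                    # weak: flip
    x_strong = x_unlabeled + 0.1 * torch.randn_like(x_unlabeled)   # strong: noise
    logits_w, feat_w = model(x_weak)
    logits_s, feat_s = model(x_strong)
    # Data-level consistency: weak-branch pseudo-labels supervise the strong branch.
    pseudo = torch.argmax(logits_w.detach(), dim=1)
    pseudo = torch.flip(pseudo, dims=[-1])  # un-flip so the views align spatially
    loss_data = F.cross_entropy(logits_s, pseudo)
    # Feature-level consistency between the two views' embeddings.
    loss_feat = F.mse_loss(feat_s, torch.flip(feat_w, dims=[-1]).detach())
    return loss_data + loss_feat

class TinySeg(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.body = torch.nn.Conv2d(1, 8, 3, padding=1)
        self.head = torch.nn.Conv2d(8, 2, 1)
    def forward(self, x):
        f = self.body(x)
        return self.head(f), f

loss = consistency_losses(TinySeg(), torch.randn(2, 1, 32, 32))
```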
radMLBench: A dataset collection for benchmarking in radiomics
Demircioglu, A.
Comput Biol Med2024Journal Article, cited 0 times
Website
C4KC-KiTS
Colorectal-Liver-Metastases
HCC-TACE-Seg
HNSCC
Head-Neck-PET-CT
HEAD-NECK-RADIOMICS-HN1
ISPY1
LGG-1p19qDeletion
Meningioma-SEG-CLASS
NSCLC Radiogenomics
PI-CAI
Prostate-MRI-US-Biopsy
QIN-HEADNECK
UCSF-PDGM
UPENN-GBM
RSNA-ASNR-MICCAI-BraTS-2021
BraTS 2021
Benchmarking
High-dimensional datasets
Machine learning
Methodology
Radiomics
BACKGROUND: New machine learning methods and techniques are frequently introduced in radiomics, but they are often tested on a single dataset, which makes it challenging to assess their true benefit. Currently, there is a lack of a larger, publicly accessible dataset collection on which such assessments could be performed. In this study, a collection of radiomics datasets with binary outcomes in tabular form was curated to allow benchmarking of machine learning methods and techniques. METHODS: A variety of journals and online sources were searched to identify tabular radiomics data with binary outcomes, which were then compiled into a homogeneous data collection that is easily accessible via Python. To illustrate the utility of the dataset collection, it was applied to investigate whether feature decorrelation prior to feature selection could improve predictive performance in a radiomics pipeline. RESULTS: A total of 50 radiomic datasets were collected, with sample sizes ranging from 51 to 969 and 101 to 11165 features. Using this data, it was observed that decorrelating features did not yield any significant improvement on average. CONCLUSIONS: A large collection of datasets, easily accessible via Python, suitable for benchmarking and evaluating new machine learning techniques and methods was curated. Its utility was exemplified by demonstrating that feature decorrelation prior to feature selection does not, on average, lead to significant performance gains and could be omitted, thereby increasing the robustness and reliability of the radiomics pipeline.
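The decorrelation-before-selection experiment can be sketched as follows, assuming tabular radiomics features in a pandas DataFrame; the 0.95 correlation threshold and the univariate selector are illustrative choices, not the study's exact protocol.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif

def decorrelate(X: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    """Drop one feature from every pair whose absolute correlation exceeds threshold."""
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    drop = [c for c in upper.columns if (upper[c] > threshold).any()]
    return X.drop(columns=drop)

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(100, 50)),
                 columns=[f"feat_{i}" for i in range(50)])
y = rng.integers(0, 2, size=100)

X_dec = decorrelate(X)                                   # decorrelation first
X_sel = SelectKBest(f_classif, k=10).fit_transform(X_dec, y)  # then selection
```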
Leveraging deep transfer learning and explainable AI for accurate COVID-19 diagnosis: Insights from a multi-national chest CT scan study
Pham, N. T.
Ko, J.
Shah, M.
Rakkiyappan, R.
Woo, H. G.
Manavalan, B.
Comput Biol Med2024Journal Article, cited 0 times
Website
CT Images in COVID-19
LIDC-IDRI
COVID-19-NY-SBU
COVID-19 detection
Computed Tomography (CT)
Convolutional Neural Network (CNN)
Deep transfer learning
Explainable artificial intelligence
Hyperparameter optimization
The COVID-19 pandemic has emerged as a global health crisis, impacting millions worldwide. Although chest computed tomography (CT) scan images are pivotal in diagnosing COVID-19, their manual interpretation by radiologists is time-consuming and potentially subjective. Automated computer-aided diagnostic (CAD) frameworks offer efficient and objective solutions. However, machine or deep learning methods often face challenges in their reproducibility due to underlying biases and methodological flaws. To address these issues, we propose XCT-COVID, an explainable, transferable, and reproducible CAD framework based on deep transfer learning to predict COVID-19 infection from CT scan images accurately. This is the first study to develop three distinct models within a unified framework by leveraging a previously unexplored large dataset and two widely used smaller datasets. We employed five known convolutional neural network architectures, both with and without pretrained weights, on the larger dataset. We optimized hyperparameters through extensive grid search and 5-fold cross-validation (CV), significantly enhancing the model performance. Experimental results from the larger dataset showed that the VGG16 architecture (XCT-COVID-L) with pretrained weights consistently outperformed other architectures, achieving the best performance, on both 5-fold CV and independent test. When evaluated with the external datasets, XCT-COVID-L performed well with data with similar distributions, demonstrating its transferability. However, its performance significantly decreased on smaller datasets with lower-quality images. To address this, we developed other models, XCT-COVID-S1 and XCT-COVID-S2, specifically for the smaller datasets, outperforming existing methods. Moreover, eXplainable Artificial Intelligence (XAI) analyses were employed to interpret the models' functionalities. For prediction and reproducibility purposes, the implementation of XCT-COVID is publicly accessible at https://github.com/cbbl-skku-org/XCT-COVID/.
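A minimal transfer-learning sketch in the spirit of XCT-COVID-L, assuming PyTorch/torchvision; freezing the convolutional features and the two-class head are assumptions, and the paper's grid search and 5-fold CV are not reproduced.

```python
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)  # pretrained backbone
for p in model.features.parameters():
    p.requires_grad = False                         # freeze conv features (assumption)
model.classifier[6] = nn.Linear(4096, 2)            # COVID vs. non-COVID head
```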
Brain tumors detection and segmentation in MR images: Gabor wavelet vs. statistical features
Nabizadeh, Nooshin
Kubat, Miroslav
Computers & Electrical Engineering2015Journal Article, cited 85 times
Website
BRAIN
Magnetic Resonance Imaging (MRI)
Automated recognition of brain tumors in magnetic resonance images (MRI) is a difficult procedure owing to the variability and complexity of the location, size, shape, and texture of these lesions. Because of intensity similarities between brain lesions and normal tissues, some approaches make use of multi-spectral anatomical MRI scans. However, the time and cost restrictions for collecting multi-spectral MRI scans, and some other difficulties, necessitate developing an approach that can detect tumor tissues using single-spectral anatomical MRI images. In this paper, we present a fully automatic system, which is able to detect slices that include tumor and to delineate the tumor area. The experimental results on a single contrast mechanism demonstrate the efficacy of our proposed technique in successfully segmenting brain tumor tissues with high accuracy and low computational complexity. Moreover, we include a study evaluating the efficacy of statistical features over Gabor wavelet features using several classifiers. This contribution fills a gap in the literature, as it is the first to compare these sets of features for tumor segmentation applications.
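The two feature families being compared can be sketched as follows, assuming scikit-image and SciPy; the filter-bank frequencies and orientations are illustrative.

```python
import numpy as np
from scipy import stats
from skimage.filters import gabor

def gabor_features(img, frequencies=(0.1, 0.2), thetas=(0, np.pi / 4, np.pi / 2)):
    """Mean and variance of Gabor filter responses over a small filter bank."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, _ = gabor(img, frequency=f, theta=t)
            feats += [real.mean(), real.var()]
    return np.array(feats)

def statistical_features(img):
    """First-order intensity statistics of the same region."""
    v = img.ravel()
    return np.array([v.mean(), v.var(), stats.skew(v), stats.kurtosis(v)])

img = np.random.rand(64, 64)  # stand-in for an MRI slice
print(gabor_features(img).shape, statistical_features(img).shape)
```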
EfficientNet and multi-path convolution with multi-head attention network for brain tumor grade classification
Isunuri, B. Venkateswarlu
Kakarla, Jagadeesh
Computers and Electrical Engineering2023Journal Article, cited 0 times
Website
REMBRANDT
Brain-Tumor-Progression
BRAIN
Classification
Convolutional Neural Network (CNN)
Transfer learning
Grade classification is a challenging task in brain tumor image classification. Contemporary models employ transfer learning techniques to attain better performance. Existing models have ignored the semantic features of a tumor during classification decisions. Moreover, contemporary research requires an optimized model to exhibit better performance on larger datasets. Thus, we propose an EfficientNet and multi-path convolution with multi-head attention network for grade classification. We use a pre-trained EfficientNetB4 in the feature extraction phase. Then, a multi-path convolution with multi-head attention network performs feature enhancement. Finally, the features obtained from the above step are classified using a fully connected double dense network. We utilize TCIA repository datasets to generate a three-class (normal/low-grade/high-grade) classification dataset. Our model achieves 98.35% accuracy and a 97.32% Jaccard coefficient. The proposed model achieves superior performance compared to its competing models in all key metrics. Further, we achieve similar performance on a noisy dataset.
Automatic rectum limit detection by anatomical markers correlation
Namías, R
D’Amato, JP
Del Fresno, M
Vénere, M
Computerized Medical Imaging and Graphics2014Journal Article, cited 1 times
Website
CT Colonography
Several diseases take place at the end of the digestive system. Many of them can be diagnosed by means of different medical imaging modalities together with computer-aided detection (CAD) systems. These CAD systems mainly focus on the complete segmentation of the digestive tube. However, the detection of limits between different sections could provide important information to these systems. In this paper we present an automatic method for detecting the rectum and sigmoid colon limit using a novel global curvature analysis over the centerline of the segmented digestive tube in different imaging modalities. The results are compared with the gold-standard rectum upper limit through a validation scheme comprising two different anatomical markers: the third sacral vertebra and the average rectum length. Experimental results in both magnetic resonance imaging (MRI) and computed tomography colonography (CTC) acquisitions show the efficacy of the proposed strategy in automatic detection of rectum limits. The method is intended for application to rectum segmentation in MRI for geometrical modeling and as a contextual information source in virtual colonoscopies and CAD systems.
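A minimal sketch of curvature analysis along a centerline, assuming NumPy; the helix is a stand-in polyline and the paper's anatomical-marker validation is not reproduced.

```python
import numpy as np

def curvature(centerline: np.ndarray) -> np.ndarray:
    """Pointwise curvature kappa = |r' x r''| / |r'|^3 of an (N, 3) polyline,
    using finite differences for the derivatives."""
    d1 = np.gradient(centerline, axis=0)
    d2 = np.gradient(d1, axis=0)
    cross = np.cross(d1, d2)
    return np.linalg.norm(cross, axis=1) / np.linalg.norm(d1, axis=1) ** 3

t = np.linspace(0, 4 * np.pi, 200)
line = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)  # helix test curve
kappa = curvature(line)
limit_idx = int(np.argmax(kappa))  # candidate section limit at peak curvature
```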
Segmentation-free direct tumor volume and metabolic activity estimation from PET scans
Taghanaki, S. A.
Duggan, N.
Ma, H.
Hou, X.
Celler, A.
Benard, F.
Hamarneh, G.
Comput Med Imaging Graph2018Journal Article, cited 2 times
Website
QIN-HEADNECK
Algorithm development
machine learning
PET
Total lesion glycolosis
Tumor volume and metabolic activity are two robust imaging biomarkers for predicting early therapy response in (18)F-fluorodeoxyglucose (FDG) positron emission tomography (PET), which is a modality to image the distribution of radiotracers and thereby observe functional processes in the body. To date, estimation of these two biomarkers requires a lesion segmentation step. While segmentation methods requiring extensive user interaction have obvious limitations in terms of time and reproducibility, automatically estimating activity from segmentation, which involves integrating intensity values over the volume, is also suboptimal, since PET is an inherently noisy modality. Although many semi-automatic segmentation-based methods have been developed, in this paper we introduce a method which completely eliminates the segmentation step and directly estimates the volume and activity of the lesions. We trained two parallel ensemble models using locally extracted 3D patches from phantom images to estimate the activity and volume, which are derivatives of other important quantification metrics such as standardized uptake value (SUV) and total lesion glycolysis (TLG). For validation, we used 54 clinical images from the QIN Head and Neck collection on The Cancer Imaging Archive, as well as a set of 55 PET scans of the Elliptical Lung-Spine Body Phantom with different levels of noise, four different reconstruction methods, and three different background activities, namely air, water, and hot background. In the validation on phantom images, we achieved relative absolute error (RAE) of 5.11%+/-3.5% and 5.7%+/-5.25% for volume and activity estimation, respectively, which represents improvements of over 20% and 6% respectively, compared with the best competing methods. From the validation performed using clinical images, we found that the proposed method is capable of obtaining almost the same level of agreement with a group of trained experts as a single trained expert is, indicating that the method has the potential to be a useful tool in clinical practice.
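A minimal sketch of the segmentation-free idea, assuming scikit-learn: two parallel ensemble regressors map local 3D patch values directly to volume and activity; the features and targets here are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 27))    # flattened 3x3x3 PET patches (stand-in)
volume = rng.uniform(1, 50, size=500)   # target lesion volume (ml)
activity = rng.uniform(10, 500, size=500)  # target activity (kBq)

# Two parallel ensembles, one per biomarker; no segmentation step involved.
vol_model = RandomForestRegressor(n_estimators=100).fit(patches, volume)
act_model = RandomForestRegressor(n_estimators=100).fit(patches, activity)
```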
An application of cascaded 3D fully convolutional networks for medical image segmentation
Roth, Holger R
Oda, Hirohisa
Zhou, Xiangrong
Shimizu, Natsuki
Yang, Ying
Hayashi, Yuichiro
Oda, Masahiro
Fujiwara, Michitaka
Misawa, Kazunari
Mori, Kensaku
Computerized Medical Imaging and Graphics2018Journal Article, cited 0 times
Pancreas-CT
Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from the large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that will first use a 3D FCN to roughly define a candidate region, which will then be used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ∼10% and allows it to focus on more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital that includes 150 CT scans, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5 to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve a significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results.
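A minimal sketch of the coarse-to-fine cascade, assuming PyTorch; both stages are stand-in networks, and cropping the candidate bounding box replaces the paper's full ROI-definition logic.

```python
import torch
import torch.nn as nn

stage1 = nn.Conv3d(1, 2, 3, padding=1)  # stand-in for the first 3D FCN
stage2 = nn.Conv3d(1, 2, 3, padding=1)  # stand-in for the second 3D FCN

def cascade(volume):                     # volume: (1, 1, D, H, W)
    coarse = stage1(volume).argmax(1)    # rough candidate region
    nz = coarse.nonzero()
    if nz.numel() == 0:
        return coarse
    # Crop the candidate bounding box so stage 2 classifies only a fraction
    # (~10% in the paper) of the voxels.
    lo = nz.min(0).values[1:]
    hi = nz.max(0).values[1:] + 1
    crop = volume[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    return stage2(crop).argmax(1)        # fine segmentation inside the ROI

out = cascade(torch.randn(1, 1, 32, 64, 64))
```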
Computer-assisted subtyping and prognosis for non-small cell lung cancer patients with unresectable tumor
Saad, Maliazurina
Choi, Tae-Sun
Computerized Medical Imaging and Graphics2018Journal Article, cited 0 times
Website
NSCLC Radiogenomics
NSCLC-Radiomics
LUNG
Algorithm Development
Histopathology imaging features
Non-Small Cell Lung Cancer (NSCLC)
Computed Tomography (CT)
BACKGROUND: The histological classification or subtyping of non-small cell lung cancer is essential for systematic therapy decisions. Differentiating between the two main subtypes of pulmonary adenocarcinoma and squamous cell carcinoma highlights the considerable differences that exist in the prognosis of patient outcomes. Physicians rely on a pathological analysis to reveal these phenotypic variations, which requires invasive methods such as biopsy and resection sampling, but almost 70% of tumors are unresectable at the point of diagnosis. METHOD: A computational method that fuses two frameworks of computerized subtyping and prognosis was proposed, and it was validated against a publicly available dataset in The Cancer Imaging Archive that consisted of 82 curated patients with CT scans. The accuracy of the proposed method was compared with the gold standard of pathological analysis, as defined by the International Classification of Diseases for Oncology (ICD-O). A series of survival outcome test cases were evaluated using the Kaplan-Meier estimator and log-rank test (p-value) between the computational method and ICD-O. RESULTS: The computational method demonstrated high accuracy in subtyping (96.2%) and good consistency in the statistical significance of overall survival prediction for adenocarcinoma and squamous cell carcinoma patients (p<0.03) with respect to its counterpart pathological subtyping (p<0.02). The degree of reproducibility between prognoses based on computational and pathological subtyping was substantial, with an average concordance correlation coefficient (CCC) of 0.9910. CONCLUSION: The findings in this study support the idea that quantitative analysis is capable of representing tissue characteristics, as offered by a qualitative analysis.
Computer-aided classification of prostate cancer grade groups from MRI images using texture features and stacked sparse autoencoder
Abraham, Bejoy
Nair, Madhu S
Computerized Medical Imaging and Graphics2018Journal Article, cited 1 times
Website
PROSTATEx-2 2017 challenge
Classification
Machine learning to predict lung nodule biopsy method using CT image features: A pilot study
Sumathipala, Yohan
Shafiq, Majid
Bongen, Erika
Brinton, Connor
Paik, David
Computerized Medical Imaging and Graphics2019Journal Article, cited 0 times
Website
LIDC-IDRI
CT
machine learning
semantic features
Fast and Fully-Automated Detection and Segmentation of Pulmonary Nodules in Thoracic CT Scans Using Deep Convolutional Neural Networks
Huang, X.
Sun, W.
Tseng, T. B.
Li, C.
Qian, W.
Computerized Medical Imaging and Graphics2019Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
Lung
Radiomics
Segmentation
Classification
Deep learning techniques have been extensively used in computerized pulmonary nodule analysis in recent years. Many reported studies still utilized hybrid methods for diagnosis, in which convolutional neural networks (CNNs) are used only as one part of the pipeline, and the whole system still needs either traditional image processing modules or human intervention to obtain final results. In this paper, we introduced a fast and fully-automated end-to-end system that can efficiently segment precise lung nodule contours from raw thoracic CT scans. Our proposed system has four major modules: candidate nodule detection with Faster regional-CNN (R-CNN), candidate merging, false positive (FP) reduction with CNN, and nodule segmentation with a customized fully convolutional neural network (FCN). The entire system has no human interaction or database-specific design. The average runtime is about 16 s per scan on a standard workstation. The nodule detection accuracy is 91.4% and 94.6% with an average of 1 and 4 false positives (FPs) per scan, respectively. The average Dice coefficient of nodule segmentation compared to the ground truth is 0.793.
Morphological and multi-level geometrical descriptor analysis in CT and MRI volumes for automatic pancreas segmentation
Asaturyan, Hykoush
Gligorievski, Antonio
Villarini, Barbara
Computerized Medical Imaging and Graphics2019Journal Article, cited 3 times
Website
Pancreas-CT
segmentation
Automatic pancreas segmentation in 3D radiological scans is a critical, yet challenging task. As a prerequisite for computer-aided diagnosis (CADx) systems, accurate pancreas segmentation could generate both quantitative and qualitative information towards establishing the severity of a condition, and thus provide additional guidance for therapy planning. Since the pancreas is an organ of high inter-patient anatomical variability, previous segmentation approaches report lower quantitative accuracy scores in comparison to abdominal organs such as the liver or kidneys. This paper presents a novel approach for automatic pancreas segmentation in magnetic resonance imaging (MRI) and computed tomography (CT) scans. This method exploits 3D segmentation that, when coupled with geometrical and morphological characteristics of abdominal tissue, classifies distinct contours in tight pixel-range proximity as "pancreas" or "non-pancreas". There are three main stages to this approach: (1) identify a major pancreas region and apply contrast enhancement to differentiate between pancreatic and surrounding tissue; (2) perform 3D segmentation via a continuous max-flow and min-cut approach, structured forest edge detection, and a training dataset of annotated pancreata; (3) eliminate non-pancreatic contours from the resultant segmentation via morphological operations on the area, structure, and connectivity between distinct contours. The proposed method is evaluated on a dataset containing 82 CT image volumes, achieving a mean Dice similarity coefficient (DSC) of 79.3 ± 4.4%. Two MRI datasets containing 216 and 132 image volumes are evaluated, achieving mean DSCs of 79.6 ± 5.7% and 81.6 ± 5.1%, respectively. This approach is statistically stable, reflected by lower standard deviations in comparison to state-of-the-art approaches.
Combo loss: Handling input and output imbalance in multi-organ segmentation
Taghanaki, S. A.
Zheng, Y.
Kevin Zhou, S.
Georgescu, B.
Sharma, P.
Xu, D.
Comaniciu, D.
Hamarneh, G.
Comput Med Imaging Graph2019Journal Article, cited 219 times
Website
QIN-HEADNECK
PROSTATEx
Algorithm Development
Deep Learning
Electrocardiography
Humans
*Image Interpretation
Computer-Assisted
Image Processing
Computer-Assisted/*methods
Convolutional Neural Network (CNN)
Positron Emission Tomography (PET)
Tomography
X-Ray Computed
Ultrasonography
Class-imbalance
Deep convolutional neural networks
Loss function
Multi-organ segmentation
Output imbalance
Simultaneous segmentation of multiple organs from different medical imaging modalities is a crucial task as it can be utilized for computer-aided diagnosis, computer-assisted surgery, and therapy planning. Thanks to the recent advances in deep learning, several deep neural networks for medical image segmentation have been introduced successfully for this purpose. In this paper, we focus on learning a deep multi-organ segmentation network that labels voxels. In particular, we examine the critical choice of a loss function in order to handle the notorious imbalance problem that plagues both the input and output of a learning model. The input imbalance refers to the class-imbalance in the input training samples (i.e., small foreground objects embedded in an abundance of background voxels, as well as organs of varying sizes). The output imbalance refers to the imbalance between the false positives and false negatives of the inference model. In order to tackle both types of imbalance during training and inference, we introduce a new curriculum learning based loss function. Specifically, we leverage the Dice similarity coefficient to deter model parameters from being held at bad local minima and at the same time gradually learn better model parameters by penalizing for false positives/negatives using a cross entropy term. We evaluated the proposed loss function on three datasets: whole body positron emission tomography (PET) scans with 5 target organs, magnetic resonance imaging (MRI) prostate scans, and ultrasound echocardiography images with a single target organ, i.e., the left ventricle. We show that a simple network architecture with the proposed integrative loss function can outperform state-of-the-art methods and results of the competing methods can be improved when our proposed loss is used.
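A minimal binary-segmentation sketch of the Dice plus weighted cross-entropy combination described above, assuming PyTorch; the alpha/beta values are illustrative rather than the paper's tuned settings.

```python
import torch

def combo_loss(pred, target, alpha=0.5, beta=0.7, eps=1e-7):
    """pred: sigmoid probabilities in (0, 1); target: binary mask.
    beta > 0.5 penalizes false negatives more than false positives."""
    pred = pred.clamp(eps, 1 - eps)
    # Dice term deters parameters from bad local minima on imbalanced foregrounds.
    inter = (pred * target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    # beta-weighted cross entropy trades off FN vs. FP penalties.
    wce = -(beta * target * torch.log(pred)
            + (1 - beta) * (1 - target) * torch.log(1 - pred)).mean()
    return alpha * wce + (1 - alpha) * (1 - dice)

loss = combo_loss(torch.rand(2, 1, 32, 32),
                  (torch.rand(2, 1, 32, 32) > 0.5).float())
```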
One-slice CT image based kernelized radiomics model for the prediction of low/mid-grade and high-grade HNSCC
Ye, Junyong
Luo, Jin
Xu, Shengsheng
Wu, Wenli
Computerized Medical Imaging and Graphics2019Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
Machine Learning
CT
Accurate grade prediction can help select an appropriate treatment strategy and support effective diagnosis of head and neck squamous cell carcinoma (HNSCC). Radiomics has been studied for the prediction of carcinoma characteristics in medical images. The success of previous research in radiomics is attributed to the availability of annotated all-slice medical images. However, it is very challenging to annotate all slices, as annotating biomedical images is not only tedious, laborious, and time consuming, but also demands costly, specialty-oriented skills, which are not easily accessible. To address this problem, this paper presents a model that integrates radiomics and kernelized dimension reduction into a single framework, which maps handcrafted radiomics features to a kernelized space where they are linearly separable and then reduces the dimension of the features through principal component analysis. Three methods, including baseline radiomics models, the proposed kernelized model, and a convolutional neural network (CNN) model, were compared in experiments. Results suggested the proposed kernelized model best fits one-slice data. We reached an AUC of 95.91% on a self-made one-slice dataset, 67.33% in predicting locoregional recurrence on the H&N dataset, and 64.33% on the H&N1 dataset, while all other models scored below 76%, 65%, and 62%, respectively. Although the CNN model reached strong performance when predicting distant metastasis on H&N (AUC 0.88), it faced serious overfitting on small datasets. When switching from all-slice to one-slice data on both H&N and H&N1, the proposed model lost less AUC (<1.3%) than any other model (>3%). This shows that the proposed model handles the one-slice problem efficiently and makes it practical to use one-slice data to reduce annotation cost. This is attributed to several advantages of the proposed kernelized radiomics model: (1) the prior radiomics features reduce the demand for huge amounts of data and avoid overfitting; (2) the kernelized method mines latent information that contributes to prediction; (3) generating principal components from kernelized features removes redundant features.
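A minimal sketch of the kernelized dimension-reduction step, assuming scikit-learn; the RBF kernel, component count, and downstream classifier are illustrative stand-ins for the tuned pipeline.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 400))    # handcrafted radiomics features (stand-in)
y = rng.integers(0, 2, size=120)   # low/mid vs. high grade

model = make_pipeline(
    KernelPCA(n_components=20, kernel="rbf"),  # map to kernel space, then reduce
    LogisticRegression(max_iter=1000),
)
model.fit(X, y)
```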
Analyzing magnetic resonance imaging data from glioma patients using deep learning
Menze, Bjoern
Isensee, Fabian
Wiest, Roland
Wiestler, Bene
Maier-Hein, Klaus
Reyes, Mauricio
Bakas, Spyridon
Computerized Medical Imaging and Graphics2020Journal Article, cited 0 times
ACRIN-FMISO-Brain
Brain-Tumor-Progression
CPTAC-GBM
IvyGAP
The clinical use of computational tools for the quantitative analysis of images acquired in the diagnosis and treatment of patients with brain tumors has risen significantly. The underlying technology of the vast majority of these tools are machine learning methods and, in particular, deep learning algorithms. This review offers clinical background information on key diagnostic biomarkers in the diagnosis of glioma, the most common primary brain tumor. It offers an overview of publicly available resources and datasets for developing new computational tools and image biomarkers, with emphasis on those related to the Multimodal Brain Tumor Segmentation (BraTS) Challenge. We further offer an overview of the state-of-the-art methods in glioma image segmentation, again with an emphasis on publicly available tools and deep learning algorithms that emerged in the context of the BraTS challenge.
Automated MRI based pipeline for segmentation and prediction of grade, IDH mutation and 1p19q co-deletion in glioma
Decuyper, M.
Bonte, S.
Deblaere, K.
Van Holen, R.
Comput Med Imaging Graph2021Journal Article, cited 42 times
Website
TCGA-GBM
TCGA-LGG
LGG-1p19qDeletion
BraTS 2019
*Brain Neoplasms/diagnostic imaging/genetics
*Glioma/diagnostic imaging/genetics
Humans
Isocitrate dehydrogenase (IDH) mutation
Isocitrate Dehydrogenase/genetics
Magnetic Resonance Imaging (MRI)
Mutation
Retrospective Studies
Algorithm Development
Deep learning
Glioma
PyTorch
Molecular markers
Segmentation
ReLu
In the WHO glioma classification guidelines, grade (glioblastoma versus lower-grade glioma), IDH mutation, and 1p/19q co-deletion status play a central role as they are important markers for prognosis and optimal therapy planning. Currently, diagnosis requires invasive surgical procedures. Therefore, we propose an automatic segmentation and classification pipeline based on routinely acquired pre-operative MRI (T1, T1 postcontrast, T2 and/or FLAIR). A 3D U-Net was designed for segmentation and trained on the BraTS 2019 training dataset. After segmentation, the 3D tumor region of interest is extracted from the MRI and fed into a CNN to simultaneously predict grade, IDH mutation and 1p19q co-deletion. Multi-task learning made it possible to handle missing labels and to train one network on a large dataset of 628 patients, collected from The Cancer Imaging Archive and BraTS databases. Additionally, the network was validated on an independent dataset of 110 patients retrospectively acquired at the Ghent University Hospital (GUH). Segmentation performance calculated on the BraTS validation set shows an average whole tumor Dice score of 90% and increased robustness to missing image modalities by randomly excluding input MRI during training. Classification area under the curve scores are 93%, 94% and 82% on the TCIA test data and 94%, 86% and 87% on the GUH data for grade, IDH and 1p19q status respectively. We developed a fast, automatic pipeline to segment glioma and accurately predict important (molecular) markers based on pre-therapy MRI.
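Multi-task training with missing labels can be sketched as follows, assuming PyTorch; the three binary heads and the -1 "label unavailable" convention are assumptions.

```python
import torch
import torch.nn.functional as F

def multitask_loss(grade_logits, idh_logits, codel_logits, grade, idh, codel):
    """Each target uses -1 where the label is unavailable; that task's term
    is simply skipped for those samples."""
    loss = grade_logits.new_zeros(())
    for logits, target in [(grade_logits, grade),
                           (idh_logits, idh),
                           (codel_logits, codel)]:
        mask = target >= 0
        if mask.any():
            loss = loss + F.binary_cross_entropy_with_logits(
                logits[mask].squeeze(-1), target[mask].float())
    return loss

loss = multitask_loss(torch.randn(4, 1), torch.randn(4, 1), torch.randn(4, 1),
                      torch.tensor([1, 0, -1, 1]),
                      torch.tensor([-1, -1, 0, 1]),
                      torch.tensor([0, 1, 1, -1]))
```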
Computed tomography image reconstruction using stacked U-Net
Mizusawa, S.
Sei, Y.
Orihara, R.
Ohsuga, A.
Comput Med Imaging Graph2021Journal Article, cited 0 times
Website
TCGA-THCA
Algorithm Development
Since the development of deep learning methods, many researchers have focused on image quality improvement using convolutional neural networks, which have proven effective in noise reduction, single-image super-resolution, and segmentation. In this study, we apply stacked U-Net, a deep learning method, to X-ray computed tomography image reconstruction to generate high-quality images in a short time with a small number of projections. It is not easy to create highly accurate models because few medical training images are available due to patient privacy concerns. Thus, we utilize various images from ImageNet, a widely known visual database. Results show that a cross-sectional image with a peak signal-to-noise ratio of 27.93 dB and a structural similarity of 0.886 is recovered for a 512x512 image using 360-degree rotation, 512 detectors, and 64 projections, with a processing time of 0.11 s on the GPU. Therefore, the proposed method has a shorter reconstruction time and better image quality than existing methods.
Learnable image histograms-based deep radiomics for renal cell carcinoma grading and staging
Hussain, M. A.
Hamarneh, G.
Garbi, R.
Comput Med Imaging Graph2021Journal Article, cited 0 times
Website
Algorithm Development
Deep Learning
KIDNEY
Computed Tomography (CT)
Fuhrman cancer grading and tumor-node-metastasis (TNM) cancer staging systems are typically used by clinicians in the treatment planning of renal cell carcinoma (RCC), a common cancer in men and women worldwide. Pathologists typically use percutaneous renal biopsy for RCC grading, while staging is performed by volumetric medical image analysis before renal surgery. Recent studies suggest that clinicians can effectively perform these classification tasks non-invasively by analyzing image texture features of RCC from computed tomography (CT) data. However, image feature identification for RCC grading and staging often relies on laborious manual processes, which are error-prone and time-intensive. To address this challenge, this paper proposes a learnable image histogram in the deep neural network framework that can learn task-specific image histograms with variable bin centers and widths. The proposed approach enables learning statistical context features from raw medical data, which cannot be performed by a conventional convolutional neural network (CNN). The linear basis function of our learnable image histogram is piece-wise differentiable, enabling back-propagating errors to update the variable bin centers and widths during training. This novel approach can segregate the CT textures of an RCC into different intensity spectra, which enables efficient Fuhrman low (I/II) and high (III/IV) grading as well as RCC low (I/II) and high (III/IV) staging. The proposed method is validated on a clinical CT dataset of 159 patients from The Cancer Imaging Archive (TCIA) database, and it demonstrates 80% and 83% accuracy in RCC grading and staging, respectively.
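A minimal sketch of a learnable image histogram layer, assuming PyTorch: triangular (piecewise-linear) bin memberships with trainable centers and widths, so gradients can update the binning during training.

```python
import torch
import torch.nn as nn

class LearnableHistogram(nn.Module):
    def __init__(self, bins=16):
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(0, 1, bins))
        self.widths = nn.Parameter(torch.full((bins,), 1.0 / bins))

    def forward(self, x):  # x: (B, 1, H, W) intensities
        v = x.flatten(1).unsqueeze(-1)  # (B, N, 1)
        # Piecewise-linear membership: 1 at the bin center, 0 one width away.
        memb = torch.relu(
            1 - (v - self.centers).abs() / self.widths.abs().clamp(min=1e-4))
        return memb.mean(dim=1)         # (B, bins) soft histogram

hist = LearnableHistogram()(torch.rand(2, 1, 64, 64))
```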
Iterative confidence relabeling with deep ConvNets for organ segmentation with partial labels
Petit, Olivier
Thome, Nicolas
Soler, Luc
Computerized Medical Imaging and Graphics2021Journal Article, cited 0 times
CT-ORG
Training deep ConvNets requires large labeled datasets. However, collecting pixel-level labels for medical image segmentation is very expensive and requires a high level of expertise. In addition, most existing segmentation masks provided by clinical experts focus on specific anatomical structures. In this paper, we propose a method dedicated to handling such partially labeled medical image datasets. We propose a strategy to identify pixels for which labels are correct, and to train Fully Convolutional Neural Networks with a multi-label loss adapted to this context. In addition, we introduce an iterative confidence self-training approach inspired by curriculum learning to relabel missing pixel labels, which relies on selecting the most confident prediction with a specifically designed confidence network that learns an uncertainty measure which is leveraged in our relabeling process. Our approach, INERRANT for Iterative coNfidencE Relabeling of paRtial ANnoTations, is thoroughly evaluated on two public datasets (TCIA and LiTS), and one internal dataset with seven abdominal organ classes. We show that INERRANT robustly deals with partial labels, performing similarly to a model trained on all labels even for large missing label proportions. We also highlight the importance of our iterative learning scheme and the proposed confidence measure for optimal performance. Finally we show a practical use case where a limited number of completely labeled data are enriched by publicly available but partially labeled data.
Prediction of the motion of chest internal points using a recurrent neural network trained with real-time recurrent learning for latency compensation in lung cancer radiotherapy
Pohl, Michel
Uesaka, Mitsuru
Demachi, Kazuyuki
Bhusal Chhatkuli, Ritu
Computerized Medical Imaging and Graphics2021Journal Article, cited 0 times
4D-Lung
During the radiotherapy treatment of patients with lung cancer, the radiation delivered to healthy tissue around the tumor needs to be minimized, which is difficult because of respiratory motion and the latency of linear accelerator (LINAC) systems. In the proposed study, we first use the Lucas-Kanade pyramidal optical flow algorithm to perform deformable image registration (DIR) of chest computed tomography (CT) scan images of four patients with lung cancer. We then track three internal points close to the lung tumor based on the previously computed deformation field and predict their position with a recurrent neural network (RNN) trained using real-time recurrent learning (RTRL) and gradient clipping. The breathing data is quite regular, sampled at approximately 2.5 Hz, and includes artificially added drift in the spine direction. The amplitude of the motion of the tracked points ranged from 12.0 mm to 22.7 mm. Finally, we propose a simple method for recovering and predicting three-dimensional (3D) tumor images from the tracked points and the initial tumor image, based on a linear correspondence model and the Nadaraya-Watson non-linear regression. The root-mean-square (RMS) error, maximum error and jitter corresponding to the RNN prediction on the test set were smaller than the same performance measures obtained with linear prediction and least mean squares (LMS). In particular, the maximum prediction error associated with the RNN, equal to 1.51 mm, is respectively 16.1% and 5.0% lower than the error given by a linear predictor and LMS. The average prediction time per time step with RTRL is equal to 119 ms, which is less than the 400 ms marker position sampling time. The tumor position in the predicted images appears visually correct, which is confirmed by the high mean cross-correlation between the original and predicted images, equal to 0.955. The standard deviation of the Gaussian kernel and the number of layers in the optical flow algorithm were the parameters having the most significant impact on registration performance. Their optimization led respectively to a 31.3% and 36.2% decrease in the registration error. Using only a single layer proved to be detrimental to the registration quality because tissue motion in the lower part of the lung has a high amplitude relative to the resolution of the CT scan images. The random initialization of the hidden units and the number of these hidden units were found to be the most important factors affecting the performance of the RNN. Increasing the number of hidden units from 15 to 250 led to a 56.3% decrease in the prediction error on the cross-validation data. Similarly, optimizing the standard deviation σ_init of the initial Gaussian distribution of the synaptic weights led to a 28.4% decrease in the prediction error on the cross-validation data, with the error minimized at σ_init = 0.02 for the four patients.
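The Nadaraya-Watson step can be sketched as follows, assuming NumPy; the one-dimensional inputs and the bandwidth are illustrative, whereas the paper regresses tumor images from tracked point positions.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h=0.5):
    """Gaussian-kernel weighted average of the training targets."""
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2 * h ** 2))
    return (w @ y_train) / w.sum(axis=1)

x = np.linspace(0, 10, 50)
y = np.sin(x) + 0.1 * np.random.default_rng(0).normal(size=50)
y_hat = nadaraya_watson(x, y, np.linspace(0, 10, 200))
```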
Weakly supervised deep learning for prediction of treatment effectiveness on ovarian cancer from histopathology images
Wang, Ching-Wei
Chang, Cheng-Chang
Lee, Yu-Ching
Lin, Yi-Jia
Lo, Shih-Chang
Hsu, Po-Chao
Liou, Yi-An
Wang, Chih-Hung
Chao, Tai-Kuang
Computerized Medical Imaging and Graphics2022Journal Article, cited 0 times
Ovarian Bevacizumab Response
Despite the progress made during the last two decades in the surgery and chemotherapy of ovarian cancer, more than 70% of patients with advanced disease experience recurrence and die. Surgical debulking of tumors following chemotherapy is the conventional treatment for advanced carcinoma, but patients receiving such treatment remain at great risk for recurrence and developing drug resistance, and only about 30% of the women affected will be cured. Bevacizumab is a humanized monoclonal antibody, which blocks VEGF signaling in cancer, inhibits angiogenesis, and causes tumor shrinkage, and it has recently been approved by the FDA for advanced ovarian cancer in combination with chemotherapy. Considering the cost, potential toxicity, and the finding that only a portion of patients will benefit from these drugs, the identification of new predictive methods for the treatment of ovarian cancer remains an urgent unmet medical need. In this study, we develop weakly supervised deep learning approaches to accurately predict the therapeutic effect of bevacizumab for ovarian cancer patients from histopathological hematoxylin and eosin stained whole slide images, without any pathologist-provided locally annotated regions. To the authors' best knowledge, this is the first model demonstrated to be effective for predicting the therapeutic effect of bevacizumab in patients with epithelial ovarian cancer. Quantitative evaluation of a whole-section dataset shows that the proposed method achieves high accuracy, 0.882 ± 0.06; precision, 0.921 ± 0.04; recall, 0.912 ± 0.03; and F-measure, 0.917 ± 0.07 using 5-fold cross validation, and outperforms two state-of-the-art deep learning approaches (Coudray et al., 2018; Campanella et al., 2019). For an independent TMA testing set, the three proposed methods obtain promising results with high recall (sensitivity) of 0.946, 0.893, and 0.964, respectively. The results suggest that the proposed method could be useful for guiding treatment by helping to filter out patients without a positive therapeutic response, sparing them further treatments, while keeping patients with a positive response in the treatment process. Furthermore, according to the statistical analysis of the Cox proportional hazards model, patients predicted as non-responders by the proposed model had a much higher risk of cancer recurrence (hazard ratio = 13.727) than patients predicted as responders, with statistical significance (p < 0.05).
Active deep learning from a noisy teacher for semi-supervised 3D image segmentation: Application to COVID-19 pneumonia infection in CT
Hussain, M. A.
Mirikharaji, Z.
Momeny, M.
Marhamati, M.
Neshat, A. A.
Garbi, R.
Hamarneh, G.
Comput Med Imaging Graph2022Journal Article, cited 0 times
Website
CT Images in COVID-19
Active learning
Covid-19
Deep learning
Noisy teacher
Pneumonia
Segmentation
Semi-supervised learning
Supervised deep learning has become a standard approach to solving medical image segmentation tasks. However, serious difficulties in attaining pixel-level annotations for sufficiently large volumetric datasets in real-life applications have highlighted the critical need for alternative approaches, such as semi-supervised learning, where model training can leverage small expert-annotated datasets to enable learning from much larger datasets without laborious annotation. Most of the semi-supervised approaches combine expert annotations and machine-generated annotations with equal weights within deep model training, despite the latter annotations being relatively unreliable and likely to affect model optimization negatively. To overcome this, we propose an active learning approach that uses an example re-weighting strategy, where machine-annotated samples are weighted (i) based on the similarity of their gradient directions of descent to those of expert-annotated data, and (ii) based on the gradient magnitude of the last layer of the deep model. Specifically, we present an active learning strategy with a query function that enables the selection of reliable and more informative samples from machine-annotated batch data generated by a noisy teacher. When validated on clinical COVID-19 CT benchmark data, our method improved the performance of pneumonia infection segmentation compared to the state of the art.
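A minimal sketch of gradient-similarity re-weighting for machine-annotated batches, assuming PyTorch; restricting the per-batch gradients to the last layer is a simplification of the paper's strategy.

```python
import torch
import torch.nn.functional as F

def sample_weight(model, last_layer, x_expert, y_expert, x_machine, y_machine):
    def last_layer_grad(x, y):
        loss = F.cross_entropy(model(x), y)
        (g,) = torch.autograd.grad(loss, last_layer.weight)
        return g.flatten()
    g_e = last_layer_grad(x_expert, y_expert)
    g_m = last_layer_grad(x_machine, y_machine)
    # Machine-annotated batches whose descent direction aligns with the
    # expert data earn weight; conflicting ones are down-weighted to zero.
    return torch.clamp(F.cosine_similarity(g_e, g_m, dim=0), min=0.0)

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16, 2))
w = sample_weight(model, model[1],
                  torch.randn(4, 4, 4), torch.randint(0, 2, (4,)),
                  torch.randn(4, 4, 4), torch.randint(0, 2, (4,)))
```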
Investigation and benchmarking of U-Nets on prostate segmentation tasks
Bhandary, Shrajan
Kuhn, Dejan
Babaiee, Zahra
Fechter, Tobias
Benndorf, Matthias
Zamboglou, Constantinos
Grosu, Anca-Ligia
Grosu, Radu
Computerized Medical Imaging and Graphics2023Journal Article, cited 0 times
ISBI-MR-Prostate-2013
PROSTATEx
Prostate
In healthcare, a growing number of physicians and support staff are striving to facilitate personalized radiotherapy regimens for patients with prostate cancer. This is because individual patient biology is unique, and employing a single approach for all is inefficient. A crucial step for customizing radiotherapy planning and gaining fundamental information about the disease is the identification and delineation of targeted structures. However, accurate biomedical image segmentation is time-consuming, requires considerable experience, and is prone to observer variability. In the past decade, the use of deep learning models has significantly increased in the field of medical image segmentation. At present, a vast number of anatomical structures can be demarcated at a clinician's level with deep learning models. These models not only offload work, but can also offer unbiased characterization of the disease. The main architectures used in segmentation are the U-Net and its variants, which exhibit outstanding performance. However, reproducing results or directly comparing methods is often limited by closed sources of data and the large heterogeneity among medical images. With this in mind, our intention is to provide a reliable source for assessing deep learning models. As an example, we chose the challenging task of delineating the prostate gland in multi-modal images. First, this paper provides a comprehensive review of current state-of-the-art convolutional neural networks for 3D prostate segmentation. Second, utilizing public and in-house CT and MR datasets of varying properties, we created a framework for an objective comparison of automatic prostate segmentation algorithms. The framework was used for rigorous evaluations of the models, highlighting their strengths and weaknesses.
On-cloud decision-support system for non-small cell lung cancer histology characterization from thorax computed tomography scans
Tomassini, S.
Falcionelli, N.
Bruschi, G.
Sbrollini, A.
Marini, N.
Sernani, P.
Morettini, M.
Muller, H.
Dragoni, A. F.
Burattini, L.
Comput Med Imaging Graph2023Journal Article, cited 0 times
NSCLC-Radiomics
NSCLC Radiogenomics
NSCLC-Radiomics-Genomics
TCGA-LUAD
Image resampling
Radiomic features
Classification
Computed Tomography (CT)
Long Short-Term Memory (LSTM)
Humans
*Carcinoma
Non-Small-Cell Lung/diagnostic imaging/pathology
*Lung Neoplasms/diagnostic imaging/pathology
*Carcinoma
Squamous Cell/pathology
Tomography
X-Ray Computed/methods
ROC Curve
Clinical decision-support system
Cloud computing
Convolutional long short-term memory
Non-small cell lung cancer histology characterization
Supervised deep learning
Thorax computed tomography
Non-Small Cell Lung Cancer (NSCLC) accounts for about 85% of all lung cancers. Developing non-invasive techniques for NSCLC histology characterization may not only help clinicians make targeted therapeutic decisions but also prevent subjects from undergoing lung biopsy, which is challenging and could lead to clinical complications. The motivation behind the study presented here is to develop an advanced on-cloud decision-support system, named LUCY, for non-small cell LUng Cancer histologY characterization directly from thorax Computed Tomography (CT) scans. This aim was pursued by selecting thorax CT scans of 182 LUng ADenocarcinoma (LUAD) and 186 LUng Squamous Cell carcinoma (LUSC) subjects from four openly accessible data collections (NSCLC-Radiomics, NSCLC-Radiogenomics, NSCLC-Radiomics-Genomics and TCGA-LUAD), implementing and comparing two end-to-end neural networks (the core layer of which is a convolutional long short-term memory layer), evaluating performance on the test dataset (NSCLC-Radiomics-Genomics) from a subject-level perspective in relation to NSCLC histological subtype location and grade, and dynamically interpreting the achieved results by producing and analyzing one heatmap video for each scan. LUCY reached test Area Under the receiver operating characteristic Curve (AUC) values above 77% in all NSCLC histological subtype location and grade groups, and a best AUC value of 97% on the entire dataset reserved for testing, proving high generalizability to heterogeneous data and robustness. Thus, LUCY is a clinically useful decision-support system able to timely, non-invasively and reliably provide visually understandable predictions on LUAD and LUSC subjects in relation to clinically relevant information.
Main challenges on the curation of large scale datasets for pancreas segmentation using deep learning in multi-phase CT scans: Focus on cardinality, manual refinement, and annotation quality
Cavicchioli, M.
Moglia, A.
Pierelli, L.
Pugliese, G.
Cerveri, P.
Comput Med Imaging Graph2024Journal Article, cited 0 times
Website
Pancreas-CT
Medical Decathlon
Artificial intelligence surgery
Artificial intelligence surgical planning
Deep learning
PANCREAS
Medical imaging dataset acquisition
Medical imaging dataset curation
Pancreas dataset
Segmentation
Accurate segmentation of the pancreas in computed tomography (CT) holds paramount importance in diagnostics, surgical planning, and interventions. Recent studies have proposed supervised deep-learning models for segmentation, but their efficacy relies on the quality and quantity of the training data. Most of such works employed small-scale public datasets, without proving the efficacy of generalization to external datasets. This study explored the optimization of pancreas segmentation accuracy by pinpointing the ideal dataset size, understanding resource implications, examining manual refinement impact, and assessing the influence of anatomical subregions. We present the AIMS-1300 dataset encompassing 1,300 CT scans. Its manual annotation by medical experts required 938 h. A 2.5D UNet was implemented to assess the impact of training sample size on segmentation accuracy by partitioning the original AIMS-1300 dataset into 11 smaller subsets of progressively increasing numerosity. The findings revealed that training sets exceeding 440 CTs did not lead to better segmentation performance. In contrast, nnU-Net and UNet with Attention Gate reached a plateau for 585 CTs. Tests on generalization on the publicly available AMOS-CT dataset confirmed this outcome. As the size of the partition of the AIMS-1300 training set increases, the number of error slices decreases, reaching a minimum with 730 and 440 CTs, for AIMS-1300 and AMOS-CT datasets, respectively. Segmentation metrics on the AIMS-1300 and AMOS-CT datasets improved more on the head than the body and tail of the pancreas as the dataset size increased. By carefully considering the task and the characteristics of the available data, researchers can develop deep learning models without sacrificing performance even with limited data. This could accelerate developing and deploying artificial intelligence tools for pancreas surgery and other surgical data science applications.
Computational modeling of tumor invasion from limited and diverse data in Glioblastoma
Jonnalagedda, P.
Weinberg, B.
Min, T. L.
Bhanu, S.
Bhanu, B.
Comput Med Imaging Graph2024Journal Article, cited 0 times
Website
TCGA-GBM
Generative Adversarial Network (GAN)
Glioblastoma
Magnetic Resonance Imaging (MRI)
Radiogenomic analysis
Tumor microenvironment
For diseases with high morbidity rates such as Glioblastoma Multiforme, the prognostic and treatment planning pipeline requires a comprehensive analysis of imaging, clinical, and molecular data. Many mutations have been shown to correlate strongly with the median survival rate and response to therapy of patients. Studies have demonstrated that these mutations manifest as specific visual biomarkers in tumor imaging modalities such as MRI. To minimize the number of invasive procedures on a patient and for the overall resource optimization for the prognostic and treatment planning process, the correlation of imaging and molecular features has garnered much interest. While the tumor mass is the most significant feature, the impacted tissue surrounding the tumor is also a significant biomarker contributing to the visual manifestation of mutations - which has not been studied as extensively. The pattern of tumor growth impacts the surrounding tissue accordingly, which is a reflection of tumor properties as well. Modeling how the tumor growth impacts the surrounding tissue can reveal important information about the patterns of tumor enhancement, which in turn has significant diagnostic and prognostic value. This paper presents the first work to automate the computational modeling of the impacted tissue surrounding the tumor using generative deep learning. The paper isolates and quantifies the impact of the Tumor Invasion (TI) on surrounding tissue based on change in mutation status, subsequently assessing its prognostic value. Furthermore, a TI Generative Adversarial Network (TI-GAN) is proposed to model the tumor invasion properties. Extensive qualitative and quantitative analyses, cross-dataset testing, and radiologist blind tests are carried out to demonstrate that TI-GAN can realistically model the tumor invasion under practical challenges of medical datasets such as limited data and high intra-class heterogeneity.
Mammography and breast tomosynthesis simulator for virtual clinical trials
Badal, Andreu
Sharma, Diksha
Graff, Christian G.
Zeng, Rongping
Badano, Aldo
Computer Physics Communications2021Journal Article, cited 0 times
Website
VICTRE
mammography
Breast
Computer modeling and simulations are increasingly being used to predict the clinical performance of x-ray imaging devices in silico, and to generate synthetic patient images for training and testing of machine learning algorithms. We present a detailed description of the computational models implemented in the open source GPU-accelerated Monte Carlo x-ray imaging simulation code MC-GPU. This code, originally developed to simulate radiography and computed tomography, has been extended to replicate a commercial full-field digital mammography and digital breast tomosynthesis (DBT) device. The code was recently used to image 3000 virtual breast models with the aim of reproducing in silico a clinical trial used in support of the regulatory approval of DBT as a replacement of mammography for breast cancer screening. The updated code implements a more realistic x-ray source model (extended 3D focal spot, tomosynthesis acquisition trajectory, tube motion blurring) and an improved detector model (direct-conversion Selenium detector with depth-of-interaction effects, fluorescence tracking, electronic noise and anti-scatter grid). The software uses a high resolution voxelized geometry model to represent the breast anatomy. To reduce the GPU memory requirements, the code stores the voxels in memory within a binary tree structure. The binary tree is an efficient compression mechanism because many voxels with the same composition are combined in common tree branches while preserving random access to the phantom composition at any location. A delta scattering ray-tracing algorithm which does not require computing ray-voxel interfaces is used to minimize memory access. Multiple software verification and validation steps intended to establish the credibility of the implemented computational models are reported. The software verification was done using a digital quality control phantom and an ideal pinhole camera. The validation was performed reproducing standard bench testing experiments used in clinical practice and comparing with experimental measurements. A sensitivity study intended to assess the robustness of the simulated results to variations in some of the input parameters was performed using an in silico clinical trial pipeline with simulated lesions and mathematical observers. We show that MC-GPU is able to simulate x-ray projections that incorporate many of the sources of variability found in clinical images, and that the simulated results are robust to some uncertainty in the input parameters. Limitations of the implemented computational models are discussed. Program summary Program title: MCGPU_VICTRE CPC Library link to program files: http://dx.doi.org/10.17632/k5x2bsf27m.1 Licensing provisions: CC0 1.0 Programming language: C (with NVIDIA CUDA extensions) Nature of problem: The health risks associated with ionizing radiation impose a limit to the amount of clinical testing that can be done with x-ray imaging devices. In addition, radiation dose cannot be directly measured inside the body. For these reasons, a computational replica of an x-ray imaging device that simulates radiographic images of synthetic anatomical phantoms is of great value for device evaluation. The simulated radiographs and dosimetric estimates can be used for system design and optimization, task-based evaluation of image quality, machine learning software training, and in silico imaging trials. Solution method: Computational models of a mammography x-ray source and detector have been implemented. 
X-ray transport through matter is simulated using Monte Carlo methods customized for parallel execution in multiple Graphics Processing Units. The input patient anatomy is represented by voxels, which are efficiently stored in the video memory using a new binary tree structure compression mechanism.
A review of lung cancer screening and the role of computer-aided detection
Al Mohammad, B
Brennan, PC
Mello-Thoms, C
Clinical Radiology2017Journal Article, cited 23 times
Website
LIDC-IDRI
Lung screening
Radiologist performance in the detection of lung cancer using CT
Al Mohammad, B
Hillis, SL
Reed, W
Alakhras, M
Brennan, PC
Clinical Radiology2019Journal Article, cited 2 times
Website
LIDC-IDRI
Lung Cancer
CT
Comparison of CT and MRI images for the prediction of soft-tissue sarcoma grading and lung metastasis via a convolutional neural networks model
Zhang, L.
Ren, Z.
Clin Radiol2019Journal Article, cited 0 times
Soft Tissue Sarcoma
AIM: To realise the automated prediction of soft-tissue sarcoma (STS) grading and lung metastasis based on computed tomography (CT), T1-weighted (T1W) magnetic resonance imaging (MRI), and fat-suppressed T2-weighted MRI (FST2W) via a convolutional neural network (CNN) model. MATERIALS AND METHODS: MRI and CT images of 51 patients diagnosed with STS were analysed retrospectively. The patients were divided into three groups based on disease grading: a high-grade group (n=28), an intermediate-grade group (n=15), and a low-grade group (n=8). Among these patients, 32 had lung metastasis, while the remaining 19 had no lung metastasis. The data were divided into training, validation, and testing groups according to the ratio of 5:2:3. Receiver operating characteristic (ROC) curves and accuracy values were acquired using the testing dataset to evaluate the performance of the CNN model. RESULTS: For STS grading, the accuracies of the T1W, FST2W, CT, and fused T1W and FST2W test data were 0.86, 0.89, 0.86, and 0.85, respectively. The corresponding area under the curve (AUC) values were 0.96, 0.97, 0.97, and 0.94, respectively. For the prediction of lung metastasis, the accuracies of the T1W, FST2W, CT, and fused T1W and FST2W test data were 0.92, 0.93, 0.88, and 0.91, respectively. The corresponding AUC values were 0.97, 0.96, 0.95, and 0.95, respectively. FST2W MRI performed best for predicting STS grading and lung metastasis. CONCLUSION: MRI and CT images combined with the CNN model can be useful for making predictions regarding STS grading and lung metastasis, thus providing help for patient diagnosis and treatment.
MRI-based radiogenomics analysis for predicting genetic alterations in oncogenic signalling pathways in invasive breast carcinoma
Lin, P
Liu, WK
Li, X
Wan, D
Qin, H
Li, Q
Chen, G
He, Y
Yang, H
Clinical Radiology2020Journal Article, cited 0 times
TCGA-BRCA
radiogenomics
Breast
Prediction and verification of survival in patients with non-small-cell lung cancer based on an integrated radiomics nomogram
Li, R.
Peng, H.
Xue, T.
Li, J.
Ge, Y.
Wang, G.
Feng, F.
Clinical Radiology2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
radiomics
AIM: To develop and validate a nomogram to predict 1-, 2-, and 5-year survival in patients with non-small-cell lung cancer (NSCLC) by combining optimised radiomics features, clinicopathological factors, and conventional image features extracted from three-dimensional (3D) computed tomography (CT) images. MATERIALS AND METHODS: A total of 172 patients with NSCLC were selected to construct the model, and 74 and 72 patients were selected for internal validation and external testing, respectively. A total of 828 radiomics features were extracted from each patient's 3D CT images. Univariable Cox regression and least absolute shrinkage and selection operator (LASSO) regression were used to select features and generate a radiomics signature (radscore). The performance of the nomogram was evaluated by calibration curves, clinical practicability, and the c-index. Kaplan–Meier (KM) analysis was used to compare the overall survival (OS) between the two subgroups. RESULTS: The radiomics features of the NSCLC patients correlated significantly with survival time. The c-indexes of the nomogram in the training cohort, internal validation cohort, and external test cohort were 0.670, 0.658, and 0.660, respectively. The calibration curves showed that the predicted survival time was close to the actual survival time. Decision curve analysis showed that the nomogram could be useful in the clinic. According to KM analysis, the 1-, 2- and 5-year survival rates of the low-risk group were higher than those of the high-risk group. CONCLUSION: The nomogram, combining the radscore, clinicopathological factors, and conventional CT parameters, can improve the accuracy of survival prediction in patients with NSCLC.
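As an illustrative aside, the radscore construction described above can be sketched in a few lines of Python. This is a minimal sketch, not the authors' code: it assumes a pandas DataFrame df with radiomics feature columns plus hypothetical "time" and "event" columns, and uses lifelines' L1-penalized Cox model as a stand-in for the univariable-Cox-plus-LASSO pipeline.

    import pandas as pd
    from lifelines import CoxPHFitter

    def build_radscore(df, feature_cols, p_thresh=0.05, penalizer=0.1):
        # 1) Univariable Cox screening: keep features with p < p_thresh.
        kept = []
        for col in feature_cols:
            cph = CoxPHFitter()
            cph.fit(df[[col, "time", "event"]], duration_col="time", event_col="event")
            if cph.summary.loc[col, "p"] < p_thresh:
                kept.append(col)
        # 2) LASSO-like Cox fit (L1 penalty shrinks most coefficients toward 0).
        cph = CoxPHFitter(penalizer=penalizer, l1_ratio=1.0)
        cph.fit(df[kept + ["time", "event"]], duration_col="time", event_col="event")
        coefs = cph.params_
        # 3) radscore = linear combination of the selected features.
        radscore = df[kept] @ coefs
        return radscore, coefs

Thresholding the resulting radscore at its median is one common way to form the low- and high-risk groups compared in the KM analysis.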
Fully automated deep-learning section-based muscle segmentation from CT images for sarcopenia assessment
Islam, S.
Kanavati, F.
Arain, Z.
Da Costa, O. F.
Crum, W.
Aboagye, E. O.
Rockall, A. G.
Clin Radiol2022Journal Article, cited 0 times
Website
Head-Neck-CT-Atlas
CT COLONOGRAPHY
Convolutional Neural Network (CNN)
AIM: To develop a fully automated deep-learning-based approach to measure muscle area for assessing sarcopenia on standard-of-care computed tomography (CT) of the abdomen without any case exclusion criteria, for opportunistic screening for frailty. MATERIALS AND METHODS: This ethically approved retrospective study used publicly available and institutional unselected abdominal CT images (n=1,070 training, n=31 testing). The method consisted of two sequential steps: section detection from CT volume followed by muscle segmentation on single-section. Both stages used fully convolutional neural networks (FCNN), based on a UNet-like architecture. Input data consisted of CT volumes with a variety of fields of view, section thicknesses, occlusions, artefacts, and anatomical variations. Output consisted of segmented muscle area on a CT section at the L3 vertebral level. The muscle was segmented into erector spinae, psoas, and rectus abdominus muscle groups. Output was tested against expert manual segmentation. RESULTS: Threefold cross-validation was used to evaluate the model. Section detection cross-validation error was 1.41 +/- 5.02 (in sections). Segmentation cross-validation Dice overlaps were 0.97 +/- 0.02, 0.95 +/- 0.04, and 0.94 +/- 0.04 for erector spinae, psoas, and rectus abdominus, respectively, and 0.96 +/- 0.02 for the combined muscle area, with R(2) = 0.95/0.98 for muscle attenuation/area in 28/31 hold-out test cases. No statistical difference was found between the automated output and a second annotator. Fully automated processing took <1 second per CT examination. CONCLUSIONS: A FCNN pipeline accurately and efficiently automates muscle segmentation at the L3 vertebral level from unselected abdominal CT volumes, with no manual processing step. This approach is promising as a generalisable tool for opportunistic screening for frailty on standard-of-care CT.
Development of a multi-task learning V-Net for pulmonary lobar segmentation on CT and application to diseased lungs
AIM: To develop a multi-task learning (MTL) V-Net for pulmonary lobar segmentation on computed tomography (CT) and application to diseased lungs. MATERIALS AND METHODS: The methodology utilises tracheobronchial tree information to enhance segmentation accuracy, giving the algorithm the spatial context needed to define lobar extent more accurately. The method segments lobes and auxiliary tissues in parallel by employing MTL in conjunction with V-Net-attention, a convolutional neural network widely used in medical imaging. Its performance was validated on an external dataset of patients with four distinct lung conditions: severe lung cancer, COVID-19 pneumonitis, collapsed lungs, and chronic obstructive pulmonary disease (COPD), even though the training data included none of these cases. RESULTS: The following Dice scores were achieved on a per-segment basis: normal lungs 0.97, COPD 0.94, lung cancer 0.94, COVID-19 pneumonitis 0.94, and collapsed lung 0.92, all at p<0.05. CONCLUSION: Despite severe abnormalities, the model segmented the lobes well, demonstrating the benefit of tissue learning. The proposed model is poised for adoption in the clinical setting as a robust tool for radiologists and researchers to define the lobar distribution of lung diseases and aid in disease treatment planning.
A 3D lung lesion variational autoencoder
Li, Yiheng
Sadée, Christoph Y.
Carrillo-Perez, Francisco
Selby, Heather M.
Thieme, Alexander H.
Gevaert, Olivier
2024Journal Article, cited 0 times
NSCLC Radiogenomics
Machine Learning
CT
Radiomics
In this study, we develop a 3D beta variational autoencoder (beta-VAE) to advance lung cancer imaging analysis, countering the constraints of conventional radiomics methods. The autoencoder extracts information from public lung computed tomography (CT) datasets without additional labels. It reconstructs 3D lung nodule images with high quality (structural similarity: 0.774, peak signal-to-noise ratio: 26.1, and mean-squared error: 0.0008). The model effectively encodes lesion sizes in its latent embeddings, with a significant correlation with lesion size found after applying uniform manifold approximation and projection (UMAP) for dimensionality reduction. Additionally, the beta-VAE can synthesize new lesions of varying sizes by manipulating the latent features. The model can predict multiple clinical endpoints, including pathological N stage or KRAS mutation status, on the Stanford radiogenomics lung cancer dataset. Comparisons with other methods show that the beta-VAE performs equally well in these tasks, suggesting its potential as a pretrained model for predicting patient outcomes in medical imaging.
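The core of a beta-VAE is the beta-weighted evidence lower bound. The following is a minimal PyTorch sketch of that loss, not the authors' 3D architecture; the beta value and the MSE reconstruction term are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def beta_vae_loss(recon_x, x, mu, logvar, beta=4.0):
        # Reconstruction term: how well the decoder reproduces the input volume.
        recon = F.mse_loss(recon_x, x, reduction="sum")
        # KL divergence between the encoder posterior q(z|x) = N(mu, sigma^2)
        # and the standard normal prior N(0, I), in closed form.
        kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        # beta > 1 pressures the latent code toward disentangled factors,
        # e.g. the lesion-size direction reported in the abstract.
        return recon + beta * kld

Interpolating along a single latent dimension of a trained model is how one synthesizes lesions of varying sizes, as the abstract describes.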
A generalized graph reduction framework for interactive segmentation of large images
Gueziri, Houssem-Eddine
McGuffin, Michael J
Laporte, Catherine
Computer Vision and Image Understanding2016Journal Article, cited 5 times
Website
Algorithm Development
Segmentation
Computer Aided Detection (CADe)
The speed of graph-based segmentation approaches, such as random walker (RW) and graph cut (GC), depends strongly on image size. For high-resolution images, the time required to compute a segmentation based on user input renders interaction tedious. We propose a novel method, using an approximate contour sketched by the user, to reduce the graph before passing it on to a segmentation algorithm such as RW or GC. This enables a significantly faster feedback loop. The user first draws a rough contour of the object to segment. Then, the pixels of the image are partitioned into "layers" (corresponding to different scales) based on their distance from the contour. The thickness of these layers increases with distance to the contour according to a Fibonacci sequence. An initial segmentation result is rapidly obtained after automatically generating foreground and background labels according to a specifically selected layer; all vertices beyond this layer are eliminated, restricting the segmentation to regions near the drawn contour. Further foreground/background labels can then be added by the user to refine the segmentation. All iterations of the graph-based segmentation benefit from a reduced input graph, while maintaining full resolution near the object boundary. A user study with 16 participants was carried out for RW segmentation of a multi-modal dataset of 22 medical images, using either a standard mouse or a stylus pen to draw the contour. Results reveal that our approach significantly reduces the overall segmentation time compared with the status quo approach (p < 0.01). The study also shows that our approach works well with both input devices. Compared to super-pixel graph reduction, our approach provides full-resolution accuracy at similar speed on a high-resolution benchmark image with both RW and GC segmentation methods. However, graph reduction based on super-pixels does not allow interactive correction of clustering errors. Finally, our approach can be combined with super-pixel clustering methods for further graph reduction, resulting in even faster segmentation.
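The Fibonacci layer construction around the user's sketched contour can be illustrated with a distance transform. A minimal sketch under assumed inputs (a binary mask of the drawn contour; thicknesses following the Fibonacci sequence as described):

    import numpy as np
    from scipy import ndimage

    def fibonacci_layers(contour_mask, n_layers=8):
        """Label each pixel with a layer index; thickness grows as 1, 1, 2, 3, 5, ..."""
        # Distance (in pixels) from every pixel to the drawn contour.
        dist = ndimage.distance_transform_edt(~contour_mask.astype(bool))
        # Cumulative Fibonacci thicknesses define the layer boundaries.
        fib = [1, 1]
        while len(fib) < n_layers:
            fib.append(fib[-1] + fib[-2])
        bounds = np.cumsum(fib)
        # 0 = layer touching the contour; n_layers = beyond the last layer,
        # i.e. the vertices eliminated from the reduced graph.
        return np.digitize(dist, bounds)

Growing thickness means the graph keeps fine granularity near the object boundary while coarsening (and eventually discarding) distant regions, which is exactly what keeps full resolution where it matters.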
AVT: Multicenter aortic vessel tree CTA dataset collection with ground truth segmentation masks
Radl, L.
Jin, Y.
Pepe, A.
Li, J.
Gsaxner, C.
Zhao, F. H.
Egger, J.
Data in Brief2022Journal Article, cited 2 times
Website
RIDER LUNG CT
Abdominal aortic aneurysm
Aorta
Aortic dissection
CTA
Deep learning
Ground truth
Masks
Segmentations
Vessel tree
In this article, we present a multicenter aortic vessel tree database collection, containing 56 aortas and their branches. The datasets have been acquired with computed tomography angiography (CTA) scans, and each scan covers the ascending aorta, the aortic arch and its branches into the head/neck area, the thoracic aorta, the abdominal aorta, and the lower abdominal aorta with the iliac arteries branching into the legs. For each scan, the collection provides a semi-automatically generated segmentation mask of the aortic vessel tree (ground truth). The scans come from three different collections and various hospitals and have various resolutions, which enables studying the geometry/shape variability of human aortas and their branches across different geographic locations. Furthermore, the collection supports the creation of a robust statistical shape model of human aortic vessel trees, which can be used for various tasks such as developing fully automatic segmentation algorithms for new, unseen aortic vessel tree cases, e.g., by training deep learning-based approaches. Hence, the collection can serve as an evaluation set for automatic aortic vessel tree segmentation algorithms.
The Río Hortega University Hospital Glioblastoma dataset: A comprehensive collection of preoperative, early postoperative and recurrence MRI scans (RHUH-GBM)
Cepeda, Santiago
García-García, Sergio
Arrese, Ignacio
Herrero, Francisco
Escudero, Trinidad
Zamora, Tomás
Sarabia, Rosario
Data in Brief2023Journal Article, cited 0 times
RHUH-GBM
Glioblastoma, a highly aggressive primary brain tumor, is associated with poor patient outcomes. Although magnetic resonance imaging (MRI) plays a critical role in diagnosing, characterizing, and forecasting glioblastoma progression, public MRI repositories present significant drawbacks, including insufficient postoperative and follow-up studies as well as expert tumor segmentations. To address these issues, we present the "Río Hortega University Hospital Glioblastoma Dataset (RHUH-GBM)," a collection of multiparametric MRI images, volumetric assessments, molecular data, and survival details for glioblastoma patients who underwent total or near-total enhancing tumor resection. The dataset features expert-corrected segmentations of tumor subregions, offering valuable ground truth data for developing algorithms for postoperative and follow-up MRI scans.
LiverHccSeg: A publicly available multiphasic MRI dataset with liver and HCC tumor segmentations and inter-rater agreement analysis
Gross, M.
Arora, S.
Huber, S.
Kucukkaya, A. S.
Onofrey, J. A.
Data Brief2023Journal Article, cited 0 times
TCGA-LIHC
Benchmarking
Hepatocellular carcinoma
Imaging biomarkers
Inter-rater agreement
Inter-rater variability
Liver segmentation
Multiphasic contrast-enhanced magnetic resonance imaging
Tumor segmentation
LIVER
Magnetic Resonance Imaging (MRI)
Algorithm Development
Segmentation
Accurate segmentation of liver and tumor regions in medical imaging is crucial for the diagnosis, treatment, and monitoring of hepatocellular carcinoma (HCC) patients. However, manual segmentation is time-consuming and subject to inter- and intra-rater variability. Automated methods are therefore necessary but require rigorous validation against high-quality segmentations based on a consensus of raters. To address the need for reliable and comprehensive data in this domain, we present LiverHccSeg, a dataset that provides liver and tumor segmentations on multiphasic contrast-enhanced magnetic resonance imaging from two board-approved abdominal radiologists, along with an analysis of inter-rater agreement. LiverHccSeg provides a curated resource for liver and HCC tumor segmentation tasks. The dataset includes co-registered contrast-enhanced multiphasic magnetic resonance imaging (MRI) scans with corresponding manual segmentations by two board-approved abdominal radiologists and relevant metadata, and offers researchers a comprehensive foundation for external validation and benchmarking of liver and tumor segmentation algorithms. The dataset also provides an analysis of the agreement between the two sets of liver and tumor segmentations. Through the calculation of appropriate segmentation metrics, we provide insights into the consistency and variability of liver and tumor segmentations among the radiologists. A total of 17 cases were included for liver segmentation and 14 cases for HCC tumor segmentation. Liver segmentations demonstrated high agreement (mean Dice, 0.95 +/- 0.01 [standard deviation]), whereas HCC tumor segmentations showed higher variation (mean Dice, 0.85 +/- 0.16 [standard deviation]). The applications of LiverHccSeg can be manifold, ranging from testing machine learning algorithms on public external data to radiomic feature analyses. Leveraging the inter-rater agreement analysis within the dataset, researchers can investigate the impact of variability on segmentation performance and explore methods to enhance the accuracy and robustness of liver and tumor segmentation algorithms in HCC patients. By making this dataset publicly available, LiverHccSeg aims to foster collaborations, facilitate innovative solutions, and ultimately improve patient outcomes in the diagnosis and treatment of HCC.
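The inter-rater agreement figures quoted above are Dice similarity coefficients. For reference, a minimal implementation over two binary masks (assumed numpy arrays of the same shape):

    import numpy as np

    def dice(mask_a, mask_b):
        """Dice similarity coefficient between two binary segmentation masks."""
        a = mask_a.astype(bool)
        b = mask_b.astype(bool)
        inter = np.logical_and(a, b).sum()
        denom = a.sum() + b.sum()
        # Convention: two empty masks count as perfect agreement.
        return 2.0 * inter / denom if denom else 1.0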
Dataset on renal tumor diameter assessment by multiple observers in normal-dose and low-dose CT
Borgbjerg, J.
Larsen, N. E.
Salte, I. M.
Gronli, N. R.
Klaestrup, E.
Negard, A.
Data Brief2023Journal Article, cited 0 times
Website
C4KC-KiTS
KIDNEY
Computed Tomography (CT)
Inter-observer variability
Renal tumor
Tumor diameter
Low-dose CT
Computed tomography-based active surveillance is increasingly used to manage small renal tumors, regardless of patient age. However, there is an unmet need for decreasing radiation exposure while maintaining the accuracy and reproducibility of radiographic measurements required to detect even minor changes in renal mass size. In this article, we present supplementary data from a multiobserver investigation. We explored the accuracy and reproducibility of low-dose CT (75% dose reduction) compared to normal-dose CT in assessing maximum axial renal tumor diameter. Open-access CT datasets from the 2019 Kidney and Kidney Tumor Segmentation Challenge were used. Six radiologist observers used a web-based platform for assessing observer performance to provide tumor diameter measurements and the accompanying viewing settings, together with key images of each measurement and an interactive module for exploring the diameter measurements. These data can serve as a baseline and inform future studies investigating and validating lower-dose CT protocols for active surveillance of small renal masses.
IILS: Intelligent imaging layout system for automatic imaging report standardization and intra-interdisciplinary clinical workflow optimization
Wang, Yang
Yan, Fangrong
Lu, Xiaofan
Zheng, Guanming
Zhang, Xin
Wang, Chen
Zhou, Kefeng
Zhang, Yingwei
Li, Hui
Zhao, Qi
Zhu, Hu
Chen, Fei
Gao, Cailiang
Qing, Zhao
Ye, Jing
Li, Aijing
Xin, Xiaoyan
Li, Danyan
Wang, Han
Yu, Hongming
Cao, Lu
Zhao, Chaowei
Deng, Rui
Tan, Libo
Chen, Yong
Yuan, Lihua
Zhou, Zhuping
Yang, Wen
Shao, Mingran
Dou, Xin
Zhou, Nan
Zhou, Fei
Zhu, Yue
Lu, Guangming
Zhang, Bing
EBioMedicine2019Journal Article, cited 1 times
Website
LIDC-IDRI
Classification
LUNA16 Challenge
Deep learning
Lung nodule
BACKGROUND: To achieve imaging report standardization and improve the quality and efficiency of the intra-interdisciplinary clinical workflow, we proposed an intelligent imaging layout system (IILS) for a clinical decision support system-based ubiquitous healthcare service, which is a lung nodule management system using medical images. METHODS: We created a lung IILS based on deep learning for imaging report standardization and workflow optimization for the identification of nodules. Our IILS utilized a deep learning plus adaptive auto layout tool, which trained and tested a neural network with imaging data from all the main CT manufacturers from 11,205 patients. Model performance was evaluated by the receiver operating characteristic curve (ROC) and calculating the corresponding area under the curve (AUC). The clinical application value of our IILS was assessed by a comprehensive comparison of multiple aspects. FINDINGS: Our IILS is clinically applicable, showing a highest consistency of 0.94 for nodule detection and an AUC of 90.6% for discriminating malignant from benign pulmonary nodules, with a sensitivity of 76.5% and specificity of 89.1%. Applying this IILS to a dataset of chest CT images, we demonstrate performance comparable to that of human experts, providing a better layout and aiding diagnosis, with 100% valid images and nodule display. The IILS was superior to the traditional manual system in performance, reducing the number of clicks from 14.45+/-0.38 to 2, the time consumed from 16.87+/-0.38s to 6.92+/-0.10s, the number of invalid images from 7.06+/-0.24 to 0, and missed lung nodules from 46.8% to 0%. INTERPRETATION: This IILS might achieve imaging report standardization and improve the clinical workflow, therefore opening a new window for the clinical application of artificial intelligence. FUND: The National Natural Science Foundation of China.
Development and validation of radiomic signatures of head and neck squamous cell carcinoma molecular features and subtypes
Huang, Chao
Cintra, Murilo
Brennan, Kevin
Zhou, Mu
Colevas, A Dimitrios
Fischbein, Nancy
Zhu, Shankuan
Gevaert, Olivier
EBioMedicine2019Journal Article, cited 1 times
Website
TCGA-HNSC
Radiomics
Radiogenomics
Transcriptomics
BACKGROUND: Radiomics-based non-invasive biomarkers are promising for facilitating the translation of therapeutically relevant molecular subtypes for treatment allocation of patients with head and neck squamous cell carcinoma (HNSCC). METHODS: We included 113 HNSCC patients from The Cancer Genome Atlas (TCGA-HNSCC) project. Molecular phenotypes analyzed were RNA-defined HPV status, five DNA methylation subtypes, four gene expression subtypes and five somatic gene mutations. A total of 540 quantitative image features were extracted from pre-treatment CT scans. Features were selected and used in a regularized logistic regression model to build binary classifiers for each molecular subtype. Models were evaluated using the average area under the Receiver Operator Characteristic curve (AUC) of a stratified 10-fold cross-validation procedure repeated 10 times. Next, an HPV model was trained on TCGA-HNSCC and tested on a Stanford cohort (N=53). FINDINGS: Our results show that quantitative image features are capable of distinguishing several molecular phenotypes. We obtained significant predictive performance for RNA-defined HPV+ (AUC=0.73), DNA methylation subtypes MethylMix HPV+ (AUC=0.79), non-CIMP-atypical (AUC=0.77) and Stem-like-Smoking (AUC=0.71), and mutation of NSD1 (AUC=0.73). We externally validated the HPV prediction model (AUC=0.76) on the Stanford cohort. When compared to clinical models, radiomic models were superior for subtypes such as NOTCH1 mutation and the DNA methylation subtype non-CIMP-atypical, but inferior for the DNA methylation subtype CIMP-atypical and NSD1 mutation. INTERPRETATION: Our study demonstrates that radiomics can potentially serve as a non-invasive tool to identify treatment-relevant subtypes of HNSCC, opening up the possibility for patient stratification, treatment allocation and inclusion in clinical trials. FUND: Dr. Gevaert reports grants from National Institute of Dental & Craniofacial Research (NIDCR) U01 DE025188, grants from National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health (NIBIB), R01 EB020527, grants from National Cancer Institute (NCI), U01 CA217851, during the conduct of the study; Dr. Huang and Dr. Zhu report grants from China Scholarship Council (Grant NO:201606320087), grants from China Medical Board Collaborating Program (Grant NO:15-216), the Cyrus Tang Foundation, and the Zhejiang University Education Foundation during the conduct of the study; Dr. Cintra reports grants from Sao Paulo State Foundation for Teaching and Research (FAPESP), during the conduct of the study.
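The evaluation protocol described (regularized logistic regression, stratified 10-fold cross-validation repeated 10 times, average AUC) maps directly onto scikit-learn. A minimal sketch with assumed arrays X (image features) and binary y (subtype labels); the penalty and C value are illustrative assumptions:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

    def subtype_auc(X, y):
        # L1-regularized logistic regression as the binary subtype classifier.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
        cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
        scores = cross_val_score(clf, X, y, scoring="roc_auc", cv=cv)
        return scores.mean()  # average AUC over the 10 x 10 folds

Stratification keeps the class balance of each fold close to the cohort's, which matters for rare subtypes such as individual gene mutations.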
Single NMR image super-resolution based on extreme learning machine
Wang, Zhiqiong
Xin, Junchang
Wang, Zhongyang
Tian, Shuo
Qiu, Xuejun
Physica Medica2016Journal Article, cited 0 times
Website
REMBRANDT
RIDER NEURO MRI
TCGA-GBM
TCGA-LGG
BRAIN
Introduction: The performance limitations of MRI equipment and radiologists' demand for higher-resolution NMR images form a strong contrast. It is therefore important to study super-resolution algorithms suitable for NMR images, using low-cost software instead of expensive equipment upgrades. Methods and materials: First, a series of NMR images is obtained, ranging from the original images with original noise to the lowest-resolution images with the highest noise. Then, based on the extreme learning machine, a mapping model is constructed from lower-resolution NMR images with higher noise to higher-resolution NMR images with lower noise for each pair of adjacent images in the obtained sequence. Finally, the optimal mapping model is built in an ensemble fashion to reconstruct higher-resolution NMR images with lower noise from the original-resolution NMR images with original noise. Experiments were carried out on 990111 NMR brain images from the NITRC, REMBRANDT, RIDER NEURO MRI, TCGA-GBM and TCGA-LGG datasets. Results: The performance of the proposed method was compared with three approaches using 7 indexes, and the experimental results show a significant improvement. Discussion: Since our method accounts for noise, it achieves a 20% higher peak signal-to-noise ratio. As it is sensitive to details and preserves image characteristics better, it yields a 15% higher image-quality score in the additional evaluation. Finally, since the extreme learning machine trains very quickly, our method is 46.1% faster.
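An extreme learning machine fixes random hidden-layer weights and solves only the output weights by least squares, which is what makes training fast. A minimal numpy sketch (shapes and the hidden size are illustrative; this is not the paper's ensemble):

    import numpy as np

    class ELM:
        """Single-hidden-layer ELM: random input weights, least-squares output weights."""
        def __init__(self, n_hidden=512, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, Y):
            n_features = X.shape[1]
            self.W = self.rng.standard_normal((n_features, self.n_hidden))
            self.b = self.rng.standard_normal(self.n_hidden)
            H = np.tanh(X @ self.W + self.b)      # random nonlinear feature map
            self.beta = np.linalg.pinv(H) @ Y     # closed-form solve, no backprop
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

In the setting above, X would hold low-resolution patch intensities and Y the corresponding higher-resolution target pixels, mirroring the mapping learned between adjacent images in the degradation sequence.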
Exploration of temporal stability and prognostic power of radiomic features based on electronic portal imaging device images
Soufi, M.
Arimura, H.
Nakamoto, T.
Hirose, T. A.
Ohga, S.
Umezu, Y.
Honda, H.
Sasaki, T.
Phys Med2018Journal Article, cited 7 times
Website
NSCLC-Radiomics
Radiomics
multiplication of intraclass correlation coefficient (mICC)
Matlab
Kaplan-Meier survival analysis
EPID image
Prognostic prediction
Radiomic feature
Temporal stability
PURPOSE: We aimed to explore the temporal stability of radiomic features in the presence of tumor motion and the prognostic powers of temporally stable features. METHODS: We selected single fraction dynamic electronic portal imaging device (EPID) (n=275 frames) and static digitally reconstructed radiographs (DRRs) of 11 lung cancer patients, who received stereotactic body radiation therapy (SBRT) under free breathing. Forty-seven statistical radiomic features, which consisted of 14 histogram-based features and 33 texture features derived from the graylevel co-occurrence and graylevel run-length matrices, were computed. The temporal stability was assessed by using a multiplication of the intra-class correlation coefficients (ICCs) between features derived from the EPID and DRR images at three quantization levels. The prognostic powers of the features were investigated using a different database of lung cancer patients (n=221) based on a Kaplan-Meier survival analysis. RESULTS: Fifteen radiomic features were found to be temporally stable for various quantization levels. Among these features, seven features have shown potentials for prognostic prediction in lung cancer patients. CONCLUSIONS: This study suggests a novel approach to select temporally stable radiomic features, which could hold prognostic powers in lung cancer patients.
Automatic detection of pulmonary nodules in CT images by incorporating 3D tensor filtering with local image feature analysis
Gong, J.
Liu, J. Y.
Wang, L. J.
Sun, X. W.
Zheng, B.
Nie, S. D.
Physica Medica2018Journal Article, cited 4 times
Website
NSCLC-Radiomics
3D tensor filtering
CT image
Curvedness
Shape index
Algorithm Development
LUNG
Computer Aided Detection (CADe)
Investigation of thoracic four-dimensional CT-based dimension reduction technique for extracting the robust radiomic features
Tanaka, S.
Kadoya, N.
Kajikawa, T.
Matsuda, S.
Dobashi, S.
Takeda, K.
Jingu, K.
Phys Med2019Journal Article, cited 0 times
RIDER Lung CT
3D Slicer
Radiomics
Imaging features
Computed Tomography (CT)
Lung cancer
Four-Dimensional Computed Tomography (4D-CT)
Robust feature selection in radiomic analysis is often implemented using the RIDER test-retest datasets. However, the CT protocols of a given facility and of the test-retest datasets differ. We therefore investigated the possibility of selecting robust features using thoracic four-dimensional CT (4D-CT) scans, which are available from patients receiving radiation therapy. In 4D-CT datasets of 14 lung cancer patients who underwent stereotactic body radiotherapy (SBRT) and 14 test-retest datasets of non-small cell lung cancer (NSCLC), 1170 radiomic features (shape: n = 16, statistics: n = 32, texture: n = 1122) were extracted. A concordance correlation coefficient (CCC) > 0.85 was used to select robust features. We compared the robust features in various 4D-CT groups with those in test-retest. The total number of robust features ranged between 846/1170 (72%) and 970/1170 (83%) in the 4D-CT groups with three breathing phases (40%-60%), but between 44/1170 (4%) and 476/1170 (41%) in the 4D-CT groups with 10 breathing phases. In test-retest, the total number of robust features was 967/1170 (83%); thus, the number of robust features in 4D-CT using the 40-60% breathing phases was almost equal to that in test-retest. In 4D-CT, respiratory motion is a factor that greatly affects the robustness of features; thus, by using only the 40-60% breathing phases, excessive dimension reduction can be prevented in any 4D-CT dataset, and robust features suitable for the CT protocol of one's own facility can be selected.
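The CCC > 0.85 robustness cut-off refers to Lin's concordance correlation coefficient, which can be computed directly. A minimal numpy version for two paired measurements x and y of the same feature (e.g., from two breathing phases):

    import numpy as np

    def ccc(x, y):
        """Lin's concordance correlation coefficient between paired measurements."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()            # population variances
        cov = ((x - mx) * (y - my)).mean()
        return 2.0 * cov / (vx + vy + (mx - my) ** 2)

Unlike the Pearson correlation, the CCC penalizes both location and scale shifts, so a feature that drifts systematically between phases is flagged as non-robust even if it remains linearly correlated.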
Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method.
Astaraki, Mehdi
Wang, Chunliang
Buizza, Giulia
Toma-Dasu, Iuliana
Lazzeroni, Marta
Smedby, Orjan
Physica Medica2019Journal Article, cited 0 times
Website
RIDER Lung CT
Radiomics
PURPOSE: To explore prognostic and predictive values of a novel quantitative feature set describing intra-tumor heterogeneity in patients with lung cancer treated with concurrent and sequential chemoradiotherapy. METHODS: Longitudinal PET-CT images of 30 patients with non-small cell lung cancer were analysed. To describe tumor cell heterogeneity, the tumors were partitioned into one to ten concentric regions depending on their sizes, and, for each region, the change in average intensity between the two scans was calculated for PET and CT images separately to form the proposed feature set. To validate the prognostic value of the proposed method, radiomics analysis was performed and a combination of the proposed novel feature set and the classic radiomic features was evaluated. A feature selection algorithm was utilized to identify the optimal features, and a linear support vector machine was trained for the task of overall survival prediction in terms of area under the receiver operating characteristic curve (AUROC). RESULTS: The proposed novel feature set was found to be prognostic and even outperformed the radiomics approach with a significant difference (AUROC_SALoP = 0.90 vs. AUROC_radiomics = 0.71) when feature selection was not employed, whereas with feature selection, a combination of the novel feature set and radiomics led to the highest prognostic values. CONCLUSION: A novel feature set designed for capturing intra-tumor heterogeneity was introduced. Judging by their prognostic power, the proposed features have a promising potential for early survival prediction.
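The intra-tumor partitioning step can be reproduced with a distance transform that splits the tumor mask into concentric shells. A minimal sketch under assumed inputs (a binary tumor mask and two co-registered scans; the equal-width shell spacing is an assumption):

    import numpy as np
    from scipy import ndimage

    def shell_intensity_changes(mask, img_t0, img_t1, n_shells=5):
        """Mean intensity change between two time points in concentric tumor shells."""
        mask = mask.astype(bool)
        # Distance from each tumor voxel to the tumor border (0 outside the mask).
        dist = ndimage.distance_transform_edt(mask)
        edges = np.linspace(0, dist.max(), n_shells + 1)
        feats = []
        for i in range(n_shells):
            shell = mask & (dist > edges[i]) & (dist <= edges[i + 1])
            if shell.any():
                feats.append(img_t1[shell].mean() - img_t0[shell].mean())
            else:
                feats.append(0.0)  # guard for very small tumors with empty shells
        return np.array(feats)   # one feature per shell, per modality (PET or CT)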
Homological radiomics analysis for prognostic prediction in lung cancer patients
Ninomiya, Kenta
Arimura, Hidetaka
Physica Medica2020Journal Article, cited 0 times
Website
NSCLC-Radiomics
radiomics
lung
Machine learning helps identifying volume-confounding effects in radiomics
Traverso, Alberto
Kazmierski, Michal
Zhovannik, Ivan
Welch, Mattea
Wee, Leonard
Jaffray, David
Dekker, Andre
Hope, Andrew
Physica Medica2020Journal Article, cited 0 times
NSCLC-Radiomics-Genomics
PURPOSE: Highlighting the risk of biases in radiomics-based models will help improve their quality and increase usage as decision support systems in the clinic. In this study we use machine learning-based methods to identify the presence of volume-confounding effects in radiomics features. METHODS: 841 radiomics features were extracted from two retrospective publicly available datasets of lung and head and neck cancers using open source software. Unsupervised hierarchical clustering and principal component analysis (PCA) identified relations between radiomics and clinical outcomes (overall survival). Bootstrapping techniques with logistic regression verified the features' prognostic power and robustness. RESULTS: Over 80% of the features had large pairwise correlations. Nearly 30% of the features presented strong correlations with tumor volume. Using volume-independent features for clustering and PCA did not allow risk stratification of patients. Clinical predictors outperformed radiomics features in bootstrapping and logistic regression. CONCLUSIONS: The adoption of safeguards in radiomics is imperative to improve the quality of radiomics studies. We proposed machine learning (ML)-based methods for robust radiomics signature development.
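A quick screen for the volume-confounding effect described here is to correlate each feature with tumor volume. A minimal sketch with an assumed feature matrix (patients x features) and a volume vector; the correlation threshold is an illustrative assumption:

    import numpy as np
    from scipy.stats import spearmanr

    def flag_volume_confounded(features, volume, rho_thresh=0.8):
        """Return indices of features strongly correlated with tumor volume."""
        flagged = []
        for j in range(features.shape[1]):
            rho, _ = spearmanr(features[:, j], volume)
            if abs(rho) >= rho_thresh:
                # Candidate volume surrogate: likely adds no information
                # beyond tumor size itself.
                flagged.append(j)
        return flagged

Features that survive this screen are the "volume-independent" set the abstract refers to; any signature built without the screen risks being a re-discovery of tumor volume.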
Resolution enhancement for lung 4D-CT based on transversal structures by using multiple Gaussian process regression learning
Fang, Shiting
Hu, Runyue
Yuan, Xinrui
Liu, Shangqing
Zhang, Yuan
Phys Med2020Journal Article, cited 0 times
Website
Machine Learning
4D-Lung
LUNG
Image Enhancement/methods
PURPOSE: Four-dimensional computed tomography (4D-CT) plays a useful role in many clinical situations. However, due to hardware limitations, dense sampling along the superior-inferior direction is often not practical. In this paper, we develop a novel multiple Gaussian process regression model to enhance the superior-inferior resolution of lung 4D-CT based on transversal structures. METHODS: The proposed strategy is based on the observation that high-resolution transversal images can recover missing pixels in the superior-inferior direction. Based on this observation, and motivated by the random forest algorithm, we employ a multiple Gaussian process regression model learned from transversal images to improve superior-inferior resolution. Specifically, we first randomly sample 3 x 3 patches from the original transversal images. The central pixel of these patches and the eight-neighbour pixels of their corresponding degraded versions form the label and input of the training data, respectively. A multiple Gaussian process regression model is then built on the basis of multiple training subsets obtained by random sampling. Finally, the central pixel of each patch is estimated based on the proposed model, with the eight-neighbour pixels of each 3 x 3 patch from the interpolated superior-inferior direction images as inputs. RESULTS: The performance of our method is extensively evaluated using simulated and publicly available datasets. Our experiments show the remarkable performance of the proposed method. CONCLUSIONS: In this paper, we propose a new approach to improve 4D-CT resolution, which does not require any external data or hardware support, and can produce clear coronal/sagittal images for easy viewing.
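The patch-wise regression step can be sketched with scikit-learn's Gaussian process regressor: the eight neighbours of each degraded 3 x 3 patch predict the central pixel. The training-set construction, kernel choice, and subset sizes below are illustrative assumptions, not the paper's exact configuration:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def train_gp_ensemble(neighbors, centers, n_models=10, subset=500, seed=0):
        """Train several GPs on random subsets (random-forest-style ensembling)."""
        rng = np.random.default_rng(seed)
        models = []
        for _ in range(n_models):
            idx = rng.choice(len(neighbors), size=subset, replace=False)
            gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
            gp.fit(neighbors[idx], centers[idx])  # 8 neighbour pixels -> centre pixel
            models.append(gp)
        return models

    def predict_center(models, neighbors):
        # Average the ensemble members' predictions for each unseen patch.
        return np.mean([m.predict(neighbors) for m in models], axis=0)

Training on random subsets keeps the cubic cost of exact GP regression manageable while the averaging step, as in random forests, reduces the variance of any single model.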
Making radiotherapy more efficient with FAIR data
Kalendralis, Petros
Sloep, Matthijs
van Soest, Johan
Dekker, Andre
Fijten, Rianne
Physica Medica2021Journal Article, cited 0 times
NSCLC-Radiomics
Given the rapid growth of artificial intelligence (AI) applications in radiotherapy and the related transformations toward the data-driven healthcare domain, this article summarizes the need and usage of the FAIR (Findable, Accessible, Interoperable, Reusable) data principles in radiotherapy. This work introduces the FAIR data concept, presents practical and relevant use cases and the future role of the different parties involved. The goal of this article is to provide guidance and potential applications of FAIR to various radiotherapy stakeholders, focusing on the central role of medical physicists.
AI applications to medical images: From machine learning to deep learning
Castiglioni, Isabella
Rundo, Leonardo
Codari, Marina
Di Leo, Giovanni
Salvatore, Christian
Interlenghi, Matteo
Gallivanone, Francesca
Cozzi, Andrea
D'Amico, Natascha Claudia
Sardanelli, Francesco
Physica Medica2021Journal Article, cited 0 times
Crowds-Cure-2017
PURPOSE: Artificial intelligence (AI) models are playing an increasing role in biomedical research and healthcare services. This review focuses on challenging points that need to be clarified about how to develop AI applications as clinical decision support systems in the real-world context.
METHODS: A narrative review has been performed including a critical assessment of articles published between 1989 and 2021 that guided challenging sections.
RESULTS: We first illustrate the architectural characteristics of machine learning (ML)/radiomics and deep learning (DL) approaches. For ML/radiomics, the phases of feature selection and of training, validation, and testing are described. DL models are presented as multi-layered artificial/convolutional neural networks, allowing us to directly process images. The data curation section includes technical steps such as image labelling, image annotation (with segmentation as a crucial step in radiomics), data harmonization (enabling compensation for differences in imaging protocols that typically generate noise in non-AI imaging studies) and federated learning. Thereafter, we dedicate specific sections to: sample size calculation, considering multiple testing in AI approaches; procedures for data augmentation to work with limited and unbalanced datasets; and the interpretability of AI models (the so-called black box issue). Pros and cons for choosing ML versus DL to implement AI applications to medical imaging are finally presented in a synoptic way.
CONCLUSIONS: Biomedicine and healthcare systems are one of the most important fields for AI applications and medical imaging is probably the most suitable and promising domain. Clarification of specific challenging points facilitates the development of such systems and their translation to clinical practice.
High-dose hypofractionated pencil beam scanning carbon ion radiotherapy for lung tumors: Dosimetric impact of different spot sizes and robustness to interfractional uncertainties
Mastella, Edoardo
Mirandola, Alfredo
Russo, Stefania
Vai, Alessandro
Magro, Giuseppe
Molinelli, Silvia
Barcellini, Amelia
Vitolo, Viviana
Orlandi, Ester
Ciocca, Mario
Physica Medica2021Journal Article, cited 0 times
Website
4D-Lung
Radiation Dosage
Lung
NSCLC
FRoG dose computation meets Monte Carlo accuracy for proton therapy dose calculation in lung
Magro, G.
Mein, S.
Kopp, B.
Mastella, E.
Pella, A.
Ciocca, M.
Mairani, A.
Phys Med2021Journal Article, cited 0 times
Website
4D-Lung
Algorithm Development
PURPOSE: To benchmark and evaluate the clinical viability of novel analytical GPU-accelerated and CPU-based Monte Carlo (MC) dose-engines for spot-scanning intensity-modulated-proton-therapy (IMPT) towards the improvement of lung cancer treatment. METHODS: Nine patient cases were collected from the CNAO clinical experience and The Cancer Imaging Archive-4D-Lung-Database for in-silico study. All plans were optimized with 2 orthogonal beams in RayStation (RS) v.8. Forward calculations were performed with FRoG, an independent dose calculation system using a fast robust approach to the pencil beam algorithm (PBA), RS-MC (CPU for v.8) and general-purpose MC (gp-MC). Dosimetric benchmarks were acquired via irradiation of a lung-like phantom and ionization chambers for both a single-field-uniform-dose (SFUD) and IMPT plans. Dose-volume-histograms, dose-difference and gamma-analyses were conducted. RESULTS: With respect to reference gp-MC, the average dose to the GTV was 1.8% and 2.3% larger for FRoG and the RS-MC treatment planning system (TPS). FRoG and RS-MC showed a local gamma-passing rate of ~96% and ~93%. Phantom measurements confirmed FRoG's high accuracy with a deviation < 0.1%. CONCLUSIONS: Dose calculation performance using the GPU-accelerated analytical PBA, MC-TPS and gp-MC code were well within clinical tolerances. FRoG predictions were in good agreement with both the full gp-MC and experimental data for proton beams optimized for thoracic dose calculations. GPU-accelerated dose-engines like FRoG may alleviate current issues related to deficiencies in current commercial analytical proton beam models. The novel approach to the PBA implemented in FRoG is suitable for either clinical TPS or as an auxiliary dose-engine to support clinical activity for lung patients.
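Gamma analysis of the kind used for these benchmarks can be run with the open-source pymedphys package. This is not the tool used in the paper, and the 3%/3 mm criterion, grid, and doses below are illustrative assumptions:

    import numpy as np
    import pymedphys

    # Assumed inputs: two dose grids on common axes (z, y, x), coordinates in mm.
    axes = (np.arange(50.0), np.arange(50.0), np.arange(50.0))
    dose_ref = np.random.rand(50, 50, 50) * 2.0                        # stand-in reference dose (Gy)
    dose_eval = dose_ref + np.random.normal(0, 0.02, dose_ref.shape)   # stand-in evaluated dose

    gamma = pymedphys.gamma(
        axes, dose_ref, axes, dose_eval,
        dose_percent_threshold=3, distance_mm_threshold=3,
        local_gamma=True)  # the paper reports a *local* gamma-passing rate
    passing = np.mean(gamma[~np.isnan(gamma)] <= 1) * 100
    print(f"local gamma passing rate: {passing:.1f}%")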
Measuring breathing induced oesophageal motion and its dosimetric impact
Fechter, Tobias
Adebahr, Sonja
Grosu, Anca-Ligia
Baltas, Dimos
Physica Medica2021Journal Article, cited 0 times
4D-Lung
PURPOSE: Stereotactic body radiation therapy allows for a precise dose delivery. Organ motion bears the risk of undetected high dose healthy tissue exposure. An organ very susceptible to high dose is the oesophagus. Its low contrast on CT and the oblong shape render motion estimation difficult. We tackle this issue by modern algorithms to measure oesophageal motion voxel-wise and estimate motion related dosimetric impacts.
METHODS: Oesophageal motion was measured using deformable image registration and 4DCT of 11 internal and 5 public datasets. The current clinical practice of contouring the organ on 3DCT was compared to temporally resolved 4DCT contours. Dosimetric impacts of the motion were estimated by analysing the trajectory of each voxel in the 4D dose distribution. Finally, an organ motion model for patient-wise comparisons was built.
RESULTS: Motion analysis showed mean absolute maximal motion amplitudes of 4.55 ± 1.81 mm left-right, 5.29 ± 2.67 mm anterior-posterior and 10.78 ± 5.30 mm superior-inferior. Motion between cohorts differed significantly. In around 50% of the cases, the dosimetric passing criteria were violated. Contours created on 3DCT did not cover 14% of the organ for 50% of the respiratory cycle and were around 38% smaller than the union of all 4D contours. The motion model revealed that the maximal motion is not limited to the lower part of the organ. Our results showed motion amplitudes higher than most reported values in the literature and that motion is very heterogeneous across patients.
CONCLUSIONS: Individual motion information should be considered in contouring and planning.
Investigating the impact of the CT Hounsfield unit range on radiomic feature stability using dual energy CT data
Chatterjee, A.
Vallieres, M.
Forghani, R.
Seuntjens, J.
Phys Med2021Journal Article, cited 0 times
Website
Head-Neck-PET-CT
Algorithm Development
Computed Tomography (CT)
Feature stability
Radiomics
Replicability
PURPOSE: Radiomic texture calculation requires discretizing image intensities within the region-of-interest. FBN (fixed-bin-number), FBS (fixed-bin-size) and FBN and FBS with intensity equalization (FBNequal, FBSequal) are four discretization approaches. A crucial choice is the voxel intensity (Hounsfield units, or HU) binning range. We assessed the effect of this choice on radiomic features. METHODS: The dataset comprised 95 patients with head-and-neck squamous-cell-carcinoma. Dual energy CT data was reconstructed at 21 electron energies (40, 45,... 140 keV). Each of 94 texture features was calculated with 64 extraction parameters. All features were calculated five times: original choice, left shift (-10/-20 HU), right shift (+10/+20 HU). For each feature, Spearman correlations between the nominal setting and the four variants were calculated to determine feature stability. This was done separately for the six texture feature types (GLCM, GLRLM, GLSZM, GLDZM, NGTDM, and NGLDM). The analysis was repeated for the four binning algorithms. The effect of feature instability on predictive ability was studied with lymphadenopathy as the endpoint. RESULTS: FBN and FBNequal algorithms showed good stability (correlation values consistently >0.9). For FBS and FBSequal algorithms, while median values exceeded 0.9, the 95% lower bound decreased as a function of energy, with poor performance over the entire spectrum. FBNequal was the most stable algorithm, and FBS the least. CONCLUSIONS: We believe this is the first multi-energy systematic study of the impact of the CT HU range used during intensity discretization for radiomic feature extraction. Future analyses should account for this source of uncertainty when evaluating the robustness of their radiomic signature.
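The two discretization families compared here differ only in how bin edges are defined. A minimal sketch of both, following IBSI-style definitions on an assumed ROI intensity array (the bin count, bin width, and minimum HU are illustrative):

    import numpy as np

    def discretize_fbn(roi, n_bins=64):
        """Fixed bin number: n_bins equal-width bins between ROI min and max."""
        lo, hi = roi.min(), roi.max()
        bins = ((roi - lo) / (hi - lo) * n_bins).astype(int) + 1
        return np.clip(bins, 1, n_bins)

    def discretize_fbs(roi, bin_size=10, min_hu=-150):
        """Fixed bin size: bins of bin_size HU starting at a chosen minimum.
        Shifting min_hu by +/-10 or +/-20 HU reproduces the binning-range
        perturbations whose effect on feature stability is studied above."""
        return np.floor((roi - min_hu) / bin_size).astype(int) + 1

FBN is insensitive to a constant shift of the HU range (the ROI min and max shift with it), which is consistent with the higher stability reported for FBN-type algorithms.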
Optimization of polyethylene glycol-based hydrogel rectal spacer for focal laser ablation of prostate peripheral zone tumor
Namakshenas, P.
Mojra, A.
Physica Medica2021Journal Article, cited 1 times
Website
PROSTATE-MRI
PROSTATE
Radiation Dosage
Radiation Therapy
PURPOSE: Focal laser ablation therapy is a technique that exposes the prostate tumor to hyperthermic ablation and eradicates cancerous cells. However, due to the excessive heating generated by laser irradiation, there is a possibility of damage to the adjacent healthy tissues. Through an in silico study, this paper presents a novel approach to reducing collateral heating effects by placing a polyethylene glycol (PEG) spacer between the rectum and the tumor during laser irradiation. The PEG spacer thickness is optimized to reduce undesired damage at laser powers commonly used in clinical trials. Our study is also novel in conducting the thermal analysis based on the porous structure of the prostate tumor. METHODS: The thermal parameters, and the two thermal phase lags between the temperature gradient and the heat flux, are determined by considering the vascular network of the prostate tumor. The Nelder-Mead algorithm is applied to find the minimum thickness of the PEG spacer. RESULTS: In the absence of the spacer, the predicted results for laser powers of 4 W, 8 W, and 12 W show that the temperature of the rectum rises up to 58.6 degrees C, 80.4 degrees C, and 101.1 degrees C, while with the insertion of 2.59 mm, 4 mm, and 4.9 mm of the PEG spacer, respectively, it stays below 42 degrees C. CONCLUSIONS: The results can be used as a guideline to ablate prostate tumors while avoiding undesired damage to the rectal wall during laser irradiation, especially for peripheral zone tumors.
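The thickness search can be phrased as a penalized scalar minimization solved with Nelder-Mead. A minimal scipy sketch, in which rectum_peak_temp is a made-up surrogate standing in for the paper's porous-media bioheat simulation (its form and constants are hypothetical):

    import numpy as np
    from scipy.optimize import minimize

    def rectum_peak_temp(thickness_mm, laser_power_w):
        """Hypothetical surrogate for the simulated rectal wall temperature (deg C).
        In the paper this value comes from the porous-media bioheat model."""
        return 37.0 + laser_power_w * 9.0 * np.exp(-thickness_mm / 2.0)

    def objective(t, power, t_limit=42.0, penalty=1e3):
        # Minimize spacer thickness while keeping the rectum below t_limit.
        excess = max(0.0, rectum_peak_temp(t[0], power) - t_limit)
        return t[0] + penalty * excess**2

    res = minimize(objective, x0=[3.0], args=(8.0,), method="Nelder-Mead")
    print(f"minimum spacer thickness: {res.x[0]:.2f} mm")

Nelder-Mead needs no gradients, which suits a black-box thermal simulation; the quadratic penalty enforces the 42 degrees C rectal limit used in the study.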
Automatic head computed tomography image noise quantification with deep learning
Inkinen, S. I.
Makela, T.
Kaasalainen, T.
Peltonen, J.
Kangasniemi, M.
Kortesniemi, M.
Phys Med2022Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Deep Learning
Head/diagnostic imaging
Humans
Image Processing, Computer-Assisted/methods
Neural Networks, Computer
Tomography, X-Ray Computed/methods
Anthropomorphic phantom
BRAIN
Computed Tomography (CT)
Deep learning
Image quality
Noise
PURPOSE: Computed tomography (CT) image noise is usually determined by the standard deviation (SD) of pixel values from uniform image regions. This study investigates how deep learning (DL) could be applied in head CT image noise estimation. METHODS: Two approaches were investigated for noise image estimation of a single acquisition image: direct noise image estimation using a supervised DnCNN convolutional neural network (CNN) architecture, and subtraction of a denoised image estimated with a denoising UNet-CNN, experimented with supervised and unsupervised noise2noise training approaches. Noise was assessed with local SD maps using 3D- and 2D-CNN architectures. An anthropomorphic phantom CT image dataset (N = 9 scans, 3 repetitions) was used for DL-model comparisons. Mean square error (MSE) and mean absolute percentage errors (MAPE) of SD values were determined using the SD values of subtraction images as ground truth. An open-source clinical head CT low-dose dataset (N(train) = 37, N(test) = 10 subjects) was used to demonstrate DL applicability in noise estimation from manually labeled uniform regions and in automated noise and contrast assessment. RESULTS: Direct SD estimation using the 3D-CNN was the most accurate assessment method in the phantom dataset comparison (MAPE = 15.5%, MSE = 6.3 HU). The unsupervised noise2noise approach provided only slightly inferior results (MAPE = 20.2%, MSE = 13.7 HU). The 2D-CNN and unsupervised UNet models provided the smallest MSE on clinical labeled uniform regions. CONCLUSIONS: DL-based clinical image assessment is feasible and provides acceptable accuracy as compared to true image noise. The noise2noise approach may be feasible in clinical use where no ground truth data is available. Noise estimation combined with tissue segmentation may enable more comprehensive image quality characterization.
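The local SD maps used as targets here can be computed with separable box filters via the identity SD = sqrt(E[x^2] - E[x]^2). A minimal sketch (the input would be, e.g., a subtraction image from repeated scans; the window size is an illustrative assumption):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_sd_map(image, size=9):
        """Local standard deviation in a sliding window of the given size."""
        img = image.astype(float)
        mean = uniform_filter(img, size)
        mean_sq = uniform_filter(img ** 2, size)
        var = np.clip(mean_sq - mean ** 2, 0, None)  # clip tiny negatives from rounding
        return np.sqrt(var)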
CTContour: An open-source Python pipeline for automatic contouring and calculation of mean SSDE along the abdomino-pelvic region for CT images; validation on fifteen systems
Pace, Eric
Caruana, Carmel J
Bosmans, Hilde
Cortis, Kelvin
D'Anastasi, Melvin
Valentino, Gianluca
Physica Medica2022Journal Article, cited 0 times
C4KC-KiTS
StageII-Colorectal-CT
PURPOSE: Calculation of the Size Specific Dose Estimate (SSDE) requires accurate delineation of the skin boundary of patient CT slices. The AAPM recommendation for SSDE evaluation at every CT slice is too time intensive for manual contouring, prohibiting real-time or bulk processing; an automated approach is therefore desirable. Previous automated delineation studies either did not fully disclose the steps of the algorithm or did not always manage to fully isolate the patient. The purpose of this study was to develop a validated, freely available, fast, vendor-independent open-source tool to automatically and accurately contour and calculate the SSDE for the abdomino-pelvic region for entire studies in real-time, including flagging of patient-truncated images.
METHODS: The Python tool, CTContour, consists of a sequence of morphological steps and scales over multiple cores for speed. Tool validation was achieved on 700 randomly selected slices from abdominal and abdomino-pelvic studies from public datasets. Contouring accuracy was assessed visually by four medical physicists using a 1-5 Likert scale (5 indicating perfect contouring). Mean SSDE values were validated via manual calculation.
RESULTS: Contour accuracy validation produced a score of four out of five for 98.5% of the images. A 300-slice exam was contoured and truncation flagged in 6.3 s on a six-core laptop.
CONCLUSIONS: The algorithm was accurate even for complex clinical scenarios and when artefacts were present. Fast execution makes it possible to automate the calculation of SSDE in real time. The tool has been published on GitHub under the GNU-GPLv3 license.
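The kind of morphological pipeline CTContour implements can be summarized as threshold, clean-up, and largest-component selection. A minimal sketch, with the threshold value and structuring choices as assumptions rather than CTContour's exact published steps:

    import numpy as np
    from scipy import ndimage

    def body_mask(ct_slice_hu, air_thresh=-250):
        """Rough patient body contour from one axial CT slice in HU."""
        mask = ct_slice_hu > air_thresh                     # separate patient from air
        mask = ndimage.binary_opening(mask, iterations=2)   # drop table edges and cables
        labels, n = ndimage.label(mask)
        if n == 0:
            return mask
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (np.argmax(sizes) + 1)             # keep the patient only
        return ndimage.binary_fill_holes(mask)              # put lungs/bowel gas back in

The resulting mask directly yields the water-equivalent diameter needed for the SSDE conversion factor on each slice, and slices whose mask touches the image border can be flagged as truncated.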
Deriving quantitative information from multiparametric MRI via Radiomics: Evaluation of the robustness and predictive value of radiomic features in the discrimination of low-grade versus high-grade gliomas with machine learning
Ubaldi, Leonardo
Saponaro, Sara
Giuliano, Alessia
Talamonti, Cinzia
Retico, Alessandra
Phys Med2023Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Glioma grading
Classification
Image normalization
Machine learning
Magnetic Resonance Imaging (MRI)
Radiomics
Robustness of features
PyRadiomics
PURPOSE: Analysis pipelines based on the computation of radiomic features on medical images are widely used exploration tools across a large variety of image modalities. This study aims to define a robust processing pipeline based on Radiomics and Machine Learning (ML) to analyze multiparametric Magnetic Resonance Imaging (MRI) data to discriminate between high-grade (HGG) and low-grade (LGG) gliomas. METHODS: The dataset consists of 158 multiparametric MRI of patients with brain tumor publicly available on The Cancer Imaging Archive, preprocessed by the BraTS organization committee. Three different types of image intensity normalization algorithms were applied and 107 features were extracted for each tumor region, setting the intensity values according to different discretization levels. The predictive power of radiomic features in the LGG versus HGG categorization was evaluated by using random forest classifiers. The impact of the normalization techniques and of the different settings in the image discretization was studied in terms of the classification performances. A set of MRI-reliable features was defined selecting the features extracted according to the most appropriate normalization and discretization settings. RESULTS: The results show that using MRI-reliable features improves the performance in glioma grade classification (AUC=0.93+/-0.05) with respect to the use of raw (AUC=0.88+/-0.08) and robust features (AUC=0.83+/-0.08), defined as those not depending on image normalization and intensity discretization. CONCLUSIONS: These results confirm that image normalization and intensity discretization strongly impact the performance of ML classifiers based on radiomic features. Thus, special attention should be provided in the image preprocessing step before typical radiomic and ML analysis are carried out.
Overall survival prediction for high-grade glioma patients using mathematical modeling of tumor cell infiltration
Häger, Wille
Toma-Dașu, Iuliana
Astaraki, Mehdi
Lazzeroni, Marta
Physica Medica2023Journal Article, cited 0 times
BraTS-TCGA-GBM
PURPOSE: This study aimed at applying a mathematical framework for the prediction of high-grade glioma (HGG) cell invasion into normal tissues to guide clinical target delineation, and at investigating the possibility of using tumor infiltration maps for predicting patient overall survival (OS).
MATERIAL & METHODS: A model describing tumor infiltration into normal tissue was applied to 93 HGG cases. Tumor infiltration maps and corresponding isocontours with different cell densities were produced. ROC curves were used to seek correlations between the patient OS and the volume encompassed by a particular isocontour. Area-Under-the-Curve (AUC) values were used to determine the isocontour having the highest predictive ability. The optimal cut-off volume, having the highest sensitivity and specificity, for each isocontour was used to divide the patients in two groups for a Kaplan-Meier survival analysis.
RESULTS: The highest AUC value, 0.77 (p < 0.05), was obtained for the isocontours at cell densities of 1000 cells/mm3 and 2000 cells/mm3. Correlation with the GTV yielded an AUC of 0.73 (p < 0.05). The Kaplan-Meier survival analysis using the 1000 cells/mm3 isocontour and the ROC-optimal cut-off volume for patient group selection yielded a hazard ratio (HR) of 2.7 (p < 0.05), while the GTV yielded an HR of 1.6 (p < 0.05).
CONCLUSION: The simulated tumor cell invasion is a stronger predictor of overall survival than the segmented GTV, indicating the importance of using mathematical models for cell invasion to assist in the definition of the target for HGG patients.
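The ROC-based cut-off selection described above can be reproduced generically with scikit-learn; this is a sketch under the assumption that the isocontour volume is the continuous marker and the survival endpoint has been dichotomized (the paper's exact endpoint definition is not restated here), with the Youden index as the cut-off criterion.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def auc_and_optimal_cutoff(volumes, outcome):
    """AUC of the isocontour volume as a predictor, plus the Youden-index
    cut-off (max of sensitivity + specificity - 1) used to split patients
    into two groups for Kaplan-Meier analysis."""
    fpr, tpr, thresholds = roc_curve(outcome, volumes)
    auc = roc_auc_score(outcome, volumes)
    best = np.argmax(tpr - fpr)
    return auc, thresholds[best]
```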
Interpretable radiomics method for predicting human papillomavirus status in oropharyngeal cancer using Bayesian networks
Altinok, Oya
Guvenis, Albert
Physica Medica2023Journal Article, cited 0 times
Oropharyngeal-Radiomics-Outcomes
Human Papillomavirus Viruses
OBJECTIVES: To develop a simple interpretable Bayesian Network (BN) to classify HPV status in patients with oropharyngeal cancer.
METHODS: Two hundred forty-six patients, 216 of whom were HPV positive, were included in this study. We extracted 851 radiomic markers from the patients' contrast-enhanced Computed Tomography (CT) images. Feature selection was performed with the Mens eX Machina (MXM) approach, and BN model performance was assessed by the area under the curve (AUC) on the 30% of the data reserved for testing. A Support Vector Machine (SVM)-based method was also implemented for comparison purposes.
RESULTS: The MXM approach selected the two most relevant predictors: sphericity and max2DDiameterRow. The AUC was 0.78 on the training data and 0.72 on the test data. The SVM using 25 features reached an AUC of 0.83 on the test data.
CONCLUSIONS: The straightforward structure and power of interpretability of our BN model will help clinicians make treatment decisions and enable the non-invasive detection of HPV status from contrast-enhanced CT images. Higher accuracy can be obtained using more complex structures at the expense of lower interpretability.
ADVANCES IN KNOWLEDGE: Radiomics has recently been studied as a simple, imaging-based technique for HPV status detection and a potential alternative to laboratory approaches; however, it generally lacks interpretability. This work demonstrated the feasibility of Bayesian-network-based radiomics for predicting HPV positivity in an interpretable way.
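For readers who want to reproduce the spirit of the two-predictor model, the sketch below uses Gaussian naive Bayes as a simple stand-in for the paper's Bayesian network (both model class-conditional feature distributions with an interpretable structure); the placeholder data, the 70/30 split, and the choice of GaussianNB are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder data standing in for [sphericity, max2DDiameterRow] and HPV status.
rng = np.random.default_rng(0)
X = rng.normal(size=(246, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=246) > 0).astype(int)

# 70/30 split mirroring the 30% test reserve described in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```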
Convection enhanced delivery of anti-angiogenic and cytotoxic agents in combination therapy against brain tumour
Zhan, W.
Eur J Pharm Sci2020Journal Article, cited 0 times
Website
RIDER NEURO MRI
Magnetic Resonance Imaging (MRI)
BRAIN
Algorithm Development
Models
Convection enhanced delivery is an effective alternative to routine delivery methods to overcome the blood brain barrier. However, its treatment efficacy remains disappointing in the clinic owing to rapid drug elimination in tumour tissue. In this study, multiphysics modelling is employed to investigate the combined delivery of anti-angiogenic and cytotoxic drugs from the perspective of intratumoural transport. Simulations are based on a 3-D realistic brain tumour model reconstructed from patient magnetic resonance images. The tumour microvasculature is targeted by bevacizumab, and six cytotoxic drugs are included: doxorubicin, carmustine, cisplatin, fluorouracil, methotrexate and paclitaxel. Treatment efficacy is evaluated in terms of the distribution volume where the drug concentration is above the corresponding LD90. Results demonstrate that the infusion of bevacizumab can slightly improve interstitial fluid flow, but is markedly effective in reducing fluid loss from the blood circulatory system and thereby inhibiting concentration dilution. As the transport of bevacizumab is dominated by convection, its spatial distribution and anti-angiogenic effectiveness are highly sensitive to directional interstitial fluid flow. Infusing bevacizumab could enhance the delivery outcomes of all six drugs, although the degree of enhancement differs. The delivery of doxorubicin is improved the most, whereas the impacts on methotrexate and paclitaxel are limited. Fluorouracil could cover a distribution volume comparable to paclitaxel in the combination therapy for effective cell killing. The results obtained in this study could guide the design of this co-delivery treatment.
Volume fractions of DCE-MRI parameter as early predictor of histologic response in soft tissue sarcoma: A feasibility study
Xia, Wei
Yan, Zhuangzhi
Gao, Xin
European Journal of Radiology2017Journal Article, cited 2 times
Website
QIN-SARCOMA
DCE-MRI
Textural differences between renal cell carcinoma subtypes: Machine learning-based quantitative computed tomography texture analysis with independent external validation
Kocak, Burak
Yardimci, Aytul Hande
Bektas, Ceyda Turan
Turkcanoglu, Mehmet Hamza
Erdim, Cagri
Yucetas, Ugur
Koca, Sevim Baykal
Kilickesmez, Ozgur
European Journal of Radiology2018Journal Article, cited 0 times
TCGA-KICH
TCGA-KIRC
TCGA-KIRP
OBJECTIVE: To develop externally validated, reproducible, and generalizable models for distinguishing three major subtypes of renal cell carcinomas (RCCs) using machine learning-based quantitative computed tomography (CT) texture analysis (qCT-TA).
MATERIALS AND METHODS: Sixty-eight RCCs were included in this retrospective study for model development and internal validation. Another 26 RCCs were included from public databases (The Cancer Genome Atlas, TCGA) for independent external validation. Following image preparation steps (reconstruction, resampling, normalization, and discretization), 275 texture features were extracted from unenhanced and corticomedullary phase CT images. Feature selection was first performed with a reproducibility analysis by three radiologists and then with a wrapper-based, classifier-specific algorithm. A nested cross-validation was performed for feature selection and model optimization. The base classifiers were the artificial neural network (ANN) and support vector machine (SVM); they were also combined with three additional algorithms to improve generalizability. Classifications were performed for the following groups: (i) non-clear cell RCC (non-cc-RCC) versus clear cell RCC (cc-RCC), and (ii) cc-RCC versus papillary cell RCC (pc-RCC) versus chromophobe cell RCC (chc-RCC). The main performance metric for comparisons was the Matthews correlation coefficient (MCC).
RESULTS: The number of reproducible features was smaller for the unenhanced images (93 of 275) than for the corticomedullary phase images (232 of 275). Overall performance metrics of the machine learning-based qCT-TA derived from corticomedullary phase images were better than those from unenhanced images. Using corticomedullary phase images, the ANN with an adaptive boosting algorithm performed best for discriminating non-cc-RCCs from cc-RCCs (MCC = 0.728), with an external validation accuracy, sensitivity, and specificity of 84.6%, 69.2%, and 100%, respectively. On the other hand, the performance of the machine learning-based qCT-TA was rather poor for distinguishing the three major subtypes: the SVM with a bagging algorithm performed best for discriminating pc-RCC from the other RCC subtypes (MCC = 0.804), with an external validation accuracy, sensitivity, and specificity of 69.2%, 71.4%, and 100%, respectively.
CONCLUSIONS: Machine learning-based qCT-TA can distinguish non-cc-RCCs from cc-RCCs with a satisfying performance. On the other hand, the performance of the method for distinguishing three major subtypes is rather poor. Corticomedullary phase CT images provide much more valuable texture parameters than unenhanced images.
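A minimal sketch of the nested cross-validation with a bagged SVM and MCC scoring that the study reports, using synthetic placeholder data; the hyper-parameter grid and fold counts are assumptions, and the `estimator` parameter path requires scikit-learn >= 1.2.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder for the 232 reproducible corticomedullary-phase features.
X, y = make_classification(n_samples=94, n_features=232, random_state=0)

# Inner loop tunes the SVM inside a bagging ensemble; the outer loop
# estimates generalization performance on held-out folds.
inner = GridSearchCV(
    make_pipeline(StandardScaler(), BaggingClassifier(SVC(), n_estimators=10)),
    param_grid={"baggingclassifier__estimator__C": [0.1, 1, 10]},
    cv=StratifiedKFold(3),
)
pred = cross_val_predict(inner, X, y, cv=StratifiedKFold(5))
print("MCC:", matthews_corrcoef(y, pred))
```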
Fusion Radiomics Features from Conventional MRI Predict MGMT Promoter Methylation Status in Lower Grade Gliomas
Jiang, Chendan
Kong, Ziren
Liu, Sirui
Feng, Shi
Zhang, Yiwei
Zhu, Ruizhe
Chen, Wenlin
Wang, Yuekun
Lyu, Yuelei
You, Hui
Zhao, Dachun
Wang, Renzhi
Wang, Yu
Ma, Wenbin
Feng, Feng
Eur J Radiol2019Journal Article, cited 0 times
TCGA-LGG
Radiomics
Radiogenomics
Classification
Magnetic Resonance Imaging (MRI)
PURPOSE: The methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter has been proven to be a prognostic and predictive biomarker for lower grade glioma (LGG). This study aims to build a radiomics model to preoperatively predict the MGMT promoter methylation status in LGG. METHOD: 122 pathology-confirmed LGG patients were retrospectively reviewed, with 87 local patients as the training dataset and 35 from The Cancer Imaging Archive as independent validation. A total of 1702 radiomics features were extracted from three-dimensional contrast-enhanced T1 (3D-CE-T1)-weighted and T2-weighted MRI images: 14 shape, 18 first-order, 75 texture, and 744 wavelet features per sequence. The radiomics features were selected with the least absolute shrinkage and selection operator (LASSO) algorithm, and prediction models were constructed with multiple classifiers. Models were evaluated using receiver operating characteristic (ROC) analysis. RESULTS: Five radiomics prediction models, namely, a 3D-CE-T1-weighted single radiomics model, a T2-weighted single radiomics model, a fusion radiomics model, a linear combination radiomics model, and a clinical integrated model, were built. The fusion radiomics model, constructed from the concatenation of features from both series, displayed the best performance, with an accuracy of 0.849 and an area under the curve (AUC) of 0.970 (0.939-1.000) in the training dataset, and an accuracy of 0.886 and an AUC of 0.898 (0.786-1.000) in the validation dataset. Linear combination of the single radiomics models and integration of clinical factors did not improve performance. CONCLUSIONS: Conventional MRI radiomics models are reliable for predicting the MGMT promoter methylation status in LGG patients. The fusion of radiomics features from different series may increase prediction performance.
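The LASSO selection step recurs across these radiomics studies; below is a minimal sketch using L1-penalized logistic regression on a placeholder feature matrix. The penalty strength C and the placeholder data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder for the 1702-feature radiomics matrix and MGMT labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(87, 1702))
y = (X[:, 0] > 0).astype(int)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X, y)
coef = model.named_steps["logisticregression"].coef_.ravel()
print(f"{np.count_nonzero(coef)} features survive the L1 penalty")
```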
Variability of manual segmentation of the prostate in axial T2-weighted MRI: A multi-reader study
Becker, A. S.
Chaitanya, K.
Schawkat, K.
Muehlematter, U. J.
Hotker, A. M.
Konukoglu, E.
Donati, O. F.
Eur J Radiol2019Journal Article, cited 3 times
Website
Prostate-3T
PROSTATE
Segmentation
Magnetic Resonance Imaging (MRI)
PURPOSE: To evaluate the interreader variability in prostate and seminal vesicle (SV) segmentation on T2w MRI. METHODS: Six readers segmented the peripheral zone (PZ), transitional zone (TZ) and SV slice-wise on axial T2w prostate MRI examinations of n = 80 patients. Twenty different similarity scores, including the Dice score (DS), Hausdorff distance (HD) and volumetric similarity coefficient (VS), were computed with the VISCERAL EvaluateSegmentation software for all structures combined and separately for the whole gland (WG = PZ + TZ), TZ and SV. Differences between base, midgland and apex were evaluated slice-wise with the DS. Descriptive statistics for the similarity scores were computed. Wilcoxon testing was performed to evaluate differences of DS, HD and VS. RESULTS: Overall segmentation variability was good, with a mean DS of 0.859 (+/-SD = 0.0542), HD of 36.6 (+/-34.9 voxels) and VS of 0.926 (+/-0.065). The WG showed a DS, HD and VS of 0.738 (+/-0.144), 36.2 (+/-35.6 vx) and 0.853 (+/-0.143), respectively. The TZ showed generally lower variability, with a DS of 0.738 (+/-0.144), HD of 24.8 (+/-16 vx) and VS of 0.908 (+/-0.126). The lowest variability was found for the SV, with a DS of 0.884 (+/-0.0407), HD of 17 (+/-10.9 vx) and VS of 0.936 (+/-0.0509). We found a markedly lower DS of the segmentations in the apex (0.85+/-0.12) compared to the base (0.87+/-0.10, p < 0.01) and the midgland (0.89+/-0.10, p < 0.001). CONCLUSIONS: We report baseline values for interreader variability of prostate and SV segmentation on T2w MRI. Variability was highest in the apex, lower in the base, and lowest in the midgland.
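Two of the reported similarity scores have simple closed forms; this is a short sketch of the Dice score and the volumetric similarity coefficient as defined in the VISCERAL EvaluateSegmentation tool (note that VS compares only volumes, not spatial overlap).

```python
import numpy as np

def dice(a, b):
    """Dice score between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def volumetric_similarity(a, b):
    """VS = 1 - |V_a - V_b| / (V_a + V_b); insensitive to mask position."""
    va, vb = a.sum(), b.sum()
    return 1.0 - abs(va - vb) / (va + vb)
```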
A CT-based deep learning model for predicting the nuclear grade of clear cell renal cell carcinoma
Lin, Fan
Ma, Changyi
Xu, Jinpeng
Lei, Yi
Li, Qing
Lan, Yong
Sun, Ming
Long, Wansheng
Cui, Enming
European Journal of Radiology2020Journal Article, cited 0 times
TCGA-KIRC
PURPOSE: To investigate the effects of different methodologies on the performance of deep learning (DL) model for differentiating high- from low-grade clear cell renal cell carcinoma (ccRCC).
METHOD: Patients with pathologically proven ccRCC diagnosed between October 2009 and March 2019 were assigned to the training or internal test dataset, and the external test dataset was acquired from The Cancer Genome Atlas-Kidney Renal Clear Cell Carcinoma (TCGA-KIRC) database. The effects of different methodologies on the performance of the DL model, including image cropping (IC), setting the attention level, selecting model complexity (MC), and applying transfer learning (TL), were compared using repeated-measures analysis of variance (ANOVA) and receiver operating characteristic (ROC) curve analysis. The performance of the DL model was evaluated through accuracy and ROC analyses on internal and external tests.
RESULTS: In this retrospective study, patients (n = 390) from one hospital were randomly assigned to the training (n = 370) or internal test dataset (n = 20), and another 20 patients from the TCGA-KIRC database were assigned to the external test dataset. IC, the attention level, MC, and TL had major effects on the performance of the DL model. The DL model based on image cropping to less than three times the tumor diameter, no attention, a simple model, and the application of TL achieved the best performance in internal (ACC = 73.7 ± 11.6%, AUC = 0.82 ± 0.11) and external (ACC = 77.9 ± 6.2%, AUC = 0.81 ± 0.04) tests.
CONCLUSIONS: CT-based DL model can be conveniently applied for grading ccRCC with simple IC in routine clinical practice.
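As an illustration of the transfer-learning (TL) component, here is a minimal torchvision sketch: an ImageNet-pretrained backbone with a new two-class head for high- versus low-grade ccRCC. The choice of ResNet-18 as the "simple model" and the freezing policy are assumptions; the weights API requires torchvision >= 0.13.

```python
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone with a new two-class head
# (high- vs. low-grade ccRCC).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

# One common TL policy: freeze the pretrained layers, train only the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")
```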
Quality control and whole-gland, zonal and lesion annotations for the PROSTATEx challenge public dataset
Cuocolo, R.
Stanzione, A.
Castaldo, A.
De Lucia, D. R.
Imbriaco, M.
Eur J Radiol2021Journal Article, cited 0 times
Website
PROSTATEx
Segmentation
Image classification
Machine Learning
PURPOSE: Radiomic features are promising quantitative parameters that can be extracted from medical images and employed to build machine learning predictive models. However, generalizability is a key concern, encouraging the use of public image datasets. We performed a quality assessment of the PROSTATEx training dataset and provide publicly available lesion, whole-gland, and zonal anatomy segmentation masks. METHOD: Two radiology residents and two experienced board-certified radiologists reviewed the 204 prostate MRI scans (330 lesions) included in the training dataset. The quality of each provided lesion coordinate was scored using the following scale: 0 = perfectly centered, 1 = within lesion, 2 = within the prostate without lesion, 3 = outside the prostate. All clearly detectable lesions were segmented individually slice-by-slice on T2-weighted and apparent diffusion coefficient images. With the same methodology, volumes of interest including the whole gland, transition, and peripheral zones were annotated. RESULTS: Of the 330 available lesion identifiers, 3 were duplicates (1%). Of the remaining, 218 received a score of 0, 74 a score of 1, 31 a score of 2, and 4 a score of 3. Overall, 299 lesions were verified and segmented. Independently of lesion coordinate score and other issues (e.g., lesion coordinates falling outside DICOM images, artifacts, etc.), the whole prostate gland and zonal anatomy were also manually annotated for all cases. CONCLUSION: While several issues were encountered evaluating the original PROSTATEx dataset, the improved quality and availability of lesion, whole-gland and zonal segmentations will increase its potential utility as a common benchmark in prostate MRI radiomics.
Development and external validation of a non-invasive molecular status predictor of chromosome 1p/19q co-deletion based on MRI radiomics analysis of Low Grade Glioma patients
Casale, R.
Lavrova, E.
Sanduleanu, S.
Woodruff, H. C.
Lambin, P.
Eur J Radiol2021Journal Article, cited 0 times
Website
Algorithm Development
TCGA-LGG
LGG-1p19qDeletion
Radiomics
BRAIN
PURPOSE: The 1p/19q co-deletion status has been demonstrated to be a prognostic biomarker in lower grade glioma (LGG). The objective of this study was to build a magnetic resonance imaging (MRI)-derived radiomics model to predict the 1p/19q co-deletion status. METHOD: 209 pathology-confirmed LGG patients from two different datasets from The Cancer Imaging Archive were retrospectively reviewed: one dataset with 159 patients as the training and discovery dataset, and the other with 50 patients as the validation dataset. Radiomics features were extracted from T2-weighted and T1-weighted post-contrast MRI data resampled using linear and cubic interpolation methods. For each voxel resampling method, a three-step approach was used for feature selection, and a random forest (RF) classifier was trained on the training dataset. Model performance was evaluated on the training and validation datasets, and clinical utility indexes (CUIs) were computed. The distributions and intercorrelation of the selected features were analyzed. RESULTS: Seven radiomics features were selected from the cubic-interpolated features and five from the linear-interpolated features on the training dataset. The RF classifier showed similar performance for cubic and linear interpolation methods on the training dataset, with accuracies of 0.81 (0.75-0.86) and 0.76 (0.71-0.82), respectively; on the validation dataset the accuracy dropped to 0.72 (0.6-0.82) using cubic interpolation and 0.72 (0.6-0.84) using linear resampling. CUIs showed that the model achieved satisfactory negative values (0.605 using cubic interpolation and 0.569 for linear interpolation). CONCLUSIONS: MRI has the potential for predicting the 1p/19q status in LGGs. Both cubic and linear interpolation methods showed similar performance in external validation.
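The linear-versus-cubic resampling comparison can be reproduced with SimpleITK; a sketch assuming resampling to 1 mm isotropic voxels (the paper's target spacing is not restated here).

```python
import SimpleITK as sitk

def resample_isotropic(image, spacing=(1.0, 1.0, 1.0), cubic=True):
    """Resample an MR volume to isotropic voxels with linear or cubic
    (B-spline) interpolation -- the two settings compared in the paper."""
    out_size = [
        int(round(sz * sp / ns))
        for sz, sp, ns in zip(image.GetSize(), image.GetSpacing(), spacing)
    ]
    return sitk.Resample(
        image, out_size, sitk.Transform(),
        sitk.sitkBSpline if cubic else sitk.sitkLinear,
        image.GetOrigin(), spacing, image.GetDirection(), 0.0,
        image.GetPixelID(),
    )
```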
Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program
Cui, X.
Zheng, S.
Heuvelmans, M. A.
Du, Y.
Sidorenkov, G.
Fan, S.
Li, Y.
Xie, Y.
Zhu, Z.
Dorrius, M. D.
Zhao, Y.
Veldhuis, R. N. J.
de Bock, G. H.
Oudkerk, M.
van Ooijen, P. M. A.
Vliegenthart, R.
Ye, Z.
Eur J Radiol2022Journal Article, cited 0 times
Website
LIDC-IDRI
National Lung Screening Trial (NLST)
Deep Learning
Early Detection of Cancer
Humans
Radiomics
Reproducibility of Results
Sensitivity and Specificity
Solitary Pulmonary Nodule/diagnostic imaging
Tomography, X-Ray Computed
Artificial intelligence
Computed Tomography (CT)
Computer Aided Diagnosis (CADx)
Pulmonary nodules
OBJECTIVE: To evaluate the performance of a deep learning-based computer-aided detection (DL-CAD) system in a Chinese low-dose CT (LDCT) lung cancer screening program. MATERIALS AND METHODS: One-hundred-and-eighty individuals with a lung nodule on their baseline LDCT lung cancer screening scan were randomly mixed with screenees without nodules in a 1:1 ratio (total: 360 individuals). All scans were assessed by double reading and subsequently processed by an academic DL-CAD system. The findings of double reading and the DL-CAD system were then evaluated by two senior radiologists to derive the reference standard. The detection performance was evaluated by the Free Response Operating Characteristic curve, sensitivity and false-positive (FP) rate. The senior radiologists categorized nodules according to nodule diameter, type (solid, part-solid, non-solid) and Lung-RADS. RESULTS: The reference standard consisted of 262 nodules >= 4 mm in 196 individuals; 359 findings were considered false positives. The DL-CAD system achieved a sensitivity of 90.1% with 1.0 FP/scan for detection of lung nodules regardless of size or type, whereas double reading had a sensitivity of 76.0% with 0.04 FP/scan (P = 0.001). The sensitivity for detection of nodules >= 4 mm and <= 6 mm was significantly higher with DL-CAD than with double reading (86.3% vs. 58.9%, respectively; P = 0.001). Sixty-three nodules were only identified by the DL-CAD system, and 27 nodules were only found by double reading. The DL-CAD system reached similar performance compared to double reading in Lung-RADS 3 (94.3% vs. 90.0%, P = 0.549) and Lung-RADS 4 nodules (100.0% vs. 97.0%, P = 1.000), but showed a higher sensitivity in Lung-RADS 2 (86.2% vs. 65.4%, P < 0.001). CONCLUSIONS: The DL-CAD system can accurately detect pulmonary nodules on LDCT, with an acceptable false-positive rate of 1 nodule per scan, and has higher detection performance than double reading. This DL-CAD system may assist radiologists in nodule detection in LDCT lung cancer screening.
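The two headline detection metrics are simple ratios; the sketch below recomputes them from the abstract's numbers (the count of 236 detected nodules is back-calculated from the 90.1% sensitivity and is therefore approximate).

```python
def detection_metrics(n_true_nodules, n_detected_true, n_false_positives, n_scans):
    """Per-scan sensitivity and FP rate as used to compare CAD vs. double reading."""
    sensitivity = n_detected_true / n_true_nodules
    fp_per_scan = n_false_positives / n_scans
    return sensitivity, fp_per_scan

# From the abstract (DL-CAD): 262 reference nodules, ~90.1% sensitivity and
# 1.0 FP/scan over 360 scans -- i.e. roughly 236 detected nodules and 360 FPs.
print(detection_metrics(262, 236, 360, 360))
```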
Exploration of a noninvasive radiomics classifier for breast cancer tumor microenvironment categorization and prognostic outcome prediction
Han, X.
Gong, Z.
Guo, Y.
Tang, W.
Wei, X.
Eur J Radiol2024Journal Article, cited 0 times
TCGA-BRCA
ISPY1
Breast Neoplasms
Machine Learning
Magnetic Resonance Imaging
Radiomics
Tumor Microenvironment
CIBERSORT
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Classification
K-means Clustering
Radiogenomics
Manual segmentation
Random Forest
RATIONALE AND OBJECTIVES: Breast cancer progression and treatment response are significantly influenced by the tumor microenvironment (TME). Traditional methods for assessing the TME are invasive, posing a challenge for patient care. This study introduces a non-invasive approach to TME classification by integrating radiomics and machine learning, aiming to predict TME status from imaging data and thereby aid prognostic outcome prediction. MATERIALS AND METHODS: Utilizing multi-omics data from The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA), this study employed the CIBERSORT and MCP-counter algorithms to analyze immune infiltration in breast cancer. A radiomics classifier was developed using a random forest algorithm, leveraging quantitative features extracted from intratumoral and peritumoral regions of Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) scans. The classifier's ability to predict diverse TME states, and the prognostic implications of those states, was evaluated using Kaplan-Meier survival curves. RESULTS: Three distinct TME states were identified using RNA-Seq data, each displaying unique prognostic and biological characteristics. Notably, patients with increased immune cell infiltration showed significantly improved prognoses (P < 0.05). The classifier, comprising 24 radiomic features, demonstrated high predictive accuracy (AUC of training set = 0.960, 95 % CI: 0.922, 0.997; AUC of testing set = 0.853, 95 % CI: 0.687, 1.000) in differentiating these TME states. Predictions from the classifier also correlated significantly with overall patient survival (P < 0.05). CONCLUSION: This study offers a detailed analysis of the complex TME states in breast cancer and presents a reliable, noninvasive radiomics classifier for TME assessment. The classifier's accurate prediction of TME status and its correlation with prognosis highlight its potential as a tool in personalized breast cancer treatment, paving the way for more individualized and less invasive therapeutic strategies.
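A minimal sketch of the survival comparison across predicted TME states using the lifelines library; the column names and placeholder data are assumptions, not the study's actual variables.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Assumed columns: 'time' (months), 'event' (1 = death observed), and
# 'tme_state' (0/1/2 as predicted by the radiomics classifier).
df = pd.DataFrame({"time": [12, 30, 45, 8, 60, 22],
                   "event": [1, 0, 0, 1, 0, 1],
                   "tme_state": [0, 1, 2, 0, 2, 1]})

kmf = KaplanMeierFitter()
for state, grp in df.groupby("tme_state"):
    kmf.fit(grp["time"], grp["event"], label=f"TME state {state}")
    print(f"TME state {state} median survival:", kmf.median_survival_time_)

res = multivariate_logrank_test(df["time"], df["tme_state"], df["event"])
print("log-rank p =", res.p_value)
```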
Exploring tumor heterogeneity in colorectal liver metastases by imaging: Unsupervised machine learning of preoperative CT radiomics features for prognostic stratification
Wang, Q.
Nilsson, H.
Xu, K.
Wei, X.
Chen, D.
Zhao, D.
Hu, X.
Wang, A.
Bai, G.
Eur J Radiol2024Journal Article, cited 0 times
Website
Colorectal-Liver-Metastases
Computed Tomography (CT)
Hepatectomy
Machine learning
Prognosis
Radiomics
OBJECTIVES: This study aimed to investigate tumor heterogeneity of colorectal liver metastases (CRLM) and stratify patients into different risk groups of prognosis following liver resection by applying an unsupervised radiomics machine-learning approach to preoperative CT images. METHODS: This retrospective study retrieved clinical information and CT images of 197 patients with CRLM from The Cancer Imaging Archive (TCIA) database. Radiomics features were extracted from a segmented liver lesion identified at the portal venous phase. Features showing high stability, non-redundancy, and indicative information were selected. An unsupervised consensus clustering analysis on these features was adopted to identify subgroups of CRLM patients. Overall survival (OS), disease-free survival (DFS), and liver-specific DFS were compared between the identified subgroups. Cox regression analysis was applied to evaluate prognostic risk factors. RESULTS: A total of 851 radiomics features were extracted, and 56 robust features were finally selected for the unsupervised clustering analysis, which identified two distinct subgroups (96 and 101 patients, respectively). There were significant differences in OS, DFS, and liver-specific DFS between the subgroups (all log-rank p < 0.05). The subgroup identified as higher risk by the proposed radiomics model was consistently associated with shorter OS, DFS, and liver-specific DFS, with hazard ratios of 1.78 (95 %CI: 1.12-2.83), 1.72 (95 %CI: 1.16-2.54), and 1.59 (95 %CI: 1.10-2.31), respectively. The general performance of this radiomics model outperformed the traditional Clinical Risk Score and Tumor Burden Score in prognosis prediction after surgery for CRLM. CONCLUSION: Radiomics features derived from preoperative CT images can reveal the heterogeneity of CRLM and stratify patients with CRLM into subgroups with significantly different clinical outcomes.
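A simplified stand-in for the unsupervised consensus clustering step: cluster repeated subsamples, accumulate a co-association matrix, and cluster that matrix. The subsample fraction, run count, and the final k-means step are assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def consensus_clusters(X, k=2, n_runs=100, frac=0.8, seed=0):
    """Toy consensus clustering over the selected radiomics features."""
    rng = np.random.default_rng(seed)
    n = len(X)
    co = np.zeros((n, n))      # times two samples land in the same cluster
    counts = np.zeros((n, n))  # times two samples were subsampled together
    for _ in range(n_runs):
        idx = rng.choice(n, int(frac * n), replace=False)
        labels = KMeans(k, n_init=10,
                        random_state=int(rng.integers(1 << 31))).fit_predict(X[idx])
        same = labels[:, None] == labels[None, :]
        co[np.ix_(idx, idx)] += same
        counts[np.ix_(idx, idx)] += 1
    consensus = np.divide(co, counts, out=np.zeros_like(co), where=counts > 0)
    # Final assignment: k-means on the rows of the consensus matrix.
    return KMeans(k, n_init=10, random_state=seed).fit_predict(consensus)
```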
Assessing the stability and discriminative ability of radiomics features in the tumor microenvironment: Leveraging peri-tumoral regions in vestibular schwannoma
Hosseini, Mahboube Sadat
Aghamiri, Seyed Mahmoud Reza
Fatemi Ardekani, Ali
BagheriMofidi, Seyed Mehdi
European Journal of Radiology2024Journal Article, cited 0 times
Vestibular-Schwannoma-SEG
Radiomics
Tumor microenvironment
Vestibular Schwannoma
Peri-tumoral regions
Stability analysis
Purpose
The tumor microenvironment (TME) plays a crucial role in tumor progression and treatment response. Radiomics offers a non-invasive approach to studying the TME by extracting quantitative features from medical images. In this study, we present a novel approach to assess the stability and discriminative ability of radiomics features in the TME of vestibular schwannoma (VS).
Methods
Magnetic Resonance Imaging (MRI) data from 242 VS patients were analyzed, including contrast-enhanced T1-weighted (ceT1) and high-resolution T2-weighted (hrT2) sequences. Radiomics features were extracted from concentric peri-tumoral regions of varying sizes. The intraclass correlation coefficient (ICC) was used to assess feature stability and discriminative ability, establishing quantile thresholds for ICCmin and ICCmax.
Results
The identified thresholds for ICCmin and ICCmax were 0.45 and 0.72, respectively. Features were classified into four categories: stable and discriminative (S-D), stable and non-discriminative (S-ND), unstable and discriminative (US-D), and unstable and non-discriminative (US-ND). Different feature groups exhibited varying proportions of S-D features across ceT1 and hrT2 sequences. The similarity of S-D features between ceT1 and hrT2 sequences was evaluated using Jaccard's index, with an overall value of 0.78 across all feature groups, ranging from 0.68 (intensity features) to 1.00 (Neighbouring Gray Tone Difference Matrix (NGTDM) features).
Conclusions
This study provides a framework for identifying stable and discriminative radiomics features in the TME, which could serve as potential biomarkers or predictors of patient outcomes, ultimately improving the management of VS patients.
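ICC computation of the kind underlying ICCmin/ICCmax can be done with the pingouin package; in this sketch the peri-tumoral regions play the role of "raters", which is an assumption about how the paper maps its design onto the ICC model, and the data frame is a placeholder.

```python
import pandas as pd
import pingouin as pg

# Placeholder: one feature's values for 3 patients across 2 peri-tumoral rings.
long_df = pd.DataFrame({
    "patient": [1, 1, 2, 2, 3, 3],
    "region":  ["r1", "r2"] * 3,
    "value":   [0.8, 0.9, 1.2, 1.1, 0.5, 0.6],
})
icc = pg.intraclass_corr(data=long_df, targets="patient",
                         raters="region", ratings="value")
print(icc[["Type", "ICC"]])
# The paper then compares each feature's ICCs against the quantile thresholds
# 0.45 (ICCmin) and 0.72 (ICCmax) to assign it to S-D, S-ND, US-D or US-ND.
```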
Novel radiomic analysis on bi-parametric MRI for characterizing differences between MR non-visible and visible clinically significant prostate cancer
Li, Lin
Shiradkar, Rakesh
Tirumani, Sree Harsha
Bittencourt, Leonardo Kayat
Fu, Pingfu
Mahran, Amr
Buzzy, Christina
Stricker, Phillip D.
Rastinehad, Ardeshir R.
Magi-Galluzzi, Cristina
Ponsky, Lee
Klein, Eric
Purysko, Andrei S.
Madabhushi, Anant
2023Journal Article, cited 0 times
PROSTATEx
Background: Around one-third of clinically significant prostate cancer (CsPCa) foci are reported to be MRI non-visible (MRI─).
Objective: To quantify the differences between MR visible (MRI+) and MRI─ CsPCa using intra- and peri-lesional radiomic features on bi-parametric MRI (bpMRI).
Methods: This retrospective, multi-institutional study comprised 164 patients with pre-biopsy 3T prostate multi-parametric MRI acquired from 2014 to 2017. MRI─ CsPCa referred to lesions with a PI-RADS v2 score < 3 but an ISUP grade group > 1. Three experienced radiologists were involved in annotating lesions and PI-RADS assignment. The validation set (Dv) comprised 52 patients from a single institution; the remaining 112 patients were used for training (Dt). 200 radiomic features were extracted from intra-lesional and peri-lesional regions on bpMRI. Logistic regression with least absolute shrinkage and selection operator (LASSO) and 10-fold cross-validation was applied on Dt to identify radiomic features associated with MRI─ and MRI+ CsPCa and to generate corresponding risk scores RMRI─ and RMRI+. RbpMRI was further generated by integrating RMRI─ and RMRI+. Statistical significance was determined using the Wilcoxon signed-rank test.
Results: Both intra-lesional and peri-lesional bpMRI Haralick and CoLlAGe radiomic features were significantly associated with MRI─ CsPCa (p < 0.05). Intra-lesional ADC Haralick and CoLlAGe radiomic features were significantly different among MRI─ and MRI+ CsPCa (p < 0.05). RbpMRI yielded the highest AUC of 0.82 (95 % CI 0.72-0.91) compared to AUCs of RMRI+ 0.76 (95 % CI 0.63-0.89), and PI-RADS 0.58 (95 % CI 0.50-0.72) on Dv. RbpMRI correctly reclassified 10 out of 14 MRI─ CsPCa on Dv.
Conclusion: Our preliminary results demonstrated that both intra-lesional and peri-lesional bpMRI radiomic features were significantly associated with MRI─ CsPCa. These features could assist in CsPCa identification on bpMRI.
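A generic sketch of the Haralick-type co-occurrence features mentioned above, using scikit-image (>= 0.19 for the graycomatrix/graycoprops names); quantization to 32 levels, the distance/angle set, and the property list are assumptions, and CoLlAGe features are not covered here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_like_features(patch, levels=32):
    """Co-occurrence texture features for an intra- or peri-lesional patch."""
    # Quantize the (non-constant) patch to `levels` gray levels in [0, levels).
    edges = np.linspace(patch.min(), patch.max(), levels)
    q = (np.digitize(patch, edges) - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, normed=True)
    # Average each property over the distance/angle combinations.
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}

print(haralick_like_features(np.random.default_rng(0).normal(size=(64, 64))))
```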
Comprehensive Assessment of Postoperative Recurrence and Survival in Patients with Cervical Cancer
Background
The prediction of postoperative recurrence and survival in cervical cancer patients has been a major clinical challenge. The combination of clinical parameters, inflammatory markers, intravoxel incoherent motion diffusion-weighted imaging (IVIM-DWI), and MRI-derived radiomics is expected to support the prediction of recurrence-free survival (RFS), disease-free survival (DFS), cancer-specific survival (CSS), and overall survival (OS) of cervical cancer patients after surgery.
Methods
A retrospective analysis of 181 cervical cancer patients with continuous follow-up was completed. The parameters of IVIM-DWI and radiomics were measured, analyzed, and screened. The LASSO regularization was used to calculate the radiomics score (Rad-score). Multivariate Cox regression analysis was used to construct nomogram models for predicting postoperative RFS, DFS, CSS, and OS in cervical cancer patients, with internal and external validation.
Results
Clinical stage, parametrial infiltration, internal irradiation, D-value, and Rad-score were independent prognostic factors for RFS; squamous cell carcinoma antigen, internal irradiation, D-value, f-value, and Rad-score for DFS; maximum tumor diameter, lymph node metastasis, platelets, D-value, and Rad-score for CSS; and lymph node metastasis, systemic inflammation response index, D-value, and Rad-score for OS. At 1, 3, and 5 years, the AUCs of the models were 0.985, 0.929, and 0.910 for RFS; 0.833, 0.818, and 0.816 for DFS; 0.832, 0.863, and 0.891 for CSS; and 0.804, 0.812, and 0.870 for OS.
Conclusions
Nomograms based on clinical and imaging parameters showed high clinical value in predicting postoperative RFS, DFS, CSS, and OS of cervical cancer patients and can be used as prognostic markers.
Automated liver tissues delineation techniques: A systematic survey on machine learning current trends and future orientations
Al-Kababji, Ayman
Bensaali, Faycal
Dakua, Sarada Prasad
Himeur, Yassine
2023Journal Article, cited 0 times
CT-ORG
Pancreas-CT
Machine Learning
Machine learning and computer vision techniques have grown rapidly in recent years due to their automation, suitability, and ability to generate astounding results. Hence, in this paper, we survey the key studies published between 2014 and 2022, showcasing the different machine learning algorithms researchers have used to segment the liver, hepatic tumors, and hepatic-vasculature structures. We divide the surveyed studies based on the tissue of interest (hepatic parenchyma, hepatic tumors, or hepatic vessels), highlighting the studies that tackle more than one task simultaneously. Additionally, the machine learning algorithms are classified as either supervised or unsupervised, and they are further partitioned if the amount of work that falls under a certain scheme is significant. Moreover, different datasets and challenges found in the literature and websites containing masks of the aforementioned tissues are thoroughly discussed, highlighting the organizers' original contributions and those of other researchers. The metrics used extensively in the literature are also mentioned in our review, stressing their relevance to the task at hand. Finally, critical challenges and future directions are emphasized for innovative researchers to tackle, exposing gaps that need addressing, such as the scarcity of studies on the vessel-segmentation challenge and why their absence needs to be dealt with sooner rather than later.
Detecting Lung Abnormalities From X-rays Using an Improved SSL Algorithm
Livieris, Ioannis
Kanavos, Andreas
Pintelas, Panagiotis
Electronic Notes in Theoretical Computer Science2019Journal Article, cited 0 times
TCGA-LUAD
Classification
Automatic tumor segmentation in single-spectral MRI using a texture-based and contour-based algorithm
Nabizadeh, Nooshin
Kubat, Miroslav
Expert Systems with Applications2017Journal Article, cited 8 times
Website
NCI-MICCAI 2013 Challenge
MRI
Segmentation
Texture features
Regularized Winnow
Skippy greedy snake
BRAIN
Automatic detection of brain tumors in single-spectral magnetic resonance images is a challenging task. Existing techniques suffer from inadequate performance, dependence on initial assumptions, and, sometimes, the need for manual interference. The research reported in this paper seeks to reduce some of these shortcomings, and to remove others, achieving satisfactory performance at reasonable computational costs. The success of the system described here is explained by the synergy of the following aspects: (1) a broad choice of high-level features to characterize the image's texture, (2) an efficient mechanism to eliminate less useful features, (3) a machine-learning technique to induce a classifier that signals the presence of tumor-affected tissue, and (4) an improved version of the skippy greedy snake algorithm to outline the tumor's contours. The paper describes the system and reports experiments with synthetic as well as real data.
Deep Feature Learning For Soft Tissue Sarcoma Classification In MR Images Via Transfer Learning
Hermessi, Haithem
Mourali, Olfa
Zagrouba, Ezzeddine
Expert Systems with Applications2018Journal Article, cited 0 times
Website
Soft Tissue Sarcoma
Liposarcomas
Leiomyosarcomas
Map-Reduce based tipping point scheduler for parallel image processing
Akhtar, Mohammad Nishat
Saleh, Junita Mohamad
Awais, Habib
Bakar, Elmi Abu
Expert Systems with Applications2020Journal Article, cited 0 times
Website
LIDC-IDRI
Algorithm Development
Segmentation
Nowadays, Big Data image processing is very much in demand due to its proven success in the fields of business information systems, medical science and social media. However, the computation of Big Data images is becoming more complex, which ultimately results in complex resource management and longer task execution times. Researchers have been using a combination of CPU- and GPU-based computing to cut down the execution time; however, when it comes to scaling compute nodes, the combination of CPU- and GPU-based computing still remains a challenge due to the high communication cost. To tackle this issue, the Map-Reduce framework has come out to be a viable option, as its workflow optimization can be enhanced by changing its underlying job scheduling mechanism. This paper presents a comparative study of job scheduling algorithms that could be deployed over various Big Data-based image processing applications and also proposes a tipping point scheduling algorithm to optimize the workflow for job execution on multiple nodes. The proposed scheduling algorithm is evaluated by implementing a parallel image segmentation algorithm to detect lung tumors on image datasets of up to 3 GB. In terms of performance, comprising task execution time and throughput, the proposed tipping point scheduler comes out to be the best scheduler, followed by the Map-Reduce based Fair scheduler: it is 1.14 times better than the Map-Reduce based Fair scheduler and 1.33 times better than the Map-Reduce based FIFO scheduler. In terms of speedup between single-node and multi-node architectures, the proposed tipping point scheduler attained a speedup of 4.5X for the multi-node architecture.
T2-FDL: A robust sparse representation method using adaptive type-2 fuzzy dictionary learning for medical image classification
Ghasemi, Majid
Kelarestaghi, Manoochehr
Eshghi, Farshad
Sharifi, Arash
Expert Systems with Applications2020Journal Article, cited 0 times
Website
REMBRANDT
TCGA-LGG
BRAIN
Machine Learning
In this paper, a robust sparse representation for medical image classification is proposed based on an adaptive type-2 fuzzy dictionary learning (T2-FDL) system. In the proposed method, sparse coding and dictionary learning processes are executed iteratively until a near-optimal dictionary is obtained. The sparse coding step aims at finding a combination of dictionary atoms that represents the input data efficiently, and the dictionary learning step rigorously adjusts a minimal set of dictionary items. The two-step operation helps create an adaptive sparse representation algorithm by involving type-2 fuzzy sets in the design of the image classifier. Since the available image measurements are not made under the same conditions and with the same accuracy, the performance of medical diagnosis is always affected by noise and uncertainty. By introducing an adaptive type-2 fuzzy learning method, a better approximation is achieved in environments with higher degrees of uncertainty and noise. The experiments are executed over two open-access brain tumor magnetic resonance image databases, REMBRANDT and TCGA-LGG, from The Cancer Imaging Archive (TCIA). The experimental results of a brain tumor classification task show that the proposed T2-FDL method can adequately minimize the negative effects of uncertainty in the input images. The results demonstrate that T2-FDL outperforms other important classification methods in the literature in terms of accuracy, specificity, and sensitivity.
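As a point of reference for the sparse-coding/dictionary-update alternation, here is standard (non-fuzzy) dictionary learning with scikit-learn on placeholder patches; the type-2 fuzzy weighting that distinguishes T2-FDL is not part of this sketch, and all sizes are assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Placeholder 8x8 image patches, flattened to 64-dimensional vectors.
rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 64))

# Alternates sparse coding and dictionary updates internally.
dl = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                        transform_algorithm="lasso_lars", random_state=0)
codes = dl.fit_transform(patches)   # sparse code per patch (500 x 32)
dictionary = dl.components_         # learned atoms (32 x 64)
print("mean nonzeros per code:", np.count_nonzero(codes, axis=1).mean())
```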
Classification of non-small cell lung cancer using one-dimensional convolutional neural network
Moitra, Dipanjan
Kr. Mandal, Rakesh
Expert Systems with Applications2020Journal Article, cited 0 times
Website
NSCLC Radiogenomics
Lung cancer
Deep learning
Convolutional Neural Network (CNN)
Non-Small Cell Lung Cancer (NSCLC) is a major lung cancer type. Proper diagnosis depends mainly on tumor staging and grading. Pathological prognosis often faces problems because of the limited availability of tissue samples. Machine learning methods may play a vital role in such cases. 2D and 3D Deep Neural Networks (DNNs) have been the predominant technology in this domain, and contemporary studies have tried to classify NSCLC tumors as benign or malignant; the application of 1D CNNs to automated staging and grading of NSCLC is much less frequent. The aim of the present study is to develop a 1D CNN model for automated staging and grading of NSCLC. The updated NSCLC Radiogenomics Collection from The Cancer Imaging Archive (TCIA) was used in the study. The segmented tumor images were fed into a hybrid feature detection and extraction model (MSER-SURF). The extracted features were combined with the clinical TNM stage and histopathological grade information and fed into the 1D CNN model. The performance of the proposed CNN model was satisfactory: the accuracy and ROC-AUC score were higher than those of other leading machine learning methods, and the study compared well with state-of-the-art studies. The proposed model shows that a 1D CNN is as useful for NSCLC prediction as a conventional 2D/3D CNN model. The model may be further refined by experimenting with varied hyper-parameters, and further studies may consider semi-supervised or unsupervised learning techniques.
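A minimal PyTorch sketch of a 1D CNN over a feature vector (e.g., extracted image features concatenated with TNM stage and grade); the layer sizes, input length, and four-class output are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class OneDCNN(nn.Module):
    def __init__(self, n_in=128, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                # x: (batch, n_in)
        return self.net(x.unsqueeze(1))  # add channel dim -> (batch, 1, n_in)

logits = OneDCNN()(torch.randn(8, 128))
print(logits.shape)  # torch.Size([8, 4])
```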
3D automatic levels propagation approach to breast MRI tumor segmentation
Bouchebbah, Fatah
Slimani, Hachem
Expert Systems with Applications2020Journal Article, cited 0 times
Website
RIDER Breast MRI
Segmentation
Magnetic Resonance Imaging (MRI) is a relevant tool for breast cancer screening, and an accurate 3D segmentation of breast tumors from MRI scans plays a key role in the analysis of the disease. In this manuscript, we propose a novel 3D automatic method for segmenting MRI breast tumors, called the 3D Automatic Levels Propagation Approach (3D-ALPA). The proposed method performs the segmentation automatically in two steps: in the first step, the entire MRI volume is segmented slice by slice using a new automatic approach called the 2D Automatic Levels Propagation Approach (2D-ALPA), an improved version of a previous semi-automatic approach named the 2D Levels Propagation Approach (2D-LPA). In the second step, the partial segmentations obtained after the application of 2D-ALPA are recombined to rebuild the complete volume(s) of tumor(s). 3D-ALPA has several notable characteristics: it is an automatic method that can handle multi-tumor segmentation, and it is easily applicable in the Axial, Coronal, and Sagittal planes, thereby offering a multi-view representation of the segmented tumor(s). To validate the new 3D-ALPA method, we first performed tests on a 2D private dataset of eighteen patients to estimate the accuracy of the new 2D-ALPA against the previous 2D-LPA. The obtained results favored the proposed 2D-ALPA, showing an improvement in accuracy after integrating automatization into the approach. We then evaluated the complete 3D-ALPA method on a 3D private dataset of MRI exams from twenty-two patients with real breast tumors of different types, and on the public RIDER dataset. 3D-ALPA was evaluated with regard to two main features, segmentation accuracy and running time, considering two kinds of breast tumors: non-enhanced and enhanced. The experimental studies showed that 3D-ALPA produced better results for both kinds of tumors than a recent competing method in the literature addressing the same problem.
Hierarchical deep multi-modal network for medical visual question answering
Gupta, Deepak
Suman, Swati
Ekbal, Asif
Expert Systems with Applications2021Journal Article, cited 0 times
Head-Neck-PET-CT
LGG-1p19qDeletion
MRI-DIR
NSCLC Radiogenomics
Visual Question Answering in the Medical domain (VQA-Med) plays an important role in providing medical assistance to end-users. These users are expected to raise either a straightforward question with a Yes/No answer or a challenging question that requires a detailed and descriptive answer. Existing techniques in VQA-Med fail to distinguish between the different question types, sometimes over-complicating the simpler problems or over-simplifying the complicated ones, and maintaining several distinct systems for different question types can lead to confusion and discomfort for end-users. To address this issue, we propose a hierarchical deep multi-modal network that analyzes and classifies end-user questions/queries and then incorporates a query-specific approach for answer prediction. We refer to our proposed approach as Hierarchical Question Segregation based Visual Question Answering, in short HQS-VQA. Our contributions are three-fold, viz. firstly, we propose a question segregation (QS) technique for VQA-Med; secondly, we integrate the QS model into the hierarchical deep multi-modal neural network to generate proper answers to queries related to medical images; and thirdly, we study the impact of QS in Medical-VQA by comparing the performance of the proposed model with QS against a model without QS. We evaluate the performance of our proposed model on two benchmark datasets, viz. RAD and CLEF18. Experimental results show that our proposed HQS-VQA technique outperforms the baseline models with significant margins. We also conduct a detailed quantitative and qualitative analysis of the obtained results and discover potential causes of errors and their solutions.
Deep hybrid neural-like P systems for multiorgan segmentation in head and neck CT/MR images
Xue, Jie
Wang, Yuan
Kong, Deting
Wu, Feiyang
Yin, Anjie
Qu, Jianhua
Liu, Xiyu
Expert Systems with Applications2021Journal Article, cited 0 times
Website
AAPM RT-MAC
Convolutional Neural Network (CNN)
Segmentation
Automatic segmentation of organs-at-risk (OARs) of the head and neck, such as the brainstem, the left and right parotid glands, mandible, optic chiasm, and the left and right optic nerves, are crucial when formulating radiotherapy plans. However, there are difficulties due to (1) the small sizes of these organs (especially the optic chiasm and optic nerves) and (2) the different positions and phenotypes of the OARs. In this paper, we propose a novel, automatic multiorgan segmentation algorithm based on a new hybrid neural-like P system, to alleviate the above challenges. The new P system possesses the joint advantages of cell-like and neural-like P systems and includes new structures and rules, allowing it to solve more real-world problems in parallelism. In the new P system, effective ensemble convolutional neural networks (CNNs) are implemented with different initializations simultaneously to perform pixel-wise segmentations of OARs, which can obtain more effective features and leverage the strength of ensemble learning. Evaluations on three public datasets show the effectiveness and robustness of the proposed algorithm for accurate OARs segmentation in various image modalities.
A multi-task CNN approach for lung nodule malignancy classification and characterization
Marques, Sónia
Schiavo, Filippo
Ferreira, Carlos A.
Pedrosa, João
Cunha, António
Campilho, Aurélio
Expert Systems with Applications2021Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
Imaging features
Radiomic features
LUNG
Models
BatchNorm
ReLU
Softmax
Lung cancer is the type of cancer with highest mortality worldwide. Low-dose computerized tomography is the main tool used for lung cancer screening in clinical practice, allowing the visualization of lung nodules and the assessment of their malignancy. However, this evaluation is a complex task and subject to inter-observer variability, which has fueled the need for computer-aided diagnosis systems for lung nodule malignancy classification. While promising results have been obtained with automatic methods, it is often not straightforward to determine which features a given model is basing its decisions on and this lack of explainability can be a significant stumbling block in guaranteeing the adoption of automatic systems in clinical scenarios. Though visual malignancy assessment has a subjective component, radiologists strongly base their decision on nodule features such as nodule spiculation and texture, and a malignancy classification model should thus follow the same rationale. As such, this study focuses on the characterization of lung nodules as a means for the classification of nodules in terms of malignancy. For this purpose, different model architectures for nodule characterization are proposed and compared, with the final goal of malignancy classification. It is shown that models that combine direct malignancy prediction with specific branches for nodule characterization have a better performance than the remaining models, achieving an Area Under the Curve of 0.783. The most relevant features for malignancy classification according to the model were lobulation, spiculation and texture, which is found to be in line with current clinical practice.
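A sketch of the multi-task idea the study compares: a shared backbone with a malignancy head plus separate heads for nodule attributes such as lobulation, spiculation, and texture. The layer sizes, 2D input, and exact attribute set are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskNodule(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.malignancy = nn.Linear(32, 1)  # main task: malignancy score
        self.attributes = nn.ModuleDict(    # side tasks: characterization
            {k: nn.Linear(32, 1) for k in ("lobulation", "spiculation", "texture")}
        )

    def forward(self, x):
        h = self.backbone(x)
        return self.malignancy(h), {k: head(h) for k, head in self.attributes.items()}

# Training would combine a malignancy loss with per-attribute losses.
mal, attrs = MultiTaskNodule()(torch.randn(4, 1, 64, 64))
print(mal.shape, {k: v.shape for k, v in attrs.items()})
```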
An automated slice sorting technique for multi-slice computed tomography liver cancer images using convolutional network
Kaur, Amandeep
Chauhan, Ajay Pal Singh
Aggarwal, Ashwani Kumar
Expert Systems with Applications2021Journal Article, cited 1 times
Website
CT-ORG
LIVER
Classification
An early detection and diagnosis of liver cancer can help the radiation therapist in choosing the target area and the amount of radiation dose to be delivered to the patients. Radiologists usually spend a lot of time selecting the most relevant slices from the thousands of scans typically obtained from multi-slice CT scanners. The purpose of this paper is multi-organ classification of 3D CT images of suspected liver cancer patients using a convolutional network. A dataset consisting of 63503 CT images of liver cancer patients taken from The Cancer Imaging Archive (TCIA) has been used to validate the proposed method, a CNN for classification of CT liver cancer images. The classification results in terms of accuracy, precision, sensitivity, specificity, true positive rate, false negative rate, and F1 score have been computed. The results show a high validation accuracy of 99.1% when the convolutional network is trained with the data-augmented volume slices, compared to an accuracy of 98.7% obtained with the original volume slices. The overall test accuracy for the data-augmented volume-slice dataset is 93.1%, superior to that of the other volume slices. The main contribution of this work is that it will help the radiation therapist to focus on a small subset of CT image data. This is achieved by segregating the whole set of 63503 CT images into three categories based on the likelihood of the spread of cancer to other organs in suspected liver cancer patients. Consequently, only 19453 CT images had liver visible in them, making the rest of the 44050 CT images less relevant for liver cancer detection. The proposed method will help in the rapid diagnosis and treatment of liver cancer patients.
COVID-19 detection on Chest X-ray images: A comparison of CNN architectures and ensembles
Breve, Fabricio Aparecido
Expert Systems with Applications2022Journal Article, cited 0 times
MIDRC-RICORD-1C
COVID-19 quickly became a global pandemic after only four months of its first detection. It is crucial to detect this disease as soon as possible to decrease its spread. The use of chest X-ray (CXR) images became an effective screening strategy, complementary to the reverse transcription-polymerase chain reaction (RT-PCR). Convolutional neural networks (CNNs) are often used for automatic image classification and they can be very useful in CXR diagnostics. In this paper, 21 different CNN architectures are tested and compared in the task of identifying COVID-19 in CXR images. They were applied to the COVIDx8B dataset, a large COVID-19 dataset with 16,352 CXR images coming from patients of at least 51 countries. Ensembles of CNNs were also employed and they showed better efficacy than individual instances. The best individual CNN instance results were achieved by DenseNet169, with an accuracy of 98.15% and an F1 score of 98.12%. These were further increased to 99.25% and 99.24%, respectively, through an ensemble with five instances of DenseNet169. These results are higher than those obtained in recent works using the same dataset.
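The ensembling step is straightforward to illustrate; whether the paper uses soft or hard voting is not restated here, so this sketch shows soft voting (averaging per-model class probabilities) on placeholder outputs.

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-model class probabilities (soft voting) and take argmax."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# e.g. five CNN instances' softmax outputs on the same four-image test batch:
rng = np.random.default_rng(0)
probs = [rng.dirichlet([1, 1], size=4) for _ in range(5)]  # placeholder
print(ensemble_predict(probs))
```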
Complete fully automatic detection, segmentation and 3D reconstruction of tumor volume for non-small cell lung cancer using YOLOv4 and region-based active contour model
Dlamini, S.
Chen, Y. H.
Kuo, C. F. J.
Expert Systems with Applications2023Journal Article, cited 0 times
QIN LUNG CT
LIDC-IDRI
LUNA16 Challenge
lung tumor
detection
segmentation
active contour
YOLO
k-means
treatment response
classification
networks
accurate
nodules
level
We aim to develop a fully automatic system that detects, segments and accurately reconstructs non-small cell lung cancer tumor volume in space using YOLOv4 and a region-based active contour model. The system consists of two main sections: detection and volumetric rendering. The detection section comprises image enhancement, augmentation, labeling and localization, while the volumetric rendering section mainly comprises image filtering, tumor extraction, region-based active contouring and 3D reconstruction. In this method the images are enhanced to eliminate noise before augmentation, which is intended to multiply and diversify the image data. Labeling was then carried out in order to create a solid learning foundation for the localization model. Images with localized tumors were passed through smoothing filters and then clustered to extract tumor masks. Lastly, contour information was obtained to render the volumetric tumor. The designed system displays a strong detection performance, with a precision of 96.57%, and sensitivity and F1 score of 97.02% and 96.79%, respectively, at a detection speed of 34 fps and a prediction time per image of 21.38 ms. The system's segmentation validation achieved a Dice similarity coefficient of 92.19% on tumor extraction. A 99.74% accuracy was obtained when verifying the method's volumetric rendering against a 3D-printed model of the rendered tumor. The rendering of the volumetric tumor was obtained in an average time of 11 s. This system shows strong performance and reliability owing to its ability to detect, segment and reconstruct a volumetric tumor in space with high confidence.
Multiple medical image encryption algorithm based on scrambling of region of interest and diffusion of odd-even interleaved points
Wang, Xingyuan
Wang, Yafei
Expert Systems with Applications2023Journal Article, cited 0 times
Website
TCGA-KIRP
Security
Due to the security requirements brought by the rapid development of electronic medicine, this paper proposes an encryption algorithm for multiple medical images. The algorithm can not only encrypt grayscale medical images of any number and any size at the same time, but also achieves a good encryption effect when applied to color images. Considering the characteristics of medical images, we design an encryption algorithm based on the region of interest (ROI). First, the regions of interest of the plaintext images are extracted and their coordinates obtained, and the hash value of the large image composed of all plaintext images is calculated. The coordinates and hash value are set as the secret key. This operation ties the whole encryption algorithm closely to the plaintext images, which greatly enhances the ability to resist chosen-plaintext attacks and improves the security of the algorithm. In the encryption process, chaotic sequences generated by the Logistic-Tent chaotic system (LTS) are used to perform two scrambling operations and one diffusion operation: pixel swapping based on the region of interest, Fisher-Yates scrambling, and our newly proposed diffusion algorithm based on odd-even interleaved points. Testing and performance analysis show that the algorithm achieves a good encryption effect, resists various attacks, and offers a high security level and fast encryption speed.
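Two of the building blocks are classical and easy to sketch: a combined Logistic-Tent chaotic sequence and a chaos-driven Fisher-Yates scramble. The map below is one common published form of the Logistic-Tent system and the indexing scheme is an assumption; the authors' exact map, key schedule, and ROI-based steps are not reproduced here.

```python
import numpy as np

def logistic_tent_sequence(x0=0.37, r=3.99, n=10_000):
    """One common form of the combined Logistic-Tent map (the abstract's LTS);
    the exact map used by the authors may differ."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        if x < 0.5:
            x = (r * x * (1 - x) + (4 - r) * x / 2) % 1
        else:
            x = (r * x * (1 - x) + (4 - r) * (1 - x) / 2) % 1
        seq[i] = x
    return seq

def fisher_yates_scramble(pixels, chaos):
    """Fisher-Yates shuffle driven by the chaotic sequence instead of a RNG."""
    flat = pixels.ravel().copy()
    for i in range(len(flat) - 1, 0, -1):
        j = int(chaos[i % len(chaos)] * (i + 1))  # chaotic index in [0, i]
        flat[i], flat[j] = flat[j], flat[i]
    return flat.reshape(pixels.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(fisher_yates_scramble(img, logistic_tent_sequence()))
```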
QGFormer: Queries-guided transformer for flexible medical image synthesis with domain missing
Hao, Huaibo
Xue, Jie
Huang, Pu
Ren, Liwen
Li, Dengwang
Expert Systems with Applications2024Journal Article, cited 0 times
Website
RSNA-ASNR-MICCAI BraTS 2021
GammaKnife-Hippocampal
BraTS 2021
Synthetic images
Multi-modal imaging
Domain missing poses a common challenge in medical clinical practice, limiting diagnostic accuracy compared to the complete multi-domain images that provide complementary information. We propose QGFormer to address this issue by flexibly imputing missing domains from any available source domain using a single model, which is challenging due to (1) the inherent limitation of CNNs to capture long-range dependencies, (2) the difficulty in modeling the inter- and intra-domain dependencies of multi-domain images, and (3) inefficiencies in fusing domain-specific features associated with missing domains. To tackle these challenges, we introduce two spatial-domanial attentions (SDAs), which establish intra-domain (spatial dimension) and inter-domain (domain dimension) dependencies independently or jointly. QGFormer, constructed based on SDAs, comprises three components: Encoder, Decoder and Fusion. The Encoder and Decoder form the backbone, modeling contextual dependencies to create a hierarchical representation of features. The QGFormer Fusion then adaptively aggregates these representations to synthesize specific missing domains from coarse to fine, guided by learnable domain queries. This process is interpretable because the attention scores in Fusion indicate how much attention the target domains pay to different inputs and regions. In addition, the scalable architecture enables QGFormer to segment tumors with domain missing by replacing domain queries with segment queries. Extensive experiments demonstrate that our approach achieves consistent improvements in multi-domain imputation, cross-domain image translation and multitask of synthesis and segmentation.
Neural network-based reversible data hiding for medical image
Kong, Ping
Zhang, Yongdong
Huang, Lin
Zhou, Liang
Chen, Lifan
Qin, Chuan
Expert Systems with Applications2024Journal Article, cited 0 times
Website
CTpred-Sunitinib-panNET
MIDRC-RICORD-1A
StageII-Colorectal-CT
Seeking multi-view commonality and peculiarity: A novel decoupling method for lung cancer subtype classification
Gao, Ziyu
Luo, Yin
Wang, Minghui
Cao, Chi
Jiang, Houzhou
Liang, Wei
Li, Ao
Expert Systems with Applications2025Journal Article, cited 0 times
Website
NSCLC-Radiomics
NSCLC Radiogenomics
Multi-view learning
Decoupling
Histologic subtype classification
Non-Small Cell Lung Cancer (NSCLC)
In the management of non-small cell lung cancer (NSCLC), accurate and non-invasive classification of histological subtypes from computed tomography (CT) images is pivotal for devising appropriate treatment strategies. Despite encouraging progress in existing multi-view deep learning approaches, severe problems persist in effectively managing this crucial but challenging task, particularly concerning inter-view discrepancy and intra-view interference. To address these issues, this study presents a novel multi-view decoupling (MVD) method dedicated to seeking commonality and peculiarity across views using a divide-and-conquer strategy. Specifically, MVD employs an attention-based decoupling mechanism that simultaneously projects all views onto distinct view-invariant and view-specific subspaces, thereby generating both common and peculiar representation for each view. Moreover, a cross-view transformation loss is designed to successfully mitigate inter-view discrepancy in the view-invariant subspace, leveraging a unique view-to-view transformation perspective. Meanwhile, a cross-subtype discrimination loss is introduced to ensure that peculiar representations in view-specific subspaces exclusively capture subtype-irrelevant information thereby effectively eradicating intra-view interference via adversarial learning. MVD achieves an area under the receiver operating characteristic curve (AUC) of 0.838 and 0.805 on public and in-house NSCLC datasets respectively, consistently outperforming state-of-the-art approaches by a significant margin. In addition, extensive ablation experiments confirm that MVD effectively addresses the challenges of inter-view discrepancy and intra-view interference, establishing it as a valuable tool for enhanced accuracy and reliability in NSCLC histological subtype classification.
Open osteology: Medical imaging databases as skeletal collections
Simmons-Ehrhardt, Terrie
Forensic Imaging2021Journal Article, cited 1 times
Website
CT Lymph Nodes
Highlights
• Medical imaging datasets can be used as skeletal reference collections.
• Computed tomography data from TCIA can be accessed via 3D Slicer.
• Many tools in 3D Slicer support skeletal analyses and dissemination products.
• 3D bone models can be used for education, research, training and web-based reference.
• Bone modeling in 3D Slicer will support common workflows and shareable datasets.
Abstract
The increasing availability of de-identified medical image databases, especially of computed tomography (CT) scans, presents an opportunity for "open osteology," or the establishment of new skeletal reference collections. The number of free and/or open-source software packages for generating three-dimensional (3D) CT models, such as 3D Slicer, reduces financial obstacles to working with CT data and encourages the development of common workflows and datasets. The direct link to the Cancer Imaging Archive from 3D Slicer facilitates access to medical imaging datasets to support education and research with virtual skeletal data. Generation of 3D models enables computational methods for skeletal analyses and can also lead to the generation of virtual libraries representing large amounts of human skeletal variation. 3D printing of 3D CT models can supplement physical skeletal collections for the classroom and research beyond the standard commercially available specimens. Web-based technologies support 3D model and CT volume visualization, interaction, and measurement, increasing opportunities for dissemination and collaboration as well as the possible integration of 3D data as references for skeletal analysis tools. Increasing awareness and usage of pre-existing free and open-source resources applicable to forensic anthropology will facilitate method/workflow development, validation, and eventually standardization. This presentation will discuss online sources of skeletal data, outline methods for processing CT scans with free software into 3D digital models and discuss web-based technologies and repositories that allow interaction with 3D skeletal models. The demonstration of these methods will contribute to discussions on the expansion of virtual anthropology and open osteology.
Prostate cancer classification from ultrasound and MRI images using deep learning based Explainable Artificial Intelligence
Hassan, Md Rafiul
Islam, Md Fakrul
Uddin, Md Zia
Ghoshal, Goutam
Hassan, Mohammad Mehedi
Huda, Shamsul
Fortino, Giancarlo
Future Generation Computer Systems2022Journal Article, cited 0 times
Website
Prostate-MRI-US-Biopsy
Prostate Cancer
Deep Learning
A blockchain-based protocol for tracking user access to shared medical imaging
de Aguiar, Erikson J.
dos Santos, Alyson J.
Meneguette, Rodolfo I.
De Grande, Robson E.
Ueyama, Jó
Future Generation Computer Systems2022Journal Article, cited 0 times
Website
CPTAC-CM
CPTAC-LSCC
Modern healthcare systems are complex and regularly share sensitive data among multiple stakeholders, such as doctors, patients, and pharmacists. The volume of patient data has increased and requires safe management methods. Blockchain-related research, such as MIT's MedRec, has strived to draft trustworthy and immutable systems to share data. However, blockchain may be challenging in healthcare scenarios due to issues around privacy and control over data-sharing destinations. This paper presents a protocol for tracking shared medical data, including images, and controlling medical data access by multiple conflicting stakeholders. Several efforts rely on blockchain for healthcare, but only a few are concerned with malicious data leakage in blockchain-based healthcare systems. We implement a token mechanism stored in DICOM files and managed by the Hyperledger Fabric blockchain. Our findings and evaluations revealed low chances of a hash collision, even against a collision-seeking birthday attack. Although our solution was devised for healthcare, it can inspire and be easily ported to other blockchain-based application scenarios, such as Ethereum or Hyperledger Besu for business networks.
Development and validation of an educational software based in artificial neural networks for training in radiology (JORCAD) through an interactive learning activity
Hernández-Rodríguez, Jorge
Rodríguez-Conde, María-José
Santos-Sánchez, José-Ángel
Cabrero-Fraile, Francisco-Javier
Heliyon2023Journal Article, cited 0 times
Lung-PET-CT-Dx
The use of Computer Aided Detection (CAD) software has been previously documented as a valuable tool for improving specialist training in Radiology. This research assesses the utility of an educational software tool aimed at training residents in Radiology and other related medical specialties, as well as students of the Medicine degree. This in-house developed software, called JORCAD, integrates a CAD system based on Convolutional Neural Networks (CNNs) with annotated cases from radiological image databases. The methodology followed for software validation was expert judgement after completing an interactive learning activity. Participants received a theoretical session and a software usage tutorial, and afterwards used the application on a dedicated workstation to analyze a series of proposed thorax computed tomography (CT) and mammography cases. A total of 26 expert participants from the Radiology Department at Salamanca University Hospital (15 specialists and 11 residents) completed the activity and evaluated different aspects through a series of surveys: software usability, case navigation tools, CAD module utility for learning, and JORCAD educational capabilities. Participants also graded imaging cases to establish JORCAD's usefulness for training radiology residents. According to the statistical analysis of survey results and expert case scoring, along with participants' opinions, it can be concluded that JORCAD is a useful tool for training future specialists. The combination of CAD with annotated cases from validated databases enhances learning, offering a second opinion and changing the usual training paradigm. Including software such as JORCAD in residency training programs of Radiology and other medical specialties would have a positive effect on trainees' background knowledge.
Machine learning with multimodal data for COVID-19
Chen, Weijie
Sá, Rui C.
Bai, Yuntong
Napel, Sandy
Gevaert, Olivier
Lauderdale, Diane S.
Giger, Maryellen L.
Heliyon2023Journal Article, cited 0 times
COVID-19-AR
COVID-19-NY-SBU
In response to the unprecedented global healthcare crisis of the COVID-19 pandemic, the scientific community has joined forces to tackle the challenges and prepare for future pandemics. Multiple modalities of data have been investigated to understand the nature of COVID-19. In this paper, MIDRC investigators present an overview of the state-of-the-art development of multimodal machine learning for COVID-19 and model assessment considerations for future studies. We begin with a discussion of the lessons learned from radiogenomic studies for cancer diagnosis. We then summarize the multi-modality COVID-19 data investigated in the literature including symptoms and other clinical data, laboratory tests, imaging, pathology, physiology, and other omics data. Publicly available multimodal COVID-19 data provided by MIDRC and other sources are summarized. After an overview of machine learning developments using multimodal data for COVID-19, we present our perspectives on the future development of multimodal machine learning models for COVID-19.
Improved automated tumor segmentation in whole-body 3D scans using multi-directional 2D projection-based priors
Early cancer detection, guided by whole-body imaging, is important for the overall survival and well-being of the patients. While various computer-assisted systems have been developed to expedite and enhance cancer diagnostics and longitudinal monitoring, the detection and segmentation of tumors, especially from whole-body scans, remain challenging. To address this, we propose a novel end-to-end automated framework that first generates a tumor probability distribution map (TPDM), incorporating prior information about the tumor characteristics (e.g. size, shape, location). Subsequently, the TPDM is integrated with a state-of-the-art 3D segmentation network along with the original PET/CT or PET/MR images. This aims to produce more meaningful tumor segmentation masks compared to using the baseline 3D segmentation network alone. The proposed method was evaluated on three independent cohorts (autoPET, CAR-T, cHL) of images containing different cancer forms, obtained with different imaging modalities, and acquisition parameters and lesions annotated by different experts. The evaluation demonstrated the superiority of our proposed method over the baseline model by significant margins in terms of Dice coefficient, and lesion-wise sensitivity and precision. Many of the extremely small tumor lesions (i.e. the most difficult to segment) were missed by the baseline model but detected by the proposed model without additional false positives, resulting in clinically more relevant assessments. On average, an improvement of 0.0251 (autoPET), 0.144 (CAR-T), and 0.0528 (cHL) in overall Dice was observed. In conclusion, the proposed TPDM-based approach can be integrated with any state-of-the-art 3D U-Net with potentially more accurate and robust segmentation results.
An integrated radiology-pathology machine learning classifier for outcome prediction following radical prostatectomy: Preliminary findings
OBJECTIVES: To evaluate the added benefit of integrating features from pre-treatment MRI (radiomics) and digitized post-surgical pathology slides (pathomics) in prostate cancer (PCa) patients for prognosticating outcomes post radical prostatectomy (RP), including a) rising prostate specific antigen (PSA), and b) extraprostatic extension (EPE). METHODS: Multi-institutional data (N = 58) of PCa patients who underwent pre-treatment 3-T MRI prior to RP were included in this retrospective study. Radiomic and pathomic features were extracted from PCa regions on MRI and RP specimens delineated by expert clinicians. On the training set (D1, N = 44), Cox proportional-hazards models M(R), M(P) and M(RaP) were trained using radiomics, pathomics, and their combination, respectively, to prognosticate rising PSA (PSA > 0.03 ng/mL). Top features from M(RaP) were used to train a model to predict EPE on D1 and test on an external dataset (D2, N = 14). The c-index and Kaplan-Meier curves were used for survival analysis, and area under the ROC curve (AUC) was used for EPE. M(RaP) was compared with the existing post-treatment risk calculator, CAPRA (M(C)). RESULTS: Patients had a median follow-up of 34 months. M(RaP) (c-index = 0.685 +/- 0.05) significantly outperformed M(R) (c-index = 0.646 +/- 0.05), M(P) (c-index = 0.631 +/- 0.06) and M(C) (c-index = 0.601 +/- 0.071) (p < 0.0001). Cross-validated Kaplan-Meier curves showed significant separation among risk groups for rising PSA for M(RaP) (p < 0.005, Hazard Ratio (HR) = 11.36) as compared to M(R) (p = 0.64, HR = 1.33), M(P) (p = 0.19, HR = 2.82) and M(C) (p = 0.10, HR = 3.05). The integrated radio-pathomic model M(RaP) (AUC = 0.80) outperformed M(R) (AUC = 0.57) and M(P) (AUC = 0.76) in predicting EPE on the external data (D2). CONCLUSIONS: Results from this preliminary study suggest that a combination of radiomic and pathomic features can better predict post-surgical outcomes (rising PSA and EPE) than either of them individually, as well as the extant prognostic nomogram (CAPRA).
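The survival modelling described here (Cox proportional-hazards fitting and c-index evaluation) can be reproduced in outline with the lifelines library; the sketch below uses synthetic stand-in columns for the radiomic and pathomic features, since the study's data and exact feature set are not public here.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-ins for combined radiomic + pathomic features (column
# names are placeholders, not the study's feature set).
rng = np.random.default_rng(0)
n = 58
df = pd.DataFrame({
    "radiomic_f1": rng.normal(size=n),
    "pathomic_f1": rng.normal(size=n),
    "followup_months": rng.exponential(34.0, size=n),  # ~34-month follow-up
    "rising_psa": rng.integers(0, 2, size=n),          # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_months", event_col="rising_psa")
print("c-index:", cph.concordance_index_)  # metric reported for M(R)/M(P)/M(RaP)
cph.print_summary()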
A deep learning framework integrating MRI image preprocessing methods for brain tumor segmentation and classification
Dang, Khiet
Vo, Toi
Ngo, Lua
Ha, Huong
IBRO Neuroscience Reports2022Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Glioma grading is critical in treatment planning and prognosis. This study aims to address this issue through MRI-based classification to develop an accurate model for glioma diagnosis. Here, we employed a deep learning pipeline with three essential steps: (1) MRI images were segmented using preprocessing approaches and UNet architecture, (2) brain tumor regions were extracted using segmentation, then (3) high-grade gliomas and low-grade gliomas were classified using the VGG and GoogleNet implementations. Among the additional preprocessing techniques used in conjunction with the segmentation task, the combination of data augmentation and Window Setting Optimization was found to be the most effective tool, resulting in the Dice coefficient of 0.82, 0.91, and 0.72 for enhancing tumor, whole tumor, and tumor core, respectively. While most of the proposed models achieve comparable accuracies of about 93 % on the testing dataset, the pipeline of VGG combined with UNet segmentation obtains the highest accuracy of 97.44 %. In conclusion, the presented architecture illustrates a realistic model for detecting gliomas; moreover, it emphasizes the significance of data augmentation and segmentation in improving model performance.
Evaluating synthetic neuroimaging data augmentation for automatic brain tumour segmentation with a deep fully-convolutional network
Asadi, Fawad
Angsuwatanakul, Thanate
O’Reilly, Jamie A.
IBRO Neuroscience Reports2024Journal Article, cited 2 times
Website
TCGA-LGG
Segmentation
Glioblastoma
Generative Adversarial Network (GAN)
Magnetic Resonance Imaging (MRI)
Synthetic images
Gliomas observed in medical images require expert neuro-radiologist evaluation for treatment planning and monitoring, motivating development of intelligent systems capable of automating aspects of tumour evaluation. Deep learning models for automatic image segmentation rely on the amount and quality of training data. In this study we developed a neuroimaging synthesis technique to augment data for training fully-convolutional networks (U-nets) to perform automatic glioma segmentation. We used StyleGAN2-ada to simultaneously generate fluid-attenuated inversion recovery (FLAIR) magnetic resonance images and corresponding glioma segmentation masks. Synthetic data were successively added to real training data (n = 2751) in fourteen rounds of 1000 and used to train U-nets that were evaluated on held-out validation (n = 590) and test sets (n = 588). U-nets were trained with and without geometric augmentation (translation, zoom and shear), and Dice coefficients were computed to evaluate segmentation performance. We also monitored the number of training iterations before stopping, total training time, and time per iteration to evaluate computational costs associated with training each U-net. Synthetic data augmentation yielded marginal improvements in Dice coefficients (validation set +0.0409, test set +0.0355), whereas geometric augmentation improved generalization (standard deviation between training, validation and test set performances of 0.01 with, and 0.04 without geometric augmentation). Based on the modest performance gains for automatic glioma segmentation we find it hard to justify the computational expense of developing a synthetic image generation pipeline. Future work may seek to optimize the efficiency of synthetic data generation for augmentation of neuroimaging data.
Analysis of lifting scheme based Double Density Dual-Tree Complex Wavelet Transform for de-noising medical images
Maria, H. Heartlin
Jossy, A. Maria
Malarvizhi, G.
Jenitta, A.
Optik2021Journal Article, cited 0 times
TCGA-OV-Proteogenomics
Medical images play a vital role in the diagnosis of various diseases. This has paved the way for the extensive use of CT, mammogram, MRI and ultrasound images in recent years, which has raised concern about the radiation dosage involved in the medical screening process. Owing to this concern, low-dose screening is widely performed, which has introduced noise and artifacts and thus produced low image quality that can adversely affect the judgment of radiologists. This in turn has led to demand for enhanced image de-noising techniques. This work is an approach to remove multiple types of noise from low-dose medical images using a lifting-based Double Density Dual-Tree Complex Wavelet Transform (DDDTCWT) and a modified Bernoulli-based thresholding technique enhanced by fuzzy optimization. The parameters observed in the simulation results of the proposed method were compared with existing de-noising techniques, and the results show significant improvement over conventional techniques. The proposed work not only efficiently de-noises the image but also enhances its visual appearance. The lifting scheme used provides augmented memory for decomposition, thus speeding up the entire de-noising process.
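The lifting-based DDDTCWT and the modified Bernoulli threshold of this paper are not available in standard libraries, but the wavelet-thresholding workflow they enhance can be sketched with PyWavelets using an ordinary DWT and soft universal thresholding:

import numpy as np
import pywt

def wavelet_denoise(img: np.ndarray, wavelet: str = "db4", level: int = 2) -> np.ndarray:
    # Standard DWT soft-threshold denoising; a stand-in for the paper's
    # lifting-based DDDTCWT with Bernoulli-based thresholding.
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Universal threshold estimated from the finest diagonal sub-band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(img.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

noisy = np.random.rand(256, 256)
clean = wavelet_denoise(noisy)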
GammaNet: An intensity-invariance deep neural network for computer-aided brain tumor segmentation
Due to their wide variety in location, appearance, size and intensity distribution, automatic and precise brain tumor segmentation is a challenging task. To address this issue, a computer-aided brain tumor segmentation system based on an adaptive gamma correction neural network (GammaNet) is proposed in this paper. Inspired by conventional gamma correction, an adaptive gamma correction (AGC) block is proposed to realize intensity invariance and force the network to focus on significant regions. In addition, to adaptively adjust the intensity distributions of local regions, the feature maps are divided into several proposal regions, and local image characteristics are emphasized. Furthermore, to enlarge the receptive field without information loss and improve segmentation performance, a dense atrous spatial pyramid pooling (Dense-ASPP) module is combined with AGC blocks to construct the GammaNet. The experimental results show that the dice similarity coefficient (DSC), sensitivity and intersection over union (IoU) of GammaNet are 85.8%, 87.8% and 80.31%, respectively, and that the AGC blocks and the Dense-ASPP improve the DSC by 3.69% and 1.11%, respectively, which indicates that GammaNet can achieve state-of-the-art performance.
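The AGC block itself is not published alongside this abstract; the PyTorch sketch below shows one plausible reading of the core idea, predicting a positive per-channel gamma from globally pooled features and re-mapping intensities as x^gamma. Layer sizes and the Softplus positivity constraint are assumptions, not details from the paper.

import torch
import torch.nn as nn

class AdaptiveGammaCorrection(nn.Module):
    # Minimal sketch of an adaptive gamma correction block: a small
    # sub-network predicts a positive gamma per channel from global
    # context, and the feature map is re-mapped as x ** gamma.
    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, channels),
            nn.Softplus(),  # keeps gamma > 0
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gamma = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        x = x.clamp(min=1e-6)  # intensities assumed normalized to (0, 1]
        return x.pow(gamma)

feat = torch.rand(2, 64, 32, 32)
out = AdaptiveGammaCorrection(64)(feat)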
Identification and classification of DICOM files with burned-in text content
Vcelak, Petr
Kryl, Martin
Kratochvil, Michal
Kleckova, Jana
International Journal of Medical Informatics2019Journal Article, cited 0 times
NSCLC-Radiomics
PROSTATE-DIAGNOSIS
Classification
DICOM
RSNA Clinical Trials Processor (CTP)
Background: Protected health information burned into pixel data is not indicated for various reasons in DICOM, which complicates the secondary use of such data. In recent years, there have been several attempts to anonymize or de-identify DICOM files. Existing approaches have different constraints, and no completely reliable solution exists. Especially for large datasets, it is necessary to quickly analyse and identify files potentially violating privacy.
Methods: Classification is based on an adaptive-iterative algorithm designed to assign one of three classes. Several image transformations, optical character recognition, and filters are applied before a local decision is made; a confirmed local decision is final. The classifier was trained on a dataset of 15,334 images of various modalities.
Results: The false positive rates are in all cases below 4.00%, and 1.81% in the mission-critical problem of detecting protected health information. The classifier's weighted average recall was 94.85%, the weighted average inverse recall was 97.42% and Cohen's Kappa coefficient was 0.920.
Conclusion: The proposed novel approach for classifying burned-in text is highly configurable and able to analyse images from different modalities with a noisy background. The solution was validated and is intended to identify DICOM files that need restricted access or thorough de-identification due to privacy issues. Unlike existing tools, the recognised text, including its coordinates, can be further used for de-identification.
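The first stage of such a pipeline, rendering DICOM pixel data and checking it for burned-in text with OCR, can be sketched with pydicom and pytesseract; this is a simplified stand-in for the paper's adaptive-iterative classifier, and the file name is hypothetical.

import numpy as np
import pydicom
import pytesseract
from PIL import Image

def burned_in_text(dicom_path: str) -> str:
    # Render a DICOM image to 8-bit grayscale and OCR it; a non-empty
    # result flags the file for review or thorough de-identification.
    ds = pydicom.dcmread(dicom_path)
    arr = ds.pixel_array.astype(np.float32)
    span = arr.max() - arr.min()
    arr = ((arr - arr.min()) / (span if span else 1.0) * 255).astype(np.uint8)
    return pytesseract.image_to_string(Image.fromarray(arr)).strip()

text = burned_in_text("case001.dcm")  # hypothetical file name
if text:
    print("Possible burned-in PHI:", text)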
Effect of patient inhalation profile and airway structure on drug deposition in image-based models with particle-particle interactions
Williams, J.
Kolehmainen, J.
Cunningham, S.
Ozel, A.
Wolfram, U.
Int J Pharm2022Journal Article, cited 0 times
Website
LCTSC
Lung CT Segmentation Challenge
LUNG
Computational fluid dynamics
Patient-specific modelling
For many of the one billion sufferers of respiratory diseases worldwide, managing their disease with inhalers improves their ability to breathe. Poor disease management and rising pollution can trigger exacerbations that require urgent relief. Higher drug deposition in the throat instead of the lungs limits the impact on patient symptoms. To optimise delivery to the lung, patient-specific computational studies of aerosol inhalation can be used. However in many studies, inhalation modelling does not represent situations when the breathing is impaired, such as in recovery from an exacerbation, where the patient's inhalation is much faster and shorter. Here we compare differences in deposition of inhaler particles (10 and 4 μm) in the airways of three patients. We aimed to evaluate deposition differences between healthy and impaired breathing with image-based healthy and diseased patient models. We found that the ratio of drug in the lower to upper lobes was 35% larger with a healthy inhalation. For smaller particles the upper airway deposition was similar in all patients, but local deposition hotspots differed in size, location and intensity. Our results identify that image-based airways must be used in respiratory modelling. Various inhalation profiles should be tested for optimal prediction of inhaler deposition.
A Pilot/Phase II Study of Stereotactic Radiosurgery for Brain Metastases Using Rational Dose Selection
Yu, J.B.
Singh, C.
Bindra, R.S.
Contessa, J.N.
Husain, Z.A.
Hansen, J.E.
Park, H.S.M.
Roberts, K.B.
Bond, J.
Tien, C.
Guo, F.
Colaco, R.J.
Housri, N.
Magnuson, W.J.
Omay, B.
Chiang, V.L.
2018Journal Article, cited 0 times
PROSTATEx
Prostate Lesion Malignancy Classification from Multiparametric MRI Images Using Convolution Neural Network
Zong, W.
Liu, C.
Liu, S.
Lee, J.K.
Chetty, I.J.
Elshaikh, M.A.
Movsas, B.
Wen, N.
2018Journal Article, cited 0 times
PROSTATEx
The Dose per Fraction as a Significant Prognostic Factor in Nasopharyngeal Carcinoma (NPC) Treated with Intensity Modulated Radiation Therapy (IMRT)
Xu, F.
Ni, W.
Gao, Y.
Cao, W.
Chen, J.
2018Journal Article, cited 0 times
PROSTATEx
Tumor Heterogeneity and Genomics to Predict Radiation Therapy Outcome for Head-and-Neck Cancer: A Machine Learning Approach
Singh, A.
Goyal, S.
Rao, Y. J.
Loew, M.
International Journal of Radiation Oncology*Biology*Physics2019Journal Article, cited 0 times
Website
TCGA-HNSC
HNSCC
Head-Neck-PET-CT
head and neck squamous cell carcinoma (HNSCC)
Classification
Head and Neck Squamous Cell Carcinoma (HNSCC) is usually treated with Radiation Therapy (RT). Recurrence of the tumor occurs in some patients. The purpose of this study was to determine whether information present in the heterogeneity of tumor regions in the pre-treatment PET scans of HNSCC patients can be used to predict recurrence. We then extended our study to include gene mutation information of a patient group to assess its value as an additional feature to determine treatment efficacy.
Materials/Methods: Pre-treatment PET scans of 20 patients from the first database (HNSCC), included in The Cancer Imaging Archive (TCIA), were analyzed. The follow-up duration for those patients varied between two and ten years. Accompanying clinical data were used to divide the patients into two categories according to whether they had a recurrence of the tumor. Radiation structures included in the database were overlain on the PET scans to delineate the tumor, whose heterogeneity is measured by texture analysis. The classification is carried out in two ways: making a decision for each image slice, and treating the collection of slices as a 3D volume. This approach was tested on an independent set of 53 patients from a second TCIA database (Head-Neck-PET-CT [HNPC]). The Cancer Genome Atlas (TCGA) identified frequent mutations in the expression of PIK3CA, CDKN2A and TP53 genes in HNSCC patients. We combined gene expression features with texture features for 11 patients of the third database (TCGA-HNSC), and re-evaluated the classification accuracies.
Impact of Prior Y90 Dosimetry on Toxicity and Outcomes Following SBRT for Hepatocellular Carcinoma
Campbell, Shauna
Juloori, Aditya
Smile, Timothy
LaHurd, Danielle
Yu, Naichang
Woody, Neil
Stephans, Kevin
2020Journal Article, cited 0 times
PROSTATEx
Tumor Targeted Low Dose Radiation and Immunotherapy in Mouse Models of Melanoma
Rossetti-Chung, Allen
Bhela, Siddheshvar
Claps, Lindsey
Lawrence, Jessica
Vezys, Vaiva
2020Journal Article, cited 0 times
PROSTATEx
Biopsy Positivity in Prostate Cancer Patients Undergoing MpMRI-Targeted Radiation Dose Escalation
Meshman, Jessica
Farnia, Benjamin
Stoyanova, Radka
Reis, Isildinha
Abramowitz, Matthew
Dal Pra, Alan
Horwitz, Eric
Pollack, Alan
2020Journal Article, cited 0 times
PROSTATEx
Prediction of Gleason Grade Group of Prostate Cancer on Multiparametric MRI using Deep Machine Learning Models
Zong, Weiwei
Lee, Joon
Pantelic, Milan
Wen, Ning
2020Journal Article, cited 0 times
PROSTATEx
Left Anterior Descending Coronary Artery Radiation Dose Association With All-Cause Mortality in NRG Oncology Trial RTOG 0617
McKenzie, Elizabeth
Zhang, Samuel
Zakariaee, Roja
Guthier, Christian V
Hakimian, Behrooz
Mirhadi, Amin
Kamrava, Mitchell
Padda, Sukhmani K
Lewis, John H
Nikolova, Andriana
Mak, Raymond H
Atkins, Katelyn M
2022Journal Article, cited 0 times
NSCLC-Cetuximab
PURPOSE: A left anterior descending (LAD) coronary artery volume (V) receiving 15 Gy (V15 Gy) ≥10% has been recently observed to be an independent risk factor of major adverse cardiac events and all-cause mortality in patients with locally advanced non-small cell lung cancer treated with radiation therapy. However, this dose constraint has not been validated in independent or prospective data sets.
METHODS AND MATERIALS: The NRG Oncology/Radiation Therapy Oncology Group (RTOG) 0617 data set from the National Clinical Trials Network was used. The LAD coronary artery was manually contoured. Multivariable Cox regression was performed, adjusting for known prognostic factors. Kaplan-Meier estimates of overall survival (OS) were calculated. For assessment of baseline cardiovascular risk, only age, sex, and smoking history were available.
RESULTS: There were 449 patients with LAD dose-volume data and clinical outcomes available after 10 patients were excluded owing to unreliable LAD dose statistics. The median age was 64 years. The median LAD V15 Gy was 38% (interquartile range, 15%-62%), including 94 patients (21%) with LAD V15 Gy <10% and 355 (79%) with LAD V15 Gy ≥10%. Adjusting for prognostic factors, LAD V15 Gy ≥10% versus <10% was associated with an increased risk of all-cause mortality (hazard ratio [HR], 1.43; 95% confidence interval, 1.02-1.99; P = .037), whereas a mean heart dose ≥10 Gy versus <10 Gy was not (adjusted HR, 1.12; 95% confidence interval, 0.88-1.43; P = .36). The median OS for patients with LAD V15 Gy ≥10% versus <10% was 20.2 versus 25.1 months, respectively, with 2-year OS estimates of 47% versus 67% (P = .004), respectively.
CONCLUSIONS: In a reanalysis of RTOG 0617, LAD V15 Gy ≥10% was associated with an increased risk of all-cause mortality. These findings underscore the need for improved cardiac risk stratification and aggressive risk mitigation strategies, including implementation of cardiac substructure dose constraints in national guidelines and clinical trials.
The Auto-Lindberg Project: Standardized Target Nomenclature in Radiation Oncology Enables Real-World Data Extraction From Radiation Treatment Plans
Hope, A.
Kim, J. W.
Kazmierski, M.
Welch, M.
Marsilla, J.
Huang, S. H.
Hosni, A.
Tadic, T.
Patel, T.
Haibe-Kains, B.
Waldron, J.
O'Sullivan, B.
Bratman, S.
Int J Radiat Oncol Biol Phys2024Journal Article, cited 0 times
RADCURE
*Radiation Oncology
Radiotherapy Dosage
*Radiotherapy
Intensity-Modulated
Radiotherapy Planning
Computer-Assisted
Lymph Nodes
Oropharyngeal cancer
Laryngeal cancer
Larynx
nasopharynx
hypopharynx
Head and neck cancer
Algorithm Development
Treatment plan archives contain vast quantities of patient-specific data in a digital format, but are underused due to challenges in storage, retrieval, and analysis methodology. With standardized nomenclature and careful patient outcomes monitoring, treatment plans can be rich sources of data to explore relevant clinical questions. Even without outcomes, treatment plan archives contain data to address questions such as pretreatment disease distribution or institutional treatment strategies.
A comprehensive understanding of cancer's natural history and lymph node (LN) distribution is critical to the management of each patient's disease. Macroscopic tumor location has important implications for adjacent LN regions that may also harbor microscopic cancer involvement. Lindberg et al demonstrated from large patient data sets that different head and neck cancer subsites had different distributions of involved LNs [1]. Similar population-based data are rare in the modern era [2], barring some surgical studies [3-5]. Nodal involvement risk estimates can help select patients for elective neck irradiation, including choices of ipsilateral versus bilateral treatment (eg, oropharyngeal carcinoma [OPC]) [6].
In this study, an algorithm automatically extracted LN data from a large data set of treatment plans for patients with head and neck cancer. Further programmatic methods generated representative example “AutoLindberg” diagrams and summary tables regarding the extent of cervical LN involvement for clinically relevant patient subsets.
Automated grading of prostate cancer using convolutional neural network and ordinal class classifier
Abraham, Bejoy
Nair, Madhu S.
Informatics in Medicine Unlocked2019Journal Article, cited 0 times
Website
Soft Tissue Sarcoma
PROSTATEx-2 2017 challenge
VGG-16 Convolutional Neural Network
Convolutional Neural Network (CNN)
Prostate Cancer (PCa) is one of the most prominent cancers among men. Early diagnosis and treatment planning are significant in reducing the mortality rate due to PCa. Accurate prediction of grade is required to ensure prompt treatment for cancer. Grading of prostate cancer can be considered as an ordinal class classification problem. This paper presents a novel method for the grading of prostate cancer from multiparametric magnetic resonance images using a VGG-16 Convolutional Neural Network and an Ordinal Class Classifier with J48 as the base classifier. Multiparametric magnetic resonance images of the PROSTATEx-2 2017 grand challenge dataset are employed for this work. The method achieved a moderate quadratic weighted kappa score of 0.4727 in grading PCa into 5 grade groups, which is higher than state-of-the-art methods. The method also achieved a positive predictive value of 0.9079 in predicting clinically significant prostate cancer.
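The quadratic weighted kappa used as the headline metric here is available directly in scikit-learn; the grade-group labels below are made up for illustration.

from sklearn.metrics import cohen_kappa_score

# Hypothetical Gleason grade groups (1-5) for ten lesions.
y_true = [1, 2, 2, 3, 3, 4, 4, 5, 1, 2]
y_pred = [1, 2, 3, 3, 2, 4, 5, 5, 1, 1]

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"quadratic weighted kappa = {qwk:.4f}")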
Optothermal tissue response for laser-based investigation of thyroid cancer
Okebiorun, Michael O.
ElGohary, Sherif H.
Informatics in Medicine Unlocked2020Journal Article, cited 0 times
Website
TCGA-THCA
Computer Aided Detection (CADe)
To characterize imaging-based detection of thyroid cancer, we simulated the optical and thermal response of the neck during optical investigation of thyroid cancer. We employed the 3D Monte Carlo method and the bio-heat equation to determine the fluence and temperature distribution via the Molecular Optical Simulation Environment (MOSE) with a Finite Element (FE) simulator. The optothermal effect of a neck-surface source is also compared to a trachea-based source. Results show the fluence and temperature distribution in a realistic 3D neck model with both endogenous and hypothetical tissue-specific exogenous contrast agents. They also reveal that trachea illumination yields a factor-of-ten greater absorption and temperature change than neck-surface illumination, and that tumor-specific exogenous contrast agents produce relatively higher absorption and temperature change in the tumors, which could help clinicians and researchers improve and better understand the region's response to laser-based diagnosis.
Addressing architectural distortion in mammogram using AlexNet and support vector machine
Vedalankar, Aditi V.
Gupta, Shankar S.
Manthalkar, Ramchandra R.
Informatics in Medicine Unlocked2021Journal Article, cited 0 times
Website
CBIS-DDSM
Convolutional Neural Network (CNN)
Classification
BREAST
Objective: To address architectural distortion (AD), an irregularity in the parenchymal pattern of the breast. The nature of AD is extremely complex; still, its study is essential because AD is viewed as an early sign of breast cancer. In this study, a new convolutional neural network (CNN) based system is developed that classifies AD-distorted mammograms against other mammograms.
Methods: In the first part, mammograms undergo pre-processing and image augmentation. In the second part, learned and handcrafted features are retrieved; the pretrained AlexNet CNN is utilized to extract learned features, and a support vector machine (SVM) validates the existence of AD. For improved classification, the scheme is tested under various conditions.
Results: A sophisticated CNN-based system is developed for stepwise analysis of AD. The maximum accuracy, sensitivity and specificity were 92%, 81.50% and 90.83%, respectively. The results outperform conventional methods.
Conclusion: Based on the overall study, a combination of a pretrained CNN and a support vector machine is recommended for identifying AD. The study will motivate researchers to find improved, high-performance methods, and will also help radiologists.
Significance: AD can develop up to two years before the growth of any other anomaly. The proposed system will play an essential role in detecting early manifestations of breast cancer, aiding better treatment options for women all over the world and curtailing the mortality rate.
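The learned-feature pathway described in the Methods, a pretrained AlexNet used as a fixed feature extractor feeding an SVM, can be sketched as follows with torchvision (>= 0.13); input sizing, normalization constants and the choice of the 4096-d penultimate layer are standard ImageNet conventions, not details taken from the paper.

import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained AlexNet as a frozen feature extractor (penultimate layer).
alexnet = models.alexnet(weights="DEFAULT").eval()
extractor = torch.nn.Sequential(
    alexnet.features, alexnet.avgpool, torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:-1],  # drop the final class layer
)

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def features(img: Image.Image) -> torch.Tensor:
    with torch.no_grad():
        return extractor(prep(img.convert("RGB")).unsqueeze(0)).squeeze(0)

img = Image.fromarray((np.random.rand(256, 256) * 255).astype("uint8"))
vec = features(img)
print(vec.shape)  # torch.Size([4096])
# Downstream, as in the paper: an sklearn.svm.SVC fitted on the stacked
# 4096-d vectors with AD / non-AD labels.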
Deep Learning in Prostate Cancer Diagnosis and Gleason Grading in Histopathology Images: An Extensive Study
Linkon, Ali Hasan Md
Labib, Mahir
Hasan, Tarik
Hossain, Mozammal
E-Jannat, Marium
Informatics in Medicine Unlocked2021Journal Article, cited 0 times
Website
QIN-PROSTATE-Repeatability
H&E-stained slides
PROSTATE
Deep Learning
Among American men, prostate cancer is the cause of the second-highest death by any cancer. It is also the most common cancer in men worldwide, and the annual numbers are quite alarming. The most prognostic marker for prostate cancer is the Gleason grading system on histopathology images. Pathologists determine the Gleason grade on stained tissue specimens of Hematoxylin and Eosin (H&E) based on tumor structural growth patterns from whole slide images. Recent advances in Computer-Aided Detection (CAD) using deep learning have brought the immense scope of automatic detection and recognition at very high accuracy in prostate cancer like other medical diagnoses and prognoses. Automated deep learning systems have delivered promising results from histopathological images to accurate grading of prostate cancer. Many studies have shown that deep learning strategies can achieve better outcomes than simpler systems that make use of pathology samples. This article aims to provide an insight into the gradual evolution of deep learning in detecting prostate cancer and Gleason grading. This article also evaluates a comprehensive, synthesized overview of the current state and existing methodological approaches as well as unique insights in prostate cancer detection using deep learning. We have also described research findings, current limitations, and future avenues for research. We have tried to make this paper applicable to deep learning communities and hope it will encourage new collaborations to create dedicated applications and improvements for prostate cancer detection and Gleason grading.
Detection of effective genes in colon cancer: A machine learning approach
Fahami, Mohammad Amin
Roshanzamir, Mohamad
Izadi, Navid Hoseini
Keyvani, Vahideh
Alizadehsani, Roohallah
Informatics in Medicine Unlocked2021Journal Article, cited 0 times
Website
TCGA-COAD
Machine Learning
Radiogenomics
Nowadays, a variety of cancers have become common among humans and are, unfortunately, the cause of death for many patients. Early detection and diagnosis of cancer can have a significant impact on patient survival and on reducing treatment costs. Worldwide, colon cancer is the third leading cause of cancer death in women and the second in men. Hence, many researchers have been trying to provide new methods for early diagnosis of colon cancer. In this study, we apply statistical hypothesis tests such as the t-test and Mann–Whitney–Wilcoxon test, and machine learning methods such as neural networks, KNN and decision trees, to detect the genes most predictive of the vital status of colon cancer patients. We normalize the dataset using a new two-step method. In the first step, the genes within each sample (patient) are normalized to have zero mean and unit variance. In the second step, normalization is done for each gene across the whole dataset. Analyzing the results shows that this normalization method is more efficient than the others and improves the overall performance of the research. Afterwards, we apply unsupervised learning methods to find meaningful structures in colon cancer gene expression. In this regard, the dimensionality of the dataset is reduced by employing Principal Component Analysis (PCA). Next, we cluster the patients according to the PCA-extracted features. We then check the labeling results of the unsupervised learning methods using different supervised learning algorithms. Finally, we determine genes that have a major impact on the colon cancer mortality rate in each cluster. Our study is the first to suggest that colon cancer patients can be categorized into two clusters. In each cluster, 20 effective genes were extracted which can be important for early diagnosis of colon cancer. Many of these genes have been identified for the first time.
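The two-step normalization described above is simple to state precisely; the sketch below applies it to a synthetic expression matrix and then performs the PCA reduction and two-cluster grouping of patients, mirroring the pipeline's order of operations (matrix size and component count are illustrative).

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic stand-in: rows = patients, columns = gene expression values.
rng = np.random.default_rng(1)
X = rng.lognormal(size=(100, 500))

# Step 1: normalize each sample (patient) to zero mean, unit variance.
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
# Step 2: normalize each gene across the whole dataset.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Dimensionality reduction with PCA, then cluster patients into two groups.
Z = PCA(n_components=10).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))  # patients per cluster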
Robust chest CT image segmentation of COVID-19 lung infection based on limited data
Muller, D.
Soto-Rey, I.
Kramer, F.
Inform Med Unlocked2021Journal Article, cited 0 times
Website
CT Images in COVID-19
Artificial intelligence
Covid-19
Computed Tomography (CT)
Deep learning
Segmentation
Background: The coronavirus disease 2019 (COVID-19) affects billions of lives around the world and has a significant impact on public healthcare. For quantitative assessment and disease monitoring, medical imaging such as computed tomography offers great potential as an alternative to RT-PCR methods. For this reason, automated image segmentation is highly desired as clinical decision support. However, publicly available COVID-19 imaging data are limited, which leads to overfitting of traditional approaches. Methods: To address this problem, we propose an innovative automated segmentation pipeline for COVID-19 infected regions that is able to handle small datasets. Our method focuses on on-the-fly generation of unique and random image patches for training by performing several preprocessing methods and exploiting extensive data augmentation. For further reduction of the overfitting risk, we implemented a standard 3D U-Net architecture instead of new or computationally complex neural network architectures. Results: Through a k-fold cross-validation on 20 CT scans used for training and validation, we were able to develop a highly accurate as well as robust segmentation model for lungs and COVID-19 infected regions without overfitting on limited data. We performed an in-detail analysis and discussion of the robustness of our pipeline through a sensitivity analysis based on the cross-validation, and of the impact of the applied preprocessing techniques on model generalizability. Our method achieved Dice similarity coefficients between predicted and radiologist-annotated segmentations for COVID-19 infection of 0.804 on validation and 0.661 on a separate testing set consisting of 100 patients. Conclusions: We demonstrated that the proposed method outperforms related approaches, advances the state-of-the-art for COVID-19 segmentation and improves robust medical image analysis based on limited data.
RadGenNets: Deep learning-based radiogenomics model for gene mutation prediction in lung cancer
Tripathi, Satvik
Moyer, Ethan Jacob
Augustin, Alisha Isabelle
Zavalny, Alex
Dheer, Suhani
Sukumaran, Rithvik
Schwartz, Daniel
Gorski, Brandon
Dako, Farouk
Kim, Edward
Informatics in Medicine Unlocked2022Journal Article, cited 0 times
NSCLC Radiogenomics
Radiomics
Radiogenomics
Convolutional Neural Network (CNN)
Dense network
Classification
Deep Learning
LUNG
In this paper, we present a methodology for predicting gene mutations in patients with non-small cell lung cancer (NSCLC). Three major gene mutations occur in NSCLC patients: epidermal growth factor receptor (EGFR), Kirsten rat sarcoma virus (KRAS), and anaplastic lymphoma kinase (ALK). We worked with the clinical and genomics data for each of 130 patients as well as their corresponding PET/CT scans. We preprocessed all of the data and then built a novel pipeline, a fusion of Convolutional Neural Networks and Dense Neural Networks, to integrate the image and tabular data. Also, using a search approach, we picked an ensemble of deep learning models to classify the separate gene mutations. These models include EfficientNets, SENet, and ResNeXt WSL, among others. Our model achieved a high area under the curve (AUC) score of 94% in predicting gene mutation.
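A fusion of a convolutional branch for the PET/CT image and a dense branch for the clinical/genomic table can be sketched generically in PyTorch; layer sizes and the four-way output (EGFR/KRAS/ALK/wild-type) are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class FusionNet(nn.Module):
    # Late-fusion sketch: a small CNN encodes the image patch, a dense
    # branch encodes the tabular data, and the concatenated embeddings
    # feed a mutation classifier.
    def __init__(self, tabular_dim: int, n_classes: int = 4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32-d image embedding
        )
        self.dense = nn.Sequential(
            nn.Linear(tabular_dim, 32), nn.ReLU(),   # -> 32-d tabular embedding
        )
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, image, tabular):
        return self.head(torch.cat([self.cnn(image), self.dense(tabular)], dim=1))

model = FusionNet(tabular_dim=12)
logits = model(torch.rand(4, 1, 64, 64), torch.rand(4, 12))
print(logits.shape)  # torch.Size([4, 4])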
Fusing texture, shape and deep model-learned information at decision level for automated classification of lung nodules on chest CT
Xie, Yutong
Zhang, Jianpeng
Xia, Yong
Fulham, Michael
Zhang, Yanning
Information Fusion2018Journal Article, cited 13 times
Website
LIDC
Lung nodule classification
Chest
CT
Deep convolutional neural network (DCNN)
Back propagation neural network (BPNN)
AdaBoost
information fusion
Feasibility study of a multi-criteria decision-making based hierarchical model for multi-modality feature and multi-classifier fusion: Applications in medical prognosis prediction
He, Qiang
Li, Xin
Kim, DW Nathan
Jia, Xun
Gu, Xuejun
Zhen, Xin
Zhou, Linghong
Information Fusion2020Journal Article, cited 0 times
Website
NSCLC-Radiomics
radiomics
Deep and statistical learning in biomedical imaging: State of the art in 3D MRI brain tumor segmentation
Fernando, K. Ruwani M.
Tsokos, Chris P.
Information Fusion2023Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Clinical diagnosis and treatment decisions rely upon the integration of patient-specific data with clinical reasoning. Cancer presents a unique context that influences treatment decisions, given its diverse forms of disease evolution. Biomedical imaging allows non-invasive assessment of diseases based on visual evaluations, leading to better clinical outcome prediction and therapeutic planning. Early methods of brain cancer characterization predominantly relied upon the statistical modeling of neuroimaging data. Driven by breakthroughs in computer vision, deep learning has become the de facto standard in medical imaging. Integrated statistical and deep learning methods have recently emerged as a new direction in the automation of medical practice unifying multi-disciplinary knowledge in medicine, statistics, and artificial intelligence. In this study, we critically review major statistical, deep learning, and probabilistic deep learning models and their applications in brain imaging research with a focus on MRI-based brain tumor segmentation. These results highlight that model-driven classical statistics and data-driven deep learning is a potent combination for developing automated systems in clinical oncology.
Comprehensive quantifications of tumour microenvironment to predict the responsiveness to immunotherapy and prognosis for paediatric neuroblastomas
Song, M.
Sun, Y.
Hu, Y.
Wang, C.
Jin, Y.
Liu, Y.
Da, Y.
Zhao, Q.
Zheng, R.
Li, L.
Int Immunopharmacol2024Journal Article, cited 0 times
Website
TIL-WSI-TCGA
Axitinib
Immunotherapy
Neuroblastoma
Pyroptosis
Tumor microenvironment
Pathogenomics
Treatment strategies for paediatric neuroblastoma as well as many other cancers are limited by the unfavourable tumour microenvironment (TME). In this study, the TMEs of neuroblastoma were grouped by their genetic signatures into four distinct subtypes: immune enriched, immune desert, non-proliferative and fibrotic. An Immune Score and a Proliferation Score were constructed based on the molecular features of the subtypes to quantify the immune microenvironment or malignancy degree of cancer cells in neuroblastoma, respectively. The Immune Score correlated with a patient's response to immunotherapy; the Proliferation Score was an independent prognostic biomarker for neuroblastoma and proved to be more accurate than the existing clinical predictors. This double scoring system was further validated and the conserved molecular pattern associated with immune landscape and malignancy degree was confirmed. Axitinib and BI-2536 were confirmed as candidate drugs for neuroblastoma by the double scoring system. Both in vivo and in vitro experiments demonstrated that axitinib-induced pyroptosis of neuroblastoma cells activated anti-tumour immunity and inhibited tumour growth; BI-2536 induced cell cycle arrest at the S phase in neuroblastoma cells. The comprehensive double scoring system of neuroblastoma may predict prognosis and screen for therapeutic strategies which could provide personalized treatments.
A framework for multimodal imaging-based prognostic model building: Preliminary study on multimodal MRI in Glioblastoma Multiforme
In Glioblastoma Multiforme (GBM), image-derived features ("radiomics") could help in individualizing patient management. Simple geometric features of tumors (necrosis, edema, active tumor) and first-order statistics in Magnetic Resonance Imaging (MRI) are used in clinical practice. However, these features provide limited characterization power because they do not incorporate spatial information and thus cannot differentiate patterns. The aim of this work is to develop and evaluate a methodological framework dedicated to building a prognostic model based on heterogeneity textural features of multimodal MRI sequences (T1, T1-contrast, T2 and FLAIR) in GBM. The proposed workflow consists in i) registering the available 3D multimodal MR images and segmenting the tumor volume, ii) extracting image features such as heterogeneity metrics and iii) building a prognostic model by selecting, ranking and combining optimal features through machine learning (Support Vector Machine). This framework was applied to 40 histologically proven GBM patients with the endpoint being overall survival (OS) classified as above or below the median survival (15 months). The models combining features from a maximum of two modalities were evaluated using leave-one-out cross-validation (LOOCV). A classification accuracy of 90% (sensitivity 85%, specificity 95%) was obtained by combining features from T1 pre-contrast and T1 post-contrast sequences. Our results suggest that several textural features in each MR sequence have prognostic value in GBM.
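The model-building step, an SVM evaluated with leave-one-out cross-validation, maps directly onto scikit-learn; the feature matrix below is a synthetic stand-in for the selected texture features of the 40 patients.

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 40 patients, a handful of selected texture features,
# label = overall survival above/below the median (15 months).
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 6))
y = rng.integers(0, 2, size=40)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.2%}")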
A proposed artificial intelligence workflow to address application challenges leveraged on algorithm uncertainty
Li, D.
Hu, L.
Peng, X.
Xiao, N.
Zhao, H.
Liu, G.
Liu, H.
Li, K.
Ai, B.
Xia, H.
Lu, L.
Gao, Y.
Wu, J.
Liang, H.
iScience2022Journal Article, cited 3 times
Website
LCTSC
Lung CT Segmentation Challenge 2017
COVID-19
Computed Tomography (CT)
challenge competition
Artificial intelligence
Bioinformatics
Neural networks
Artificial Intelligence (AI) has achieved state-of-the-art performance in medical imaging. However, most algorithms focus exclusively on improving classification accuracy while neglecting the major challenges of real-world application. The opacity of algorithms prevents users from knowing when the algorithms might fail, and the natural gap between training datasets and in-reality data may lead to unexpected AI system malfunction. Knowing the underlying uncertainty is essential for improving system reliability. Therefore, we developed a COVID-19 AI system, utilizing a Bayesian neural network to calculate uncertainties in classification and reliability intervals of datasets. Validated with four multi-region datasets simulating different scenarios, our approach proved effective at flagging possible system failures and handing decision power to human experts in time. Leveraging the complementary strengths of AI and health professionals, the present method has the potential to improve the practicability of AI systems in clinical application.
Machine Learning in Medical Imaging
Giger, M. L.
J Am Coll Radiol2018Journal Article, cited 157 times
Website
Radiomics
Machine learning
computer aided diagnosis (CADx)
computer-assisted decision support
Deep learning
Advances in both imaging and computers have synergistically led to a rapid rise in the potential use of artificial intelligence in various radiological imaging tasks, such as risk assessment, detection, diagnosis, prognosis, and therapy response, as well as in multi-omics disease discovery. A brief overview of the field is given here, allowing the reader to recognize the terminology, the various subfields, and components of machine learning, as well as the clinical potential. Radiomics, an expansion of computer-aided diagnosis, has been defined as the conversion of images to minable data. The ultimate benefit of quantitative radiomics is to (1) yield predictive image-based phenotypes of disease for precision medicine or (2) yield quantitative image-based phenotypes for data mining with other -omics for discovery (ie, imaging genomics). For deep learning in radiology to succeed, note that well-annotated large data sets are needed since deep networks are complex, computer software and hardware are evolving constantly, and subtle differences in disease states are more difficult to perceive than differences in everyday objects. In the future, machine learning in radiology is expected to have a substantial clinical impact with imaging examinations being routinely obtained in clinical practice, providing an opportunity to improve decision support in medical image interpretation. The term of note is decision support, indicating that computers will augment human decision making, making it more effective and efficient. The clinical impact of having computers in the routine clinical practice may allow radiologists to further integrate their knowledge with their clinical colleagues in other medical specialties and allow for precision medicine.
Artificial Intelligence Using Open Source BI-RADS Data Exemplifying Potential Future Use
Ghosh, A.
J Am Coll Radiol2019Journal Article, cited 0 times
CBIS-DDSM
*Algorithms
*Artificial Intelligence
BREAST
Computer Aided Diagnosis (CADx)
Predictive Value of Tests
Artificial intelligence
BI-RADS
machine learning
Supervised training
radiologist-augmented workflow
OBJECTIVES: With much hype about artificial intelligence (AI) rendering radiologists redundant, a simple radiologist-augmented AI workflow is evaluated; the premise is that inclusion of a radiologist's opinion into an AI algorithm would make the algorithm achieve better accuracy than an algorithm trained on imaging parameters alone. Open-source BI-RADS data sets were evaluated to see whether inclusion of a radiologist's opinion (in the form of BI-RADS classification) in addition to image parameters improved the accuracy of prediction of histology using three machine learning algorithms vis-a-vis algorithms using image parameters alone. MATERIALS AND METHODS: BI-RADS data sets were obtained from the University of California, Irvine Machine Learning Repository (data set 1) and the Digital Database for Screening Mammography repository (data set 2); three machine learning algorithms were trained using 10-fold cross-validation. Two sets of models were trained: M1, using lesion shape, margin, density, and patient age for data set 1 and image texture parameters for data set 2, and M2, using the previous image parameters and the BI-RADS classification provided by radiologists. The area under the curve and the Gini coefficient for M1 and M2 were compared for the validation data set. RESULTS: The models using the radiologist-provided BI-RADS classification performed significantly better than the models not using them (P < .0001). CONCLUSION: AI and radiologist working together can achieve better results, helping in case-based decision making. Further evaluation of the metrics involved in predictor handling by AI algorithms will provide newer insights into imaging.
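The radiologist-augmented comparison (model M1 on image parameters alone versus M2 with the BI-RADS class appended as a feature) can be sketched as below; data, classifier choice and CV protocol are placeholders, so with random labels both AUCs will hover near 0.5. The point is the workflow of appending the radiologist's reading as an extra column.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n = 500
image_feats = rng.normal(size=(n, 4))      # e.g. shape, margin, density, age
birads = rng.integers(2, 6, size=(n, 1))   # radiologist BI-RADS category 2-5
y = rng.integers(0, 2, size=n)             # histology: 0 benign, 1 malignant

models = {"M1 (image parameters only)": image_feats,
          "M2 (image parameters + BI-RADS)": np.hstack([image_feats, birads])}
for name, X in models.items():
    proba = cross_val_predict(RandomForestClassifier(random_state=0),
                              X, y, cv=10, method="predict_proba")[:, 1]
    print(name, "AUC =", round(roc_auc_score(y, proba), 3))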
Computer-Assisted Decision Support System in Pulmonary Cancer Detection and Stage Classification on CT Images
Masood, Anum
Sheng, Bin
Li, Ping
Hou, Xuhong
Wei, Xiaoer
Qin, Jing
Feng, Dagan
Journal of Biomedical Informatics2018Journal Article, cited 10 times
Website
Lung cancer stages
Nodule detection
Deep learning
Convolutional Neural Network (CNN)
mIoT (medical Internet of Things)
MBAN (Medical Body Area Network)
Content based medical image retrieval using topic and location model
Shamna, P.
Govindan, V. K.
Abdul Nazeer, K. A.
Journal of Biomedical Informatics2019Journal Article, cited 0 times
Website
Content based medical image retrieval
Radiomics
Imaging Feature
Background and objective: Retrieval of medical images from an anatomically diverse dataset is a challenging task. The objective of our present study is to analyse whether incorporating topic and location probabilities enhances the performance of an automated medical image retrieval system. Materials and methods: In this paper, we present an automated medical image retrieval system using a Topic and Location Model. The topic information is generated using the Guided Latent Dirichlet Allocation (GuidedLDA) method. A novel Location Model is proposed to incorporate the spatial information of visual words. We also introduce a new metric called position weighted Precision (wPrecision) to measure the rank order of the retrieved images. Results: Experiments on two large medical image datasets - IRMA 2009 and Multimodal dataset - revealed that the proposed method outperforms existing medical image retrieval systems in terms of Precision and Mean Average Precision. The proposed method achieved better Mean Average Precision (86.74%) compared to recent medical image retrieval systems using the Multimodal dataset with 7200 images. The proposed system achieved better Precision (97.5%) for the top ten images compared to recent medical image retrieval systems using the IRMA 2009 dataset with 14,410 images. Conclusion: Supplementing spatial details of visual words to the Topic Model enhances the retrieval efficiency of medical images from large repositories. Such automated medical image retrieval systems can be used to assist physicians in retrieving medical images with better precision compared to state-of-the-art retrieval systems.
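The wPrecision metric rewards relevant images retrieved near the top of the ranking; the exact weighting is not given in the abstract, so the following is a hypothetical rank-weighted variant for illustration only.

```python
# Hypothetical position-weighted precision: relevant images near the top of
# the ranking contribute more than those lower down. The paper's exact
# weighting may differ; this is one plausible reading of "wPrecision".
def w_precision(relevant_flags):
    """relevant_flags: list of 0/1 over the ranked retrieved images."""
    n = len(relevant_flags)
    weights = [(n - rank) / n for rank in range(n)]   # rank 0 weighs most
    hits = sum(w * r for w, r in zip(weights, relevant_flags))
    return hits / sum(weights)

# Two retrievals with the same plain precision (3/5) but different rank order:
print(w_precision([1, 1, 1, 0, 0]))   # relevant images first -> higher score
print(w_precision([0, 0, 1, 1, 1]))   # relevant images last  -> lower score
```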
Comparison of segmentation-free and segmentation-dependent computer-aided diagnosis of breast masses on a public mammography dataset
Sawyer Lee, Rebecca
Dunnmon, Jared A
He, Ann
Tang, Siyi
Re, Christopher
Rubin, Daniel L
J Biomed Inform2021Journal Article, cited 1 times
Website
CBIS-DDSM
Computer Aided Diagnosis (CADx)
Deep learning
Mammography
Segmentation
PURPOSE: To compare machine learning methods for classifying mass lesions on mammography images that use predefined image features computed over lesion segmentations to those that leverage segmentation-free representation learning on a standard, public evaluation dataset. METHODS: We apply several classification algorithms to the public Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), in which each image contains a mass lesion. Segmentation-free representation learning techniques for classifying lesions as benign or malignant include both a Bag-of-Visual-Words (BoVW) method and a Convolutional Neural Network (CNN). We compare classification performance of these techniques to that obtained using two different segmentation-dependent approaches from the literature that rely on specific combinations of end classifiers (e.g. linear discriminant analysis, neural networks) and predefined features computed over the lesion segmentation (e.g. spiculation measure, morphological characteristics, intensity metrics). RESULTS: We report area under the receiver operating characteristic curve (AZ) values for malignancy classification on CBIS-DDSM for each technique. We find average AZ values of 0.73 for a segmentation-free BoVW method, 0.86 for a segmentation-free CNN method, 0.75 for a segmentation-dependent linear discriminant analysis of Rubber-Band Straightening Transform features, and 0.58 for a hybrid rule-based neural network classification using a small number of hand-designed features. CONCLUSIONS: We find that malignancy classification performance on the CBIS-DDSM dataset using segmentation-free BoVW features is comparable to that of the best segmentation-dependent methods we study, but also observe that a common segmentation-free CNN model substantially and significantly outperforms each of these (p < 0.05). These results reinforce recent findings suggesting that representation learning techniques such as BoVW and CNNs are advantageous for mammogram analysis because they do not require lesion segmentation, the quality and specific characteristics of which can vary substantially across datasets. We further observe that segmentation-dependent methods achieve performance levels on CBIS-DDSM inferior to those achieved on the original evaluation datasets reported in the literature. Each of these findings reinforces the need for standardization of datasets, segmentation techniques, and model implementations in performance assessments of automated classifiers for medical imaging.
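For context on the segmentation-free BoVW representation, here is a minimal sketch: local patches are clustered into a visual vocabulary, and each image becomes a histogram of visual-word counts fed to a downstream classifier. Patch size, descriptors, and vocabulary size are simplified stand-ins.

```python
# Minimal Bag-of-Visual-Words sketch: cluster patch descriptors into a
# vocabulary, then represent each image as a visual-word histogram.
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(image, size=16, stride=16):
    h, w = image.shape
    return np.array([image[i:i+size, j:j+size].ravel()
                     for i in range(0, h - size + 1, stride)
                     for j in range(0, w - size + 1, stride)])

def bovw_histograms(images, n_words=64):
    all_patches = np.vstack([extract_patches(im) for im in images])
    vocab = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_patches)
    hists = []
    for im in images:
        words = vocab.predict(extract_patches(im))
        hists.append(np.bincount(words, minlength=n_words) / len(words))
    return np.array(hists)    # feed to e.g. an SVM or logistic regression

images = [np.random.rand(128, 128) for _ in range(10)]   # stand-in mammogram ROIs
print(bovw_histograms(images).shape)                      # (10, 64)
```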
Pixel-wise body composition prediction with a multi-task conditional generative adversarial network
Wang, Q.
Xue, W.
Zhang, X.
Jin, F.
Hahn, J.
J Biomed Inform2021Journal Article, cited 0 times
Website
CT Lymph Nodes
LiTS
Image Registration
Generative Adversarial Network (GAN)
The analysis of human body composition plays a critical role in health management and disease prevention. However, current medical technologies to accurately assess body composition, such as dual energy X-ray absorptiometry, computed tomography, and magnetic resonance imaging, have the disadvantages of prohibitive cost or ionizing radiation. Recently, body shape based techniques using body scanners and depth cameras have brought new opportunities for improving body composition estimation by intelligently analyzing body shape descriptors. In this paper, we present a multi-task deep neural network method utilizing a conditional generative adversarial network to predict pixel-level body composition using only 3D body surfaces. The proposed method can predict 2D subcutaneous and visceral fat maps in a single network with high accuracy. We further introduce an interpreted patch discriminator which optimizes the texture accuracy of the 2D fat maps. The validity and effectiveness of our new method are demonstrated experimentally on the TCIA and LiTS datasets. Our proposed approach outperforms competitive methods by at least 41.3% for the whole body fat percentage, 33.1% for the subcutaneous and visceral fat percentage, and 4.1% for the regional fat predictions.
Directional local ternary quantized extrema pattern: A new descriptor for biomedical image indexing and retrieval
Deep, G
Kaur, L
Gupta, S
Engineering Science and Technology, an International Journal2016Journal Article, cited 9 times
Website
LIDC-IDRI
Algorithm Development
Computed Tomography (CT)
Magnetic resonance imaging (MRI)
Texture features
A volumetric technique for fossil body mass estimation applied to Australopithecus afarensis
Brassey, Charlotte A
O'Mahoney, Thomas G
Chamberlain, Andrew T
Sellers, William I
Journal of human evolution2018Journal Article, cited 3 times
Website
NAF-Prostate
Australopithecus Afarensis
Anthropology
Facilitating innovation and knowledge transfer between homogeneous and heterogeneous datasets: Generic incremental transfer learning approach and multidisciplinary studies
Chui, Kwok Tai
Arya, Varsha
Band, Shahab S.
Alhalabi, Mobeen
Liu, Ryan Wen
Chi, Hao Ran
Journal of Innovation & Knowledge2023Journal Article, cited 0 times
Website
NSCLC-Radiomics-Genomics
SPIE-AAPM Lung CT Challenge
LungCT-Diagnosis
QIN Breast
QIN Breast DCE-MRI
Breast-MRI-NACT-Pilot
Transfer learning
Deep Learning
Open datasets serve as facilitators for researchers to conduct research with ground truth data. Datasets generally contain innovation and knowledge in their domains that can be transferred between homogeneous datasets, which has become feasible for machine learning models with the advent of transfer learning algorithms. Heterogeneous datasets also attract research initiatives if useful innovation and knowledge can be extracted across datasets of different domains; a breakthrough can then be achieved without the restriction that datasets be similar. A multiple-round, multiple incremental transfer learning approach with a negative transfer avoidance algorithm is proposed as a generic approach to transfer innovation and knowledge from the source domain to the target domain and to yield optimal results in the target model. Incremental learning plays an important role in lowering the risk of transferring unrelated information, which reduces the performance of machine learning models. To evaluate the effectiveness of the proposed algorithm, multidisciplinary studies are carried out in 5 disciplines with 15 benchmark datasets. Each discipline comprises 3 datasets as studies with homogeneous datasets, whereas heterogeneous datasets are formed between disciplines. The results reveal that the proposed algorithm enhances the average accuracy by 4.35% compared with existing works. Ablation studies are also conducted to analyse the contributions of the individual techniques of the proposed algorithm, namely the multiple-round strategy, incremental learning, and negative transfer avoidance; these techniques enhance the average accuracy of the machine learning model by 3.44%, 0.849%, and 4.26%, respectively.
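A minimal sketch of the incremental-transfer idea with a naive negative-transfer guard follows; it keeps a fine-tuning stage only if target-validation accuracy does not drop. This illustrates the concept only and is not the paper's algorithm.

```python
# Sequential (incremental) fine-tuning across datasets with a simple
# negative-transfer guard: an update is kept only if it does not hurt
# accuracy on a held-out target validation set. Data are random stand-ins.
import copy
import torch
import torch.nn as nn

def accuracy(model, X, y):
    with torch.no_grad():
        return (model(X).argmax(1) == y).float().mean().item()

def incremental_transfer(model, stages, X_val, y_val, epochs=20):
    """stages: list of (X, y) datasets ordered from source toward target."""
    best = accuracy(model, X_val, y_val)
    for X, y in stages:
        candidate = copy.deepcopy(model)
        opt = torch.optim.Adam(candidate.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):                 # fine-tune on this stage
            opt.zero_grad()
            loss_fn(candidate(X), y).backward()
            opt.step()
        score = accuracy(candidate, X_val, y_val)
        if score >= best:                       # negative-transfer guard
            model, best = candidate, score
    return model

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
stages = [(torch.randn(64, 8), torch.randint(0, 2, (64,))) for _ in range(3)]
X_val, y_val = torch.randn(32, 8), torch.randint(0, 2, (32,))
print(accuracy(incremental_transfer(net, stages, X_val, y_val), X_val, y_val))
```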
Secure telemedicine using RONI halftoned visual cryptography without pixel expansion
Bakshi, Arvind
Patel, Anoop Kumar
Journal of Information Security and Applications2019Journal Article, cited 0 times
Website
BRAIN
Algorithm Development
To provide quality healthcare services worldwide, telemedicine is a well-known technique that delivers healthcare services remotely. For diagnosis and prescription by a doctor, a great deal of information must be shared over public and private channels. Medical information such as MRI, X-ray, and CT-scan images contains very personal information and needs to be secured. Security properties such as confidentiality, privacy, and integrity of medical data remain a challenge, and existing security techniques such as digital watermarking and encryption are not efficient for real-time use. This paper investigates the problem and provides a security solution covering the major aspects using Visual Cryptography (VC). The proposed algorithm creates shares for the parts of the image that do not contain relevant information. All information related to the disease is considered relevant and is marked as the region of interest (ROI). The integrity of the image is maintained by inserting some information into the region of non-interest (RONI). All generated shares are transmitted over different channels, and the embedded information is decrypted by overlapping (XORing) shares in Θ(1) time. Visual perception of all the results discussed in this article is very clear. The proposed algorithm achieves a PSNR (peak signal-to-noise ratio) of 22.9452, an SSIM (structural similarity index) of 0.9701, and an accuracy of 99.8740%.
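The XOR share scheme at the heart of the method can be illustrated in a few lines; halftoning and the ROI/RONI embedding are omitted for brevity.

```python
# XOR-based visual cryptography sketch: a binary (halftoned) secret image is
# split into n random shares whose XOR reconstructs it exactly, with no
# pixel expansion.
import numpy as np

rng = np.random.default_rng(0)

def make_shares(secret_bits, n_shares=3):
    shares = [rng.integers(0, 2, secret_bits.shape, dtype=np.uint8)
              for _ in range(n_shares - 1)]
    last = secret_bits.copy()
    for s in shares:                  # last share = secret XOR all others
        last ^= s
    return shares + [last]

def reconstruct(shares):
    out = np.zeros_like(shares[0])
    for s in shares:
        out ^= s
    return out

secret = rng.integers(0, 2, (64, 64), dtype=np.uint8)   # stand-in halftone
shares = make_shares(secret)
assert np.array_equal(reconstruct(shares), secret)      # lossless recovery
```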
Automatic Lung Segmentation for the Inclusion of Juxtapleural Nodules and Pulmonary Vessels using Curvature based Border Correction
Singadkar, Ganesh
Mahajan, Abhishek
Thakur, Meenakshi
Talbar, Sanjay
Journal of King Saud University-Computer and Information Sciences2018Journal Article, cited 1 times
Website
LIDC-IDRI
Autocorrection of lung boundary on 3D CT lung cancer images
Nurfauzi, R.
Nugroho, H. A.
Ardiyanto, I.
Frannita, E. L.
Journal of King Saud University - Computer and Information Sciences2019Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
LUNG
Segmentation
Adaptive Border Marching (ABM)
Lung cancer in men has the highest mortality rate among all types of cancer. Juxta-pleural and juxta-vascular nodules are the most common nodules located on the lung surface. A computer aided detection (CADe) system is effective for assisting radiologists in diagnosing lung nodules. However, the lung segmentation step requires sophisticated methods when juxta-pleural and juxta-vascular nodules are present. Fast computational time and a low error in covering nodule areas are the aims of this study. The proposed method consists of five stages, namely ground truth (GT) extraction, data preparation, tracheal extraction, separation of lung fusion and lung border correction. The data consist of 57 3D CT lung cancer images selected from the LIDC-IDRI dataset. Nodule areas are defined as the outer areas labeled by four radiologists. The proposed method achieves the fastest computational time of 0.32 s per slice, or 60 times faster than conventional adaptive border marching (ABM). Moreover, it produces a nodule under-segmentation value as low as 14.6%. This indicates that the proposed method has the potential to be embedded in a lung CADe system to cover juxta-pleural and juxta-vascular nodule areas in lung segmentation.
Radiogenomic analysis: 1p/19q codeletion based subtyping of low-grade glioma by analysing advanced biomedical texture descriptors
Gore, Sonal
Jagtap, Jayant
Journal of King Saud University - Computer and Information Sciences2021Journal Article, cited 1 times
Website
LGG-1p19qDeletion
Gray-level co-occurrence matrix (GLCM)
Random Forest
Radiogenomics
BRAIN
Presurgical discrimination of 1p/19q codeletion status may have prognostic and diagnostic value for glioma patients for immediate personalized treatment. Artificial intelligence-based models have proved effective for computer aided diagnosis of glioma cancer. The objective of this study is to present an advanced biomedical texture descriptor to perform machine learning-assisted identification of the 1p/19q codeletion status of low-grade glioma (LGG) cancer, and to verify the efficacy of textures extracted using the local binary pattern (LBP) method and derived from the gray level co-occurrence matrix (GLCM). The study used a random forest-assisted radiomics model to analyse MRI images of 159 subjects. Four advanced biomedical texture descriptors are proposed by experimenting with different extensions of the LBP method. These variants (I to IV), with 8-bit, 16-bit, or 24-bit LBP codes, are applied with different orientations in 5 × 5 and 7 × 7 square-sized neighbourhoods and recorded in LBP histograms. These histogram features are concatenated with GLCM-based textures including energy, correlation, contrast and homogeneity. The texture descriptors performed best with a classification accuracy of 87.50% (AUC: 0.917, sensitivity: 95%, specificity: 75%, f1-score: 90.48%) using 8-bit LBP variant I. The 10-fold cross-validated accuracy of all four sets ranges from 65.62% to 87.50% using the random forest classifier, and the mean AUC ranges from 0.646 to 0.917.
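A sketch of this descriptor family follows, assuming the scikit-image implementations of LBP and GLCM and a single neighbourhood configuration rather than the four proposed variants.

```python
# LBP histogram concatenated with GLCM statistics, classified with a random
# forest. Simplified relative to the paper's four descriptor variants.
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def texture_descriptor(img_u8):
    lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(img_u8, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    stats = [graycoprops(glcm, p).mean()
             for p in ("energy", "correlation", "contrast", "homogeneity")]
    return np.concatenate([hist, stats])

# Stand-in MRI patches and labels (1p/19q codeleted vs intact).
X = np.array([texture_descriptor(np.random.randint(0, 256, (64, 64),
                                                   dtype=np.uint8))
              for _ in range(20)])
y = np.random.randint(0, 2, 20)
print(RandomForestClassifier(random_state=0).fit(X, y).score(X, y))
```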
MDFU-Net: Multiscale dilated features up-sampling network for accurate segmentation of tumor from heterogeneous brain data
Sultan, Haseeb
Owais, Muhammad
Nam, Se Hyun
Haider, Adnan
Akram, Rehan
Usman, Muhammad
Park, Kang Ryoung
Journal of King Saud University - Computer and Information Sciences2023Journal Article, cited 0 times
Website
Brain-Tumor-Progression
BraTS 2020
BRAIN
Computer Aided Diagnosis (CADx)
Algorithm Development
The existing methods for accurate brain tumor (BT) segmentation based on homogeneous datasets show significant performance degradation in actual clinical applications and lacked heterogeneous data analysis. To address these issues, we designed a deep learning-based multiscale dilated features up-sampling network (MDFU-Net) for accurate BT segmentation from heterogeneous brain data. Our method primarily uses the strength of multiscale dilated features (MDF) inside the encoder module to improve the segmentation performance. For the final segmentation, a simple yet effective decoder module is designed to process the dense spatial MDF. For experiments, our MDFU-Net is trained on one dataset and tested with another dataset in a heterogeneous environment, showing quantitative results of the Dice similarity coefficient (DC) of 62.66%, intersection over union (IoU) of 56.96%, specificity (Spe) of 99.29%, and sensitivity (Sen) of 51.98%, which were higher than those of the state-of-the-art methods. There are several reasons for the lower values of the evaluation metrics of the heterogeneous dataset, including the change in characteristics of different MRI modalities, the presence of minor lesions, and a highly imbalanced dataset. Moreover, the experimental results for a homogeneous dataset showed that our MDFU-Net achieved a DC of 82.96%, IoU of 74.94%, Spe of 99.89%, and Sen of 68.05%, which were also higher than those of the state-of-the-art methods. Our system, which is based on heterogeneous brain data as well as homogeneous brain data, can be advantageous to radiologists and medical experts.
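For reference, the reported overlap metrics (DC and IoU) are computed from binary segmentation masks as follows.

```python
# Dice similarity coefficient and intersection-over-union from binary masks.
import numpy as np

def dice(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def iou(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
truth = np.zeros((64, 64), bool); truth[15:45, 15:45] = True
print(f"DC={dice(pred, truth):.3f}, IoU={iou(pred, truth):.3f}")
```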
Artificial intelligence and advanced MRI techniques: A comprehensive analysis of diffuse gliomas
INTRODUCTION: The complexity of diffuse gliomas calls for advanced imaging techniques like MRI to understand their heterogeneity. Utilizing the UCSF-PDGM dataset, this study harnesses MRI techniques, radiomics, and AI to analyze diffuse gliomas for optimizing patient outcomes. METHODS: The research utilized a dataset of 501 subjects with diffuse gliomas acquired through a comprehensive MRI protocol. After performing intricate tumor segmentation, 82,800 radiomic features were extracted for each patient from nine segmentations across eight MRI sequences. These features informed neural network and XGBoost model training to predict patient outcomes and tumor grades, supplemented by SHAP analysis to pinpoint influential radiomic features. RESULTS: In our analysis of the UCSF-PDGM dataset, we observed a diverse range of WHO tumor grades and patient outcomes, discarding one corrupt MRI scan. Our segmentation method showed high accuracy when comparing automated and manual techniques. The neural network excelled in prediction of WHO tumor grades with an accuracy of 0.9500 for the necrotic tumor label. The SHAP analysis highlighted the 3D first-order mean as one of the most influential radiomic features, while features like Original Shape Sphericity and Original Shape Elongation were notably prominent. CONCLUSION: A study using the UCSF-PDGM dataset highlighted AI and radiomics' profound impact on neuroradiology by demonstrating reliable tumor segmentation and identifying key radiomic features, despite challenges in predicting patient survival. The research emphasizes both the potential of AI in this field and the need for broader datasets of diverse MRI sequences to enhance patient outcomes. IMPLICATION FOR PRACTICE: The study underlines the significant role of radiomics in improving the accuracy of tumor identification through radiomic features.
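A sketch of the grade-prediction and SHAP step follows, with stand-in feature names (the actual radiomic features number in the tens of thousands) and random data; shap return shapes vary somewhat by library version.

```python
# Gradient-boosted trees on radiomic features with SHAP feature attribution.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

X = pd.DataFrame(np.random.rand(200, 4),
                 columns=["firstorder_Mean", "shape_Sphericity",
                          "shape_Elongation", "glcm_Contrast"])
y = np.random.randint(0, 3, 200)          # WHO grade as a class label

model = xgb.XGBClassifier(n_estimators=100, max_depth=3,
                          eval_metric="mlogloss")
model.fit(X, y)

explainer = shap.TreeExplainer(model)     # per-feature influence on output
shap_values = explainer.shap_values(X)
print("SHAP array shape(s):", np.shape(shap_values))
```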
Molecular physiology of contrast enhancement in glioblastomas: An analysis of The Cancer Imaging Archive (TCIA)
Treiber, Jeffrey M
Steed, Tyler C
Brandel, Michael G
Patel, Kunal S
Dale, Anders M
Carter, Bob S
Chen, Clark C
J Clin Neurosci2018Journal Article, cited 2 times
Website
Radiogenomics
Radiomics
Glioblastoma Multiforme (GBM)
BRAIN
Magnetic Resonance Imaging (MRI)
Contrast enhancement
The physiologic processes underlying MRI contrast enhancement in glioblastoma patients remain poorly understood. MRIs of 148 glioblastoma subjects from The Cancer Imaging Archive were segmented using Iterative Probabilistic Voxel Labeling (IPVL). Three aspects of contrast enhancement (CE) were parametrized: the mean intensity of all CE voxels (CEi), the intensity heterogeneity in CE (CEh), and volumetric ratio of CE to necrosis (CEr). Associations between these parameters and patterns of gene expression were analyzed using DAVID functional enrichment analysis. Glioma CpG island methylator phenotype (G-CIMP) glioblastomas were poorly enhancing. Otherwise, no differences in CE parameters were found between proneural, neural, mesenchymal, and classical glioblastomas. High CEi was associated with expression of genes that mediate inflammatory responses. High CEh was associated with increased expression of genes that regulate remodeling of extracellular matrix (ECM) and endothelial permeability. High CEr was associated with increased expression of genes that mediate cellular response to stressful metabolic states, including hypoxia and starvation. Our results indicate that CE in glioblastoma is associated with distinct biological processes involved in inflammatory response and tissue hypoxia. Integrative analysis of these CE parameters may yield meaningful information pertaining to the biologic state of glioblastomas and guide future therapeutic paradigms.
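The three CE parameters can be computed directly from the segmented compartments; the abstract does not state how heterogeneity was operationalized, so the standard deviation is used below as one natural choice.

```python
# CEi, CEh, CEr from a post-contrast volume and voxel masks of the
# contrast-enhancing (CE) and necrotic compartments.
import numpy as np

def ce_parameters(t1_post, ce_mask, necrosis_mask):
    ce_voxels = t1_post[ce_mask]
    cei = ce_voxels.mean()                             # CEi: mean CE intensity
    ceh = ce_voxels.std()                              # CEh: heterogeneity (assumed std)
    cer = ce_mask.sum() / max(necrosis_mask.sum(), 1)  # CEr: CE/necrosis volume ratio
    return cei, ceh, cer

vol = np.random.rand(32, 32, 32) * 100    # stand-in post-contrast T1 volume
ce = vol > 60                             # stand-in CE segmentation
necrosis = vol < 10                       # stand-in necrosis segmentation
print(ce_parameters(vol, ce, necrosis))
```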
Prognostic relevance of CSF and peri-tumoral edema volumes in glioblastoma
Mummareddy, Nishit
Salwi, Sanjana R
Kumar, Nishant Ganesh
Zhao, Zhiguo
Ye, Fei
Le, Chi H
Mobley, Bret C
Thompson, Reid C
Chambless, Lola B
Mistry, Akshitkumar M
Journal of Clinical Neuroscience2021Journal Article, cited 0 times
Website
TCGA-GBM
Glioblastoma Multiforme (GBM)
Starlight: A kernel optimizer for GPU processing
Zeni, Alberto
Del Sozzo, Emanuele
D'Arnese, Eleonora
Conficconi, Davide
Santambrogio, Marco D.
Journal of Parallel and Distributed Computing2024Journal Article, cited 0 times
Website
CPTAC-LUAD
Performance
High-Performance Computing
Graphics Processing Units (GPU)
Optimization
PyTorch
Algorithm Development
Over the past few years, GPUs have found widespread adoption in many scientific domains, offering notable performance and energy efficiency advantages compared to CPUs. However, optimizing GPU high-performance kernels poses challenges given the complexities of GPU architectures and programming models. Moreover, current GPU development tools provide few high-level suggestions and overlook the underlying hardware. Here we present Starlight, an open-source, highly flexible tool for enhancing GPU kernel analysis and optimization. Starlight autonomously describes Roofline Models, examines performance metrics, and correlates these insights with GPU architectural bottlenecks. Additionally, Starlight predicts potential performance enhancements before altering the source code. We demonstrate its efficacy by applying it to genomics and physics applications from the literature, attaining speedups from 1.1× to 2.5× over state-of-the-art baselines. Furthermore, Starlight supports the development of new GPU kernels, which we exemplify through an image processing application, showing speedups of 12.7× and 140× when compared against state-of-the-art FPGA- and GPU-based solutions.
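The Roofline bookkeeping a tool like Starlight automates reduces to one comparison: attainable performance is the lesser of peak compute and arithmetic intensity times memory bandwidth. The hardware numbers below are illustrative, not those of any specific GPU.

```python
# Roofline Model arithmetic: a kernel is bandwidth-bound when its arithmetic
# intensity (FLOP per byte) times memory bandwidth falls below peak compute.
def attainable_gflops(flops, bytes_moved,
                      peak_gflops=19500.0, mem_bw_gbs=1555.0):
    intensity = flops / bytes_moved            # FLOP per byte moved
    return min(peak_gflops, intensity * mem_bw_gbs), intensity

# Example: 2 GFLOP of work against 8 GB of traffic is bandwidth-bound.
perf, ai = attainable_gflops(flops=2e9, bytes_moved=8e9)
print(f"arithmetic intensity = {ai:.2f} FLOP/B, roof = {perf:.0f} GFLOP/s")
```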
Deep feature batch correction using ComBat for machine learning applications in computational pathology
Background: Developing artificial intelligence (AI) models for digital pathology requires large datasets from multiple sources. However, without careful implementation, AI models risk learning confounding site-specific features in datasets instead of clinically relevant information, leading to overestimated performance, poor generalizability to real-world data, and potential misdiagnosis.
Methods: Whole-slide images (WSIs) from The Cancer Genome Atlas (TCGA) colon (COAD) and stomach adenocarcinoma datasets were selected for inclusion in this study. Patch embeddings were obtained using three feature extraction models, followed by ComBat harmonization. Attention-based multiple instance learning models were trained to predict tissue-source site (TSS), as well as clinical and genetic attributes, using raw, Macenko normalized, and ComBat-harmonized patch embeddings.
Results: TSS prediction achieved high accuracy (AUROC > 0.95) with all three feature extraction models. ComBat harmonization significantly reduced the AUROC for TSS prediction, with mean AUROCs dropping to approximately 0.5 for most models, indicating successful mitigation of batch effects (e.g., CCL-ResNet50 in TCGA-COAD: Pre-ComBat AUROC = 0.960, Post-ComBat AUROC = 0.506, p < 0.001). Clinical attributes associated with TSS, such as race and treatment response, showed decreased predictability post-harmonization. Notably, the prediction of genetic features like MSI status remained robust after harmonization (e.g., MSI in TCGA-COAD: Pre-ComBat AUROC = 0.667, Post-ComBat AUROC = 0.669, p=0.952), indicating the preservation of true histological signals.
Conclusion: ComBat harmonization of deep learning-derived histology features effectively reduces the risk of AI models learning confounding features in WSIs, ensuring more reliable performance estimates. This approach is promising for the integration of large-scale digital pathology datasets.
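A sketch of applying ComBat to patch embeddings with the open-source neuroCombat package follows; the argument names reflect the package version I assume here, and the data are random stand-ins.

```python
# ComBat batch correction of deep-feature embeddings (features x samples),
# removing site effects while preserving a labeled biological covariate.
import numpy as np
import pandas as pd
from neuroCombat import neuroCombat

embeddings = np.random.rand(512, 300)        # 512-dim features, 300 patches
covars = pd.DataFrame({
    "batch": np.random.randint(0, 5, 300),   # tissue-source site label
    "msi": np.random.randint(0, 2, 300),     # biological signal to preserve
})

harmonized = neuroCombat(dat=embeddings,
                         covars=covars,
                         batch_col="batch",
                         categorical_cols=["msi"])["data"]
print(harmonized.shape)                      # (512, 300), site effects removed
```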
A reversible data hiding method by histogram shifting in high quality medical images
Huang, Li-Chin
Tseng, Lin-Yu
Hwang, Min-Shiang
Journal of Systems and Software2013Journal Article, cited 60 times
Website
Demystifying the results of RTOG 0617: Identification of dose sensitive cardiac sub-regions associated with overall survival
McWilliam, A.
Abravan, A.
Banfill, K.
Faivre-Finn, C.
van Herk, M.
J Thorac Oncol2023Journal Article, cited 0 times
NSCLC-Cetuximab (RTOG-0617)
cardiac dose
dose escalation
Non-Small Cell Lung Cancer (NSCLC)
radiotherapy
INTRODUCTION: The RTOG 0617 trial presented worse survival for patients with lung cancer treated in the high-dose (74 Gy) arm. In multivariable models, radiation level and whole heart volumetric dose parameters were associated with survival. In this work, we consider heart sub-regions to explain the observed survival difference between radiation levels. METHODS: Voxel-based analysis identified anatomical regions where dose was associated with survival. Bootstrapping clinical and dosimetric variables into an elastic-net model selected variables associated with survival. Multivariable Cox regression survival models assessed significance of dose to the heart sub-region, compared to whole heart v5 and v30. Finally, trial outcome was assessed following propensity score matching of patients on lung dose, heart sub-region dose, and tumour volume. RESULTS: 458 patients were eligible for voxel-based analysis. A significant region (p<0.001) was identified in the base of the heart. Bootstrapping selected mean lung dose, radiation level, log tumour volume, and heart region dose. The multivariable Cox model showed dose to the heart region (p=0.02) and tumour volume (p=0.03) to be significantly associated with survival, while radiation level was not significant (p=0.07). Models showed whole heart v5 and v30 were not associated with survival, with radiation level significant (p<0.05). In the matched cohort, no significant survival difference was seen between radiation levels. CONCLUSION: Dose to the base of the heart is associated with overall survival, partly removing the radiation-level effect and indicating that the worse survival in the high-dose arm is due in part to heart sub-region dose. By defining a heart avoidance region, future dose escalation trials may be feasible.
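The multivariable Cox step can be sketched with the lifelines package under hypothetical column names and random stand-in data.

```python
# Multivariable Cox proportional hazards model relating heart sub-region
# dose, lung dose, and tumour volume to overall survival.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

n = 200
df = pd.DataFrame({
    "heart_base_dose": np.random.rand(n) * 40,   # Gy, sub-region mean dose
    "mean_lung_dose": np.random.rand(n) * 20,
    "log_tumour_volume": np.random.randn(n),
    "time_months": np.random.exponential(24, n),
    "event": np.random.randint(0, 2, n),         # 1 = death observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()    # hazard ratios and p-values per covariate
```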
Stable and discriminating radiomic predictor of recurrence in early stage non-small cell lung cancer: Multi-site study
Khorrami, Mohammadhadi
Bera, Kaustav
Leo, Patrick
Vaidya, Pranjal
Patil, Pradnya
Thawani, Rajat
Velu, Priya
Rajiah, Prabhakar
Alilou, Mehdi
Choi, Humberto
Feldman, Michael D
Gilkeson, Robert C
Linden, Philip
Fu, Pingfu
Pass, Harvey
Velcheti, Vamsidhar
Madabhushi, Anant
2020Journal Article, cited 0 times
NSCLC Radiogenomics-Stanford
OBJECTIVES: To evaluate whether combining stability and discriminability criteria in building radiomic classifiers will improve the prediction of cancer recurrence in early stage non-small cell lung cancer on non-contrast computed tomography (CT).
MATERIALS AND METHODS: CT scans of 610 patients with early stage (IA, IB, IIA) NSCLC from four independent cohorts were evaluated. A total of 350 patients from Cleveland Clinic Foundation and University of Pennsylvania were divided into two equal sets: a training set (D1) and a validation set (D2). 80 patients from The Cancer Genome Atlas Lung Adenocarcinoma and Squamous Cell Carcinoma cohorts and 195 patients from The Cancer Imaging Archive were used as independent second (D3) and third (D4) validation sets. A linear discriminant analysis (LDA) classifier was built based on the most stable and discriminating features. In addition, a radiomic risk score (RRS) was generated using a least absolute shrinkage and selection operator (LASSO) Cox regression model to predict time to progression (TTP) following surgery.
RESULTS: A feature selection strategy accounting for both feature discriminability and stability resulted in a classifier with higher discriminability for cancer recurrence on the validation datasets than the discriminability-alone criterion (D2, AUC of 0.75 vs. 0.65; D3, 0.74 vs. 0.62; D4, 0.76 vs. 0.63). The RRS generated from the most stable-discriminating features was significantly associated with TTP compared with the discriminability-alone criterion (HR = 1.66, C-index of 0.72 vs. HR = 1.04, C-index of 0.62).
CONCLUSION: Accounting for both stability and discriminability yielded a more generalizable classifier for predicting cancer recurrence and TTP in early stage NSCLC.
Hybrid intelligent approach for diagnosis of the lung nodule from CT images using spatial kernelized fuzzy c-means and ensemble learning
Farahani, Farzad Vasheghani
Ahmadi, Abbas
Zarandi, Mohammad Hossein Fazel
Mathematics and Computers in Simulation2018Journal Article, cited 1 times
Website
LIDC-IDRI
Lung cancer detection from CT image using improved profuse clustering and deep learning instantaneously trained neural networks
Shakeel, P. Mohamed
Burhanuddin, M.A.
Desa, Mohamad Ishak
Measurement2019Journal Article, cited 0 times
CPTAC-LSCC
Machine Learning
Automatic lung disease detection is a critical and challenging task for researchers because noise introduced during image capture may corrupt the cancer image quality and thus degrade performance. To avoid this, lung image preprocessing has become an important stage, with edge detection, lung image resampling, lung image enhancement and image denoising as key components for improving the quality of the input image. Image denoising is a critical preprocessing task prior to further processing of the image, such as feature extraction, segmentation and texture analysis; it eliminates noise while retaining the edges and other detailed features as much as possible. This paper deals with improving lung image quality and diagnosing lung cancer with reduced misclassification. The lung CT images are collected from The Cancer Imaging Archive (TCIA) dataset; the noise present in the images is eliminated by applying a weighted mean histogram equalization approach, which successfully removes noise while enhancing image quality, and the affected region is segmented using an improved profuse clustering technique (IPCT). Various spectral features are derived from the affected region and examined by applying a deep-learning instantaneously trained neural network for predicting lung cancer. Finally, the system is evaluated using MATLAB-based simulation; the results show 98.42% accuracy with a minimum classification error of 0.038.
Brain tumor detection based on Convolutional Neural Network with neutrosophic expert maximum fuzzy sure entropy
Özyurt, Fatih
Sert, Eser
Avci, Engin
Dogantekin, Esin
Measurement2019Journal Article, cited 0 times
TCGA-GBM
Convolutional Neural Network (CNN)
Classification
Machine Learning
Brain tumor classification is a challenging task in the field of medical image processing. The present study proposes a hybrid method using Neutrosophy and a Convolutional Neural Network (NS-CNN). It aims to classify tumor region areas that are segmented from brain images as benign or malignant. In the first stage, MRI images were segmented using the neutrosophic set – expert maximum fuzzy-sure entropy (NS-EMFSE) approach. In the classification stage, the features of the segmented brain images were obtained by a CNN and classified using SVM and KNN classifiers. Experimental evaluation was carried out based on 5-fold cross-validation on 80 benign and 80 malignant tumors. The findings demonstrated that the CNN features displayed high classification performance with different classifiers. Experimental results indicate that CNN features performed best with SVM, with simulation results validating the output data at an average accuracy of 95.62%.
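A sketch of the hybrid stage (CNN features classified by SVM and KNN) follows, with a pretrained ResNet-18 standing in for the paper's CNN and random tensors standing in for segmented tumor regions; pretrained weights are downloaded on first use.

```python
# Deep features from a pretrained CNN fed to classical SVM/KNN classifiers.
import numpy as np
import torch
import torchvision.models as models
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()          # expose the 512-d feature vector
backbone.eval()

with torch.no_grad():
    batch = torch.rand(16, 3, 224, 224)    # stand-in segmented MRI patches
    feats = backbone(batch).numpy()        # (16, 512) CNN features

labels = np.random.randint(0, 2, 16)       # benign vs malignant
for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=3)):
    print(type(clf).__name__, clf.fit(feats, labels).score(feats, labels))
```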
A deep learning reconstruction framework for low dose phase contrast computed tomography via inter-contrast enhancement
Zhang, Changsheng
Zhu, Guogang
Fu, Jian
Zhao, Gang
Measurement2023Journal Article, cited 0 times
Website
NSCLC-Radiomics-Genomics
Computed Tomography (CT)
Multi-contrast
Convolutional Neural Network (CNN)
Image Enhancement/methods
Phase contrast computed tomography (PCCT) offers excellent imaging contrast for soft tissue while generating absorption, phase and dark-field contrast tomographic images. It has shown great potential in clinical diagnosis. However, existing PCCT methods require high radiation doses. Reducing tube current is a universal low-dose approach, but it introduces quantum noise into the projections. In this paper, we report a deep learning (DL) framework for low-dose PCCT based on inter-contrast enhancement. It utilizes the multi-contrast nature of PCCT and the varying effects of noise on each contrast: the missing structure in the contrasts that are more affected by noise can be recovered from those that are less affected. Taking grating-based PCCT as an example, the proposed framework is validated with experiments, and a dramatic quality improvement of multi-contrast tomographic images is obtained. This study shows the potential of DL techniques in the field of low-dose PCCT.
Clinical target volume segmentation based on gross tumor volume using deep learning for head and neck cancer treatment
Kihara, S.
Koike, Y.
Takegawa, H.
Anetai, Y.
Nakamura, S.
Tanigawa, N.
Koizumi, M.
Med Dosim2022Journal Article, cited 0 times
Website
OPC-Radiomics
Clinical target volume
Deep learning
Head and neck cancer
Radiotherapy
Segmentation
Accurate clinical target volume (CTV) delineation is important for head and neck intensity-modulated radiation therapy. However, delineation is time-consuming and susceptible to interobserver variability (IOV). Based on a manual contouring process commonly used in clinical practice, we developed a deep learning (DL)-based method to delineate a low-risk CTV with computed tomography (CT) and gross tumor volume (GTV) input and compared it with a CT-only input. A total of 310 patients with oropharynx cancer were randomly divided into the training set (250) and test set (60). The low-risk CTV and primary GTV contours were used to generate label data for the input and ground truth. A 3D U-Net with a two-channel input of CT and GTV (U-NetGTV) was proposed and its performance was compared with a U-Net with only CT input (U-NetCT). The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were evaluated. The time required to predict the CTV was 0.86 s per patient. U-NetGTV showed a significantly higher mean DSC value than U-NetCT (0.80 +/- 0.03 and 0.76 +/- 0.05) and a significantly lower mean AHD value (3.0 +/- 0.5 mm vs 3.5 +/- 0.7 mm). Compared to the existing DL method with only CT input, the proposed GTV-based segmentation using DL showed a more precise low-risk CTV segmentation for head and neck cancer. Our findings suggest that the proposed method could reduce the contouring time of a low-risk CTV, allowing the standardization of target delineations for head and neck cancer.
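The two-channel input construction is the key architectural detail; a minimal sketch with illustrative shapes follows.

```python
# Two-channel input for a GTV-conditioned 3D segmentation network: the CT
# volume and a binary GTV mask are stacked along the channel axis.
import torch
import torch.nn as nn

ct = torch.rand(1, 1, 64, 128, 128)                      # (batch, ch, D, H, W)
gtv = (torch.rand(1, 1, 64, 128, 128) > 0.95).float()    # binary GTV mask
x = torch.cat([ct, gtv], dim=1)                          # two-channel input

first_conv = nn.Conv3d(in_channels=2, out_channels=16,
                       kernel_size=3, padding=1)         # U-Net entry layer
print(first_conv(x).shape)                               # (1, 16, 64, 128, 128)
```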
Quantitative assessment of colorectal morphology: Implications for robotic colonoscopy
Alazmani, A
Hood, A
Jayne, D
Neville, A
Culmer, P
Medical Engineering & Physics2016Journal Article, cited 11 times
Website
CT COLONOGRAPHY
Segmentation
This paper presents a method of characterizing the distribution of colorectal morphometrics. It uses three-dimensional region growing and topological thinning algorithms to determine and visualize the luminal volume and centreline of the colon, respectively. Total and segmental lengths, diameters, volumes, and tortuosity angles were then quantified. The effects of body orientations on these parameters were also examined. Variations in total length were predominately due to differences in the transverse colon and sigmoid segments, and did not significantly differ between body orientations. The diameter of the proximal colon was significantly larger than the distal colon, with the largest value at the ascending and cecum segments. The volume of the transverse colon was significantly the largest, while those of the descending colon and rectum were the smallest. The prone position showed a higher frequency of high angles and was consequently found to be more tortuous than the supine position. This study yielded a method for complete segmental measurements of healthy colorectal anatomy and its tortuosity. The transverse and sigmoid colons were the major determinants of tortuosity and morphometric differences between body orientations. Quantitative understanding of these parameters may potentially help to facilitate colonoscopy techniques, accuracy of polyp spatial distribution detection, and design of novel endoscopic devices.
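A sketch of the centreline-based morphometrics follows: topological thinning gives a skeleton, whose ordered points yield length and tortuosity angles. Ordering skeleton voxels into a path is itself nontrivial and is trivialized here by using an axis-aligned stand-in segment.

```python
# Centreline extraction by topological thinning, then length and tortuosity
# angles along the resulting polyline.
import numpy as np
from skimage.morphology import skeletonize_3d  # or skeletonize(method="lee")

def centreline_points(lumen_mask):
    skeleton = skeletonize_3d(lumen_mask)      # topological thinning
    # NOTE: real centrelines need graph-based ordering of these voxels;
    # the lexicographic order suffices only for an axis-aligned segment.
    return np.argwhere(skeleton)

def polyline_length_and_angles(points, spacing=(1.0, 1.0, 1.0)):
    pts = points * np.asarray(spacing)         # voxel -> physical units (mm)
    deltas = np.diff(pts, axis=0)
    length = np.linalg.norm(deltas, axis=1).sum()
    u, v = deltas[:-1], deltas[1:]             # consecutive direction vectors
    cos = (u * v).sum(1) / (np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    angles = np.degrees(np.arccos(np.clip(cos, -1, 1)))
    return length, angles                      # tortuosity angle distribution

lumen = np.zeros((40, 40, 40), dtype=bool)
lumen[5:35, 18:22, 18:22] = True               # stand-in straight lumen segment
print(polyline_length_and_angles(centreline_points(lumen)))
```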
C-NMC: B-lineage acute lymphoblastic leukaemia: A blood cancer dataset
Gupta, Ritu
Gehlot, Shiv
Gupta, Anubha
Medical Engineering & Physics2022Journal Article, cited 0 times
Website
C-NMC 2019
Leukemia
Computer Aided Diagnosis (CADx)
Jenner-Giemsa stain
histopathology imaging features
Classification
Development of computer-aided cancer diagnostic tools is an active research area owing to advancements in the deep-learning domain. Such technological solutions provide affordable and easily deployable diagnostic tools. Leukaemia, or blood cancer, is one of the leading cancers, causing more than 0.3 million deaths every year. To aid the development of such AI-enabled tools, we collected and curated a microscopic image dataset, namely C-NMC, of more than 15000 very high resolution cancer cell images of B-Lineage Acute Lymphoblastic Leukaemia (B-ALL). The dataset is prepared at the subject level and contains images of both healthy subjects and cancer patients. To date, this is the largest curated dataset on B-ALL cancer in the public domain. C-NMC is available at The Cancer Imaging Archive (TCIA), USA, and can be helpful to the research community worldwide for the development of B-ALL cancer diagnostic tools. This dataset was utilized in an international medical imaging challenge held at the ISBI 2019 conference in Venice, Italy. In this paper, we present a detailed description of this dataset and the challenges associated with it. We also present benchmarking results of all the methods applied to this dataset so far.
A computational analysis of a novel therapeutic approach combining an advanced medicinal therapeutic device and a fracture fixation assembly for the treatment of osteoporotic fractures: Effects of physiological loading, interface conditions, and fracture fixation materials
Mondal, Subrata
MacManus, David B
Bonatti, Amedeo Franco
De Maria, Carmelo
Dalgarno, Kenny
Chatzinikolaidou, Maria
De Acutis, Aurora
Vozzi, Giovanni
Fiorilli, Sonia
Vitale-Brovarone, Chiara
Dunne, Nicholas
Medical Engineering & Physics2023Journal Article, cited 0 times
CPTAC-SAR
The occurrence of periprosthetic femoral fractures (PFF) has increased in people with osteoporosis due to decreased bone density, poor bone quality, and stress shielding from prosthetic implants. PFF treatment in the elderly is a genuine concern for orthopaedic surgeons as no effective solution currently exists. Therefore, the goal of this study was to determine whether the design of a novel advanced medicinal therapeutic device (AMTD) manufactured from a polymeric blend in combination with a fracture fixation plate in the femur is capable of withstanding physiological loads without failure during the bone regenerative process. This was achieved by developing a finite element (FE) model of the AMTD together with a fracture fixation assembly, and a femur with an implanted femoral stem. The response of both normal and osteoporotic bone was investigated by implementing their respective material properties in the model. Physiological loading simulating the peak load during standing, walking, and stair climbing was investigated. The results showed that the fixation assembly was the prime load bearing component for this configuration of devices. Within the fixation assembly, the bone screws were found to have the highest stresses in the fixation assembly for all the loading conditions. Whereas the stresses within the AMTD were significantly below the maximum yield strength of the device's polymeric blend material. Furthermore, this study also investigated the performance of different fixation assembly materials and found Ti-6Al-4V to be the optimal material choice from those included in this study.
Patient-specific biomechanical model as whole-body CT image registration tool
Li, Mao
Miller, Karol
Joldes, Grand Roman
Doyle, Barry
Garlapati, Revanth Reddy
Kikinis, Ron
Wittek, Adam
Medical Image Analysis2015Journal Article, cited 15 times
Website
Image registration
patient-specific biomechanical model
non-linear finite element analysis
Fuzzy-c means
Hausdorff distance
Magnetic Resonance Imaging (MRI)
Computed Tomography (CT)
finite-element model
BRAIN
mechanical-properties
nonrigid registration
Whole-body computed tomography (CT) image registration is important for cancer diagnosis, therapy planning and treatment. Such registration requires accounting for large differences between source and target images caused by deformations of soft organs/tissues and articulated motion of skeletal structures. The registration algorithms relying solely on image processing methods exhibit deficiencies in accounting for such deformations and motion. We propose to predict the deformations and movements of body organs/tissues and skeletal structures for whole-body CT image registration using patient-specific non-linear biomechanical modelling. Unlike the conventional biomechanical modelling, our approach for building the biomechanical models does not require time-consuming segmentation of CT scans to divide the whole body into non-overlapping constituents with different material properties. Instead, a Fuzzy C-Means (FCM) algorithm is used for tissue classification to assign the constitutive properties automatically at integration points of the computation grid. We use only very simple segmentation of the spine when determining vertebrae displacements to define loading for biomechanical models. We demonstrate the feasibility and accuracy of our approach on CT images of seven patients suffering from cancer and aortic disease. The results confirm that accurate whole-body CT image registration can be achieved using a patient-specific non-linear biomechanical model constructed without time-consuming segmentation of the whole-body images.
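The FCM tissue-classification step can be illustrated with a small plain-NumPy fuzzy C-means on 1-D intensities; the paper's integration-point mapping and material models are out of scope here.

```python
# Minimal fuzzy C-means: voxel intensities are soft-classified into tissue
# classes whose memberships could then drive material-property assignment.
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))    # memberships, shape (n, c)
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(0)            # membership-weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))            # standard FCM update
        u /= u.sum(1, keepdims=True)              # renormalize each row
    return centers, u

hu = np.concatenate([np.random.normal(-800, 30, 500),   # lung-like voxels
                     np.random.normal(40, 15, 500),     # soft-tissue-like
                     np.random.normal(700, 80, 500)])   # bone-like
centers, memberships = fuzzy_cmeans(hu)
print(np.sort(centers))   # approximately the three tissue intensity means
```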
Segmentation of Pulmonary Nodules in Computed Tomography Using a Regression Neural Network Approach and its Application to the Lung Image Database Consortium and Image Database Resource Initiative Dataset
Messay, Temesguen
Hardie, Russell C
Tuinstra, Timothy R
Medical Image Analysis2015Journal Article, cited 55 times
Website
LIDC-IDRI
Computed Tomography (CT)
Automatic segmentation
Computer Aided Diagnosis (CADx)
Semi-automatic segmentation
Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge
Setio, A. A. A.
Traverso, A.
de Bel, T.
Berens, M. S. N.
Bogaard, C. V. D.
Cerello, P.
Chen, H.
Dou, Q.
Fantacci, M. E.
Geurts, B.
Gugten, R. V.
Heng, P. A.
Jansen, B.
de Kaste, M. M. J.
Kotov, V.
Lin, J. Y.
Manders, Jtmc
Sonora-Mengana, A.
Garcia-Naranjo, J. C.
Papavasileiou, E.
Prokop, M.
Saletta, M.
Schaefer-Prokop, C. M.
Scholten, E. T.
Scholten, L.
Snoeren, M. M.
Torres, E. L.
Vandemeulebroucke, J.
Walasek, N.
Zuidhof, G. C. A.
Ginneken, B. V.
Jacobs, C.
Med Image Anal2017Journal Article, cited 87 times
Website
LIDC-IDRI
LUNA16 Challenge
Computer Aided Detection (CADe)
Deep learning
LUNG
Automatic detection of pulmonary nodules in thoracic computed tomography (CT) scans has been an active area of research for the last two decades. However, there have only been few studies that provide a comparative performance evaluation of different systems on a common database. We have therefore set up the LUNA16 challenge, an objective evaluation framework for automatic nodule detection algorithms using the largest publicly available reference database of chest CT scans, the LIDC-IDRI data set. In LUNA16, participants develop their algorithm and upload their predictions on 888 CT scans in one of the two tracks: 1) the complete nodule detection track where a complete CAD system should be developed, or 2) the false positive reduction track where a provided set of nodule candidates should be classified. This paper describes the setup of LUNA16 and presents the results of the challenge so far. Moreover, the impact of combining individual systems on the detection performance was also investigated. It was observed that the leading solutions employed convolutional networks and used the provided set of nodule candidates. The combination of these solutions achieved an excellent sensitivity of over 95% at fewer than 1.0 false positives per scan. This highlights the potential of combining algorithms to improve the detection performance. Our observer study with four expert readers has shown that the best system detects nodules that were missed by expert readers who originally annotated the LIDC-IDRI data. We released this set of additional nodules for further development of CAD systems.
Spatial aggregation of holistically-nested convolutional neural networks for automated pancreas localization and segmentation
Roth, Holger R
Lu, Le
Lay, Nathan
Harrison, Adam P
Farag, Amal
Sohn, Andrew
Summers, Ronald M
Medical Image Analysis2018Journal Article, cited 0 times
Pancreas-CT
Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart or kidneys. To fill this gap, we present an automated system from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach-pancreas localization and pancreas segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean ± std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the art method and a preliminary version of this work that report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset.
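A sketch of the localization fusion step follows: per-view probability maps are pooled into one 3-D map, thresholded, and reduced to a recall-oriented bounding box. The pooling choice here (voxel-wise maximum) is an assumption, not necessarily the paper's.

```python
# Fuse per-view probability maps into a 3-D bounding box for localization.
import numpy as np

def fuse_and_bound(p_axial, p_sagittal, p_coronal, thresh=0.5):
    fused = np.maximum.reduce([p_axial, p_sagittal, p_coronal])  # max pooling
    coords = np.argwhere(fused > thresh)
    if coords.size == 0:
        return None
    lo, hi = coords.min(0), coords.max(0) + 1
    return tuple(slice(a, b) for a, b in zip(lo, hi))   # 3-D bounding box

maps = [np.random.rand(64, 64, 64) for _ in range(3)]   # stand-in per-view maps
print(fuse_and_bound(*maps))
```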
The deformable most-likely-point paradigm
Sinha, A.
Billings, S. D.
Reiter, A.
Liu, X.
Ishii, M.
Hager, G. D.
Taylor, R. H.
Med Image Anal2019Journal Article, cited 1 times
Website
QIN-HEADNECK
Head-Neck Cetuximab
Image registration
Models
Nasal sinus
PELVIS
Endoscopy
In this paper, we present three deformable registration algorithms designed within a paradigm that uses 3D statistical shape models to accomplish two tasks simultaneously: 1) register point features from previously unseen data to a statistically derived shape (e.g., mean shape), and 2) deform the statistically derived shape to estimate the shape represented by the point features. This paradigm, called the deformable most-likely-point paradigm, is motivated by the idea that generative shape models built from available data can be used to estimate previously unseen data. We developed three deformable registration algorithms within this paradigm using statistical shape models built from reliably segmented objects with correspondences. Results from several experiments show that our algorithms produce accurate registrations and reconstructions in a variety of applications with errors up to CT resolution on medical datasets. Our code is available at https://github.com/AyushiSinha/cisstICP.
Semi-supervised Adversarial Model for Benign-Malignant Lung Nodule Classification on Chest CT
Xie, Yutong
Zhang, Jianpeng
Xia, Yong
Medical Image Analysis2019Journal Article, cited 0 times
Classification of benign-malignant lung nodules on chest CT is the most critical step in the early detection of lung cancer and prolongation of patient survival. Despite their success in image classification, deep convolutional neural networks (DCNNs) always require a large number of labeled training data, which are not available for most medical image analysis applications due to the work required in image acquisition and particularly image annotation. In this paper, we propose a semi-supervised adversarial classification (SSAC) model that can be trained by using both labeled and unlabeled data for benign-malignant lung nodule classification. This model consists of an adversarial autoencoder-based unsupervised reconstruction network R, a supervised classification network C, and learnable transition layers that enable the adaption of the image representation ability learned by R to C. The SSAC model has been extended to multi-view knowledge-based collaborative learning (MK-SSAC), aiming to employ three SSACs to characterize each nodule's overall appearance and its heterogeneity in shape and texture, respectively, and to perform such characterization on nine planar views. The MK-SSAC model has been evaluated on the benchmark LIDC-IDRI dataset and achieves an accuracy of 92.53% and an AUC of 95.81%, which are superior to the performance of other lung nodule classification and semi-supervised learning approaches.
Semi-supervised mp-MRI data synthesis with StitchLayer and auxiliary distance maximization
Wang, Zhiwei
Lin, Yi
Cheng, Kwang-Ting Tim
Yang, Xin
Medical Image Analysis2020Journal Article, cited 0 times
PROSTATEx
machine learning
labeling
SDCT-AuxNet^θ: DCT augmented stain deconvolutional CNN with auxiliary classifier for cancer diagnosis
Gehlot, Shiv
Gupta, Anubha
Gupta, Ritu
Medical Image Analysis2020Journal Article, cited 6 times
Website
C_NMC_2019 Dataset: ALL Challenge dataset of ISBI 2019
Convolutional Neural Network (CNN)
Deep Learning
Pathomics
Classification
Acute lymphoblastic leukemia (ALL) is a pervasive pediatric white blood cell cancer across the globe. With the popularity of convolutional neural networks (CNNs), computer-aided diagnosis of cancer has attracted considerable attention. Such tools are easily deployable and cost-effective, and hence can enable extensive coverage of cancer diagnostic facilities. However, the development of such a tool for ALL cancer has so far been challenging due to the non-availability of a large training dataset, and the visual similarity between malignant and normal cells adds to the complexity of the problem. This paper discusses the recent release of a large dataset and presents a novel deep learning architecture for the classification of cell images of ALL cancer. The proposed architecture, namely SDCT-AuxNet^θ, is a two-module framework that utilizes a compact CNN as the main classifier in one module and a kernel SVM as the auxiliary classifier in the other. While the CNN classifier uses features obtained through bilinear pooling, the auxiliary classifier uses spectral-averaged features. Further, this CNN is trained on stain-deconvolved quantity images in the optical density domain instead of conventional RGB images. A novel test strategy is proposed that exploits both classifiers for decision making, using the confidence scores of their predicted class labels. Elaborate experiments have been carried out on our recently released public dataset of 15114 images of ALL cancer and healthy cells to establish the validity of the proposed methodology, which is also robust to subject-level variability. A weighted F1 score of 94.8% is obtained, the best so far on this challenging dataset.
Deep neural network models for computational histopathology: A survey
Srinidhi, Chetan L
Ciga, Ozan
Martel, Anne L
Medical Image Analysis2020Journal Article, cited 0 times
Post-NAT-BRCA
Histopathological images contain rich phenotypic information that can be used to monitor underlying mechanisms contributing to disease progression and patient survival outcomes. Recently, deep learning has become the mainstream methodological choice for analyzing and interpreting histology images. In this paper, we present a comprehensive review of state-of-the-art deep learning approaches that have been used in the context of histopathological image analysis. From the survey of over 130 papers, we review the field's progress based on the methodological aspect of different machine learning strategies such as supervised, weakly supervised, unsupervised, transfer learning and various other sub-variants of these methods. We also provide an overview of deep learning based survival models that are applicable for disease-specific prognosis tasks. Finally, we summarize several existing open datasets and highlight critical challenges and limitations with current deep learning approaches, along with possible avenues for future research.
A novel approach to 2D/3D registration of X-ray images using Grangeat's relation
Frysch, R.
Pfeiffer, T.
Rose, G.
Med Image Anal2020Journal Article, cited 0 times
CPTAC-GBM
Image registration
Fast and accurate 2D/3D registration plays an important role in many applications, ranging from scientific and engineering domains all the way to medical care. Today's predominant methods are based on computationally expensive approaches, such as virtual forward or back projections, that limit the real-time applicability of the routines. Here, we present a novel concept that makes use of Grangeat's relation to intertwine information from the 3D volume and the 2D projection space in a way that allows pre-computation of all time-intensive steps. The main effort within actual registration tasks is reduced to simple resampling of the pre-calculated values, which can be executed rapidly on modern GPU hardware. We analyze the applicability of the proposed method on simulated data under various conditions and evaluate the findings on real data from a C-arm CT scanner. Our results show high registration quality in both simulated as well as real data scenarios and demonstrate a reduction in computation time for the crucial computation step by a factor of six to eight when compared to state-of-the-art routines. With minor trade-offs in accuracy, this speed-up can even be increased up to a factor of 100 in particular settings. To our knowledge, this is the first application of Grangeat's relation to the topic of 2D/3D registration. Due to its high computational efficiency and broad range of potential applications, we believe it constitutes a highly relevant approach for various problems dealing with cone beam transmission images.
Automated size-specific dose estimates using deep learning image processing
Juszczyk, Jan
Badura, Pawel
Czajkowska, Joanna
Wijata, Agata
Andrzejewski, Jacek
Bozek, Pawel
Smolinski, Michal
Biesok, Marta
Sage, Agata
Rudzki, Marcin
Wieclawek, Wojciech
Medical Image Analysis2020Journal Article, cited 0 times
Head-Neck Cetuximab
An automated vendor-independent system for dose monitoring in computed tomography (CT) medical examinations involving ionizing radiation is presented in this paper. The system provides precise size-specific dose estimates (SSDE) following the American Association of Physicists in Medicine regulations. Our dose management can operate on incomplete DICOM header metadata by retrieving the necessary information from the dose report image using optical character recognition. For the determination of the patient's effective diameter and water equivalent diameter, a convolutional neural network is employed for the semantic segmentation of the body area in axial CT slices. Validation experiments for the assessment of the SSDE determination and subsequent stages of our methodology involved a total of 335 CT series (60 352 images) from both public databases and our clinical data. We obtained a mean body area segmentation accuracy of 0.9955 and a Jaccard index of 0.9752, yielding a slice-wise mean absolute error below 2 mm for the effective diameter and about 1 mm for the water equivalent diameter, both below 1%. Three modes of the SSDE determination approach were investigated and compared to the results provided by the commercial system GE DoseWatch in three different body region categories: head, chest, and abdomen. Statistical analysis highlighted some significant differences, especially in the head category.
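The slice-wise size metrics behind SSDE follow the AAPM Report 220 definitions referenced above; a minimal sketch is given below, where `body_mask` stands in for the CNN body segmentation of one axial slice and the pixel spacing and toy phantom are assumptions of the sketch.

import numpy as np

def effective_diameter(body_mask, pixel_area_mm2):
    area = body_mask.sum() * pixel_area_mm2            # body cross-section, mm^2
    return 2.0 * np.sqrt(area / np.pi)                 # diameter of equal-area circle

def water_equivalent_diameter(hu_slice, body_mask, pixel_area_mm2):
    area = body_mask.sum() * pixel_area_mm2
    mean_hu = hu_slice[body_mask].mean()
    # Water-equivalent area scales the body area by mean attenuation (AAPM 220).
    a_w = (mean_hu / 1000.0 + 1.0) * area
    return 2.0 * np.sqrt(a_w / np.pi)

# Toy example: a 256x256 slice with a circular water-like "body" (0 HU inside).
yy, xx = np.mgrid[:256, :256]
mask = (yy - 128) ** 2 + (xx - 128) ** 2 < 100 ** 2
hu = np.where(mask, 0.0, -1000.0)
print(effective_diameter(mask, 1.0), water_equivalent_diameter(hu, mask, 1.0))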
An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization
Shen, Y.
Wu, N.
Phang, J.
Park, J.
Liu, K.
Tyagi, S.
Heacock, L.
Kim, S. G.
Moy, L.
Cho, K.
Geras, K. J.
Med Image Anal2021Journal Article, cited 0 times
CBIS-DDSM
Breast/diagnostic imaging
*Breast Neoplasms/diagnostic imaging
Early Detection of Cancer
Female
Humans
Mammography
Neural Networks, Computer
Breast cancer screening
Deep learning
High-resolution image classification
Weakly supervised localization
Medical images differ from natural images in significantly higher resolutions and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images might not be applicable to medical image analysis. In this work, we propose a novel neural network model to address these unique properties of medical images. This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions. It then applies another higher-capacity network to collect details from chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, our model outperforms (AUC = 0.93) ResNet-34 and Faster R-CNN in classifying breasts with malignant findings. On the CBIS-DDSM dataset, our model achieves performance (AUC = 0.858) on par with state-of-the-art approaches. Compared to ResNet-34, our model is 4.1x faster for inference while using 78.4% less GPU memory. Furthermore, we demonstrate, in a reader study, that our model surpasses radiologist-level AUC by a margin of 0.11.
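The global-then-local design described in this abstract can be sketched schematically in PyTorch: a low-capacity network scans the whole image, its saliency map selects the K most informative patches, a higher-capacity network encodes them, and a fusion layer combines both streams. The backbones, K, patch size, and shapes below are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class GlobalLocalNet(nn.Module):
    def __init__(self, k_patches=3, patch=64, n_classes=2):
        super().__init__()
        self.k, self.patch = k_patches, patch
        self.global_net = nn.Sequential(nn.Conv2d(1, 8, 3, 2, 1), nn.ReLU(),
                                        nn.Conv2d(8, 16, 3, 2, 1), nn.ReLU())
        self.saliency = nn.Conv2d(16, 1, 1)            # 1-channel saliency map
        self.local_net = nn.Sequential(nn.Conv2d(1, 32, 3, 2, 1), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fuse = nn.Linear(16 + 32, n_classes)

    def forward(self, x):                              # x: (B, 1, H, W)
        g = self.global_net(x)
        sal = self.saliency(g)                         # coarse saliency (B, 1, h, w)
        g_feat = g.mean(dim=(2, 3))                    # global descriptor
        B, _, h, w = sal.shape
        idx = sal.view(B, -1).topk(self.k, dim=1).indices
        scale_y, scale_x = x.shape[2] / h, x.shape[3] / w
        local_feats = []
        for b in range(B):
            feats = []
            for i in idx[b]:                           # crop around top-K locations
                cy = int((i // w).item() * scale_y)
                cx = int((i % w).item() * scale_x)
                y0 = max(0, min(cy - self.patch // 2, x.shape[2] - self.patch))
                x0 = max(0, min(cx - self.patch // 2, x.shape[3] - self.patch))
                crop = x[b:b + 1, :, y0:y0 + self.patch, x0:x0 + self.patch]
                feats.append(self.local_net(crop))
            local_feats.append(torch.stack(feats).mean(0))
        l_feat = torch.cat(local_feats, dim=0)         # (B, 32) local descriptor
        return self.fuse(torch.cat([g_feat, l_feat], dim=1)), sal

logits, saliency = GlobalLocalNet()(torch.randn(2, 1, 256, 256))
print(logits.shape, saliency.shape)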
3D Registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction
Sood, R. R.
Shao, W.
Kunder, C.
Teslovich, N. C.
Wang, J. B.
Soerensen, S. J. C.
Madhuripan, N.
Jawahar, A.
Brooks, J. D.
Ghanouni, P.
Fan, R. E.
Sonn, G. A.
Rusu, M.
Med Image Anal2021Journal Article, cited 0 times
Website
PROSTATEx
PROSTATE-DIAGNOSIS
Generative Adversarial Network (GAN)
Multi-modal imaging
Magnetic Resonance Imaging (MRI)
H&E-stained slides
The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence. When compared to interpolation methods, our super-resolution reconstruction resulted in the highest PSNR relative to clinical 3D MRI (32.15 dB vs 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes that were reconstructed using nearest neighbor, linear, or BSpline interpolations. The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.
VR-Caps: A Virtual Environment for Capsule Endoscopy
İncetan, Kağan
Celik, Ibrahim Omer
Obeid, Abdulhamid
Gokceler, Guliz Irem
Ozyoruk, Kutsev Bengisu
Almalioglu, Yasin
Chen, Richard J
Mahmood, Faisal
Gilbert, Hunter
Durr, Nicholas J
Turan, Mehmet
Medical Image Analysis2021Journal Article, cited 0 times
CT COLONOGRAPHY
Current capsule endoscopes and next-generation robotic capsules for diagnosis and treatment of gastrointestinal diseases are complex cyber-physical platforms that must orchestrate complex software and hardware functions. The desired tasks for these systems include visual localization, depth estimation, 3D mapping, disease detection and segmentation, automated navigation, active control, path realization, and optional therapeutic modules such as targeted drug delivery and biopsy sampling. Data-driven algorithms promise to enable many advanced functionalities for capsule endoscopes, but real-world data is challenging to obtain. Physically realistic simulations providing synthetic data have emerged as a solution to the development of data-driven algorithms. In this work, we present a comprehensive simulation platform for capsule endoscopy operations and introduce VR-Caps, a virtual active capsule environment that simulates a range of normal and abnormal tissue conditions (e.g., inflated, dry, wet) and varied organ types, capsule endoscope designs (e.g., mono, stereo, dual, and 360° camera), and the type, number, strength, and placement of internal and external magnetic sources that enable active locomotion. VR-Caps makes it possible to either independently or jointly develop, optimize, and test medical imaging and analysis software for current and next-generation endoscopic capsule systems. To validate this approach, we train state-of-the-art deep neural networks to accomplish various medical image analysis tasks using simulated data from VR-Caps and evaluate the performance of these models on real medical data. Results demonstrate the usefulness and effectiveness of the proposed virtual platform in developing algorithms that quantify fractional coverage, camera trajectory, 3D map reconstruction, and disease classification. All of the code, pre-trained weights, and created 3D organ models of the virtual environment, with detailed instructions on how to set up and use the environment, are made publicly available at https://github.com/CapsuleEndoscope/VirtualCapsuleEndoscopy and a video demonstration can be seen in the supplementary videos (Video-I).
CT-Based COVID-19 triage: Deep multitask learning improves joint identification and severity quantification
Goncharov, M.
Pisov, M.
Shevtsov, A.
Shirokikh, B.
Kurmukov, A.
Blokhin, I.
Chernina, V.
Solovev, A.
Gombolevskiy, V.
Morozov, S.
Belyaev, M.
Med Image Anal2021Journal Article, cited 83 times
Website
NSCLC-Radiomics
Radiomic features
Training
*COVID-19/diagnostic imaging
*Deep Learning
Humans
Pandemics
SARS-CoV-2
Tomography, X-Ray Computed
*Triage
Covid-19
LUNA16 Challenge
Computed Tomography (CT)
Convolutional Neural Network (CNN)
LUNG
ResNet50
The current COVID-19 pandemic overloads healthcare systems, including radiology departments. Though several deep learning approaches have been developed to assist in CT analysis, study triage has not previously been addressed directly as a computer science problem. We describe two basic setups: identification of COVID-19, to prioritize studies of potentially infected patients and isolate them as early as possible; and severity quantification, to highlight patients with severe COVID-19 so they can be directed to a hospital or provided emergency medical care. We formalize these tasks as binary classification and estimation of the affected lung percentage. Though similar problems have been well-studied separately, we show that existing methods provide reasonable quality only for one of these setups. We employ a multitask approach to consolidate both triage approaches and propose a convolutional neural network that leverages all available labels within a single model. In contrast with related multitask approaches, we show the benefit of applying the classification layers to the most spatially detailed feature map at the upper part of the U-Net instead of the less detailed latent representation at the bottom. We train our model on approximately 1500 publicly available CT studies and test it on a holdout dataset of 123 chest CT studies of patients drawn from the same healthcare system, specifically 32 COVID-19 and 30 bacterial pneumonia cases, 30 cases with cancerous nodules, and 31 healthy controls. The proposed multitask model outperforms the other approaches and achieves ROC AUC scores of 0.87+/-0.01 vs. bacterial pneumonia, 0.93+/-0.01 vs. cancerous nodules, and 0.97+/-0.01 vs. healthy controls in identification of COVID-19, and achieves 0.97+/-0.01 Spearman correlation in severity quantification. We have released our code and shared the annotated lesion masks for 32 CT images of patients with COVID-19 from the test dataset.
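The key architectural point, attaching the classification head to the most detailed decoder feature map rather than the bottleneck, can be sketched in a toy PyTorch U-Net; channel sizes and the severity read-out below are illustrative assumptions, not the authors' exact model.

import torch
import torch.nn as nn

class MultitaskUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, 1, 1), nn.ReLU())
        self.down = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, 1, 1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, 2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, 1, 1), nn.ReLU())  # after skip concat
        self.seg_head = nn.Conv2d(16, 1, 1)             # per-pixel lesion logits
        # Identification head attached to the top (most detailed) decoder map:
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(16, 1))

    def forward(self, x):
        e = self.enc(x)
        b = self.down(e)
        d = self.dec(torch.cat([self.up(b), e], dim=1))
        seg = self.seg_head(d)
        cls = self.cls_head(d)
        # Severity proxy: fraction of voxels predicted as affected lung.
        severity = torch.sigmoid(seg).mean(dim=(1, 2, 3))
        return seg, cls, severity

seg, cls, sev = MultitaskUNet()(torch.randn(2, 1, 128, 128))
print(seg.shape, cls.shape, sev.shape)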
Weakly-supervised progressive denoising with unpaired CT images
Kim, Byeongjoon
Shim, Hyunjung
Baek, Jongduk
Medical Image Analysis2021Journal Article, cited 0 times
LDCT-and-Projection-data
Although low-dose CT imaging has attracted great interest due to its reduced radiation risk to patients, it suffers from severe and complex noise. Recent fully-supervised methods have shown impressive performance on the CT denoising task. However, they require a huge amount of paired normal-dose and low-dose CT images, which is generally unavailable in real clinical practice. To address this problem, we propose a weakly-supervised denoising framework that generates paired original and noisier CT images from unpaired CT images using a physics-based noise model. Our denoising framework also includes a progressive denoising module that bypasses the challenges of mapping from low-dose to normal-dose CT images directly by progressively compensating the small noise gap. To quantitatively evaluate diagnostic image quality, we present the noise power spectrum and signal detection accuracy, which are well correlated with visual inspection. The experimental results demonstrate that our method achieves remarkable performance, even superior to fully-supervised CT denoising with respect to signal detectability. Moreover, our framework increases flexibility in data collection, allowing us to utilize any unpaired data at any dose level.
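The pair-generation idea above, creating an even noisier version of an unpaired low-dose image with a physics-inspired noise model, can be sketched as follows. This is a strong simplification: treating each pixel as a line integral and using a plain Poisson model are assumptions of the sketch, not the paper's full noise model.

import numpy as np

def make_noisier(ct_hu, dose_fraction=0.5, i0=1e5, rng=None):
    """Inject extra quantum noise into a CT slice given in Hounsfield units."""
    rng = rng or np.random.default_rng()
    mu = (ct_hu / 1000.0 + 1.0) * 0.0206           # HU -> attenuation (water ~0.0206/mm)
    intensity = i0 * dose_fraction * np.exp(-np.clip(mu, 0, None))
    noisy = rng.poisson(intensity).astype(np.float64)
    noisy = np.maximum(noisy, 1.0)                  # avoid log(0)
    mu_noisy = -np.log(noisy / (i0 * dose_fraction))
    return (mu_noisy / 0.0206 - 1.0) * 1000.0       # back to HU

low_dose = np.zeros((64, 64))                       # toy water phantom (0 HU)
noisier = make_noisier(low_dose, dose_fraction=0.25)
print(noisier.std())                                # extra noise level

Training then uses (noisier, original) pairs so the network learns to close a small noise gap, which is applied repeatedly at test time in the progressive module.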
A CNN-based unified framework utilizing projection loss in unison with label noise handling for multiple Myeloma cancer diagnosis
Gehlot, S.
Gupta, A.
Gupta, R.
Med Image Anal2021Journal Article, cited 0 times
Website
C-NMC 2019 ALL Challenge dataset of ISBI 2019
Histopathology imaging features
Classification
Computer Aided Diagnosis (CADx)
Multiple Myeloma (MM) is a malignancy of plasma cells. Similar to other forms of cancer, it demands prompt diagnosis to reduce the risk of mortality. The conventional diagnostic tools are resource-intensive, and hence these solutions are not easily scalable for extending their reach to the masses. Advancements in deep learning have led to rapid developments in affordable, resource-optimized, easily deployable computer-assisted solutions. This work proposes a unified framework for MM diagnosis using microscopic blood cell imaging data that addresses the key challenges of the inter-class visual similarity of healthy versus cancer cells and the label noise of the dataset. To extract class-distinctive features, we propose a projection loss that maximizes the projection of a sample's activation on the respective class vector while imposing orthogonality constraints on the class vectors. This projection loss is used along with the cross-entropy loss to design a dual-branch architecture that helps achieve improved performance and provides scope for targeting the label noise problem. Based on this architecture, two methodologies have been proposed to correct the noisy labels. A coupling classifier has also been proposed to resolve conflicts in the dual-branch architecture's predictions. We have utilized a large dataset of 72 subjects (26 healthy and 46 MM cancer) containing a total of 74996 images (including 34555 training cell images and 40441 test cell images). This is the most extensive dataset on Multiple Myeloma ever reported in the literature. An ablation study has also been carried out. The proposed architecture performs best, with a balanced accuracy of 94.17% on binary cell classification of healthy versus cancer, in a comparison with ten state-of-the-art architectures. Extensive experiments on two additional publicly available datasets of two different modalities have also been used to analyze the label noise handling capability of the proposed methodology. The code will be available under https://github.com/shivgahlout/CAD-MM.
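A minimal sketch of the projection loss described above: each class has a learnable class vector, the loss rewards a large projection of the normalized feature on its own class vector, and an orthogonality penalty keeps class vectors decorrelated. The weights lambda1/lambda2 and the logit definition are illustrative assumptions.

import torch
import torch.nn.functional as F

def projection_loss(features, labels, class_vectors, lambda1=1.0, lambda2=0.1):
    """features: (B, D); labels: (B,); class_vectors: (C, D), learnable."""
    w = F.normalize(class_vectors, dim=1)           # unit-norm class vectors
    f = F.normalize(features, dim=1)
    proj = (f * w[labels]).sum(dim=1)               # cosine projection on own class
    proj_term = (1.0 - proj).mean()                 # encourage large projection
    gram = w @ w.t()
    ortho_term = (gram - torch.eye(w.size(0))).pow(2).sum()   # decorrelate classes
    logits = features @ w.t()
    return F.cross_entropy(logits, labels) + lambda1 * proj_term + lambda2 * ortho_term

feats = torch.randn(8, 32, requires_grad=True)
cls_vecs = torch.randn(2, 32, requires_grad=True)
loss = projection_loss(feats, torch.randint(0, 2, (8,)), cls_vecs)
loss.backward()
print(loss.item())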
Computer-aided diagnosis of prostate cancer using multiparametric MRI and clinical features: A patient-level classification framework
Mehta, P.
Antonelli, M.
Ahmed, H. U.
Emberton, M.
Punwani, S.
Ourselin, S.
Med Image Anal2021Journal Article, cited 1 times
Website
Algorithm Development
PROSTATEx
PI-RADS
Classification
T2-weighted
Image Registration
Radiogenomics
Radiomics
Magnetic Resonance Imaging (MRI)
Computer Aided Diagnosis (CADx)
Convolutional Neural Network (CNN)
Support Vector Machine (SVM)
Prostate Imaging Compared to Transperineal Ultrasound-guided biopsy for significant prostate cancer Risk Evaluation (PICTURE) study
Computer-aided diagnosis (CAD) of prostate cancer (PCa) using multiparametric magnetic resonance imaging (mpMRI) is actively being investigated as a means to provide clinical decision support to radiologists. Typically, these systems are trained using lesion annotations. However, lesion annotations are expensive to obtain and inadequate for characterizing certain tumor types, e.g., diffuse tumors and MRI-invisible tumors. In this work, we introduce a novel patient-level classification framework, denoted PCF, that is trained using patient-level labels only. In PCF, features are extracted from three-dimensional mpMRI and derived parameter maps using convolutional neural networks and, subsequently, combined with clinical features by a multi-classifier support vector machine scheme. The output of PCF is a probability value that indicates whether a patient is harboring clinically significant PCa (Gleason score ≥3+4) or not. PCF achieved mean areas under the receiver operating characteristic curve of 0.79 and 0.86 on the PICTURE and PROSTATEx datasets respectively, using five-fold cross-validation. Clinical evaluation over a temporally separated PICTURE dataset cohort demonstrated comparable sensitivity and specificity to an experienced radiologist. We envision PCF finding most utility as a second reader during routine diagnosis or as a triage tool to identify low-risk patients who do not require a clinical read.
Incorporating the Hybrid Deformable Model for Improving the Performance of Abdominal CT Segmentation via Multi-Scale Feature Fusion Network
Liang, Xiaokun
Li, Na
Zhang, Zhicheng
Xiong, Jing
Zhou, Shoujun
Xie, Yaoqin
Medical Image Analysis2021Journal Article, cited 0 times
Website
Pancreas-CT
Segmentation
U-net
Automated multi-organ abdominal Computed Tomography (CT) image segmentation can assist treatment planning and diagnosis and improve the efficiency of many clinical workflows. 3-D Convolutional Neural Networks (CNNs) recently attained state-of-the-art accuracy, but typically rely on supervised training with large amounts of manually annotated data. Many methods use a data augmentation strategy with rigid or affine spatial transformations to alleviate over-fitting and improve the network's robustness. However, rigid or affine spatial transformations fail to capture the complex voxel-based deformations in the abdomen, which is filled with soft organs. To tackle this issue, we developed a novel Hybrid Deformable Model (HDM), which combines inter- and intra-patient deformations for more effective data augmentation. The inter-patient deformations were extracted from learning-based deformable registration between different patients, while the intra-patient deformations were formed using random 3-D Thin-Plate-Spline (TPS) transformations. Incorporating the HDM enabled the network to capture many of the subtle deformations of abdominal organs. To find a better solution and achieve faster convergence in network training, we fused the pre-trained multi-scale features into a 3-D attention U-Net. We directly compared the segmentation accuracy of the proposed method to previous techniques on several centers' datasets via cross-validation. The proposed method achieves an average Dice Similarity Coefficient (DSC) of 0.852, outperforming other state-of-the-art multi-organ abdominal CT segmentation results.
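The intra-patient deformation component can be illustrated with a compact stand-in: a random, smoothly varying 3-D displacement field warps a CT volume for augmentation. Using Gaussian-smoothed noise rather than an explicit thin-plate spline, and the alpha/sigma values, are simplifying assumptions of this sketch.

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_deform(volume, alpha=8.0, sigma=6.0, rng=None):
    rng = rng or np.random.default_rng()
    shape = volume.shape
    # One smooth random displacement field per axis (voxels).
    disp = [gaussian_filter(rng.standard_normal(shape), sigma) * alpha
            for _ in range(3)]
    grid = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, disp)]
    return map_coordinates(volume, coords, order=1, mode="nearest")

vol = np.random.rand(32, 64, 64)                    # toy CT volume
print(random_deform(vol).shape)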
Deep cross-view co-regularized representation learning for glioma subtype identification
Ning, Zhenyuan
Tu, Chao
Di, Xiaohui
Feng, Qianjin
Zhang, Yu
Medical Image Analysis2021Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
Deep learning
BRAIN
Magnetic Resonance Imaging (MRI)
Classification
The new subtypes of diffuse gliomas are recognized by the World Health Organization (WHO) on the basis of genotypes, e.g., isocitrate dehydrogenase and chromosome arms 1p/19q, in addition to the histologic phenotype. Glioma subtype identification can provide valid guidance for both risk-benefit assessment and clinical decision-making. Feature representations of gliomas in magnetic resonance imaging (MRI) are widely used to reveal the underlying subtype status. However, since gliomas are highly heterogeneous tumors with quite variable imaging phenotypes, learning discriminative feature representations in MRI for gliomas remains challenging. In this paper, we propose a deep cross-view co-regularized representation learning framework for glioma subtype identification, in which view representation learning and multiple constraints are integrated into a unified paradigm. Specifically, we first learn latent view-specific representations based on cross-view images generated from MRI via a bi-directional mapping connecting the original imaging space and the latent space, and employ a view-correlated regularizer and an output-consistent regularizer in the latent space to explore view correlation and derive view consistency, respectively. We further learn view-sharable representations, which can capture complementary information from multiple views, by projecting the view-specific representations into a holistically shared space and enhancing them via an adversarial learning strategy. Finally, the view-specific and view-sharable representations are incorporated for identifying glioma subtype. Experimental results on multi-site datasets demonstrate that the proposed method outperforms several state-of-the-art methods in detecting glioma subtype status.
CycleGAN denoising of extreme low-dose cardiac CT using wavelet-assisted noise disentanglement
Gu, J.
Yang, T. S.
Ye, J. C.
Yang, D. H.
Med Image Anal2021Journal Article, cited 1 times
Website
LDCT-and-Projection-data
Vasculature
Wavelet
cycleGAN
Deep Learning
Image denoising
Adversarial training
Coronary CT angiography
Cycle consistency
Low-dose CT
Unsupervised learning
Wavelet transform
In electrocardiography (ECG) gated cardiac CT angiography (CCTA), multiple images covering the entire cardiac cycle are taken continuously, so reduction of the accumulated radiation dose is an important issue for patient safety. Although ECG-gated dose modulation (so-called ECG pulsing) is used to acquire many phases of CT images at a low dose, the reduction of the radiation dose introduces noise into the image reconstruction. To address this, we developed a high-performance unsupervised deep learning method using noise disentanglement that can effectively learn noise patterns even from extreme low-dose CT images. For noise disentanglement, we use a wavelet transform to extract the high-frequency signals that contain the most noise. Since matched low-dose and high-dose cardiac CT data are impossible to obtain in practice, our neural network was trained in an unsupervised manner using cycleGAN on the high-frequency signals extracted from the low-dose and unpaired high-dose CT images. Once the network is trained, denoised images are obtained by subtracting the estimated noise components from the input images. Image quality evaluation of the denoised images from only 4% dose CT images was performed by experienced radiologists for several anatomical structures. Visual grading analysis was conducted according to sharpness level, noise level, and structural visibility, and the signal-to-noise ratio was calculated. The evaluation results showed that the quality of the images produced by the proposed method is much improved compared to the low-dose CT images and to the baseline cycleGAN results. The proposed noise-disentangled cycleGAN with wavelet transform effectively removed noise from extreme low-dose CT images compared to the existing baseline algorithms. It can be an important denoising platform for low-dose CT.
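The wavelet-assisted disentanglement step can be sketched as follows: the image is split into low- and high-frequency subbands, a denoiser operates only on the noise-dominated high-frequency part, and the result is recombined. The paper's denoiser is a cycleGAN; here an arbitrary callable stands in, and the wavelet and decomposition level are assumptions.

import numpy as np
import pywt

def wavelet_denoise(img, denoise_high, wavelet="db3", level=2):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    low, highs = coeffs[0], coeffs[1:]
    # Apply the learned denoiser to each high-frequency detail band only.
    highs = [tuple(denoise_high(band) for band in detail) for detail in highs]
    return pywt.waverec2([low] + highs, wavelet)

# Toy stand-in denoiser: soft-threshold the detail coefficients.
soft = lambda b: np.sign(b) * np.maximum(np.abs(b) - 0.1, 0.0)
noisy = np.random.randn(128, 128) * 0.2
print(wavelet_denoise(noisy, soft).shape)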
Decomposing normal and abnormal features of medical images for content-based image retrieval of glioma imaging
Kobayashi, K.
Hataya, R.
Kurose, Y.
Miyake, M.
Takahashi, M.
Nakagawa, A.
Harada, T.
Hamamoto, R.
Med Image Anal2021Journal Article, cited 2 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Classification
comparative diagnostic reading
Content-based image retrieval (CBIR)
Deep Learning
disentangled representation
feature decomposition
In medical imaging, the characteristics purely derived from a disease should reflect the extent to which abnormal findings deviate from normal features. Indeed, physicians often need corresponding images without the abnormal findings of interest or, conversely, images that contain similar abnormal findings regardless of normal anatomical context. This is called comparative diagnostic reading of medical images, which is essential for a correct diagnosis. To support comparative diagnostic reading, content-based image retrieval (CBIR) that can selectively utilize normal and abnormal features in medical images as two separable semantic components will be useful. In this study, we propose a neural network architecture that decomposes the semantic components of medical images into two latent codes: a normal anatomy code and an abnormal anatomy code. The normal anatomy code represents the counterfactual normal anatomy that should have existed if the sample were healthy, whereas the abnormal anatomy code captures abnormal changes that reflect deviation from the normal baseline. By calculating similarity based on either the normal or abnormal anatomy code, or on the combination of the two codes, our algorithm can retrieve images according to the selected semantic component from a dataset consisting of brain magnetic resonance images of gliomas. Moreover, it can utilize a synthetic query vector combining the normal and abnormal anatomy codes from two different query images. To evaluate whether the retrieved images are acquired according to the targeted semantic component, the overlap of the ground-truth labels is calculated as a metric of semantic consistency. Our algorithm provides a flexible CBIR framework by handling the decomposed features, with qualitatively and quantitatively remarkable results.
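A minimal sketch of the retrieval step described above: every image is represented by a normal anatomy code and an abnormal anatomy code, and queries can rank the database by either component or by their combination. The code dimensions, the equal 0.5 weighting, and the toy database are assumptions of the sketch.

import numpy as np

def retrieve(query_normal, query_abnormal, db_normal, db_abnormal, mode="both"):
    def cos(a, b):                                   # cosine similarity vs. rows of b
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return b @ a
    if mode == "normal":
        sim = cos(query_normal, db_normal)
    elif mode == "abnormal":
        sim = cos(query_abnormal, db_abnormal)
    else:                                            # combined semantic query
        sim = 0.5 * cos(query_normal, db_normal) + 0.5 * cos(query_abnormal, db_abnormal)
    return np.argsort(-sim)                          # best matches first

rng = np.random.default_rng(1)
dbn, dba = rng.normal(size=(100, 64)), rng.normal(size=(100, 64))
# Query with the normal code of image 3 and the abnormal code of image 7:
print(retrieve(dbn[3], dba[7], dbn, dba, mode="abnormal")[:5])  # 7 ranks first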
Self-supervised driven consistency training for annotation efficient histopathology image analysis
Srinidhi, Chetan L
Kim, Seung Wook
Chen, Fu-Der
Martel, Anne L
Medical Image Analysis2021Journal Article, cited 0 times
Post-NAT-BRCA
Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology. However, obtaining such exhaustive manual annotations is often expensive, laborious, and prone to inter and intra-observer variability. While recent self-supervised and semi-supervised methods can alleviate this need by learning unsupervised feature representations, they still struggle to generalize well to downstream tasks when the number of labeled instances is small. In this work, we overcome this challenge by leveraging both task-agnostic and task-specific unlabeled data based on two novel strategies: (i) a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning; (ii) a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data. We carry out extensive validation experiments on three histopathology benchmark datasets across two classification and one regression based tasks, i.e., tumor metastasis detection, tissue type classification, and tumor cellularity quantification. Under limited-label data, the proposed method yields tangible improvements, which is close to or even outperforming other state-of-the-art self-supervised and supervised baselines. Furthermore, we empirically show that the idea of bootstrapping the self-supervised pretrained features is an effective way to improve the task-specific semi-supervised learning on standard benchmarks. Code and pretrained models are made available at: https://github.com/srinidhiPY/SSL_CR_Histo.
ProstAttention-Net: A deep attention model for prostate cancer segmentation by aggressiveness in MRI scans
Duran, A.
Dussert, G.
Rouviere, O.
Jaouen, T.
Jodoin, P. M.
Lartizien, C.
Med Image Anal2022Journal Article, cited 7 times
Website
Prostate Fused-MRI-Pathology
Computer Aided Detection (CADe)
Magnetic Resonance Imaging (MRI)
Humans
Male
Multiparametric Magnetic Resonance Imaging
Neoplasm Grading
PROSTATE
*Attention models
*Deep learning
*Prostate cancer
*Semantic segmentation
Multiparametric magnetic resonance imaging (mp-MRI) has shown excellent results in the detection of prostate cancer (PCa). However, characterizing the aggressiveness of prostate lesions from mp-MRI sequences is impossible in clinical practice, and biopsy remains the reference for determining the Gleason score (GS). In this work, we propose a novel end-to-end multi-class network that jointly segments the prostate gland and cancer lesions with GS group grading. After encoding the information in a latent space, the network is separated into two branches: 1) the first branch performs prostate segmentation; 2) the second branch uses this zonal prior as an attention gate for the detection and grading of prostate lesions. The model was trained and validated with a 5-fold cross-validation on a heterogeneous series of 219 MRI exams acquired on three different scanners prior to prostatectomy. In the free-response receiver operating characteristics (FROC) analysis for clinically significant lesion (defined as GS >6) detection, our model achieves 69.0%+/-14.5% sensitivity at 2.9 false positives per patient on the whole prostate and 70.8%+/-14.4% sensitivity at 1.5 false positives when considering the peripheral zone (PZ) only. Regarding the automatic GS group grading, Cohen's quadratic weighted kappa coefficient (kappa) is 0.418+/-0.138, which is the best reported lesion-wise kappa for GS segmentation to our knowledge. The model has encouraging generalization capacities, with kappa=0.120+/-0.092 on the PROSTATEx-2 public dataset, and achieves state-of-the-art performance for the segmentation of the whole prostate gland with a Dice of 0.875+/-0.013. Finally, we show that ProstAttention-Net improves performance in comparison to reference segmentation models, including U-Net, DeepLabv3+ and E-Net. The proposed attention mechanism is also shown to outperform Attention U-Net.
DDTNet: A dense dual-task network for tumor-infiltrating lymphocyte detection and segmentation in histopathological images of breast cancer
Zhang, Xiaoxuan
Zhu, Xiongfeng
Tang, Kai
Zhao, Yinghua
Lu, Zixiao
Feng, Qianjin
Med Image Anal2022Journal Article, cited 1 times
Website
Post-NAT-BRCA
Digital pathology
*Breast Neoplasms/diagnostic imaging/pathology
Female
Humans
Image Processing, Computer-Assisted/methods
*Lymphocytes, Tumor-Infiltrating/pathology
Prognosis
Staining and Labeling
*Computational pathology
*Dense dual-task
*Lymphocyte detection
*Lymphocyte segmentation
The morphological evaluation of tumor-infiltrating lymphocytes (TILs) in hematoxylin and eosin (H&E)-stained histopathological images is the key to breast cancer (BCa) diagnosis, prognosis, and therapeutic response prediction. For now, the qualitative assessment of TILs is carried out by pathologists, and computer-aided automatic lymphocyte measurement is still a great challenge because of the small size and complex distribution of lymphocytes. In this paper, we propose a novel dense dual-task network (DDTNet) to simultaneously achieve automatic TIL detection and segmentation in histopathological images. DDTNet consists of a backbone network (i.e., feature pyramid network) for extracting multi-scale morphological characteristics of TILs, a detection module for the localization of TIL centers, and a segmentation module for the delineation of TIL boundaries, where a boundary-aware branch is further used to provide a shape prior for segmentation. An effective feature fusion strategy is utilized to introduce multi-scale features with lymphocyte location information from highly correlated branches for precise segmentation. Experiments on three independent lymphocyte datasets of BCa demonstrate that DDTNet outperforms other advanced methods in detection and segmentation metrics. As part of this work, we also propose a semi-automatic method (TILAnno) to generate high-quality boundary annotations for TILs in H&E-stained histopathological images. TILAnno is used to produce a new lymphocyte dataset that contains 5029 annotated lymphocyte boundaries, which have been released to facilitate computational histopathology in the future.
Source free domain adaptation for medical image segmentation with fourier style mining
Yang, C.
Guo, X.
Chen, Z.
Yuan, Y.
Med Image Anal2022Journal Article, cited 0 times
Website
NCI-ISBI-Prostate-2013
Algorithm Development
Deep Learning
Consistency Learning
Contrastive Domain Distillation
Fourier Style Mining
Source Free Domain Adaptation
Unsupervised domain adaptation (UDA) aims to exploit the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled target domain. Existing UDA techniques typically assume that samples from the source and target domains are freely accessible during training. However, it may be impractical to access source images due to privacy concerns, especially in medical imaging scenarios involving patient information. To tackle this issue, we devise a novel source free domain adaptation framework with Fourier style mining, where only a well-trained source segmentation model is available for adaptation to the target domain. Our framework is composed of two stages: a generation stage and an adaptation stage. In the generation stage, we design a Fourier Style Mining (FSM) generator to recover source-like images from statistical information of the pretrained source model and a mutual Fourier Transform. These generated source-like images can provide the source data distribution and benefit domain alignment. In the adaptation stage, we design a Contrastive Domain Distillation (CDD) module to achieve feature-level adaptation, including a domain distillation loss to transfer relational knowledge and a domain contrastive loss to narrow the domain gap via a self-supervised paradigm. Besides, a Compact-Aware Domain Consistency (CADC) module is proposed to enhance consistency learning by filtering out noisy pseudo labels with a shape compactness metric, thus achieving output-level adaptation. Extensive experiments on cross-device and cross-centre datasets are conducted for polyp and prostate segmentation, and our method delivers impressive performance compared with state-of-the-art domain adaptation methods. The source code is available at https://github.com/CityU-AIM-Group/SFDA-FSM.
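The Fourier-style manipulation underlying this family of methods can be sketched generically: the amplitude spectrum of a target image is blended with source-style amplitude statistics while the target's phase (content) is kept. This is the standard Fourier domain adaptation recipe, not the paper's exact FSM generator; blending only a low-frequency window and the window size beta are common choices assumed here.

import numpy as np

def fourier_stylize(target_img, source_amplitude, beta=0.1):
    f_t = np.fft.fftshift(np.fft.fft2(target_img))
    amp_t, phase_t = np.abs(f_t), np.angle(f_t)
    h, w = target_img.shape
    ch, cw = h // 2, w // 2
    bh, bw = int(h * beta), int(w * beta)
    # Replace the low-frequency amplitude band with the source-style amplitude.
    amp_t[ch - bh:ch + bh, cw - bw:cw + bw] = \
        source_amplitude[ch - bh:ch + bh, cw - bw:cw + bw]
    f_new = amp_t * np.exp(1j * phase_t)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f_new)))

rng = np.random.default_rng(0)
target = rng.random((128, 128))
source_amp = np.abs(np.fft.fftshift(np.fft.fft2(rng.random((128, 128)))))
print(fourier_stylize(target, source_amp).shape)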
Prior-aware autoencoders for lung pathology segmentation
Astaraki, M.
Smedby, O.
Wang, C.
Med Image Anal2022Journal Article, cited 0 times
LIDC-IDRI
NSCLC-Radiomics
*COVID-19/diagnostic imaging
*Carcinoma, Non-Small-Cell Lung
Humans
Image Processing, Computer-Assisted/methods
LUNG
Tomography, X-Ray Computed
Healthy image generation
Lung pathology segmentation
Prior-aware deep learning
Segmentation of lung pathology in Computed Tomography (CT) images is of great importance for lung disease screening. However, the presence of different types of lung pathologies with a wide range of heterogeneity in size, shape, location, and texture, on the one hand, and their visual similarity to surrounding tissues, on the other, makes it challenging to perform reliable automatic lesion segmentation. To improve segmentation performance, we propose a deep learning framework comprising a Normal Appearance Autoencoder (NAA) model that learns the distribution of healthy lung regions and reconstructs pathology-free images from the corresponding pathological inputs by replacing the pathological regions with the characteristics of healthy tissues. Detected regions, which represent prior information regarding the shape and location of pathologies, are then integrated into a segmentation network to guide the attention of the model toward more meaningful delineations. The proposed pipeline was tested on three types of lung pathologies, including pulmonary nodules, Non-Small Cell Lung Cancer (NSCLC), and Covid-19 lesions, on five comprehensive datasets. The results show the superiority of the proposed prior model, which outperformed the baseline segmentation models in all cases with significant margins. On average, adding the prior model improved the Dice coefficient for the segmentation of lung nodules by 0.038, NSCLCs by 0.101, and Covid-19 lesions by 0.041. We conclude that the proposed NAA model produces reliable prior knowledge regarding lung pathologies, and integrating such knowledge into a prior segmentation network leads to more accurate delineations.
Mutual consistency learning for semi-supervised medical image segmentation
Wu, Yicheng
Ge, Zongyuan
Zhang, Donghao
Xu, Minfeng
Zhang, Lei
Xia, Yong
Cai, Jianfei
Medical Image Analysis2022Journal Article, cited 1 times
Website
Pancreas-CT
Segmentation
Machine Learning
Colour adaptive generative networks for stain normalisation of histopathology images
Cong, C.
Liu, S.
Di Ieva, A.
Pagnucco, M.
Berkovsky, S.
Song, Y.
Med Image Anal2022Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
Pathomics
Image color analysis
BRAIN
BREAST
Humans
Hematoxylin
Eosine Yellowish-(YS)
*Coloring Agents
Contrast enhancement
Stain normalisation
*Image Processing, Computer-Assisted/methods
Digital pathology
Generative Adversarial Network (GAN)
Semi-supervised learning
Deep learning has shown its effectiveness in histopathology image analysis, such as pathology detection and classification. However, stain colour variation in Hematoxylin and Eosin (H&E) stained histopathology images poses challenges in effectively training deep learning-based algorithms. To alleviate this problem, stain normalisation methods have been proposed, with most of the recent methods utilising generative adversarial networks (GAN). However, these methods are either trained fully with paired images from the target domain (supervised) or with unpaired images (unsupervised), suffering either from a large discrepancy between domains or from the risk of undertrained/overfitted models when only target domain images are used for training. In this paper, we introduce a colour adaptive generative network (CAGAN) for stain normalisation, which combines supervised learning from the target domain and unsupervised learning from the source domain. Specifically, we propose a dual-decoder generator and force consistency between their outputs, thus introducing extra supervision which benefits from extra training with source domain images. Moreover, our model is robust to stain colour variations due to the use of stain colour augmentation. We further implement a histogram loss to ensure the processed images are coloured with the target domain colours regardless of their content differences. Extensive experiments on four public histopathology image datasets, including TCGA-IDH, CAMELYON16, CAMELYON17 and BreakHis, demonstrate that our proposed method produces high-quality stain-normalised images which improve the performance of benchmark algorithms by 5% to 10% compared to baselines not using normalisation.
Distance-based detection of out-of-distribution silent failures for Covid-19 lung lesion segmentation
González, Camila
Gotkowski, Karol
Fuchs, Moritz
Bucher, Andreas
Dadras, Armin
Fischbach, Ricarda
Kaltenborn, Isabel Jasmin
Mukhopadhyay, Anirban
Medical Image Analysis2022Journal Article, cited 0 times
ISBI-MR-Prostate-2013
Automatic segmentation of ground glass opacities and consolidations in chest computed tomography (CT) scans can potentially ease the burden of radiologists during times of high resource utilisation. However, deep learning models are not trusted in the clinical routine due to failing silently on out-of-distribution (OOD) data. We propose a lightweight OOD detection method that leverages the Mahalanobis distance in the feature space and seamlessly integrates into state-of-the-art segmentation pipelines. The simple approach can even augment pre-trained models with clinically relevant uncertainty quantification. We validate our method across four chest CT distribution shifts and two magnetic resonance imaging applications, namely segmentation of the hippocampus and the prostate. Our results show that the proposed method effectively detects far- and near-OOD samples across all explored scenarios.
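The distance-based check described here is straightforward to sketch: fit a Gaussian to in-distribution feature vectors (e.g., pooled encoder features), then flag test samples whose Mahalanobis distance exceeds a threshold. The feature source and the 95th-percentile threshold are assumptions of this sketch.

import numpy as np

class MahalanobisOOD:
    def fit(self, train_features):                   # (N, D) in-distribution features
        self.mu = train_features.mean(axis=0)
        cov = np.cov(train_features, rowvar=False)
        self.prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        self.threshold = np.percentile(self._dist(train_features), 95)
        return self

    def _dist(self, x):
        diff = x - self.mu
        return np.sqrt(np.einsum("nd,de,ne->n", diff, self.prec, diff))

    def is_ood(self, x):
        return self._dist(x) > self.threshold

rng = np.random.default_rng(0)
detector = MahalanobisOOD().fit(rng.normal(size=(500, 16)))
shifted = rng.normal(loc=4.0, size=(5, 16))          # simulated distribution shift
print(detector.is_ood(shifted))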
Rapid artificial intelligence solutions in a pandemic—The COVID-19-20 Lung CT Lesion Segmentation Challenge
Roth, Holger R.
Xu, Ziyue
Tor-Díez, Carlos
Sanchez Jacob, Ramon
Zember, Jonathan
Molto, Jose
Li, Wenqi
Xu, Sheng
Turkbey, Baris
Turkbey, Evrim
Yang, Dong
Harouni, Ahmed
Rieke, Nicola
Hu, Shishuai
Isensee, Fabian
Tang, Claire
Yu, Qinji
Sölter, Jan
Zheng, Tong
Liauchuk, Vitali
Zhou, Ziqi
Moltz, Jan Hendrik
Oliveira, Bruno
Xia, Yong
Maier-Hein, Klaus H.
Li, Qikai
Husch, Andreas
Zhang, Luyang
Kovalev, Vassili
Kang, Li
Hering, Alessa
Vilaça, João L.
Flores, Mona
Xu, Daguang
Wood, Bradford
Linguraru, Marius George
Medical Image Analysis2022Journal Article, cited 18 times
Website
CT Images in COVID-19
COVID-19-AR
Segmentation
COVID-19
Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, which we supported with public data and state-of-the-art benchmark methods. Board-certified radiologists annotated 295 public images from two sources (A and B) for algorithm training (n=199, source A), validation (n=50, source A) and testing (n=23, source A; n=23, source B). There were 1,096 registered teams, of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models could be rapidly designed by diverse teams with the potential to measure disease or facilitate timely and patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge — 2020.
Domain Generalization for Prostate Segmentation in Transrectal Ultrasound Images: A Multi-center Study
Vesal, S.
Gayo, I.
Bhattacharya, I.
Natarajan, S.
Marks, L. S.
Barratt, D. C.
Fan, R. E.
Hu, Y.
Sonn, G. A.
Rusu, M.
Med Image Anal2022Journal Article, cited 0 times
Prostate-MRI-US-Biopsy
Humans
Male
*Prostate/diagnostic imaging
Ultrasonography
*Neural Networks, Computer
Magnetic Resonance Imaging/methods
Pelvis
Continual learning segmentation
Deep learning
Gland segmentation
Prostate MRI
Targeted biopsy
Transrectal ultrasound
Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique and a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that considers model feature positioning information to improve the segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned our model using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0+/-0.03 and Hausdorff Distance (HD95) of 2.28 mm in an independent set of subjects from the first institution. Moreover, our model generalized well in the studies from the other two institutions (Dice: 91.0+/-0.03; HD95: 3.7 mm and Dice: 82.0+/-0.03; HD95: 7.1 mm). We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MRI images to drive biopsy and image-guided treatments.
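The knowledge distillation loss mentioned above, which limits forgetting by penalizing divergence between the finetuned (student) and original (teacher) model outputs, can be sketched as the classic soft-label KL objective; the temperature T is an illustrative assumption.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between temperature-softened distributions, scaled by T^2
    # so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)

s = torch.randn(4, 3, requires_grad=True)   # student logits
t = torch.randn(4, 3)                       # frozen teacher logits
loss = distillation_loss(s, t)
loss.backward()
print(loss.item())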
Multi-channel auto-encoders for learning domain invariant representations enabling superior classification of histopathology images
Moyes, A.
Gault, R.
Zhang, K.
Ming, J.
Crookes, D.
Wang, J.
Med Image Anal2022Journal Article, cited 0 times
Website
TCGA-COAD
Deep Learning
Histopathology imaging features
Representation Learning
Stain Invariance
Domain shift is a problem commonly encountered when developing automated histopathology pipelines. The performance of machine learning models, such as convolutional neural networks, within automated histopathology pipelines is often diminished when applying them to novel data domains, due to factors arising from differing staining and scanning protocols. The Dual-Channel Auto-Encoder (DCAE) model was previously shown to produce feature representations that are less sensitive to appearance variation introduced by different digital slide scanners. In this work, the Multi-Channel Auto-Encoder (MCAE) model is presented as an extension of DCAE that learns from more than two domains of data. Experimental results show that the MCAE model produces feature representations that are less sensitive to inter-domain variations than the comparative StaNoSA method when tested on a novel synthetic dataset. This was also apparent when applying the MCAE, DCAE, and StaNoSA models to three different classification tasks from unseen domains, where the MCAE model outperforms the other models. These results show that the MCAE model generalises better to novel data, including data from unseen domains, than existing approaches, by actively learning normalised feature representations.
The Liver Tumor Segmentation Benchmark (LiTS)
Bilic, Patrick
Christ, Patrick
Li, Hongwei Bran
Vorontsov, Eugene
Ben-Cohen, Avi
Kaissis, Georgios
Szeskin, Adi
Jacobs, Colin
Mamani, Gabriel Efrain Humpire
Chartrand, Gabriel
Lohöfer, Fabian
Holch, Julian Walter
Sommer, Wieland
Hofmann, Felix
Hostettler, Alexandre
Lev-Cohain, Naama
Drozdzal, Michal
Amitai, Michal Marianne
Vivanti, Refael
Sosna, Jacob
Ezhov, Ivan
Sekuboyina, Anjany
Navarro, Fernando
Kofler, Florian
Paetzold, Johannes C.
Shit, Suprosanna
Hu, Xiaobin
Lipková, Jana
Rempfler, Markus
Piraud, Marie
Kirschke, Jan
Wiestler, Benedikt
Zhang, Zhiheng
Hülsemeyer, Christian
Beetz, Marcel
Ettlinger, Florian
Antonelli, Michela
Bae, Woong
Bellver, Míriam
Bi, Lei
Chen, Hao
Chlebus, Grzegorz
Dam, Erik B.
Dou, Qi
Fu, Chi-Wing
Georgescu, Bogdan
Giró-i-Nieto, Xavier
Gruen, Felix
Han, Xu
Heng, Pheng-Ann
Hesser, Jürgen
Moltz, Jan Hendrik
Igel, Christian
Isensee, Fabian
Jäger, Paul
Jia, Fucang
Kaluva, Krishna Chaitanya
Khened, Mahendra
Kim, Ildoo
Kim, Jae-Hun
Kim, Sungwoong
Kohl, Simon
Konopczynski, Tomasz
Kori, Avinash
Krishnamurthi, Ganapathy
Li, Fan
Li, Hongchao
Li, Junbo
Li, Xiaomeng
Lowengrub, John
Ma, Jun
Maier-Hein, Klaus
Maninis, Kevis-Kokitsi
Meine, Hans
Merhof, Dorit
Pai, Akshay
Perslev, Mathias
Petersen, Jens
Pont-Tuset, Jordi
Qi, Jin
Qi, Xiaojuan
Rippel, Oliver
Roth, Karsten
Sarasua, Ignacio
Schenk, Andrea
Shen, Zengming
Torres, Jordi
Wachinger, Christian
Wang, Chunliang
Weninger, Leon
Wu, Jianrong
Xu, Daguang
Yang, Xiaoping
Yu, Simon Chun-Ho
Yuan, Yading
Yue, Miao
Zhang, Liping
Cardoso, Jorge
Bakas, Spyridon
Braren, Rickmer
Heinemann, Volker
Pal, Christopher
Tang, An
Kadoury, Samuel
Soler, Luc
van Ginneken, Bram
Greenspan, Hayit
Joskowicz, Leo
Menze, Bjoern
Medical Image Analysis2023Journal Article, cited 612 times
Website
TCGA-LIHC
Segmentation
Liver
Liver tumor
Deep learning
CT
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
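The Dice similarity coefficient quoted throughout these results can be computed for binary masks as below; the smoothing epsilon is a common convention and an assumption of this sketch.

import numpy as np

def dice(pred, truth, eps=1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((64, 64), dtype=bool); a[16:48, 16:48] = True
b = np.zeros((64, 64), dtype=bool); b[20:52, 20:52] = True
print(round(dice(a, b), 3))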
Guidelines and evaluation of clinical explainable AI in medical image analysis
Jin, W.
Li, X.
Fatehi, M.
Hamarneh, G.
Med Image Anal2023Journal Article, cited 0 times
BraTS 2020
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Humans
*Artificial Intelligence
*Benchmarking
Clinical Relevance
Evidence Gaps
Explainable AI evaluation
Interpretable machine learning
Medical image analysis
Multi-modal medical image
Explainable artificial intelligence (XAI) is essential for enabling clinical users to get informed decision support from AI and comply with evidence-based medical practice. Applying XAI in clinical settings requires proper evaluation criteria to ensure the explanation technique is both technically sound and clinically useful, but specific support is lacking to achieve this goal. To bridge the research gap, we propose the Clinical XAI Guidelines that consist of five criteria a clinical XAI needs to be optimized for. The guidelines recommend choosing an explanation form based on Guideline 1 (G1) Understandability and G2 Clinical relevance. For the chosen explanation form, its specific XAI technique should be optimized for G3 Truthfulness, G4 Informative plausibility, and G5 Computational efficiency. Following the guidelines, we conducted a systematic evaluation on a novel problem of multi-modal medical image explanation with two clinical tasks, and proposed new evaluation metrics accordingly. Sixteen commonly-used heatmap XAI techniques were evaluated and found to be insufficient for clinical use due to their failure in G3 and G4. Our evaluation demonstrated the use of Clinical XAI Guidelines to support the design and evaluation of clinically viable XAI.
Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging
Osuala, Richard
Kushibar, Kaisar
Garrucho, Lidia
Linardos, Akis
Szafranowska, Zuzanna
Klein, Stefan
Glocker, Ben
Diaz, Oliver
Lekadir, Karim
Medical Image Analysis2022Journal Article, cited 0 times
PROSTATE-MRI
Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
AIGAN: Attention-encoding Integrated Generative Adversarial Network for the reconstruction of low-dose CT and low-dose PET images
Fu, Yu
Dong, Shunjie
Niu, Meng
Xue, Le
Guo, Hanning
Huang, Yanyan
Xu, Yuanfan
Yu, Tianbai
Shi, Kuangyu
Yang, Qianqian
Shi, Yiyu
Zhang, Hong
Tian, Mei
Zhuo, Cheng
Medical Image Analysis2023Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Generative Adversarial Network (GAN)
Image Fusion
Radiomics
Computed Tomography (CT)
Positron Emission Tomography (PET)
X-ray computed tomography (CT) and positron emission tomography (PET) are two of the most commonly used medical imaging technologies for the evaluation of many diseases. Full-dose imaging for CT and PET ensures image quality but usually raises concerns about the potential health risks of radiation exposure. The tension between reducing radiation exposure and maintaining diagnostic performance can be addressed effectively by reconstructing low-dose CT (L-CT) and low-dose PET (L-PET) images to the same high quality as their full-dose counterparts (F-CT and F-PET). In this paper, we propose an Attention-encoding Integrated Generative Adversarial Network (AIGAN) to achieve efficient and universal full-dose reconstruction for L-CT and L-PET images. AIGAN consists of three modules: the cascade generator, the dual-scale discriminator, and the multi-scale spatial fusion module (MSFM). A sequence of consecutive L-CT (L-PET) slices is first fed into the cascade generator, which integrates a generation-encoding-generation pipeline. The generator plays a zero-sum game with the dual-scale discriminator over two stages: a coarse stage and a fine stage. In both stages, the generator generates estimated F-CT (F-PET) images as close to the original F-CT (F-PET) images as possible. After the fine stage, the estimated fine full-dose images are fed into the MSFM, which fully explores inter- and intra-slice structural information, to output the final generated full-dose images. Experimental results show that the proposed AIGAN achieves state-of-the-art performance on commonly used metrics and satisfies the reconstruction needs of clinical standards.
Contour-aware network with class-wise convolutions for 3D abdominal multi-organ segmentation
Gao, H.
Lyu, M.
Zhao, X.
Yang, F.
Bai, X.
Med Image Anal2023Journal Article, cited 0 times
Pancreas-CT
Humans
*Tomography, X-Ray Computed/methods
*Image Processing, Computer-Assisted/methods
CT image
Deep learning
Image segmentation
Three-dimensional organ segmentation
PyTorch
Accurate delineation of multiple organs is a critical process for various medical procedures, and can be operator-dependent and time-consuming. Existing organ segmentation methods, which were mainly inspired by natural image analysis techniques, might not fully exploit the traits of the multi-organ segmentation task and cannot accurately segment organs of various shapes and sizes simultaneously. In this work, the characteristics of multi-organ segmentation are considered: the global count, position, and scale of organs are generally predictable, while their local shape and appearance are volatile. Thus, we supplement the region segmentation backbone with a contour localization task to increase certainty along delicate boundaries. Meanwhile, each organ has exclusive anatomical traits, which motivates us to deal with class variability via class-wise convolutions that highlight organ-specific features and suppress irrelevant responses at different fields of view. To validate our method with adequate amounts of patients and organs, we constructed a multi-center dataset, which contains 110 3D CT scans with 24,528 axial slices, and provided voxel-level manual segmentations of 14 abdominal organs, which adds up to 1,532 3D structures in total. Extensive ablation and visualization studies validate the effectiveness of the proposed method. Quantitative analysis shows that we achieve state-of-the-art performance for most abdominal organs and obtain, on average, a 95% Hausdorff Distance of 3.63 mm and a Dice Similarity Coefficient of 83.32%.
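Giving each organ class its own convolution branch, as the class-wise convolutions above describe, can be expressed compactly with a grouped convolution, one group per class; the channel counts below are illustrative assumptions.

import torch
import torch.nn as nn

n_classes, ch_per_class = 14, 8               # 14 abdominal organs, as in the paper
feat = torch.randn(1, n_classes * ch_per_class, 48, 48)

# groups=n_classes: filters for one organ never mix with another organ's channels,
# highlighting organ-specific features and suppressing irrelevant responses.
classwise_conv = nn.Conv2d(n_classes * ch_per_class, n_classes * ch_per_class,
                           kernel_size=3, padding=1, groups=n_classes)
print(classwise_conv(feat).shape)             # torch.Size([1, 112, 48, 48])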
Segment anything model for medical image analysis: An experimental study
Mazurowski, Maciej A
Dong, Haoyu
Gu, Hanxue
Yang, Jichen
Konz, Nicholas
Zhang, Yixin
Medical Image Analysis2023Journal Article, cited 0 times
CT-ORG
Training segmentation models for medical images continues to be challenging due to the limited availability of data annotations. Segment Anything Model (SAM) is a foundation model trained on over 1 billion annotations, predominantly for natural images, that is intended to segment user-defined objects of interest in an interactive manner. While the model performance on natural images is impressive, medical image domains pose their own set of challenges. Here, we perform an extensive evaluation of SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies. In our experiments, we generated point and box prompts for SAM using a standard method that simulates interactive segmentation. We report the following findings: (1) SAM's performance based on single prompts highly varies depending on the dataset and the task, from IoU=0.1135 for spine MRI to IoU=0.8650 for hip X-ray. (2) Segmentation performance appears to be better for well-circumscribed objects with less ambiguous prompts, such as the segmentation of organs in computed tomography, and poorer in various other scenarios such as the segmentation of brain tumors. (3) SAM performs notably better with box prompts than with point prompts. (4) SAM outperforms similar methods RITM, SimpleClick, and FocalClick in almost all single-point prompt settings. (5) When multiple-point prompts are provided iteratively, SAM's performance generally improves only slightly while other methods improve to a level that surpasses SAM's point-based performance. We also provide several illustrations of SAM's performance on all tested datasets, iterative segmentation, and SAM's behavior given prompt ambiguity. We conclude that SAM shows impressive zero-shot segmentation performance for certain medical imaging datasets, but moderate to poor performance for others. SAM has the potential to make a significant impact in automated medical image segmentation, but appropriate care needs to be applied when using it. Code for evaluating SAM is made publicly available at https://github.com/mazurowski-lab/segment-anything-medical-evaluation.
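For readers wanting to reproduce this style of evaluation, the prompt interface of the released segment-anything package looks roughly as follows; the checkpoint filename, image, and click coordinates are placeholders, and the paper's own prompt-sampling protocol is more elaborate.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM backbone (checkpoint path is a placeholder).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for an RGB-converted slice
predictor.set_image(image)

# A single foreground click (label 1) at a hypothetical lesion location.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 300]]),
    point_labels=np.array([1]),
    multimask_output=True,  # SAM proposes several candidate masks
)
best_mask = masks[np.argmax(scores)]  # pick the highest-scoring candidate
```

A box prompt is passed analogously via the box argument, which the study found to work notably better than single points.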
Prototypical few-shot segmentation for cross-institution male pelvic structures with spatial registration
Li, Yiwen
Fu, Yunguan
Gayo, Iani J M B
Yang, Qianye
Min, Zhe
Saeed, Shaheer U
Yan, Wen
Wang, Yipei
Noble, J Alison
Emberton, Mark
Clarkson, Matthew J
Huisman, Henkjan
Barratt, Dean C
Prisacariu, Victor A
Hu, Yipeng
Medical Image Analysis2023Journal Article, cited 0 times
Prostate-3T
PROSTATE-DIAGNOSIS
PROSTATE-MRI
The prowess that makes few-shot learning desirable in medical image analysis is the efficient use of the support image data, which are labelled to classify or segment new classes, a task that otherwise requires substantially more training images and expert annotations. This work describes a fully 3D prototypical few-shot segmentation algorithm, such that the trained networks can be effectively adapted to clinically interesting structures that are absent in training, using only a few labelled images from a different institute. First, to compensate for the widely recognised spatial variability between institutions in episodic adaptation of novel classes, a novel spatial registration mechanism is integrated into prototypical learning, consisting of a segmentation head and a spatial alignment module. Second, to assist the training with observed imperfect alignment, a support mask conditioning module is proposed to further utilise the annotation available from the support images. Extensive experiments are presented in an application of segmenting eight anatomical structures important for interventional planning, using a data set of 589 pelvic T2-weighted MR images acquired at seven institutes. The results demonstrate the efficacy of each of the 3D formulation, the spatial registration, and the support mask conditioning, all of which made positive contributions independently or collectively. Compared with the previously proposed 2D alternatives, the few-shot segmentation performance was improved with statistical significance, regardless of whether the support data come from the same or different institutes.
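The core prototypical step, a class prototype obtained by masked average pooling over support features and a similarity map over query features, can be sketched in a few lines of PyTorch; the tensor shapes, cosine scoring, and threshold below are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def prototype(support_feats, support_mask):
    """Masked average pooling. support_feats: (B, C, D, H, W); support_mask: (B, 1, D, H, W)."""
    mask = F.interpolate(support_mask, size=support_feats.shape[2:], mode="nearest")
    return (support_feats * mask).sum(dim=(2, 3, 4)) / (mask.sum(dim=(2, 3, 4)) + 1e-6)

def similarity_map(query_feats, proto):
    """Cosine similarity between every query voxel feature and the class prototype."""
    proto = proto[:, :, None, None, None]                  # (B, C, 1, 1, 1)
    return F.cosine_similarity(query_feats, proto, dim=1)  # (B, D, H, W)

feats = torch.randn(1, 32, 8, 16, 16)                      # toy 3D feature volume
mask = (torch.rand(1, 1, 8, 16, 16) > 0.5).float()         # toy support annotation
pred = similarity_map(feats, prototype(feats, mask)) > 0.7  # arbitrary threshold
```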
Tumor radiogenomics in gliomas with Bayesian layered variable selection
Mohammed, S.
Kurtek, S.
Bharath, K.
Rao, A.
Baladandayuthapani, V.
Med Image Anal2023Journal Article, cited 0 times
BraTS-TCGA-LGG
Cancer driver genes
Lower grade gliomas
Magnetic Resonance Imaging (MRI)
Radiogenomics
Phenotyping
We propose a statistical framework to analyze radiological magnetic resonance imaging (MRI) and genomic data to identify the underlying radiogenomic associations in lower grade gliomas (LGG). We devise a novel imaging phenotype by dividing the tumor region into concentric spherical layers that mimics the tumor evolution process. MRI data within each layer is represented by voxel-intensity-based probability density functions which capture the complete information about tumor heterogeneity. Under a Riemannian-geometric framework these densities are mapped to a vector of principal component scores which act as imaging phenotypes. Subsequently, we build Bayesian variable selection models for each layer with the imaging phenotypes as the response and the genomic markers as predictors. Our novel hierarchical prior formulation incorporates the interior-to-exterior structure of the layers, and the correlation between the genomic markers. We employ a computationally-efficient Expectation-Maximization-based strategy for estimation. Simulation studies demonstrate the superior performance of our approach compared to other approaches. With a focus on the cancer driver genes in LGG, we discuss some biologically relevant findings. Genes implicated in survival and oncogenesis are identified as being associated with the spherical layers, which could potentially serve as early-stage diagnostic markers for disease monitoring, prior to routine invasive approaches. We provide an R package that can be used to deploy our framework to identify radiogenomic associations.
Advances in medical image analysis with vision Transformers: A comprehensive review
Azad, Reza
Kazerouni, Amirhossein
Heidari, Moein
Aghdam, Ehsan Khodapanah
Molaei, Amirali
Jia, Yiwei
Jose, Abin
Roy, Rijo
Merhof, Dorit
Medical Image Analysis2023Journal Article, cited 0 times
COVID-19-NY-SBU
TCGA-KIRC-Radiogenomics
TCGA-LUAD
TCGA-LUSC
The remarkable performance of the Transformer architecture in natural language processing has recently also triggered broad interest in Computer Vision. Among other merits, Transformers have proven capable of learning long-range dependencies and spatial correlations, which is a clear advantage over convolutional neural networks (CNNs), which have been the de facto standard in Computer Vision problems so far. Thus, Transformers have become an integral part of modern medical image analysis. In this review, we provide an encyclopedic overview of the applications of Transformers in medical imaging. Specifically, we present a systematic and thorough review of relevant recent Transformer literature for different medical image analysis tasks, including classification, segmentation, detection, registration, synthesis, and clinical report generation. For each of these applications, we investigate the novelty, strengths and weaknesses of the different proposed strategies and develop taxonomies highlighting key properties and contributions. Further, if applicable, we outline current benchmarks on different datasets. Finally, we summarize key challenges and discuss different future research directions. In addition, we have provided the cited papers with their corresponding implementations at https://github.com/mindflow-institue/Awesome-Transformer.
Prompt tuning for parameter-efficient medical image segmentation
Fischer, Marc
Bartler, Alexander
Yang, Bin
Med Image Anal2023Journal Article, cited 1 times
Website
Pancreas-CT
Segmentation
Algorithm Development
Prompt tuning
Self-attention
Self-supervision
Semantic segmentation
Semi-supervised deep learning
Transformer
Neural networks pre-trained on a self-supervision scheme have become the standard when operating in data-rich environments with scarce annotations. As such, fine-tuning a model to a downstream task in a parameter-efficient but effective way, e.g. for a new set of classes in the case of semantic segmentation, is of increasing importance. In this work, we propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets. Relying on the recently popularized prompt tuning approach, we provide a prompt-able UNETR (PUNETR) architecture that is frozen after pre-training but adaptable throughout the network by class-dependent learnable prompt tokens. We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online-generated prototypes (contrastive prototype assignment, CPA) of a student-teacher combination. Concurrently, an additional segmentation loss is applied for a subset of classes during pre-training, further increasing the effectiveness of the leveraged prompts in the fine-tuning phase. We demonstrate that the resulting method is able to attenuate the gap between fully fine-tuned and parameter-efficiently adapted models on CT imaging datasets. To this end, the difference between fully fine-tuned and prompt-tuned variants amounts to 7.81 pp for the TCIA/BTCV dataset as well as 5.37 and 6.57 pp for subsets of the TotalSegmentator dataset in the mean Dice Similarity Coefficient (DSC, in %) while only adjusting prompt tokens, corresponding to 0.51% of the pre-trained backbone model with 24.4M frozen parameters. The code for this work is available at https://github.com/marcdcfischer/PUNETR.
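The mechanism of learnable prompt tokens over a frozen backbone can be illustrated with a minimal PyTorch wrapper; this is a generic sketch of prompt tuning, not the PUNETR architecture itself, and the toy encoder below is an assumption.

```python
import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    """Prepends learnable prompt tokens to the input of a frozen token-based encoder."""

    def __init__(self, encoder, num_prompts, dim):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # the backbone stays frozen
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, dim) * 0.02)

    def forward(self, tokens):               # tokens: (B, N, C) patch embeddings
        prompts = self.prompts.expand(tokens.size(0), -1, -1)
        return self.encoder(torch.cat([prompts, tokens], dim=1))

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
model = PromptTunedEncoder(nn.TransformerEncoder(layer, num_layers=2), num_prompts=8, dim=64)
out = model(torch.randn(2, 196, 64))         # only the 8 prompt tokens receive gradients
```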
WarpDrive: Improving spatial normalization using manual refinements
Oxenford, S.
Rios, A. S.
Hollunder, B.
Neudorfer, C.
Boutet, A.
Elias, G. J. B.
Germann, J.
Loh, A.
Deeb, W.
Salvato, B.
Almeida, L.
Foote, K. D.
Amaral, R.
Rosenberg, P. B.
Tang-Wai, D. F.
Wolk, D. A.
Burke, A. D.
Sabbagh, M. N.
Salloway, S.
Chakravarty, M. M.
Smith, G. S.
Lyketsos, C. G.
Okun, M. S.
Anderson, W. S.
Mari, Z.
Ponce, F. A.
Lozano, A.
Neumann, W. J.
Al-Fatly, B.
Horn, A.
Med Image Anal2023Journal Article, cited 0 times
TCGA-KIRC
TCGA-KIRP
TCGA-LIHC
ABDOMEN
LIVER
Radiomics
Semiautomatic segmentation
Image Registration
Segmentation
Algorithm Development
Deep brain stimulation
Image normalization
Interactive registration
Spatial normalization, the process of mapping subject brain images to an average template brain, has evolved over the last 20+ years into a reliable method that facilitates the comparison of brain imaging results across patients, centers and modalities. While overall successful, this automatic process sometimes yields suboptimal results, especially when dealing with brains with extensive neurodegeneration and atrophy patterns, or when high accuracy in specific regions is needed. Here we introduce WarpDrive, a novel tool for manual refinement of image alignment after automated registration. We show that the tool, applied in a cohort of patients with Alzheimer's disease who underwent deep brain stimulation surgery, helps create more accurate representations of the data as well as meaningful models to explain patient outcomes. The tool is built to handle any type of 3D imaging data, also allowing refinements in high-resolution imaging, including histology and multiple modalities, to precisely aggregate multiple data sources together.
Sketch-based semantic retrieval of medical images
Kobayashi, Kazuma
Gu, Lin
Hataya, Ryuichiro
Mizuno, Takaaki
Miyake, Mototaka
Watanabe, Hirokazu
Takahashi, Masamichi
Takamizawa, Yasuyuki
Yoshida, Yukihiro
Nakamura, Satoshi
Kouno, Nobuji
Bolatkan, Amina
Kurose, Yusuke
Harada, Tatsuya
Hamamoto, Ryuji
Medical Image Analysis2023Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
The volume of medical images stored in hospitals is rapidly increasing; however, the utilization of these accumulated medical images remains limited. Existing content-based medical image retrieval (CBMIR) systems typically require example images, leading to practical limitations, such as the lack of customizable, fine-grained image retrieval, the inability to search without example images, and difficulty in retrieving rare cases. In this paper, we introduce a sketch-based medical image retrieval (SBMIR) system that enables users to find images of interest without the need for example images. The key concept is feature decomposition of medical images, which allows the entire feature of a medical image to be decomposed into and reconstructed from normal and abnormal features. Building on this concept, our SBMIR system provides an easy-to-use two-step graphical user interface: users first select a template image to specify a normal feature and then draw a semantic sketch of the disease on the template image to represent an abnormal feature. The system integrates both types of input to construct a query vector and retrieves reference images. For evaluation, ten healthcare professionals participated in a user test using two datasets. Consequently, our SBMIR system enabled users to overcome previous challenges, including image retrieval based on fine-grained image characteristics, image retrieval without example images, and image retrieval for rare cases. Our SBMIR system provides on-demand, customizable medical image retrieval, thereby expanding the utility of medical image databases.
ATEC23 Challenge: Automated prediction of treatment effectiveness in ovarian cancer using histopathological images
Wang, Ching-Wei
Firdi, Nabila Puspita
Chu, Tzu-Chiao
Faiz, Mohammad Faiz Iqbal
Iqbal, Mohammad Zafar
Li, Yifan
Yang, Bo
Mallya, Mayur
Bashashati, Ali
Li, Fei
Wang, Haipeng
Lu, Mengkang
Xia, Yong
Chao, Tai-Kuang
Medical Image Analysis2024Journal Article, cited 0 times
Website
Ovarian Bevacizumab Response
Pathomics
Challenge
Ovarian cancer
Precision oncology
Computational pathology
Targeting therapy
Ovarian cancer, predominantly epithelial ovarian cancer (EOC), is a global health concern due to its high mortality rate. Despite the progress made during the last two decades in the surgery and chemotherapy of ovarian cancer, more than 70% of advanced patients experience recurrent cancer and disease. Bevacizumab is a humanized monoclonal antibody which blocks VEGF signaling in cancer, inhibits angiogenesis, causes tumor shrinkage, and has recently been approved by the FDA as a monotherapy for advanced ovarian cancer in combination with chemotherapy. Unfortunately, Bevacizumab may also induce harmful adverse effects, such as hypertension, bleeding, arterial thromboembolism, poor wound healing and gastrointestinal perforation. Given the expensive cost and unwanted toxicities, there is an urgent need for predictive methods to identify who could benefit from bevacizumab. Of the 18 approved requests from 5 countries, 6 teams developed fully automated systems using 284 whole-section WSIs for training and submitted predictions on a test set of 180 tissue core images, with the corresponding ground-truth labels kept private. This paper summarizes the 5 qualified methods successfully submitted to the international challenge of automated prediction of treatment effectiveness in ovarian cancer using histopathologic images (ATEC23), held at the 26th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) in 2023, and evaluates the methods in comparison with 5 state-of-the-art deep learning approaches. This study further assesses the effectiveness of the presented prediction models as indicators for patient selection using both Cox proportional hazards analysis and Kaplan–Meier survival analysis. A robust and cost-effective deep learning pipeline for digital histopathology tasks has become a necessity within the medical community. This challenge highlights the limitations of current MIL methods, particularly for prognosis-based classification tasks, and the importance of DCNNs like Inception, whose nonlinear convolutional modules at various resolutions facilitate processing the data at multiple scales, a key feature for pathology-related prediction tasks. This further suggests feature reuse at various scales as a direction for future research. In particular, this paper releases the labels of the testing set and outlines future research directions in precision oncology for predicting ovarian cancer treatment effectiveness and facilitating patient selection via histopathological images.
A new approach for brain tumor diagnosis system: Single image super resolution based maximum fuzzy entropy segmentation and convolutional neural network
Sert, Eser
Özyurt, Fatih
Doğantekin, Akif
Med Hypotheses2019Journal Article, cited 0 times
TCGA-GBM
Segmentation
Deep learning
Magnetic resonance imaging (MRI) images can be used to diagnose brain tumors. Thanks to these images, some methods have so far been proposed to distinguish between benign and malignant brain tumors. Many systems attempting to define these tumors are based on tissue analysis methods. However, various factors such as the quality of the MRI device, noisy images and low image resolution may decrease the quality of MRI images. To eliminate these problems, super-resolution approaches are preferred as a complementary source for brain tumor images. The proposed method benefits from single image super resolution (SISR) and maximum fuzzy entropy segmentation (MFES) for brain tumor segmentation on an MRI image. A pre-trained ResNet convolutional neural network (CNN) and a support vector machine (SVM) are then used to perform feature extraction and classification, respectively. Experimental studies showed that SISR yielded higher performance in brain tumor segmentation, and likewise in classifying brain tumor regions and benign versus malignant brain tumors. As a result, the present study indicated that SISR achieved an accuracy rate of 95% in the diagnosis of segmented brain tumors, exceeding brain tumor segmentation using MFES without SISR by 7.5%.
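The extract-then-classify stage shared by this and the following two studies, a pretrained CNN used as a fixed feature extractor feeding a classical classifier, can be sketched as below; the ResNet variant, input sizes, and random stand-in data are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Pretrained ResNet with the classification head removed.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = nn.Sequential(*list(resnet.children())[:-1]).eval()

@torch.no_grad()
def extract(batch):
    """batch: (N, 3, 224, 224) normalized images -> (N, 512) feature vectors."""
    return extractor(batch).flatten(1).numpy()

X_train = extract(torch.randn(16, 3, 224, 224))   # stand-ins for segmented tumor ROIs
y_train = [0, 1] * 8                              # benign / malignant labels
clf = SVC(kernel="rbf").fit(X_train, y_train)
pred = clf.predict(extract(torch.randn(4, 3, 224, 224)))
```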
An expert system for brain tumor detection: Fuzzy C-means with super resolution and convolutional neural network with extreme learning machine
Ozyurt, F.
Sert, E.
Avci, D.
Med Hypotheses2020Journal Article, cited 10 times
Website
TCGA-GBM
BRAIN
Algorithm Development
Super-resolution, one of the trending topics of recent times, increases the resolution of images to higher levels. Increasing the resolution of an image that is vital in terms of the information it contains, such as a brain magnetic resonance image (MRI), makes the important information in the image more visible and clearer, so that the borders of tumors in the image can be found more successfully. In this study, a brain tumor detection approach based on fuzzy C-means with super-resolution and convolutional neural networks with extreme learning machine algorithms (SR-FCM-CNN) is proposed. The aim has been to segment tumors with high performance using the Super Resolution Fuzzy-C-Means (SR-FCM) approach for tumor detection from brain MR images. Afterward, feature extraction with the pretrained SqueezeNet convolutional neural network (CNN) architecture and classification with an extreme learning machine (ELM) were performed. In the experimental studies, brain tumors were better segmented and extracted using the SR-FCM method. Using the SqueezeNet architecture, features were extracted from a smaller neural network model with fewer parameters. In the proposed method, an accuracy rate of 98.33% was achieved in the diagnosis of brain tumors segmented using SR-FCM. This rate is 10% greater than the recognition rate for brain tumors segmented with fuzzy C-means (FCM) without SR.
Brain tumor segmentation approach based on the extreme learning machine and significantly fast and robust fuzzy C-means clustering algorithms running on Raspberry Pi hardware
ŞİŞİK, Fatih
Sert, Eser
Medical Hypotheses2020Journal Article, cited 0 times
TCGA-GBM
REMBRANDT
Segmentation
Fuzzy C-means clustering (FCM)
Automatic decision support systems have gained importance in the health sector in recent years. In parallel with recent developments in the fields of artificial intelligence and image processing, embedded systems are also used in decision support systems for tumor diagnosis. The extreme learning machine (ELM) is a recently developed, quick and efficient algorithm which can diagnose tumors using machine learning techniques. Similarly, the significantly fast and robust fuzzy C-means clustering algorithm (FRFCM) is a novel and fast algorithm which displays high performance. In the present study, a brain tumor segmentation approach is proposed based on extreme learning machine and significantly fast and robust fuzzy C-means clustering algorithms (BTS-ELM-FRFCM) running on Raspberry Pi (RPi) hardware. The present study mainly aims to introduce new segmentation system hardware containing new algorithms and offering a high level of accuracy to the health sector. RPis are useful mobile devices due to their cost-effectiveness and satisfying hardware. 3200 training images were used to train the ELM in the present study, and 20 MRI images were used for testing. Figure of merit (FOM), Jaccard similarity coefficient (JSC) and Dice indexes were used to evaluate the performance of the proposed approach. In addition, the proposed method was compared with brain tumor segmentation based on support vector machine (BTS-SVM), brain tumor segmentation based on fuzzy C-means (BTS-FCM) and brain tumor segmentation based on self-organizing maps and k-means (BTS-SOM). The statistical analysis of the FOM, JSC and Dice results obtained using the four different approaches indicated that BTS-ELM-FRFCM displayed the highest performance. Thus, it can be concluded that the embedded system designed in the present study can perform brain tumor segmentation with a high accuracy rate.
Generating post-hoc explanation from deep neural networks for multi-modal medical image analysis tasks
Jin, W.
Li, X.
Fatehi, M.
Hamarneh, G.
MethodsX2023Journal Article, cited 1 times
Website
Explaining model decisions from medical image inputs is necessary for deploying deep neural network (DNN) based models as clinical decision assistants. The acquisition of multi-modal medical images is pervasive in practice for supporting the clinical decision-making process. Multi-modal images capture different aspects of the same underlying regions of interest. Explaining DNN decisions on multi-modal medical images is thus a clinically important problem. Our methods adopt commonly-used post-hoc artificial intelligence feature attribution methods to explain DNN decisions on multi-modal medical images, covering two categories of gradient- and perturbation-based methods. * Gradient-based explanation methods, such as Guided BackProp and DeepLift, utilize the gradient signal to estimate the feature importance for model prediction. * Perturbation-based methods, such as occlusion, LIME, and kernel SHAP, utilize input-output sampling pairs to estimate the feature importance. * We describe the implementation details of how to make the methods work for multi-modal image input, and make the implementation code available.
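As a concrete instance of the perturbation family described above, a bare-bones occlusion map for a single-modality 2D input can be written as follows; the patch size, stride, and fill value are arbitrary choices, and the authors' released code handles the multi-modal case.

```python
import torch

@torch.no_grad()
def occlusion_map(model, image, target, patch=16, stride=8, fill=0.0):
    """image: (1, C, H, W). Importance = drop in the target score when a patch is occluded."""
    model.eval()
    base = model(image)[0, target].item()
    _, _, H, W = image.shape
    heat = torch.zeros(H, W)
    count = torch.zeros(H, W)
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            occluded = image.clone()
            occluded[:, :, y:y + patch, x:x + patch] = fill
            drop = base - model(occluded)[0, target].item()
            heat[y:y + patch, x:x + patch] += drop
            count[y:y + patch, x:x + patch] += 1
    return heat / count.clamp(min=1)   # average importance per pixel
```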
An efficient image segmentation and classification of lung lesions in pet and CT image fusion using DTWT incorporated SVM
Priya, R. Mohana
Venkatesan, P.
2021Journal Article, cited 0 times
RIDER Lung PET-CT
Automatic segmentation and classification of fused lung Computed Tomography (CT) and Positron Emission Tomography (PET) images is presented. The system consists of four basic stages: 1) lung image fusion; 2) segmentation of the fused lung CT/PET images; 3) post-processing; and 4) classification of the fused lung images. In the first stage, lung image fusion is performed by a deep learning method. The input CT/PET images are decomposed by the Dual Tree m-band Wavelet Transform (DTWT), and the DTWT coefficients are fused by the deep learning method. This fused CT/PET image is the input for the following stages. In the next stage, the fused CT/PET images are decomposed by DTWT into lower and higher frequency sub-band coefficients. The lower frequency components are set to zero, and the higher frequency components are used for reconstruction. A clustering-based thresholding method is then used for segmentation. In the post-processing stage, unwanted small regions are removed by morphological operations and the lung region is detected. Finally, in the classification stage, intensity- and texture-based features are extracted and classified by hybrid classifiers such as the Support Vector Machine (SVM). The system achieves a classification accuracy of 99% using the SVM classifier.
Self supervised contrastive learning for digital histopathology
Ciga, Ozan
Xu, Tony
Martel, Anne Louise
Machine Learning with Applications2022Journal Article, cited 28 times
Website
Prostate-MRI
C-NMC 2019
SN-AM
Post-NAT-BRCA
AML-Cytomorphology_LMU
CPTAC
TCGA
Algorithm Development
Pathomics
Unsupervised learning has been a long-standing goal of machine learning and is especially important for medical image analysis, where the learning can compensate for the scarcity of labeled datasets. A promising subclass of unsupervised learning is self-supervised learning, which aims to learn salient features using the raw input as the learning signal. In this work, we tackle the issue of learning domain-specific features without any supervision to improve multiple task performances that are of interest to the digital histopathology community. We apply a contrastive self-supervised learning method to digital histopathology by collecting and pretraining on 57 histopathology datasets without any labels. We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features. Furthermore, we find that using more images for pretraining leads to better performance in multiple downstream tasks, albeit with diminishing returns as more unlabeled images are incorporated into the pretraining. Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet-pretrained networks, boosting task performances by more than 28% in scores on average. Interestingly, we did not observe a consistent correlation between the pretraining dataset site or organ and the downstream task (e.g., pretraining with only breast images does not necessarily lead to superior downstream task performance for breast-related tasks). These findings may also be useful when applying newer contrastive techniques to histopathology data. Pretrained PyTorch models are made publicly available at https://github.com/ozanciga/self-supervised-histopathology.
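The contrastive objective behind this family of methods, the SimCLR-style NT-Xent loss, is compact enough to quote; the batch size and temperature below are placeholders.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (N, D) projections of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D) unit vectors
    sim = z @ z.t() / tau                          # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))              # exclude self-pairs
    # The positive for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
```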
Sampling strategies for learning-based 3D medical image compression
Nagoor, Omniah H.
Whittle, Joss
Deng, Jingjing
Mora, Benjamin
Jones, Mark W.
Machine Learning with Applications2022Journal Article, cited 0 times
Website
Algorithm Development
AAPM RT-MAC Grand Challenge 2019
Long Short-Term Memory (LSTM)
Recent achievements of sequence prediction models in numerous domains, including compression, provide great potential for novel learning-based codecs. In such models, the shape and size of the input sequence play a crucial role in learning the mapping function of the data distribution to the target output. This work examines numerous input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for compressing 3D medical images (16-bit depth) losslessly. The main objective is to determine the optimal practice for enabling the proposed Long Short-Term Memory (LSTM) model to achieve a high compression ratio and fast encoding–decoding performance. Our LSTM models are trained with 4-fold cross-validation on 12 high-resolution CT datasets while measuring the models' compression ratios and execution time. Several sequence configurations have been evaluated, and our results demonstrate that pyramid-shaped sampling represents the best trade-off between performance and compression ratio (up to 3x). We solve a problem of non-deterministic environments that allows our models to run in parallel without much compression performance drop. Experimental evaluation was carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (CT and MRI). Our new methodology allows straightforward parallelisation that speeds up the decoder by up to 37x compared to previous methods. Overall, the trained models demonstrate efficiency and generalisability for compressing 3D medical images losslessly while still outperforming well-known lossless methods by approximately 17% and 12%. To the best of our knowledge, this is the first study that focuses on voxel-wise predictions of volumetric medical imaging for lossless compression.
Machine learning based medical image deepfake detection: A comparative study
Solaiyappan, Siddharth
Wen, Yuxin
Machine Learning with Applications2022Journal Article, cited 0 times
LIDC-IDRI
Deep generative networks in recent years have reinforced the need for caution while consuming various modalities of digital information. One avenue of deepfake creation is the injection and removal of tumors from medical scans. Failure to detect medical deepfakes can lead to a large drain on hospital resources or even loss of life. This paper attempts to address the detection of such attacks with a structured case study. Specifically, we evaluate eight different machine learning algorithms, which include three conventional machine learning methods (Support Vector Machine, Random Forest, Decision Tree) and five deep learning models (DenseNet121, DenseNet201, ResNet50, ResNet101, VGG19), in distinguishing between tampered and untampered images. For the deep learning models, the five models are used for feature extraction, then each pre-trained model is fine-tuned. The findings of this work show near-perfect accuracy in detecting instances of tumor injections and removals.
A comparison of two methods for estimating DCE-MRI parameters via individual and cohort based AIFs in prostate cancer: A step towards practical implementation
Fedorov, Andriy
Fluckiger, Jacob
Ayers, Gregory D
Li, Xia
Gupta, Sandeep N
Tempany, Clare
Mulkern, Robert
Yankeelov, Thomas E
Fennessy, Fiona M
Magnetic resonance imaging2014Journal Article, cited 30 times
Website
Algorithm Development
PROSTATE
Dynamic Contrast-Enhanced (DCE)-MRI
Multi-parametric Magnetic Resonance Imaging, and specifically Dynamic Contrast Enhanced (DCE) MRI, play increasingly important roles in detection and staging of prostate cancer (PCa). One of the actively investigated approaches to DCE MRI analysis involves pharmacokinetic (PK) modeling to extract quantitative parameters that may be related to microvascular properties of the tissue. It is well-known that the prescribed arterial blood plasma concentration (or Arterial Input Function, AIF) input can have significant effects on the parameters estimated by PK modeling. The purpose of our study was to investigate such effects in DCE MRI data acquired in a typical clinical PCa setting. First, we investigated how the choice of a semi-automated or fully automated image-based individualized AIF (iAIF) estimation method affects the PK parameter values; and second, we examined the use of method-specific averaged AIF (cohort-based, or cAIF) as a means to attenuate the differences between the two AIF estimation methods. Two methods for automated image-based estimation of individualized (patient-specific) AIFs, one of which was previously validated for brain and the other for breast MRI, were compared. cAIFs were constructed by averaging the iAIF curves over the individual patients for each of the two methods. Pharmacokinetic analysis using the Generalized kinetic model and each of the four AIF choices (iAIF and cAIF for each of the two image-based AIF estimation approaches) was applied to derive the volume transfer rate (K(trans)) and extravascular extracellular volume fraction (ve) in the areas of prostate tumor. Differences between the parameters obtained using iAIF and cAIF for a given method (intra-method comparison) as well as inter-method differences were quantified. The study utilized DCE MRI data collected in 17 patients with histologically confirmed PCa. Comparison at the level of the tumor region of interest (ROI) showed that the two automated methods resulted in significantly different (p<0.05) mean estimates of ve, but not of K(trans). Comparing cAIF, different estimates for both ve, and K(trans) were obtained. Intra-method comparison between the iAIF- and cAIF-driven analyses showed the lack of effect on ve, while K(trans) values were significantly different for one of the methods. Our results indicate that the choice of the algorithm used for automated image-based AIF determination can lead to significant differences in the values of the estimated PK parameters. K(trans) estimates are more sensitive to the choice between cAIF/iAIF as compared to ve, leading to potentially significant differences depending on the AIF method. These observations may have practical consequences in evaluating the PK analysis results obtained in a multi-site setting.
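For reference, the Generalized kinetic (standard Tofts) model fitted here relates the tissue contrast-agent concentration C_t(t) to the AIF C_p(t) through the two reported parameters:

```latex
% Standard Tofts model: K^{trans} is the volume transfer constant,
% v_e the extravascular extracellular volume fraction.
C_t(t) = K^{trans} \int_0^{t} C_p(\tau)\,
         \exp\!\left(-\frac{K^{trans}}{v_e}\,(t-\tau)\right) d\tau
```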
Modification of population based arterial input function to incorporate individual variation
Kim, Harrison
Magn Reson Imaging2018Journal Article, cited 2 times
Website
QIN PROSTATE
Algorithm Development
PROSTATE
Arterial input function (AIF)
DCE-MRI
This technical note describes how to modify a population-based arterial input function to incorporate variation among the individuals. In DCE-MRI, an arterial input function (AIF) is often distorted by pulsated inflow effect and noise. A population-based AIF (pAIF) has high signal-to-noise ratio (SNR), but cannot incorporate the individual variation. AIF variation is mainly induced by variation in cardiac output and blood volume of the individuals, which can be detected by the full width at half maximum (FWHM) during the first passage and the amplitude of AIF, respectively. Thus pAIF scaled in time and amplitude fitting to the individual AIF may serve as a high SNR AIF incorporating the individual variation. The proposed method was validated using DCE-MRI images of 18 prostate cancer patients. Root mean square error (RMSE) of pAIF from individual AIFs was 0.88+/-0.48mM (mean+/-SD), but it was reduced to 0.25+/-0.11mM after pAIF modification using the proposed method (p<0.0001).
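A minimal numpy sketch of the described modification, scaling the population AIF in time by the FWHM ratio and in amplitude by the peak ratio of the individual AIF, might look as follows; the discrete FWHM estimate and interpolation details are assumptions, not the paper's exact implementation.

```python
import numpy as np

def fwhm(t, c):
    """Full width at half maximum of the first-pass peak of a concentration curve c(t)."""
    above = np.where(c >= c.max() / 2)[0]
    return t[above[-1]] - t[above[0]]

def modify_paif(t, paif, iaif):
    """Scale a population AIF in time and amplitude to match an individual AIF."""
    s_time = fwhm(t, iaif) / fwhm(t, paif)   # width tracks cardiac output
    s_amp = iaif.max() / paif.max()          # amplitude tracks blood volume
    return s_amp * np.interp(t, t * s_time, paif)
```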
Evaluation of TP53/PIK3CA mutations using texture and morphology analysis on breast MRI
Moon, W. K.
Chen, H. H.
Shin, S. U.
Han, W.
Chang, R. F.
Magn Reson Imaging2019Journal Article, cited 0 times
TCGA-BRCA
Radiogenomics
Gray-level co-occurrence matrix (GLCM)
Breast
PURPOSE: Somatic mutations in TP53 and PIK3CA genes, the two most frequent genetic alterations in breast cancer, are associated with prognosis and therapeutic response. This study predicted the presence of TP53 and PIK3CA mutations in breast cancer by using texture and morphology analyses on breast MRI. MATERIALS AND METHODS: A total of 107 breast cancers (dataset A) from The Cancer Imaging Archive (TCIA), consisting of 40 cancers with TP53 mutation and 67 without, and 35 with PIK3CA mutation and 72 without, and 122 breast cancers (dataset B) from Seoul National University Hospital, containing 54 cancers with TP53 mutation and 68 without, were used in this study. First, the tumor area was segmented by a region growing method. Subsequently, gray-level co-occurrence matrix (GLCM) texture features were extracted after ranklet transform, and a series of features including compactness, margin, and an ellipsoid fitting model were used to describe the morphological characteristics of tumors. Lastly, logistic regression was used to identify the presence of TP53 and PIK3CA mutations. Classification performance was evaluated by accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Taking into account the trade-offs between sensitivity and specificity, overall performance was evaluated using receiver operating characteristic (ROC) curve analysis. RESULTS: The GLCM texture features based on ranklet transform were more capable of recognizing TP53 and PIK3CA mutations than morphological features, and for TP53 mutation the difference was statistically significant. The area under the ROC curve (AUC) for TP53 mutation reached 0.78 for dataset A and 0.81 for dataset B. For PIK3CA mutation, the AUC of the ranklet texture features was 0.70. CONCLUSION: Texture analysis of segmented tumors on breast MRI based on ranklet transform shows potential for recognizing the presence of TP53 and PIK3CA mutations.
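GLCM texture features of the kind used here are available off the shelf in scikit-image; the sketch below omits the ranklet transform and the region-growing segmentation, and assumes an 8-bit 2D tumor ROI.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in for a segmented tumor
glcm = graycomatrix(
    roi,
    distances=[1],
    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
    levels=256,
    symmetric=True,
    normed=True,
)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```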
Formal methods for prostate cancer gleason score and treatment prediction using radiomic biomarkers
Brunese, Luca
Mercaldo, Francesco
Reginelli, Alfonso
Santone, Antonella
Magnetic resonance imaging2020Journal Article, cited 11 times
Website
PROSTATE-DIAGNOSIS
Fused Radiology-Pathology Prostate
Gleason scoring
radiomics
Biomarker
Deep unregistered multi-contrast MRI reconstruction
Liu, X.
Wang, J.
Jin, J.
Li, M.
Tang, F.
Crozier, S.
Liu, F.
Magn Reson Imaging2021Journal Article, cited 0 times
BraTS-TCGA-GBM
Algorithm Development
BRAIN
*Magnetic Resonance Imaging
*Neural Networks
Computer
Deep learning
Image reconstruction
Image registration
Magnetic resonance imaging (MRI)
Multi-contrast
Multiple magnetic resonance images of different contrasts are normally acquired for clinical diagnosis. Recently, research has shown that previously acquired multi-contrast (MC) images of the same patient can be used as an anatomical prior to accelerate magnetic resonance imaging (MRI). However, current MC-MRI networks are based on the assumption that the images are perfectly registered, which is rarely the case in real-world applications. In this paper, we propose an end-to-end deep neural network to reconstruct highly accelerated images by exploiting the shareable information from potentially misaligned reference images of an arbitrary contrast. Specifically, a spatial transformation (ST) module is designed and integrated into the reconstruction network to align the pre-acquired reference images with the images to be reconstructed. The misalignment is further alleviated by maximizing the normalized cross-correlation (NCC) between the MC images. The visualization of feature maps demonstrates that the proposed method effectively reduces the misalignment between the images for shareable information extraction when applied to publicly available brain datasets. Additionally, the experimental results on these datasets show that the proposed network allows the robust exploitation of shareable information across the misaligned MC images, leading to improved reconstruction results.
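Maximizing NCC between the multi-contrast images, as done during training here, amounts to minimizing a loss of the following form; this is the standard global NCC, whereas the paper may use a windowed variant.

```python
import torch

def ncc_loss(x, y, eps=1e-8):
    """1 - normalized cross-correlation, averaged over batch and channels. x, y: (B, C, H, W)."""
    x = x - x.mean(dim=(2, 3), keepdim=True)
    y = y - y.mean(dim=(2, 3), keepdim=True)
    num = (x * y).sum(dim=(2, 3))
    den = torch.sqrt((x ** 2).sum(dim=(2, 3)) * (y ** 2).sum(dim=(2, 3)) + eps)
    return 1 - (num / den).mean()
```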
A segmentation-based method improving the performance of N4 bias field correction on T2weighted MR imaging data of the prostate
Dovrou, A.
Nikiforaki, K.
Zaridis, D.
Manikis, G. C.
Mylona, E.
Tachos, N.
Tsiknakis, M.
Fotiadis, D. I.
Marias, K.
Magn Reson Imaging2023Journal Article, cited 2 times
Website
PROSTATEx
PROSTATE-MRI
PROSTATE-DIAGNOSIS
PI-CAI
Male
Humans
*Prostate/pathology
*Image Processing, Computer-Assisted/methods
Magnetic Resonance Imaging/methods
Bias
Phantoms, Imaging
Full width at half maximum
N4 bias field correction
Periprostatic fat segmentation
Prostate imaging
Magnetic Resonance (MR) images suffer from spatial inhomogeneity, known as bias field corruption. The N4ITK filter is a state-of-the-art method used for correcting the bias field to optimize MR-based quantification. In this study, a novel approach is presented to quantitatively evaluate the performance of N4 bias field correction for pelvic prostate imaging. An exploratory analysis, regarding the different values of convergence threshold, shrink factor, fitting level, number of iterations and use of mask, is performed to quantify the performance of the N4 filter in pelvic MR images. The performance of a total of 240 different N4 configurations is examined using the Full Width at Half Maximum (FWHM) of the segmented periprostatic fat distribution as the evaluation metric. Phantom T2-weighted images were used to assess the performance of N4 for a uniform tissue-mimicking test material, excluding factors such as patient-related susceptibility and anatomical heterogeneity. Moreover, 89 and 204 T2-weighted patient images from two public datasets, acquired by scanners with a combined surface and endorectal coil at 1.5 T and a surface coil at 3 T, respectively, were utilized and corrected with a variable set of N4 parameters. Furthermore, two external public datasets were used to validate the performance of the N4 filter on T2-weighted patient images acquired under various scanning conditions with different magnetic field strengths and coils. The results show that the set of N4 parameters converging to optimal representations of fat in the image was: convergence threshold 0.001, shrink factor 2, fitting level 6, number of iterations 100 and use of the default mask for prostate images acquired by a combined surface and endorectal coil at both 1.5 T and 3 T. The corresponding optimal N4 configuration for MR prostate images acquired by a surface coil at 1.5 T or 3 T was: convergence threshold 0.001, shrink factor 2, fitting level 5, number of iterations 25 and use of the default mask. Hence, periprostatic fat segmentation can be used to define the optimal settings for achieving T2-weighted prostate images free from bias field corruption, providing robust input for further analysis.
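The reported optimal configuration for combined surface and endorectal coil images maps directly onto SimpleITK's N4 filter; the file path and the Otsu-based default mask below are placeholders, and GetLogBiasFieldAsImage assumes SimpleITK 2.1 or later.

```python
import SimpleITK as sitk

img = sitk.ReadImage("t2w_prostate.nii.gz", sitk.sitkFloat32)  # placeholder path
mask = sitk.OtsuThreshold(img, 0, 1, 200)                      # default-style mask

# Shrink factor 2: estimate the bias field on a downsampled copy for speed.
shrunk = sitk.Shrink(img, [2] * img.GetDimension())
shrunk_mask = sitk.Shrink(mask, [2] * img.GetDimension())

n4 = sitk.N4BiasFieldCorrectionImageFilter()
n4.SetConvergenceThreshold(0.001)
n4.SetMaximumNumberOfIterations([100] * 6)  # 100 iterations at each of 6 fitting levels
n4.Execute(shrunk, shrunk_mask)

# Recover the bias field at full resolution and correct the original image.
log_bias = n4.GetLogBiasFieldAsImage(img)
corrected = img / sitk.Exp(log_bias)
```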
Predicting isocitrate dehydrogenase status among adult patients with diffuse glioma using patient characteristics, radiomic features, and magnetic resonance imaging: Multi-modal analysis by variable vision transformer
Usuzaki, T.
Inamori, R.
Shizukuishi, T.
Morishita, Y.
Takagi, H.
Ishikuro, M.
Obara, T.
Takase, K.
Magn Reson Imaging2024Journal Article, cited 0 times
Website
UCSF-PDGM
UPENN-GBM
Humans
*Isocitrate Dehydrogenase/genetics
*Glioma/diagnostic imaging
Female
Male
*Magnetic Resonance Imaging/methods
Middle Aged
Adult
*Brain Neoplasms/diagnostic imaging
Aged
Contrast Media
Mutation
Image Interpretation, Computer-Assisted/methods
Radiomics
Artificial intelligence
Brain tumor
Deep learning
Genetic mutation
Neural networks
OBJECTIVES: To evaluate the performance of the multimodal model, termed variable Vision Transformer (vViT), in the task of predicting isocitrate dehydrogenase (IDH) status among adult patients with diffuse glioma. MATERIALS AND METHODS: vViT was designed to predict IDH status using patient characteristics (sex and age), radiomic features, and contrast-enhanced T1-weighted images (CE-T1WI). Radiomic features were extracted from each enhancing tumor (ET), necrotic tumor core (NCR), and peritumoral edematous/infiltrated tissue (ED). CE-T1WI were split into four images and input to vViT. In the training, internal test, and external test, 271 patients with 1070 images (535 IDH wildtype, 535 IDH mutant), 35 patients with 194 images (97 IDH wildtype, 97 IDH mutant), and 291 patients with 872 images (436 IDH wildtype, 436 IDH mutant) were analyzed, respectively. Metrics including accuracy and AUC-ROC were calculated for the internal and external test datasets. Permutation importance analysis combined with the Mann-Whitney U test was performed to compare inputs. RESULTS: For the internal test dataset, vViT correctly predicted IDH status for all patients. For the external test dataset, an accuracy of 0.935 (95% confidence interval; 0.913-0.945) and AUC-ROC of 0.887 (0.798-0.956) were obtained. For both internal and external test datasets, CE-T1WI ET radiomic features and patient characteristics had higher importance than other inputs (p < 0.05). CONCLUSIONS: The vViT has the potential to be a competent model in predicting IDH status among adult patients with diffuse glioma. Our results indicate that age, sex, and CE-T1WI ET radiomic features have key information in estimating IDH status.
Imaging Genomics in Glioblastoma Multiforme: A Predictive Tool for Patients Prognosis, Survival, and Outcome
Anil, Rahul
Colen, Rivka R
Magnetic Resonance Imaging Clinics of North America2016Journal Article, cited 3 times
Website
Radiogenomics
Glioblastoma Multiforme (GBM)
The integration of imaging characteristics and genomic data has started a new trend in the management of glioblastoma (GBM). Many ongoing studies are investigating imaging phenotypical signatures that could explain more about the behavior of GBM and its outcome. The discovery of biomarkers has played an adjuvant role in treating and predicting the outcome of patients with GBM. Discovering these imaging phenotypical signatures and dysregulated pathways/genes is required to engineer treatment based on specific GBM manifestations. Characterizing these parameters will establish well-defined criteria so researchers can build on the treatment of GBM through personalized medicine.
USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets
Rundo, Leonardo
Han, Changhee
Nagano, Yudai
Zhang, Jin
Hataya, Ryuichiro
Militello, Carmelo
Tangherloni, Andrea
Nobile, Marco S.
Ferretti, Claudio
Besozzi, Daniela
Gilardi, Maria Carla
Vitabile, Salvatore
Mauri, Giancarlo
Nakayama, Hideki
Cazzaniga, Paolo
Neurocomputing2019Journal Article, cited 0 times
ISBI-MR-Prostate-2013
Prostate cancer is the most common malignant tumor in men, but prostate Magnetic Resonance Imaging (MRI) analysis remains challenging. Besides whole prostate gland segmentation, the capability to differentiate between the blurry boundary of the Central Gland (CG) and Peripheral Zone (PZ) can lead to differential diagnosis, since the frequency and severity of tumors differ in these regions. To tackle the prostate zonal segmentation task, we propose a novel Convolutional Neural Network (CNN), called USE-Net, which incorporates Squeeze-and-Excitation (SE) blocks into U-Net, i.e., one of the most effective CNNs in biomedical image segmentation. Specifically, the SE blocks are added after every Encoder (Enc USE-Net) or Encoder-Decoder block (Enc-Dec USE-Net). This study evaluates the generalization ability of CNN-based architectures on three T2-weighted MRI datasets, each consisting of a different number of patients and heterogeneous image characteristics, collected by different institutions. The following mixed scheme is used for training/testing: (i) training on either each individual dataset or multiple prostate MRI datasets and (ii) testing on all three datasets with all possible training/testing combinations. USE-Net is compared against three state-of-the-art CNN-based architectures (i.e., U-Net, pix2pix, and Mixed-Scale Dense Network), along with a semi-automatic continuous max-flow model. The results show that training on the union of the datasets generally outperforms training on each dataset separately, allowing for both intra-/cross-dataset generalization. Enc USE-Net shows good overall generalization under any training condition, while Enc-Dec USE-Net remarkably outperforms the other methods when trained on all datasets. These findings reveal that the SE blocks' adaptive feature recalibration provides excellent cross-dataset generalization when testing is performed on samples of the datasets used during training. Therefore, we should consider multi-dataset training and SE blocks together as mutually indispensable methods to draw out each other's full potential. In conclusion, adaptive mechanisms (e.g., feature recalibration) may be a valuable solution in medical imaging applications involving multi-institutional settings.
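The Squeeze-and-Excitation block at the heart of USE-Net is small enough to quote in full; this is the generic SE block of Hu et al., with the reduction ratio left as a free parameter.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel-wise feature recalibration (squeeze-and-excitation)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pooling
        return x * w.view(b, c, 1, 1)     # excitation: rescale each channel

out = SEBlock(64)(torch.randn(2, 64, 32, 32))
```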
Unsupervised domain adaptation with adversarial learning for mass detection in mammogram
Shen, Rongbo
Yao, Jianhua
Yan, Kezhou
Tian, Kuan
Jiang, Cheng
Zhou, Ke
Neurocomputing2020Journal Article, cited 0 times
Website
CBIS-DDSM
Annotation
Radiomics
BREAST
Many medical image datasets have been collected without proper annotations for deep learning training. In this paper, we propose a novel unsupervised domain adaptation framework with adversarial learning to minimize annotation effort. Our framework employs a task-specific network, i.e., a fully convolutional network (FCN), for spatial density prediction. Moreover, we employ a domain discriminator, in which adversarial learning is adopted to align the less-annotated target domain features with the well-annotated source domain features in the feature space. We further propose a novel training strategy for the adversarial learning by coupling data from the source and target domains and alternating the subnet updates. We employ the public CBIS-DDSM dataset as the source domain, and perform two sets of experiments on two target domains (i.e., the public INbreast dataset and a self-collected dataset), respectively. Experimental results suggest consistent and comparable performance improvement over the state-of-the-art methods. Our proposed training strategy is also shown to converge much faster.
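The alternating subnet updates described above can be sketched as follows; the feature-extraction accessor (fcn.features), the density-regression loss, and the weighting lam are schematic assumptions, not the paper's exact networks or losses.

```python
import torch
import torch.nn.functional as F

def adversarial_step(fcn, disc, opt_fcn, opt_disc, x_src, y_src, x_tgt, lam=0.1):
    """One alternating update: discriminator first, then the task network."""
    # 1) Discriminator learns to tell source features (label 1) from target (label 0).
    f_src = fcn.features(x_src).detach()   # fcn.features is a hypothetical accessor
    f_tgt = fcn.features(x_tgt).detach()
    p_src, p_tgt = disc(f_src), disc(f_tgt)
    d_loss = F.binary_cross_entropy_with_logits(p_src, torch.ones_like(p_src)) \
           + F.binary_cross_entropy_with_logits(p_tgt, torch.zeros_like(p_tgt))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Task network: supervised density loss on source + fool the discriminator on target.
    pred_src = fcn(x_src)
    p_tgt = disc(fcn.features(x_tgt))
    g_loss = F.mse_loss(pred_src, y_src) \
           + lam * F.binary_cross_entropy_with_logits(p_tgt, torch.ones_like(p_tgt))
    opt_fcn.zero_grad(); g_loss.backward(); opt_fcn.step()
```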
Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey
Altini, Nicola
Prencipe, Berardino
Cascarano, Giacomo Donato
Brunetti, Antonio
Brunetti, Gioacchino
Triggiani, Vito
Carnimeo, Leonarda
Marino, Francescomaria
Guerriero, Andrea
Villani, Laura
Scardapane, Arnaldo
Bevilacqua, Vitoantonio
Neurocomputing2022Journal Article, cited 0 times
CT-ORG
Pancreas-CT
Deep Learning approaches for automatic segmentation of organs from CT scans and MRI are providing promising results, leading towards a revolution in the radiologists' workflow. Precise delineation of abdominal organ boundaries is fundamental for a variety of purposes: surgical planning, volumetric estimation (e.g. Total Kidney Volume, TKV, assessment in Autosomal Dominant Polycystic Kidney Disease, ADPKD), and diagnosis and monitoring of pathologies. Fundamental imaging techniques exploited for these tasks are Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), which enable clinicians to perform 3D analyses of all Regions of Interest (ROIs). Among existing methods for segmentation and classification of these zones, Convolutional Neural Networks (CNNs) are emerging as the reference approach. In the last five years an enormous research effort has been devoted to applying CNNs in Medical Imaging, resulting in more than 8000 documents on Scopus and more than 80000 results on Google Scholar. The high accuracy provided by those systems is the clear motivation behind this effort, though there are still problems to be addressed. In this survey, major article databases, such as Scopus, were systematically investigated for different kinds of Deep Learning approaches to the segmentation of abdominal organs, with a particular focus on liver, kidney and spleen. Approaches are classified both by organ (for instance, liver segmentation has specific properties compared to other organs) and by type of computational approach, as well as by the architecture of the employed network. For this purpose, a case study of segmentation for each of these organs is presented.
Fully automatic MRI brain tumor segmentation using efficient spatial attention convolutional networks with composite loss
Mazumdar, Indrajit
Mukherjee, Jayanta
Neurocomputing2022Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BraTS 2021
Automatic segmentation
Deep Learning
BRAIN
Segmentation
Loss Function
Automatically segmenting tumors from brain magnetic resonance imaging scans is crucial for diagnosis and planning treatment. However, brain tumors are highly diverse in location, contrast, size, and shape, making automatic segmentation extremely challenging. Recent techniques for segmenting brain tumors are mostly built using convolutional neural networks (CNNs). However, most of these existing techniques are inefficient, having slow inference speed and high parameter count. To reduce the diagnostic time, we present an accurate and efficient CNN model having fast inference speed and low parameter count for fully automatic brain tumor segmentation. Our novel CNN, the efficient spatial attention network (ESA-Net), is an improved variant of the popular U-Net. ESA-Net was built using our proposed efficient spatial attention (ESA) blocks containing depthwise separable convolution layers and a lightweight spatial attention module. The ESA blocks significantly improve both efficiency and segmentation accuracy. We also proposed a new composite loss function by combining Dice, focal, and Hausdorff distance (HD) losses to significantly improve the segmentation accuracy by tackling extreme class imbalance and directly optimizing the Dice score and HD. The effectiveness of the proposed network and loss function was evaluated by performing extensive experiments on the BraTS 2021 benchmark dataset. ESA-Net significantly outperformed U-Net in segmentation accuracy while having four times faster inference speed and eight times fewer parameters. In addition, the composite loss outperformed other loss functions. The proposed model achieved significantly better segmentation accuracy than other efficient models while having faster inference speed and fewer parameters. Moreover, it obtained competitive segmentation accuracy against state-of-the-art models. The proposed system segments a patient’s brain in 2.7 s using a GPU and has 157 times faster inference speed and 177 times fewer parameters than other state-of-the-art systems.
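Two of the three composite-loss terms (Dice and focal) are easy to state for binary segmentation; the Hausdorff-distance term, typically approximated with distance transforms, is omitted here, and the weights are placeholders.

```python
import torch

def dice_loss(probs, target, eps=1e-6):
    inter = (probs * target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1 - ((2 * inter + eps) / (union + eps)).mean()

def focal_loss(probs, target, gamma=2.0, eps=1e-6):
    pt = probs * target + (1 - probs) * (1 - target)   # probability of the true class
    return (-(1 - pt) ** gamma * torch.log(pt + eps)).mean()

probs = torch.rand(2, 1, 64, 64)                        # sigmoid outputs
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = dice_loss(probs, target) + 0.5 * focal_loss(probs, target)  # + w_hd * hd_loss
```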
Mutually aided uncertainty incorporated dual consistency regularization with pseudo label for semi-supervised medical image segmentation
Lu, Shanfu
Zhang, Zijian
Yan, Ziye
Wang, Yiran
Cheng, Tingting
Zhou, Rongrong
Yang, Guang
Neurocomputing2023Journal Article, cited 0 times
Website
Pancreas-CT
BraTS 2020
Semi-supervised learning
Segmentation
Pseudo-labeling
Consistency Learning
Algorithm Development
Semi-supervised learning has contributed plenty to promoting computer vision tasks. Especially for medical images, semi-supervised segmentation can significantly reduce the labor and time cost of labeling. Among existing semi-supervised methods, pseudo-labelling and consistency regularization prevail; however, current methods still fail to achieve satisfactory results due to the poor quality of the generated pseudo-labels and the models' limited uncertainty awareness. To address this problem, we propose a novel method that combines pseudo-labelling with dual consistency regularization built on a high capability of uncertainty awareness. The method leverages cycle-loss regularization to produce a more accurate uncertainty estimate. After uncertainty estimation, certain regions with their pseudo-labels are further trained in a supervised manner, while uncertain regions are used to promote dual consistency between the student and teacher networks. The developed approach was tested on three public datasets and showed that: 1) the proposed method achieves excellent performance improvement by leveraging unlabeled data; and 2) compared with several state-of-the-art (SOTA) semi-supervised segmentation methods, ours achieves better or comparable performance.
Sinogram upsampling using Primal-Dual UNet for undersampled CT and radial MRI reconstruction
Ernst, P.
Chatterjee, S.
Rose, G.
Speck, O.
Nurnberger, A.
Neural Netw2023Journal Article, cited 6 times
Website
CT Lymph Nodes
Algorithm Development
U-Net
Artifacts
Brain/diagnostic imaging
Computed Tomography (CT)
Deep learning
Magnetic Resonance Imaging (MRI)
Radial MRI reconstruction
Sparse CT reconstruction
Undersampled MR reconstruction
Computed tomography (CT) and magnetic resonance imaging (MRI) are two widely used clinical imaging modalities for non-invasive diagnosis. However, both of these modalities come with certain problems. CT uses harmful ionising radiation, and MRI suffers from slow acquisition speed. Both problems can be tackled by undersampling, such as sparse sampling. However, such undersampled data leads to lower resolution and introduces artefacts. Several techniques, including deep learning based methods, have been proposed to reconstruct such data. However, the undersampled reconstruction problems for these two modalities were always considered as two different problems and tackled separately by different research works. This paper proposes a unified solution for both sparse CT and undersampled radial MRI reconstruction, achieved by applying Fourier transform-based pre-processing on the radial MRI and then finally reconstructing both modalities using sinogram upsampling combined with filtered back-projection. The Primal-Dual network is a deep learning based method for reconstructing sparsely-sampled CT data. This paper introduces Primal-Dual UNet, which improves the Primal-Dual network in terms of accuracy and reconstruction speed. The proposed method resulted in an average SSIM of 0.932+/-0.021 while performing sparse CT reconstruction for fan-beam geometry with a sparsity level of 16, achieving a statistically significant improvement over the previous model, which resulted in 0.919+/-0.016. Furthermore, the proposed model resulted in 0.903+/-0.019 and 0.957+/-0.023 average SSIM while reconstructing undersampled brain and abdominal MRI data with an acceleration factor of 16, respectively - statistically significant improvements over the original model, which resulted in 0.867+/-0.025 and 0.949+/-0.025. Finally, this paper shows that the proposed network not only improves the overall image quality, but also improves the image quality for the regions of interest (liver, kidneys, and spleen), and generalises better than the baselines in the presence of a needle.
Multiple-instance ensemble for construction of deep heterogeneous committees for high-dimensional low-sample-size data
Zhou, Q.
Wang, S.
Zhu, H.
Zhang, X.
Zhang, Y.
Neural Netw2023Journal Article, cited 0 times
NSCLC-Radiomics
Attention
Committee learning
ensemble learning
Deep learning
high-dimensional low-sample-size domain
Deep ensemble learning, where we combine knowledge learned from multiple individual neural networks, has been widely adopted to improve the performance of neural networks in deep learning. This field can be encompassed by committee learning, which includes the construction of neural network cascades. This study focuses on the high-dimensional low-sample-size (HDLS) domain and introduces multiple instance ensemble (MIE) as a novel stacking method for ensembles and cascades. In this study, our proposed approach reformulates the ensemble learning process as a multiple-instance learning problem. We utilise the multiple-instance learning solution of pooling operations to associate feature representations of base neural networks into joint representations as a method of stacking. This study explores various attention mechanisms and proposes two novel committee learning strategies with MIE. In addition, we utilise the capability of MIE to generate pseudo-base neural networks to provide a proof-of-concept for a "growing" neural network cascade that is unbounded by the number of base neural networks. We have shown that our approach provides (1) a class of alternative ensemble methods that performs comparably with various stacking ensemble methods and (2) a novel method for the generation of high-performing "growing" cascades. The approach has also been verified across multiple HDLS datasets, achieving high performance for binary classification tasks in the low-sample size regime.
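A minimal sketch of the stacking-by-pooling idea described above: feature vectors from K base networks form a bag of instances that an attention-pooling layer fuses into a joint representation. This is an illustration in PyTorch, not the authors' implementation; the dimensions and the attention form are assumptions.

```python
# Attention pooling over a "bag" of base-network feature vectors,
# in the multiple-instance spirit of MIE (illustrative sketch).
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))

    def forward(self, bag):                         # bag: (batch, K, dim)
        w = torch.softmax(self.score(bag), dim=1)   # per-instance weights
        return (w * bag).sum(dim=1)                 # joint representation

# Fuse 5 base networks' 128-d features, then apply a classifier head:
pool = AttentionPooling(dim=128)
head = nn.Linear(128, 2)                    # binary classification
bag = torch.randn(8, 5, 128)                # batch of 8 bags
logits = head(pool(bag))
print(logits.shape)                         # torch.Size([8, 2])
```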
Addition of MR imaging features and genetic biomarkers strengthens glioblastoma survival prediction in TCGA patients
Nicolasjilwan, Manal
Hu, Ying
Yan, Chunhua
Meerzaman, Daoud
Holder, Chad A
Gutman, David
Jain, Rajan
Colen, Rivka
Rubin, Daniel L
Zinn, Pascal O
Hwang, Scott N
Raghavan, Prashant
Hammoud, Dima A
Scarpace, Lisa M
Mikkelsen, Tom
Chen, James
Gevaert, Olivier
Buetow, Kenneth
Freymann, John
Kirby, Justin
Flanders, Adam E
Wintermark, Max
Journal of Neuroradiology2014Journal Article, cited 49 times
Website
TCGA-GBM
Glioblastoma Multiforme (GBM)
Radiogenomics
PURPOSE: The purpose of our study was to assess whether a model combining clinical factors, MR imaging features, and genomics would better predict overall survival of patients with glioblastoma (GBM) than any individual data type. METHODS: The study was conducted leveraging The Cancer Genome Atlas (TCGA) effort supported by the National Institutes of Health. Six neuroradiologists reviewed MRI images from The Cancer Imaging Archive (http://cancerimagingarchive.net) of 102 GBM patients using the VASARI scoring system. The patients' clinical and genetic data were obtained from the TCGA website (http://www.cancergenome.nih.gov/). Patient outcome was measured in terms of overall survival time. The association between different categories of biomarkers and survival was evaluated using Cox analysis. RESULTS: The features that were significantly associated with survival were: (1) clinical factors: chemotherapy; (2) imaging: proportion of tumor contrast enhancement on MRI; and (3) genomics: HRAS copy number variation. The combination of these three biomarkers resulted in an incremental increase in the strength of prediction of survival, with the model that included clinical, imaging, and genetic variables having the highest predictive accuracy (area under the curve 0.679+/-0.068, Akaike's information criterion 566.7, P<0.001). CONCLUSION: A combination of clinical factors, imaging features, and HRAS copy number variation best predicts survival of patients with GBM.
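For readers who want to reproduce this kind of multivariable analysis, a hedged sketch using the lifelines package follows; the column names and data are synthetic placeholders standing in for the clinical (chemotherapy), imaging (proportion of enhancement), and genomic (HRAS CNV) variables, not the TCGA data itself.

```python
# Sketch of a multivariable Cox proportional hazards model combining
# clinical, imaging, and genomic covariates (synthetic placeholder data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 102
df = pd.DataFrame({
    "survival_months": rng.exponential(12, n),
    "death_observed": rng.integers(0, 2, n),
    "chemotherapy": rng.integers(0, 2, n),          # clinical factor
    "prop_enhancing": rng.uniform(0, 1, n),         # imaging (VASARI-like)
    "hras_cnv": rng.normal(0, 1, n),                # genomic variable
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="death_observed")
cph.print_summary()   # hazard ratios and p-values per covariate
```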
A novel depth search based light weight CAR network for the segmentation of brain tumour from MR images
Tankala, Sreekar
Pavani, Geetha
Biswal, Birendra
Siddartha, G.
Sahu, Gupteswar
Subrahmanyam, N. Bala
Aakash, S.
2022Journal Article, cited 0 times
TCGA-LGG
Brain tumour is one of the most dreadful diseases; it occurs due to the growth of abnormal cells or the accumulation of dead cells in the brain. If these abnormalities are not detected at an early stage, they lead to severe conditions and may cause death. With the advancement of medical imaging, Magnetic Resonance Imaging (MRI) allows patients to be analyzed manually; however, such manual screening is prone to errors. To overcome this, a novel depth search-based network termed the light weight channel attention and residual network (LWCAR-Net) is proposed, integrating a novel depth search block (DSB) and a CAR module. The depth search block extracts pertinent features by performing a series of convolution operations, enabling the network to restore low-level information at every stage. On the other hand, the CAR module in the decoding path refines the feature maps to increase the representation and generalization abilities of the network, allowing the network to locate brain tumor pixels in MRI images more precisely. The performance of the depth search-based LWCAR-Net is estimated by testing on globally available datasets such as BraTS 2020 and the Kaggle LGG dataset. The method achieved a sensitivity of 95%, a specificity of 99%, an accuracy of 99.97%, and a Dice coefficient of 95%. Furthermore, the proposed model outperformed existing state-of-the-art models such as U-Net++ and SegNet by achieving an AUC of 98% in segmenting brain tumour cells.
Brain extraction on MRI scans in presence of diffuse glioma: Multi-institutional performance evaluation of deep learning methods and robust modality-agnostic training
Thakur, S.
Doshi, J.
Pati, S.
Rathore, S.
Sako, C.
Bilello, M.
Ha, S. M.
Shukla, G.
Flanders, A.
Kotrotsou, A.
Milchenko, M.
Liem, S.
Alexander, G. S.
Lombardo, J.
Palmer, J. D.
LaMontagne, P.
Nazeri, A.
Talbar, S.
Kulkarni, U.
Marcus, D.
Colen, R.
Davatzikos, C.
Erus, G.
Bakas, S.
Neuroimage2020Journal Article, cited 0 times
Radiomics
TCGA-GBM
TCGA-LGG
Segmentation
Deep Learning
BRAIN
Brain extraction, or skull-stripping, is an essential pre-processing step in neuro-imaging that has a direct impact on the quality of all subsequent processing and analysis steps. It is also a key requirement in multi-institutional collaborations to comply with privacy-preserving regulations. Existing automated methods, including Deep Learning (DL) based methods that have obtained state-of-the-art results in recent years, have primarily targeted brain extraction without considering pathologically-affected brains. Accordingly, they perform sub-optimally when applied on magnetic resonance imaging (MRI) brain scans with apparent pathologies such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. In this study, we present a comprehensive performance evaluation of recent deep learning architectures for brain extraction, training models on mpMRI scans of pathologically-affected brains, with a particular focus on seeking a practically-applicable, low computational footprint approach, generalizable across multiple institutions, further facilitating collaborations. We identified a large retrospective multi-institutional dataset of n=3340 mpMRI brain tumor scans, with manually-inspected and approved gold-standard segmentations, acquired during standard clinical practice under varying acquisition protocols, both from private institutional data and public (TCIA) collections. To facilitate optimal utilization of rich mpMRI data, we further introduce and evaluate a novel "modality-agnostic training" technique that can be applied using any available modality, without the need for model retraining. Our results indicate that the modality-agnostic approach obtains accurate results, providing a generic and practical tool for brain extraction on scans with brain tumors.
SynthStrip: skull-stripping for any brain image
Hoopes, Andrew
Mora, Jocelyn S
Dalca, Adrian V
Fischl, Bruce
Hoffmann, Malte
Neuroimage2022Journal Article, cited 0 times
QIN GBM Treatment Response
The removal of non-brain signal from magnetic resonance imaging (MRI) data, known as skull-stripping, is an integral component of many neuroimage analysis streams. Despite their abundance, popular classical skull-stripping methods are usually tailored to images with specific acquisition properties, namely near-isotropic resolution and T1-weighted (T1w) MRI contrast, which are prevalent in research settings. As a result, existing tools tend to adapt poorly to other image types, such as stacks of thick slices acquired with fast spin-echo (FSE) MRI that are common in the clinic. While learning-based approaches for brain extraction have gained traction in recent years, these methods face a similar burden, as they are only effective for image types seen during the training procedure. To achieve robust skull-stripping across a landscape of imaging protocols, we introduce SynthStrip, a rapid, learning-based brain-extraction tool. By leveraging anatomical segmentations to generate an entirely synthetic training dataset with anatomies, intensity distributions, and artifacts that far exceed the realistic range of medical images, SynthStrip learns to successfully generalize to a variety of real acquired brain images, removing the need for training data with target contrasts. We demonstrate the efficacy of SynthStrip for a diverse set of image acquisitions and resolutions across subject populations, ranging from newborn to adult. We show substantial improvements in accuracy over popular skull-stripping baselines - all with a single trained model. Our method and labeled evaluation data are available at https://w3id.org/synthstrip.
MGA-Net: A novel mask-guided attention neural network for precision neonatal brain imaging
Jafrasteh, B.
Lubian-Lopez, S. P.
Trimarco, E.
Ruiz, M. R.
Barrios, C. R.
Almagro, Y. M.
Benavente-Fernandez, I.
Neuroimage2024Journal Article, cited 0 times
Website
QIN GBM Treatment Response
Brain volume estimation
Deep learning
Mask guided attention
Multimodal image processing
U-net architecture
In this study, we introduce MGA-Net, a novel mask-guided attention neural network, which extends the U-net model for precision neonatal brain imaging. MGA-Net is designed to extract the brain from other structures and reconstruct high-quality brain images. The network employs a common encoder and two decoders: one for brain mask extraction and the other for brain region reconstruction. A key feature of MGA-Net is its high-level mask-guided attention module, which leverages features from the brain mask decoder to enhance image reconstruction. To enable the same encoder and decoder to process both MRI and ultrasound (US) images, MGA-Net integrates sinusoidal positional encoding. This encoding assigns distinct positional values to MRI and US images, allowing the model to effectively learn from both modalities. Consequently, features learned from a single modality can aid in learning a modality with less available data, such as US. We extensively validated the proposed MGA-Net on diverse and independent datasets from varied clinical settings and neonatal age groups. The metrics used for assessment included the DICE similarity coefficient, recall, and accuracy for image segmentation; structural similarity for image reconstruction; and root mean squared error for total brain volume estimation from 3D ultrasound images. Our results demonstrate that MGA-Net significantly outperforms traditional methods, offering superior performance in brain extraction and segmentation while achieving high precision in image reconstruction and volumetric analysis. Thus, MGA-Net represents a robust and effective preprocessing tool for MRI and 3D ultrasound images, marking a significant advance in neuroimaging that enhances both research and clinical diagnostics in the neonatal period and beyond.
Genomics of Brain Tumor Imaging
Pope, Whitney B
Neuroimaging Clinics of North America2015Journal Article, cited 26 times
Website
Radiogenomics
Imaging genomics of glioblastoma: state of the art bridge between genomics and neuroradiology
ElBanan, Mohamed G
Amer, Ahmed M
Zinn, Pascal O
Colen, Rivka R
Neuroimaging Clinics of North America2015Journal Article, cited 29 times
Website
Radiogenomics
IDH mutation
BRAIN
Glioblastoma Multiforme (GBM)
Computer Aided Diagnosis (CADx)
Glioblastoma (GBM) is the most common and most aggressive primary malignant tumor of the central nervous system. Recently, researchers concluded that the "one-size-fits-all" approach for treatment of GBM is no longer valid and research should be directed toward more personalized and patient-tailored treatment protocols. Identification of the molecular and genomic pathways underlying GBM is essential for achieving this personalized and targeted therapeutic approach. Imaging genomics represents a new era as a noninvasive surrogate for genomic and molecular profile identification. This article discusses the basics of imaging genomics of GBM, its role in treatment decision-making, and its future potential in noninvasive genomic identification.
DEMARCATE: Density-based Magnetic Resonance Image Clustering for Assessing Tumor Heterogeneity in Cancer
Saha, Abhijoy
Banerjee, Sayantan
Kurtek, Sebastian
Narang, Shivali
Lee, Joonsang
Rao, Ganesh
Martinez, Juan
Bharath, Karthik
Rao, Arvind UK
Baladandayuthapani, Veerabhadran
NeuroImage: Clinical2016Journal Article, cited 4 times
Website
TCGA-GBM
Radiomics
Radiogenomics
Semi-automatic segmentation
K-means clustering
Principal component analysis (PCA)
Tumor heterogeneity is a crucial area of cancer research wherein inter- and intra-tumor differences are investigated to assess and monitor disease development and progression, especially in cancer. The proliferation of imaging and linked genomic data has enabled us to evaluate tumor heterogeneity on multiple levels. In this work, we examine magnetic resonance imaging (MRI) in patients with brain cancer to assess image-based tumor heterogeneity. Standard approaches to this problem use scalar summary measures (e.g., intensity-based histogram statistics) that do not adequately capture the complete and finer scale information in the voxel-level data. In this paper, we introduce a novel technique, DEMARCATE (DEnsity-based MAgnetic Resonance image Clustering for Assessing Tumor hEterogeneity) to explore the entire tumor heterogeneity density profiles (THDPs) obtained from the full tumor voxel space. THDPs are smoothed representations of the probability density function of the tumor images. We develop tools for analyzing such objects under the Fisher-Rao Riemannian framework that allows us to construct metrics for THDP comparisons across patients, which can be used in conjunction with standard clustering approaches. Our analyses of The Cancer Genome Atlas (TCGA) based Glioblastoma dataset reveal two significant clusters of patients with marked differences in tumor morphology, genomic characteristics and prognostic clinical outcomes. In addition, we see enrichment of image-based clusters with known molecular subtypes of glioblastoma multiforme, which further validates our representation of tumor heterogeneity and subsequent clustering techniques.
A radiomic signature as a non-invasive predictor of progression-free survival in patients with lower-grade gliomas
Liu, Xing
Li, Yiming
Qian, Zenghui
Sun, Zhiyan
Xu, Kaibin
Wang, Kai
Liu, Shuai
Fan, Xing
Li, Shaowu
Zhang, Zhong
NeuroImage: Clinical2018Journal Article, cited 0 times
Website
Radiomics
lower-grade glioma (LGG)
Progression-free survival
Radiogenomics
Inter-rater agreement in glioma segmentations on longitudinal MRI
Visser, M.
Muller, D. M. J.
van Duijn, R. J. M.
Smits, M.
Verburg, N.
Hendriks, E. J.
Nabuurs, R. J. A.
Bot, J. C. J.
Eijgelaar, R. S.
Witte, M.
van Herk, M. B.
Barkhof, F.
de Witt Hamer, P. C.
de Munck, J. C.
Neuroimage Clin2019Journal Article, cited 0 times
Website
VASARI
Glioblastoma Multiforme (GBM)
Diffuse gliomas
Segmentation
BACKGROUND: Tumor segmentation of glioma on MRI is a technique to monitor, quantify and report disease progression. Manual MRI segmentation is the gold standard but very labor intensive. At present the quality of this gold standard is not known for different stages of the disease, and prior work has mainly focused on treatment-naive glioblastoma. In this paper we studied the inter-rater agreement of manual MRI segmentation of glioblastoma and WHO grade II-III glioma for novices and experts at three stages of disease. We also studied the impact of inter-observer variation on extent of resection and growth rate. METHODS: In 20 patients with WHO grade IV glioblastoma and 20 patients with WHO grade II-III glioma (defined as non-glioblastoma), both the enhancing and non-enhancing tumor elements were segmented on MRI, using specialized software, by four novices and four experts before surgery, after surgery and at time of tumor progression. We used the generalized conformity index (GCI) and the intra-class correlation coefficient (ICC) of tumor volume as main outcome measures for inter-rater agreement. RESULTS: For glioblastoma, segmentations by experts and novices were comparable. The inter-rater agreement of enhancing tumor elements was excellent before surgery (GCI 0.79, ICC 0.99), poor after surgery (GCI 0.32, ICC 0.92), and good at progression (GCI 0.65, ICC 0.91). For non-glioblastoma, the inter-rater agreement was generally higher between experts than between novices. The inter-rater agreement was excellent between experts before surgery (GCI 0.77, ICC 0.92), was reasonable after surgery (GCI 0.48, ICC 0.84), and good at progression (GCI 0.60, ICC 0.80). The inter-rater agreement was good between novices before surgery (GCI 0.66, ICC 0.73), was poor after surgery (GCI 0.33, ICC 0.55), and poor at progression (GCI 0.36, ICC 0.73). Further analysis showed that the lower inter-rater agreement of segmentation on postoperative MRI could only partly be explained by the smaller volumes and fragmentation of residual tumor. The median interquartile range of extent of resection between raters was 8.3% and of growth rate was 0.22 mm/year. CONCLUSION: Manual tumor segmentations on MRI have reasonable agreement for use in spatial and volumetric analysis. Agreement in spatial overlap is of concern with segmentation after surgery for glioblastoma and with segmentation of non-glioblastoma by non-experts.
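The agreement measures used above can be computed directly from binary masks. Below is a short sketch of the generalized conformity index (following the Kouwenhoven et al. pairwise formulation, assumed here) alongside pairwise Dice as a companion metric, on synthetic rater masks:

```python
# Pairwise Dice and generalized conformity index (GCI) over multiple raters.
import numpy as np
from itertools import combinations

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def gci(masks):
    """GCI = sum over rater pairs of |Ai & Aj| / sum over pairs of |Ai | Aj|."""
    inter_total = union_total = 0
    for a, b in combinations(masks, 2):
        inter_total += np.logical_and(a, b).sum()
        union_total += np.logical_or(a, b).sum()
    return inter_total / union_total

# Four synthetic rater segmentations of a 64x64 slice:
rng = np.random.default_rng(1)
base = rng.random((64, 64)) > 0.6
masks = [np.logical_and(base, rng.random((64, 64)) > 0.1) for _ in range(4)]
print("GCI:", round(gci(masks), 3))
print("mean pairwise Dice:",
      round(np.mean([dice(a, b) for a, b in combinations(masks, 2)]), 3))
```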
SCRDN: Residual dense network with self-calibrated convolutions for low dose CT image denoising
Ma, Limin
Xue, Hengzhi
Yang, Guangtong
Zhang, Zitong
Li, Chen
Yao, Yudong
Teng, Yueyang
Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment2023Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Dense network
Image denoising
computed tomography (CT)
Self-calibrated convolution
Low-dose computed tomography (LDCT) can reduce the X-ray radiation dose that patients receive by up to 86%, which decreases the potential hazards and expands its application scope. However, LDCT images contain a lot of noise and artifacts, which brings great difficulties for doctors' diagnosis. Recently, methods based on deep learning have obtained great success in noise reduction of LDCT images. In this paper, we propose a novel residual dense network with self-calibrated convolution (SCRDN) for LDCT image denoising. Compared with the basic CNN, SCRDN includes skip connections, dense connections and self-calibrated convolution instead of traditional convolution. It makes full use of the hierarchical features of original images to obtain reconstructed images with more details. It also obtains a larger receptive field without introducing new parameters. The experimental results show that the proposed method can achieve performance improvements over most state-of-the-art methods used in CT denoising.
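For context, a self-calibrated convolution gates one convolution branch with calibration weights computed from a downsampled view of the input, which enlarges the effective receptive field. The following is a minimal PyTorch sketch in the spirit of SCNet-style self-calibrated convolution; the channel split and pooling rate are illustrative assumptions, not the SCRDN configuration:

```python
# Minimal self-calibrated convolution block (illustrative sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCalibratedConv(nn.Module):
    def __init__(self, channels, pooling_r=4):
        super().__init__()
        c = channels // 2
        self.k1 = nn.Conv2d(c, c, 3, padding=1)            # plain branch
        self.pool = nn.AvgPool2d(pooling_r)
        self.k2 = nn.Conv2d(c, c, 3, padding=1)            # calibration
        self.k3 = nn.Conv2d(c, c, 3, padding=1)
        self.k4 = nn.Conv2d(c, c, 3, padding=1)

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)
        # Calibration weights from a downsampled view -> larger receptive field.
        cal = torch.sigmoid(
            x2 + F.interpolate(self.k2(self.pool(x2)), x2.shape[2:]))
        y2 = self.k4(self.k3(x2) * cal)
        y1 = self.k1(x1)
        return torch.cat([y1, y2], dim=1)

block = SelfCalibratedConv(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```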
Body composition radiomic features as a predictor of survival in patients with non-small cellular lung carcinoma: A multicenter retrospective study
Rozynek, M.
Tabor, Z.
Klek, S.
Wojciechowski, W.
Nutrition2024Journal Article, cited 0 times
Website
ACRIN 6668
ACRIN-NSCLC-FDG-PET
Head-Neck-CT-Atlas
Radiomic features
DenseNet
pyRadiomics
Humans
*Carcinoma, Non-Small-Cell Lung/diagnostic imaging
*Lung Neoplasms/diagnostic imaging/therapy
Retrospective Studies
Radiomics
Lung
Body Composition
Artificial intelligence
Lung cancer
Survival
OBJECTIVES: This study combined two novel approaches in oncology patient outcome predictions-body composition and radiomic features analysis. The aim of this study was to validate whether automatically extracted muscle and adipose tissue radiomic features could be used as a predictor of survival in patients with non-small cell lung cancer. METHODS: The study included 178 patients with non-small cell lung cancer receiving concurrent platinum-based chemoradiotherapy. Abdominal imaging was conducted as a part of whole-body positron emission tomography/computed tomography performed before therapy. Methods used included automated assessment of the volume of interest using densely connected convolutional network classification model - DenseNet121, automated muscle and adipose tissue segmentation using U-net architecture implemented in nnUnet framework, and radiomic features extraction. Acquired body composition radiomic features and clinical data were used for overall and 1-y survival prediction using machine learning classification algorithms. RESULTS: The volume of interest detection model achieved the following metric scores: 0.98 accuracy, 0.89 precision, 0.96 recall, and 0.92 F1 score. Automated segmentation achieved a median dice coefficient >0.99 in all segmented regions. We extracted 330 body composition radiomic features for every patient. For overall survival prediction using clinical and radiomic data, the best-performing feature selection and prediction method achieved areas under the curve-receiver operating characteristic (AUC-ROC) of 0.73 (P < 0.05); for 1-y survival prediction AUC-ROC was 0.74 (P < 0.05). CONCLUSION: Automatically extracted muscle and adipose tissue radiomic features could be used as a predictor of survival in patients with non-small cell lung cancer.
Dosiomics improves prediction of locoregional recurrence for intensity modulated radiotherapy treated head and neck cancer cases
Wu, A.
Li, Y.
Qi, M.
Lu, X.
Jia, Q.
Guo, F.
Dai, Z.
Liu, Y.
Chen, C.
Zhou, L.
Song, T.
Oral Oncol2020Journal Article, cited 0 times
Website
Head-Neck-PET-CT
Intensity-modulated radiotherapy (IMRT)
Radiomics
Classification
OBJECTIVES: To investigate whether dosiomics can improve locoregional recurrence (LR) prediction for IMRT-treated patients, through a comparative study of prediction performance between radiomics-only models and models integrating dosiomics in head and neck cancer cases. MATERIALS AND METHODS: A cohort of 237 patients with head and neck cancer from four different institutions was obtained from The Cancer Imaging Archive and utilized to train and validate the radiomics-only prognostic model and the dosiomics-integrated prognostic model. For radiomics, features were initially extracted from images, including CTs and PETs, selected on the basis of their concordance index (CI) values, and then condensed via principal component analysis. Lastly, multivariate Cox proportional hazards regression models with class-imbalance adjustment were constructed as the LR prediction models by inputting those condensed features. For the dosiomics integration model, the initial features were similar, but with the additional 3-dimensional dose distribution from radiation treatment plans. The CI and the Kaplan-Meier curves with log-rank analysis were used to assess and compare these models. RESULTS: On the independent validation dataset, the CI of the dosiomics integration model (0.66) was significantly different from that of the radiomics model (0.59) (Wilcoxon test, p=5.9x10(-31)). The integrated model successfully classified the patients into high- and low-risk groups (log-rank test, p=2.5x10(-02)), whereas the radiomics model was not able to provide such classification (log-rank test, p=0.37). CONCLUSION: Dosiomics benefits LR prediction in IMRT-treated patients and should not be neglected in related investigations.
Radiomic analysis identifies tumor subtypes associated with distinct molecular and microenvironmental factors in head and neck squamous cell carcinoma
Katsoulakis, Evangelia
Yu, Yao
Apte, Aditya P.
Leeman, Jonathan E.
Katabi, Nora
Morris, Luc
Deasy, Joseph O.
Chan, Timothy A.
Lee, Nancy Y.
Riaz, Nadeem
Hatzoglou, Vaios
Oh, Jung Hun
Oral Oncology2020Journal Article, cited 0 times
Website
TCGA-HNSC
Radiomics
Radiogenomics
Machine learning
Purpose To identify whether radiomic features from pre-treatment computed tomography (CT) scans can predict molecular differences between head and neck squamous cell carcinoma (HNSCC) using The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA). Methods 77 patients from the TCIA with HNSCC had imaging suitable for analysis. Radiomic features were extracted and unsupervised consensus clustering was performed to identify subtypes. Genomic data was extracted from the matched patients in the TCGA database. We explored relationships between radiomic features and molecular profiles of tumors, including the tumor immune microenvironment. A machine learning method was used to build a model predictive of CD8 + T-cells. An independent cohort of 83 HNSCC patients was used to validate the radiomic clusters. Results We initially extracted 104 two-dimensional radiomic features, and after feature stability tests and removal of volume dependent features, reduced this to 67 features for subsequent analysis. Consensus clustering based on these features resulted in two distinct clusters. The radiomic clusters differed by primary tumor subsite (p = 0.0096), HPV status (p = 0.0127), methylation-based clustering results (p = 0.0025), and tumor immune microenvironment. A random forest model using radiomic features predicted CD8 + T-cells independent of HPV status with R2 = 0.30 (p < 0.0001) on cross validation. Consensus clustering on the validation cohort resulted in two distinct clusters that differ in tumor subsite (p = 1.3 × 10-7) and HPV status (p = 4.0 × 10-7). Conclusion Radiomic analysis can identify biologic features of tumors such as HPV status and T-cell infiltration and may be able to provide other information in the near future to help with patient stratification.
Domain Transform Network for Photoacoustic Tomography from Limited-view and Sparsely Sampled Data
Tong, Tong
Huang, Wenhui
Wang, Kun
He, Zicong
Yin, Lin
Yang, Xin
Zhang, Shuixing
Tian, Jie
Photoacoustics2020Journal Article, cited 7 times
Website
Brain-Tumor-Progression
Deep Learning
Medical image reconstruction methods based on deep learning have recently demonstrated powerful performance in photoacoustic tomography (PAT) from limited-view and sparse data. However, because most of these methods must utilize conventional linear reconstruction methods to implement signal-to-image transformations, their performance is restricted. In this paper, we propose a novel deep learning reconstruction approach that integrates appropriate data pre-processing and training strategies. The Feature Projection Network (FPnet) presented herein is designed to learn this signal-to-image transformation through data-driven learning rather than through direct use of linear reconstruction. To further improve reconstruction results, our method integrates an image post-processing network (U-net). Experiments show that the proposed method can achieve high reconstruction quality from limited-view data with sparse measurements. When employing GPU acceleration, this method can achieve a reconstruction speed of 15 frames per second.
Quality gaps in public pancreas imaging datasets: Implications & challenges for AI applications
Suman, Garima
Patra, Anurima
Korfiatis, Panagiotis
Majumder, Shounak
Chari, Suresh T
Truty, Mark J
Fletcher, Joel G
Goenka, Ajit H
2021Journal Article, cited 0 times
CPTAC-PDA
Pancreas-CT
OBJECTIVE: Quality gaps in medical imaging datasets lead to profound errors in experiments. Our objective was to characterize such quality gaps in public pancreas imaging datasets (PPIDs), to evaluate their impact on previously published studies, and to provide post-hoc labels and segmentations as a value-add for these PPIDs.
METHODS: We scored the available PPIDs on the medical imaging data readiness (MIDaR) scale, and evaluated for associated metadata, image quality, acquisition phase, etiology of pancreas lesion, sources of confounders, and biases. Studies utilizing these PPIDs were evaluated for awareness of and any impact of quality gaps on their results. Volumetric pancreatic adenocarcinoma (PDA) segmentations were performed for non-annotated CTs by a junior radiologist (R1) and reviewed by a senior radiologist (R3).
RESULTS: We found three PPIDs with 560 CTs and six MRIs. The NIH dataset of normal pancreas CTs (PCT) (n = 80 CTs) had optimal image quality and met MIDaR A criteria, but parts of the pancreas were excluded from the provided segmentations. The TCIA-PDA (n = 60 CTs; 6 MRIs) and MSD (n = 420 CTs) datasets were categorized as MIDaR B due to incomplete annotations, limited metadata, and insufficient documentation. A substantial proportion of CTs from the TCIA-PDA and MSD datasets were found unsuitable for AI due to biliary stents [TCIA-PDA: 10 (17%); MSD: 112 (27%)] or other factors (non-portal venous phase, suboptimal image quality, non-PDA etiology, or post-treatment status) [TCIA-PDA: 5 (8.5%); MSD: 156 (37.1%)]. These quality gaps were not accounted for in any of the 25 studies that have used these PPIDs (NIH-PCT: 20; MSD: 1; both: 4). PDA segmentations were done by R1 in 91 eligible CTs (TCIA-PDA: 42; MSD: 49). Of these, corrections were made by R3 in 16 CTs (18%) (TCIA-PDA: 4; MSD: 12) [mean (standard deviation) Dice: 0.72 (0.21) and 0.63 (0.23), respectively].
CONCLUSION: Substantial quality gaps, sources of bias, and high proportion of CTs unsuitable for AI characterize the available limited PPIDs. Published studies on these PPIDs do not account for these quality gaps. We complement these PPIDs through post-hoc labels and segmentations for public release on the TCIA portal. Collaborative efforts leading to large, well-curated PPIDs supported by adequate documentation are critically needed to translate the promise of AI to clinical practice.
Learning efficient, explainable and discriminative representations for pulmonary nodules classification
Jiang, Hanliang
Shen, Fuhao
Gao, Fei
Han, Weidong
Pattern Recognition2021Journal Article, cited 0 times
LIDC-IDRI
Automatic pulmonary nodules classification is significant for early diagnosis of lung cancers. Recently, deep learning techniques have enabled remarkable progress in this field. However, these deep models are typically of high computational complexity and work in a black-box manner. To combat these challenges, in this work, we aim to build an efficient and (partially) explainable classification model. Specially, we use neural architecture search (NAS) to automatically search 3D network architectures with excellent accuracy/speed trade-off. Besides, we use the convolutional block attention module (CBAM) in the networks, which helps us understand the reasoning process. During training, we use A-Softmax loss to learn angularly discriminative representations. In the inference stage, we employ an ensemble of diverse neural networks to improve the prediction accuracy and robustness. We conduct extensive experiments on the LIDC-IDRI database. Compared with previous state-of-the-art, our model shows highly comparable performance by using less than 1/40 parameters. Besides, empirical study shows that the reasoning process of learned networks is in conformity with physicians’ diagnosis. Related code and results have been released at: https://github.com/fei-hdu/NAS-Lung.
Detecting pulmonary diseases using deep features in X-ray images
Vieira, P.
Sousa, O.
Magalhaes, D.
Rabelo, R.
Silva, R.
Pattern Recognition2021Journal Article, cited 0 times
Website
COVID-19-AR
Deep Learning
LUNG
Image resampling
Convolutional neural networks (CNN)
COVID-19 leads to radiological evidence of lower respiratory tract lesions, which supports screening for this disease using chest X-ray. In this scenario, deep learning techniques are applied to detect COVID-19 pneumonia in X-ray images, aiding a fast and precise diagnosis. Here, we investigate seven deep learning architectures associated with data augmentation and transfer learning techniques to detect different pneumonia types. We also propose an image resizing method with the maximum window function that preserves anatomical structures of the chest. The results are promising, reaching an accuracy of 99.8% considering COVID-19, normal, and viral and bacterial pneumonia classes. The differentiation between viral pneumonia and COVID-19 achieved an accuracy of 99.8%, and that between COVID-19 and bacterial pneumonia an accuracy of 99.9%. We also evaluated the impact of the proposed image resizing method on classification performance compared with bilinear interpolation; this pre-processing increased the classification rate regardless of the deep learning architectures used. We compared our results with ten related works in the state of the art using eight sets of experiments, which showed that the proposed method outperformed them in most cases. Therefore, we demonstrate that deep learning models trained with pre-processed X-ray images could precisely assist the specialist in COVID-19 detection.
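The maximum-window resizing idea can be approximated with block-wise max pooling. A small sketch follows, comparing it with bilinear resizing; the 4x4 window is an assumption, not the paper's exact setting:

```python
# Downsampling by block-wise maximum, which tends to preserve bright
# anatomical structures better than bilinear averaging (illustrative).
import numpy as np
from skimage.measure import block_reduce
from skimage.transform import resize

xray = np.random.rand(1024, 1024)            # stand-in for a chest X-ray

max_resized = block_reduce(xray, block_size=(4, 4), func=np.max)  # 256x256
bilinear = resize(xray, (256, 256), order=1)                      # baseline

print(max_resized.shape, bilinear.shape)
print("max-window keeps peaks:", max_resized.max() >= bilinear.max())
```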
Learning multi-scale synergic discriminative features for prostate image segmentation
Jia, Haozhe
Cai, Weidong
Huang, Heng
Xia, Yong
Pattern Recognition2022Journal Article, cited 0 times
ISBI-MR-Prostate-2013
Although deep convolutional neural networks (DCNNs) have been proposed for prostate MR image segmentation, the effectiveness of these methods is often limited by inadequate semantic discrimination and spatial context modeling. To address these issues, we propose a Multi-scale Synergic Discriminative Network (MSD-Net), which includes a shared encoder, a segmentation decoder, and a boundary detection decoder. We further design the cascaded pyramid convolutional block and residual refinement block, and incorporate them and the channel attention block into MSD-Net to exploit the multi-scale spatial contextual information and semantically consistent features of the gland. We also fuse the features from the two decoders to boost the segmentation performance, and introduce the synergic multi-task loss to impose the consistency constraint on the joint segmentation and boundary detection. We evaluated MSD-Net against several prostate segmentation methods on three public datasets and achieved improved accuracy. Our results indicate that the proposed MSD-Net outperforms existing methods, setting a new state of the art for prostate segmentation in magnetic resonance images.
Nakagami-Fuzzy imaging framework for precise lesion segmentation in MRI
Alpar, Orcan
Dolezal, Rafael
Ryska, Pavel
Krejcar, Ondrej
Pattern Recognition2022Journal Article, cited 0 times
Website
CPTAC-GBM
MRI
Segmentation
Deep learning-based diagnosis and survival prediction of patients with renal cell carcinoma from primary whole slide images
Chen, Siteng
Wang, Xiyue
Zhang, Jun
Jiang, Liren
Gao, Feng
Xiang, Jinxi
Yang, Sen
Yang, Wei
Zheng, Junhua
Han, Xiao
2024Journal Article, cited 0 times
CPTAC-CCRCC
TCGA-KIRC
Renal cell carcinoma
Artificial Intelligence
Deep Learning
Diagnosis
Survival
It remains an urgent clinical demand to explore novel diagnostic and prognostic biomarkers for renal cell carcinoma (RCC). We proposed deep learning-based artificial intelligence strategies. The study included 1752 whole slide images from multiple centres.
Based on pixel-level RCC segmentation, the diagnostic model achieved an area under the receiver operating characteristic curve (AUC) of 0.977 (95% CI 0.969–0.984) in the external validation cohort. In addition, our diagnostic model exhibited excellent performance in the differential diagnosis of RCC from renal oncocytoma, achieving an AUC of 0.951 (0.922–0.972). The graderisk for the recognition of high-grade tumour achieved AUCs of 0.840 (0.805–0.871) in the Cancer Genome Atlas (TCGA) cohort, 0.857 (0.813–0.894) in the Shanghai General Hospital (General) cohort, and 0.894 (0.842–0.933) in the Clinical Proteomic Tumor Analysis Consortium (CPTAC) cohort. The OSrisk for predicting 5-year survival status achieved an AUC of 0.784 (0.746–0.819) in the TCGA cohort, which was further verified in the independent General cohort and the CPTAC cohort, with AUCs of 0.774 (0.723–0.820) and 0.702 (0.632–0.765), respectively. Moreover, the competing-risk nomogram (CRN) showed its potential to be a prognostic indicator, with a hazard ratio (HR) of 5.664 (3.893–8.239, p<0.0001), outperforming other traditional clinical prognostic indicators. Kaplan–Meier survival analysis further illustrated that the CRN could significantly distinguish patients with high survival risk.
Deep learning-based artificial intelligence could be a useful tool for clinicians to diagnose and predict the prognosis of RCC patients, thus improving the process of individualised treatment.
Deep-learning framework to detect lung abnormality – A study with chest X-Ray and lung CT scan images
Bhandary, Abhir
Prabhu, G. Ananth
Rajinikanth, V.
Thanaraj, K. Palani
Satapathy, Suresh Chandra
Robbins, David E.
Shasky, Charles
Zhang, Yu-Dong
Tavares, João Manuel R. S.
Raja, N. Sri Madhava
Pattern Recognition Letters2020Journal Article, cited 0 times
Website
LIDC-IDRI
Deep Learning
Support Vector Machine (SVM)
Lung abnormalities are highly risky conditions in humans. The early diagnosis of lung abnormalities is essential to reduce the risk by enabling quick and efficient treatment. This research work aims to propose a Deep-Learning (DL) framework to examine lung pneumonia and cancer. This work proposes two different DL techniques to assess the considered problem: (i) The initial DL method, named a modified AlexNet (MAN), is proposed to classify chest X-Ray images into normal and pneumonia classes. In the MAN, the classification is implemented using a Support Vector Machine (SVM), and its performance is compared against Softmax. Further, its performance is validated with other pre-trained DL techniques, such as AlexNet, VGG16, VGG19 and ResNet50. (ii) The second DL work implements a fusion of handcrafted and learned features in the MAN to improve classification accuracy during lung cancer assessment. This work employs serial fusion and Principal Component Analysis (PCA) based feature selection to enhance the feature vector. The performance of this DL framework is tested using benchmark lung cancer CT images of LIDC-IDRI, and a classification accuracy of 97.27% is attained.
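A hedged sketch of pipeline (ii) follows: serial fusion of two feature sets, PCA reduction, and an SVM classifier, with random placeholder features standing in for the network activations and handcrafted descriptors:

```python
# Serial feature fusion + PCA + SVM classification (illustrative sketch;
# the feature matrices are synthetic placeholders).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
deep_feats = rng.normal(size=(n, 512))        # e.g. modified-AlexNet features
handcrafted = rng.normal(size=(n, 64))        # e.g. texture descriptors
labels = rng.integers(0, 2, n)

fused = np.hstack([deep_feats, handcrafted])  # serial fusion
clf = make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, fused, labels, cv=5).mean())
```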
An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor
Sharif, Muhammad
Amin, Javaria
Raza, Mudassar
Yasmin, Mussarat
Satapathy, Suresh Chandra
Pattern Recognition Letters2020Journal Article, cited 0 times
RIDER NEURO MRI
BraTS
Computer Aided Detection (CADe)
Segmentation
Classification
BRAIN
A tumor in the brain is a major cause of death in human beings. If not treated properly and promptly, there is a high chance of it becoming malignant. Therefore, brain tumor detection at an initial stage is a significant requirement. In this work, the skull is first removed through the brain surface extraction (BSE) method. The skull-removed image is then fed to particle swarm optimization (PSO) to achieve better segmentation. In the next step, local binary patterns (LBP) and deep features of the segmented images are extracted, and a genetic algorithm (GA) is applied for best feature selection. Finally, an artificial neural network (ANN) and other classifiers are utilized to classify the tumor grades. The publicly available complex brain datasets RIDER and BRATS 2018 Challenge are utilized for evaluation, and the method attained a maximum accuracy of 99%. The results are also compared with existing methods, showing that the presented technique provides improved outcomes, which is clear proof of its effectiveness and novelty.
Periodicity counting in videos with unsupervised learning of cyclic embeddings
Jacquelin, Nicolas
Vuillemot, Romain
Duffner, Stefan
Pattern Recognition Letters2022Journal Article, cited 0 times
4D-Lung
We introduce a context-agnostic unsupervised method to count periodicity in videos. Current methods estimate periodicity for a specific type of application (e.g. some repetitive human motion). We propose a novel method that provides a powerful generalisation ability since it is not biased towards specific visual features. It is thus applicable to a range of diverse domains that require no adaptation, by relying on a deep neural network that is trained completely unsupervised. More specifically, it is trained to transform the periodic temporal data into some lower-dimensional latent encoding in such a way that it forms a cyclic path in this latent space. We also introduce a novel algorithm that is able to reliably detect and count periods in complex time series. Despite being unsupervised and competing with supervised methods based on complex architectures, our experimental results demonstrate that our approach is able to reach state-of-the-art performance for periodicity counting on the challenging QUVA video benchmark.
The Veterans Affairs Precision Oncology Data Repository, a Clinical, Genomic, and Imaging Research Database
Elbers, Danne C.
Fillmore, Nathanael R.
Sung, Feng-Chi
Ganas, Spyridon S.
Prokhorenkov, Andrew
Meyer, Christopher
Hall, Robert B.
Ajjarapu, Samuel J.
Chen, Daniel C.
Meng, Frank
Grossman, Robert L.
Brophy, Mary T.
Do, Nhan V.
Patterns2020Journal Article, cited 0 times
Website
APOLLO-1-VA
The Veterans Affairs Precision Oncology Data Repository (VA-PODR) is a large, nationwide repository of de-identified data on patients diagnosed with cancer at the Department of Veterans Affairs (VA). Data include longitudinal clinical data from the VA's nationwide electronic health record system and the VA Central Cancer Registry, targeted tumor sequencing data, and medical imaging data including computed tomography (CT) scans and pathology slides. A subset of the repository is available at the Genomic Data Commons (GDC) and The Cancer Imaging Archive (TCIA), and the full repository is available through the Veterans Precision Oncology Data Commons (VPODC). By releasing this de-identified dataset, we aim to advance Veterans' health care through enabling translational research on the Veteran population by a wide variety of researchers.
Machine vision-assisted identification of the lung adenocarcinoma category and high-risk tumor area based on CT images
Chen, L.
Qi, H.
Lu, D.
Zhai, J.
Cai, K.
Wang, L.
Liang, G.
Zhang, Z.
Patterns (N Y)2022Journal Article, cited 1 times
Website
Lung Fused-CT-Pathology
NSCLC-Radiomics-Genomics
CPTAC-LUAD
NSCLC-Radiomics
APOLLO-5-LUAD
Computed Tomography (CT)
Convolutional Neural Network (CNN)
Deep learning
LUNG
Computer Aided Diagnosis (CADx)
Computed tomography (CT) is a widely used medical imaging technique. It is important to determine the relationship between CT images and pathological examination results of lung adenocarcinoma to better support its diagnosis. In this study, a bilateral-branch network with a knowledge distillation procedure (KDBBN) was developed for the auxiliary diagnosis of lung adenocarcinoma. KDBBN can automatically identify adenocarcinoma categories and detect the lesion area that most likely contributes to the identification of specific types of adenocarcinoma based on lung CT images. In addition, a knowledge distillation process was established for the proposed framework to ensure that the developed models can be applied to different datasets. The results of our comprehensive computational study confirmed that our method provides a reliable basis for adenocarcinoma diagnosis supplementary to the pathological examination. Meanwhile, the high-risk area labeled by KDBBN highly coincides with the related lesion area labeled by doctors in clinical diagnosis.
Topological data analysis of thoracic radiographic images shows improved radiomics-based lung tumor histology prediction
Vandaele, Robin
Mukherjee, Pritam
Selby, Heather Marie
Shah, Rajesh Pravin
Gevaert, Olivier
Patterns2022Journal Article, cited 0 times
LIDC-IDRI
Topological data analysis provides tools to capture wide-scale structural shape information in data. Its main method, persistent homology, has found successful applications to various machine-learning problems. Despite its recent gain in popularity, much of its potential for medical image analysis remains undiscovered. We explore the prominent learning problems on thoracic radiographic images of lung tumors for which persistent homology improves radiomic-based learning. It turns out that our topological features well capture complementary information important for benign versus malignant and adenocarcinoma versus squamous cell carcinoma tumor prediction, while contributing less consistently to small cell versus non-small cell, an interesting result in its own right. Furthermore, while radiomic features are better for predicting malignancy scores assigned by expert radiologists through visual inspection, we find that topological features are better for predicting more accurate histology assessed through long-term radiology review, biopsy, surgical resection, progression, or response.
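As a pointer for readers new to the method, persistent homology can be computed from a point cloud with the ripser.py package; the toy example below (a noisy circle standing in for sampled lesion coordinates) recovers one prominent 1-dimensional hole:

```python
# Persistence diagrams on a toy point cloud with ripser.py (illustrative).
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
# Noisy circle: a shape with one prominent 1-dimensional hole.
angles = rng.uniform(0, 2 * np.pi, 200)
points = np.c_[np.cos(angles), np.sin(angles)] + rng.normal(0, 0.05, (200, 2))

diagrams = ripser(points)["dgms"]            # H0 and H1 persistence diagrams
persistence = diagrams[1][:, 1] - diagrams[1][:, 0]
print("most persistent H1 feature:", persistence.max())  # the circle's hole
```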
SwarmDeepSurv: swarm intelligence advances deep survival network for prognostic radiomics signatures in four solid cancers
Al-Tashi, Qasem
Saad, Maliazurina B.
Sheshadri, Ajay
Wu, Carol C.
Chang, Joe Y.
Al-Lazikani, Bissan
Gibbons, Christopher
Vokes, Natalie I.
Zhang, Jianjun
Lee, J. Jack
Heymach, John V.
Jaffray, David
Mirjalili, Seyedali
Wu, Jia
Patterns2023Journal Article, cited 0 times
Website
Non-Small Cell Lung Cancer (NSCLC)
TCGA-LUAD
TCGA-LUSC
NSCLC Radiogenomics
NSCLC-Radiomics
Head and neck squamous cell carcinoma (HNSCC)
HEAD-NECK-RADIOMICS-HN1
ISPY1/ACRIN 6657
TCGA-GBM
PyRadiomics
Cox proportional hazard model
Radiomics
cancer
survival analysis
Swarm
Imaging features
Transfer learning
Survival models exist to study relationships between biomarkers and treatment effects. Deep learning-powered survival models supersede the classical Cox proportional hazards (CoxPH) model, but substantial performance drops were observed on high-dimensional features because of irrelevant/redundant information. To fill this gap, we proposed SwarmDeepSurv by integrating swarm intelligence algorithms with the deep survival model. Furthermore, four objective functions were designed to optimize prognostic prediction while regularizing selected feature numbers. When testing on multicenter sets (n = 1,058) of four different cancer types, SwarmDeepSurv was less prone to overfitting and achieved optimal patient risk stratification compared with popular survival modeling algorithms. Strikingly, SwarmDeepSurv selected different features compared with classical feature selection algorithms, including the least absolute shrinkage and selection operator (LASSO), with nearly no feature overlapping across these models. Taken together, SwarmDeepSurv offers an alternative approach to model relationships between radiomics features and survival endpoints, which can further extend to study other input data types including genomics.
Repeatability of radiotherapy dose-painting prescriptions derived from a multiparametric magnetic resonance imaging model of glioblastoma infiltration
Brighi, C.
Verburg, N.
Koh, E. S.
Walker, A.
Chen, C.
Pillay, S.
de Witt Hamer, P. C.
Aly, F.
Holloway, L. C.
Keall, P. J.
Waddington, D. E. J.
Phys Imaging Radiat Oncol2022Journal Article, cited 0 times
Website
QIN GBM Treatment Response
Background and purpose: Glioblastoma (GBM) patients have a dismal prognosis. Tumours typically recur within months of surgical resection and post-operative chemoradiation. Multiparametric magnetic resonance imaging (mpMRI) biomarkers promise to improve GBM outcomes by identifying likely regions of infiltrative tumour in tumour probability (TP) maps. These regions could be treated with escalated dose via dose-painting radiotherapy to achieve higher rates of tumour control. Crucial to the technical validation of dose-painting using imaging biomarkers is the repeatability of the derived dose prescriptions. Here, we quantify repeatability of dose-painting prescriptions derived from mpMRI. Materials and methods: TP maps were calculated with a clinically validated model that linearly combined apparent diffusion coefficient (ADC) and relative cerebral blood volume (rBV) or ADC and relative cerebral blood flow (rBF) data. Maps were developed for 11 GBM patients who received two mpMRI scans separated by a short interval prior to chemoradiation treatment. A linear dose mapping function was applied to obtain dose-painting prescription (DP) maps for each session. Voxel-wise and group-wise repeatability metrics were calculated for parametric, TP and DP maps within radiotherapy margins. Results: DP maps derived from mpMRI were repeatable between imaging sessions (ICC > 0.85). ADC maps showed higher repeatability than rBV and rBF maps (Wilcoxon test, p = 0.001). TP maps obtained from the combination of ADC and rBF were the most stable (median ICC: 0.89). Conclusions: Dose-painting prescriptions derived from a mpMRI model of tumour infiltration have a good level of repeatability and can be used to generate reliable dose-painting plans for GBM patients.
Stress-testing pelvic autosegmentation algorithms using anatomical edge cases
Kanwar, Aasheesh
Merz, Brandon
Claunch, Cheryl
Rana, Shushan
Hung, Arthur
Thompson, Reid F.
2023Journal Article, cited 0 times
Prostate-Anatomical-Edge-Cases
Commercial autosegmentation has entered clinical use; however, real-world performance may suffer in certain cases. We aimed to assess the influence of anatomic variants on performance. We identified 112 prostate cancer patients with anatomic variations (edge cases). Pelvic anatomy was autosegmented using three commercial tools. To evaluate performance, Dice similarity coefficients and mean surface and 95% Hausdorff distances were calculated versus clinician-delineated references. Deep learning autosegmentation outperformed atlas-based and model-based methods. However, edge case performance was lower versus the normal cohort (0.12 mean DSC reduction). Anatomic variation presents challenges to commercial autosegmentation.
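The three reported metrics are straightforward to compute from binary masks with distance transforms. A minimal sketch follows (isotropic unit voxels assumed; production code would account for voxel spacing):

```python
# Dice, mean surface distance, and 95% Hausdorff distance for binary masks.
import numpy as np
from scipy import ndimage

def surface_distances(a, b):
    """Distances from the surface voxels of a to the surface of b."""
    a_surf = a ^ ndimage.binary_erosion(a)
    b_surf = b ^ ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~b_surf)
    return dist_to_b[a_surf]

a = np.zeros((64, 64, 64), bool); a[20:40, 20:40, 20:40] = True
b = np.zeros((64, 64, 64), bool); b[22:42, 20:40, 20:40] = True

dice = 2 * (a & b).sum() / (a.sum() + b.sum())
d_ab, d_ba = surface_distances(a, b), surface_distances(b, a)
msd = np.concatenate([d_ab, d_ba]).mean()
hd95 = max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
print(f"DSC={dice:.3f}  MSD={msd:.2f}  HD95={hd95:.2f}")
```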
Multi-centre radiomics for prediction of recurrence following radical radiotherapy for head and neck cancers: Consequences of feature selection, machine learning classifiers and batch-effect harmonization
Varghese, Amal Joseph
Gouthamchand, Varsha
Sasidharan, Balu Krishna
Wee, Leonard
Sidhique, Sharief K
Rao, Julia Priyadarshini
Dekker, Andre
Hoebers, Frank
Devakumar, Devadhas
Irodi, Aparna
Balasingh, Timothy Peace
Godson, Henry Finlay
Joel, T
Mathew, Manu
Gunasingam Isiah, Rajesh
Pavamani, Simon Pradeep
Thomas, Hannah Mary T
Phys Imaging Radiat Oncol2023Journal Article, cited 1 times
Website
HEAD-NECK-RADIOMICS-HN1
Head-Neck-PET-CT
Head-and-neck cancer
Loco-regional recurrence
Machine learning
Multi-institutional
Prognosis
Radiomics
BACKGROUND AND PURPOSE: Radiomics models trained with limited single-institution data are often not reproducible or generalisable. We developed radiomics models that predict loco-regional recurrence within two years of radiotherapy with private and public datasets and their combinations, to simulate small and multi-institutional studies and study the responsiveness of the models to feature selection, machine learning algorithms, centre-effect harmonization and increased dataset sizes. MATERIALS AND METHODS: We included 562 patients histologically confirmed with and treated for locally advanced head-and-neck cancer (LA-HNC) from two public and two private datasets, with one private dataset exclusively reserved for validation. Clinical contours of primary tumours were not recontoured and were used for Pyradiomics-based feature extraction. ComBat harmonization was applied, and LASSO-Logistic Regression (LR) and Support Vector Machine (SVM) models were built. The 95% confidence interval (CI) of 1000 bootstrapped areas under the receiver operating characteristic curve (AUC) quantified predictive performance. The responsiveness of the models' performance to the choice of feature selection method, ComBat harmonization, machine learning classifier, and single versus pooled data was evaluated. RESULTS: LASSO and SelectKBest selected 14 and 16 features, respectively; three were overlapping. Without ComBat, the LR and SVM models for the three-institution data showed AUCs (CI) of 0.513 (0.481-0.559) and 0.632 (0.586-0.665), respectively. Performances following ComBat revealed AUCs of 0.559 (0.536-0.590) and 0.662 (0.606-0.690), respectively. Compared to single cohort AUCs (0.562-0.629), SVM models from pooled data performed significantly better at AUC = 0.680. CONCLUSIONS: Multi-institutional retrospective data accentuates the existing variabilities that affect radiomics. Carefully designed prospective, multi-institutional studies and data sharing are necessary for clinically relevant head-and-neck cancer prognostication models.
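A hedged sketch of the core modelling loop follows: an L1-penalized logistic regression on a (synthetic) radiomics matrix with a bootstrapped AUC confidence interval. ComBat harmonization is not shown; it would be applied to the feature matrix beforehand, for example with a package such as neuroCombat (an assumption, not part of this sketch):

```python
# LASSO-logistic recurrence model with a bootstrapped AUC 95% CI.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(562, 100))                  # radiomics feature matrix
y = (X[:, :5].sum(axis=1) + rng.normal(0, 2, 562)) > 0   # recurrence label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

aucs = []
for _ in range(1000):                            # bootstrap the test set
    idx = rng.integers(0, len(y_te), len(y_te))
    if len(set(y_te[idx])) == 2:                 # need both classes present
        aucs.append(roc_auc_score(y_te[idx], scores[idx]))
print("AUC 95% CI:", np.percentile(aucs, [2.5, 97.5]))
```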
Integration of operator-validated contours in deformable image registration for dose accumulation in radiotherapy
Bosma, L. S.
Ries, M.
Denis de Senneville, B.
Raaymakers, B. W.
Zachiu, C.
Phys Imaging Radiat Oncol2023Journal Article, cited 0 times
Website
TCGA-KIRC
TCGA-KIRP
TCGA-LIHC
Semi-automatic segmentation
Adaptive radiotherapy
Constrained motion estimation
Contour guidance
Deformable dose warping
Deformable image registration
Preconditioning
BACKGROUND AND PURPOSE: Deformable image registration (DIR) is a core element of adaptive radiotherapy workflows, integrating daily contour propagation and/or dose accumulation in their design. Propagated contours are usually manually validated and may be edited, thereby locally invalidating the registration result. This means the registration cannot be used for dose accumulation. In this study we proposed and evaluated a novel multi-modal DIR algorithm that incorporated contour information to guide the registration. This integrates operator-validated contours with the estimated deformation vector field and warped dose. MATERIALS AND METHODS: The proposed algorithm consisted of both a normalized gradient field-based data-fidelity term on the images and an optical flow data-fidelity term on the contours. The Helmholtz-Hodge decomposition was incorporated to ensure anatomically plausible deformations. The algorithm was validated for same- and cross-contrast Magnetic Resonance (MR) image registrations, Computed Tomography (CT) registrations, and CT-to-MR registrations for different anatomies, all based on challenging clinical situations. The contour-correspondence, anatomical fidelity, registration error, and dose warping error were evaluated. RESULTS: The proposed contour-guided algorithm considerably and significantly increased contour overlap, decreasing the mean distance to agreement by a factor of 1.3 to 13.7, compared to the best algorithm without contour-guidance. Importantly, the registration error and dose warping error decreased significantly, by a factor of 1.2 to 2.0. CONCLUSIONS: Our contour-guided algorithm ensured that the deformation vector field and warped quantitative information were consistent with the operator-validated contours. This provides a feasible semi-automatic strategy for spatially correct warping of quantitative information even in difficult and artefacted cases.
Integrative radiomics analyses identify universal signature for predicting prognosis and therapeutic vulnerabilities across primary and secondary liver cancers: A multi-cohort study
As the hallmark of cancer, genetic and phenotypic heterogeneity leads to biomarkers that are typically tailored to a specific cancer type or subtype. This specificity introduces complexities in facilitating streamlined evaluations across diverse cancer types and optimizing therapeutic outcomes. In this study, we comprehensively characterized the radiological patterns underlying liver cancer (LC) by integrating radiomics profiles from computed tomography (CT) images of hepatocellular carcinoma (HCC), intrahepatic cholangiocarcinoma (ICC), and colorectal cancer liver metastases (CRLM) through unsupervised clustering analysis. We identified three distinct radiomics clusters, displaying heterogeneity in prognosis. Subsequently, we formulated a shared prognosticator, the liver cancer radiomics signature (LCRS), by discovering and manifesting connectivity among radiomics phenotypes using a GGI strategy. We validated that the LCRS is an independent prognostic factor after adjusting for clinic-pathologic variables (all P < 0.05), with the LCRS-High group consistently associated with worse survival outcomes across HCC, ICC, and CRLM. However, the LCRS-High group showed clinical benefit from adjuvant chemotherapy, leading to reduced disease recurrence risk and improved survival. By contrast, the LCRS-Low group, including a subset of gastric cancer liver metastases (GCLM), exhibited a more favorable response to immune checkpoint inhibitors (ICIs)-based combinational therapy (P = 0.02, hazard ratio (HR): 0.34 [95% confidence interval (CI): 0.13-0.88]). Further analysis revealed that the Notch signaling pathway was enriched in LCRS-High tumors, while LCRS-Low tumors exhibited higher infiltration of natural killer cells. These findings highlight the promise of this universal scoring model to personalize management strategies for patients with LC.
Complexity of brain tumors
Martín-Landrove, Miguel
Torres-Hoyos, Francisco
Rueda-Toicen, Antonio
2020Journal Article, cited 0 times
REMBRANDT
TCGA-GBM
TCGA-LGG
Tumor growth is a complex process characterized by uncontrolled cell proliferation and invasion of neighboring tissues. Understanding these phenomena is of vital importance for establishing the appropriate diagnostic and therapeutic strategies, and it starts with the evaluation of tumors' complex morphology with suitable descriptors, such as those produced by scaling analysis. In the present work, scaling analysis is used for the extraction of dynamic parameters that characterize tumor growth processes in brain tumors. The emphasis in the analysis is on the assessment of general properties of tumor growth, such as the Family–Vicsek ansatz, which includes a great variety of ballistic growth models. Results indicate definitively that gliomas behave strictly as proposed by the ansatz, while benign tumors behave quite differently. As a complementary view, complex visibility networks derived from the tumor interface support these results, and their use is introduced as a possible descriptor for understanding tumor growth dynamics.
Security of Multi-frame DICOM Images Using XOR Encryption Approach
Natsheh, QN
Li, B
Gale, AG
Procedia Computer Science2016Journal Article, cited 4 times
Website
Breast-MRI-NACT-Pilot
Security
Transferring medical images over networks is subject to a wide variety of security risks. Hence, there is a need for a robust and secure mechanism to exchange medical images over the Internet. The Digital Imaging and Communications in Medicine (DICOM) standard provides attributes for the confidentiality of header data but not for the pixel image data. In this paper, a simple and effective encryption approach for pixel data is provided for multi-frame DICOM medical images. The main goal of the proposed approach is to reduce the encryption and decryption time of these images by using the Advanced Encryption Standard (AES) to encrypt only one image and an XOR cipher to encrypt the remaining frames of the multi-frame DICOM image. The proposed algorithm is evaluated using computational time, normalized correlation, entropy, Peak-Signal-to-Noise-Ratio (PSNR) and histogram analysis. The results show that the proposed approach can reduce the encryption and decryption time and is able to ensure image confidentiality.
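As a rough illustration of the described scheme (AES for a single frame, an XOR cipher for the rest), a minimal Python sketch using NumPy and pycryptodome follows; the frame layout, padding, and keystream construction are our assumptions, not the paper's exact design:

    import numpy as np
    from Crypto.Cipher import AES  # pycryptodome

    def encrypt_multiframe(frames, key, iv):
        # frames: (n_frames, h, w) uint8 pixel data from a multi-frame DICOM
        first = frames[0].tobytes()
        first += b"\x00" * ((-len(first)) % 16)             # pad to the 16-byte AES block size
        c0 = AES.new(key, AES.MODE_CBC, iv).encrypt(first)  # AES-encrypt only the first frame
        ks = np.frombuffer(c0, np.uint8)[:frames[0].size].reshape(frames[0].shape)
        return c0, frames[1:] ^ ks                          # XOR cipher for all remaining frames

    frames = (np.random.rand(4, 64, 64) * 255).astype(np.uint8)  # stand-in pixel data
    c0, rest = encrypt_multiframe(frames, b"0" * 16, b"1" * 16)

Because XOR is its own inverse, the remaining frames decrypt in a single XOR pass, which is the source of the reported reduction in encryption and decryption time compared to applying AES to every frame.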
Breast Cancer Response Prediction in Neoadjuvant Chemotherapy Treatment Based on Texture Analysis
Ammar, Mohammed
Mahmoudi, Saïd
Stylianos, Drisis
Procedia Computer Science2016Journal Article, cited 2 times
Website
QIN Breast DCE-MRI
texture analysis
Computer Aided Diagnosis (CADx)
BREAST
MRI is one of the most commonly used modalities for diagnosis and treatment planning of breast cancer. The aim of this study is to show that texture-based features, such as co-occurrence matrix features extracted from MR images, can be used to quantify tumor response to treatment. To this end, we use a dataset composed of two breast MRI examinations for each of 9 patients, three of whom were responders and six non-responders. The first exam was acquired before the initiation of treatment (baseline), the latter after the first cycle of chemotherapy (control). A set of texture parameters was selected and calculated for each exam: cluster shade, dissimilarity, entropy, and homogeneity. The p-values estimated for the pathologic complete responder (pCR) and non-pathologic complete responder (pNCR) patients show that homogeneity (p = 0.027) and cluster shade (p = 0.0013) are the parameters most relevant to pathologic complete response.
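The four co-occurrence features named above can be reproduced with scikit-image; a small sketch with illustrative settings (not the study's exact protocol):

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def texture_features(roi_u8):
        # roi_u8: 2-D uint8 tumour region taken from a breast MR slice
        glcm = graycomatrix(roi_u8, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        p = glcm[:, :, 0, 0]
        i, j = np.indices(p.shape)
        mu = (i * p).sum() + (j * p).sum()
        return {
            "dissimilarity": graycoprops(glcm, "dissimilarity")[0, 0],
            "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
            "entropy": -(p[p > 0] * np.log2(p[p > 0])).sum(),  # not built into graycoprops
            "cluster_shade": (((i + j - mu) ** 3) * p).sum(),
        }

    feats = texture_features((np.random.rand(64, 64) * 255).astype(np.uint8))  # stand-in ROI

Comparing such values between baseline and control exams across patients, e.g. with a rank-sum test, mirrors the p-value analysis reported above.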
Interactive 3D Virtual Colonoscopic Navigation For Polyp Detection From CT Images
Joseph, Jinu
Kumar, Rajesh
Chandran, Pournami S
Vidya, PV
Procedia Computer Science2017Journal Article, cited 0 times
Website
colon cancer
endoscopy
polyp
volume rendering
3D thinning
surface rendering
Dijkstra's algorithm
principal curvature
Gaussian curvature
Lung Cancer Detection using CT Scan Images
Makaju, Suren
Prasad, PWC
Alsadoon, Abeer
Singh, AK
Elchouemi, A
Procedia Computer Science2018Journal Article, cited 5 times
Website
LIDC-IDRI
Computer aided detection (CADe)
LUNG
Swift Pre Rendering Volumetric Visualization of Magnetic Resonance Cardiac Images based on Isosurface Technique
Patel, Nikhilkumar P
Parmar, Shankar K
Jain, Kavindra R
Procedia Technology2014Journal Article, cited 0 times
Website
CHEST
Segmentation
Algorithm Development
Magnetic resonance imaging (MRI) is a medical imaging procedure that uses strong magnetic fields and radio waves to produce cross-sectional images of organs and internal structures of the body. Three-dimensional (3D) models of CT data are available and are used by almost all radiologists for pre-diagnosis, but for MRI there is still scope for researchers to improve 3D modeling. Two-dimensional images taken from different viewpoints are reconstructed in 3D, a procedure known as the rendering process. In this paper, we propose a rendering approach for medical (cardiac MRI) images based on iso values and the number of marching cubes. Designers can place colors and textures over the 3D model to make it look realistic, which makes it easier for people to observe and visualize a substance. The algorithm works on triangulation methods with various iso values and different combinations of marching cube pairs. As a result of applying the marching cubes concept, volumetric data (voxels) are generated; the voxels are then arranged and projected to visualize a 3D scene. Approximate processing times for various iso values are also compared in this paper.
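For the isosurface step, scikit-image provides a marching-cubes implementation; a minimal sketch (variable names and iso values are illustrative):

    import numpy as np
    from skimage import measure

    def extract_isosurface(volume, iso_value, spacing=(1.0, 1.0, 1.0)):
        # volume: 3-D array of stacked 2-D cardiac MR slices;
        # triangulate the surface where intensity crosses iso_value
        verts, faces, normals, values = measure.marching_cubes(
            volume, level=iso_value, spacing=spacing)
        return verts, faces

    vol = np.random.rand(32, 32, 32) * 255          # stand-in for a real MR volume
    surfaces = {iso: extract_isosurface(vol, iso) for iso in (100, 150, 200)}

The returned triangle mesh (verts, faces) can then be shaded and textured by any standard surface renderer, and timing this loop over iso values reproduces the kind of processing-time comparison described above.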
Strong semantic segmentation for Covid-19 detection: Evaluating the use of deep learning models as a performant tool in radiography
Allioui, Hanane
Mourdi, Youssef
Sadgal, Mohamed
2022Journal Article, cited 0 times
LCTSC
INTRODUCTION: With the increasing number of Covid-19 cases and rising care costs, chest diseases have gained increasing interest in several communities, particularly the medical and computer vision communities. Clinical and analytical exams are widely recognized techniques for diagnosing and handling Covid-19 cases, but strong detection tools can help avoid damage to chest tissues. The proposed method enhances the semantic segmentation process by combining complementary deep learning (DL) modules to increase consistency. Based on Covid-19 CT images, this work hypothesized that a novel model for semantic segmentation might be able to extract distinctive graphical features of Covid-19, affording an accurate clinical diagnosis while optimizing the classical test and saving time.
METHODS: CT images were collected considering different cases (normal chest CT, pneumonia, typical viral causes, and Covid-19 cases). The study presents an advanced DL method to deal with chest semantic segmentation issues. The approach employs a modified version of the U-net to enable and support Covid-19 detection from the studied images.
RESULTS: The validation tests demonstrated competitive performance: a precision of 90.96% ± 2.5, an F-score of 91.08% ± 3.2, an accuracy of 93.37% ± 1.2, a sensitivity of 96.88% ± 2.8 and a specificity of 96.91% ± 2.3. In addition, the visual segmentation results are very close to the ground truth.
CONCLUSION: The findings of this study reveal the proof-of-principle for using cooperative components to strengthen the semantic segmentation modules for effective and truthful Covid-19 diagnosis.
IMPLICATIONS FOR PRACTICE: This paper has highlighted that a DL-based approach with several modules may provide strong support for radiographers and physicians, and that further use of DL is required to design and implement performant automated vision systems to detect chest diseases.
4DCT imaging to assess radiomics feature stability: An investigation for thoracic cancers
Larue, Ruben THM
Van De Voorde, Lien
van Timmeren, Janna E
Leijenaar, Ralph TH
Berbée, Maaike
Sosef, Meindert N
Schreurs, Wendy MJ
van Elmpt, Wouter
Lambin, Philippe
Radiotherapy and Oncology2017Journal Article, cited 7 times
Website
RIDER Lung CT
4D-Lung
Radiomics
ESOPHAGUS
LUNG
Computed Tomography (CT)
BACKGROUND AND PURPOSE: Quantitative tissue characteristics derived from medical images, also called radiomics, contain valuable prognostic information in several tumour sites. The large number of features available increases the risk of overfitting. Typically, test-retest CT scans are used to reduce dimensionality and select robust features. However, these scans are not always available. We propose to use different phases of respiratory-correlated 4D CT scans (4DCT) as an alternative. MATERIALS AND METHODS: In test-retest CT scans of 26 non-small cell lung cancer (NSCLC) patients and 4DCT scans (8 breathing phases) of 20 NSCLC and 20 oesophageal cancer patients, 1045 radiomics features of the primary tumours were calculated. A concordance correlation coefficient (CCC) >0.85 was used to identify robust features. Correlation with prognostic value was tested using univariate Cox regression in 120 oesophageal cancer patients. RESULTS: Features based on unfiltered images demonstrated greater robustness than wavelet-filtered features. In total, 63/74 (85%) unfiltered features and 268/299 (90%) wavelet features stable in the 4D-lung dataset were also stable in the test-retest dataset. In oesophageal cancer, 397/1045 (38%) features were robust, of which 108 features were significantly associated with overall survival. CONCLUSION: 4DCT scans can be used as an alternative way to eliminate unstable radiomics features as a first step in a feature selection procedure. Feature robustness is tumour-site specific and independent of prognostic value.
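The robustness criterion is a single formula; a sketch of Lin's concordance correlation coefficient and the CCC > 0.85 filter (variable names hypothetical):

    import numpy as np

    def ccc(x, y):
        # Lin's concordance correlation coefficient between repeat measurements
        x, y = np.asarray(x, float), np.asarray(y, float)
        cov = ((x - x.mean()) * (y - y.mean())).mean()
        return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    feature_names = ["f1", "f2", "f3"]                              # stand-in features
    phase_0 = {f: np.random.rand(20) for f in feature_names}        # per-patient values, phase 1
    phase_50 = {f: phase_0[f] + 0.05 * np.random.rand(20) for f in feature_names}
    robust = [f for f in feature_names if ccc(phase_0[f], phase_50[f]) > 0.85]

Here phase_0 and phase_50 stand for feature vectors computed on two 4DCT breathing phases (or on test and retest scans).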
CT-based radiomic features predict tumor grading and have prognostic value in patients with soft tissue sarcomas treated with neoadjuvant radiation therapy
Peeken, J. C.
Bernhofer, M.
Spraker, M. B.
Pfeiffer, D.
Devecka, M.
Thamer, A.
Shouman, M. A.
Ott, A.
Nusslin, F.
Mayr, N. A.
Rost, B.
Nyflot, M. J.
Combs, S. E.
Radiother Oncol2019Journal Article, cited 0 times
Website
Radiomics
Soft-tissue Sarcoma
Machine learning
PURPOSE: In soft tissue sarcoma (STS) patients, rates of systemic progression remain high and survival comparatively poor despite low local recurrence rates. In this work, we investigated whether quantitative imaging features ("radiomics") of radiotherapy planning CT scans carry prognostic value for pre-therapeutic risk assessment. METHODS: CT scans, tumor grade, and clinical information were collected from three independent retrospective cohorts of 83 (TUM), 87 (UW) and 51 (McGill) STS patients, respectively. After manual segmentation and preprocessing, 1358 radiomic features were extracted. Feature reduction and machine learning modeling for the prediction of grading, overall survival (OS), distant (DPFS) and local (LPFS) progression-free survival were performed, followed by external validation. RESULTS: Radiomic models were able to differentiate grade 3 from non-grade 3 STS (area under the receiver operating characteristic curve (AUC): 0.64). The radiomic models were able to predict OS (C-index: 0.73), DPFS (C-index: 0.68) and LPFS (C-index: 0.77) in the validation cohort. A combined clinical-radiomics model showed the best prediction for OS (C-index: 0.76). The radiomic scores were significantly associated with outcome in univariate and multivariate Cox regression and allowed for significant risk stratification for all three endpoints. CONCLUSION: This is the first report demonstrating a prognostic potential and tumor grading differentiation by CT-based radiomics.
Proton vs photon: A model-based approach to patient selection for reduction of cardiac toxicity in locally advanced lung cancer
Teoh, S.
Fiorini, F.
George, B.
Vallis, K. A.
Van den Heuvel, F.
Radiother Oncol2019Journal Article, cited 0 times
4D-Lung
Segmentation
Models
LUNG
PURPOSE/OBJECTIVE: To use a model-based approach to identify a sub-group of patients with locally advanced lung cancer who would benefit from proton therapy compared to photon therapy for reduction of cardiac toxicity. MATERIAL/METHODS: Volumetric modulated arc photon therapy (VMAT) and robust-optimised intensity modulated proton therapy (IMPT) plans were generated for twenty patients with locally advanced lung cancer to give a dose of 70 Gy (relative biological effectiveness (RBE)) in 35 fractions. Cases were selected to represent a range of anatomical locations of disease. Contouring, treatment planning and organs-at-risk constraints followed the RTOG-1308 protocol. Whole-heart and substructure doses were compared. Risk estimates of grade ≥3 cardiac toxicity were calculated based on normal tissue complication probability (NTCP) models which incorporated dose metrics and patients' baseline risk factors (pre-existing heart disease (HD)). RESULTS: There was no statistically significant difference in target coverage between VMAT and IMPT. IMPT delivered lower doses to the heart and cardiac substructures (mean, heart V5 and V30, P<.05). In VMAT plans, there were statistically significant positive correlations between heart dose and the thoracic vertebral level that corresponded to the most inferior limit of the disease. The median level at which the superior aspect of the heart contour began was the T7 vertebra. There was a statistically significant difference in dose (mean, V5 and V30) to the heart and all substructures (except mean dose to the left coronary artery and V30 to the sino-atrial node) when disease overlapped with or was inferior to the T7 vertebra. In the presence of pre-existing HD and disease overlapping with or inferior to the T7 vertebra, the mean estimated relative risk reduction of grade ≥3 toxicities was 24-59%. CONCLUSION: IMPT is expected to reduce cardiac toxicity compared to VMAT by reducing dose to the heart and its substructures. Patients with both pre-existing heart disease and tumour and nodal spread overlapping with or inferior to the T7 vertebra are likely to benefit most from proton over photon therapy.
Comparing deep learning-based auto-segmentation of organs at risk and clinical target volumes to expert inter-observer variability in radiotherapy planning
Wong, Jordan
Fong, Allan
McVicar, Nevin
Smith, Sally
Giambattista, Joshua
Wells, Derek
Kolbeck, Carter
Giambattista, Jonathan
Gondara, Lovedeep
Alexander, Abraham
Radiother Oncol2019Journal Article, cited 0 times
Algorithm Development
LCTSC
Lung CT Segmentation Challenge 2017
TCGA-LUAD
TCGA-BLCA
TCGA-HNSC
TCGA-PRAD
Head-Neck Cetuximab
HNSCC
BACKGROUND: Deep learning-based auto-segmented contours (DC) aim to alleviate labour intensive contouring of organs at risk (OAR) and clinical target volumes (CTV). Most previous DC validation studies have a limited number of expert observers for comparison and/or use a validation dataset related to the training dataset. We determine if DC models are comparable to Radiation Oncologist (RO) inter-observer variability on an independent dataset. METHODS: Expert contours (EC) were created by multiple ROs for central nervous system (CNS), head and neck (H&N), and prostate radiotherapy (RT) OARs and CTVs. DCs were generated using deep learning-based auto-segmentation software trained by a single RO on publicly available data. Contours were compared using Dice Similarity Coefficient (DSC) and 95% Hausdorff distance (HD). RESULTS: Sixty planning CT scans had 2-4 ECs, for a total of 60 CNS, 53 H&N, and 50 prostate RT contour sets. The mean DC and EC contouring times were 0.4 vs 7.7 min for CNS, 0.6 vs 26.6 min for H&N, and 0.4 vs 21.3 min for prostate RT contours. There were minimal differences in DSC and 95% HD involving DCs for OAR comparisons, but more noticeable differences for CTV comparisons. CONCLUSIONS: The accuracy of DCs trained by a single RO is comparable to expert inter-observer variability for the RT planning contours in this study. Use of deep learning-based auto-segmentation in clinical practice will likely lead to significant benefits to RT planning workflow and resources.
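Both agreement metrics used here are short computations; a sketch assuming binary masks and surface point lists in millimetres:

    import numpy as np
    from scipy.spatial import cKDTree

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def hd95(pts_a, pts_b):
        # 95th-percentile symmetric surface distance between two contours
        d_ab = cKDTree(pts_b).query(pts_a)[0]   # each point of A to its nearest point in B
        d_ba = cKDTree(pts_a).query(pts_b)[0]
        return np.percentile(np.concatenate([d_ab, d_ba]), 95)

Note that this pools both directed distance sets before taking the percentile; published 95% HD definitions vary slightly on that point.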
Impact of internal target volume definition for pencil beam scanned proton treatment planning in the presence of respiratory motion variability for lung cancer: A proof of concept
Krieger, Miriam
Giger, Alina
Salomir, Rares
Bieri, Oliver
Celicanin, Zarko
Cattin, Philippe C
Lomax, Antony J
Weber, Damien C
Zhang, Ye
Radiotherapy and Oncology2020Journal Article, cited 0 times
Website
Proton Radiation Therapy
4D-Lung
Deep Learning Model for Automatic Contouring of Cardiovascular Substructures on Radiotherapy Planning CT Images: Dosimetric Validation and Reader Study based Clinical Acceptability Testing
Fernandes, Miguel Garrett
Bussink, Johan
Stam, Barbara
Wijsman, Robin
Schinagl, Dominic AX
Teuwen, Jonas
Monshouwer, René
Radiotherapy and Oncology2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
NSCLC-Cetuximab
RTOG-0617
Deep Learning
Radiotherapy
External validation of a CT-based radiomics signature in oropharyngeal cancer: Assessing sources of variation
Guevorguian, P.
Chinnery, T.
Lang, P.
Nichols, A.
Mattonen, S. A.
Radiother Oncol2022Journal Article, cited 0 times
OPC-Radiomics
Computed Tomography (CT)
Machine learning
Oropharyngeal cancer
Overall survival
Radiomics
Validation
BACKGROUND AND PURPOSE: Radiomics is a high-throughput approach that allows for quantitative analysis of imaging data for prognostic applications. Medical images are used in oropharyngeal cancer (OPC) diagnosis and treatment planning, and these images may contain prognostic information allowing for treatment personalization. However, the lack of validated models has been a barrier to the translation of radiomic research to the clinic. We hypothesize that a previously developed radiomics model for risk stratification in OPC can be validated in a local dataset. MATERIALS AND METHODS: The radiomics signature predicting overall survival incorporates features derived from the primary gross tumor volume of OPC patients treated with radiation +/- chemotherapy at a single institution (n = 343). Model fit, calibration, discrimination, and utility were evaluated. The signature was compared with a clinical model using overall stage and with a model incorporating both radiomics and clinical data. A model detecting dental artifacts on computed tomography images was also validated. RESULTS: The radiomics signature had a concordance index (C-index) of 0.66, comparable to the clinical model's C-index of 0.65. The combined model (C-index of 0.69, p = 0.024) significantly outperformed the clinical model, suggesting that radiomics provides added value. The dental artifact model demonstrated strong ability in detecting dental artifacts, with an area under the curve of 0.87. CONCLUSION: This work demonstrates model performance comparable to previous validation work and provides a framework for future independent and multi-center validation efforts. With sufficient validation, radiomic models have the potential to improve traditional systems of risk stratification, treatment personalization and patient outcomes.
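Harrell's C-index, the headline metric here, is available in the lifelines package; a sketch with hypothetical column names and toy data:

    import pandas as pd
    from lifelines.utils import concordance_index

    df = pd.DataFrame({"os_months": [12, 30, 8, 41],          # toy survival data
                       "radiomic_risk": [0.9, 0.2, 1.4, 0.1],
                       "death_observed": [1, 0, 1, 1]})
    # lifelines expects higher scores to mean longer survival, so negate a
    # score in which higher values mean higher risk
    c = concordance_index(df["os_months"], -df["radiomic_risk"], df["death_observed"])

A C-index of 0.5 corresponds to chance-level ranking of survival times, which puts the 0.65-0.69 values reported above in context.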
Computed tomography and radiation dose images-based deep-learning model for predicting radiation pneumonitis in lung cancer patients after radiation therapy
Zhang, Zhen
Wang, Zhixiang
Luo, Tianchen
Yan, Meng
Dekker, Andre
De Ruysscher, Dirk
Traverso, Alberto
Wee, Leonard
Zhao, Lujun
Radiotherapy and Oncology2023Journal Article, cited 0 times
NSCLC-Cetuximab
PURPOSE: To develop a deep learning model that combines CT and radiation dose (RD) images to predict the occurrence of radiation pneumonitis (RP) in lung cancer patients who received radical (chemo)radiotherapy.
METHODS: CT, RD images and clinical parameters were obtained from 314 retrospectively collected patients (training set) and 35 prospectively collected patients (test-set-1) who were diagnosed with lung cancer and received radical radiotherapy in the dose range of 50-70 Gy. Another 194 (60 Gy group, test-set-2) and 158 (74 Gy group, test-set-3) patients from the clinical trial RTOG 0617 were used for external validation. A ResNet architecture was used to develop a prediction model that combines CT and RD features. Thereafter, the CT and RD weights were adjusted by using 40 patients from test-set-2 or 3 to accommodate cohorts with different clinical settings or dose delivery patterns. Visual interpretation was implemented using a gradient-weighted class activation map (Grad-CAM) to observe the area of model attention during the prediction process. To improve usability, ready-to-use online software was developed.
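A minimal PyTorch sketch of a two-branch ResNet fusing CT and RD inputs conveys the general design (an illustration, not the authors' exact architecture):

    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class CTDoseNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.ct, self.rd = resnet18(weights=None), resnet18(weights=None)
            for branch in (self.ct, self.rd):
                # single-channel medical images instead of 3-channel RGB
                branch.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)
                branch.fc = nn.Identity()        # expose the 512-d feature vector
            self.head = nn.Linear(1024, 1)       # fused features -> RP logit

        def forward(self, ct, rd):
            return self.head(torch.cat([self.ct(ct), self.rd(rd)], dim=1))

    logit = CTDoseNet()(torch.randn(2, 1, 224, 224), torch.randn(2, 1, 224, 224))

Re-weighting the CT and RD contributions on 40 subjects from a new cohort, as described above, amounts to fine-tuning part of such a graph while keeping the rest frozen.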
RESULTS: The discriminative ability of a baseline trained model had an AUC of 0.83 for test-set-1, 0.55 for test-set-2, and 0.63 for test-set-3. After adjusting CT and RD weights of the model using a subset of the RTOG-0617 subjects, the discriminatory power of test-set-2 and 3 improved to AUC 0.65 and AUC 0.70, respectively. Grad-CAM showed the regions of interest to the model that contribute to the prediction of RP.
CONCLUSION: A novel deep learning approach combining CT and RD images can effectively and accurately predict the occurrence of RP, and this model can be adjusted easily to fit new cohorts.
A subregion-based prediction model for local-regional recurrence risk in head and neck squamous cell carcinoma
Pan, Ziqi
Men, Kuo
Liang, Bin
Song, Zhiyue
Wu, Runye
Dai, Jianrong
Radiother Oncol2023Journal Article, cited 0 times
Website
Head-Neck-PET-CT
Dosiomics
Head and neck cancer
Prognostic model
Radiomics
Radiotherapy
BACKGROUND AND PURPOSE: Given that the intratumoral heterogeneity of head and neck squamous cell carcinoma may be related to the local control rate of radiotherapy, the aim of this study was to construct a subregion-based model that can predict the risk of local-regional recurrence, and to quantitatively assess the relative contribution of subregions. MATERIALS AND METHODS: The CT images, PET images, dose images and GTVs of 228 patients with head and neck squamous cell carcinoma from four different institutions of The Cancer Imaging Archive (TCIA) were included in the study. A supervoxel segmentation algorithm called maskSLIC was used to generate individual-level subregions. After extracting 1781 radiomics and 1767 dosiomics features from the subregions, an attention-based multiple instance risk prediction model (MIR) was established. The GTV model was developed based on the whole tumour area and was used to compare prediction performance with the MIR model. Furthermore, the MIR-Clinical model was constructed by integrating the MIR model with clinical factors. Subregional analysis was carried out through the Wilcoxon test to find the differential radiomic features between the highest- and lowest-weighted subregions. RESULTS: Compared with the GTV model, the C-index of the MIR model was significantly increased from 0.624 to 0.721 (Wilcoxon test, p < 0.0001). When the MIR model was combined with clinical factors, the C-index further increased to 0.766. Subregional analysis showed that for LR patients, the top three differential radiomic features between the highest- and lowest-weighted subregions were GLRLM_ShortRunHighGrayLevelEmphasis, GLRLM_HighGrayLevelRunEmphasis and GLRLM_LongRunHighGrayLevelEmphasis. CONCLUSION: This study developed a subregion-based model that can predict the risk of local-regional recurrence and quantitatively assess relevant subregions, which may provide technical support for precision radiotherapy in head and neck squamous cell carcinoma.
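scikit-image's SLIC implementation accepts a mask argument that restricts supervoxels to a region, which is essentially the maskSLIC step described; a sketch with illustrative parameters:

    import numpy as np
    from skimage.segmentation import slic

    ct_volume = np.random.rand(40, 64, 64)                  # stand-in CT volume
    gtv_mask = np.zeros((40, 64, 64), dtype=bool)           # stand-in tumour mask
    gtv_mask[10:30, 20:44, 20:44] = True

    # labels > 0 mark supervoxels inside the GTV; 0 marks everything outside
    subregions = slic(ct_volume, n_segments=50, compactness=0.1,
                      mask=gtv_mask, channel_axis=None)

Each labelled supervoxel then becomes one 'instance' whose radiomic and dosiomic feature vector feeds the attention-based multiple instance model.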
Deep learning for contour quality assurance for RTOG 0933: In-silico evaluation
Porter, E. M.
Vu, C.
Sala, I. M.
Guerrero, T.
Siddiqui, Z. A.
Radiother Oncol2024Journal Article, cited 0 times
Website
GammaKnife-Hippocampal
Radiotherapy
Machine learning
Quality assurance
PURPOSE: To validate a CT-based deep learning (DL) hippocampal segmentation model trained on a single-institutional dataset and explore its utility for multi-institutional contour quality assurance (QA). METHODS: A DL model was trained to contour hippocampi from a dataset generated by an institutional observer (IO) contouring on brain MRIs from a single-institution cohort. The model was then evaluated on the RTOG 0933 dataset by comparing the treating physician (TP) contours to blinded IO and DL contours using Dice and Hausdorff distance (HD) agreement metrics, as well as by evaluating differences in dose to the hippocampi when TP vs. IO vs. DL contours are used for planning. The specificity and sensitivity of the DL model to capture planning discrepancies was quantified using criteria of HD > 7 mm and Dmax hippocampi > 17 Gy. RESULTS: The DL model showed greater agreement with IO contours compared to TP contours (DL:IO L/R Dice 74%/73%, HD 4.86/4.74; DL:TP L/R Dice 62%/65%, HD 7.23/6.94, all p < 0.001). Thirty percent of contours and 53% of dose plans failed QA. The DL model achieved an AUC L/R of 0.80/0.79 on the contour QA task via Hausdorff comparison and an AUC of 0.91 via Dmax comparison. The false negative rate was 17.2%/20.5% (contours) and 5.8% (dose). False negative cases tended to demonstrate a higher DL:IO Dice agreement (L/R p = 0.42/0.03) and better qualitative visual agreement compared with true positive cases. CONCLUSION: Our study demonstrates the feasibility of using a single-institutional DL model to perform contour QA on a multi-institutional trial for the task of hippocampal segmentation.
Correlation of dynamic blood dose with clinical outcomes in radiotherapy for head-and-neck cancer
Tattenberg, Sebastian
Shin, Jungwook
Hoehr, Cornelia
Sung, Wonmo
Radiotherapy and Oncology2024Journal Article, cited 0 times
Website
HEAD-NECK-PET-CT
Radiotherapy
Radiation source personalization for nanoparticle-enhanced radiotherapy using dynamic contrast-enhanced MRI in the treatment planning process
Díaz-Galindo, C.A.
Garnica-Garza, H.M.
2024Journal Article, cited 0 times
GLIS-RT
Radiotherapy
MRI
Nanoparticle-enhanced radiotherapy offers the potential to selectively increase the radiation dose imparted to the tumor while at the same time sparing the healthy structures around it. Among the recommendations of an interdisciplinary group for the clinical translation of this treatment modality is the development of methods to quantify the effects that the nanoparticle concentration has on the radiation dosimetry and to incorporate these effects into the treatment planning process. In this work, using Monte Carlo simulations and dynamic contrast-enhanced MRI images, treatment plans for nanoparticle-enhanced radiotherapy are calculated in order to evaluate the effects that realistic distributions of the nanoparticles have on the resultant plans and to devise treatment strategies to account for these effects, including the selection of the proper x-ray source configuration in terms of energy and collimation. Dynamic contrast-enhanced MRI studies were obtained for two treatment sites, namely brain and breast. A model to convert the MRI signal to contrast agent concentration was applied to each set of images. Two treatment modalities, 3D conformal radiotherapy and Stereotactic Body Radiation Therapy, were evaluated at three different x-ray spectra, namely 6 MV from a linear accelerator, and 110 kVp and 220 kVp from a tungsten target. For the breast patient, as the nanoparticle distribution varies markedly with time, the treatment plans were obtained at two different times after administration. It was determined that maximum doses to the healthy structures around the tumor are mostly determined by the minimum nanoparticle concentration in the tumor. The presence of highly hypoxic or necrotic tissue, which fails to accumulate the nanoparticles, or leakage of the contrast agent into the surrounding healthy tissue, makes irradiation with conventional conformal radiotherapy unfeasible for kilovoltage beam energies, as the uniform beam apertures lack the ability to compensate for the non-uniform distribution of the nanoparticles. Therefore, proper quantification of the nanoparticle distribution not only in the target volume but also in the surrounding tissues and structures is crucial for the proper planning of nanoparticle-enhanced radiotherapy, and a treatment delivery with a high degree of freedom, such as small-field stereotactic body radiotherapy, should be the method of choice for this treatment modality.
Realistic extension of partial-body pediatric CT for whole-body organ dose estimation in radiotherapy patients
While modern radiotherapy treatments can deliver a localized radiation dose to the tumor, healthy tissues at distance are inevitably exposed to scatter radiation that has been linked to late health effects such as second cancers. Quantifying the radiation dose received by tissues beyond the target is critical for research on such late health effects. However, the typical radiotherapy planning CT only covers part of the body near the target, and the organs of interest for late effects research are not always included. Therefore, the purpose of this study was to develop a method for extending a partial-body pediatric CT scan for estimating organ doses beyond the original CT scan range. Our method uses a library of CT images for 359 pediatric patients from which a candidate patient is selected for providing surrogate anatomy. The most appropriate surrogate patient images to use for the extension are determined based on patient demographic information pulled from the image metadata. Image registration is performed through comparison of the patients' skeletons. The images showing closest similarity are adapted by a transformation method and appended to the original partial-body CT, and a new structure file containing organ contours is written; we refer to this extended CT scan with organ contours as the Anatomical Predictive Extension (APE). To test the APE method, three patients with nearly full-body anatomy were extracted from the library, and a continuous subset of the images was removed to simulate a partial-body CT. The APE method was then applied to the partial-body CT to create extended anatomies, with the original images serving as ground truth. Radiotherapy plans were simulated using the Monte Carlo code XVMC on both the original and APE anatomies. Three pediatric radiotherapy cases were considered for performance testing: (1) head CT for a simulated brain tumor extended to the chest; (2) superior chest CT for simulated Hodgkin's lymphoma extended to the inferior chest; (3) pelvic CT for Wilms tumor extended to the superior chest. Three geometric metrics (Dice similarity coefficient, overlap fraction, and volume similarity) were calculated to quantify the differences between the original patient and the extended anatomies. In all cases, calculated organ doses showed good agreement between the original and APE anatomies. The average absolute relative dose difference across all organs considered for the three cases was 11%, 12% and 15%, respectively. The APE method is useful for estimating radiation doses to peripheral organs in support of research on late effects following radiotherapy.
Radiation dose in CT-based active surveillance of small renal masses may be reduced by 75%: A retrospective exploratory multiobserver study
Borgbjerg, Jens
Larsen, Nis Elbrønd
Salte, Ivar Mjåland
Grønli, Niklas Revold
Klæstrup, Elise
Negård, Anne
2023Journal Article, cited 0 times
C4KC-KiTS
A serial image analysis architecture with positron emission tomography using machine learning combined for the detection of lung cancer
Guzman Ortiz, S.
Hurtado Ortiz, R.
Jara Gavilanes, A.
Avila Faican, R.
Parra Zambrano, B.
Rev Esp Med Nucl Imagen Mol (Engl Ed)2024Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
Positron Emission Tomography (PET)
Lung cancer
Machine learning
INTRODUCTION AND OBJECTIVES: Lung cancer has the second-highest incidence rate and the highest mortality rate of any cancer in the world. Machine learning applied to imaging tests such as positron emission tomography/computed tomography (PET/CT) has become a fundamental tool for the early and accurate detection of cancer. The objective of this study was to propose an image analysis architecture (PET/CT), ordered in phases, built on ensemble or combined machine learning methods for the early detection of lung cancer from PET/CT images. MATERIAL AND METHODS: A retrospective observational study was conducted utilizing a public dataset entitled "A large-scale CT and PET/CT dataset for lung cancer diagnosis." Various imaging modalities were employed, including CT, PET, and fused PET/CT images. The architecture of this study comprised the following phases: 1. image loading or collection, 2. image selection, 3. image transformation, and 4. balancing the frequency distribution of image classes. Predictive models for lung cancer detection from PET/CT images included: a) the stacking model, which used random forest and support vector machine (SVM) as base models complemented by a logistic regression model, and b) the boosting model, which employed Adaptive Boosting (AdaBoost) for comparison with the stacking model. Quality metrics used for evaluation included accuracy, precision, recall, and F1-score. RESULTS: This study showed an overall performance of 94% with the stacking method and 77% with the boosting method. CONCLUSIONS: The stacking method proved to be a high-performing, high-quality model for lung cancer detection from PET/CT images.
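The described stacking ensemble maps directly onto scikit-learn; a sketch with toy data and illustrative hyperparameters:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=20, random_state=0)  # stand-in features
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=200)),
                    ("svm", make_pipeline(StandardScaler(), SVC(probability=True)))],
        final_estimator=LogisticRegression(),   # combines the base models' predictions
        cv=5)
    stack.fit(X_train, y_train)
    boost = AdaBoostClassifier().fit(X_train, y_train)      # the comparison model
    print(stack.score(X_test, y_test), boost.score(X_test, y_test))

Accuracy, precision, recall, and F1 can then be read from sklearn.metrics.classification_report on held-out predictions.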
Artificial intelligence in oncology: From bench to clinic
Elkhader, Jamal
Elemento, Olivier
2021Journal Article, cited 0 times
PROSTATE-DIAGNOSIS
PROSTATE-MRI
PROSTATEx
In the past few years, Artificial Intelligence (AI) techniques have been applied to almost every facet of oncology, from basic research to drug development and clinical care. In the clinical arena where AI has perhaps received the most attention, AI is showing promise in enhancing and automating image-based diagnostic approaches in fields such as radiology and pathology. Robust AI applications, which retain high performance and reproducibility over multiple datasets, extend from predicting indications for drug development to improving clinical decision support using electronic health record data. In this article, we review some of these advances. We also introduce common concepts and fundamentals of AI and its various uses, along with its caveats, to provide an overview of the opportunities and challenges in the field of oncology. Leveraging AI techniques productively to provide better care throughout a patient's medical journey can fuel the predictive promise of precision medicine.
Efficient copyright protection for three CT images based on quaternion polar harmonic Fourier moments
Xia, Zhiqiu
Wang, Xingyuan
Li, Xiaoxiao
Wang, Chunpeng
Unar, Salahuddin
Wang, Mingxu
Zhao, Tingting
Signal Processing2019Journal Article, cited 0 times
Algorithm Development
watermarking
Overall survival prediction in glioblastoma multiforme patients from volumetric, shape and texture features using machine learning
Sanghani, Parita
Ang, Beng Ti
King, Nicolas Kon Kam
Ren, Hongliang
2018Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Glioblastoma multiforme (GBM) is an aggressive brain tumor that leads to poor overall survival (OS). OS prediction for GBM patients provides useful information for surgical and treatment planning. Radiomics research attempts to predict disease prognosis, thus providing beneficial information for personalized treatment, from a variety of imaging features extracted from multiple MR images. In this study, MR image-derived texture features, tumor shape and volumetric features, and patient age were obtained for 163 patients. OS group prediction was performed for both 2-class (short and long) and 3-class (short, medium and long) survival groups. A support vector machine classification-based recursive feature elimination method was used to perform feature selection. The performance of the classification model was assessed using 5-fold cross-validation. The 2-class and 3-class OS group prediction accuracies obtained were 98.7% and 88.95%, respectively. The shape features used in this work have been evaluated for OS prediction of GBM patients for the first time. The feature selection and prediction scheme implemented in this study yielded high accuracy for both 2-class and 3-class OS group predictions. This study was performed using routinely acquired MR images for GBM patients, thus making the translation of this work into a clinical setup convenient.
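The feature-selection scheme (SVM-based recursive feature elimination with 5-fold cross-validation) has a direct scikit-learn counterpart; a sketch with toy data and an assumed feature count:

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=163, n_features=100, random_state=0)  # stand-in
    svc = SVC(kernel="linear")                        # linear kernel exposes coef_ for ranking
    rfe = RFE(svc, n_features_to_select=20, step=1)   # 20 is illustrative
    X_sel = rfe.fit_transform(X, y)
    acc = cross_val_score(svc, X_sel, y, cv=5).mean()

For the 3-class setting, y simply holds three survival-group labels; SVC handles the multiclass case internally via one-vs-one voting.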
NCI Workshop Report: Clinical and Computational Requirements for Correlating Imaging Phenotypes with Genomics Signatures
Colen, Rivka
Foster, Ian
Gatenby, Robert
Giger, Mary Ellen
Gillies, Robert
Gutman, David
Heller, Matthew
Jain, Rajan
Madabhushi, Anant
Madhavan, Subha
Napel, Sandy
Rao, Arvind
Saltz, Joel
Tatum, James
Verhaak, Roeland
Whitman, Gary
Translational Oncology2014Journal Article, cited 39 times
Website
Multi-modal imaging
Radiogenomics
Radiomics
TCGA-GBM
TCGA-BRCA
Pathomics
The National Cancer Institute (NCI) Cancer Imaging Program organized two related workshops on June 26-27, 2013, entitled "Correlating Imaging Phenotypes with Genomics Signatures Research" and "Scalable Computational Resources as Required for Imaging-Genomics Decision Support Systems." The first workshop focused on clinical and scientific requirements, exploring our knowledge of phenotypic characteristics of cancer biological properties to determine whether the field is sufficiently advanced to correlate with imaging phenotypes that underpin genomics and clinical outcomes, and exploring new scientific methods to extract phenotypic features from medical images and relate them to genomics analyses. The second workshop focused on computational methods, exploring the informatics and computational requirements to extract phenotypic features from medical images and relate them to genomics analyses, and to improve the accessibility and speed of dissemination of existing NIH resources. These workshops linked clinical and scientific requirements of currently known phenotypic and genotypic cancer biology characteristics with imaging phenotypes that underpin genomics and clinical outcomes. The group generated a set of recommendations to NCI leadership and the research community that encourage and support development of the emerging radiogenomics research field to address short- and longer-term goals in cancer research.
Determining the variability of lesion size measurements from CT patient data sets acquired under “no change” conditions
McNitt-Gray, Michael F
Kim, Grace Hyun
Zhao, Binsheng
Schwartz, Lawrence H
Clunie, David
Cohen, Kristin
Petrick, Nicholas
Fenimore, Charles
Lu, ZQ John
Buckler, Andrew J
Translational Oncology2015Journal Article, cited 0 times
RIDER Lung CT
2D and 3D CT Radiomics Features Prognostic Performance Comparison in Non-Small Cell Lung Cancer
Shen, Chen
Liu, Zhenyu
Guan, Min
Song, Jiangdian
Lian, Yucheng
Wang, Shuo
Tang, Zhenchao
Dong, Di
Kong, Lingfei
Wang, Meiyun
Translational Oncology2017Journal Article, cited 10 times
Website
NSCLC-Radiomics
LungCT-Diagnosis
non-small cell lung cancer
2D and 3D radiomics features
Harrell's concordance index (C-index)
Kaplan-Meier and Cox hazard survival analyses
Akaike's information criteria (AIC)
Radiomics
Age-related copy number variations and expression levels of F-box protein FBXL20 predict ovarian cancer prognosis
Zheng, S.
Fu, Y.
Transl Oncol2020Journal Article, cited 0 times
Website
TCGA-OV
Radiogenomics
About 70% of ovarian cancer (OvCa) cases are diagnosed at advanced stages (stage III/IV), and only 20-40% of these patients survive over 5 years after diagnosis. A reliable screening marker could enable a paradigm shift in OvCa early diagnosis and risk stratification. Age is one of the most significant risk factors for OvCa. Older women have much higher rates of OvCa diagnosis and poorer clinical outcomes. In this article, we studied the correlation between aging and genetic alterations in The Cancer Genome Atlas Ovarian Cancer dataset. We demonstrated that copy number variations (CNVs) and expression levels of the F-Box and Leucine-Rich Repeat Protein 20 (FBXL20), a substrate-recognizing protein in the SKP1-Cullin1-F-box-protein E3 ligase, can predict OvCa overall survival, disease-free survival and progression-free survival. More importantly, FBXL20 copy number loss predicts diagnosis of OvCa at a younger age, with over 60% of patients in that subgroup having OvCa diagnosed before the age of 60 years. Clinicopathological studies further demonstrated malignant histological and radiographical features associated with elevated FBXL20 expression levels. This study has thus identified a potential biomarker for OvCa prognosis.
Prediction of post-radiotherapy locoregional progression in HPV-associated oropharyngeal squamous cell carcinoma using machine-learning analysis of baseline PET/CT radiomics
Haider, S. P.
Sharaf, K.
Zeevi, T.
Baumeister, P.
Reichel, C.
Forghani, R.
Kann, B. H.
Petukhova, A.
Judson, B. L.
Prasad, M. L.
Liu, C.
Burtness, B.
Mahajan, A.
Payabvash, S.
Transl Oncol2020Journal Article, cited 0 times
Website
HEAD AND NECK
PET/CT
Radiomics
Head-Neck-PET-CT
HNSCC
Locoregional failure remains a therapeutic challenge in oropharyngeal squamous cell carcinoma (OPSCC). We aimed to devise novel objective imaging biomarkers for prediction of locoregional progression in HPV-associated OPSCC. Following manual lesion delineation, 1037 PET and 1037 CT radiomic features were extracted from each primary tumor and metastatic cervical lymph node on baseline PET/CT scans. Applying random forest machine-learning algorithms, we generated radiomic models for censoring-aware locoregional progression prognostication (evaluated by Harrell's C-index) and risk stratification (evaluated by Kaplan-Meier analysis). A total of 190 patients were included; an optimized model yielded a median (interquartile range) C-index of 0.76 (0.66-0.81; p = 0.01) in prognostication of locoregional progression, using combined PET/CT radiomic features from primary tumors. Radiomics-based risk stratification reliably identified patients at risk for locoregional progression within 2-, 3-, 4-, and 5-year follow-up intervals, with log-rank p-values of 0.003, 0.001, 0.02, and 0.006 in Kaplan-Meier analysis, respectively. Our results suggest PET/CT radiomic biomarkers can predict post-radiotherapy locoregional progression in HPV-associated OPSCC. Pending validation in large, independent cohorts, such objective biomarkers may improve patient selection for treatment de-intensification trials in this prognostically favorable OPSCC entity, and eventually facilitate personalized therapy.
Radiomic profiling of clear cell renal cell carcinoma reveals subtypes with distinct prognoses and molecular pathways
Lin, P.
Lin, Y. Q.
Gao, R. Z.
Wen, R.
Qin, H.
He, Y.
Yang, H.
Transl Oncol2021Journal Article, cited 0 times
Website
TCGA-KIRC
Radiomics
KIDNEY
Clear cell renal cell carcinoma (ccRCC)
Random Forest
Classification
BACKGROUND: To identify radiomic subtypes of clear cell renal cell carcinoma (ccRCC) patients with distinct clinical significance and molecular characteristics reflective of the heterogeneity of ccRCC. METHODS: Quantitative radiomic features of ccRCC were extracted from preoperative CT images of 160 ccRCC patients. Unsupervised consensus cluster analysis was performed to identify robust radiomic subtypes based on these features. The Kaplan-Meier method and chi-square test were used to assess the different clinicopathological characteristics and gene mutations among the radiomic subtypes. Subtype-specific marker genes were identified, and gene set enrichment analyses were performed to reveal the specific molecular characteristics of each subtype. Moreover, a gene expression-based classifier of radiomic subtypes was developed using the random forest algorithm and tested in another independent cohort (n = 101). RESULTS: Radiomic profiling revealed three ccRCC subtypes with distinct clinicopathological features and prognoses. VHL, MUC16, FBN2, and FLG were found to have different mutation frequencies in these radiomic subtypes. In addition, transcriptome analysis revealed that the dysregulation of cell cycle-related pathways may be responsible for the distinct clinical significance of the obtained subtypes. The prognostic value of the radiomic subtypes was further validated in another independent cohort (log-rank P = 0.015). CONCLUSION: In the present multi-scale radiogenomic analysis of ccRCC, radiomics played a central role. Radiomic subtypes could help discern genomic alterations and non-invasively stratify ccRCC patients.
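Unsupervised consensus clustering, as used here, repeatedly clusters resampled patient subsets and records how often pairs of patients co-cluster; a compact sketch:

    import numpy as np
    from sklearn.cluster import KMeans

    def consensus_matrix(X, k, n_iter=100, frac=0.8, seed=0):
        # X: (n_patients, n_radiomic_features); returns pairwise co-clustering rates
        rng = np.random.default_rng(seed)
        n = len(X)
        co, cnt = np.zeros((n, n)), np.zeros((n, n))
        for _ in range(n_iter):
            idx = rng.choice(n, int(frac * n), replace=False)
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(X[idx])
            co[np.ix_(idx, idx)] += labels[:, None] == labels[None, :]
            cnt[np.ix_(idx, idx)] += 1
        return co / np.maximum(cnt, 1)

    M = consensus_matrix(np.random.rand(160, 50), k=3)   # stand-in feature matrix

Final subtypes are typically obtained by hierarchically clustering 1 - M, with k chosen where the consensus values are most cleanly bimodal.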
N6-methyladenosine-related lncRNAs in combination with computational histopathology and radiomics predict the prognosis of bladder cancer
Huang, Z.
Wang, G.
Wu, Y.
Yang, T.
Shao, L.
Yang, B.
Li, P.
Li, J.
Transl Oncol2022Journal Article, cited 0 times
Website
TIL-WSI-TCGA
BLADDER
Biomarker
Diagnosis
Prognosis
Radiomics
Urinary bladder neoplasms
Image color analysis
OBJECTIVES: To identify m6A-related lncRNAs associated with BC diagnosis and prognosis. METHODS: From the TCGA database, we obtained transcriptome data and corresponding clinical information (including histopathological and CT imaging data) for 408 patients. Bioinformatics, computational histopathology, and radiomics were used to identify and analyze diagnostic and prognostic biomarkers of m6A-related lncRNAs in BC. RESULTS: Three highly expressed m6A-related lncRNAs were significantly associated with the prognosis of BC. The BC samples were divided into two subgroups based on the expression of the 3 lncRNAs. The overall survival of patients in cluster 2 was significantly lower than that in cluster 1. The immune landscape results showed that the expression of PD-L1, T cells follicular helper, NK cells resting, and mast cells activated was significantly higher in cluster 2, while naive B cells, plasma cells, T cells regulatory (Tregs), and mast cells resting were significantly lower. Computational histopathology results showed a significantly higher percentage of tumor-infiltrating lymphocytes (TILs) in cluster 2. The radiomics results show that the 3 feature values of diagnostics image-original minimum, diagnostics image-original maximum, and original GLCM inverse variance are significantly higher in cluster 2. High expression of 2 bridge genes in the PPI network of 30 key immune genes predicts poorer disease-free survival, while immunohistochemistry showed that their expression levels were significantly higher in high-grade BC than in low-grade BC and normal tissue. CONCLUSION: Based on the results of the immune landscape, computational histopathology, and radiomics, these 3 m6A-related lncRNAs may be diagnostic and prognostic biomarkers for BC.
Enhanced Numerical Method for the Design of 3-D-Printed Holographic Acoustic Lenses for Aberration Correction of Single-Element Transcranial Focused Ultrasound
Marcelino Ferri
José M. Bravo
Javier Redondo
Juan V. Sánchez-Pérez
Ultrasound in Medicine & Biology2018Journal Article, cited 0 times
Website
TCIA General
Head
Computed Tomography (CT)
Ultrasound
The correction of transcranial focused ultrasound aberrations is a relevant issue for enhancing various non-invasive medical treatments. The emission through multi-element phased arrays has been the most widely accepted method to improve focusing in recent years; however, the number and size of transducers represent a bottleneck that limits the focusing accuracy of the technique. To overcome this limitation, a new disruptive technology, based on 3-D-printed acoustic lenses, has recently been proposed. As the submillimeter precision of the latest generation of 3-D printers has been proven to overcome the spatial limitations of phased arrays, a new challenge is to improve the accuracy of the numerical simulations required to design this type of ultrasound lens. In the study described here, we evaluated two improvements in the numerical model applied in previous works for the design of 3-D-printed lenses: (i) allowing the propagation of shear waves in the skull by means of its simulation as an isotropic solid and (ii) introduction of absorption into the set of equations that describes the dynamics of the wave in both fluid and solid media. The results obtained in the numerical simulations are evidence that the inclusion of both s-waves and absorption significantly improves focusing.
A Clinical System for Non-invasive Blood-Brain Barrier Opening Using a Neuronavigation-Guided Single-Element Focused Ultrasound Transducer
Pouliopoulos, Antonios N
Wu, Shih-Ying
Burgess, Mark T
Karakatsani, Maria Eleni
Kamimura, Hermes A S
Konofagou, Elisa E
Ultrasound Med Biol2020Journal Article, cited 3 times
Website
Head-Neck Cetuximab
Focused ultrasound (FUS)-mediated blood-brain barrier (BBB) opening is currently being investigated in clinical trials. Here, we describe a portable clinical system with a therapeutic transducer suitable for humans, which eliminates the need for in-line magnetic resonance imaging (MRI) guidance. A neuronavigation-guided 0.25-MHz single-element FUS transducer was developed for non-invasive clinical BBB opening. Numerical simulations and experiments were performed to determine the characteristics of the FUS beam within a human skull. We also validated the feasibility of BBB opening obtained with this system in two non-human primates using U.S. Food and Drug Administration (FDA)-approved treatment parameters. Ultrasound propagation through a human skull fragment caused 44.4 ± 1% pressure attenuation at a normal incidence angle, while the focal size decreased by 3.3 ± 1.4% and 3.9 ± 1.8% along the lateral and axial dimensions, respectively. Measured lateral and axial shifts were 0.5 ± 0.4 mm and 2.1 ± 1.1 mm, while simulated shifts were 0.1 ± 0.2 mm and 6.1 ± 2.4 mm, respectively. A 1.5-MHz passive cavitation detector transcranially detected cavitation signals of Definity microbubbles flowing through a vessel-mimicking phantom. T1-weighted MRI confirmed a 153 ± 5.5 mm³ BBB opening in two non-human primates at a mechanical index of 0.4, using Definity microbubbles at the FDA-approved dose for imaging applications, without edema or hemorrhage. In conclusion, we developed a portable system for non-invasive BBB opening in humans, which can be achieved at clinically relevant ultrasound exposures without the need for in-line MRI guidance. The proposed FUS system may accelerate the adoption of non-invasive FUS-mediated therapies due to its fast application, low cost and portability.
An Uncertainty-aware Workflow for Keyhole Surgery Planning using Hierarchical Image Semantics
Gillmann, Christina
Maack, Robin G. C.
Post, Tobias
Wischgoll, Thomas
Hagen, Hans
Visual Informatics2018Journal Article, cited 1 times
Website
TCGA-GBM
Surgical guidance
BRAIN
KNEE
Segmentation
Keyhole surgeries are becoming increasingly important in daily clinical routine as they help minimize damage to a patient's healthy tissue. The planning of keyhole surgeries is based on medical imaging and is an important factor influencing the surgery's success. Due to the image reconstruction process, medical image data contain uncertainty that exacerbates the planning of a keyhole surgery. In this paper we present a visual workflow that helps clinicians examine and compare different surgery paths as well as visualize the patient's affected tissue. The analysis is based on the concept of hierarchical image semantics, which segment the underlying image data with respect to the input images' uncertainty and the user's understanding of tissue composition. Users can define arbitrary surgery paths that they need to investigate further. The defined paths can be queried by a rating function to identify paths that fulfill user-defined properties. The workflow allows a visual inspection of the affected tissues and their substructures. Therefore, the workflow includes a linked view system indicating the three-dimensional location of selected surgery paths as well as how these paths affect the patient's tissue. To show the effectiveness of the presented approach, we applied it to the planning of a keyhole surgery for a brain tumor removal and a kneecap surgery.
Artificial Intelligence Opportunities for Vestibular Schwannoma Management Using Image Segmentation and Clinical Decision Tools
Shapey, Jonathan
Kujawa, Aaron
Dorent, Reuben
Saeed, Shakeel R
Kitchen, Neil
Obholzer, Rupert
Ourselin, Sebastien
Vercauteren, Tom
Thomas, Nick W M
2021Journal Article, cited 0 times
Vestibular-Schwannoma-SEG
Cross-linking breast tumor transcriptomic states and tissue histology
Dawood, M.
Eastwood, M.
Jahanifar, M.
Young, L.
Ben-Hur, A.
Branson, K.
Jones, L.
Rajpoot, N.
Minhas, Fuaa
Cell Rep Med2023Journal Article, cited 1 times
Website
CPTAC-BRCA
Whole Slide Imaging (WSI)
TCGA-BRCA
Humans
Female
Gene Expression Profiling
Transcriptome/genetics
Neural Networks, Computer
Phenotype
Breast Neoplasms/genetics
breast cancer
computational pathology
gene groups
genotype to phenotype mapping
graph neural networks
receptor status prediction
spatial transcriptomics
topic modelling
transcriptomics
Identification of the gene expression state of a cancer patient from routine pathology imaging and characterization of its phenotypic effects have significant clinical and therapeutic implications. However, prediction of expression of individual genes from whole slide images (WSIs) is challenging due to co-dependent or correlated expression of multiple genes. Here, we use a purely data-driven approach to first identify groups of genes with co-dependent expression and then predict their status from WSIs using a bespoke graph neural network. These gene groups allow us to capture the gene expression state of a patient with a small number of binary variables that are biologically meaningful and carry histopathological insights for clinical and therapeutic use cases. Prediction of gene expression state based on these gene groups allows associating histological phenotypes (cellular composition, mitotic counts, grading, etc.) with underlying gene expression patterns and opens avenues for gaining biological insights from routine pathology imaging directly.
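The gene-group predictor is described as a bespoke graph neural network over WSI patches; a minimal mean-aggregation message-passing layer in plain PyTorch conveys the idea (purely illustrative, not the authors' model):

    import torch
    import torch.nn as nn

    class MeanAggLayer(nn.Module):
        def __init__(self, d_in, d_out):
            super().__init__()
            self.lin = nn.Linear(2 * d_in, d_out)

        def forward(self, x, adj):
            # x: (n_patches, d_in) patch features; adj: (n, n) 0/1 spatial adjacency
            neigh = adj @ x / adj.sum(1, keepdim=True).clamp(min=1)  # mean over neighbours
            return torch.relu(self.lin(torch.cat([x, neigh], dim=1)))

    x, adj = torch.randn(5, 16), (torch.rand(5, 5) > 0.5).float()
    out = MeanAggLayer(16, 8)(x, adj)

Stacking a few such layers, mean-pooling over patches, and attaching one sigmoid head per gene group yields the small set of binary gene-group predictions the abstract describes.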
Harnessing artificial intelligence for prostate cancer management
Zhu, Lingxuan
Pan, Jiahua
Mou, Weiming
Deng, Longxin
Zhu, Yinjie
Wang, Yanqing
Pareek, Gyan
Hyams, Elias
Carneiro, Benedito A.
Hadfield, Matthew J.
El-Deiry, Wafik S.
Yang, Tao
Tan, Tao
Tong, Tong
Ta, Na
Zhu, Yan
Gao, Yisha
Lai, Yancheng
Cheng, Liang
Chen, Rui
Xue, Wei
2024Journal Article, cited 0 times
PROSTATE-MRI
TCGA-PRAD
Prostate Fused-MRI-Pathology
NADT-Prostate
CMB-PCA
Artificial Intelligence
Prostatic Neoplasms
Prostate cancer (PCa) is a common malignancy in males. The pathology review of PCa is crucial for clinical decision-making, but traditional pathology review is labor intensive and subjective to some extent. Digital pathology and whole-slide imaging enable the application of artificial intelligence (AI) in pathology. This review highlights the success of AI in detecting and grading PCa, predicting patient outcomes, and identifying molecular subtypes. We propose that AI-based methods could collaborate with pathologists to reduce workload and assist clinicians in formulating treatment recommendations. We also introduce the general process and challenges in developing AI pathology models for PCa. Importantly, we summarize publicly available datasets and open-source codes to facilitate the utilization of existing data and the comparison of the performance of different models to improve future studies.
Material composition characterization from computed tomography via self-supervised learning promotes pulmonary disease diagnosis
Liu, Jiachen
Zhao, Wei
Liu, Yuxuan
Chen, Yang
Bai, Xiangzhi
Cell Reports Physical Science2024Journal Article, cited 0 times
Website
SPIE-AAPM Lung CT Challenge
Self-supervised learning
Deep Learning
Dual energy computed tomography
Computed tomography (CT) images primarily provide tissue morphological information, while material composition analysis may enable a more fundamental assessment of the body. However, existing methods for material decomposition suffer from low accuracy and severe degradation. Furthermore, the complex composition of bodies and the absence of labels constrain the potential use of deep learning. Here, we present a self-supervised learning approach that generates multiple basis material images with no labels (NoL-MBMI) for analyzing material composition. Results from phantom and patient experiments demonstrate that NoL-MBMI provides results with superior visual quality and accuracy. Notably, to extend the clinical usage of NoL-MBMI, we construct an automated system to extract material composition information directly from standard single-energy CT (SECT) data for diagnosis. We evaluate the system on two pulmonary diagnosis tasks and observe that deep-learning models using material composition features significantly outperform those using morphological features, suggesting the clinical effectiveness of diagnosis based on material composition and its potential for advancing medical imaging technology.
Artificial Intelligence in Prostate Imaging
Arlova, Alena
Choyke, Peter L.
Turkbey, Baris
2021Journal Article, cited 0 times
PROSTATEx
The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data
Fechter, Tobias
Sachpazidis, Ilias
Baltas, Dimos
2022Journal Article, cited 0 times
ISBI-MR-Prostate-2013
PROSTATEx
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays an especially large role in areas related to medical imaging. However, in interventional radiotherapy (brachytherapy) deep learning is still in an early phase. In this review, we first investigated and scrutinised the role of deep learning in all processes of interventional radiotherapy and directly related fields. Additionally, we summarised the most recent developments. For better understanding, we provide explanations of key terms and approaches to solving common deep learning problems. To reproduce the results of deep learning algorithms, both source code and training data must be available. Therefore, a second focus of this work is on the analysis of the availability of open source, open data and open models. In our analysis, we were able to show that deep learning already plays a major role in some areas of interventional radiotherapy, but is still hardly present in others. Nevertheless, its impact is increasing with the years, partly self-propelled but also influenced by closely related fields. Open source, data and models are growing in number but are still scarce and unevenly distributed among different research groups. The reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. The conclusion of our analysis is that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement when it comes to reproducible results and standardised evaluation methods.
An open-source foundation for head and neck radiomics
Scott, Katy L.
Kim, Sejin
Joseph, Jermiah J.
Boccalon, Matthew
Welch, Mattea
Yousafzai, Umar
Smith, Ian
Mcintosh, Chris
Rey-McIntyre, Katrina
Huang, Shao Hui
Patel, Tirth
Tadic, Tony
O'Sullivan, Brian
Bratman, Scott V.
Hope, Andrew J.
Haibe-Kains, Benjamin
Radiotherapy and Oncology2024Journal Article, cited 0 times
RADCURE
head neck
radiomics
Purpose/Objective: With the purported future of oncological care being precision medicine, the hunt for predictive biomarkers has become a focal point. A potential source lies in radiological imaging, which has motivated the field of radiomics for the last decade [1–4]. Radiomics research, however, has been hampered by inconsistent methodology, despite efforts to establish standard features [5]. The release of the open-source PyRadiomics toolkit [6] was a significant and necessary step to standardize radiomics analysis, but the collation and distribution of publicly available radiomics datasets remain poorly organized within the community. As a result, significant overhead remains when dealing with multiple training, testing, and validation datasets from both internal and external sources. Further, a recent study has raised the question of whether radiomic features with high predictive value are surrogates for tumour volume measurements [7]. There is a need for standard methodology for radiomic feature extraction, as well as large, publicly available radiomic datasets that have undergone rigorous processing to benchmark analyses. In this study, we have developed a reproducible, automated, open-source processing pipeline to generate analysis-ready radiomics data. We showcase the pipeline's capabilities by processing and analyzing the largest publicly available head and neck cancer (HNC) dataset, RADCURE [8], and compare three previously published radiomics models [1,7,9] using the resulting data. Data outputs have been made available via https://www.orcestra.ca/, a web-app that hosts processed 'omics data. Material/Methods: Our proposed pipeline leverages three main tools: Med-ImageTools [10], PyRadiomics [6], and ORCESTRA [11]. While the former two are imaging-specific, we have modified ORCESTRA to work with clinical radiological data. The proposed pipeline was developed using the RADCURE [12] dataset. It consists of 3,346 HNC CT image volumes, corresponding radiotherapy structure sets (RTSTRUCT) containing primary gross tumour volume (GTVp) contours, and clinical data. The Med-ImageTools library was used to generate complete file lists for each CT acquisition, associate these with the correct RTSTRUCT, and load both as SimpleITK [13] images. For each GTVp, preprocessing, quality checking, and radiomic feature extraction were performed using PyRadiomics. Extraction settings from the RADCURE prognostic modelling challenge [8] were applied. Feature extraction was repeated with two negative control samples for each CT, either by shuffling voxel index values or randomly generating voxel values within the range of values in the original CT [7] (Figure 1). The standard for data organization on ORCESTRA [11] is the MultiAssayExperiment R object [14], designed to harmonize multiple experimental assays from an overlapping patient set. To leverage this for radiomics, each set of extracted features becomes an experiment, with clinical data included as the primary metadata describing each patient. To demonstrate the pipeline's utility, we replicated previously published survival analysis models with the training and test cohorts from the RADCURE challenge subset [8]. Coefficients from the MW2018 [7] and Kwan [9] models were used to calculate prognostic index values for the test cohort. For comparison, we fit a Cox model to the RADCURE training cohort using the same radiomic signature and applied it to the test cohort. A univariate model for GTVp Mesh Volume was also tested. All models were compared using the concordance index. Results: We processed 2,949 patients with GTVp contours, for a total of 2,988 GTVps from patients with varying primary tumour sites. We extracted 1,317 radiomic features from the CT and the negative control volumes for each GTVp. For the 37 patients with multiple GTVps, features were extracted independently for each contour. The final data object containing all of these features, along with the clinical data and PyRadiomics configuration file, is available at https://www.orcestra.ca/radiomicset/10.5281/zenodo.8332910. The pipeline implementation is published at https://github.com/BHKLAB-DataProcessing/RADCUREradiomics. Results from our radiomics analysis are available in Table 1. The subset of 2,400 GTVps was split into training and test cohorts based on the 'RADCURE-challenge' label in the clinical data. The Kwan model was tested with the oropharynx patients only. Model performance is similar whether the features were extracted from the CT or negative control samples, signaling that the radiomic signature is likely highly correlated with tumour volume, a known confounder of radiomics analysis. This is confirmed by the comparable performance of the univariable volume model. Conclusion: This standardized architecture framework and the publicly available processed RADCURE dataset can be used to benchmark new datasets or radiomics models semi-automatically. Future work will include organ at risk and nodal targets in the RADCURE dataset and the production of ORCESTRA objects for other publicly available HNC datasets. We anticipate that this pipeline and the RADCURE objects generated could be a standard testing benchmark for future radiomics analyses and publications.
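As a rough sketch of the negative-control step described in this abstract (re-extracting features after destroying spatial texture), the following uses PyRadiomics and SimpleITK. The file paths are placeholders and the extractor is left at default settings, whereas the pipeline itself applies the RADCURE challenge extraction configuration.

```python
# Sketch: radiomic extraction with a shuffled-voxel negative control.
# Paths are placeholders; extractor settings are PyRadiomics defaults.
import numpy as np
import SimpleITK as sitk
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()

image = sitk.ReadImage("ct.nii.gz")
mask = sitk.ReadImage("gtvp_mask.nii.gz")

features_ct = extractor.execute(image, mask)  # features from the original CT

# Negative control: permute voxel intensities, destroying spatial texture
# while preserving the intensity histogram.
arr = sitk.GetArrayFromImage(image)
shuffled = np.random.default_rng(0).permutation(arr.ravel()).reshape(arr.shape)
control = sitk.GetImageFromArray(shuffled)
control.CopyInformation(image)  # keep spacing, origin, and direction

features_control = extractor.execute(control, mask)
```

Features whose apparent predictive value survives in features_control are suspect as surrogates of tumour volume rather than texture, which is the comparison the study runs at scale.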
External and Internal Validation of a Computer Assisted Diagnostic Model for Detecting Multi-Organ Mass Lesions in CT images
Xu, Lianyan
Yan, Ke
Lu, Le
Zhang, Weihong
Chen, Xu
Huo, Xiaofei
Lu, Jingjing
2021Journal Article, cited 0 times
CT Lymph Nodes
Objective: We developed a universal lesion detector (ULDor) which showed good performance in in-lab experiments. The study aims to evaluate the performance and its ability to generalize in a clinical setting via both external and internal validation. Methods: The ULDor system consists of a convolutional neural network (CNN) trained on around 80K lesion annotations from about 12K CT studies in the DeepLesion dataset and several other public organ-specific datasets. During the validation process, the test sets included two parts: the external validation dataset, which comprised 164 sets of non-contrasted chest and upper abdomen CT scans from a comprehensive hospital, and the internal validation dataset, which comprised 187 sets of low-dose helical CT scans from the National Lung Screening Trial (NLST). We ran the model on the two test sets to output lesion detection. Three board-certified radiologists read the CT scans and verified the detection results of ULDor. We used positive predictive value (PPV) and sensitivity to evaluate the performance of the model in detecting space-occupying lesions in all extra-pulmonary organs visualized on CT images, including liver, kidney, pancreas, adrenal, spleen, esophagus, thyroid, lymph nodes, body wall, thoracic spine, etc. Results: In the external validation, the lesion-level PPV and sensitivity of the model were 57.9% and 67.0%, respectively. On average, the model detected 2.1 findings per set, and among them, 0.9 were false positives. ULDor worked well for detecting liver lesions, with a PPV of 78.9% and a sensitivity of 92.7%, followed by kidney, with a PPV of 70.0% and a sensitivity of 58.3%. In internal validation with the NLST test set, ULDor obtained a PPV of 75.3% and a sensitivity of 52.0% despite the relatively high noise level of soft tissue on images. Conclusions: The performance tests of ULDor with the external real-world data have shown its high effectiveness in multi-purpose detection of lesions in certain organs. With further optimisation and iterative upgrades, ULDor may be well suited for extensive application to external data.
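Lesion-level PPV and sensitivity follow directly from true-positive, false-positive, and missed-lesion counts. The counts in this sketch are illustrative values chosen only to roughly reproduce the reported external-validation figures; they are not the study's raw data.

```python
# PPV and sensitivity from lesion-level detection counts.
def ppv(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

# Illustrative counts approximating the reported external validation:
print(round(ppv(tp=116, fp=84), 3))          # ~0.58, cf. 57.9%
print(round(sensitivity(tp=116, fn=57), 3))  # ~0.67, cf. 67.0%
```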
Lung cancer incidence and mortality in National Lung Screening Trial participants who underwent low-dose CT prevalence screening: a retrospective cohort analysis of a randomised, multicentre, diagnostic screening trial
Patz Jr, Edward F
Greco, Erin
Gatsonis, Constantine
Pinsky, Paul
Kramer, Barnett S
Aberle, Denise R
The Lancet Oncology2016Journal Article, cited 67 times
Website
NLST
lung
LDCT
A radiomics approach to assess tumour-infiltrating CD8 cells and response to anti-PD-1 or anti-PD-L1 immunotherapy: an imaging biomarker, retrospective multicohort study
Sun, Roger
Limkin, Elaine Johanna
Vakalopoulou, Maria
Dercle, Laurent
Champiat, Stéphane
Han, Shan Rong
Verlingue, Loïc
Brandao, David
Lancia, Andrea
Ammari, Samy
The Lancet Oncology2018Journal Article, cited 4 times
Website
Radiomics
head and neck squamous-cell carcinoma (HNSC)
lung squamous-cell carcinoma (LUSC)
lung adenocarcinoma (LUAD)
liver hepatocellular carcinoma (LIHC)
bladder urothelial carcinoma (BLCA)
The Cancer Genome Atlas (TCGA)
Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation
Liu, K. L.
Wu, T.
Chen, P. T.
Tsai, Y. M.
Roth, H.
Wu, M. S.
Liao, W. C.
Wang, W.
Lancet Digit Health2020Journal Article, cited 141 times
Website
Pancreas-CT
Medical Segmentation Decathlon 2021
Convolutional Neural Network (CNN)
Contrast Media
*Deep Learning
Diagnosis
Differential
Pancreas/diagnostic imaging
Pancreatic Neoplasms/*diagnostic imaging
Racial Groups
Radiographic Image Enhancement/methods
Radiographic Image Interpretation
Computer-Assisted/*methods
Reproducibility of Results
Retrospective Studies
Sensitivity and Specificity
Taiwan
Tomography
X-Ray Computed/*methods
BACKGROUND: The diagnostic performance of CT for pancreatic cancer is interpreter-dependent, and approximately 40% of tumours smaller than 2 cm evade detection. Convolutional neural networks (CNNs) have shown promise in image analysis, but the networks' potential for pancreatic cancer detection and diagnosis is unclear. We aimed to investigate whether CNN could distinguish individuals with and without pancreatic cancer on CT, compared with radiologist interpretation. METHODS: In this retrospective, diagnostic study, contrast-enhanced CT images of 370 patients with pancreatic cancer and 320 controls from a Taiwanese centre were manually labelled and randomly divided for training and validation (295 patients with pancreatic cancer and 256 controls) and testing (75 patients with pancreatic cancer and 64 controls; local test set 1). Images were preprocessed into patches, and a CNN was trained to classify patches as cancerous or non-cancerous. Individuals were classified as with or without pancreatic cancer on the basis of the proportion of patches diagnosed as cancerous by the CNN, using a cutoff determined using the training and validation set. The CNN was further tested with another local test set (101 patients with pancreatic cancers and 88 controls; local test set 2) and a US dataset (281 pancreatic cancers and 82 controls). Radiologist reports of pancreatic cancer images in the local test sets were retrieved for comparison. FINDINGS: Between Jan 1, 2006, and Dec 31, 2018, we obtained CT images. In local test set 1, CNN-based analysis had a sensitivity of 0.973, specificity of 1.000, and accuracy of 0.986 (area under the curve [AUC] 0.997 [95% CI 0.992-1.000]). In local test set 2, CNN-based analysis had a sensitivity of 0.990, specificity of 0.989, and accuracy of 0.989 (AUC 0.999 [0.998-1.000]). In the US test set, CNN-based analysis had a sensitivity of 0.790, specificity of 0.976, and accuracy of 0.832 (AUC 0.920 [0.891-0.948]). CNN-based analysis achieved higher sensitivity than radiologists did (0.983 vs 0.929, difference 0.054 [95% CI 0.011-0.098]; p=0.014) in the two local test sets combined. CNN missed three (1.7%) of 176 pancreatic cancers (1.1-1.2 cm). Radiologists missed 12 (7%) of 168 pancreatic cancers (1.0-3.3 cm), of which 11 (92%) were correctly classified using CNN. The sensitivity of CNN for tumours smaller than 2 cm was 92.1% in the local test sets and 63.1% in the US test set. INTERPRETATION: CNN could accurately distinguish pancreatic cancer on CT, with acceptable generalisability to images of patients from various races and ethnicities. CNN could supplement radiologist interpretation. FUNDING: Taiwan Ministry of Science and Technology.
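The patient-level decision rule in this study (classify patches, then threshold the proportion of patches called cancerous) is easy to sketch. Both cutoffs below are hypothetical placeholders; the study derived its cutoff from the training and validation set.

```python
# Patient-level call from patch-level CNN probabilities.
import numpy as np

def classify_patient(patch_probs, patch_cutoff=0.5, patient_cutoff=0.3):
    """patch_probs: per-patch cancer probabilities from the CNN."""
    cancerous = np.asarray(patch_probs) >= patch_cutoff
    fraction = cancerous.mean()              # proportion of cancerous patches
    return bool(fraction >= patient_cutoff)  # True -> classified as cancer

print(classify_patient([0.9, 0.8, 0.1, 0.2, 0.7]))  # True: 3/5 >= 0.3
```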
Convolutional neural network for the detection of pancreatic cancer on CT scans
Suman, Garima
Panda, Ananya
Korfiatis, Panagiotis
Goenka, Ajit H
The Lancet Digital Health2020Journal Article, cited 0 times
CPTAC-PDA
Clinical validation of deep learning algorithms for radiotherapy targeting of non-small-cell lung cancer: an observational study
Hosny, Ahmed
Bitterman, Danielle S.
Guthier, Christian V.
Qian, Jack M.
Roberts, Hannah
Perni, Subha
Saraf, Anurag
Peng, Luke C.
Pashtan, Itai
Ye, Zezhong
Kann, Benjamin H.
Kozono, David E.
Christiani, David
Catalano, Paul J.
Aerts, Hugo J. W. L.
Mak, Raymond H.
The Lancet Digital Health2022Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
NSCLC Radiogenomics
NSCLC-Cetuximab (RTOG-0617)
Inter-observer variability
Radiation Therapy
Background: Artificial intelligence (AI) and deep learning have shown great potential in streamlining clinical tasks. However, most studies remain confined to in silico validation in small internal cohorts, without external validation or data on real-world clinical utility. We developed a strategy for the clinical validation of deep learning models for segmenting primary non-small-cell lung cancer (NSCLC) tumours and involved lymph nodes in CT images, which is a time-intensive step in radiation treatment planning, with large variability among experts. Methods: In this observational study, CT images and segmentations were collected from eight internal and external sources from the USA, the Netherlands, Canada, and China, with patients from the Maastro and Harvard-RT1 datasets used for model discovery (segmented by a single expert). Validation consisted of interobserver and intraobserver benchmarking, primary validation, functional validation, and end-user testing on the following datasets: multi-delineation, Harvard-RT1, Harvard-RT2, RTOG-0617, NSCLC-radiogenomics, Lung-PET-CT-Dx, RIDER, and thorax phantom. Primary validation consisted of stepwise testing on increasingly external datasets using measures of overlap including volumetric dice (VD) and surface dice (SD). Functional validation explored dosimetric effect, model failure modes, test-retest stability, and accuracy. End-user testing with eight experts assessed automated segmentations in a simulated clinical setting. Findings: We included 2208 patients imaged between 2001 and 2015, with 787 patients used for model discovery and 1421 for model validation, including 28 patients for end-user testing. Models showed an improvement over the interobserver benchmark (multi-delineation dataset; VD 0·91 [IQR 0·83–0·92], p=0·0062; SD 0·86 [0·71–0·91], p=0·0005), and were within the intraobserver benchmark. For primary validation, AI performance on internal Harvard-RT1 data (segmented by the same expert who segmented the discovery data) was VD 0·83 (IQR 0·76–0·88) and SD 0·79 (0·68–0·88), within the interobserver benchmark. Performance on internal Harvard-RT2 data segmented by other experts was VD 0·70 (0·56–0·80) and SD 0·50 (0·34–0·71). Performance on RTOG-0617 clinical trial data was VD 0·71 (0·60–0·81) and SD 0·47 (0·35–0·59), with similar results on diagnostic radiology datasets NSCLC-radiogenomics and Lung-PET-CT-Dx. Despite these geometric overlap results, models yielded target volumes with equivalent radiation dose coverage to those of experts. We also found non-significant differences between de novo expert and AI-assisted segmentations. AI assistance led to a 65% reduction in segmentation time (5·4 min; p<0·0001) and a 32% reduction in interobserver variability (SD; p=0·013). Interpretation: We present a clinical validation strategy for AI models. We found that in silico geometric segmentation metrics might not correlate with clinical utility of the models. Experts' segmentation style and preference might affect model performance. Funding: US National Institutes of Health and EU European Research Council.
Spatially aware graph neural networks and cross-level molecular profile prediction in colon cancer histopathology: a retrospective multi-cohort study
Ding, Kexin
Zhou, Mu
Wang, He
Zhang, Shaoting
Metaxas, Dimitri N.
The Lancet Digital Health2022Journal Article, cited 1 times
Website
CPTAC-COAD
Pathomics
Digital pathology
Histopathology imaging features
Neural Networks
Computer
Background: Digital whole-slide images are a unique way to assess the spatial context of the cancer microenvironment. Exploring these spatial characteristics will enable us to better identify cross-level molecular markers that could deepen our understanding of cancer biology and related patient outcomes. Methods: We proposed a graph neural network approach that emphasises spatialisation of tumour tiles towards a comprehensive evaluation of predicting cross-level molecular profiles of genetic mutations, copy number alterations, and functional protein expressions from whole-slide images. We introduced a transformation strategy that converts whole-slide image scans into graph-structured data to address the spatial heterogeneity of colon cancer. We developed and assessed the performance of the model on The Cancer Genome Atlas colon adenocarcinoma (TCGA-COAD) and validated it on two external datasets (ie, The Cancer Genome Atlas rectum adenocarcinoma [TCGA-READ] and Clinical Proteomic Tumor Analysis Consortium colon adenocarcinoma [CPTAC-COAD]). We also predicted microsatellite instability status and assessed result interpretability. Findings: The model was developed on 459 colon tumour whole-slide images from TCGA-COAD, and externally validated on 165 rectum tumour whole-slide images from TCGA-READ and 161 colon tumour whole-slide images from CPTAC-COAD. For TCGA cohorts, our method accurately predicted the molecular classes of the gene mutations (areas under the curve [AUCs] from 82·54 [95% CI 77·41–87·14] to 87·08 [83·28–90·82] on TCGA-COAD, and AUCs from 70·46 [61·37–79·61] to 81·80 [72·20–89·70] on TCGA-READ), along with genes with copy number alterations (AUCs from 81·98 [73·34–89·68] to 90·55 [86·02–94·89] on TCGA-COAD, and AUCs from 62·05 [48·94–73·46] to 76·48 [64·78–86·71] on TCGA-READ), microsatellite instability (MSI) status classification (AUC 83·92 [77·41–87·59] on TCGA-COAD, and AUC 61·28 [53·28–67·93] on TCGA-READ), and protein expressions (AUCs from 85·57 [81·16–89·44] to 89·64 [86·29–93·19] on TCGA-COAD, and AUCs from 51·77 [42·53–61·83] to 59·79 [50·79–68·57] on TCGA-READ). For the CPTAC-COAD cohort, our model predicted a panel of gene mutations with AUC values from 63·74 (95% CI 52·92–75·37) to 82·90 (73·69–90·71), genes with copy number alterations with AUC values from 62·39 (51·37–73·76) to 86·08 (79·67–91·74), and MSI status prediction with an AUC value of 73·15 (63·21–83·13). Interpretation: We showed that spatially connected graph models enable molecular profile predictions in colon cancer and are generalised to rectum cancer. After further validation, our method could be used to infer the prognostic value of multiscale molecular biomarkers and identify targeted therapies for patients with colon cancer. Funding: This research has been partially funded by ARO MURI 805491, NSF IIS-1793883, NSF CNS-1747778, NSF IIS 1763523, DOD-ARO ACC-W911NF, and NSF OIA-2040638 to Dimitri N Metaxas.
Specific Versus Varied Practice in Perceptual Expertise Training
Robson, Samuel G.
Tangen, Jason M.
Searston, Rachel A.
2022Journal Article, cited 0 times
Osteosarcoma-Tumor-Assessment
We used a longitudinal randomized control experiment to compare the effect of specific practice (training on one form of a task) and varied practice (training on various forms of a task) on perceptual learning and transfer. Participants practiced a visual search task for 10 hours over 2 to 4 weeks. The specific practice group searched for features only in fingerprints during each session, whereas the varied practice group searched for features in five different image categories. Both groups were tested on a series of tasks at four time points: before training, midway through training, immediately after training ended, and 6 to 8 weeks later. The specific group improved more during training and demonstrated greater pre-post performance gains than the varied group on a visual search task with untrained fingerprint images. Both groups improved equally on a visual search task with an untrained image category, but only the specific group's performance dropped significantly when tested several weeks later. Finally, both groups improved equally on a series of untrained fingerprint tasks. Practice with respect to a single category (versus many) instills better near transfer, but category-specific and category-general visual search training appear equally effective for developing task-general expertise.
Transcription elongation factors represent in vivo cancer dependencies in glioblastoma
Glioblastoma is a universally lethal cancer with a median survival time of approximately 15 months. Despite substantial efforts to define druggable targets, there are no therapeutic options that notably extend the lifespan of patients with glioblastoma. While previous work has largely focused on in vitro cellular models, here we demonstrate a more physiologically relevant approach to target discovery in glioblastoma. We adapted pooled RNA interference (RNAi) screening technology for use in orthotopic patient-derived xenograft models, creating a high-throughput negative-selection screening platform in a functional in vivo tumour microenvironment. Using this approach, we performed parallel in vivo and in vitro screens and discovered that the chromatin and transcriptional regulators needed for cell survival in vivo are non-overlapping with those required in vitro. We identified transcription pause-release and elongation factors as one set of in vivo-specific cancer dependencies, and determined that these factors are necessary for enhancer-mediated transcriptional adaptations that enable cells to survive the tumour microenvironment. Our lead hit, JMJD6, mediates the upregulation of in vivo stress and stimulus response pathways through enhancer-mediated transcriptional pause-release, promoting cell survival specifically in vivo. Targeting JMJD6 or other identified elongation factors extends survival in orthotopic xenograft mouse models, suggesting that targeting transcription elongation machinery may be an effective therapeutic strategy for glioblastoma. More broadly, this study demonstrates the power of in vivo phenotypic screening to identify new classes of 'cancer dependencies' not identified by previous in vitro approaches, and could supply new opportunities for therapeutic intervention.
Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach
Aerts, H. J.
Velazquez, E. R.
Leijenaar, R. T.
Parmar, C.
Grossmann, P.
Carvalho, S.
Bussink, J.
Monshouwer, R.
Haibe-Kains, B.
Rietveld, D.
Hoebers, F.
Rietbergen, M. M.
Leemans, C. R.
Dekker, A.
Quackenbush, J.
Gillies, R. J.
Lambin, P.
Nat Commun2014Journal Article, cited 1029 times
Website
NSCLC-Radiomics
NSCLC-Radiomics-Genomics
radiomic features
Computed Tomography (CT)
Human cancers exhibit strong phenotypic differences that can be visualized noninvasively by medical imaging. Radiomics refers to the comprehensive quantification of tumour phenotypes by applying a large number of quantitative image features. Here we present a radiomic analysis of 440 features quantifying tumour image intensity, shape and texture, which are extracted from computed tomography data of 1,019 patients with lung or head-and-neck cancer. We find that a large number of radiomic features have prognostic power in independent data sets of lung and head-and-neck cancer patients, many of which were not identified as significant before. Radiogenomics analysis reveals that a prognostic radiomic signature, capturing intratumour heterogeneity, is associated with underlying gene-expression patterns. These data suggest that radiomics identifies a general prognostic phenotype existing in both lung and head-and-neck cancer. This may have a clinical impact as imaging is routinely used in clinical practice, providing an unprecedented opportunity to improve decision-support in cancer treatment at low cost.
Spatiotemporal genomic architecture informs precision oncology in glioblastoma
Lee, Jin-Ku
Wang, Jiguang
Sa, Jason K.
Ladewig, Erik
Lee, Hae-Ock
Lee, In-Hee
Kang, Hyun Ju
Rosenbloom, Daniel S.
Camara, Pablo G.
Liu, Zhaoqi
van Nieuwenhuizen, Patrick
Jung, Sang Won
Choi, Seung Won
Kim, Junhyung
Chen, Andrew
Kim, Kyu-Tae
Shin, Sang
Seo, Yun Jee
Oh, Jin-Mi
Shin, Yong Jae
Park, Chul-Kee
Kong, Doo-Sik
Seol, Ho Jun
Blumberg, Andrew
Lee, Jung-Il
Iavarone, Antonio
Park, Woong-Yang
Rabadan, Raul
Nam, Do-Hyun
Nat Genet2017Journal Article, cited 45 times
Website
TCGA-GBM
Genomics
Precision medicine in cancer proposes that genomic characterization of tumors can inform personalized targeted therapies. However, this proposition is complicated by spatial and temporal heterogeneity. Here we study genomic and expression profiles across 127 multisector or longitudinal specimens from 52 individuals with glioblastoma (GBM). Using bulk and single-cell data, we find that samples from the same tumor mass share genomic and expression signatures, whereas geographically separated, multifocal tumors and/or long-term recurrent tumors are seeded from different clones. Chemical screening of patient-derived glioma cells (PDCs) shows that therapeutic response is associated with genetic similarity, and multifocal tumors that are enriched with PIK3CA mutations have a heterogeneous drug-response pattern. We show that targeting truncal events is more efficacious than targeting private events in reducing the tumor burden. In summary, this work demonstrates that evolutionary inference from integrated genomic analysis in multisector biopsies can inform targeted therapeutic interventions for patients with GBM.
Quantitative MRI radiomics in the prediction of molecular classifications of breast cancer subtypes in the TCGA/TCIA data set
Li, Hui
Zhu, Yitan
Burnside, Elizabeth S
Huang, Erich
Drukker, Karen
Hoadley, Katherine A
Fan, Cheng
Conzen, Suzanne D
Zuley, Margarita
Net, Jose M
NPJ Breast Cancer2016Journal Article, cited 63 times
Website
TCGA-BRCA
Radiomics
breast cancer
Wwox–Brca1 interaction: role in DNA repair pathway choice
Schrock, MS
Batar, B
Lee, J
Druck, T
Ferguson, B
Cho, JH
Akakpo, K
Hagrass, H
Heerema, NA
Xia, F
Oncogene2016Journal Article, cited 12 times
Website
Radiogenomics
REMBRANDT
Predicting 1p/19q co-deletion status from magnetic resonance imaging using deep learning in adult-type diffuse lower-grade gliomas: a discovery and validation study
Yan, J.
Zhang, S.
Sun, Q.
Wang, W.
Duan, W.
Wang, L.
Ding, T.
Pei, D.
Sun, C.
Wang, W.
Liu, Z.
Hong, X.
Wang, X.
Guo, Y.
Li, W.
Cheng, J.
Liu, X.
Li, Z. C.
Zhang, Z.
Lab Invest2022Journal Article, cited 0 times
Website
TCGA-LGG
Radiogenomics
BRAIN
Deep learning
Magnetic Resonance Imaging (MRI)
Determination of 1p/19q co-deletion status is important for the classification, prognostication, and personalized therapy in diffuse lower-grade gliomas (LGG). We developed and validated a deep learning imaging signature (DLIS) from preoperative magnetic resonance imaging (MRI) for predicting the 1p/19q status in patients with LGG. The DLIS was constructed on a training dataset (n = 330) and validated on both an internal validation dataset (n = 123) and a public TCIA dataset (n = 102). Receiver operating characteristic (ROC) analysis and precision-recall curves (PRC) were used to measure the classification performance. The area under the ROC curve (AUC) of the DLIS was 0.999 for the training dataset, 0.986 for the validation dataset, and 0.983 for the testing dataset. The F1-score of the prediction model was 0.992 for the training dataset, 0.940 for the validation dataset, and 0.925 for the testing dataset. Our data suggest that the DLIS could be used to predict the 1p/19q status from preoperative imaging in patients with LGG. Imaging-based deep learning has the potential to be a noninvasive tool predictive of molecular markers in adult diffuse gliomas.
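The reported metrics (AUC from ROC analysis, F1-score) can be computed with scikit-learn; the labels and scores below are dummy values, not study data.

```python
# AUC and F1 for a binary 1p/19q co-deletion classifier (dummy data).
from sklearn.metrics import roc_auc_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # 1 = co-deleted
y_score = [0.9, 0.2, 0.8, 0.7, 0.4, 0.1, 0.95, 0.3]  # model outputs

print(roc_auc_score(y_true, y_score))                      # area under ROC curve
print(f1_score(y_true, [int(s >= 0.5) for s in y_score]))  # F1 at a 0.5 cutoff
```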
Image-based assessment of extracellular mucin-to-tumor area predicts consensus molecular subtypes (CMS) in colorectal cancer
Nguyen, H. G.
Lundstrom, O.
Blank, A.
Dawson, H.
Lugli, A.
Anisimova, M.
Zlobec, I.
Mod Pathol2022Journal Article, cited 1 times
Website
CPTAC-COAD
TCGA-COAD
H&E-stained slides
Deep Learning
Classification
Pathology
Pathomics
The backbone of all colorectal cancer classifications including the consensus molecular subtypes (CMS) highlights microsatellite instability (MSI) as a key molecular pathway. Although mucinous histology (generally defined as >50% extracellular mucin-to-tumor area) is a "typical" feature of MSI, it is not limited to this subgroup. Here, we investigate the association of CMS classification and mucin-to-tumor area quantified using a deep learning algorithm, and the expression of specific mucins in predicting CMS groups and clinical outcome. A weakly supervised segmentation method was developed to quantify extracellular mucin-to-tumor area in H&E images. Performance was compared to two pathologists' scores, then applied to two cohorts: (1) TCGA (n = 871 slides/412 patients) used for mucin-CMS group correlation and (2) Bern (n = 775 slides/517 patients) for histopathological correlations and next-generation Tissue Microarray construction. TCGA and CPTAC (n = 85 patients) were used to further validate mucin detection and CMS classification by gene and protein expression analysis for MUC2, MUC4, MUC5AC and MUC5B. An excellent inter-observer agreement between pathologists' scores and the algorithm was obtained (ICC = 0.92). In TCGA, mucinous tumors were predominantly CMS1 (25.7%), CMS3 (24.6%) and CMS4 (16.2%). Average mucin in CMS2 was 1.8%, indicating negligible amounts. RNA and protein expression of MUC2, MUC4, MUC5AC and MUC5B were low-to-absent in CMS2. MUC5AC protein expression correlated with aggressive tumor features (e.g., distant metastases [p = 0.0334], BRAF mutation [p < 0.0001], and mismatch repair-deficiency [p < 0.0001]) and unfavorable 5-year overall survival (44% versus 65% for positive/negative staining). MUC2 expression showed the opposite trend, correlating with less lymphatic (p = 0.0096) and venous vessel invasion (p = 0.0023), with no impact on survival. The absence of mucin-expressing tumors in CMS2 provides an important phenotype-genotype correlation. Together with MSI, mucinous histology may help predict CMS classification using only histopathology and should be considered in future image classifiers of molecular subtypes.
Adoption of artificial intelligence in breast imaging: evaluation, ethical constraints and limitations
Hickman, Sarah E.
Baxter, Gabrielle C.
Gilbert, Fiona J.
2021Journal Article, cited 0 times
CBIS-DDSM
ISPY1
TCGA-BRCA
Retrospective studies have shown artificial intelligence (AI) algorithms can match as well as enhance radiologists' performance in breast screening. These tools can facilitate tasks not feasible by humans, such as the automatic triage of patients and prediction of treatment outcomes. Breast imaging faces growing pressure with the exponential growth in imaging requests and a predicted reduced workforce to provide reports. Solutions to alleviate these pressures are being sought, with an increasing interest in the adoption of AI to improve workflow efficiency as well as patient outcomes. Vast quantities of data are needed to test and monitor AI algorithms before and after their incorporation into healthcare systems. Availability of data is currently limited, although strategies are being devised to harness the data that already exists within healthcare institutions. Challenges that underpin the realisation of AI in everyday breast imaging cannot be underestimated, and the provision of guidance from national agencies to tackle these challenges, taking into account views from a societal, industrial and healthcare perspective, is essential. This review provides background on the evaluation and use of AI in breast imaging in addition to exploring key ethical, technical, legal and regulatory challenges that have been identified so far.
Deep learning-based quantification of temporalis muscle has prognostic value in patients with glioblastoma
Mi, E.
Mauricaite, R.
Pakzad-Shahabi, L.
Chen, J.
Ho, A.
Williams, M.
Br J Cancer2022Journal Article, cited 1 times
Website
TCGA-GBM
Ivy GAP
REMBRANDT
Deep Learning
BRAIN
Magnetic Resonance Imaging (MRI)
Radiomics
Image Processing
Computer-Assisted/*methods
BACKGROUND: Glioblastoma is the commonest malignant brain tumour. Sarcopenia is associated with worse cancer survival, but manually quantifying muscle on imaging is time-consuming. We present a deep learning-based system for quantification of temporalis muscle, a surrogate for skeletal muscle mass, and assess its prognostic value in glioblastoma. METHODS: A neural network for temporalis segmentation was trained with 366 MRI head images from 132 patients from 4 different glioblastoma data sets and used to quantify muscle cross-sectional area (CSA). Association between temporalis CSA and survival was determined in 96 glioblastoma patients from internal and external data sets. RESULTS: The model achieved high segmentation accuracy (Dice coefficient 0.893). Median age was 55 and 58 years and 75.6 and 64.7% were males in the in-house and TCGA-GBM data sets, respectively. CSA was an independently significant predictor for survival in both the in-house and TCGA-GBM data sets (HR 0.464, 95% CI 0.218-0.988, p = 0.046; HR 0.466, 95% CI 0.235-0.925, p = 0.029, respectively). CONCLUSIONS: Temporalis CSA is a prognostic marker in patients with glioblastoma, rapidly and accurately assessable with deep learning. We are the first to show that a head/neck muscle-derived sarcopenia metric generated using deep learning is associated with oncological outcomes and one of the first to show deep learning-based muscle quantification has prognostic value in cancer.
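The Dice coefficient used to report segmentation accuracy here is a simple overlap ratio; a minimal NumPy version with toy masks:

```python
# Dice coefficient between binary masks: 2|A∩B| / (|A| + |B|).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 0], [0, 0, 1]])
print(dice(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```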
Machine learning-based pathomics signature could act as a novel prognostic marker for patients with clear cell renal cell carcinoma
Chen, S.
Jiang, L.
Gao, F.
Zhang, E.
Wang, T.
Zhang, N.
Wang, X.
Zheng, J.
Br J Cancer2022Journal Article, cited 0 times
CPTAC-CCRCC
TCGA-KIRC
Pathomics
Whole Slide Imaging (WSI)
Carcinoma
Renal Cell/mortality/*pathology
Female
Humans
Image Interpretation
Computer-Assisted/*methods
Kidney Neoplasms/mortality/*pathology
Machine Learning
Male
Neoplasm Grading
Neoplasm Staging
Nomograms
Prognosis
Prospective Studies
Regression Analysis
Retrospective Studies
Survival Analysis
BACKGROUND: Traditional histopathology performed by pathologists with the naked eye is insufficient for accurate survival prediction of clear cell renal cell carcinoma (ccRCC). METHODS: A total of 483 whole slide images (WSIs) from three patient cohorts were retrospectively analyzed. We applied machine learning algorithms to identify optimal digital pathological features and constructed a machine learning-based pathomics signature (MLPS) for ccRCC patients. Prognostic performance of the model was also verified in two independent validation cohorts. RESULTS: The MLPS could significantly distinguish ccRCC patients with high survival risk, with hazard ratios of 15.05, 4.49 and 1.65 in the three independent cohorts, respectively. Cox regression analysis revealed that the MLPS could act as an independent prognostic factor for ccRCC patients. An integrated nomogram based on the MLPS, tumour stage system and tumour grade system improved current survival prediction accuracy for ccRCC patients, with area under the curve values of 89.5%, 90.0%, 88.5% and 85.9% for 1-, 3-, 5- and 10-year disease-free survival prediction. DISCUSSION: The machine learning-based pathomics signature could act as a novel prognostic marker for patients with ccRCC. Nevertheless, prospective studies with multicentric patient cohorts are still needed for further verification.
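The survival modelling step (a Cox model relating the pathomics signature to outcome, summarised as hazard ratios) can be sketched with the lifelines package. The column names and six-patient toy cohort are hypothetical, not the study's data.

```python
# Cox proportional hazards sketch: pathomics score vs disease-free survival.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "mlps_score": [0.1, 0.8, 0.4, 0.9, 0.2, 0.7],  # hypothetical signature
    "dfs_months": [60, 12, 45, 8, 55, 20],          # follow-up time
    "recurrence": [0, 1, 1, 1, 0, 0],               # 1 = event observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="dfs_months", event_col="recurrence")
print(cph.hazard_ratios_)  # exp(coef) for mlps_score
```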
Deep learning-based pathology signature could reveal lymph node status and act as a novel prognostic marker across multiple cancer types
Chen, S.
Xiang, J.
Wang, X.
Zhang, J.
Yang, S.
Yang, W.
Zheng, J.
Han, X.
Br J Cancer2023Journal Article, cited 0 times
Website
TCGA-COAD
TCGA-ESCA
TCGA-KIRC
TCGA-KIRP
TCGA-LIHC
TCGA-BRCA
TCGA-LUAD
TCGA-READ
TCGA-STAD
TCGA-TGCT
TCGA-THCA
CPTAC-COAD
CPTAC-KIRC
CPTAC-BRCA
CPTAC-LUAD
Whole Slide Imaging (WSI)
Pathomics
Classification
H&E-stained slides
Lymphatic Metastasis/pathology
Prognosis
*Deep Learning
Retrospective Studies
Lymph Nodes/pathology
BACKGROUND: Identifying lymph node metastasis (LNM) relies mainly on indirect radiology. Previous studies omitted quantified associations with traits beyond individual cancer types, failing to provide generalisable performance across tumour types. METHODS: 4400 whole slide images across 11 cancer types were collected for training, cross-verification, and external validation of the pan-cancer lymph node metastasis (PC-LNM) model. We proposed an attention-based weakly supervised neural network based on self-supervised cancer-invariant features for the prediction task. RESULTS: PC-LNM achieved a test area under the curve (AUC) of 0.732 (95% confidence interval: 0.717-0.746, P < 0.0001) in fivefold cross-validation of multiple cancer types, and also demonstrated good generalisation in the external validation cohort with an AUC of 0.699 (95% confidence interval: 0.658-0.737, P < 0.0001). The interpretability results derived from PC-LNM revealed that the regions with the highest attention scores identified by the model generally correspond to tumours with poorly differentiated morphologies. PC-LNM achieved superior performance over previously reported methods and could also act as an independent prognostic factor for patients across multiple tumour types. DISCUSSION: We presented an automated pan-cancer model for predicting LNM status from primary tumour histology, which could act as a novel prognostic marker across multiple cancer types.
A mathematical-descriptor of tumor-mesoscopic-structure from computed-tomography images annotates prognostic- and molecular-phenotypes of epithelial ovarian cancer
Lu, Haonan
Arshad, Mubarik
Thornton, Andrew
Avesani, Giacomo
Cunnea, Paula
Curry, Ed
Kanavati, Fahdi
Liang, Jack
Nixon, Katherine
Williams, Sophie T.
Hassan, Mona Ali
Bowtell, David D. L.
Gabra, Hani
Fotopoulou, Christina
Rockall, Andrea
Aboagye, Eric O.
Nature Communications2019Journal Article, cited 0 times
Website
TCGA-OV
Machine learning
Classification
The five-year survival rate of epithelial ovarian cancer (EOC) is approximately 35-40% despite maximal treatment efforts, highlighting a need for stratification biomarkers for personalized treatment. Here we extract 657 quantitative mathematical descriptors from the preoperative CT images of 364 EOC patients at their initial presentation. Using machine learning, we derive a non-invasive summary-statistic of the primary ovarian tumor based on 4 descriptors, which we name "Radiomic Prognostic Vector" (RPV). RPV reliably identifies the 5% of patients with median overall survival less than 2 years, significantly improves established prognostic methods, and is validated in two independent, multi-center cohorts. Furthermore, genetic, transcriptomic and proteomic analysis from two independent datasets elucidate that stromal phenotype and DNA damage response pathways are activated in RPV-stratified tumors. RPV and its associated analysis platform could be exploited to guide personalized therapy of EOC and is potentially transferrable to other cancer types.
A genome-wide gain-of-function screen identifies CDKN2C as a HBV host factor
Eller, Carla
Heydmann, Laura
Colpitts, Che C.
El Saghire, Houssein
Piccioni, Federica
Jühling, Frank
Majzoub, Karim
Pons, Caroline
Bach, Charlotte
Lucifora, Julie
Lupberger, Joachim
Nassal, Michael
Cowley, Glenn S.
Fujiwara, Naoto
Hsieh, Sen-Yung
Hoshida, Yujin
Felli, Emanuele
Pessaux, Patrick
Sureau, Camille
Schuster, Catherine
Root, David E.
Verrier, Eloi R.
Baumert, Thomas F.
Nature Communications2020Journal Article, cited 0 times
Website
TCGA-LIHC
Chronic HBV infection is a major cause of liver disease and cancer worldwide. Approaches for cure are lacking, and the knowledge of virus-host interactions is still limited. Here, we perform a genome-wide gain-of-function screen using a poorly permissive hepatoma cell line to uncover host factors enhancing HBV infection. Validation studies in primary human hepatocytes identified CDKN2C as an important host factor for HBV replication. CDKN2C is overexpressed in highly permissive cells and HBV-infected patients. Mechanistic studies show a role for CDKN2C in inducing cell cycle G1 arrest through inhibition of CDK4/6 associated with the upregulation of HBV transcription enhancers. A correlation between CDKN2C expression and disease progression in HBV-infected patients suggests a role in HBV-induced liver disease. Taken together, we identify a previously undiscovered clinically relevant HBV host factor, allowing the development of improved infectious model systems for drug discovery and the study of the HBV life cycle.
EGFR/SRC/ERK-stabilized YTHDF2 promotes cholesterol dysregulation and invasive growth of glioblastoma
Fang, Runping
Chen, Xin
Zhang, Sicong
Shi, Hui
Ye, Youqiong
Shi, Hailing
Zou, Zhongyu
Li, Peng
Guo, Qing
Ma, Li
Nature Communications2021Journal Article, cited 14 times
Website
REMBRANDT
GBM
YTHDF2
Integrating deep learning CT-scan model, biological and clinical variables to predict severity of COVID-19 patients
Lassau, N.
Ammari, S.
Chouzenoux, E.
Gortais, H.
Herent, P.
Devilder, M.
Soliman, S.
Meyrignac, O.
Talabard, M. P.
Lamarque, J. P.
Dubois, R.
Loiseau, N.
Trichelair, P.
Bendjebbar, E.
Garcia, G.
Balleyguier, C.
Merad, M.
Stoclin, A.
Jegou, S.
Griscelli, F.
Tetelboum, N.
Li, Y.
Verma, S.
Terris, M.
Dardouri, T.
Gupta, K.
Neacsu, A.
Chemouni, F.
Sefta, M.
Jehanno, P.
Bousaid, I.
Boursin, Y.
Planchet, E.
Azoulay, M.
Dachary, J.
Brulport, F.
Gonzalez, A.
Dehaene, O.
Schiratti, J. B.
Schutte, K.
Pesquet, J. C.
Talbot, H.
Pronier, E.
Wainrib, G.
Clozel, T.
Barlesi, F.
Bellin, M. F.
Blum, M. G. B.
Nat Commun2021Journal Article, cited 20 times
Website
LIDC-IDRI
Deep Learning
Multivariate Analysis
Computed Tomography (CT)
Model
Imaging features
The SARS-CoV-2 pandemic has put pressure on intensive care units, so that identifying predictors of disease severity is a priority. We collect 58 clinical and biological variables, and chest CT scan data, from 1003 coronavirus-infected patients from two French hospitals. We train a deep learning model based on CT scans to predict severity. We then construct the multimodal AI-severity score that includes 5 clinical and biological variables (age, sex, oxygenation, urea, platelet) in addition to the deep learning model. We show that neural network analysis of CT scans brings unique prognostic information, although it is correlated with other markers of severity (oxygenation, LDH, and CRP), explaining the measurable but limited 0.03 increase in AUC obtained when adding CT-scan information to clinical variables. When comparing AI-severity with 11 existing severity scores, we find significantly improved prognostic performance; AI-severity can therefore rapidly become a reference scoring approach.
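The two-stage construction described above (a deep learning score from CT, then a small multimodal model over that score plus clinical and biological variables) can be mimicked with a logistic model. The feature values and labels below are placeholders, and the actual AI-severity score is not a plain logistic regression fit on four patients.

```python
# Sketch: combine a CT-model output with clinical variables (dummy data).
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: CT-model score, age, sex, oxygenation, urea, platelets
X_train = np.array([
    [0.8, 71, 1, 88, 9.0, 150],
    [0.2, 45, 0, 97, 5.0, 260],
    [0.6, 63, 1, 92, 7.5, 180],
    [0.1, 38, 0, 98, 4.5, 300],
])
y_train = [1, 0, 1, 0]  # 1 = severe outcome

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.predict_proba([[0.7, 68, 1, 90, 8.2, 170]])[:, 1])
```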
Human-interpretable image features derived from densely mapped cancer pathology slides predict diverse molecular phenotypes
Diao, J. A.
Wang, J. K.
Chui, W. F.
Mountain, V.
Gullapally, S. C.
Srinivasan, R.
Mitchell, R. N.
Glass, B.
Hoffman, S.
Rao, S. K.
Maheshwari, C.
Lahiri, A.
Prakash, A.
McLoughlin, R.
Kerner, J. K.
Resnick, M. B.
Montalto, M. C.
Khosla, A.
Wapinski, I. N.
Beck, A. H.
Elliott, H. L.
Taylor-Weiner, A.
Nat Commun2021Journal Article, cited 0 times
Website
Post-NAT-BRCA
H&E-stained slides
BREAST
Deep Learning
Computational methods have made substantial progress in improving the accuracy and throughput of pathology workflows for diagnostic, prognostic, and genomic prediction. Still, lack of interpretability remains a significant barrier to clinical integration. We present an approach for predicting clinically-relevant molecular phenotypes from whole-slide histopathology images using human-interpretable image features (HIFs). Our method leverages >1.6 million annotations from board-certified pathologists across >5700 samples to train deep learning models for cell and tissue classification that can exhaustively map whole-slide images at two- and four-micron resolution. Cell- and tissue-type model outputs are combined into 607 HIFs that quantify specific and biologically-relevant characteristics across five cancer types. We demonstrate that these HIFs correlate with well-known markers of the tumor microenvironment and can predict diverse molecular signatures (AUROC 0.601-0.864), including expression of four immune checkpoint proteins and homologous recombination deficiency, with performance comparable to 'black-box' methods. Our HIF-based approach provides a comprehensive, quantitative, and interpretable window into the composition and spatial architecture of the tumor microenvironment.
Dynamic memory to alleviate catastrophic forgetting in continual learning with medical imaging
Perkonigg, M.
Hofmanninger, J.
Herold, C. J.
Brink, J. A.
Pianykh, O.
Prosch, H.
Langs, G.
Nat Commun2021Journal Article, cited 0 times
Website
LIDC-IDRI
LUNA16 Challenge
LNDb Challenge
Machine Learning
Computed Tomography (CT)
Medical imaging is a central part of clinical diagnosis and treatment guidance. Machine learning has increasingly gained relevance because it captures features of disease and treatment response that are relevant for therapeutic decision-making. In clinical practice, the continuous progress of image acquisition technology or diagnostic procedures, the diversity of scanners, and evolving imaging protocols hamper the utility of machine learning, as prediction accuracy on new data deteriorates, or models become outdated due to these domain shifts. We propose a continual learning approach to deal with such domain shifts occurring at unknown time points. We adapt models to emerging variations in a continuous data stream while counteracting catastrophic forgetting. A dynamic memory enables rehearsal on a subset of diverse training data to mitigate forgetting while enabling models to expand to new domains. The technique balances memory by detecting pseudo-domains, representing different style clusters within the data stream. Evaluation of two different tasks, cardiac segmentation in magnetic resonance imaging and lung nodule detection in computed tomography, demonstrate a consistent advantage of the method.
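A core ingredient of the approach, rehearsal from a bounded memory mixed into each new training batch, can be sketched with reservoir sampling; the paper's pseudo-domain detection for keeping the memory diverse is deliberately simplified away here.

```python
# Minimal rehearsal memory for continual learning (reservoir sampling).
import random

class RehearsalMemory:
    def __init__(self, capacity=256):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, sample):
        # Reservoir sampling keeps a uniform subset of the stream.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def batch(self, new_samples, k=8):
        # Train on new data plus k rehearsed samples to counter forgetting.
        replay = random.sample(self.buffer, min(k, len(self.buffer)))
        return list(new_samples) + replay
```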
A SIMPLI (Single-cell Identification from MultiPLexed Images) approach for spatially-resolved tissue phenotyping at single-cell resolution
Bortolomeazzi, M.
Montorsi, L.
Temelkovski, D.
Keddar, M. R.
Acha-Sagredo, A.
Pitcher, M. J.
Basso, G.
Laghi, L.
Rodriguez-Justo, M.
Spencer, J.
Ciccarelli, F. D.
Nat Commun2022Journal Article, cited 1 times
Website
CRC_FFPE-CODEX_CellNeighs
Digital pathology
Pathomics
COLON
Antibodies
Colon/diagnostic imaging/pathology
Diagnostic Imaging/*methods
Humans
Image Processing
Computer-Assisted/*methods
Intestinal Mucosa/diagnostic imaging/pathology
Neoplasms/diagnostic imaging/pathology
Reproducibility of Results
*Single-Cell Analysis
T-Lymphocytes/pathology
Multiplexed imaging technologies enable the study of biological tissues at single-cell resolution while preserving spatial information. Currently, high-dimensional imaging data analysis is technology-specific and requires multiple tools, restricting analytical scalability and result reproducibility. Here we present SIMPLI (Single-cell Identification from MultiPLexed Images), a flexible and technology-agnostic software that unifies all steps of multiplexed imaging data analysis. After raw image processing, SIMPLI performs a spatially resolved, single-cell analysis of the tissue slide as well as cell-independent quantifications of marker expression to investigate features undetectable at the cell level. SIMPLI is highly customisable and can run on desktop computers as well as high-performance computing environments, enabling workflow parallelisation for large datasets. SIMPLI produces multiple tabular and graphical outputs at each step of the analysis. Its containerised implementation and minimum configuration requirements make SIMPLI a portable and reproducible solution for multiplexed imaging data analysis. Software is available at "SIMPLI [ https://github.com/ciccalab/SIMPLI ]".
The Medical Segmentation Decathlon
Antonelli, M.
Reinke, A.
Bakas, S.
Farahani, K.
Kopp-Schneider, A.
Landman, B. A.
Litjens, G.
Menze, B.
Ronneberger, O.
Summers, R. M.
van Ginneken, B.
Bilello, M.
Bilic, P.
Christ, P. F.
Do, R. K. G.
Gollub, M. J.
Heckers, S. H.
Huisman, H.
Jarnagin, W. R.
McHugo, M. K.
Napel, S.
Pernicka, J. S. G.
Rhode, K.
Tobon-Gomez, C.
Vorontsov, E.
Meakin, J. A.
Ourselin, S.
Wiesenfarth, M.
Arbelaez, P.
Bae, B.
Chen, S.
Daza, L.
Feng, J.
He, B.
Isensee, F.
Ji, Y.
Jia, F.
Kim, I.
Maier-Hein, K.
Merhof, D.
Pai, A.
Park, B.
Perslev, M.
Rezaiifar, R.
Rippel, O.
Sarasua, I.
Shen, W.
Son, J.
Wachinger, C.
Wang, L.
Wang, Y.
Xia, Y.
Xu, D.
Xu, Z.
Zheng, Y.
Simpson, A. L.
Maier-Hein, L.
Cardoso, M. J.
Nat Commun2022Journal Article, cited 79 times
Website
TCGA-GBM
TCGA-LGG
BraTS-TCGA-GBM
BraTS-TCGA-LGG
NSCLC Radiogenomics: Initial Stanford Study of 26 Cases
Challenge
*Algorithms
*Image Processing
Computer-Assisted/methods
International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete in a multitude of both tasks and modalities, to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems for the next two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate of algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized for scientists who are not versed in AI model training.
Automated detection and segmentation of non-small cell lung cancer computed tomography images
Primakov, Sergey P.
Ibrahim, Abdalla
van Timmeren, Janita E.
Wu, Guangyao
Keek, Simon A.
Beuque, Manon
Granzier, Renée W. Y.
Lavrova, Elizaveta
Scrivener, Madeleine
Sanduleanu, Sebastian
Kayan, Esma
Halilaj, Iva
Lenaers, Anouk
Wu, Jianlin
Monshouwer, René
Geets, Xavier
Gietema, Hester A.
Hendriks, Lizza E. L.
Morin, Olivier
Jochems, Arthur
Woodruff, Henry C.
Lambin, Philippe
Nature Communications2022Journal Article, cited 3 times
Website
NSCLC-Radiomics
NSCLC Radiogenomics
NSCLC-Radiomics-Genomics
NSCLC-Radiomics-Interobserver1
non-small cell lung cancer
Segmentation
Detection and segmentation of abnormalities on medical images is highly important for patient management including diagnosis, radiotherapy, response evaluation, as well as for quantitative image research. We present a fully automated pipeline for the detection and volumetric segmentation of non-small cell lung cancer (NSCLC) developed and validated on 1328 thoracic CT scans from 8 institutions. Along with quantitative performance detailed by image slice thickness, tumor size, image interpretation difficulty, and tumor location, we report an in-silico prospective clinical trial, where we show that the proposed method is faster and more reproducible compared to the experts. Moreover, we demonstrate that on average, radiologists & radiation oncologists preferred automatic segmentations in 56% of the cases. Additionally, we evaluate the prognostic power of the automatic contours by applying RECIST criteria and measuring the tumor volumes. Segmentations by our method stratified patients into low and high survival groups with higher significance compared to those methods based on manual contours.
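The volumetric measurements used for survival stratification here reduce to counting mask voxels and scaling by voxel size; a sketch with SimpleITK, where the mask path is a placeholder:

```python
# Tumor volume from a binary segmentation mask.
import numpy as np
import SimpleITK as sitk

mask = sitk.ReadImage("tumor_mask.nii.gz")            # placeholder path
voxels = sitk.GetArrayFromImage(mask) > 0             # binary voxel grid
voxel_volume_mm3 = float(np.prod(mask.GetSpacing()))  # x*y*z spacing in mm
volume_ml = voxels.sum() * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3
print(f"tumor volume: {volume_ml:.1f} mL")
```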
Deep learning empowered volume delineation of whole-body organs-at-risk for accelerated radiotherapy
Shi, Feng
Hu, Weigang
Wu, Jiaojiao
Han, Miaofei
Wang, Jiazhou
Zhang, Wei
Zhou, Qing
Zhou, Jingjie
Wei, Ying
Shao, Ying
Chen, Yanbo
Yu, Yue
Cao, Xiaohuan
Zhan, Yiqiang
Zhou, Xiang Sean
Gao, Yaozong
Shen, Dinggang
Nat Commun2022Journal Article, cited 0 times
FDG-PET-CT-Lesions
NSCLC Radiogenomics
LIDC-IDRI
Head-Neck-PET-CT
Humans
*Deep Learning
Tomography
X-Ray Computed
Organs at Risk
*Neoplasms/radiotherapy
Image Processing
Computer-Assisted
In radiotherapy for cancer patients, an indispensable process is to delineate organs-at-risk (OARs) and tumors. However, it is the most time-consuming step, as manual delineation is always required from radiation oncologists. Herein, we propose a lightweight deep learning framework for radiotherapy treatment planning (RTP), named RTP-Net, to promote an automatic, rapid, and precise initialization of whole-body OARs and tumors. Briefly, the framework implements a cascade coarse-to-fine segmentation, with an adaptive module for both small and large organs, and attention mechanisms for organs and boundaries. Our experiments show three merits: 1) extensive evaluation on 67 delineation tasks on a large-scale dataset of 28,581 cases; 2) comparable or superior accuracy, with an average Dice of 0.95; 3) near real-time delineation in most tasks, with <2 s. This framework could be utilized to accelerate the contouring process in the All-in-One radiotherapy scheme, and thus greatly shorten the turnaround time of patients.
Spatial cellular architecture predicts prognosis in glioblastoma
Zheng, Y.
Carrillo-Perez, F.
Pizurica, M.
Heiland, D. H.
Gevaert, O.
Nat Commun2023Journal Article, cited 0 times
CPTAC-GBM
Pathomics
Deep Learning
Humans
*Glioblastoma/genetics
Astrocytes
Cell Plasticity
Cluster Analysis
Gene Expression Profiling
Intra-tumoral heterogeneity and cell-state plasticity are key drivers for the therapeutic resistance of glioblastoma. Here, we investigate the association between spatial cellular organization and glioblastoma prognosis. Leveraging single-cell RNA-seq and spatial transcriptomics data, we develop a deep learning model to predict transcriptional subtypes of glioblastoma cells from histology images. Employing this model, we phenotypically analyze 40 million tissue spots from 410 patients and identify consistent associations between tumor architecture and prognosis across two independent cohorts. Patients with poor prognosis exhibit higher proportions of tumor cells expressing a hypoxia-induced transcriptional program. Furthermore, a clustering pattern of astrocyte-like tumor cells is associated with worse prognosis, while dispersion and connection of the astrocytes with other transcriptional subtypes correlate with decreased risk. To validate these results, we develop a separate deep learning model that utilizes histology images to predict prognosis. Applying this model to spatial transcriptomics data reveals survival-associated regional gene expression programs. Overall, our study presents a scalable approach to unravel the transcriptional heterogeneity of glioblastoma and establishes a critical connection between spatial cellular architecture and clinical outcomes.
CellSighter: a neural network to classify cells in highly multiplexed images
Amitay, Yael
Bussi, Yuval
Feinstein, Ben
Bagon, Shai
Milo, Idan
Keren, Leeat
Nature Communications2023Journal Article, cited 0 times
CRC_FFPE-CODEX_CellNeighs
Machine Learning
Multiplexed imaging enables measurement of multiple proteins in situ, offering an unprecedented opportunity to chart various cell types and states in tissues. However, cell classification, the task of identifying the type of individual cells, remains challenging, labor-intensive, and limiting to throughput. Here, we present CellSighter, a deep-learning based pipeline to accelerate cell classification in multiplexed images. Given a small training set of expert-labeled images, CellSighter outputs the label probabilities for all cells in new images. CellSighter achieves over 80% accuracy for major cell types across imaging platforms, which approaches inter-observer concordance. Ablation studies and simulations show that CellSighter is able to generalize beyond its training data and learn features of protein expression levels, as well as spatial features such as subcellular expression patterns. CellSighter’s design reduces overfitting, and it can be trained with only thousands or even hundreds of labeled examples. CellSighter also outputs a prediction confidence, giving downstream experts control over the results. Altogether, CellSighter drastically reduces hands-on time for cell classification in multiplexed images, while improving accuracy and consistency across datasets.
Robust phenotyping of highly multiplexed tissue imaging data using pixel-level clustering
Liu, Candace C.
Greenwald, Noah F.
Kong, Alex
McCaffrey, Erin F.
Leow, Ke Xuan
Mrdjen, Dunja
Cannon, Bryan J.
Rumberger, Josef Lorenz
Varra, Sricharan Reddy
Angelo, Michael
Nature Communications2023Journal Article, cited 0 times
CRC_FFPE-CODEX_CellNeighs
While technologies for multiplexed imaging have provided an unprecedented understanding of tissue composition in health and disease, interpreting this data remains a significant computational challenge. To understand the spatial organization of tissue and how it relates to disease processes, imaging studies typically focus on cell-level phenotypes. However, images can capture biologically important objects that are outside of cells, such as the extracellular matrix. Here, we describe a pipeline, Pixie, that achieves robust and quantitative annotation of pixel-level features using unsupervised clustering and show its application across a variety of biological contexts and multiplexed imaging platforms. Furthermore, current cell phenotyping strategies that rely on unsupervised clustering can be labor intensive and require large amounts of manual cluster adjustments. We demonstrate how pixel clusters that lie within cells can be used to improve cell annotations. We comprehensively evaluate pre-processing steps and parameter choices to optimize clustering performance and quantify the reproducibility of our method. Importantly, Pixie is open source and easily customizable through a user-friendly interface.
Segment anything in medical images
Ma, J.
He, Y.
Li, F.
Han, L.
You, C.
Wang, B.
Nat Commun2024Journal Article, cited 375 times
Website
Adrenal-ACC-Ki67-Seg
FDG-PET-CT-Lesions
GLIS-RT
HCC-TACE-Seg
CT Lymph Nodes
PleThora
NSCLC Radiogenomics
Brain-TR-GammaKnife
CC-Tumor-Heterogeneity
Meningioma-SEG-CLASS
ISBI-MR-Prostate-2013
QIN-PROSTATE-Repeatability
CDD-CESM
*Image Processing, Computer-Assisted/methods
Medical Decathlon
BraTS
*Diagnostic Imaging
Medical image segmentation is a critical component in clinical practice, facilitating accurate diagnosis, treatment planning, and disease monitoring. However, existing methods, often tailored to specific modalities or disease types, lack generalizability across the diverse spectrum of medical image segmentation tasks. Here we present MedSAM, a foundation model designed to bridge this gap by enabling universal medical image segmentation. The model is developed on a large-scale medical image dataset with 1,570,263 image-mask pairs, covering 10 imaging modalities and over 30 cancer types. We conduct a comprehensive evaluation on 86 internal validation tasks and 60 external validation tasks, demonstrating better accuracy and robustness than modality-wise specialist models. By delivering accurate and efficient segmentation across a wide spectrum of tasks, MedSAM holds significant potential to expedite the evolution of diagnostic tools and the personalization of treatment plans.
Enhancing NSCLC recurrence prediction with PET/CT habitat imaging, ctDNA, and integrative radiogenomics-blood insights
Sujit, S. J.
Aminu, M.
Karpinets, T. V.
Chen, P.
Saad, M. B.
Salehjahromi, M.
Boom, J. D.
Qayati, M.
George, J. M.
Allen, H.
Antonoff, M. B.
Hong, L.
Hu, X.
Heeke, S.
Tran, H. T.
Le, X.
Elamin, Y. Y.
Altan, M.
Vokes, N. I.
Sheshadri, A.
Lin, J.
Zhang, J.
Lu, Y.
Behrens, C.
Godoy, M. C. B.
Wu, C. C.
Chang, J. Y.
Chung, C.
Jaffray, D. A.
Wistuba, I. I.
Lee, J. J.
Vaporciyan, A. A.
Gibbons, D. L.
Heymach, J.
Zhang, J.
Cascone, T.
Wu, J.
Nat Commun2024Journal Article, cited 0 times
Website
While we recognize the prognostic importance of clinicopathological measures and circulating tumor DNA (ctDNA), the independent contribution of quantitative image markers to prognosis in non-small cell lung cancer (NSCLC) remains underexplored. In our multi-institutional study of 394 NSCLC patients, we utilize pre-treatment computed tomography (CT) and (18)F-fluorodeoxyglucose positron emission tomography (FDG-PET) to establish a habitat imaging framework for assessing regional heterogeneity within individual tumors. This framework identifies three PET/CT subtypes, which maintain prognostic value after adjusting for clinicopathologic risk factors including tumor volume. Additionally, these subtypes complement ctDNA in predicting disease recurrence. Radiogenomics analysis unveils the molecular underpinnings of these imaging subtypes, highlighting downregulation in interferon alpha and gamma pathways in the high-risk subtype. In summary, our study demonstrates that these habitat imaging subtypes effectively stratify NSCLC patients based on their risk levels for disease recurrence after initial curative surgery or radiotherapy, providing valuable insights for personalized treatment approaches.
Video frame interpolation neural network for 3D tomography across different length scales
Gambini, L.
Gabbett, C.
Doolan, L.
Jones, L.
Coleman, J. N.
Gilligan, P.
Sanvito, S.
Nat Commun2024Journal Article, cited 0 times
Website
Pseudo-PHI-DICOM-Data
Image Enhancement
Graphene
Materials Science
Medical Research
Three-dimensional (3D) tomography is a powerful investigative tool for many scientific domains, ranging from materials science to engineering to medicine. Many factors may limit the 3D resolution, which is often spatially anisotropic, compromising the precision of the retrievable information. A neural network, designed for video-frame interpolation, is employed to enhance tomographic images, achieving cubic-voxel resolution. The method is applied to distinct domains: the investigation of the morphology of printed graphene nanosheet networks, obtained via focused ion beam-scanning electron microscopy (FIB-SEM), magnetic resonance imaging of the human brain, and X-ray computed tomography scans of the abdomen. The accuracy of the 3D tomographic maps can be quantified through computer-vision metrics, but most importantly through the precision of the physical quantities retrievable from the reconstructions; in the case of FIB-SEM, the porosity, tortuosity, and effective diffusivity. This work showcases a versatile image-augmentation strategy for optimizing 3D tomography acquisition conditions, while preserving the information content.
Image analysis-based tumor infiltrating lymphocytes measurement predicts breast cancer pathologic complete response in SWOG S0800 neoadjuvant chemotherapy trial
Fanucci, Kristina A.
Bai, Yalai
Pelekanou, Vasiliki
Nahleh, Zeina A.
Shafi, Saba
Burela, Sneha
Barlow, William E.
Sharma, Priyanka
Thompson, Alastair M.
Godwin, Andrew K.
Rimm, David L.
Hortobagyi, Gabriel N.
Liu, Yihan
Wang, Leona
Wei, Wei
Pusztai, Lajos
Blenman, Kim R. M.
NPJ Breast Cancer2023Journal Article, cited 0 times
Website
breast cancer
Neoadjuvant chemotherapy
lymphocytes
We assessed the predictive value of an image analysis-based tumor-infiltrating lymphocytes (TILs) score for pathologic complete response (pCR) and event-free survival in breast cancer (BC). A total of 113 pretreatment samples were analyzed from patients with stage IIB-IIIC HER-2-negative BC randomized to neoadjuvant chemotherapy ± bevacizumab. TILs quantification was performed on full sections using QuPath open-source software with a convolutional neural network cell classifier (CNN11). We used easTILs% as a digital metric of the TILs score, defined as [sum of lymphocyte area (mm2)/stromal area (mm2)] × 100. Pathologist-read stromal TILs score (sTILs%) was determined following published guidelines. Mean pretreatment easTILs% was significantly higher in cases with pCR compared to residual disease (median 36.1 vs. 14.8%, p < 0.001). We observed a strong positive correlation (r = 0.606, p < 0.0001) between easTILs% and sTILs%. The area under the prediction curve (AUC) was higher for easTILs% than sTILs%, 0.709 and 0.627, respectively. Image analysis-based TILs quantification is predictive of pCR in BC and showed better response discrimination than pathologist-read sTILs%.
Harnessing multimodal data integration to advance precision oncology
Boehm, Kevin M
Khosravi, Pegah
Vanguri, Rami
Gao, Jianjiong
Shah, Sohrab P
Nature Reviews Cancer2022Journal Article, cited 0 times
Website
Breast-MRI-NACT-Pilot
Multi-modal imaging
Machine Learning
End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography
Ardila, Diego
Kiraly, Atilla P.
Bharadwaj, Sujeeth
Choi, Bokyung
Reicher, Joshua J.
Peng, Lily
Tse, Daniel
Etemadi, Mozziyar
Ye, Wenxing
Corrado, Greg
Naidich, David P.
Shetty, Shravya
Nature Medicine2019Journal Article
With an estimated 160,000 deaths in 2018, lung cancer is the most common cause of cancer death in the United States(1). Lung cancer screening using low-dose computed tomography has been shown to reduce mortality by 20-43% and is now included in US screening guidelines(1-6). Existing challenges include inter-grader variability and high false-positive and false-negative rates(7-10). We propose a deep learning algorithm that uses a patient's current and prior computed tomography volumes to predict the risk of lung cancer. Our model achieves state-of-the-art performance (94.4% area under the curve) on 6,716 National Lung Cancer Screening Trial cases, and performs similarly on an independent clinical validation set of 1,139 cases. We conducted two reader studies. When prior computed tomography imaging was not available, our model outperformed all six radiologists with absolute reductions of 11% in false positives and 5% in false negatives. Where prior computed tomography imaging was available, the model performance was on par with the same radiologists. This creates an opportunity to optimize the screening process via computer assistance and automation. While the vast majority of patients remain unscreened, we show the potential for deep learning models to increase the accuracy, consistency and adoption of lung cancer screening worldwide.
Towards a general-purpose foundation model for computational pathology
Chen, R. J.
Ding, T.
Lu, M. Y.
Williamson, D. F. K.
Jaume, G.
Song, A. H.
Chen, B.
Zhang, A.
Shao, D.
Shaban, M.
Williams, M.
Oldenburg, L.
Weishaupt, L. L.
Wang, J. J.
Vaidya, A.
Le, L. P.
Gerber, G.
Sahai, S.
Williams, W.
Mahmood, F.
Nat Med2024Journal Article, cited 0 times
TCGA-LUAD
TCGA-LUSC
CPTAC-LUAD
CPTAC-LUSC
CPTAC-CCRCC
TCGA-GBM
TCGA-ESCA
Hungarian-Colorectal-Screening
TIL-WSI-TCGA
Large-scale data
Pathomics
Pathogenomics
*Artificial Intelligence
Workflow
Self-supervised
Cell segmentation
CLAM
ABMIL
Scikit-Learn
Cancer Metastases in Lymph Nodes Challenge 2016 (CAMELYON16) Challenge
Quantitative evaluation of tissue images is crucial for computational pathology (CPath) tasks, requiring the objective characterization of histopathological entities from whole-slide images (WSIs). The high resolution of WSIs and the variability of morphological features present significant challenges, complicating the large-scale annotation of data for high-performance applications. To address this challenge, current efforts have proposed the use of pretrained image encoders through transfer learning from natural image datasets or self-supervised learning on publicly available histopathology datasets, but have not been extensively developed and evaluated across diverse tissue types at scale. We introduce UNI, a general-purpose self-supervised model for pathology, pretrained using more than 100 million images from over 100,000 diagnostic H&E-stained WSIs (>77 TB of data) across 20 major tissue types. The model was evaluated on 34 representative CPath tasks of varying diagnostic difficulty. In addition to outperforming previous state-of-the-art models, we demonstrate new modeling capabilities in CPath such as resolution-agnostic tissue classification, slide classification using few-shot class prototypes, and disease subtyping generalization in classifying up to 108 cancer types in the OncoTree classification system. UNI advances unsupervised representation learning at scale in CPath in terms of both pretraining data and downstream evaluation, enabling data-efficient artificial intelligence models that can generalize and transfer to a wide range of diagnostically challenging tasks and clinical workflows in anatomic pathology.
Identification of cell types in multiplexed in situ images by combining protein expression and spatial information using CELESTA
Zhang, Weiruo
Li, Irene
Reticker-Flynn, Nathan E.
Good, Zinaida
Chang, Serena
Samusik, Nikolay
Saumyaa, Saumyaa
Li, Yuanyuan
Zhou, Xin
Liang, Rachel
Kong, Christina S.
Le, Quynh-Thu
Gentles, Andrew J.
Sunwoo, John B.
Nolan, Garry P.
Engleman, Edgar G.
Plevritis, Sylvia K.
Nature Methods2022Journal Article, cited 0 times
CRC_FFPE-CODEX_CellNeighs
Advances in multiplexed in situ imaging are revealing important insights in spatial biology. However, cell type identification remains a major challenge in imaging analysis, with most existing methods involving substantial manual assessment and subjective decisions for thousands of cells. We developed an unsupervised machine learning algorithm, CELESTA, which identifies the cell type of each cell, individually, using the cell’s marker expression profile and, when needed, its spatial information. We demonstrate the performance of CELESTA on multiplexed immunofluorescence images of colorectal cancer and head and neck squamous cell carcinoma (HNSCC). Using the cell types identified by CELESTA, we identify tissue architecture associated with lymph node metastasis in HNSCC, and validate our findings in an independent cohort. By coupling our spatial analysis with single-cell RNA-sequencing data on proximal sections of the same specimens, we identify cell–cell crosstalk associated with lymph node metastasis, demonstrating the power of CELESTA to facilitate identification of clinically relevant interactions.
Distributed radiomics as a signature validation study using the Personal Health Train infrastructure
Shi, Zhenwei
Zhovannik, Ivan
Traverso, Alberto
Dankers, Frank J. W. M.
Deist, Timo M.
Kalendralis, Petros
Monshouwer, René
Bussink, Johan
Fijten, Rianne
Aerts, Hugo J. W. L.
Dekker, Andre
Wee, Leonard
Scientific Data2019Journal Article, cited 0 times
NSCLC-Radiomics
CT
Prediction modelling with radiomics is a rapidly developing research topic that requires access to vast amounts of imaging data. Methods that work on decentralized data are urgently needed, because of concerns about patient privacy. Previously published computed tomography medical image sets with gross tumour volume (GTV) outlines for non-small cell lung cancer have been updated with extended follow-up. In a previous study, these were referred to as Lung1 (n = 421) and Lung2 (n = 221). The Lung1 dataset is made publicly accessible via The Cancer Imaging Archive (TCIA; https://www.cancerimagingarchive.net). We performed a decentralized multi-centre study to develop a radiomic signature (hereafter “ZS2019”) in one institution and validated the performance in an independent institution, without the need for data exchange and compared this to an analysis where all data was centralized. The performance of ZS2019 for 2-year overall survival validated in distributed radiomics was not statistically different from the centralized validation (AUC 0.61 vs 0.61; p = 0.52). Although slightly different in terms of data and methods, no statistically significant difference in performance was observed between the new signature and previous work (c-index 0.58 vs 0.65; p = 0.37). Our objective was not the development of a new signature with the best performance, but to suggest an approach for distributed radiomics. Therefore, we used a similar method as an earlier study. We foresee that the Lung1 dataset can be further re-used for testing radiomic models and investigating feature reproducibility.
Dataset of segmented nuclei in hematoxylin and eosin stained histopathology images of ten cancer types
Hou, Le
Gupta, Rajarsi
Van Arnam, John S.
Zhang, Yuwei
Sivalenka, Kaustubh
Samaras, Dimitris
Kurc, Tahsin M.
Saltz, Joel H.
Scientific Data2020Journal Article, cited 0 times
Pan-Cancer-Nuclei-Seg
The distribution and appearance of nuclei are essential markers for the diagnosis and study of cancer. Despite the importance of nuclear morphology, there is a lack of large scale, accurate, publicly accessible nucleus segmentation data. To address this, we developed an analysis pipeline that segments nuclei in whole slide tissue images from multiple cancer types with a quality control process. We have generated nucleus segmentation results in 5,060 Whole Slide Tissue images from 10 cancer types in The Cancer Genome Atlas. One key component of our work is that we carried out a multi-level quality control process (WSI-level and image patch-level), to evaluate the quality of our segmentation results. The image patch-level quality control used manual segmentation ground truth data from 1,356 sampled image patches. The datasets we publish in this work consist of roughly 5 billion quality controlled nuclei from more than 5,060 TCGA WSIs from 10 different TCGA cancer types and 1,356 manually segmented TCGA image patches from the same 10 cancer types plus additional 4 cancer types.
CT-ORG, a new dataset for multiple organ segmentation in computed tomography
Rister, Blaine
Yi, Darvin
Shivakumar, Kaushik
Nobashi, Tomomi
Rubin, Daniel L.
Scientific Data2020Journal Article, cited 0 times
CT-ORG
Pancreas-CT
Despite the relative ease of locating organs in the human body, automated organ segmentation has been hindered by the scarcity of labeled training data. Due to the tedium of labeling organ boundaries, most datasets are limited to either a small number of cases or a single organ. Furthermore, many are restricted to specific imaging conditions unrepresentative of clinical practice. To address this need, we developed a diverse dataset of 140 CT scans containing six organ classes: liver, lungs, bladder, kidney, bones and brain. For the lungs and bones, we expedited annotation using unsupervised morphological segmentation algorithms, which were accelerated by 3D Fourier transforms. Demonstrating the utility of the data, we trained a deep neural network which requires only 4.3 s to simultaneously segment all the organs in a case. We also show how to efficiently augment the data to improve model generalization, providing a GPU library for doing so. We hope this dataset and code, available through TCIA, will be useful for training and evaluating organ segmentation models.
Chest imaging representing a COVID-19 positive rural U.S. population
Desai, Shivang
Baghal, Ahmad
Wongsurawat, Thidathip
Jenjaroenpun, Piroon
Powell, Thomas
Al-Shukri, Shaymaa
Gates, Kim
Farmer, Phillip
Rutherford, Michael
Blake, Geri
Nolan, Tracy
Sexton, Kevin
Bennett, William
Smith, Kirk
Syed, Shorabuddin
Prior, Fred
Scientific Data2020Journal Article, cited 0 times
COVID-19-AR
As the COVID-19 pandemic unfolds, radiology imaging is playing an increasingly vital role in determining therapeutic options, patient management, and research directions. Publicly available data are essential to drive new research into disease etiology, early detection, and response to therapy. In response to the COVID-19 crisis, the National Cancer Institute (NCI) has extended the Cancer Imaging Archive (TCIA) to include COVID-19 related images. Rural populations are one population at risk for underrepresentation in such public repositories. We have published in TCIA a collection of radiographic and CT imaging studies for patients who tested positive for COVID-19 in the state of Arkansas. A set of clinical data describes each patient including demographics, comorbidities, selected lab data and key radiology findings. These data are cross-linked to SARS-COV-2 cDNA sequence data extracted from clinical isolates from the same population, uploaded to the GenBank repository. We believe this collection will help to address population imbalance in COVID-19 data by providing samples from this normally underrepresented population.
LoDoPaB-CT, a benchmark dataset for low-dose computed tomography reconstruction
Leuschner, J.
Schmidt, M.
Baguer, D. O.
Maass, P.
Sci Data2021Journal Article, cited 0 times
Website
LIDC-IDRI
LDCT-and-Projection-data
Computed Tomography (CT)
Model
Deep learning approaches for tomographic image reconstruction have become very effective and have been demonstrated to be competitive in the field. Comparing these approaches is a challenging task as they rely to a great extent on the data and setup used for training. With the Low-Dose Parallel Beam (LoDoPaB)-CT dataset, we provide a comprehensive, open-access database of computed tomography images and simulated low photon count measurements. It is suitable for training and comparing deep learning methods as well as classical reconstruction approaches. The dataset contains over 40000 scan slices from around 800 patients selected from the LIDC/IDRI database. The data selection and simulation setup are described in detail, and the generating script is publicly accessible. In addition, we provide a Python library for simplified access to the dataset and an online reconstruction challenge. Furthermore, the dataset can also be used for transfer learning as well as sparse and limited-angle reconstruction scenarios.
A DICOM dataset for evaluation of medical image de-identification
Rutherford, Michael
Mun, Seong K.
Levine, Betty
Bennett, William
Smith, Kirk
Farmer, Phil
Jarosz, Quasar
Wagner, Ulrike
Freyman, John
Blake, Geri
Tarbox, Lawrence
Farahani, Keyvan
Prior, Fred
Scientific Data2021Journal Article, cited 0 times
Pseudo-PHI-DICOM-Data
We developed a DICOM dataset that can be used to evaluate the performance of de-identification algorithms. DICOM objects (a total of 1,693 CT, MRI, PET, and digital X-ray images) were selected from datasets published in the Cancer Imaging Archive (TCIA). Synthetic Protected Health Information (PHI) was generated and inserted into selected DICOM Attributes to mimic typical clinical imaging exams. The DICOM Standard and TCIA curation audit logs guided the insertion of synthetic PHI into standard and non-standard DICOM data elements. A TCIA curation team tested the utility of the evaluation dataset. With this publication, the evaluation dataset (containing synthetic PHI) and de-identified evaluation dataset (the result of TCIA curation) are released on TCIA in advance of a competition, sponsored by the National Cancer Institute (NCI), for algorithmic de-identification of medical image datasets. The competition will use a much larger evaluation dataset constructed in the same manner. This paper describes the creation of the evaluation datasets and guidelines for their use.
Optical breast atlas as a testbed for image reconstruction in optical mammography
Xing, Y.
Duan, Y.
Indurkar, P. P.
Qiu, A.
Chen, N.
Sci Data2021Journal Article, cited 0 times
Website
QIN Breast DCE-MRI
Vasculature
Segmentation
Finite element model
Image Registration
Synthetic images
BREAST
MATLAB
TWIST sequence
Mammography
We present two optical breast atlases for optical mammography, aiming to advance image reconstruction research by providing a common platform for testing advanced image reconstruction algorithms. Each atlas consists of five individual breast models. The first atlas provides breast vasculature surface models, which are derived from human breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data using image segmentation. A finite element-based method is used to deform the breast vasculature models from their natural shapes to generate the second atlas, compressed breast models. Breast compression is typically done in X-ray mammography but is also necessary for some optical mammography systems. Technical validation is presented to demonstrate how the atlases can be used to study image reconstruction algorithms. Optical measurements are generated numerically with compressed breast models and a predefined configuration of light sources and photodetectors. The simulated data is fed into three standard image reconstruction algorithms to reconstruct optical images of the vasculature, which can then be compared with the ground truth to evaluate their performance.
Segmentation of vestibular schwannoma from MRI, an open annotated dataset and baseline algorithm
Shapey, Jonathan
Kujawa, Aaron
Dorent, Reuben
Wang, Guotai
Dimitriadis, Alexis
Grishchuk, Diana
Paddick, Ian
Kitchen, Neil
Bradford, Robert
Saeed, Shakeel R.
Bisdas, Sotirios
Ourselin, Sébastien
Vercauteren, Tom
Scientific Data2021Journal Article, cited 4 times
Website
Vestibular-Schwannoma-SEG
Segmentation
MRI
Automatic segmentation of vestibular schwannomas (VS) from magnetic resonance imaging (MRI) could significantly improve clinical workflow and assist patient management. We have previously developed a novel artificial intelligence framework based on a 2.5D convolutional neural network achieving excellent results equivalent to those achieved by an independent human annotator. Here, we provide the first publicly-available annotated imaging dataset of VS by releasing the data and annotations used in our prior work. This collection contains a labelled dataset of 484 MR images collected on 242 consecutive patients with a VS undergoing Gamma Knife Stereotactic Radiosurgery at a single institution. Data includes all segmentations and contours used in treatment planning and details of the administered dose. Implementation of our automated segmentation algorithm uses MONAI, a freely-available open-source framework for deep learning in healthcare imaging. These data will facilitate the development and validation of automated segmentation frameworks for VS and may also be used to develop other multi-modal algorithmic models.
Histopathological whole slide image dataset for classification of treatment effectiveness to ovarian cancer
Wang, Ching-Wei
Chang, Cheng-Chang
Khalil, Muhammad Adil
Lin, Yi-Jia
Liou, Yi-An
Hsu, Po-Chao
Lee, Yu-Ching
Wang, Chih-Hung
Chao, Tai-Kuang
Scientific Data2022Journal Article, cited 0 times
Ovarian Bevacizumab Response
Ovarian cancer is the leading cause of gynecologic cancer death among women. Despite the progress made over the past two decades in the surgery and chemotherapy of ovarian cancer, most advanced-stage patients eventually relapse and die. The conventional treatment for ovarian cancer is to remove cancerous tissue by surgery followed by chemotherapy; however, patients receiving such treatment remain at great risk of tumor recurrence and progressive resistance. New treatments with molecular-targeted agents have recently become accessible. Bevacizumab, as a monotherapy or in combination with chemotherapy, has recently been approved by the FDA for the treatment of epithelial ovarian cancer (EOC). Prediction of therapeutic effects and individualization of therapeutic strategies are critical, but to the authors' best knowledge, there are no effective biomarkers that can be used to predict patient response to bevacizumab treatment for EOC and peritoneal serous papillary carcinoma (PSPC). This dataset helps researchers to explore and develop methods to predict the therapeutic effect of bevacizumab in patients with EOC and PSPC.
The Digital Brain Tumour Atlas, an open histopathology resource
Roetzer-Pejrimovsky, Thomas
Moser, Anna-Christina
Atli, Baran
Vogel, Clemens Christian
Mercea, Petra A.
Prihoda, Romana
Gelpi, Ellen
Haberler, Christine
Höftberger, Romana
Hainfellner, Johannes A.
Baumann, Bernhard
Langs, Georg
Woehrer, Adelheid
Scientific Data2022Journal Article, cited 0 times
CPTAC-GBM
Currently, approximately 150 different brain tumour types are defined by the WHO. Recent endeavours to exploit machine learning and deep learning methods for supporting more precise diagnostics based on the histological tumour appearance have been hampered by the relative paucity of accessible digital histopathological datasets. While freely available datasets are relatively common in many medical specialties such as radiology and genomic medicine, there is still an unmet need regarding histopathological data. Thus, we digitized a significant portion of a large dedicated brain tumour bank based at the Division of Neuropathology and Neurochemistry of the Medical University of Vienna, covering brain tumour cases from 1995–2019. A total of 3,115 slides of 126 brain tumour types (including 47 control tissue slides) have been scanned. Additionally, complementary clinical annotations have been collected for each case. In the present manuscript, we thoroughly discuss this unique dataset and make it publicly available for potential use cases in machine learning and digital image analysis, teaching and as a reference for external validation.
Categorized contrast enhanced mammography dataset for diagnostic and artificial intelligence research
Khaled, Rana
Helal, Maha
Alfarghaly, Omar
Mokhtar, Omnia
Elkorany, Abeer
El Kassas, Hebatalla
Fahmy, Aly
Scientific Data2022Journal Article, cited 0 times
CDD-CESM
Contrast-enhanced spectral mammography (CESM) is a relatively recent imaging modality with increased diagnostic accuracy compared to digital mammography (DM). New deep learning (DL) models were developed that have accuracies equal to that of an average radiologist. However, most studies trained the DL models on DM images as no datasets exist for CESM images. We aim to resolve this limitation by releasing a Categorized Digital Database for Low energy and Subtracted Contrast Enhanced Spectral Mammography images (CDD-CESM) to evaluate decision support systems. The dataset includes 2006 images, with an average resolution of 2355 × 1315, consisting of 310 mass images, 48 architectural distortion images, 222 asymmetry images, 238 calcifications images, 334 mass enhancement images, 184 non-mass enhancement images, 159 postoperative images, 8 post neoadjuvant chemotherapy images, and 751 normal images, with 248 images having more than one finding. This is the first dataset to incorporate data selection, segmentation annotation, medical reports, and pathological diagnosis for all cases. Moreover, we propose and evaluate a DL-based technique to automatically segment abnormal findings in images.
Enhancing the REMBRANDT MRI collection with expert segmentation labels and quantitative radiomic features
Sayah, A.
Bencheqroun, C.
Bhuvaneshwar, K.
Belouali, A.
Bakas, S.
Sako, C.
Davatzikos, C.
Alaoui, A.
Madhavan, S.
Gusev, Y.
Sci Data2022Journal Article, cited 0 times
REMBRANDT
TCGA-GBM
TCGA-LGG
BraTS
Radiomics
Algorithm Development
Analysis Results
Brain/diagnostic imaging
Genomics/methods
Humans
Magnetic Resonance Imaging (MRI)
Neuroimaging
GLISTR
Malignancy of the brain and CNS is unfortunately a common diagnosis. A large subset of these lesions are high-grade tumors, which portend poor prognoses and low survival rates and are estimated to be the tenth leading cause of death worldwide. The complex nature of the brain tissue environment in which these lesions arise offers a rich opportunity for translational research. Magnetic Resonance Imaging (MRI) can provide a comprehensive view of the abnormal regions in the brain; therefore, its application in translational brain cancer research is considered essential for the diagnosis and monitoring of disease. Recent years have seen rapid growth in the field of radiogenomics, especially in cancer, and scientists have been able to successfully integrate the quantitative data extracted from medical images (also known as radiomics) with genomics to answer new and clinically relevant questions. In this paper, we took raw MRI scans from the REMBRANDT data collection in the public domain and performed volumetric segmentation to identify subregions of the brain. Radiomic features were then extracted to represent the MRIs in a quantitative yet summarized format. The resulting dataset enables further biomedical and integrative data analysis and is being made public via the NeuroImaging Tools & Resources Collaboratory (NITRC) repository ( https://www.nitrc.org/projects/rembrandt_brain/ ).
HunCRC: annotated pathological slides to enhance deep learning applications in colorectal cancer screening
Pataki, Bálint Ármin
Olar, Alex
Ribli, Dezső
Pesti, Adrián
Kontsek, Endre
Gyöngyösi, Benedek
Bilecz, Ágnes
Kovács, Tekla
Kovács, Kristóf Attila
Kramer, Zsófia
Kiss, András
Szócska, Miklós
Pollner, Péter
Csabai, István
Scientific Data2022Journal Article, cited 0 times
Hungarian-Colorectal-Screening
Histopathology is the gold standard method for staging and grading human tumors and provides critical information for the oncoteam's decision making. Highly trained pathologists are needed for careful microscopic analysis of the slides produced from tissue taken at biopsy, and this is a time-consuming process. A reliable decision support system would assist healthcare systems that often suffer from a shortage of pathologists. Recent advances in digital pathology allow for high-resolution digitalization of pathological slides. Digital slide scanners combined with modern computer vision models, such as convolutional neural networks, can help pathologists in their everyday work, resulting in shortened diagnosis times. In this study, 200 digital whole-slide images of hematoxylin-eosin-stained colorectal biopsies are published. Alongside the whole-slide images, detailed region-level annotations are also provided for ten relevant pathological classes. The 200 digital slides, after pre-processing, resulted in 101,389 patches. A single patch is a 512 × 512 pixel image, covering a 248 × 248 μm2 tissue area. Versions at higher resolution are available as well. We hope that HunCRC, this widely accessible dataset, will aid future colorectal cancer computer-aided diagnosis and research.
Expert tumor annotations and radiomics for locally advanced breast cancer in DCE-MRI for ACRIN 6657/I-SPY1
Chitalia, R.
Pati, S.
Bhalerao, M.
Thakur, S. P.
Jahani, N.
Belenky, V.
McDonald, E. S.
Gibbs, J.
Newitt, D. C.
Hylton, N. M.
Kontos, D.
Bakas, S.
Sci Data2022Journal Article, cited 0 times
Website
ISPY1-Tumor-SEG-Radiomics
ISPY1
Algorithm Development
Radiogenomics
BREAST
IBSI
CaPTk
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Breast cancer is one of the most pervasive forms of cancer and its inherent intra- and inter-tumor heterogeneity contributes towards its poor prognosis. Multiple studies have reported results from either private institutional data or publicly available datasets. However, current public datasets are limited in terms of having consistency in: a) data quality, b) quality of expert annotation of pathology, and c) availability of baseline results from computational algorithms. To address these limitations, here we propose the enhancement of the I-SPY1 data collection, with uniformly curated data, tumor annotations, and quantitative imaging features. Specifically, the proposed dataset includes a) uniformly processed scans that are harmonized to match intensity and spatial characteristics, facilitating immediate use in computational studies, b) computationally-generated and manually-revised expert annotations of tumor regions, as well as c) a comprehensive set of quantitative imaging (also known as radiomic) features corresponding to the tumor regions. This collection describes our contribution towards repeatable, reproducible, and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments.
The University of Pennsylvania glioblastoma (UPenn-GBM) cohort: advanced MRI, clinical, genomics, & radiomics
Bakas, S.
Sako, C.
Akbari, H.
Bilello, M.
Sotiras, A.
Shukla, G.
Rudie, J. D.
Santamaria, N. F.
Kazerooni, A. F.
Pati, S.
Rathore, S.
Mamourian, E.
Ha, S. M.
Parker, W.
Doshi, J.
Baid, U.
Bergman, M.
Binder, Z. A.
Verma, R.
Lustig, R. A.
Desai, A. S.
Bagley, S. J.
Mourelatos, Z.
Morrissette, J.
Watt, C. D.
Brem, S.
Wolf, R. L.
Melhem, E. R.
Nasrallah, M. P.
Mohan, S.
O'Rourke, D. M.
Davatzikos, C.
Sci Data2022Journal Article, cited 0 times
UPENN-GBM
Magnetic Resonance Imaging (MRI)
radiomics
Genomics
MRI
Glioblastoma is the most common aggressive adult brain tumor. Numerous studies have reported results from either private institutional data or publicly available datasets. However, current public datasets are limited in terms of: a) the number of subjects, b) the lack of a consistent acquisition protocol, c) data quality, or d) accompanying clinical, demographic, and molecular information. Toward alleviating these limitations, we contribute the "University of Pennsylvania Glioblastoma Imaging, Genomics, and Radiomics" (UPenn-GBM) dataset, which describes the currently largest publicly available comprehensive collection of 630 patients diagnosed with de novo glioblastoma. The UPenn-GBM dataset includes (a) advanced multi-parametric magnetic resonance imaging scans acquired during routine clinical practice, at the University of Pennsylvania Health System, (b) accompanying clinical, demographic, and molecular information, (c) perfusion and diffusion derivative volumes, (d) computationally-derived and manually-revised expert annotations of tumor sub-regions, as well as (e) quantitative imaging (also known as radiomic) features corresponding to each of these regions. This collection describes our contribution towards repeatable, reproducible, and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments.
Muscle and adipose tissue segmentations at the third cervical vertebral level in patients with head and neck cancer
Wahid, K. A.
Olson, B.
Jain, R.
Grossberg, A. J.
El-Habashy, D.
Dede, C.
Salama, V.
Abobakr, M.
Mohamed, A. S. R.
He, R.
Jaskari, J.
Sahlsten, J.
Kaski, K.
Fuller, C. D.
Naser, M. A.
Sci Data2022Journal Article, cited 0 times
Website
HNSCC
Radiomics outcome prediction in Oropharyngeal cancer
Adipose Tissue/diagnostic imaging
*Head and Neck Neoplasms/diagnostic imaging
Humans
Muscle, Skeletal/diagnostic imaging
Retrospective Studies
*Sarcopenia/diagnostic imaging/pathology
The accurate determination of sarcopenia is critical for disease management in patients with head and neck cancer (HNC). Quantitative determination of sarcopenia is currently dependent on manually-generated segmentations of skeletal muscle derived from computed tomography (CT) cross-sectional imaging. This has prompted the increasing utilization of machine learning models for automated sarcopenia determination. However, extant datasets currently do not provide the necessary manually-generated skeletal muscle segmentations at the C3 vertebral level needed for building these models. In this data descriptor, a set of 394 HNC patients were selected from The Cancer Imaging Archive, and their skeletal muscle and adipose tissue was manually segmented at the C3 vertebral level using sliceOmatic. Subsequently, using publicly disseminated Python scripts, we generated corresponding segmentations files in Neuroimaging Informatics Technology Initiative format. In addition to segmentation data, additional clinical demographic data germane to body composition analysis have been retrospectively collected for these patients. These data are a valuable resource for studying sarcopenia and body composition analysis in patients with HNC.
Pan-tumor CAnine cuTaneous Cancer Histology (CATCH) dataset
Wilm, Frauke
Fragoso, Marco
Marzahl, Christian
Qiu, Jingna
Puget, Chloé
Diehl, Laura
Bertram, Christof A.
Klopfleisch, Robert
Maier, Andreas
Breininger, Katharina
Aubreville, Marc
Scientific Data2022Journal Article, cited 0 times
CATCH
Due to morphological similarities, the differentiation of histologic sections of cutaneous tumors into individual subtypes can be challenging. Recently, deep learning-based approaches have proven their potential for supporting pathologists in this regard. However, many of these supervised algorithms require a large amount of annotated data for robust development. We present a publicly available dataset of 350 whole slide images of seven different canine cutaneous tumors complemented by 12,424 polygon annotations for 13 histologic classes, including seven cutaneous tumor subtypes. In inter-rater experiments, we show a high consistency of the provided labels, especially for tumor annotations. We further validate the dataset by training a deep neural network for the task of tissue segmentation and tumor subtype classification. We achieve a class-averaged Jaccard coefficient of 0.7047, and 0.9044 for tumor in particular. For classification, we achieve a slide-level accuracy of 0.9857. Since canine cutaneous tumors possess various histologic homologies to human tumors, the added value of this dataset is not limited to veterinary pathology but extends to more general fields of application.
A whole-body FDG-PET/CT Dataset with manually annotated Tumor Lesions
Gatidis, Sergios
Hepp, Tobias
Früh, Marcel
La Fougère, Christian
Nikolaou, Konstantin
Pfannenberg, Christina
Schölkopf, Bernhard
Küstner, Thomas
Cyran, Clemens
Rubin, Daniel
Scientific Data2022Journal Article, cited 0 times
FDG-PET-CT-Lesions
Head-Neck-PET-CT
Lung-PET-CT-Dx
We describe a publicly available dataset of annotated Positron Emission Tomography/Computed Tomography (PET/CT) studies. A total of 1014 whole-body Fluorodeoxyglucose (FDG)-PET/CT datasets (501 studies of patients with malignant lymphoma, melanoma and non-small cell lung cancer (NSCLC), and 513 studies without PET-positive malignant lesions (negative controls)) acquired between 2014 and 2018 were included. All examinations were acquired on a single, state-of-the-art PET/CT scanner. The imaging protocol consisted of a whole-body FDG-PET acquisition and a corresponding diagnostic CT scan. All FDG-avid lesions identified as malignant based on the clinical PET/CT report were manually segmented on PET images in a slice-per-slice (3D) manner. We provide the anonymized original DICOM files of all studies as well as the corresponding DICOM segmentation masks. In addition, we provide scripts for image processing and conversion to different file formats (NIfTI, mha, hdf5). Primary diagnosis, age and sex are provided as non-imaging information. We demonstrate how this dataset can be used for deep learning-based automated analysis of PET/CT data and provide the trained deep learning model.
CT and cone-beam CT of ablative radiation therapy for pancreatic cancer with expert organ-at-risk contours
Hong, Jun
Reyngold, Marsha
Crane, Christopher
Cuaron, John
Hajj, Carla
Mann, Justin
Zinovoy, Melissa
Yorke, Ellen
LoCastro, Eve
Apte, Aditya P.
Mageras, Gig
Scientific Data2022Journal Article, cited 0 times
Pancreatic-CT-CBCT-SEG
We describe a dataset from patients who received ablative radiation therapy for locally advanced pancreatic cancer (LAPC), consisting of computed tomography (CT) and cone-beam CT (CBCT) images with physician-drawn organ-at-risk (OAR) contours. The image datasets (one CT for treatment planning and two CBCT scans at the time of treatment per patient) were collected from 40 patients. All scans were acquired with the patient in the treatment position and in a deep inspiration breath-hold state. Six radiation oncologists delineated the gastrointestinal OARs consisting of small bowel, stomach and duodenum, such that the same physician delineated all image sets belonging to the same patient. Two trained medical physicists further edited the contours to ensure adherence to delineation guidelines. The image and contour files are available in DICOM format and are publicly available from The Cancer Imaging Archive (https://doi.org/10.7937/TCIA.ESHQ-4D90, Version 2). The dataset can serve as a criterion standard for evaluating the accuracy and reliability of deformable image registration and auto-segmentation algorithms, as well as a training set for deep-learning-based methods.
The LUMIERE dataset: Longitudinal Glioblastoma MRI with expert RANO evaluation
Suter, Yannick
Knecht, Urspeter
Valenzuela, Waldo
Notter, Michelle
Hewer, Ekkehard
Schucht, Philippe
Wiest, Roland
Reyes, Mauricio
Scientific Data2022Journal Article, cited 0 times
BraTS-TCGA-GBM
QIN GBM Treatment Response
Publicly available Glioblastoma (GBM) datasets predominantly include pre-operative Magnetic Resonance Imaging (MRI) or contain few follow-up images for each patient. Access to fully longitudinal datasets is critical to advance the refinement of treatment response assessment. We release a single-center longitudinal GBM MRI dataset with expert ratings of selected follow-up studies according to the response assessment in neuro-oncology criteria (RANO). The expert rating includes details about the rationale of the ratings. For a subset of patients, we provide pathology information regarding methylation of the O6-methylguanine-DNA methyltransferase (MGMT) promoter status and isocitrate dehydrogenase 1 (IDH1), as well as the overall survival time. The data includes T1-weighted pre- and post-contrast, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) MRI. Segmentations from state-of-the-art automated segmentation tools, as well as radiomic features, complement the data. Possible applications of this dataset are radiomics research, the development and validation of automated segmentation methods, and studies on response assessment. This collection includes MRI data of 91 GBM patients with a total of 638 study dates and 2487 images.
Multimodality annotated hepatocellular carcinoma data set including pre- and post-TACE with imaging segmentation
Moawad, Ahmed W.
Morshid, Ali
Khalaf, Ahmed M.
Elmohr, Mohab M.
Hazle, John D.
Fuentes, David
Badawy, Mohamed
Kaseb, Ahmed O.
Hassan, Manal
Mahvash, Armeen
Szklaruk, Janio
Qayyum, Aliyya
Abusaif, Abdelrahman
Bennett, William C.
Nolan, Tracy S.
Camp, Brittney
Elsayes, Khaled M.
Scientific Data2023Journal Article, cited 0 times
HCC-TACE-Seg
LIVER
Semi-automatic segmentation
Computed Tomography (CT)
An Online Mammography Database with Biopsy Confirmed Types
Cai, Hongmin
Wang, Jinhua
Dan, Tingting
Li, Jiao
Fan, Zhihao
Yi, Weiting
Cui, Chunyan
Jiang, Xinhua
Li, Li
Scientific Data2023Journal Article, cited 1 times
Website
CMMD
Mammography
breast cancer
Breast carcinoma is the second most common cancer among women worldwide. Early detection of breast cancer has been shown to increase the survival rate, thereby significantly increasing patients' lifespan. Mammography, a noninvasive imaging tool with low cost, is widely used to diagnose breast disease at an early stage due to its high sensitivity. Although some public mammography datasets are useful, there is still a lack of open-access datasets that extend beyond the white population, as well as datasets with biopsy confirmation and known molecular subtypes. To fill this gap, we built a database comprising two online breast mammography sets. The dataset, named the Chinese Mammography Database (CMMD), contains 3712 mammograms from 1775 patients and is divided into two branches. The first branch, CMMD1, contains 1026 cases (2214 mammograms) with biopsy-confirmed benign or malignant tumors. The second branch, CMMD2, includes 1498 mammograms from 749 patients with known molecular subtypes. Our database is constructed to enrich the diversity of mammography data and promote the development of relevant fields.
VinDr-Mammo: A large-scale benchmark dataset for computer-aided diagnosis in full-field digital mammography
Nguyen, Hieu T.
Nguyen, Ha Q.
Pham, Hieu H.
Lam, Khanh
Le, Linh T.
Dao, Minh
Vu, Van
Scientific Data2023Journal Article, cited 0 times
CMMD
Mammography, or breast X-ray imaging, is the most widely used imaging modality to detect cancer and other breast diseases. Recent studies have shown that deep learning-based computer-assisted detection and diagnosis (CADe/x) tools have been developed to support physicians and improve the accuracy of interpreting mammography. A number of large-scale mammography datasets from different populations with various associated annotations and clinical data have been introduced to study the potential of learning-based methods in the field of breast radiology. With the aim to develop more robust and more interpretable support systems in breast imaging, we introduce VinDr-Mammo, a Vietnamese dataset of digital mammography with breast-level assessment and extensive lesion-level annotations, enhancing the diversity of the publicly available mammography data. The dataset consists of 5,000 mammography exams, each of which has four standard views and is double read with disagreement (if any) being resolved by arbitration. The purpose of this dataset is to assess Breast Imaging Reporting and Data System (BI-RADS) and breast density at the individual breast level. In addition, the dataset also provides the category, location, and BI-RADS assessment of non-benign findings. We make VinDr-Mammo publicly available as a new imaging resource to promote advances in developing CADe/x tools for mammography interpretation.
A review of the machine learning datasets in mammography, their adherence to the FAIR principles and the outlook for the future
Logan, Joe
Kennedy, Paul J.
Catchpoole, Daniel
Scientific Data2023Journal Article, cited 0 times
CBIS-DDSM
The increasing rates of breast cancer, particularly in emerging economies, have led to interest in scalable deep learning-based solutions that improve the accuracy and cost-effectiveness of mammographic screening. However, such tools require large volumes of high-quality training data, which can be challenging to obtain. This paper combines the experience of an AI startup with an analysis of eight available datasets against the FAIR principles. It demonstrates that the datasets vary considerably, particularly in their interoperability, as each dataset is skewed towards a particular clinical use case. Additionally, the mix of digital captures and scanned film compounds the problem of variability, along with differences in licensing terms, ease of access, labelling reliability, and file formats. Improving interoperability through adherence to standards such as the BIRADS criteria for labelling and annotation, and a consistent file format, could markedly improve access to and use of larger amounts of standardized data. This, in turn, could be increased further by GAN-based synthetic data generation, paving the way towards better health outcomes for breast cancer.
A brain MRI dataset and baseline evaluations for tumor recurrence prediction after Gamma Knife radiotherapy
Wang, Yibin
Duggar, William Neil
Caballero, David Michael
Thomas, Toms Vengaloor
Adari, Neha
Mundra, Eswara Kumar
Wang, Haifeng
Scientific Data2023Journal Article, cited 0 times
Brain-TR-GammaKnife
Prediction and identification of tumor recurrence are critical for brain cancer treatment design and planning. Stereotactic radiation therapy delivered with Gamma Knife has been developed as a common treatment approach, often combined with others, that delivers radiation targeted accurately at the tumor while sparing nearby healthy tissue. In this paper, we release a fully publicly available brain cancer MRI dataset and the companion Gamma Knife treatment planning and follow-up data for the purpose of tumor recurrence prediction. The dataset contains original patient MRI images, radiation therapy data, and clinical information. Lesion annotations are provided, and inclusive preprocessing steps have been specified to simplify the usage of this dataset. A baseline framework based on a convolutional neural network is proposed, along with basic evaluations. The release of this dataset will contribute to the future development of automated brain tumor recurrence prediction algorithms and promote clinical implementations associated with the computer vision field. The dataset is made publicly available on The Cancer Imaging Archive (TCIA) (https://doi.org/10.7937/xb6d-py67).
Preoperative CT and survival data for patients undergoing resection of colorectal liver metastases
Simpson, A. L.
Peoples, J.
Creasy, J. M.
Fichtinger, G.
Gangai, N.
Keshavamurthy, K. N.
Lasso, A.
Shia, J.
D'Angelica, M. I.
Do, R. K. G.
Sci Data2024Journal Article, cited 0 times
Website
Colorectal-Liver-Metastases
Humans
*Colorectal Neoplasms/pathology
Hepatectomy/adverse effects
*Liver Neoplasms/secondary
Tomography, X-Ray Computed
The liver is a common site for the development of metastases in colorectal cancer. Treatment selection for patients with colorectal liver metastases (CRLM) is difficult; although hepatic resection will cure a minority of CRLM patients, recurrence is common. Reliable preoperative prediction of recurrence could therefore be a valuable tool for physicians in selecting the best candidates for hepatic resection in the treatment of CRLM. It has been hypothesized that evidence for recurrence could be found via quantitative image analysis on preoperative CT imaging of the future liver remnant before resection. To investigate this hypothesis, we have collected preoperative hepatic CT scans, clinicopathologic data, and recurrence/survival data, from a large, single-institution series of patients (n = 197) who underwent hepatic resection of CRLM. For each patient, we also created segmentations of the liver, vessels, tumors, and future liver remnant. The largest of its kind, this dataset is a resource that may aid in the development of quantitative imaging biomarkers and machine learning models for the prediction of post-resection hepatic recurrence of CRLM.
Curated benchmark dataset for ultrasound based breast lesion analysis
Pawłowska, Anna
Ćwierz-Pieńkowska, Anna
Domalik, Agnieszka
Jaguś, Dominika
Kasprzak, Piotr
Matkowski, Rafał
Fura, Łukasz
Nowicki, Andrzej
Żołek, Norbert
Scientific Data2024Journal Article, cited 0 times
Breast-Lesions-USG
Breast
Ultrasonography
A new detailed dataset of breast ultrasound scans (BrEaST) containing images of benign and malignant lesions as well as normal tissue examples, is presented. The dataset consists of 256 breast scans collected from 256 patients. Each scan was manually annotated and labeled by a radiologist experienced in breast ultrasound examination. In particular, each tumor was identified in the image using a freehand annotation and labeled according to BIRADS features and lexicon. The histopathological classification of the tumor was also provided for patients who underwent a biopsy. The BrEaST dataset is the first breast ultrasound dataset containing patient-level labels, image-level annotations, and tumor-level labels with all cases confirmed by follow-up care or core needle biopsy result. To enable research into breast disease detection, tumor segmentation and classification, the BrEaST dataset is made publicly available with the CC-BY 4.0 license.
A large open access dataset of brain metastasis 3D segmentations on MRI with clinical and imaging information
Ramakrishnan, Divya
Jekel, Leon
Chadha, Saahil
Janas, Anastasia
Moy, Harrison
Maleki, Nazanin
Sala, Matthew
Kaur, Manpreet
Petersen, Gabriel Cassinelli
Merkaj, Sara
von Reppert, Marc
Baid, Ujjwal
Bakas, Spyridon
Kirsch, Claudia
Davis, Melissa
Bousabarah, Khaled
Holler, Wolfgang
Lin, MingDe
Westerhoff, Malte
Aneja, Sanjay
Memon, Fatima
Aboian, Mariam S.
Scientific Data2024Journal Article, cited 0 times
Pretreat-MetsToBrain-Masks
Artificial Intelligence
Magnetic Resonance Imaging
Radiotherapy
Resection and whole brain radiotherapy (WBRT) are standard treatments for brain metastases (BM) but are associated with cognitive side effects. Stereotactic radiosurgery (SRS) uses a targeted approach with fewer side effects than WBRT. SRS requires precise identification and delineation of BM. While artificial intelligence (AI) algorithms have been developed for this, their clinical adoption is limited due to poor model performance in the clinical setting. The limitations of algorithms are often due to the quality of datasets used for training the AI network. The purpose of this study was to create a large, heterogeneous, annotated BM dataset for training and validation of AI models. We present a BM dataset of 200 patients with pretreatment T1, T1 post-contrast, T2, and FLAIR MR images. The dataset includes contrast-enhancing and necrotic 3D segmentations on T1 post-contrast and peritumoral edema 3D segmentations on FLAIR. Our dataset contains 975 contrast-enhancing lesions, many of which are sub-centimeter, along with clinical and imaging information. We used a streamlined approach to database-building through a PACS-integrated segmentation workflow.
Annotated test-retest dataset of lung cancer CT scan images reconstructed at multiple imaging parameters
Zhao, B.
Dercle, L.
Yang, H.
Riely, G. J.
Kris, M. G.
Schwartz, L. H.
Sci Data2024Journal Article, cited 0 times
Website
RIDER Lung CT
*Lung Neoplasms/diagnostic imaging
Humans
*Tomography, X-Ray Computed
*Carcinoma, Non-Small-Cell Lung/diagnostic imaging
Image Processing, Computer-Assisted
Machine Learning
Quantitative imaging biomarkers (QIB) are increasingly used in clinical research to advance precision medicine approaches in oncology. Computed tomography (CT) is a modality of choice for cancer diagnosis, prognosis, and response assessment due to its reliability and global accessibility. Here, we contribute to the cancer imaging community through The Cancer Imaging Archive (TCIA) by providing investigator-initiated, same-day repeat CT scan images of 32 non-small cell lung cancer (NSCLC) patients, along with radiologist-annotated lesion contours as a reference standard. Each scan was reconstructed into 6 image settings using various combinations of three slice thicknesses (1.25 mm, 2.5 mm, 5 mm) and two reconstruction kernels (lung, standard; GE CT equipment), which spans a wide range of CT imaging reconstruction parameters commonly used in lung cancer clinical practice and clinical trials. This holds considerable value for advancing the development of robust Radiomics, Artificial Intelligence (AI) and machine learning (ML) methods.
Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer
Vallières, Martin
Kay-Rivest, Emily
Perrin, Léo Jean
Liem, Xavier
Furstoss, Christophe
Aerts, Hugo JWL
Khaouam, Nader
Nguyen-Tan, Phuc Felix
Wang, Chang-Shu
Sultanem, Khalil
arXiv preprint arXiv:1703.08516 2017Journal Article, cited 32 times
Website
Radiomics
HEAD AND NECK
Quantitative extraction of high-dimensional mineable data from medical images is a process known as radiomics. Radiomics is foreseen as an essential prognostic tool for cancer risk assessment and the quantification of intratumoural heterogeneity. In this work, 1615 radiomic features (quantifying tumour image intensity, shape, texture) extracted from pre-treatment FDG-PET and CT images of 300 patients from four different cohorts were analyzed for the risk assessment of locoregional recurrences (LR) and distant metastases (DM) in head-and-neck cancer. Prediction models combining radiomic and clinical variables were constructed via random forests and imbalance-adjustment strategies using two of the four cohorts. Independent validation of the prediction and prognostic performance of the models was carried out on the other two cohorts (LR: AUC = 0.69 and CI = 0.67; DM: AUC = 0.86 and CI = 0.88). Furthermore, the results obtained via Kaplan-Meier analysis demonstrated the potential of radiomics for assessing the risk of specific tumour outcomes using multiple stratification groups. This could have important clinical impact, notably by allowing for a better personalization of chemo-radiation treatments for head-and-neck cancer patients from different risk groups.
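As a rough illustration of the imbalance-adjusted random-forest modelling the abstract describes, the following Python sketch uses scikit-learn with synthetic placeholder features; the class_weight setting stands in for the paper's imbalance-adjustment strategies, which may differ.

    # Sketch: imbalance-adjusted random forest for recurrence-risk prediction.
    # X_*/y_* are hypothetical radiomic + clinical feature matrices and labels.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 40))    # placeholder features
    y_train = rng.integers(0, 2, size=200)  # placeholder outcome (e.g., LR)
    X_val = rng.normal(size=(100, 40))
    y_val = rng.integers(0, 2, size=100)

    # class_weight="balanced" is one common imbalance adjustment; the paper's
    # own strategies (e.g., ensemble resampling) may differ.
    clf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                                 random_state=0)
    clf.fit(X_train, y_train)
    print("validation AUC:", roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1]))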
A Deep Learning-Based Radiomics Model for Prediction of Survival in Glioblastoma Multiforme
Lao, Jiangwei
Chen, Yinsheng
Li, Zhi-Cheng
Li, Qihua
Zhang, Ji
Liu, Jing
Zhai, Guangtao
Scientific Reports2017Journal Article, cited 32 times
Website
TCGA-GBM
Radiomics
Glioblastoma Multiforme (GBM)
Deep learning
Traditional radiomics models mainly rely on explicitly-designed handcrafted features from medical images. This paper aimed to investigate if deep features extracted via transfer learning can generate radiomics signatures for prediction of overall survival (OS) in patients with Glioblastoma Multiforme (GBM). This study comprised a discovery data set of 75 patients and an independent validation data set of 37 patients. A total of 1403 handcrafted features and 98304 deep features were extracted from preoperative multi-modality MR images. After feature selection, a six-deep-feature signature was constructed by using the least absolute shrinkage and selection operator (LASSO) Cox regression model. A radiomics nomogram was further presented by combining the signature and clinical risk factors such as age and Karnofsky Performance Score. Compared with traditional risk factors, the proposed signature achieved better performance for prediction of OS (C-index = 0.710, 95% CI: 0.588, 0.932) and significant stratification of patients into prognostically distinct groups (P < 0.001, HR = 5.128, 95% CI: 2.029, 12.960). The combined model achieved improved predictive performance (C-index = 0.739). Our study demonstrates that transfer learning-based deep features are able to generate prognostic imaging signature for OS prediction and patient stratification for GBM, indicating the potential of deep imaging feature-based biomarker in preoperative care of GBM patients.
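The LASSO Cox step described above can be sketched with the lifelines library; the feature names, penalty strength, and synthetic survival data below are illustrative assumptions, not the paper's values.

    # Sketch: LASSO-penalized Cox regression over deep features.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    df = pd.DataFrame(rng.normal(size=(75, 6)),
                      columns=[f"deep_feat_{i}" for i in range(6)])
    df["OS_months"] = rng.exponential(20, size=75)  # placeholder survival times
    df["event"] = rng.integers(0, 2, size=75)       # 1 = death observed

    # l1_ratio=1.0 makes the elastic-net penalty pure LASSO.
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
    cph.fit(df, duration_col="OS_months", event_col="event")
    print("C-index:", cph.concordance_index_)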
A Fully-Automatic Multiparametric Radiomics Model: Towards Reproducible and Prognostic Imaging Signature for Prediction of Overall Survival in Glioblastoma Multiforme
Li, Qihua
Bai, Hongmin
Chen, Yinsheng
Sun, Qiuchang
Liu, Lei
Zhou, Sijie
Wang, Guoliang
Liang, Chaofeng
Li, Zhi-Cheng
Scientific Reports2017Journal Article, cited 9 times
Website
Radiomics
GBM
Radiogenomic analysis of hypoxia pathway is predictive of overall survival in Glioblastoma
Beig, N.
Patel, J.
Prasanna, P.
Hill, V.
Gupta, A.
Correa, R.
Bera, K.
Singh, S.
Partovi, S.
Varadan, V.
Ahluwalia, M.
Madabhushi, A.
Tiwari, P.
Scientific Reports2018Journal Article, cited 5 times
Website
TCGA-GBM
Radiomics
Segmentation
Texture features
Hypoxia, a characteristic trait of Glioblastoma (GBM), is known to cause resistance to chemo-radiation treatment and is linked with poor survival. There is hence an urgent need to non-invasively characterize tumor hypoxia to improve GBM management. We hypothesized that (a) radiomic texture descriptors can capture tumor heterogeneity manifested as a result of molecular variations in tumor hypoxia, on routine treatment naive MRI, and (b) these imaging based texture surrogate markers of hypoxia can discriminate GBM patients as short-term (STS), mid-term (MTS), and long-term survivors (LTS). 115 studies (33 STS, 41 MTS, 41 LTS) with gadolinium-enhanced T1-weighted MRI (Gd-T1w) and T2-weighted (T2w) and FLAIR MRI protocols and the corresponding RNA sequences were obtained. After expert segmentation of necrotic, enhancing, and edematous/nonenhancing tumor regions for every study, 30 radiomic texture descriptors were extracted from every region across every MRI protocol. Using the expression profile of 21 hypoxia-associated genes, a hypoxia enrichment score (HES) was obtained for the training cohort of 85 cases. Mutual information score was used to identify a subset of radiomic features that were most informative of HES within 3-fold cross-validation to categorize studies as STS, MTS, and LTS. When validated on an additional cohort of 30 studies (11 STS, 9 MTS, 10 LTS), our results revealed that the most discriminative features of HES were also able to distinguish STS from LTS (p = 0.003).
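The mutual-information ranking of texture features against a continuous hypoxia enrichment score could look like the following scikit-learn sketch; all arrays are synthetic stand-ins.

    # Sketch: rank radiomic texture features by mutual information with HES.
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(2)
    X = rng.normal(size=(85, 30))               # 30 texture features per study
    hes = X[:, 0] * 0.8 + rng.normal(size=85)   # placeholder enrichment score

    mi = mutual_info_regression(X, hes, random_state=0)
    top = np.argsort(mi)[::-1][:5]              # keep the 5 most informative
    print("top feature indices:", top)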
Quantification of glioblastoma mass effect by lateral ventricle displacement
Mass effect has demonstrated prognostic significance for glioblastoma, but is poorly quantified. Here we define and characterize a novel neuroimaging parameter, lateral ventricle displacement (LVd), which quantifies mass effect in glioblastoma patients. LVd is defined as the magnitude of displacement from the center of mass of the lateral ventricle volume in glioblastoma patients relative to that of a normal reference brain. Pre-operative MR images from 214 glioblastoma patients from The Cancer Imaging Archive (TCIA) were segmented using iterative probabilistic voxel labeling (IPVL). LVd, contrast enhancing volumes (CEV) and FLAIR hyper-intensity volumes (FHV) were determined. Associations with patient survival and tumor genomics were investigated using data from The Cancer Genome Atlas (TCGA). Glioblastoma patients had significantly higher LVd relative to patients without brain tumors. The variance of LVd was not explained by tumor volume, as defined by CEV or FLAIR. LVd was robustly associated with glioblastoma survival in Cox models which accounted for both age and Karnofsky Performance Scale (KPS) (p = 0.006). Glioblastomas with higher LVd demonstrated increased expression of genes associated with tumor proliferation and decreased expression of genes associated with tumor invasion. Our results suggest LVd is a quantitative measure of glioblastoma mass effect and a prognostic imaging biomarker.
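A minimal sketch of an LVd-style measurement, assuming binary ventricle masks already registered to a common space; the mask shapes and values are toy placeholders.

    # Sketch: displacement between ventricle centers of mass (patient vs reference).
    import numpy as np
    from scipy.ndimage import center_of_mass

    patient_mask = np.zeros((64, 64, 64)); patient_mask[30:40, 28:38, 30:40] = 1
    reference_mask = np.zeros((64, 64, 64)); reference_mask[28:38, 30:40, 30:40] = 1

    lvd_voxels = np.linalg.norm(np.subtract(center_of_mass(patient_mask),
                                            center_of_mass(reference_mask)))
    print("LVd (voxels):", lvd_voxels)  # multiply by voxel size for mm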
Highly accurate model for prediction of lung nodule malignancy with CT scans
Causey, Jason L
Zhang, Junyu
Ma, Shiqian
Jiang, Bo
Qualls, Jake A
Politte, David G
Prior, Fred
Zhang, Shuzhong
Huang, Xiuzhen
Scientific Reports2018Journal Article, cited 5 times
Website
LIDC-IDRI
Radiomics
LUNG
Classification
Convolutional Neural Network (CNN)
Computed tomography (CT) examinations are commonly used to predict lung nodule malignancy in patients, which are shown to improve noninvasive early diagnosis of lung cancer. It remains challenging for computational approaches to achieve performance comparable to experienced radiologists. Here we present NoduleX, a systematic approach to predict lung nodule malignancy from CT data, based on deep learning convolutional neural networks (CNN). For training and validation, we analyze >1000 lung nodules in images from the LIDC/IDRI cohort. All nodules were identified and classified by four experienced thoracic radiologists who participated in the LIDC project. NoduleX achieves high accuracy for nodule malignancy classification, with an AUC of ~0.99. This is commensurate with the analysis of the dataset by experienced radiologists. Our approach, NoduleX, provides an effective framework for highly accurate nodule malignancy prediction with the model trained on a large patient population. Our results are replicable with software available at http://bioinformatics.astate.edu/NoduleX .
Assessing robustness of radiomic features by image perturbation
Image features need to be robust against differences in positioning, acquisition and segmentation to ensure reproducibility. Radiomic models that only include robust features can be used to analyse new images, whereas models with non-robust features may fail to predict the outcome of interest accurately. Test-retest imaging is recommended to assess robustness, but may not be available for the phenotype of interest. We therefore investigated 18 combinations of image perturbations to determine feature robustness, based on noise addition (N), translation (T), rotation (R), volume growth/shrinkage (V) and supervoxel-based contour randomisation (C). Test-retest and perturbation robustness were compared for a combined total of 4032 morphological, statistical and texture features that were computed from the gross tumour volume in two cohorts with computed tomography imaging: I) 31 non-small-cell lung cancer (NSCLC) patients; II) 19 head-and-neck squamous cell carcinoma (HNSCC) patients. Robustness was determined using the 95% confidence interval (CI) of the intraclass correlation coefficient ICC(1,1). Features with CI ≥ 0.90 were considered robust. The NTCV, TCV, RNCV and RCV perturbation chains produced similar results and identified the fewest false positive robust features (NSCLC: 0.2-0.9%; HNSCC: 1.7-1.9%). Thus, these perturbation chains may be used as an alternative to test-retest imaging to assess feature robustness.
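The ICC(1,1) criterion can be computed from a feature measured across perturbation chains; below is a minimal NumPy sketch of the one-way random-effects formula with synthetic data (the study's criterion additionally uses the CI lower bound).

    # Sketch: ICC(1,1) for one feature across k perturbations of n images.
    import numpy as np

    def icc_1_1(values):
        # values: (n_images, k_perturbations) matrix of one radiomic feature
        n, k = values.shape
        grand = values.mean()
        row_means = values.mean(axis=1)
        msb = k * ((row_means - grand) ** 2).sum() / (n - 1)          # between images
        msw = ((values - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within
        return (msb - msw) / (msb + (k - 1) * msw)

    rng = np.random.default_rng(3)
    signal = rng.normal(size=(31, 1))                    # stable per-image value
    feature = signal + 0.1 * rng.normal(size=(31, 18))   # 18 perturbation chains
    print("ICC(1,1):", icc_1_1(feature))  # robust if the CI lower bound >= 0.90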
Revealing Tumor Habitats from Texture Heterogeneity Analysis for Classification of Lung Cancer Malignancy and Aggressiveness
Cherezov, Dmitry
Goldgof, Dmitry
Hall, Lawrence
Gillies, Robert
Schabath, Matthew
Müller, Henning
Depeursinge, Adrien
Scientific Reports2019Journal Article, cited 0 times
Website
NLST
lung
LDCT
We propose an approach for characterizing the structural heterogeneity of lung cancer nodules using Computed Tomography Texture Analysis (CTTA). Measures of heterogeneity were used to test the hypothesis that heterogeneity can be used as a predictor of nodule malignancy and patient survival. To do this, we use the National Lung Screening Trial (NLST) dataset to determine if heterogeneity can represent differences between nodules in lung cancer and nodules in non-lung cancer patients. 253 participants are in the training set and 207 participants in the test set. To discriminate cancerous from non-cancerous nodules at the time of diagnosis, a combination of heterogeneity and radiomic features was evaluated, producing a best area under the receiver operating characteristic curve (AUROC) of 0.85 and an accuracy of 81.64%. Second, we tested the hypothesis that heterogeneity can predict patient survival. We analyzed 40 patients diagnosed with lung adenocarcinoma (20 short-term and 20 long-term survival patients) using a leave-one-out cross-validation approach for performance evaluation. A combination of heterogeneity features and radiomic features produces an AUROC of 0.9 and an accuracy of 85% in discriminating long- and short-term survivors.
Deep learning in head & neck cancer outcome prediction
Diamant, André
Chatterjee, Avishek
Vallières, Martin
Shenouda, George
Seuntjens, Jan
Sci Rep2019Journal Article, cited 0 times
Head-Neck-PET-CT
Convolutional Neural Network (CNN)
Radiomics
Deep learning
HEAD AND NECK
head and neck squamous cell carcinoma (HNSCC)
Segmentation
Traditional radiomics involves the extraction of quantitative texture features from medical images in an attempt to determine correlations with clinical endpoints. We hypothesize that convolutional neural networks (CNNs) could enhance the performance of traditional radiomics, by detecting image patterns that may not be covered by a traditional radiomic framework. We test this hypothesis by training a CNN to predict treatment outcomes of patients with head and neck squamous cell carcinoma, based solely on their pre-treatment computed tomography image. The training (194 patients) and validation sets (106 patients), which are mutually independent and include 4 institutions, come from The Cancer Imaging Archive. When compared to a traditional radiomic framework applied to the same patient cohort, our method results in an AUC of 0.88 in predicting distant metastasis. When combining our model with the previous model, the AUC improves to 0.92. Our framework yields models that are shown to explicitly recognize traditional radiomic features, can be directly visualized, and perform accurate outcome prediction.
Prognostic models based on imaging findings in glioblastoma: Human versus Machine
Molina-García, David
Vera-Ramírez, Luis
Pérez-Beteta, Julián
Arana, Estanislao
Pérez-García, Víctor M.
Scientific Reports2019Journal Article, cited 0 times
IvyGAP
REMBRANDT
TCGA-GBM
Glioblastoma
Machine Learning
MRI
Many studies have built machine-learning (ML)-based prognostic models for glioblastoma (GBM) based on radiological features. We wished to compare the predictive performance of these methods to human knowledge-based approaches. 404 GBM patients were included (311 discovery and 93 validation). 16 morphological and 28 textural descriptors were obtained from pretreatment volumetric postcontrast T1-weighted magnetic resonance images. Different prognostic ML methods were developed. An optimized linear prognostic model (OLPM) was also built using the four significant non-correlated parameters with individual prognosis value. OLPM achieved high prognostic value (validation c-index = 0.817) and outperformed ML models based on either the same parameter set or on the full set of 44 attributes considered. Neural networks with cross-validation-optimized attribute selection achieved comparable results (validation c-index = 0.825). ML models using only the four outstanding parameters obtained better results than their counterparts based on all the attributes, which presented overfitting. In conclusion, OLPM and ML methods studied here provided the most accurate survival predictors for glioblastoma to date, due to a combination of the strength of the methodology, the quality and volume of the data used and the careful attribute selection. The ML methods studied suffered overfitting and lost prognostic value when the number of parameters was increased.
Quantitative imaging features improve discrimination of malignancy in pulmonary nodules
Pulmonary nodules are frequently detected radiological abnormalities in lung cancer screening. Nodules of the highest and lowest risk for cancer are often easily diagnosed by a trained radiologist, but there is still a high rate of indeterminate pulmonary nodules (IPN) of unknown risk. Here, we test the hypothesis that computer-extracted quantitative features ("radiomics") can provide improved risk assessment in the diagnostic setting. Nodules were segmented in 3D and 219 quantitative features were extracted from these volumes. Using these features, novel malignancy risk predictors were formed with various stratifications based on size, shape and texture feature categories. We used images and data from the National Lung Screening Trial (NLST), curating a subset of 479 participants (244 for training and 235 for testing) that included incident lung cancers and nodule-positive controls. After removing redundant and non-reproducible features, optimal linear classifiers with areas under the receiver operating characteristic (AUROC) curve were used with an exhaustive search approach to find a discriminant set of image features, which were validated in an independent test dataset. We identified several strong predictive models: using size and shape features the highest AUROC was 0.80, using non-size-based features the highest AUROC was 0.85, and combining features from all the categories the highest AUROC was 0.83.
Radiomics based likelihood functions for cancer diagnosis
Radiomic-feature-based classifiers and neural networks have shown promising results in tumor classification. The classification performance can be further improved by exploring and incorporating discriminative features for cancer into mathematical models. In this research work, we developed two radiomics-driven likelihood models in Computed Tomography (CT) images to classify lung, colon, and head and neck cancer. Initially, two diagnostic radiomic signatures were derived by extracting 105 3-D features from 200 lung nodules and by selecting the features with higher average scores from several supervised as well as unsupervised feature ranking algorithms. The signatures obtained from both ranking approaches were integrated into two mathematical likelihood functions for tumor classification. Validation of the likelihood functions was performed on 265 public data sets of lung, colon, and head and neck cancer with a high classification rate. The achieved results show the robustness of the models and suggest that diagnostic mathematical functions using general tumor phenotype can be successfully developed for cancer diagnosis.
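A toy version of a likelihood-function classifier, assuming a one-dimensional radiomic signature score per case and per-class Gaussian likelihoods; the paper's actual likelihood models are derived differently.

    # Sketch: per-class Gaussian likelihoods over a 1-D signature score.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    benign = rng.normal(0.0, 1.0, size=100)     # signature scores, class 0
    malignant = rng.normal(1.5, 1.0, size=100)  # signature scores, class 1

    mu0, sd0 = benign.mean(), benign.std(ddof=1)
    mu1, sd1 = malignant.mean(), malignant.std(ddof=1)

    def classify(score):
        # assign the class with the higher likelihood
        return int(norm.pdf(score, mu1, sd1) > norm.pdf(score, mu0, sd0))

    print(classify(1.2))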
Repeatability of Multiparametric Prostate MRI Radiomics Features
Schwier, Michael
van Griethuysen, Joost
Vangel, Mark G
Pieper, Steve
Peled, Sharon
Tempany, Clare
Aerts, Hugo J W L
Kikinis, Ron
Fennessy, Fiona M
Fedorov, Andriy
Sci Rep2019Journal Article, cited 46 times
Website
QIN-PROSTATE-Repeatability
Imaging features
Radiomics
In this study we assessed the repeatability of radiomics features on small prostate tumors using test-retest Multiparametric Magnetic Resonance Imaging (mpMRI). The premise of radiomics is that quantitative image-based features can serve as biomarkers for detecting and characterizing disease. For such biomarkers to be useful, repeatability is a basic requirement, meaning its value must remain stable between two scans, if the conditions remain stable. We investigated repeatability of radiomics features under various preprocessing and extraction configurations including various image normalization schemes, different image pre-filtering, and different bin widths for image discretization. Although we found many radiomics features and preprocessing combinations with high repeatability (Intraclass Correlation Coefficient > 0.85), our results indicate that overall the repeatability is highly sensitive to the processing parameters. Neither image normalization, using a variety of approaches, nor the use of pre-filtering options resulted in consistent improvements in repeatability. We urge caution when interpreting radiomics features and advise paying close attention to the processing configuration details of reported results. Furthermore, we advocate reporting all processing details in radiomics studies and strongly recommend the use of open source implementations.
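Sweeps over preprocessing configurations of the kind the study examines can be scripted with pyradiomics; the image/mask paths and setting values below are placeholders, not the study's configuration.

    # Sketch: vary discretization bin width and pre-filtering in pyradiomics.
    from radiomics import featureextractor

    for bin_width in (10, 25, 50):
        extractor = featureextractor.RadiomicsFeatureExtractor(
            binWidth=bin_width,   # intensity discretization bin width
            normalize=True,       # z-score image normalization
        )
        extractor.enableImageTypeByName("Wavelet")  # one pre-filtering option
        features = extractor.execute("scan_test.nrrd", "tumor_mask.nrrd")
        # compare `features` across test-retest scans to estimate repeatability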
Prediction of Treatment Response to Neoadjuvant Chemotherapy for Breast Cancer via Early Changes in Tumor Heterogeneity Captured by DCE-MRI Registration
We analyzed DCE-MR images from 132 women with locally advanced breast cancer from the I-SPY1 trial to evaluate changes of intra-tumor heterogeneity for augmenting early prediction of pathologic complete response (pCR) and recurrence-free survival (RFS) after neoadjuvant chemotherapy (NAC). Utilizing image registration, voxel-wise changes including tumor deformations and changes in DCE-MRI kinetic features were computed to characterize heterogeneous changes within the tumor. Using five-fold cross-validation, logistic regression and Cox regression were performed to model pCR and RFS, respectively. The extracted imaging features were evaluated in augmenting established predictors, including functional tumor volume (FTV) and histopathologic and demographic factors, using the area under the curve (AUC) and the C-statistic as performance measures. The extracted voxel-wise features were also compared to analogous conventional aggregated features to evaluate the potential advantage of voxel-wise analysis. Voxel-wise features improved prediction of pCR (AUC = 0.78 (±0.03) vs 0.71 (±0.04), p < 0.05) and RFS (C-statistic = 0.76 (±0.05) vs 0.63 (±0.01), p < 0.05), while models based on analogous aggregate imaging features did not show appreciable performance changes (p > 0.05). Furthermore, all selected voxel-wise features demonstrated significant association with outcome (p < 0.05). Thus, precise measures of voxel-wise changes in tumor heterogeneity extracted from registered DCE-MRI scans can improve early prediction of neoadjuvant treatment outcomes in locally advanced breast cancer.
Automated and Manual Quantification of Tumour Cellularity in Digital Slides for Tumour Burden Assessment
The residual cancer burden index is an important quantitative measure used for assessing treatment response following neoadjuvant therapy for breast cancer. It has been shown to be predictive of overall survival and is composed of two key metrics: qualitative assessment of lymph nodes and the percentage of invasive or in situ tumour cellularity (TC) in the tumour bed (TB). Currently, TC is assessed by visual inspection of routine histopathology slides, estimating the proportion of tumour cells within the TB. With the advances in the production of digitized slides and the increasing availability of slide scanners in pathology laboratories, there is potential to measure TC using automated algorithms with greater precision and accuracy. We describe two methods for automated TC scoring: 1) a traditional approach to image analysis development whereby we mimic the pathologists' workflow, and 2) a recent development in artificial intelligence in which features are learned automatically in deep neural networks using image data alone. We show strong agreement between automated and manual analysis of digital slides. Agreement between our trained deep neural networks and experts in this study (0.82) approaches the inter-rater agreement between pathologists (0.89). We also reveal properties that are captured when we apply deep neural networks to whole slide images, and discuss the potential of using such visualisations to improve upon TC assessment in the future.
Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks
Sandfort, Veit
Yan, Ke
Pickhardt, Perry J
Summers, Ronald M
Sci Rep2019Journal Article, cited 0 times
Pancreas-CT
Labeled medical imaging data is scarce and expensive to generate. To achieve generalizable deep learning models large amounts of data are needed. Standard data augmentation is a method to increase generalizability and is routinely performed. Generative adversarial networks offer a novel method for data augmentation. We evaluate the use of CycleGAN for data augmentation in CT segmentation tasks. Using a large image database we trained a CycleGAN to transform contrast CT images into non-contrast images. We then used the trained CycleGAN to augment our training using these synthetic non-contrast images. We compared the segmentation performance of a U-Net trained on the original dataset compared to a U-Net trained on the combined dataset of original data and synthetic non-contrast images. We further evaluated the U-Net segmentation performance on two separate datasets: The original contrast CT dataset on which segmentations were created and a second dataset from a different hospital containing only non-contrast CTs. We refer to these 2 separate datasets as the in-distribution and out-of-distribution datasets, respectively. We show that in several CT segmentation tasks performance is improved significantly, especially in out-of-distribution (noncontrast CT) data. For example, when training the model with standard augmentation techniques, performance of segmentation of the kidneys on out-of-distribution non-contrast images was dramatically lower than for in-distribution data (Dice score of 0.09 vs. 0.94 for out-of-distribution vs. in-distribution data, respectively, p < 0.001). When the kidney model was trained with CycleGAN augmentation techniques, the out-of-distribution (non-contrast) performance increased dramatically (from a Dice score of 0.09 to 0.66, p < 0.001). Improvements for the liver and spleen were smaller, from 0.86 to 0.89 and 0.65 to 0.69, respectively. We believe this method will be valuable to medical imaging researchers to reduce manual segmentation effort and cost in CT imaging.
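The augmentation step alone can be sketched as mixing generator-translated images into the training list; translate_to_noncontrast is a hypothetical stand-in for a trained CycleGAN generator's inference call, and the file names are placeholders.

    # Sketch: augment a segmentation training set with CycleGAN-style translations.
    def translate_to_noncontrast(path):
        # placeholder: a real pipeline would run the trained CycleGAN here
        return path.replace("contrast", "synthetic_noncontrast")

    contrast_images = ["ct_contrast_001.nii", "ct_contrast_002.nii"]  # placeholders
    masks = ["kidney_mask_001.nii", "kidney_mask_002.nii"]

    synthetic_images = [translate_to_noncontrast(p) for p in contrast_images]

    # Masks are reused unchanged: the translation alters appearance (contrast
    # phase), not anatomy, so each synthetic image keeps its source segmentation.
    training_pairs = list(zip(contrast_images, masks)) + \
                     list(zip(synthetic_images, masks))
    print(training_pairs)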
Prediction of malignant glioma grades using contrast-enhanced T1-weighted and T2-weighted magnetic resonance images based on a radiomic analysis
Nakamoto, Takahiro
Takahashi, Wataru
Haga, Akihiro
Takahashi, Satoshi
Kiryu, Shigeru
Nawa, Kanabu
Ohta, Takeshi
Ozaki, Sho
Nozawa, Yuki
Tanaka, Shota
Mukasa, Akitake
Nakagawa, Keiichi
Sci Rep2019Journal Article, cited 0 times
Radiomics
BRAIN
TCGA-GBM
TCGA-LGG
We conducted a feasibility study to predict malignant glioma grades via radiomic analysis using contrast-enhanced T1-weighted magnetic resonance images (CE-T1WIs) and T2-weighted magnetic resonance images (T2WIs). We proposed a framework and applied it to CE-T1WIs and T2WIs (with tumor region data) acquired preoperatively from 157 patients with malignant glioma (grade III: 55, grade IV: 102) as the primary dataset and 67 patients with malignant glioma (grade III: 22, grade IV: 45) as the validation dataset. Radiomic features such as size/shape, intensity, histogram, and texture features were extracted from the tumor regions on the CE-T1WIs and T2WIs. The Wilcoxon-Mann-Whitney (WMW) test and least absolute shrinkage and selection operator logistic regression (LASSO-LR) were employed to select the radiomic features. Various machine learning (ML) algorithms were used to construct prediction models for the malignant glioma grades using the selected radiomic features. Leave-one-out cross-validation (LOOCV) was implemented to evaluate the performance of the prediction models in the primary dataset. The selected radiomic features for all folds in the LOOCV of the primary dataset were used to perform an independent validation. As evaluation indices, accuracies, sensitivities, specificities, and values for the area under receiver operating characteristic curve (or simply the area under the curve (AUC)) for all prediction models were calculated. The mean AUC value for all prediction models constructed by the ML algorithms in the LOOCV of the primary dataset was 0.902 +/- 0.024 (95% CI (confidence interval), 0.873-0.932). In the independent validation, the mean AUC value for all prediction models was 0.747 +/- 0.034 (95% CI, 0.705-0.790). The results of this study suggest that the malignant glioma grades could be sufficiently and easily predicted by preparing the CE-T1WIs, T2WIs, and tumor delineations for each patient. Our proposed framework may be an effective tool for preoperatively grading malignant gliomas.
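A minimal sketch of LASSO-penalized logistic regression evaluated with leave-one-out cross-validation, in the spirit of the pipeline above; the data, penalty strength, and feature counts are synthetic assumptions.

    # Sketch: LASSO-LR for grade III vs IV prediction with LOOCV.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)
    X = rng.normal(size=(157, 50))       # placeholder radiomic features
    y = rng.integers(0, 2, size=157)     # grade III (0) vs grade IV (1)

    probs = np.zeros(len(y))
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        model.fit(X[train_idx], y[train_idx])
        probs[test_idx] = model.predict_proba(X[test_idx])[:, 1]
    print("LOOCV AUC:", roc_auc_score(y, probs))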
Optimal statistical incorporation of independent feature stability information into radiomics studies
Götz, Michael
Maier-Hein, Klaus H
Scientific Reports2020Journal Article, cited 0 times
Website
LIDC-IDRI
Radiomics
Lung
Models
MITK Phenotyping
Gradient boosting
Random forest
LASSO
Conducting side experiments, termed robustness experiments, to identify features that are stable with respect to rescans, annotation, or other confounding effects is an important element of radiomics research. However, how to incorporate the findings of these experiments into the model building process still needs to be explored. Three different methods for incorporating prior knowledge into a radiomics modelling process were evaluated: the naive approach (ignoring feature quality), the most common approach consisting of removing unstable features, and a novel approach using data augmentation for information transfer (DAFIT). Multiple experiments were conducted using both synthetic and publicly available real lung imaging patient data. Ignoring additional information from side experiments resulted in significantly overestimated model performances, meaning the estimated mean area under the curve achieved with a model was inflated. Removing unstable features improved the performance estimation, while slightly decreasing the model performance, i.e. decreasing the area under the curve achieved with the model. The proposed approach was superior both in terms of the estimation of the model performance and the actual model performance. Our experiments show that data augmentation can prevent biases in performance estimation and has several advantages over the plain omission of unstable features. The actual gain that can be obtained depends on the quality and applicability of the prior information on the features in the given domain. This will be an important topic of future research.
Prediction of Molecular Mutations in Diffuse Low-Grade Gliomas using MR Imaging Features
Diffuse low-grade gliomas (LGG) have been reclassified based on molecular mutations, which require invasive tumor tissue sampling. Tissue sampling by biopsy may be limited by sampling error, whereas non-invasive imaging can evaluate the entirety of a tumor. This study presents a non-invasive analysis of low-grade gliomas using imaging features based on the updated classification. We introduce imaging-based methods for predicting the molecular mutations of low-grade gliomas (MGMT methylation, IDH mutation, 1p/19q co-deletion, ATRX mutation, and TERT mutations). Imaging features are extracted from magnetic resonance imaging data and include texture features, fractal and multi-resolution fractal texture features, and volumetric features. Training models include nested leave-one-out cross-validation to select features, train the model, and estimate model performance. The prediction models of MGMT methylation, IDH mutations, 1p/19q co-deletion, ATRX mutation, and TERT mutations achieve a test performance AUC of 0.83 +/- 0.04, 0.84 +/- 0.03, 0.80 +/- 0.04, 0.70 +/- 0.09, and 0.82 +/- 0.04, respectively. Furthermore, our analysis shows that the fractal features have a significant effect on the predictive performance for MGMT methylation, IDH mutations, 1p/19q co-deletion, and ATRX mutations. The performance of our prediction methods indicates the potential of correlating computed imaging features with LGG molecular mutation types and identifies candidates that may be considered potential predictive biomarkers of LGG molecular classification.
3D-MCN: A 3D Multi-scale Capsule Network for Lung Nodule Malignancy Prediction
Despite the advances in automatic lung cancer malignancy prediction, achieving high accuracy remains challenging. Existing solutions are mostly based on Convolutional Neural Networks (CNNs), which require a large amount of training data. Most of the developed CNN models are based only on the main nodule region, without considering the surrounding tissues. Obtaining high sensitivity is challenging with lung nodule malignancy prediction. Moreover, the interpretability of the proposed techniques should be a consideration when the end goal is to utilize the model in a clinical setting. Capsule networks (CapsNets) are new and revolutionary machine learning architectures proposed to overcome shortcomings of CNNs. Capitalizing on the success of CapsNet in biomedical domains, we propose a novel model for lung tumor malignancy prediction. The proposed framework, referred to as the 3D Multi-scale Capsule Network (3D-MCN), is uniquely designed to benefit from: (i) 3D inputs, providing information about the nodule in 3D; (ii) Multi-scale input, capturing the nodule's local features, as well as the characteristics of the surrounding tissues, and; (iii) CapsNet-based design, being capable of dealing with a small number of training samples. The proposed 3D-MCN architecture predicted lung nodule malignancy with a high accuracy of 93.12%, sensitivity of 94.94%, area under the curve (AUC) of 0.9641, and specificity of 90% when tested on the LIDC-IDRI dataset. When classifying patients as having a malignant condition (i.e., at least one malignant nodule is detected) or not, the proposed model achieved an accuracy of 83%, and a sensitivity and specificity of 84% and 81% respectively.
Weakly-supervised learning for lung carcinoma classification using deep learning
Kanavati, Fahdi
Toyokawa, Gouji
Momosaki, Seiya
Rambeau, Michael
Kozuma, Yuka
Shoji, Fumihiro
Yamazaki, Koji
Takeo, Sadanori
Iizuka, Osamu
Tsuneki, Masayuki
Scientific Reports2020Journal Article, cited 52 times
Website
TCGA-LUAD
TCGA-LUSC
CPTAC-LSCC
Pathology
Deep Learning
Peritumoral and intratumoral radiomic features predict survival outcomes among patients diagnosed in lung cancer screening
The National Lung Screening Trial (NLST) demonstrated that screening with low-dose computed tomography (LDCT) is associated with a 20% reduction in lung cancer mortality. One potential limitation of LDCT screening is overdiagnosis of slow-growing and indolent cancers. In this study, peritumoral and intratumoral radiomics was used to identify a vulnerable subset of lung cancer patients associated with poor survival outcomes. Incident lung cancer patients from the NLST were split into training and test cohorts, and an external cohort of non-screen-detected adenocarcinomas was used for further validation. After removing redundant and non-reproducible radiomics features, backward elimination analyses identified a single model which was subjected to Classification and Regression Tree analysis to stratify patients into three risk groups based on two radiomics features (NGTDM Busyness and Statistical Root Mean Square [RMS]). The final model was validated in the test cohort and the cohort of non-screen-detected adenocarcinomas. Using a radiogenomics dataset, Statistical RMS was significantly associated with the FOXF2 gene by both correlation and two-group analyses. Our rigorous approach generated a novel radiomics model that identified a vulnerable high-risk group of early-stage patients associated with poor outcomes. These patients may require aggressive follow-up and/or adjuvant therapy to mitigate their poor outcomes.
Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data
Several studies underscore the potential of deep learning in identifying complex patterns, leading to diagnostic and prognostic biomarkers. Identifying sufficiently large and diverse datasets, required for training, is a significant challenge in medicine and can rarely be found in individual institutions. Multi-institutional collaborations based on centrally-shared patient data face privacy and ownership challenges. Federated learning is a novel paradigm for data-private multi-institutional collaborations, where model-learning leverages all available data without sharing data between institutions, by distributing the model-training to the data-owners and aggregating their results. We show that federated learning among 10 institutions results in models reaching 99% of the model quality achieved with centralized data, and evaluate generalizability on data from institutions outside the federation. We further investigate the effects of data distribution across collaborating institutions on model quality and learning patterns, indicating that increased access to data through data-private multi-institutional collaborations can benefit model quality more than the errors introduced by the collaborative method. Finally, we compare with other collaborative-learning approaches, demonstrating the superiority of federated learning, and discuss practical implementation considerations. Clinical adoption of federated learning is expected to lead to models trained on datasets of unprecedented size, and hence to have a catalytic impact on precision/personalized medicine.
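The core aggregation step of federated averaging can be written in a few lines; this NumPy sketch performs a data-weighted average of per-institution weights and omits everything else a real federation needs (communication, security, scheduling).

    # Sketch: FedAvg aggregation of locally trained model weights.
    import numpy as np

    def federated_average(weight_sets, n_samples):
        # weight_sets: list over institutions, each a list of weight arrays
        # n_samples: number of training samples contributed by each institution
        total = sum(n_samples)
        avg = [np.zeros_like(w) for w in weight_sets[0]]
        for weights, n in zip(weight_sets, n_samples):
            for i, w in enumerate(weights):
                avg[i] += (n / total) * w
        return avg

    # e.g., 3 institutions with different cohort sizes
    local = [[np.random.rand(4, 4), np.random.rand(4)] for _ in range(3)]
    global_weights = federated_average(local, n_samples=[120, 300, 80])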
Standardization of brain MR images across machines and protocols: bridging the gap for MRI-based radiomics
Carré, Alexandre
Klausner, Guillaume
Edjlali, Myriam
Lerousseau, Marvin
Briend-Diop, Jade
Sun, Roger
Ammari, Samy
Reuzé, Sylvain
Andres, Emilie Alvarez
Estienne, Théo
Scientific Reports2020Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
radiomics
Radiomics feature reproducibility under inter-rater variability in segmentations of CT images
Identifying image features that are robust with respect to segmentation variability is a tough challenge in radiomics. So far, this problem has mainly been tackled in test-retest analyses. In this work we analyse radiomics feature reproducibility in two phases: first with manual segmentations provided by four expert readers and second with probabilistic automated segmentations using a recently developed neural network (PHiseg). We test feature reproducibility on three publicly available datasets of lung, kidney and liver lesions. We find consistent results both over manual and automated segmentations in all three datasets and show that there are subsets of radiomic features which are robust against segmentation variability and other radiomic features which are prone to poor reproducibility under differing segmentations. By providing a detailed analysis of robustness of the most common radiomics features across several datasets, we envision that more reliable and reproducible radiomic models can be built in the future based on this work.
Context aware deep learning for brain tumor segmentation, subtype classification, and survival prediction using radiology images
Pei, Linmin
Vidyaratne, Lasitha
Rahman, Md Monibor
Iftekharuddin, Khan M
Scientific Reports (Nature Publisher Group)2020Journal Article, cited 0 times
Website
TCGA-LGG
BraTS-TCGA-GBM
BraTS-TCGA-LGG
machine learning
Segmentation
A brain tumor is an uncontrolled growth of cancerous cells in the brain. Accurate segmentation and classification of tumors are critical for subsequent prognosis and treatment planning. This work proposes context-aware deep learning for brain tumor segmentation, subtype classification, and overall survival prediction using structural multimodal magnetic resonance images (mMRI). We first propose a 3D context-aware deep learning model that considers uncertainty of tumor location in the radiology mMRI image sub-regions to obtain tumor segmentation. We then apply a regular 3D convolutional neural network (CNN) on the tumor segments to achieve tumor subtype classification. Finally, we perform survival prediction using a hybrid method of deep learning and machine learning. To evaluate the performance, we apply the proposed methods to the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) dataset for tumor segmentation and overall survival prediction, and to the dataset of the Computational Precision Medicine Radiology-Pathology (CPM-RadPath) Challenge on Brain Tumor Classification 2019 for tumor classification. We also perform an extensive performance evaluation based on popular evaluation metrics, such as the Dice score coefficient, Hausdorff distance at percentile 95 (HD95), classification accuracy, and mean square error. The results suggest that the proposed method offers robust tumor segmentation and survival prediction. Furthermore, the tumor classification result in this work ranked second place in the testing phase of the 2019 CPM-RadPath global challenge.
Radiomic features based on Hessian index for prediction of prognosis in head-and-neck cancer patients
Le, Quoc Cuong
Arimura, Hidetaka
Ninomiya, Kenta
Kabata, Yutaro
Scientific RepoRtS2020Journal Article, cited 0 times
Website
HNSCC
Head-Neck-PET-CT
radiomic features
Quantification of the spatial distribution of primary tumors in the lung to develop new prognostic biomarkers for locally advanced NSCLC
Vuong, Diem
Bogowicz, Marta
Wee, Leonard
Riesterer, Oliver
Vlaskou Badra, Eugenia
D’Cruz, Louisa Abigail
Balermpas, Panagiotis
van Timmeren, Janita E.
Burgermeister, Simon
Dekker, André
De Ruysscher, Dirk
Unkelbach, Jan
Thierstein, Sandra
Eboulet, Eric I.
Peters, Solange
Pless, Miklos
Guckenberger, Matthias
Tanadini-Lang, Stephanie
Scientific Reports2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
lung
The anatomical location and extent of primary lung tumors have shown prognostic value for overall survival (OS). However, their manual assessment is prone to interobserver variability. This study aims to use data-driven identification of image characteristics for OS in locally advanced non-small cell lung cancer (NSCLC) patients. Five stage IIIA/IIIB NSCLC patient cohorts were retrospectively collected. Patients were treated either with radiochemotherapy (RCT): RCT1* (n = 107), RCT2 (n = 95), RCT3 (n = 37), or with surgery combined with radiotherapy or chemotherapy: S1* (n = 135), S2 (n = 55). Based on a deformable image registration (MIM Vista, 6.9.2.), an in-house developed software transferred each primary tumor to the CT scan of a reference patient while maintaining the original tumor shape. A frequency-weighted cumulative status map was created for both exploratory cohorts (indicated with an asterisk), where the spatial extent of the tumor was labeled with the patient's 2-year OS status. For the exploratory cohorts, a permutation test with random assignment of patient status was performed to identify regions with statistically significantly worse OS, referred to as decreased survival areas (DSA). The minimal Euclidean distance between the primary tumor and the DSA was extracted from the independent cohorts (negative distance in case of overlap). To account for tumor volume, the distance was scaled by the radius of the volume-equivalent sphere. For the S1 cohort, DSA were located at the right main bronchus, whereas for the RCT1 cohort they extended further in the cranio-caudal direction. In the independent cohorts, the model based on distance to DSA achieved the following performance: AUC(RCT2) [95% CI] = 0.67 [0.55–0.78] and AUC(RCT3) = 0.59 [0.39–0.79] for RCT patients, but it showed poor performance for the surgery cohort (AUC(S2) = 0.52 [0.30–0.74]). A shorter distance to DSA was associated with worse outcome (p = 0.0074). In conclusion, this exploratory analysis quantifies the value of primary tumor location for OS prediction based on cumulative status maps. A shorter distance of the primary tumor to a high-risk region was associated with worse prognosis in the RCT cohort.
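The volume scaling described above uses the radius of the volume-equivalent sphere, r = (3V/4π)^(1/3); a small sketch with made-up numbers:

    # Sketch: scale tumor-to-DSA distance by the volume-equivalent sphere radius.
    import numpy as np

    tumor_volume_mm3 = 34_000.0
    distance_to_dsa_mm = 12.0          # negative if the tumor overlaps the DSA

    r_equiv = (3.0 * tumor_volume_mm3 / (4.0 * np.pi)) ** (1.0 / 3.0)
    scaled_distance = distance_to_dsa_mm / r_equiv
    print(f"equivalent radius = {r_equiv:.1f} mm, "
          f"scaled distance = {scaled_distance:.2f}")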
Impact of GAN-based lesion-focused medical image super-resolution on the robustness of radiomic features
de Farias, E. C.
di Noia, C.
Han, C.
Sala, E.
Castelli, M.
Rundo, L.
Sci Rep2021Journal Article, cited 12 times
Website
NSCLC-Radiomics
Algorithms
Humans
Image Processing, Computer-Assisted/methods
Lung/*diagnostic imaging/pathology
Lung Neoplasms/*diagnostic imaging/pathology
Machine Learning
Tomography, X-Ray Computed/methods
Robust machine learning models based on radiomic features might allow for accurate diagnosis, prognosis, and medical decision-making. Unfortunately, the lack of standardized radiomic feature extraction has hampered their clinical use. Since radiomic features tend to be affected by low voxel statistics in regions of interest, increasing the sample size would improve their robustness in clinical studies. Therefore, we propose a Generative Adversarial Network (GAN)-based lesion-focused framework for Computed Tomography (CT) image Super-Resolution (SR); for the lesion (i.e., cancer) patch-focused training, we incorporate Spatial Pyramid Pooling (SPP) into GAN-Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE). At [Formula: see text] SR, the proposed model achieved better perceptual quality with less blurring than the other considered state-of-the-art SR methods, while producing comparable results at [Formula: see text] SR. We also evaluated the robustness of our model's radiomic features in terms of quantization on a different lung cancer CT dataset using Principal Component Analysis (PCA). Intriguingly, the most important radiomic features in our PCA-based analysis were the most robust features extracted on the GAN-super-resolved images. These achievements pave the way for the application of GAN-based image Super-Resolution techniques in radiomics studies for robust biomarker discovery.
A large dataset of white blood cells containing cell locations and types, along with segmented nuclei and cytoplasm
Accurate and early detection of anomalies in peripheral white blood cells plays a crucial role in the evaluation of well-being in individuals and the diagnosis and prognosis of hematologic diseases. For example, some blood disorders and immune system-related diseases are diagnosed by the differential count of white blood cells, which is a common laboratory test. Data is one of the most important ingredients in the development and testing of many commercial and successful automatic or semi-automatic systems. To this end, this study introduces a free-access dataset of normal peripheral white blood cells called Raabin-WBC, containing about 40,000 images of white blood cells and color spots. To ensure the validity of the data, a significant number of cells were labeled by two experts, and the ground truths of the nuclei and cytoplasm were extracted for 1145 selected cells. To provide the necessary diversity, various smears were imaged, and two different cameras and two different microscopes were used. We performed some preliminary deep learning experiments on Raabin-WBC to demonstrate how the generalization power of machine learning methods, especially deep neural networks, can be affected by the mentioned diversity. As public data in the field of health, Raabin-WBC can be used for model development and testing in different machine learning tasks, including classification, detection, segmentation, and localization.
MuSA: a graphical user interface for multi-OMICs data integration in radiogenomic studies
Zanfardino, Mario
Castaldo, Rossana
Pane, Katia
Affinito, Ornella
Aiello, Marco
Salvatore, Marco
Franzese, Monica
Scientific Reports2021Journal Article, cited 0 times
Website
TCGA-BRCA
radiogenomics
Distant metastasis time to event analysis with CNNs in independent head and neck cancer cohorts
Deep learning models based on medical images play an increasingly important role in cancer outcome prediction. The standard approach involves the use of convolutional neural networks (CNNs) to automatically extract relevant features from the patient's image and perform a binary classification of the occurrence of a given clinical endpoint. In this work, a 2D-CNN and a 3D-CNN for the binary classification of distant metastasis (DM) occurrence in head and neck cancer patients were extended to perform time-to-event analysis. The newly built CNNs incorporate censoring information and output DM-free probability curves as a function of time for every patient. In total, 1037 patients were used to build and assess the performance of the time-to-event model. Training and validation were based on 294 patients also used in a previous benchmark classification study, while for testing 743 patients from three independent cohorts were used. The best network could reproduce the good results from 3-fold cross validation [Harrell's concordance indices (HCIs) of 0.78, 0.74 and 0.80] in two out of three testing cohorts (HCIs of 0.88, 0.67 and 0.77). Additionally, the capability of the models for patient stratification into high- and low-risk groups was investigated, the CNNs being able to significantly stratify all three testing cohorts. Results suggest that image-based deep learning models show good reliability for DM time-to-event analysis and could be used for treatment personalisation.
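One common way to extend a classification CNN to censored time-to-event output is a discrete-time survival likelihood; the sketch below is a generic censoring-aware negative log-likelihood, not necessarily the authors' exact loss.

    # Sketch: censoring-aware NLL for per-interval hazard predictions.
    import numpy as np

    def discrete_survival_nll(hazards, event_interval, observed):
        # hazards: (n_patients, n_intervals) predicted hazard per time interval
        # event_interval: interval index of the event or of censoring
        # observed: 1 if DM occurred, 0 if censored
        nll = 0.0
        for h, t, e in zip(hazards, event_interval, observed):
            nll -= np.log(1 - h[:t]).sum()     # survived intervals before t
            if e:
                nll -= np.log(h[t])            # event in interval t
            else:
                nll -= np.log(1 - h[t])        # still event-free at censoring
        return nll / len(hazards)

    h = np.full((2, 10), 0.05)
    print(discrete_survival_nll(h, event_interval=[3, 7], observed=[1, 0]))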
Tens of images can suffice to train neural networks for malignant leukocyte detection
Convolutional neural networks (CNNs) excel as powerful tools for biomedical image classification. It is commonly assumed that training CNNs requires large amounts of annotated data. This is a bottleneck in many medical applications where annotation relies on expert knowledge. Here, we analyze the binary classification performance of a CNN on two independent cytomorphology datasets as a function of training set size. Specifically, we train a sequential model to discriminate non-malignant leukocytes from blast cells, whose appearance in the peripheral blood is a hallmark of leukemia. We systematically vary training set size, finding that tens of training images suffice for a binary classification with an ROC-AUC over 90%. Saliency maps and layer-wise relevance propagation visualizations suggest that the network learns to increasingly focus on nuclear structures of leukocytes as the number of training images is increased. A low-dimensional tSNE representation reveals that while the two classes are separated already for a few training images, the distinction between the classes becomes clearer when more training images are used. To evaluate the performance in a multi-class problem, we annotated single-cell images from an acute lymphoblastic leukemia dataset into six different hematopoietic classes. Multi-class prediction suggests that here, too, few single-cell images suffice if differences between morphological classes are large enough. The incorporation of deep learning algorithms into clinical practice has the potential to reduce variability and cost, democratize usage of expertise, and allow for early detection of disease onset and relapse. Our approach evaluates the performance of a deep learning based cytology classifier with respect to size and complexity of the training data and the classification task.
Weakly supervised temporal model for prediction of breast cancer distant recurrence
Efficient prediction of cancer recurrence in advance may help recruit high-risk breast cancer patients to clinical trials on time and can guide a proper treatment plan. Several machine learning approaches have been developed for recurrence prediction in previous studies, but most of them use only structured electronic health records and only a small training dataset, with limited success in clinical application. While free-text clinic notes may offer the greatest nuance and detail about a patient's clinical status, they are largely excluded from previous predictive models due to the increase in processing complexity and the need for a complex modeling framework. In this study, we developed a weak-supervision framework for breast cancer recurrence prediction in which we trained a deep learning model on a large sample of free-text clinic notes by utilizing a combination of manually curated labels and NLP-generated non-perfect recurrence labels. The model was trained jointly on manually curated data from 670 patients and NLP-curated data of 8062 patients. It was validated on manually annotated data from 224 patients with recurrence and achieved an AUROC of 0.94. This weak supervision approach allowed us to learn from a larger dataset using imperfect labels and ultimately provided greater accuracy compared to a smaller hand-curated dataset, with less manual effort invested in curation.
Domain adaptation for segmentation of critical structures for prostate cancer therapy
NCI-ISBI 2013 Challenge: Automated Segmentation of Prostate Structures
ISBI-MR-Prostate-2013
Semi-automatic segmentation
Algorithm Development
Preoperative assessment of the proximity of critical structures to the tumors is crucial in avoiding unnecessary damage during prostate cancer treatment. A patient-specific 3D anatomical model of those structures, namely the neurovascular bundles (NVB) and the external urethral sphincters (EUS), can enable physicians to perform such assessments intuitively. As a crucial step toward generating a patient-specific anatomical model from preoperative MRI in a clinical routine, we propose a multi-class automatic segmentation based on an anisotropic convolutional network. Our specific challenge is to train the network model on a unique source dataset only available at a single clinical site and deploy it to another target site without sharing the original images or labels. As network models trained on data from a single source suffer from quality loss due to the domain shift, we propose a semi-supervised domain adaptation (DA) method to refine the model's performance in the target domain. Our DA method combines transfer learning (TL) and uncertainty-guided self-learning based on deep ensembles. Experiments on the segmentation of the prostate, NVB, and EUS show significant performance gains with the combination of those techniques compared to pure TL and the combination of TL with simple self-learning ([Formula: see text] for all structures using a Wilcoxon's signed-rank test). Results on a different task and data (pancreas CT segmentation) demonstrate our method's generic application capabilities. Our method has the advantage that it does not require any further data from the source domain, unlike the majority of recent domain adaptation strategies. This makes our method suitable for clinical applications, where the sharing of patient data is restricted.
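Uncertainty-guided self-learning with a deep ensemble can be sketched as thresholding the across-member disagreement before accepting pseudo-labels; the array shapes and the threshold below are illustrative assumptions, not the paper's settings.

    # Sketch: keep pseudo-labels only where ensemble members agree.
    import numpy as np

    rng = np.random.default_rng(6)
    # per-voxel foreground probabilities from 5 ensemble members
    ensemble_probs = rng.beta(2, 2, size=(5, 64, 64, 64))

    mean_prob = ensemble_probs.mean(axis=0)
    uncertainty = ensemble_probs.var(axis=0)   # disagreement across members

    pseudo_label = (mean_prob > 0.5).astype(np.uint8)
    confident = uncertainty < 0.02             # hypothetical confidence threshold
    # only pseudo_label[confident] voxels would enter the self-learning loss
    print("confident voxel fraction:", confident.mean())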
Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images
Ranjbarzadeh, R.
Bagherian Kasgari, A.
Jafarzadeh Ghoushchi, S.
Anari, S.
Naseri, M.
Bendechache, M.
Sci Rep2021Journal Article, cited 0 times
BraTS-TCGA-LGG
BraTS-TCGA-GBM
Brain Neoplasms/*diagnostic imaging
Deep Learning
Humans
Magnetic Resonance Imaging (MRI)
Neural Networks, Computer
Neuroimaging/*methods
Radiographic Image Interpretation, Computer-Assisted/*methods
Specimen Handling
Brain tumor localization and segmentation from magnetic resonance imaging (MRI) are hard and important tasks for several applications in the field of medical analysis. As each brain imaging modality gives unique and key details related to each part of the tumor, many recent approaches used four modalities: T1, T1c, T2, and FLAIR. Although many of them obtained promising segmentation results on the BRATS 2018 dataset, they suffer from a complex structure that needs more time to train and test. So, in this paper, to obtain a flexible and effective brain tumor segmentation system, first, we propose a preprocessing approach to work only on a small part of the image rather than the whole image. This method leads to a decrease in computing time and mitigates overfitting problems in a Cascade Deep Learning model. In the second step, as we are dealing with a smaller part of brain images in each slice, a simple and efficient Cascade Convolutional Neural Network (C-ConvNet/C-CNN) is proposed. This C-CNN model mines both local and global features in two different routes. Also, to improve the brain tumor segmentation accuracy compared with the state-of-the-art models, a novel Distance-Wise Attention (DWA) mechanism is introduced. The DWA mechanism considers the effect of the center location of the tumor and the brain inside the model. Comprehensive experiments were conducted on the BRATS 2018 dataset and show that the proposed model obtains competitive results: mean Dice scores of 0.9203, 0.9113, and 0.8726 for the whole tumor, enhancing tumor, and tumor core, respectively. Other quantitative and qualitative assessments are presented and discussed.
Early prediction of neoadjuvant chemotherapy response by exploiting a transfer learning approach on breast DCE-MRIs
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Radiomics
Radiogenomics
BREAST
Machine Learning
Magnetic Resonance Imaging (MRI)
Radiography
Support Vector Machine (SVM)
Dynamic contrast-enhanced MR imaging plays a crucial role in evaluating the effectiveness of neoadjuvant chemotherapy (NAC), even at an early stage, through the prediction of the final pathological complete response (pCR). In this study, we proposed a transfer learning approach to predict whether a patient achieved pCR or did not (non-pCR) by exploiting, separately or in combination, pre-treatment and early-treatment exams from the I-SPY1 TRIAL public database. First, low-level features, i.e., related to the local structure of the image, were automatically extracted by a pre-trained convolutional neural network (CNN), overcoming manual feature extraction. Next, an optimal set of the most stable features was detected and then used to design an SVM classifier. A first subset of patients, called the fine-tuning dataset (30 pCR; 78 non-pCR), was used to perform the optimal choice of features. A second subset not involved in the feature selection process was employed as an independent test (7 pCR; 19 non-pCR) to validate the model. By combining the optimal features extracted from both pre-treatment and early-treatment exams with some clinical features, i.e., ER, PgR, HER2 and molecular subtype, an accuracy of 91.4% and 92.3%, and an AUC value of 0.93 and 0.90, were returned on the fine-tuning dataset and the independent test, respectively. Overall, the low-level CNN features have an important role in the early evaluation of NAC efficacy by predicting pCR. The proposed model represents a first effort towards the development of a clinical support tool for an early prediction of pCR to NAC.
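The feature-extraction stage described above (a pre-trained CNN replacing manual feature engineering) can be sketched as follows; ResNet-18 stands in for whatever backbone the authors used, and the preprocessing and classifier wiring are assumptions.

```python
import torch
import torchvision.models as models

# Frozen ImageNet CNN as a low-level feature extractor for MRI slices.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()  # drop the ImageNet classification head
backbone.eval()

@torch.no_grad()
def extract_features(batch):
    """batch: (N, 3, 224, 224) float tensor of preprocessed DCE-MRI slices."""
    return backbone(batch).numpy()

# The resulting vectors would then feed a classifier, e.g.:
# from sklearn.svm import SVC
# clf = SVC(kernel="linear").fit(extract_features(train_batch), train_labels)
```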
Deep learning for fully automatic detection, segmentation, and Gleason grade estimation of prostate cancer in multiparametric magnetic resonance images
Although the emergence of multi-parametric magnetic resonance imaging (mpMRI) has had a profound impact on the diagnosis of prostate cancers (PCa), analyzing these images remains complex even for experts. This paper proposes a fully automatic system based on Deep Learning that performs localization, segmentation and Gleason grade group (GGG) estimation of PCa lesions from prostate mpMRIs. It uses 490 mpMRIs for training/validation and 75 for testing from two different datasets: ProstateX and the Valencian Oncology Institute Foundation (IVO). In the test set, it achieves an excellent lesion-level AUC/sensitivity/specificity for the GGG >= 2 significance criterion of 0.96/1.00/0.79 for the ProstateX dataset, and 0.95/1.00/0.80 for the IVO dataset. At a patient level, the results are 0.87/1.00/0.375 in ProstateX, and 0.91/1.00/0.762 in IVO. Furthermore, on the online ProstateX grand challenge, the model obtained an AUC of 0.85 (0.87 when trained only on the ProstateX data, tying with the original winner of the challenge). For expert comparison, the IVO radiologist's PI-RADS 4 sensitivity/specificity were 0.88/0.56 at a lesion level, and 0.85/0.58 at a patient level. The full code for the ProstateX-trained model is openly available at https://github.com/OscarPellicer/prostate_lesion_detection . We hope that this will represent a landmark for future research to use, compare and improve upon.
Cross-institutional outcome prediction for head and neck cancer patients using self-attention neural networks
Le, W. T.
Vorontsov, E.
Romero, F. P.
Seddik, L.
Elsharief, M. M.
Nguyen-Tan, P. F.
Roberge, D.
Bahig, H.
Kadoury, S.
Sci Rep2022Journal Article, cited 0 times
Head-Neck-PET-CT
Multimodal Imaging
Aged
Aged, 80 and over
Attention
Biomarkers, Tumor
Carcinoma, Squamous Cell/*diagnostic imaging/therapy
Deep Learning
Diagnosis, Computer-Assisted/*methods
Female
Head and Neck Neoplasms/*diagnostic imaging/therapy
Humans
Image Processing, Computer-Assisted/*methods
Male
Middle Aged
Neoplasm Recurrence, Local/diagnostic imaging
*Neural Networks, Computer
Positron Emission Tomography Computed Tomography
Prognosis
Quality of Life
Retrospective Studies
In radiation oncology, predicting patient risk stratification allows specialization of therapy intensification as well as selecting between systemic and regional treatments, all of which helps to improve patient outcome and quality of life. Deep learning offers an advantage over traditional radiomics for medical image processing by learning salient features from training data originating from multiple datasets. However, while the large capacity of deep networks allows them to combine high-level medical imaging data for outcome prediction, they often fail to generalize across institutions. In this work, a pseudo-volumetric convolutional neural network with a deep preprocessor module and self-attention (PreSANet) is proposed for the prediction of distant metastasis, locoregional recurrence, and overall survival occurrence probabilities within a 10-year follow-up time frame for head and neck cancer patients with squamous cell carcinoma. The model is capable of processing multi-modal inputs of variable scan length, as well as integrating patient data into the prediction model. These proposed architectural features and additional modalities all serve to extract additional information from the available data when access to additional samples is limited. The model was trained on the public Cancer Imaging Archive Head-Neck-PET-CT dataset consisting of 298 patients undergoing curative radio/chemo-radiotherapy and acquired from 4 different institutions. The model was further validated on an internal retrospective dataset with 371 patients acquired from one of the institutions in the training dataset. An extensive set of ablation experiments was performed to test the utility of the proposed model characteristics, achieving an AUROC of [Formula: see text], [Formula: see text] and [Formula: see text] for DM, LR and OS respectively on the public TCIA Head-Neck-PET-CT dataset. External validation was performed on a retrospective dataset with 371 patients, achieving [Formula: see text] AUROC in all outcomes. To test for model generalization across sites, a validation scheme consisting of single-site holdout and cross-validation combining both datasets was used. The mean accuracy across 4 institutions obtained was [Formula: see text], [Formula: see text] and [Formula: see text] for DM, LR and OS respectively. The proposed model demonstrates an effective method for tumor outcome prediction in multi-site, multi-modal settings, combining both volumetric data and structured patient clinical data.
Automated pancreas segmentation and volumetry using deep neural network on computed tomography
Pancreas segmentation is necessary for observing lesions, analyzing anatomical structures, and predicting patient prognosis. Therefore, various studies have designed segmentation models based on convolutional neural networks for pancreas segmentation. However, the deep learning approach is limited by a lack of data, and studies conducted on large computed tomography datasets are scarce. Therefore, this study aims to perform deep-learning-based semantic segmentation on 1006 participants and evaluate the automatic pancreas segmentation performance of four individual three-dimensional segmentation networks. In this study, we performed internal validation with 1006 patients and external validation using The Cancer Imaging Archive pancreas dataset. We obtained mean precision, recall, and Dice similarity coefficients of 0.869, 0.842, and 0.842, respectively, for internal validation with the best-performing of the four deep learning networks. Using the external dataset, the deep learning network achieved mean precision, recall, and Dice similarity coefficients of 0.779, 0.749, and 0.735, respectively. We expect that generalized deep-learning-based systems can assist clinical decisions by providing accurate pancreatic segmentation and quantitative information about the pancreas for abdominal computed tomography.
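For reference, the Dice similarity coefficient reported above is a straightforward overlap measure between a predicted and a ground-truth mask; a minimal implementation:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (returns 1.0 when
    both masks are empty, by the convention chosen here)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```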
The impact of inter-observer variation in delineation on robustness of radiomics features in non-small cell lung cancer
Artificial intelligence and radiomics have the potential to revolutionise cancer prognostication and personalised treatment. Manual outlining of the tumour volume for extraction of radiomics features (RF) is a subjective process. This study investigates the robustness of RF to inter-observer variation (IOV) in contouring in lung cancer. We utilised two public imaging datasets: 'NSCLC-Radiomics' and 'NSCLC-Radiomics-Interobserver1' ('Interobserver'). For 'NSCLC-Radiomics', we created an additional set of manual contours for 92 patients, and for 'Interobserver', there were five manual and five semi-automated contours available for 20 patients. Dice coefficients (DC) were calculated for contours. 1113 RF were extracted, including shape, first order and texture features. The intraclass correlation coefficient (ICC) was computed to assess the robustness of RF to IOV. Cox regression analysis for overall survival (OS) was performed with a previously published radiomics signature. The median DC ranged from 0.81 ('NSCLC-Radiomics') to 0.85 ('Interobserver'-semi-automated). The median ICCs for the 'NSCLC-Radiomics', 'Interobserver' (manual) and 'Interobserver' (semi-automated) datasets were 0.90, 0.88 and 0.93 respectively. The ICC varied by feature type and was lower for first order and gray level co-occurrence matrix (GLCM) features. Shape features had a lower median ICC in the 'NSCLC-Radiomics' dataset compared to the 'Interobserver' dataset. Survival analysis showed similar separation of curves for three of four RF apart from 'original_shape_Compactness2', a feature with low ICC (0.61). The majority of RF are robust to IOV, with first order, GLCM and shape features being the least robust. Semi-automated contouring improves feature stability. Decreased robustness of a feature is significant as it may impact upon the feature's prognostic capability.
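As a sketch of the ICC screening used above, the agreement of a single radiomic feature across two observers can be computed with pingouin; the ICC2 variant and long-format layout below are assumptions, since the abstract does not tie itself to a specific implementation.

```python
import pandas as pd
import pingouin as pg  # pip install pingouin

def feature_icc(values_obs1, values_obs2):
    """Two-way random-effects ICC for one feature rated by two observers."""
    n = len(values_obs1)
    df = pd.DataFrame({
        "patient": list(range(n)) * 2,
        "observer": ["obs1"] * n + ["obs2"] * n,
        "value": list(values_obs1) + list(values_obs2),
    })
    icc = pg.intraclass_corr(data=df, targets="patient",
                             raters="observer", ratings="value")
    return icc.loc[icc["Type"] == "ICC2", "ICC"].item()
```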
AutoComBat: a generic method for harmonizing MRI-based radiomic features
The use of multicentric data is becoming essential for developing generalizable radiomic signatures. In particular, Magnetic Resonance Imaging (MRI) data used in brain oncology are often heterogeneous in terms of scanners and acquisitions, which significantly impacts quantitative radiomic features. Various methods have been proposed to decrease this dependency, including methods acting directly on MR images, i.e., based on the application of several preprocessing steps before feature extraction, or the ComBat method, which harmonizes the radiomic features themselves. The ComBat method used for radiomics may be misleading and presents some limitations, such as the need to know the labels associated with the "batch effect". In addition, a statistically representative sample is required, and applying a signature whose batch label is not present in the training set is not possible. This work aimed to compare a priori and a posteriori radiomic harmonization methods and propose a code adaptation to be machine learning compatible. Furthermore, we have developed AutoComBat, which aims to automatically determine the batch labels, using either MRI metadata or quality metrics as inputs of the proposed constrained clustering. A heterogeneous dataset consisting of high and low-grade gliomas coming from eight different centers was considered. The different methods were compared based on their ability to decrease the relative standard deviation of radiomic features extracted from white matter and on their performance on a classification task using different machine learning models. ComBat and AutoComBat using image-derived quality metrics as inputs for batch assignment, as well as preprocessing methods, presented promising results on white matter harmonization, but with no clear consensus for all MR images. Preprocessing showed the best results on the T1w-gd images for the grading task. For T2w-flair, AutoComBat, using either metadata plus quality metrics or metadata alone as inputs, performs better than the conventional ComBat, highlighting its potential for data harmonization. Our results are MRI weighting, feature class and task dependent and require further investigation on other datasets.
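For orientation, conventional ComBat harmonization with known batch labels looks roughly like the sketch below (using the neuroCombat package); AutoComBat's contribution is to infer the batch column by constrained clustering of metadata or quality metrics, which is not shown here.

```python
import pandas as pd
from neuroCombat import neuroCombat  # pip install neurocombat

def harmonize(features, scanner_labels):
    """features: (n_features, n_samples) array; scanner_labels: one batch
    label per sample (known here; inferred automatically by AutoComBat)."""
    covars = pd.DataFrame({"batch": scanner_labels})
    out = neuroCombat(dat=features, covars=covars, batch_col="batch")
    return out["data"]  # harmonized (n_features, n_samples) matrix
```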
Radiomics and deep learning methods for the prediction of 2-year overall survival in LUNG1 dataset
In this study, we tested and compared radiomics and deep learning-based approaches on the public LUNG1 dataset for the prediction of 2-year overall survival (OS) in non-small cell lung cancer patients. Radiomic features were extracted from the gross tumor volume using Pyradiomics, while deep features were extracted from bi-dimensional tumor slices by a convolutional autoencoder. Both radiomic and deep features were fed to 24 different pipelines formed by the combination of four feature selection/reduction methods and six classifiers. Direct classification through convolutional neural networks (CNNs) was also performed. Each approach was investigated with and without the inclusion of clinical parameters. The maximum area under the receiver operating characteristic curve on the test set improved from 0.59, obtained for the baseline clinical model, to 0.67 +/- 0.03, 0.63 +/- 0.03 and 0.67 +/- 0.02 for models based on radiomic features, deep features, and their combination, and to 0.64 +/- 0.04 for direct CNN classification. Despite the high number of pipelines and approaches tested, the results were comparable and in line with previous works, hence confirming that it is challenging to extract further imaging-based information from the LUNG1 dataset for the prediction of 2-year OS.
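The Pyradiomics step mentioned above reduces, in its simplest form, to the sketch below; file names are placeholders and the default extraction settings shown are not necessarily those used in the study.

```python
from radiomics import featureextractor  # pip install pyradiomics

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()

# CT volume and gross tumour volume mask (placeholder paths)
features = extractor.execute("lung1_ct.nrrd", "lung1_gtv_mask.nrrd")
numeric = {k: v for k, v in features.items()
           if not k.startswith("diagnostics")}  # drop metadata entries
```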
Improved generalized ComBat methods for harmonization of radiomic features
Radiomic approaches in precision medicine are promising, but variation associated with image acquisition factors can result in severe biases and low generalizability. Multicenter datasets used in these studies are often heterogeneous in multiple imaging parameters and/or have missing information, resulting in multimodal radiomic feature distributions. ComBat is a promising harmonization tool, but it only harmonizes by single/known variables and assumes standardized input data are normally distributed. We propose a procedure that sequentially harmonizes for multiple batch effects in an optimized order, called OPNested ComBat. Furthermore, we propose to address bimodality by employing a Gaussian Mixture Model (GMM) grouping considered as either a batch variable (OPNested + GMM) or as a protected clinical covariate (OPNested - GMM). Methods were evaluated on features extracted with CapTK and PyRadiomics from two public lung computed tomography (CT) datasets. We found that OPNested ComBat improved harmonization performance over standard ComBat. OPNested + GMM ComBat exhibited the best harmonization performance but the lowest predictive performance, while OPNested - GMM ComBat showed poorer harmonization performance, but the highest predictive performance. Our findings emphasize that improved harmonization performance is no guarantee of improved predictive performance, and that these methods show promise for superior standardization of datasets heterogeneous in multiple or unknown imaging parameters and greater generalizability.
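The GMM grouping step can be sketched as follows: a bimodal feature (or feature summary) is split into pseudo-batches that ComBat can then harmonize or protect. The one-dimensional input and two components are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_batch_labels(feature_values, n_components=2):
    """Assign pseudo-batch labels from a (possibly bimodal) distribution."""
    x = np.asarray(feature_values, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
    return gmm.predict(x)  # 0/1 labels usable as a ComBat batch column
```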
An image registration method for voxel-wise analysis of whole-body oncological PET-CT
Whole-body positron emission tomography-computed tomography (PET-CT) imaging in oncology provides comprehensive information of each patient's disease status. However, image interpretation of volumetric data is a complex and time-consuming task. In this work, an image registration method targeted towards computer-aided voxel-wise analysis of whole-body PET-CT data was developed. The method used both CT images and tissue segmentation masks in parallel to spatially align images step-by-step. To evaluate its performance, a set of baseline PET-CT images of 131 classical Hodgkin lymphoma (cHL) patients and longitudinal image series of 135 head and neck cancer (HNC) patients were registered between and within subjects according to the proposed method. Results showed that major organs and anatomical structures generally were registered correctly. Whole-body inverse consistency vector and intensity magnitude errors were on average less than 5 mm and 45 Hounsfield units respectively in both registration tasks. Image registration was feasible in time and the nearly automatic pipeline enabled efficient image processing. Metabolic tumor volumes of the cHL patients and registration-derived therapy-related tissue volume change of the HNC patients mapped to template spaces confirmed proof-of-concept. In conclusion, the method established a robust point-correspondence and enabled quantitative visualization of group-wise image features on voxel level.
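While the paper's pipeline is multi-step and mask-guided, a baseline rigid whole-body registration in SimpleITK conveys the general shape of the computation; all settings below are illustrative defaults, not the authors' configuration.

```python
import SimpleITK as sitk

def rigid_register(fixed, moving):
    """Mutual-information rigid registration of one CT onto another."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform()))
    reg.SetInterpolator(sitk.sitkLinear)
    transform = reg.Execute(fixed, moving)
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```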
Region-adaptive magnetic resonance image enhancement for improving CNN-based segmentation of the prostate and prostatic zones
Zaridis, D. I.
Mylona, E.
Tachos, N.
Pezoulas, V. C.
Grigoriadis, G.
Tsiknakis, N.
Marias, K.
Tsiknakis, M.
Fotiadis, D. I.
Sci Rep2023Journal Article, cited 0 times
Prostate-3T
PROSTATEx
Humans
*Prostate/diagnostic imaging
*Image Processing, Computer-Assisted/methods
Neural Networks, Computer
Magnetic Resonance Imaging (MRI)
Algorithm Development
Automatic segmentation of the prostate and the prostatic zones on MRI remains one of the most compelling research areas. While different image enhancement techniques are emerging as powerful tools for improving the performance of segmentation algorithms, their application still lacks consensus due to contrasting evidence regarding performance improvement and cross-model stability, further hampered by the inability to explain models' predictions. Particularly, for prostate segmentation, the effectiveness of image enhancement on different Convolutional Neural Networks (CNN) remains largely unexplored. The present work introduces a novel image enhancement method, named RACLAHE, to enhance the performance of CNN models for segmenting the prostate's gland and the prostatic zones. The improvement in performance and consistency across five CNN models (U-Net, U-Net++, U-Net3+, ResU-net and USE-NET) is compared against four popular image enhancement methods. Additionally, a methodology is proposed to explain, both quantitatively and qualitatively, the relation between saliency maps and ground truth probability maps. Overall, RACLAHE was the most consistent image enhancement algorithm in terms of performance improvement across CNN models, with the mean increase in Dice score ranging from 3 to 9% for the different prostatic regions, while achieving minimal inter-model variability. The integration of a feature-driven methodology to explain the predictions after applying image enhancement methods enables the development of a concrete, trustworthy automated pipeline for prostate segmentation on MR images.
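RACLAHE is region-adaptive, but its starting point, contrast-limited adaptive histogram equalization, is easy to sketch; the 8-bit normalisation and parameters below are assumptions for illustration only.

```python
import cv2
import numpy as np

def clahe_enhance(mr_slice, clip_limit=2.0, tile=(8, 8)):
    """Plain CLAHE on one MR slice, the baseline behind RACLAHE."""
    img = cv2.normalize(mr_slice, None, 0, 255, cv2.NORM_MINMAX)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    return clahe.apply(img.astype(np.uint8))
```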
A novel beam stopper-based approach for scatter correction in digital planar radiography
X-ray scatter in planar radiography degrades the contrast resolution of the image, thus reducing its diagnostic utility. Antiscatter grids partially block scattered photons at the cost of increasing the dose delivered by two- to four-fold and posing geometrical restrictions that hinder their use for other acquisition settings, such as portable radiography. The few software-based approaches investigated for planar radiography mainly estimate the scatter map from a low-frequency version of the image. We present a novel method for scatter correction in planar imaging based on direct patient measurements. Samples from the shadowed regions of an additional partially obstructed projection acquired with a beam stopper placed between the X-ray source and the patient are used to estimate the scatter map. Evaluation with simulated and real data showed an increase in contrast resolution for both lung and spine and recovery of ground truth values superior to those of three recently proposed methods. Our method avoids the biases of post-processing methods and yields results similar to those for an antiscatter grid while removing geometrical restrictions at around half the radiation dose. It can be used in unconventional imaging techniques, such as portable radiography, where training datasets needed for deep-learning approaches would be very difficult to obtain.
Value of handcrafted and deep radiomic features towards training robust machine learning classifiers for prediction of prostate cancer disease aggressiveness
There is growing evidence that artificial intelligence may be helpful across the entire prostate cancer disease continuum. However, building machine learning algorithms robust to inter- and intra-radiologist segmentation variability is still a challenge. With this goal in mind, several model training approaches were compared: removing unstable features according to the intraclass correlation coefficient (ICC); training independently with features extracted from each radiologist's mask; training with the feature average between both radiologists; extracting radiomic features from the intersection or union of masks; and creating a heterogeneous dataset by randomly selecting one of the radiologists' masks for each patient. The classifier trained with this last resampled dataset presented the lowest generalization error, suggesting that training with heterogeneous data leads to the development of the most robust classifiers. On the contrary, removing features with low ICC resulted in the highest generalization error. The selected radiomics dataset, built from the randomly chosen radiologist masks, was concatenated with deep features extracted from neural networks trained to segment the whole prostate. This new hybrid dataset was then used to train a classifier. The results revealed that, even though the hybrid classifier was less overfitted than the one trained with deep features, it still was unable to outperform the radiomics model.
A diagnostic classification of lung nodules using multiple-scale residual network
Computed tomography (CT) scans have been shown to be an effective way of improving diagnostic efficacy and reducing lung cancer mortality. However, distinguishing benign from malignant nodules in CT imaging remains challenging. This study aims to develop a multiple-scale residual network (MResNet) to automatically and precisely extract the general features of lung nodules and classify lung nodules based on deep learning. The MResNet aggregates the advantages of residual units and a pyramid pooling module (PPM) to learn key features and extract the general feature for lung nodule classification. Specifically, the MResNet uses the ResNet as a backbone network to learn contextual information and discriminate feature representation. Meanwhile, the PPM is used to fuse features under four different scales, including the coarse scale and the fine-grained scale, to obtain more general lung features of the CT image. MResNet had an accuracy of 99.12%, a sensitivity of 98.64%, a specificity of 97.87%, a positive predictive value (PPV) of 99.92%, and a negative predictive value (NPV) of 97.87% in the training set. Additionally, its area under the receiver operating characteristic curve (AUC) was 0.9998 (0.99976-0.99991). MResNet's accuracy, sensitivity, specificity, PPV, NPV, and AUC in the testing set were 85.23%, 92.79%, 72.89%, 84.56%, 86.34%, and 0.9275 (0.91662-0.93833), respectively. The developed MResNet performed exceptionally well in estimating the malignancy risk of pulmonary nodules found on CT. The model has the potential to provide reliable and reproducible malignancy risk scores for clinicians and radiologists, thereby optimizing lung cancer screening management.
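The pyramid pooling module referenced above fuses context at several scales; a PSPNet-style sketch is below, with bin sizes and channel counts chosen for illustration rather than taken from MResNet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Pool at several grid sizes, project, upsample, and concatenate."""
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, in_ch // len(bins), 1, bias=False))
            for b in bins)

    def forward(self, x):
        size = x.shape[2:]
        pooled = [F.interpolate(stage(x), size=size, mode="bilinear",
                                align_corners=False) for stage in self.stages]
        return torch.cat([x, *pooled], dim=1)
```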
Fibroglandular tissue segmentation in breast MRI using vision transformers: a multi-institutional evaluation
Muller-Franzes, G.
Muller-Franzes, F.
Huck, L.
Raaff, V.
Kemmer, E.
Khader, F.
Arasteh, S. T.
Lemainque, T.
Kather, J. N.
Nebelung, S.
Kuhl, C.
Truhn, D.
Sci Rep2023Journal Article, cited 0 times
Duke-Breast-Cancer-MRI
Algorithm Development
Transformer
U-Net
Semi-automatic segmentation
Humans
Retrospective Studies
Magnetic Resonance Imaging (MRI)
Radiography
*Breast Density
Accurate and automatic segmentation of fibroglandular tissue in breast MRI screening is essential for the quantification of breast density and background parenchymal enhancement. In this retrospective study, we developed and evaluated a transformer-based neural network for breast segmentation (TraBS) in multi-institutional MRI data, and compared its performance to the well-established convolutional neural network nnUNet. TraBS and nnUNet were trained and tested on 200 internal and 40 external breast MRI examinations using manual segmentations generated by experienced human readers. Segmentation performance was assessed in terms of the Dice score and the average symmetric surface distance. The Dice score for nnUNet was lower than for TraBS on the internal testset (0.909 +/- 0.069 versus 0.916 +/- 0.067, P < 0.001) and on the external testset (0.824 +/- 0.144 versus 0.864 +/- 0.081, P = 0.004). Moreover, the average symmetric surface distance was higher (= worse) for nnUNet than for TraBS on the internal (0.657 +/- 2.856 versus 0.548 +/- 2.195, P = 0.001) and on the external testset (0.727 +/- 0.620 versus 0.584 +/- 0.413, P = 0.03). Our study demonstrates that transformer-based networks improve the quality of fibroglandular tissue segmentation in breast MRI compared to convolutional-based models like nnUNet. These findings might help to enhance the accuracy of breast density and parenchymal enhancement quantification in breast MRI screening.
Sensitivity of standardised radiomics algorithms to mask generation across different software platforms
Whybra, Philip
Spezi, Emiliano
Scientific Reports2023Journal Article, cited 0 times
Soft-tissue Sarcoma
Radiomics
Texture analysis
PET
MRI
The field of radiomics continues to converge on a standardised approach to image processing and feature extraction. Conventional radiomics requires a segmentation, and certain features can be sensitive to small contour variations. The industry standard for medical image communication stores contours as coordinate points that must be converted to a binary mask before image processing can take place. This study investigates the impact that the process of converting contours to masks can have on radiomic feature calculation. To this end, we used a popular open dataset for radiomics standardisation and compared the impact of masks generated by importing the dataset into four medical imaging software packages. We interfaced our previously standardised radiomics platform with these software packages using their published application programming interfaces to access the image volume, masks and other data needed to calculate features. Additionally, we used super-sampling strategies to systematically evaluate the impact of contour data pre-processing methods on radiomic feature calculation. Finally, we evaluated the effect that using different mask generation approaches could have on patient clustering in a multi-centre radiomics study. The study shows that even when working on the same dataset, mask and feature discrepancies occur depending on the contour-to-mask conversion technique implemented in various medical imaging software. We show that this also affects patient clustering and potentially radiomic-based modelling in multi-centre studies where a mix of mask generation software is used. We provide recommendations to mitigate this issue and facilitate reproducible and reliable radiomics.
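The crux of the discrepancy is how contour coordinate points are rasterised into a binary mask; a minimal version of that conversion (one slice, one contour) is sketched below, and different packages differ precisely in how boundary pixels are included.

```python
import numpy as np
from skimage.draw import polygon

def contour_to_mask(row_coords, col_coords, shape):
    """Rasterise one planar contour (e.g., from DICOM-RT) into a mask."""
    mask = np.zeros(shape, dtype=bool)
    rr, cc = polygon(row_coords, col_coords, shape=shape)
    mask[rr, cc] = True
    return mask
```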
A multimodal radiomic machine learning approach to predict the LCK expression and clinical prognosis in high-grade serous ovarian cancer
Zhan, F.
He, L.
Yu, Y.
Chen, Q.
Guo, Y.
Wang, L.
Sci Rep2023Journal Article, cited 0 times
TCGA-OV
Female
Prognosis
*Nomograms
Tomography, X-Ray Computed/methods
Machine Learning
*Ovarian Neoplasms/diagnostic imaging/genetics
Retrospective Studies
Tumor Microenvironment/genetics
Radiomics
Radiogenomics
We developed and validated a multimodal radiomic machine learning approach to noninvasively predict the expression of lymphocyte cell-specific protein-tyrosine kinase (LCK) expression and clinical prognosis of patients with high-grade serous ovarian cancer (HGSOC). We analyzed gene enrichment using 343 HGSOC cases extracted from The Cancer Genome Atlas. The corresponding biomedical computed tomography images accessed from The Cancer Imaging Archive were used to construct the radiomic signature (Radscore). A radiomic nomogram was built by combining the Radscore and clinical and genetic information based on multimodal analysis. We compared the model performances and clinical practicability via area under the curve (AUC), Kaplan-Meier survival, and decision curve analyses. LCK mRNA expression was associated with the prognosis of HGSOC patients, serving as a significant prognostic marker of the immune response and immune cells infiltration. Six radiomic characteristics were chosen to predict the expression of LCK and overall survival (OS) in HGSOC patients. The logistic regression (LR) radiomic model exhibited slightly better predictive abilities than the support vector machine model, as assessed by comparing combined results. The performance of the LR radiomic model for predicting the level of LCK expression with five-fold cross-validation achieved AUCs of 0.879 and 0.834, respectively, in the training and validation sets. Decision curve analysis at 60 months demonstrated the high clinical utility of our model within thresholds of 0.25 and 0.7. The radiomic nomograms were robust and displayed effective calibration. Abnormally high expression of LCK in HGSOC patients is significantly correlated with the tumor immune microenvironment and can be used as an essential indicator for predicting the prognosis of HGSOC. The multimodal radiomic machine learning approach can capture the heterogeneity of HGSOC, noninvasively predict the expression of LCK, and replace LCK for predictive analysis, providing a new idea for predicting the clinical prognosis of HGSOC and formulating a personalized treatment plan.
Morphological diagnosis of hematologic malignancy using feature fusion-based deep convolutional neural network
Yadav, D. P.
Kumar, D.
Jalal, A. S.
Kumar, A.
Singh, K. U.
Shah, M. A.
Sci Rep2023Journal Article, cited 0 times
AML-Cytomorphology_LMU
Classification
Pathomics
Algorithm Development
Machine Learning
Leukemia
Imaging features
Leukemia is a cancer of white blood cells characterized by immature lymphocytes. Many people die from blood cancers every year, so early detection of these blast cells is necessary. A novel deep convolutional neural network (CNN), 3SNet, that has depth-wise convolution blocks to reduce the computational cost has been developed to aid the diagnosis of leukemia cells. The proposed method includes three inputs to the deep CNN model. These inputs are grayscale images and their corresponding histogram of oriented gradients (HOG) and local binary pattern (LBP) images. The HOG image captures the local shape, and the LBP image describes the leukemia cell's texture pattern. The suggested model was trained and tested with images from the AML-Cytomorphology_LMU dataset. The mean average precision (MAP) for cells with fewer than 100 images in the dataset was 84%, whereas for cells with more than 100 images in the dataset it was 93.83%. In addition, the ROC curve area for these cells is more than 98%. This confirms that the proposed model could be an adjunct tool to provide a second opinion to a doctor.
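The three-channel input scheme (grayscale plus HOG plus LBP) can be sketched with scikit-image; the parameter values below are common defaults, not necessarily those of 3SNet.

```python
from skimage.feature import hog, local_binary_pattern

def three_input_channels(gray):
    """Build the grayscale, HOG, and LBP companions of one cell image."""
    _, hog_img = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2), visualize=True)
    lbp_img = local_binary_pattern(gray, P=8, R=1, method="uniform")
    return gray, hog_img, lbp_img
```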
Comparing effectiveness of image perturbation and test retest imaging in improving radiomic model reliability
Zhang, J.
Teng, X.
Zhang, X.
Lam, S. K.
Lin, Z.
Liang, Y.
Yu, H.
Siu, S. W. K.
Chang, A. T. Y.
Zhang, H.
Kong, F. M.
Yang, R.
Cai, J.
Sci Rep2023Journal Article, cited 0 times
ACRIN 6698
ACRIN 6698/I-SPY2 Breast DWI
BMMR2 Challenge
Radiomics
Humans
Female
*Image Processing, Computer-Assisted/methods
Reproducibility of Results
Diffusion Magnetic Resonance Imaging
*Breast Neoplasms/diagnostic imaging
Image perturbation is a promising technique to assess radiomic feature repeatability, but whether it can achieve the same effect as test-retest imaging on model reliability is unknown. This study aimed to compare radiomic model reliability based on repeatable features determined by the two methods using four different classifiers. A 191-patient public breast cancer dataset with 71 test-retest scans was used, with pre-determined 117 training and 74 testing samples. We collected apparent diffusion coefficient images and manual tumor segmentations for radiomic feature extraction. Random translations, rotations, and contour randomizations were performed on the training images, and the intra-class correlation coefficient (ICC) was used to select highly repeatable features. We evaluated model reliability in terms of both internal generalizability and robustness, which were quantified by the training and testing AUC and prediction ICC. Higher testing performance was found at higher feature ICC thresholds, but it dropped significantly at ICC = 0.95 for the test-retest model. Similar optimal reliability can be achieved with testing AUC = 0.7-0.8 and prediction ICC > 0.9 at the ICC threshold of 0.9. It is recommended to include feature repeatability analysis using image perturbation in any radiomic study when test-retest is not feasible, but care should be taken when deciding the optimal feature repeatability criteria.
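One perturbation draw of the kind described (random translation plus rotation of the image/mask pair) might look like the sketch below; the ranges and interpolation orders are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate, shift

rng = np.random.default_rng(0)

def perturb(image, mask, max_shift=2.0, max_angle=5.0):
    """Randomly translate and rotate an image and its segmentation mask."""
    d = rng.uniform(-max_shift, max_shift, size=image.ndim)
    angle = rng.uniform(-max_angle, max_angle)
    img_p = rotate(shift(image, d, order=3), angle, reshape=False, order=3)
    msk_p = rotate(shift(mask.astype(float), d, order=0), angle,
                   reshape=False, order=0) > 0.5
    return img_p, msk_p
```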
Improving the classification of veterinary thoracic radiographs through inter-species and inter-pathology self-supervised pre-training of deep learning models
The analysis of veterinary radiographic imaging data is an essential step in the diagnosis of many thoracic lesions. Given the limited time that physicians can devote to a single patient, it would be valuable to implement an automated system to help clinicians make faster but still accurate diagnoses. Currently, most of such systems are based on supervised deep learning approaches. However, the problem with these solutions is that they need a large database of labeled data. Access to such data is often limited, as it requires a great investment of both time and money. Therefore, in this work we present a solution that allows higher classification scores to be obtained using knowledge transfer from inter-species and inter-pathology self-supervised learning methods. Before training the network for classification, pretraining of the model was performed using self-supervised learning approaches on publicly available unlabeled radiographic data of human and dog images, which allowed substantially increasing the number of images for this phase. The self-supervised learning approaches included the Beta Variational Autoencoder, the Soft-Introspective Variational Autoencoder, and a Simple Framework for Contrastive Learning of Visual Representations. After the initial pretraining, fine-tuning was performed for the collected veterinary dataset using 20% of the available data. Next, a latent space exploration was performed for each model after which the encoding part of the model was fine-tuned again, this time in a supervised manner for classification. Simple Framework for Contrastive Learning of Visual Representations proved to be the most beneficial pretraining method. Therefore, it was for this method that experiments with various fine-tuning methods were carried out. We achieved a mean ROC AUC score of 0.77 and 0.66, respectively, for the laterolateral and dorsoventral projection datasets. The results show significant improvement compared to using the model without any pretraining approach.
Combination of tumor asphericity and an extracellular matrix-related prognostic gene signature in non-small cell lung cancer patients
Zschaeck, S.
Klinger, B.
van den Hoff, J.
Cegla, P.
Apostolova, I.
Kreissl, M. C.
Cholewinski, W.
Kukuk, E.
Strobel, H.
Amthauer, H.
Bluthgen, N.
Zips, D.
Hofheinz, F.
Sci Rep2023Journal Article, cited 0 times
NSCLC Radiogenomics
TCGA-LUSC
TCGA-LUAD
CPTAC-LSCC
CPTAC-LUAD
Humans
PET/CT
*Carcinoma, Non-Small-Cell Lung/pathology
Prognosis
*Lung Neoplasms/pathology
Retrospective Studies
Fluorodeoxyglucose F18/metabolism
Tomography, X-Ray Computed
Precision Medicine
Positron Emission Tomography Computed Tomography
One important aim of precision oncology is a personalized treatment of patients. This can be achieved by various biomarkers; especially imaging parameters and gene expression signatures are commonly used. So far, combination approaches are sparse. The aim of the study was to independently validate the prognostic value of the novel positron emission tomography (PET) parameter tumor asphericity (ASP) in non-small cell lung cancer (NSCLC) patients and to investigate associations between published gene expression profiles and ASP. This was a retrospective evaluation of PET imaging and gene expression data from three public databases and two institutional datasets. The whole cohort comprised 253 NSCLC patients, all treated with curative-intent surgery. Clinical parameters, standard PET parameters, and ASP were evaluated in all patients. Additional gene expression data were available for 120 patients. Univariate Cox regression and Kaplan-Meier analyses were performed for the primary endpoint progression-free survival (PFS) and additional endpoints. Furthermore, multivariate Cox regression testing was performed including clinically significant parameters, ASP, and the extracellular matrix-related prognostic gene signature (EPPI). In the whole cohort, a significant association with PFS was observed for ASP (p < 0.001) and EPPI (p = 0.012). Upon multivariate testing, EPPI remained significantly associated with PFS (p = 0.018) in the subgroup of patients with additional gene expression data, while ASP was significantly associated with PFS in the whole cohort (p = 0.012). In stage II patients, ASP was significantly associated with PFS (p = 0.009), and a previously published cutoff value for ASP (19.5%) was successfully validated (p = 0.008). In patients with additional gene expression data, EPPI showed a significant association with PFS, too (p = 0.033). The exploratory combination of ASP and EPPI showed that the combinatory approach has the potential to further improve patient stratification compared to the use of only one parameter. We report the first successful validation of EPPI and ASP in stage II NSCLC patients. The combination of both parameters seems to be a very promising approach for the improvement of risk stratification in a group of patients with an urgent need for a more personalized treatment approach.
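For readers unfamiliar with the parameter, tumour asphericity quantifies how far the lesion surface deviates from a sphere of equal volume; assuming the commonly published definition ASP = (S^3 / (36*pi*V^2))^(1/3) - 1 (zero for a perfect sphere), a direct implementation is:

```python
import math

def asphericity(surface_area, volume):
    """ASP from a mesh-derived surface area S and volume V; multiply by 100
    to report in percent, as with the 19.5% cutoff above."""
    return ((surface_area ** 3) / (36.0 * math.pi * volume ** 2)) ** (1 / 3) - 1
```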
Addressing image misalignments in multi-parametric prostate MRI for enhanced computer-aided diagnosis of prostate cancer
Kovacs, B.
Netzer, N.
Baumgartner, M.
Schrader, A.
Isensee, F.
Weisser, C.
Wolf, I.
Gortz, M.
Jaeger, P. F.
Schutz, V.
Floca, R.
Gnirs, R.
Stenzinger, A.
Hohenfellner, M.
Schlemmer, H. P.
Bonekamp, D.
Maier-Hein, K. H.
Sci Rep2023Journal Article, cited 0 times
PROSTATEx
Image Registration
Algorithm Development
Computer Aided Diagnosis (CADx)
Male
Humans
*Prostate/diagnostic imaging/pathology
Magnetic Resonance Imaging/methods
Diagnosis, Computer-Assisted/methods
*Prostatic Neoplasms/diagnostic imaging/pathology
Computers
Prostate cancer (PCa) diagnosis on multi-parametric magnetic resonance images (MRI) requires radiologists with a high level of expertise. Misalignments between the MRI sequences can be caused by patient movement, elastic soft-tissue deformations, and imaging artifacts, which further complicate image interpretation for radiologists. Recently, computer-aided diagnosis (CAD) tools have demonstrated potential for PCa diagnosis, typically relying on complex co-registration of the input modalities. However, there is no consensus among research groups on whether CAD systems profit from using registration. Furthermore, alternative strategies to handle multi-modal misalignments have not been explored so far. Our study introduces and compares different strategies to cope with image misalignments and evaluates them with regard to their direct effect on the diagnostic accuracy for PCa. In addition to established registration algorithms, we propose 'misalignment augmentation' as a concept to increase CAD robustness. As the results demonstrate, misalignment augmentations can not only compensate for a complete lack of registration, but, if used in conjunction with registration, also improve the overall performance on an independent test set.
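'Misalignment augmentation' can be pictured as independently jittering the secondary sequences relative to the anatomical reference during training; the sketch below assumes three named sequences, millimetre ranges, and voxel spacing purely for illustration.

```python
import numpy as np
from scipy.ndimage import shift

rng = np.random.default_rng(0)

def misalign_augment(t2w, adc, dwi, max_mm=3.0, spacing=(3.0, 0.5, 0.5)):
    """Leave T2w fixed; shift ADC and DWI by independent random offsets so
    the network learns tolerance to residual registration errors."""
    def jitter(volume):
        offset_vox = rng.uniform(-max_mm, max_mm, size=3) / np.asarray(spacing)
        return shift(volume, offset_vox, order=1, mode="nearest")
    return t2w, jitter(adc), jitter(dwi)
```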
GPU-accelerated lung CT segmentation based on level sets and texture analysis
Reska, D.
Kretowski, M.
Sci Rep2024Journal Article, cited 0 times
LCTSC
Algorithm Development
Semi-automatic segmentation
*Tomography, X-Ray Computed/methods
*Algorithms
Lung/diagnostic imaging
Image Processing, Computer-Assisted/methods
This paper presents a novel semi-automatic method for lung segmentation in thoracic CT datasets. The fully three-dimensional algorithm is based on a level set representation of an active surface and integrates texture features to improve its robustness. The method's performance is enhanced by graphics processing unit (GPU) acceleration. The segmentation process starts with a manual initialisation of 2D contours on a few representative slices of the analysed volume. Next, the starting regions for the active surface are generated according to the probability maps of texture features. The active surface is then evolved to give the final segmentation result. The current implementation employs features based on grey-level co-occurrence matrices and Gabor filters. The algorithm was evaluated on real medical imaging data from the LCTSC 2017 challenge. The results were also compared with the outcomes of other segmentation methods. The proposed approach provided high segmentation accuracy while offering very competitive performance.
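A texture cue of the kind fed into those probability maps is, for example, GLCM contrast over a local patch; a scikit-image sketch (patch size, offsets and the choice of property are illustrative):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_contrast(patch_8bit):
    """Grey-level co-occurrence contrast of one uint8 CT patch."""
    glcm = graycomatrix(patch_8bit, distances=[1],
                        angles=[0.0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return graycoprops(glcm, "contrast").mean()
```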
A publicly available deep learning model and dataset for segmentation of breast, fibroglandular tissue, and vessels in breast MRI
Breast density, or the amount of fibroglandular tissue (FGT) relative to the overall breast volume, increases the risk of developing breast cancer. Although previous studies have utilized deep learning to assess breast density, the limited public availability of data and quantitative tools hinders the development of better assessment tools. Our objective was to (1) create and share a large dataset of pixel-wise annotations according to well-defined criteria, and (2) develop, evaluate, and share an automated segmentation method for breast, FGT, and blood vessels using convolutional neural networks. We used the Duke Breast Cancer MRI dataset to randomly select 100 MRI studies and manually annotated the breast, FGT, and blood vessels for each study. Model performance was evaluated using the Dice similarity coefficient (DSC). The model achieved DSC values of 0.92 for breast, 0.86 for FGT, and 0.65 for blood vessels on the test set. The correlation between our model's predicted breast density and the manually generated masks was 0.95. The correlation between the predicted breast density and qualitative radiologist assessment was 0.75. Our automated models can accurately segment breast, FGT, and blood vessels using pre-contrast breast MRI data. The data and the models were made publicly available.
Fully-automated CT derived body composition analysis reveals sarcopenia in functioning adrenocortical carcinomas
Determination of body composition (the relative distribution of fat, muscle, and bone) has been used effectively to assess the risk of progression and overall clinical outcomes in different malignancies. Sarcopenia (loss of muscle mass) is especially associated with poor clinical outcomes in cancer. However, estimation of muscle mass through CT scans has been a cumbersome, manually intensive process requiring accurate contouring through dedicated personnel hours. Recently, fully automated technologies that can determine body composition in minutes have been developed and shown to be highly accurate in determining muscle, bone, and fat mass. We employed such a fully automated technology and analyzed images from a publicly available dataset in The Cancer Imaging Archive (TCIA) and from a tertiary academic center. The results show that adrenocortical carcinomas (ACC) exhibit relative sarcopenia compared to benign adrenal lesions. In addition, functional ACCs show accelerated sarcopenia compared to non-functional ACCs. Further longitudinal research might shed more light on the relationship between body component distribution and ACC prognosis, which will help us incorporate more nutritional strategies in cancer therapy.
Explainable prediction model for the human papillomavirus status in patients with oropharyngeal squamous cell carcinoma using CNN on CT images
Squamous Cell Carcinoma of Head and Neck/virology/diagnostic imaging/pathology
Tumor Burden
Human Papillomavirus Viruses
Convolutional neural network
Explainable artificial intelligence
Grad-CAM
Human papillomavirus
Oropharyngeal squamous cell carcinoma
Several studies have emphasised how positive and negative human papillomavirus (HPV+ and HPV-, respectively) oropharyngeal squamous cell carcinoma (OPSCC) have distinct molecular profiles, tumor characteristics, and disease outcomes. Different radiomics-based prediction models have been proposed, also using innovative techniques such as Convolutional Neural Networks (CNNs). Although some of these models reached encouraging predictive performances, evidence explaining the role of radiomic features in achieving a specific outcome is scarce. In this paper, we propose some preliminary results related to an explainable CNN-based model to predict HPV status in OPSCC patients. We extracted the Gross Tumor Volume (GTV) of pre-treatment CT images related to 499 patients (356 HPV+ and 143 HPV-) included in the OPC-Radiomics public dataset to train an end-to-end Inception-V3 CNN architecture. We also collected a multicentric dataset consisting of 92 patients (43 HPV+, 49 HPV-), which was employed as an independent test set. Finally, we applied the Gradient-weighted Class Activation Mapping (Grad-CAM) technique to highlight the most informative areas with respect to the predicted outcome. The proposed model reached an AUC value of 73.50% on the independent test. As a result of the Grad-CAM algorithm, the most informative areas related to the correctly classified HPV+ patients were located in the intratumoral area; conversely, for the correctly classified HPV- patients, the most important areas referred to the tumor edges. Finally, since the proposed model supplements the classification output with a visualization of the areas most relevant to each prediction, it could contribute to increased confidence in using computer-based predictive models in actual clinical practice.
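Grad-CAM itself is model-agnostic: the target class score is backpropagated to a chosen convolutional layer, the gradients are spatially pooled into channel weights, and the weighted activations give the heatmap. A minimal PyTorch sketch (the layer choice and two-dimensional layout are assumptions):

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, x, class_idx):
    """Class activation map for input batch x and one class index."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    score = model(x)[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove()
    h2.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                         align_corners=False)
```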
A deep learning approach for ovarian cancer detection and classification based on fuzzy deep learning
El-Latif, Eman I Abd
El-Dosuky, Mohamed
Darwish, Ashraf
Hassanien, Aboul Ella
Scientific Reports2024Journal Article, cited 0 times
Website
Ovarian Bevacizumab Response
An open codebase for enhancing transparency in deep learning-based breast cancer diagnosis utilizing CBIS-DDSM data
Accessible mammography datasets and innovative machine learning techniques are at the forefront of computer-aided breast cancer diagnosis. However, the opacity surrounding private datasets and the unclear methodology behind the selection of subset images from publicly available databases for model training and testing, coupled with the arbitrary incompleteness or inaccessibility of code, markedly intensify the obstacles to replicating and validating a model's efficacy. These challenges, in turn, erect barriers for subsequent researchers striving to learn and advance this field. To address these limitations, we provide a pilot codebase covering the entire process from image preprocessing to model development and the evaluation pipeline, utilizing the publicly available Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) mass subset, including both full images and regions of interest (ROIs). We have identified that increasing the input size could improve the detection accuracy of malignant cases within each set of models. Collectively, our efforts hold promise in accelerating global software development for breast cancer diagnosis by leveraging our codebase and structure, while also integrating other advancements in the field.
Hu similarity coefficient: a clinically oriented metric to evaluate contour accuracy in radiation therapy
To propose a clinically oriented quantitative metric, the Hu similarity coefficient (HSC), to evaluate contour quality, gauge the performance of auto-contouring methods, and aid effective allocation of clinical resources. The HSC is defined as the ratio of the number of boundary points of the initial contour that do not require modification over the number of boundary points of the final adjusted contour. To demonstrate the clinical utility of the HSC in contour evaluation, we used publicly available pelvic CT data from The Cancer Imaging Archive. The bladder was selected as the organ of interest. It was contoured by a certified medical dosimetrist and reviewed by a certified medical physicist. This contour served as the ground truth contour. From this contour, we simulated two contour sets. The first set had the same Dice similarity coefficient (DSC) but different HSCs, whereas the second set kept a constant HSC while exhibiting different DSCs. Four individuals were asked to adjust the simulated contours until they met clinical standards. The corresponding contour modification times were recorded and normalized by each individual's manual contouring time from scratch. The normalized contour modification time was correlated with the HSC and DSC to evaluate their suitability as quantitative metrics for assessing contour quality. The HSC maintained a strong correlation with the normalized contour modification time when both sets of simulated contours were included in the analysis. The correlation between the DSC and normalized contour modification time, however, was weak. Compared to the DSC, the HSC is more suitable for evaluating contour quality. We demonstrated that the HSC correlated well with the average normalized contour modification time. Clinically, contour modification time is the most relevant factor in allocating clinical resources. Therefore, the HSC is better suited than the DSC to assess contour quality from a clinical perspective.
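Reading the definition above directly, the HSC counts how many boundary points of the initial contour survive unmodified into the final contour, normalised by the final contour's point count; a point-matching sketch (the distance tolerance is an assumption):

```python
import numpy as np

def hu_similarity_coefficient(initial_pts, final_pts, tol=1e-6):
    """initial_pts, final_pts: (N, 2) arrays of contour boundary points."""
    initial = np.asarray(initial_pts, dtype=float)[:, None, :]
    final = np.asarray(final_pts, dtype=float)[None, :, :]
    unmodified = (np.linalg.norm(initial - final, axis=2) < tol).any(axis=1)
    return unmodified.sum() / len(final_pts)
```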
Multi-faceted computational assessment of risk and progression in oligodendroglioma implicates NOTCH and PI3K pathways
Halani, Sameer H
Yousefi, Safoora
Vega, Jose Velazquez
Rossi, Michael R
Zhao, Zheng
Amrollahi, Fatemeh
Holder, Chad A
Baxter-Stoltzfus, Amelia
Eschbacher, Jennifer
Griffith, Brent
NPJ precision oncology2018Journal Article, cited 0 times
Website
TCGA-LGG
oligodendroglioma
NOTCH1
PIK3
Deep learning for end-to-end kidney cancer diagnosis on multi-phase abdominal computed tomography
Uhm, K. H.
Jung, S. W.
Choi, M. H.
Shin, H. K.
Yoo, J. I.
Oh, S. W.
Kim, J. Y.
Kim, H. G.
Lee, Y. J.
Youn, S. Y.
Hong, S. H.
Ko, S. J.
NPJ Precis Oncol2021Journal Article, cited 0 times
Website
TCGA-KIRC
TCGA-KIRP
TCGA-KICH
KIDNEY
Classification
In 2020, it is estimated that 73,750 kidney cancer cases were diagnosed, and 14,830 people died from the disease in the United States. Preoperative multi-phase abdominal computed tomography (CT) is often used for detecting lesions and classifying histologic subtypes of renal tumors to avoid unnecessary biopsy or surgery. However, there exists inter-observer variability due to subtle differences in the imaging features of tumor subtypes, which makes decisions on treatment challenging. While deep learning has recently been applied to the automated diagnosis of renal tumors, classification of a wide range of subtype classes has not yet been sufficiently studied. In this paper, we propose an end-to-end deep learning model for the differential diagnosis of five major histologic subtypes of renal tumors, including both benign and malignant tumors, on multi-phase CT. Our model is a unified framework that simultaneously identifies lesions and classifies subtypes for the diagnosis without manual intervention. We trained and tested the model using CT data from 308 patients who underwent nephrectomy for renal tumors. The model achieved an area under the curve (AUC) of 0.889, and outperformed radiologists for most subtypes. We further validated the model on an independent dataset of 184 patients from The Cancer Imaging Archive (TCIA). The AUC for this dataset was 0.855, and the model performed comparably to the radiologists. These results indicate that our model can achieve similar or better diagnostic performance than radiologists in differentiating a wide range of renal tumors on multi-phase CT.
Multimodal analysis suggests differential immuno-metabolic crosstalk in lung squamous cell carcinoma and adenocarcinoma
Leitner, B. P.
Givechian, K. B.
Ospanova, S.
Beisenbayeva, A.
Politi, K.
Perry, R. J.
NPJ Precis Oncol2022Journal Article, cited 0 times
Website
TCGA-LUAD
TCGA-LUSC
LUNG
Radiomics
Radiogenomics
Immunometabolism within the tumor microenvironment is an appealing target for precision therapy approaches in lung cancer. Interestingly, obesity confers an improved response to immune checkpoint inhibition in non-small cell lung cancer (NSCLC), suggesting intriguing relationships between systemic metabolism and the immunometabolic environment in lung tumors. We hypothesized that visceral fat and (18)F-Fluorodeoxyglucose uptake influenced the tumor immunometabolic environment and that these bidirectional relationships differ in NSCLC subtypes, lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). By integrating (18)F-FDG PET/CT imaging, bulk and single-cell RNA-sequencing, and histology, we observed that LUSC had a greater dependence on glucose than LUAD. In LUAD tumors with high glucose uptake, glutaminase was downregulated, suggesting a tradeoff between glucose and glutamine metabolism, while in LUSC tumors with high glucose uptake, genes related to fatty acid and amino acid metabolism were also increased. We found that tumor-infiltrating T cells had the highest expression of glutaminase, ribosomal protein 37, and cystathionine gamma-lyase in NSCLC, highlighting the metabolic flexibility of this cell type. Further, we demonstrate that visceral adiposity, but not body mass index (BMI), was positively associated with tumor glucose uptake in LUAD and that patients with high BMI had favorable prognostic transcriptional profiles, while tumors of patients with high visceral fat had poor prognostic gene expression. We posit that metabolic adjunct therapy may be more successful in LUSC rather than LUAD due to LUAD's metabolic flexibility and that visceral adiposity, not BMI alone, should be considered when developing precision medicine approaches for the treatment of NSCLC.
Gross tumour volume radiomics for prognostication of recurrence & death following radical radiotherapy for NSCLC
Hindocha, S.
Charlton, T. G.
Linton-Reid, K.
Hunter, B.
Chan, C.
Ahmed, M.
Greenlay, E. J.
Orton, M.
Bunce, C.
Lunn, J.
Doran, S. J.
Ahmad, S.
McDonald, F.
Locke, I.
Power, D.
Blackledge, M.
Lee, R. W.
Aboagye, E. O.
NPJ Precis Oncol2022Journal Article, cited 0 times
NSCLC-Radiomics
Radiomics
Non-Small Cell Lung Cancer (NSCLC)
Classification
Recurrence occurs in up to 36% of patients treated with curative-intent radiotherapy for NSCLC. Identifying patients at higher risk of recurrence for more intensive surveillance may facilitate the earlier introduction of the next line of treatment. We aimed to use radiotherapy planning CT scans to develop radiomic classification models that predict overall survival (OS), recurrence-free survival (RFS) and recurrence two years post-treatment for risk-stratification. A retrospective multi-centre study of >900 patients receiving curative-intent radiotherapy for stage I-III NSCLC was undertaken. Models using radiomic and/or clinical features were developed, compared with 10-fold cross-validation and an external test set, and benchmarked against TNM-stage. Respective validation and test set AUCs (with 95% confidence intervals) for the radiomic-only models were: (1) OS: 0.712 (0.592-0.832) and 0.685 (0.585-0.784), (2) RFS: 0.825 (0.733-0.916) and 0.750 (0.665-0.835), (3) Recurrence: 0.678 (0.554-0.801) and 0.673 (0.577-0.77). For the combined models: (1) OS: 0.702 (0.583-0.822) and 0.683 (0.586-0.78), (2) RFS: 0.805 (0.707-0.903) and 0.755 (0.672-0.838), (3) Recurrence: 0.637 (0.51-0.765) and 0.738 (0.649-0.826). Kaplan-Meier analyses demonstrate OS and RFS differences of >300 and >400 days, respectively, between low- and high-risk groups. We have developed validated and externally tested radiomic-based prediction models. Such models could be integrated into the routine radiotherapy workflow, thus informing a personalised surveillance strategy at the point of treatment. Our work lays the foundations for future prospective clinical trials for quantitative personalised risk-stratification for surveillance following curative-intent radiotherapy for NSCLC.
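The modelling loop summarized above (radiomic features, 10-fold cross-validation, AUC reporting) can be outlined in a few lines of scikit-learn. In the sketch below, the feature matrix, endpoint and model choice are placeholders for illustration, not the study's actual data or pipeline.

```python
# Minimal sketch of a cross-validated radiomic classifier, assuming a
# pre-extracted feature matrix X (patients x radiomic features) and a
# binary endpoint y (e.g. recurrence within two years). Placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))       # stand-in for radiomic features
y = rng.integers(0, 2, size=300)     # stand-in for the binary endpoint

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

aucs = []
for train_idx, test_idx in cv.split(X, y):
    model.fit(X[train_idx], y[train_idx])
    scores = model.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"10-fold AUC: {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```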
An interpretable machine learning system for colorectal cancer diagnosis from pathology slides
Neto, P. C.
Montezuma, D.
Oliveira, S. P.
Oliveira, D.
Fraga, J.
Monteiro, A.
Monteiro, J.
Ribeiro, L.
Goncalves, S.
Reinhard, S.
Zlobec, I.
Pinto, I. M.
Cardoso, J. S.
NPJ Precis Oncol2024Journal Article, cited 0 times
Website
TCGA-COAD
TCGA-READ
Whole Slide Imaging (WSI)
Pathomics
Computer Aided Diagnosis (CADx)
Classification
Supervised deep learning
Interpretability
A Correction to this paper was published on 03 April 2024: https://doi.org/10.1038/s41698-024-00581-2. Considering the profound transformation affecting pathology practice, we aimed to develop a scalable artificial intelligence (AI) system to diagnose colorectal cancer from whole-slide images (WSI). For this, we propose a deep learning (DL) system that learns from weak labels, a sampling strategy that reduces the number of training samples by a factor of six without compromising performance, an approach to leverage a small subset of fully annotated samples, and a prototype with explainable predictions, active learning features and parallelisation. Noting some problems in the literature, this study is conducted with one of the largest WSI colorectal datasets, comprising approximately 10,500 WSIs. Of these samples, 900 are testing samples. Furthermore, the robustness of the proposed method is assessed with two additional external datasets (TCGA and PAIP) and a dataset of samples collected directly from the proposed prototype. Our proposed method predicts, for the patch-based tiles, a class based on the severity of the dysplasia and uses that information to classify the whole slide. It is trained with an interpretable mixed-supervision scheme to leverage the domain knowledge introduced by pathologists through spatial annotations. The mixed-supervision scheme allowed for an intelligent sampling strategy effectively evaluated in several different scenarios without compromising the performance. On the internal dataset, the method shows an accuracy of 93.44% and a sensitivity between positive (low-grade and high-grade dysplasia) and non-neoplastic samples of 0.996. Performance on the external test samples varied, with TCGA being the most challenging dataset at an overall accuracy of 84.91% and a sensitivity of 0.996.
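The slide-level decision rule described above, in which each tile is graded by dysplasia severity and the most severe confident tile determines the slide label, can be illustrated with a short sketch; the class names, ordering and confidence threshold are assumptions for illustration rather than the authors' exact rule.

```python
# Hypothetical patch-to-slide aggregation: tiles are graded on an ordinal
# severity scale and the slide inherits the most severe confident grade.
from typing import List

# Assumed ordinal classes, mildest to most severe (illustrative only).
CLASSES = ["non_neoplastic", "low_grade_dysplasia", "high_grade_dysplasia"]

def classify_slide(tile_probs: List[List[float]], threshold: float = 0.5) -> str:
    """tile_probs: per-tile class probabilities, one row per tile."""
    worst = 0
    for probs in tile_probs:
        k = max(range(len(CLASSES)), key=lambda i: probs[i])
        if probs[k] >= threshold:
            worst = max(worst, k)   # keep the most severe confident tile
    return CLASSES[worst]

print(classify_slide([[0.9, 0.1, 0.0], [0.2, 0.1, 0.7]]))  # high_grade_dysplasia
```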
Integrating histopathology and transcriptomics for spatial tumor microenvironment profiling in a melanoma case study
Lapuente-Santana, O.
Kant, J.
Eduati, F.
NPJ Precis Oncol2024Journal Article, cited 0 times
Website
CPTAC-CM
TIL-WSI-TCGA
Local structures formed by cells in the tumor microenvironment (TME) play an important role in tumor development and treatment response. This study introduces SPoTLIghT, a computational framework providing a quantitative description of the tumor architecture from hematoxylin and eosin (H&E) slides. We trained a weakly supervised machine learning model on melanoma patients linking tile-level imaging features extracted from H&E slides to sample-level cell type quantifications derived from RNA-sequencing data. Using this model, SPoTLIghT provides spatial cellular maps for any H&E image and converts them into graphs to derive 96 interpretable features capturing TME cellular organization. We show how SPoTLIghT's spatial features can distinguish microenvironment subtypes and reveal nuanced immune infiltration structures not apparent in molecular data alone. Finally, we use SPoTLIghT to effectively predict patients' prognosis in an independent melanoma cohort. SPoTLIghT enhances computational histopathology by providing a quantitative and interpretable characterization of the spatial contexture of tumors.
Eliminating biasing signals in lung cancer images for prognosis predictions with deep learning.
van Amsterdam, W. A. C.
Verhoeff, J. J. C.
de Jong, P. A.
Leiner, T.
Eijkemans, M. J. C.
NPJ Digit Med2019Journal Article, cited 0 times
Website
LIDC-IDRI
Computed tomography (CT)
LUNG
Analysis Results
Deep learning has shown remarkable results for image analysis and is expected to aid individual treatment decisions in health care. Treatment recommendations are predictions with an inherently causal interpretation. To use deep learning for these applications in the setting of observational data, deep learning methods must be made compatible with the required causal assumptions. We present a scenario with real-world medical images (CT scans of lung cancer) and simulated outcome data. Through the data simulation scheme, the images contain two distinct factors of variation that are associated with survival, but represent a collider (tumor size) and a prognostic factor (tumor heterogeneity), respectively. If a deep network used all the information available in the image to predict survival, it would condition on the collider and thereby introduce bias into the estimation of the treatment effect. We show that when this collider can be quantified, unbiased individual prognosis predictions are attainable with deep learning. This is achieved by (1) setting a dual task for the network to predict both the outcome and the collider and (2) enforcing a form of linear independence of the activation distributions of the last layer. Our method provides an example of combining deep learning and structural causal models to achieve unbiased individual prognosis predictions. Extensions of machine learning methods for applications to causal questions are required to attain the long-standing goal of personalized medicine supported by artificial intelligence.
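A minimal sketch of the two ingredients named in the abstract, a dual prediction task (outcome plus collider) and a penalty encouraging linear independence between the last-layer activation blocks, is shown below. The toy encoder, feature sizes and the cross-covariance form of the penalty are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch: dual-task network predicting outcome and collider from an image,
# with a cross-covariance penalty pushing the two heads' last-layer
# activations toward linear independence. Shapes/penalty are assumptions.
import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
        self.head_outcome = nn.Linear(128, dim)   # features for survival
        self.head_collider = nn.Linear(128, dim)  # features for tumor size
        self.out_outcome = nn.Linear(dim, 1)
        self.out_collider = nn.Linear(dim, 1)

    def forward(self, x):
        h = self.encoder(x)
        f_o, f_c = self.head_outcome(h), self.head_collider(h)
        return self.out_outcome(f_o), self.out_collider(f_c), f_o, f_c

def cross_cov_penalty(f_o, f_c):
    # Mean squared cross-covariance between the two activation blocks.
    f_o = f_o - f_o.mean(0)
    f_c = f_c - f_c.mean(0)
    c = (f_o.T @ f_c) / (f_o.shape[0] - 1)
    return (c ** 2).mean()

net = DualTaskNet()
x = torch.randn(8, 1, 64, 64)                       # toy image batch
y_surv, y_size = torch.randn(8, 1), torch.randn(8, 1)
pred_o, pred_c, f_o, f_c = net(x)
loss = (nn.functional.mse_loss(pred_o, y_surv)
        + nn.functional.mse_loss(pred_c, y_size)
        + 0.1 * cross_cov_penalty(f_o, f_c))
loss.backward()
```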
Fast automated detection of COVID-19 from medical images using convolutional neural networks
Liang, Shuang
Liu, Huixiang
Gu, Yu
Guo, Xiuhua
Li, Hongjun
Li, Li
Wu, Zhiyuan
Liu, Mengyang
Tao, Lixin
Communications Biology2021Journal Article, cited 0 times
Website
LIDC
LUNA16 Challenge
CoViD-19
Lung
Radiomics-guided deep neural networks stratify lung adenocarcinoma prognosis from CT scans
Cho, Hwan-ho
Lee, Ho Yun
Kim, Eunjin
Lee, Geewon
Kim, Jonghoon
Kwon, Junmo
Park, Hyunjin
Communications Biology2021Journal Article, cited 7 times
Website
NSCLC Radiogenomics
Radiomics
Lung Cancer
A multi-encoder variational autoencoder controls multiple transformational features in single-cell image analysis
Ternes, L.
Dane, M.
Gross, S.
Labrie, M.
Mills, G.
Gray, J.
Heiser, L.
Chang, Y. H.
Commun Biol2022Journal Article, cited 0 times
Website
CRC_FFPE-CODEX_CellNeighs
Algorithm Development
Digital pathology
Image Processing, Computer-Assisted
Single-Cell Analysis
Image-based cell phenotyping relies on quantitative measurements as encoded representations of cells; however, defining suitable representations that capture complex imaging features is challenged by the lack of robust methods to segment cells, identify subcellular compartments, and extract relevant features. Variational autoencoder (VAE) approaches produce encouraging results by mapping an image to a representative descriptor, and outperform classical hand-crafted features for morphology, intensity, and texture at differentiating data. Although VAEs show promising results for capturing morphological and organizational features in tissue, single-cell image analyses based on VAEs often fail to identify biologically informative features due to uninformative technical variation. Here we propose a multi-encoder VAE (ME-VAE) for single-cell image analysis that uses transformed images as a self-supervised signal to extract transform-invariant, biologically meaningful features, including emergent features not obvious from prior knowledge. We show that the proposed architecture makes distinct cell populations more separable than traditional and recent extensions of VAE architectures and intensity measurements, by enhancing phenotypic differences between cells and by improving correlations to other analytic modalities. Better feature extraction and image analysis methods enabled by the ME-VAE will advance our understanding of complex cell biology and enable discoveries previously hidden behind image complexity, ultimately improving medical outcomes and drug discovery.
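The multi-encoder idea, one encoder per transformed view of the same cell image feeding a shared decoder so that transform-specific variation is absorbed by the dedicated encoders, can be sketched as follows; the two-view setup, layer sizes and loss weighting are assumptions for illustration.

```python
# Minimal two-encoder VAE sketch: each encoder sees a different transform
# of the same cell image; a shared decoder reconstructs the original.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, d_in=784, d_z=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, d_z), nn.Linear(256, d_z)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class MEVAE(nn.Module):
    def __init__(self, d_in=784, d_z=16):
        super().__init__()
        self.enc_a, self.enc_b = Encoder(d_in, d_z), Encoder(d_in, d_z)
        self.dec = nn.Sequential(nn.Linear(2 * d_z, 256), nn.ReLU(),
                                 nn.Linear(256, d_in), nn.Sigmoid())

    def forward(self, x_a, x_b):
        zs, kl = [], 0.0
        for enc, x in ((self.enc_a, x_a), (self.enc_b, x_b)):
            mu, logvar = enc(x)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            kl = kl - 0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(1).mean()
            zs.append(z)
        return self.dec(torch.cat(zs, dim=1)), kl

model = MEVAE()
x = torch.rand(8, 784)       # original image (flattened), placeholder
x_rot = torch.rand(8, 784)   # stand-in for a rotated copy of the same image
recon, kl = model(x, x_rot)
loss = nn.functional.binary_cross_entropy(recon, x) + 1e-3 * kl
loss.backward()
```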
Iron commensalism of mesenchymal glioblastoma promotes ferroptosis susceptibility upon dopamine treatment
Vo, Vu T. A.
Kim, Sohyun
Hua, Tuyen N. M.
Oh, Jiwoong
Jeong, Yangsik
Communications Biology2022Journal Article, cited 0 times
Ivy GAP
REMBRANDT
TCGA-GBM
pathomics
BRAIN
Humans
Mice
The heterogeneity of glioblastoma multiforme (GBM) leads to poor patient prognosis. Here, we aim to investigate the mechanism through which GBM heterogeneity is coordinated to promote tumor progression. We find that proneural (PN)-GBM stem cells (GSCs) secreted dopamine (DA) and transferrin (TF), inducing the proliferation of mesenchymal (MES)-GSCs and enhancing their susceptibility toward ferroptosis. PN-GSC-derived TF stimulates MES-GSC proliferation in an iron-dependent manner. DA acts in an autocrine fashion on PN-GSC growth in a DA receptor D1-dependent manner, while in a paracrine fashion it induces TF receptor 1 expression in MES-GSCs to assist iron uptake and thus enhance ferroptotic vulnerability. Analysis of public datasets reveals worse prognosis of patients with heterogeneous GBM with high iron uptake than those with other GBM subtypes. Collectively, the findings here provide evidence of commensalism symbiosis that causes MES-GSCs to become iron-addicted, which in turn provides a rationale for targeting ferroptosis to treat resistant MES GBM.
Reversible epigenetic alterations regulate class I HLA loss in prostate cancer
Rodems, Tamara S.
Heninger, Erika
Stahlfeld, Charlotte N.
Gilsdorf, Cole S.
Carlson, Kristin N.
Kircher, Madison R.
Singh, Anupama
Krueger, Timothy E. G.
Beebe, David J.
Jarrard, David F.
McNeel, Douglas G.
Haffner, Michael C.
Lang, Joshua M.
Communications Biology2022Journal Article, cited 0 times
TCGA-PRAD
Downregulation of HLA class I (HLA-I) impairs immune recognition and surveillance in prostate cancer and may underlie the ineffectiveness of checkpoint blockade. However, the molecular mechanisms regulating HLA-I loss in prostate cancer have not been fully explored. Here, we conducted a comprehensive analysis of HLA-I genomic, epigenomic and gene expression alterations in primary and metastatic human prostate cancer. Loss of HLA-I gene expression was associated with repressive chromatin states including DNA methylation, histone H3 tri-methylation at lysine 27, and reduced chromatin accessibility. Pharmacological DNA methyltransferase (DNMT) and histone deacetylase (HDAC) inhibition decreased DNA methylation and increased H3 lysine 27 acetylation and resulted in re-expression of HLA-I on the surface of tumor cells. Re-expression of HLA-I on LNCaP cells by DNMT and HDAC inhibition increased activation of co-cultured prostate specific membrane antigen (PSMA) 27-38-specific CD8+ T-cells. HLA-I expression is epigenetically regulated by functionally reversible DNA methylation and chromatin modifications in human prostate cancer. Methylated HLA-I was detected in HLA-I-low circulating tumor cells (CTCs), which may serve as a minimally invasive biomarker for identifying patients who would benefit from epigenetic targeted therapies.
Open-source curation of a pancreatic ductal adenocarcinoma gene expression analysis platform (pdacR) supports a two-subtype model
Torre-Healy, L. A.
Kawalerski, R. R.
Oh, K.
Chrastecka, L.
Peng, X. L.
Aguirre, A. J.
Rashid, N. U.
Yeh, J. J.
Moffitt, R. A.
Commun Biol2023Journal Article, cited 0 times
CPTAC-PDA
PANCREAS
Gene expression profiling
Algorithm Development
H&E-stained slides
Pathomics
Pancreatic ductal adenocarcinoma (PDAC) is an aggressive disease for which potent therapies have limited efficacy. Several studies have described the transcriptomic landscape of PDAC tumors to provide insight into potentially actionable gene expression signatures to improve patient outcomes. Despite centralization efforts from multiple organizations and increased transparency requirements from funding agencies and publishers, analysis of public PDAC data remains difficult. Bioinformatic pitfalls litter public transcriptomic data, such as subtle inclusion of low-purity and non-adenocarcinoma cases. These pitfalls can introduce non-specificity to gene signatures without appropriate data curation, which can negatively impact findings. To reduce barriers to analysis, we have created pdacR (http://pdacR.bmi.stonybrook.edu, https://github.com/rmoffitt/pdacR), an open-source software package and web-tool with annotated datasets from landmark studies and an interface for user-friendly analysis in clustering, differential expression, survival, and dimensionality reduction. Using this tool, we present a multi-dataset analysis of PDAC transcriptomics that confirms the basal-like/classical model over alternatives.
Clinically applicable deep learning framework for organs at risk delineation in CT images
Tang, Hao
Chen, Xuming
Liu, Yang
Lu, Zhipeng
You, Junhua
Yang, Mingzhou
Yao, Shengyu
Zhao, Guoqi
Xu, Yi
Chen, Tingfeng
Liu, Yong
Xie, Xiaohui
Nature Machine Intelligence2019Journal Article, cited 0 times
Head-Neck Cetuximab
Head-Neck-PET-CT
Segmentation
Machine Learning
Radiation therapy is one of the most widely used therapies for cancer treatment. A critical step in radiation therapy planning is to accurately delineate all organs at risk (OARs) to minimize potential adverse effects to healthy surrounding organs. However, manually delineating OARs based on computed tomography images is time-consuming and error-prone. Here, we present a deep learning model to automatically delineate OARs in head and neck, trained on a dataset of 215 computed tomography scans with 28 OARs manually delineated by experienced radiation oncologists. On a hold-out dataset of 100 computed tomography scans, our model achieves an average Dice similarity coefficient of 78.34% across the 28 OARs, significantly outperforming human experts and the previous state-of-the-art method by 10.05% and 5.18%, respectively. Our model takes only a few seconds to delineate an entire scan, compared to over half an hour by human experts. These findings demonstrate the potential for deep learning to improve the quality and reduce the treatment planning time of radiation therapy.
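The headline metric here, the Dice similarity coefficient, measures the overlap between a predicted and a reference organ mask as 2|A∩B|/(|A|+|B|). A minimal implementation for binary masks:

```python
# Dice similarity coefficient between two binary masks (numpy arrays).
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom

a = np.zeros((64, 64), dtype=bool); a[10:30, 10:30] = True
b = np.zeros((64, 64), dtype=bool); b[15:35, 15:35] = True
print(f"Dice = {dice(a, b):.3f}")
```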
Human-level recognition of blast cells in acute myeloid leukaemia with convolutional neural networks
Matek, Christian
Schwarz, Simone
Spiekermann, Karsten
Marr, Carsten
Nature Machine Intelligence2019Journal Article, cited 0 times
AML-Cytomorphology_LMU
Reliable recognition of malignant white blood cells is a key step in the diagnosis of haematologic malignancies such as acute myeloid leukaemia. Microscopic morphological examination of blood cells is usually performed by trained human examiners, making the process tedious, time-consuming and hard to standardize. Here, we compile an annotated image dataset of over 18,000 white blood cells, use it to train a convolutional neural network for leukocyte classification and evaluate the network’s performance by comparing to inter- and intra-expert variability. The network classifies the most important cell types with high accuracy. It also allows us to decide two clinically relevant questions with human-level performance: (1) if a given cell has blast character and (2) if it belongs to the cell types normally present in non-pathological blood smears. Our approach holds the potential to be used as a classification aid for examining much larger numbers of cells in a smear than can usually be done by a human expert. This will allow clinicians to recognize malignant cell populations with lower prevalence at an earlier stage of the disease.
A shallow convolutional neural network predicts prognosis of lung cancer patients in multi-institutional computed tomography image datasets
Mukherjee, Pritam
Zhou, Mu
Lee, Edward
Schicht, Anne
Balagurunathan, Yoganand
Napel, Sandy
Gillies, Robert
Wong, Simon
Thieme, Alexander
Leung, Ann
Gevaert, Olivier
Nature Machine Intelligence2020Journal Article, cited 0 times
Website
LIDC-IDRI
NSCLC Radiogenomics
NSCLC-Radiomics
LungCT-Diagnosis
QIN LUNG CT
Classification
Models
Lung cancer is the most common fatal malignancy in adults worldwide, and non-small-cell lung cancer (NSCLC) accounts for 85% of lung cancer diagnoses. Computed tomography is routinely used in clinical practice to determine lung cancer treatment and assess prognosis. Here, we developed LungNet, a shallow convolutional neural network for predicting outcomes of patients with NSCLC. We trained and evaluated LungNet on four independent cohorts of patients with NSCLC from four medical centres: Stanford Hospital (n = 129), H. Lee Moffitt Cancer Center and Research Institute (n = 185), MAASTRO Clinic (n = 311) and Charité – Universitätsmedizin, Berlin (n = 84). We show that outcomes from LungNet are predictive of overall survival in all four independent survival cohorts as measured by concordance indices of 0.62, 0.62, 0.62 and 0.58 on cohorts 1, 2, 3 and 4, respectively. Furthermore, the survival model can be used, via transfer learning, for classifying benign versus malignant nodules on the Lung Image Database Consortium (n = 1,010), with improved performance (AUC = 0.85) versus training from scratch (AUC = 0.82). LungNet can be used as a non-invasive predictor for prognosis in patients with NSCLC and can facilitate interpretation of computed tomography images for lung cancer stratification and prognostication.
Uncertainty-guided dual-views for semi-supervised volumetric medical image segmentation
Peiris, Himashi
Hayat, Munawar
Chen, Zhaolin
Egan, Gary
Harandi, Mehrtash
Nature Machine Intelligence2023Journal Article, cited 0 times
Pancreas-CT
BraTS 2022
Semi-automatic segmentation
Deep Learning
Computed Tomography (CT)
Magnetic Resonance Imaging (MRI)
Deep learning has led to tremendous progress in the field of medical artificial intelligence. However, training deep-learning models usually requires large amounts of annotated data. Annotating large-scale datasets is prone to human biases and is often very laborious, especially for dense prediction tasks such as image segmentation. Inspired by semi-supervised algorithms that use both labelled and unlabelled data for training, we propose a dual-view framework based on adversarial learning for segmenting volumetric images. In doing so, we use critic networks to allow each view to learn from high-confidence predictions of the other view by measuring a notion of uncertainty. Furthermore, to jointly learn the dual-views and the critics, we formulate the learning problem as a min–max problem. We analyse and contrast our proposed method against state-of-the-art baselines, both qualitatively and quantitatively, on four public datasets with multiple modalities (for example, computed tomography and magnetic resonance imaging) and demonstrate that the proposed semi-supervised method substantially outperforms the competing baselines while achieving competitive performance compared to fully supervised counterparts. Our empirical results suggest that an uncertainty-guided co-training framework can make two neural networks robust to data artefacts and have the ability to generate plausible segmentation masks that can be helpful for semi-automated segmentation processes.
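The co-training signal at the heart of this method, each view learning from the other view's high-confidence predictions on unlabelled volumes, can be sketched (omitting the adversarial critics) as a confidence-masked cross-pseudo-label loss; the softmax-confidence measure and the threshold below are illustrative assumptions.

```python
# Sketch of uncertainty-guided dual-view co-training on unlabelled data:
# each segmentation network is supervised by the other's confident
# pseudo-labels. The paper's critic networks are omitted for brevity.
import torch
import torch.nn.functional as F

def co_training_loss(logits_a, logits_b, conf_thresh=0.9):
    """logits_*: (B, C, H, W) predictions from the two views."""
    loss = 0.0
    for student, teacher in ((logits_a, logits_b), (logits_b, logits_a)):
        probs = teacher.softmax(dim=1).detach()
        conf, pseudo = probs.max(dim=1)              # (B, H, W)
        mask = (conf >= conf_thresh).float()         # keep confident pixels
        ce = F.cross_entropy(student, pseudo, reduction="none")
        loss = loss + (ce * mask).sum() / mask.sum().clamp(min=1.0)
    return loss

logits_a = torch.randn(2, 3, 32, 32, requires_grad=True)
logits_b = torch.randn(2, 3, 32, 32, requires_grad=True)
co_training_loss(logits_a, logits_b).backward()
```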
Foundation model for cancer imaging biomarkers
Pai, S.
Bontempi, D.
Hadzic, I.
Prudente, V.
Sokac, M.
Chaunzwa, T. L.
Bernatz, S.
Hosny, A.
Mak, R. H.
Birkbak, N. J.
Aerts, Hjwl
Nat Mach Intell2024Journal Article, cited 0 times
Website
NSCLC Radiogenomics-Stanford
NSCLC Radiogenomics: Initial Stanford Study of 26 Cases
NSCLC-Radiomics
Imaging Data Commons
LUNA16 Challenge
Biomarkers
Cancer imaging
Tumour biomarkers
Foundation models in deep learning are characterized by a single large-scale model trained on vast amounts of data serving as the foundation for various downstream tasks. Foundation models are generally trained using self-supervised learning and excel in reducing the demand for training samples in downstream applications. This is especially important in medicine, where large labelled datasets are often scarce. Here, we developed a foundation model for cancer imaging biomarker discovery by training a convolutional encoder through self-supervised learning using a comprehensive dataset of 11,467 radiographic lesions. The foundation model was evaluated in distinct and clinically relevant applications of cancer imaging-based biomarkers. We found that it facilitated better and more efficient learning of imaging biomarkers and yielded task-specific models that significantly outperformed conventional supervised and other state-of-the-art pretrained implementations on downstream tasks, especially when training dataset sizes were very limited. Furthermore, the foundation model was more stable to input variations and showed strong associations with underlying biology. Our results demonstrate the tremendous potential of foundation models in discovering new imaging biomarkers that may extend to other clinical use cases and can accelerate the widespread translation of imaging biomarkers into clinical settings.
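The abstract does not spell out the self-supervised objective, but a common way to pretrain a convolutional encoder on unlabelled lesions is a contrastive loss over two augmented views of each lesion. The NT-Xent loss below is a generic example of that family, not necessarily the authors' exact objective.

```python
# Generic contrastive (NT-Xent) pretraining loss over two augmented views
# of the same lesion; the encoder and augmentations are placeholders.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (B, D) embeddings of two views of the same B lesions."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D)
    sim = z @ z.T / temperature                          # cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))           # exclude self-pairs
    # For row i, the positive is the other view of the same lesion.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1 = torch.randn(16, 128, requires_grad=True)
z2 = torch.randn(16, 128, requires_grad=True)
nt_xent(z1, z2).backward()
```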
A deep-learning framework to predict cancer treatment response from histopathology images through imputed transcriptomics
Advances in artificial intelligence have paved the way for leveraging hematoxylin and eosin-stained tumor slides for precision oncology. We present ENLIGHT-DeepPT, an indirect two-step approach consisting of (1) DeepPT, a deep-learning framework that predicts genome-wide tumor mRNA expression from slides, and (2) ENLIGHT, which predicts response to targeted and immune therapies from the inferred expression values. We show that DeepPT successfully predicts transcriptomics in all 16 The Cancer Genome Atlas cohorts tested and generalizes well to two independent datasets. ENLIGHT-DeepPT successfully predicts true responders in five independent patient cohorts involving four different treatments spanning six cancer types, with an overall odds ratio of 2.28 and a 39.5% increased response rate among predicted responders versus the baseline rate. Notably, its prediction accuracy, obtained without any training on the treatment data, is comparable to that achieved by directly predicting the response from the images, which requires specific training on the treatment evaluation cohorts.
Radiomic tumor phenotypes augment molecular profiling in predicting recurrence free survival after breast neoadjuvant chemotherapy
Chitalia, R.
Miliotis, M.
Jahani, N.
Tastsoglou, S.
McDonald, E. S.
Belenky, V.
Cohen, E. A.
Newitt, D.
Van't Veer, L. J.
Esserman, L.
Hylton, N.
DeMichele, A.
Hatzigeorgiou, A.
Kontos, D.
Commun Med (Lond)2023Journal Article, cited 2 times
Website
ACRIN 6657
ISPY1
Breast-MRI-NACT-Pilot
Radiomics
Radiogenomics
Algorithm Development
DCE-MRI
Phenotype
BACKGROUND: Early changes in breast intratumor heterogeneity during neoadjuvant chemotherapy may reflect the tumor's ability to adapt and evade treatment. We investigated the combination of precision medicine predictors of genomic and MRI data towards improved prediction of recurrence free survival (RFS). METHODS: A total of 100 women from the ACRIN 6657/I-SPY 1 trial were retrospectively analyzed. We estimated MammaPrint, PAM50 ROR-S, and p53 mutation scores from publicly available gene expression data and generated four voxel-wise 3-D radiomic kinetic maps from DCE-MR images at both pre- and early-treatment time points. Within the primary lesion from each kinetic map, features of change in radiomic heterogeneity were summarized into 6 principal components. RESULTS: We identify two imaging phenotypes of change in intratumor heterogeneity (p < 0.01) demonstrating significant Kaplan-Meier curve separation (p < 0.001). Adding phenotypes to established prognostic factors, functional tumor volume (FTV), MammaPrint, PAM50, and p53 scores in a Cox regression model improves the concordance statistic for predicting RFS from 0.73 to 0.79 (p = 0.002). CONCLUSIONS: These results demonstrate an important step in combining personalized molecular signatures and longitudinal imaging data towards improved prognosis. Early changes in tumor properties during treatment may tell us whether or not a patient's tumor is responding to treatment. Such changes may be seen on imaging. Here, changes in breast cancer properties are identified on imaging and are used in combination with gene markers to investigate whether response to treatment can be predicted using mathematical models. We demonstrate that tumor properties seen on imaging early on in treatment can help to predict patient outcomes. Our approach may allow clinicians to better inform patients about their prognosis and choose appropriate and effective therapies.
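The incremental-value analysis in the RESULTS, adding imaging phenotypes to established prognostic factors in a Cox regression and comparing concordance, can be outlined with the lifelines package; the toy covariates below merely stand in for FTV, the molecular scores and the phenotype labels.

```python
# Sketch: compare concordance of a Cox model with and without an added
# imaging-phenotype covariate (toy data; lifelines package assumed).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "ftv": rng.normal(size=n),            # stand-in: functional tumor volume
    "genomic_score": rng.normal(size=n),  # stand-in: molecular signature
    "phenotype": rng.integers(0, 2, n),   # stand-in: imaging phenotype label
})
risk = 0.8 * df["phenotype"] + 0.3 * df["ftv"]
df["time"] = rng.exponential(scale=np.exp(-risk.to_numpy()))
df["event"] = rng.integers(0, 2, n)

base = CoxPHFitter().fit(df[["ftv", "genomic_score", "time", "event"]],
                         duration_col="time", event_col="event")
full = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(f"C-index without phenotype: {base.concordance_index_:.3f}")
print(f"C-index with phenotype:    {full.concordance_index_:.3f}")
```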
Multimodal deep learning to predict prognosis in adult and pediatric brain tumors
Steyaert, S.
Qiu, Y. L.
Zheng, Y.
Mukherjee, P.
Vogel, H.
Gevaert, O.
Commun Med (Lond)2023Journal Article, cited 0 times
CPTAC-GBM
Genomic Data Commons (GDC)
Deep Learning
Histopathology
Radiogenomics
Pathogenomics
Radiomics
Pathomics
TCGA-GBM
BACKGROUND: The introduction of deep learning in both imaging and genomics has significantly advanced the analysis of biomedical data. For complex diseases such as cancer, different data modalities may reveal different disease characteristics, and the integration of imaging with genomic data has the potential to unravel additional information than when using these data sources in isolation. Here, we propose a DL framework that combines these two modalities with the aim to predict brain tumor prognosis. METHODS: Using two separate glioma cohorts of 783 adults and 305 pediatric patients we developed a DL framework that can fuse histopathology images with gene expression profiles. Three strategies for data fusion were implemented and compared: early, late, and joint fusion. Additional validation of the adult glioma models was done on an independent cohort of 97 adult patients. RESULTS: Here we show that the developed multimodal data models achieve better prediction results compared to the single data models, but also lead to the identification of more relevant biological pathways. When testing our adult models on a third brain tumor dataset, we show our multimodal framework is able to generalize and performs better on new data from different cohorts. Leveraging the concept of transfer learning, we demonstrate how our pediatric multimodal models can be used to predict prognosis for two more rare (less available samples) pediatric brain tumors. CONCLUSIONS: Our study illustrates that a multimodal data fusion approach can be successfully implemented and customized to model clinical outcome of adult and pediatric brain tumors. An increasing amount of complex patient data is generated when treating patients with cancer, including histopathology data (where the appearance of a tumor is examined under a microscope) and molecular data (such as analysis of a tumor's genetic material). Computational methods to integrate these data types might help us to predict outcomes in patients with cancer. Here, we propose a deep learning method, which involves computer software learning from patterns in the data, to combine histopathology and molecular data to predict outcomes in patients with brain cancers. Using three cohorts of patients, we show that our method combining the different datasets performs better than models using one data type. Methods like ours might help clinicians to better inform patients about their prognosis and make decisions about their care.
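The three fusion strategies compared in the METHODS differ mainly in where the modalities meet. The sketch below contrasts an early-fusion head (concatenate per-modality feature vectors) with a late-fusion head (combine per-modality predictions); all dimensions are placeholders.

```python
# Sketch of early vs. late fusion of histopathology and gene-expression
# feature vectors for an outcome head. Dimensions are placeholders.
import torch
import torch.nn as nn

d_img, d_expr = 512, 200   # per-modality feature sizes (assumed)

class EarlyFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(d_img + d_expr, 64),
                                  nn.ReLU(), nn.Linear(64, 1))
    def forward(self, img, expr):
        # Modalities meet at the input: one head sees both feature vectors.
        return self.head(torch.cat([img, expr], dim=1))

class LateFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.head_img = nn.Linear(d_img, 1)
        self.head_expr = nn.Linear(d_expr, 1)
    def forward(self, img, expr):
        # Modalities meet at the output: per-modality predictions averaged.
        return 0.5 * (self.head_img(img) + self.head_expr(expr))

img, expr = torch.randn(4, d_img), torch.randn(4, d_expr)
print(EarlyFusion()(img, expr).shape, LateFusion()(img, expr).shape)
```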
Deep learning for the detection of benign and malignant pulmonary nodules in non-screening chest CT scans
Hendrix, W.
Hendrix, N.
Scholten, E. T.
Mourits, M.
Trap-de Jong, J.
Schalekamp, S.
Korst, M.
van Leuken, M.
van Ginneken, B.
Prokop, M.
Rutten, M.
Jacobs, C.
Commun Med (Lond)2023Journal Article, cited 0 times
LIDC-IDRI
DICOM-LIDC-IDRI-Nodules
Algorithm Development
Lung Cancer
Nodule classification
Deep Learning
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
BACKGROUND: Outside a screening program, early-stage lung cancer is generally diagnosed after the detection of incidental nodules in clinically ordered chest CT scans. Despite the advances in artificial intelligence (AI) systems for lung cancer detection, clinical validation of these systems is lacking in a non-screening setting. METHOD: We developed a deep learning-based AI system and assessed its performance for the detection of actionable benign nodules (requiring follow-up), small lung cancers, and pulmonary metastases in CT scans acquired in two Dutch hospitals (internal and external validation). A panel of five thoracic radiologists labeled all nodules, and two additional radiologists verified the nodule malignancy status and searched for any missed cancers using data from the national Netherlands Cancer Registry. The detection performance was evaluated by measuring the sensitivity at predefined false positive rates on a free receiver operating characteristic curve and was compared with the panel of radiologists. RESULTS: On the external test set (100 scans from 100 patients), the sensitivity of the AI system for detecting benign nodules, primary lung cancers, and metastases is respectively 94.3% (82/87, 95% CI: 88.1-98.8%), 96.9% (31/32, 95% CI: 91.7-100%), and 92.0% (104/113, 95% CI: 88.5-95.5%) at a clinically acceptable operating point of 1 false positive per scan (FP/s). These sensitivities are comparable to or higher than the radiologists, albeit with a slightly higher FP/s (average difference of 0.6). CONCLUSIONS: The AI system reliably detects benign and malignant pulmonary nodules in clinically indicated CT scans and can potentially assist radiologists in this setting. Early-stage lung cancer can be diagnosed after identifying an abnormal spot on a chest CT scan ordered for other medical reasons. These spots or lung nodules can be overlooked by radiologists, as they are not necessarily the focus of an examination and can be as small as a few millimeters. Software using Artificial Intelligence (AI) technology has proven to be successful for aiding radiologists in this task, but its performance is understudied outside a lung cancer screening setting. We therefore developed and validated AI software for the detection of cancerous nodules or non-cancerous nodules that would need attention. We show that the software can reliably detect these nodules in a non-screening setting and could potentially aid radiologists in daily clinical practice.
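The operating point quoted above, sensitivity at 1 false positive per scan, comes from a free-response ROC analysis. Given scored candidate detections already matched to ground-truth nodules, it can be computed roughly as below, assuming at most one candidate per nodule.

```python
# Rough FROC-style computation: sensitivity at a target false-positive
# rate per scan, given candidates already matched to true nodules
# (at most one true-positive candidate per nodule assumed).
import numpy as np

def sensitivity_at_fp_per_scan(scores, is_tp, n_scans, n_nodules, target_fps=1.0):
    """scores/is_tp: one entry per candidate detection, pooled over scans."""
    order = np.argsort(scores)[::-1]          # descending confidence
    tp = np.cumsum(np.asarray(is_tp)[order])
    fp = np.cumsum(~np.asarray(is_tp)[order])
    ok = fp / n_scans <= target_fps           # thresholds within the FP budget
    return tp[ok].max() / n_nodules if ok.any() else 0.0

rng = np.random.default_rng(0)
is_tp = rng.random(500) < 0.3
scores = rng.random(500) + 0.5 * is_tp        # TPs score somewhat higher
print(sensitivity_at_fp_per_scan(scores, is_tp, n_scans=100,
                                 n_nodules=int(is_tp.sum())))
```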
GaNDLF: the generally nuanced deep learning framework for scalable end-to-end clinical workflows
Pati, Sarthak
Thakur, Siddhesh P.
Hamamcı, İbrahim Ethem
Baid, Ujjwal
Baheti, Bhakti
Bhalerao, Megh
Güley, Orhun
Mouchtaris, Sofia
Lang, David
Thermos, Spyridon
Gotkowski, Karol
González, Camila
Grenko, Caleb
Getka, Alexander
Edwards, Brandon
Sheller, Micah
Wu, Junwen
Karkada, Deepthi
Panchumarthy, Ravi
Ahluwalia, Vinayak
Zou, Chunrui
Bashyam, Vishnu
Li, Yuemeng
Haghighi, Babak
Chitalia, Rhea
Abousamra, Shahira
Kurc, Tahsin M.
Gastounioti, Aimilia
Er, Sezgin
Bergman, Mark
Saltz, Joel H.
Fan, Yong
Shah, Prashant
Mukhopadhyay, Anirban
Tsaftaris, Sotirios A.
Menze, Bjoern
Davatzikos, Christos
Kontos, Despina
Karargyris, Alexandros
Umeton, Renato
Mattson, Peter
Bakas, Spyridon
Communications Engineering2023Journal Article, cited 5 times
Website
ISPY1
BraTS-TCGA-GBM
Radiology Imaging
Histopathology
Algorithm Development
Deep Learning (DL) has the potential to optimize machine learning in both the scientific and clinical communities. However, greater expertise is required to develop DL algorithms, and the variability of implementations hinders their reproducibility, translation, and deployment. Here we present the community-driven Generally Nuanced Deep Learning Framework (GaNDLF), with the goal of lowering these barriers. GaNDLF makes the mechanism of DL development, training, and inference more stable, reproducible, interpretable, and scalable, without requiring an extensive technical background. GaNDLF aims to provide an end-to-end solution for all DL-related tasks in computational precision medicine. We demonstrate the ability of GaNDLF to analyze both radiology and histology images, with built-in support for k-fold cross-validation, data augmentation, multiple modalities and output classes. Our quantitative performance evaluation on numerous use cases, anatomies, and computational tasks supports GaNDLF as a robust application framework for deployment in clinical workflows.
Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features
Bakas, Spyridon
Akbari, Hamed
Sotiras, Aristeidis
Bilello, Michel
Rozycki, Martin
Kirby, Justin S.
Freymann, John B.
Farahani, Keyvan
Davatzikos, Christos
Scientific Data2017Journal Article, cited 1036 times
Website
TCGA-GBM
TCGA-LGG
Radiomic feature
Segmentation
Gliomas belong to a group of central nervous system tumors, and consist of various sub-regions. Gold standard labeling of these sub-regions in radiographic imaging is essential for both clinical and computational studies, including radiomic and radiogenomic analyses. Towards this end, we release segmentation labels and radiomic features for all pre-operative multimodal magnetic resonance imaging (MRI) (n=243) of the multi-institutional glioma collections of The Cancer Genome Atlas (TCGA), publicly available in The Cancer Imaging Archive (TCIA). Pre-operative scans were identified in both glioblastoma (TCGA-GBM, n=135) and low-grade-glioma (TCGA-LGG, n=108) collections via radiological assessment. The glioma sub-region labels were produced by an automated state-of-the-art method and manually revised by an expert board-certified neuroradiologist. An extensive panel of radiomic features was extracted based on the manually-revised labels. This set of labels and features should enable i) direct utilization of the TCGA/TCIA glioma collections towards repeatable, reproducible and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments, as well as ii) performance evaluation of computer-aided segmentation methods, and comparison to our state-of-the-art method.
A curated mammography data set for use in computer-aided detection and diagnosis research
Lee, Rebecca Sawyer
Gimenez, Francisco
Hoogi, Assaf
Miyake, Kanae Kawai
Gorovoy, Mia
Rubin, Daniel L.
Scientific Data2017Journal Article, cited 702 times
Website
CBIS-DDSM
Mammography
Image Enhancement
Published research results are difficult to replicate due to the lack of a standard evaluation data set in the area of decision support systems in mammography; most computer-aided diagnosis (CADx) and detection (CADe) algorithms for breast cancer in mammography are evaluated on private data sets or on unspecified subsets of public databases. This causes an inability to directly compare the performance of methods or to replicate prior results. We seek to resolve this substantial challenge by releasing an updated and standardized version of the Digital Database for Screening Mammography (DDSM) for evaluation of future CADx and CADe systems (sometimes referred to generally as CAD) research in mammography. Our data set, the CBIS-DDSM (Curated Breast Imaging Subset of DDSM), includes decompressed images, data selection and curation by trained mammographers, updated mass segmentation and bounding boxes, and pathologic diagnosis for training data, formatted similarly to modern computer vision data sets. The data set contains 753 calcification cases and 891 mass cases, providing a data-set size capable of analyzing decision support systems in mammography.
The REMBRANDT study, a large collection of genomic data from brain cancer patients
Gusev, Yuriy
Bhuvaneshwar, Krithika
Song, Lei
Zenklusen, Jean-Claude
Fine, Howard
Madhavan, Subha
Scientific Data2018Journal Article, cited 1 times
Website
REMBRANDT
brain cancer
GDMI
G-DOC
Imaging and clinical data archive for head and neck squamous cell carcinoma patients treated with radiotherapy
Grossberg, Aaron J
Mohamed, Abdallah SR
El Halawani, Hesham
Bennett, William C
Smith, Kirk E
Nolan, Tracy S
Williams, Bowman
Chamchod, Sasikarn
Heukelom, Jolien
Kantor, Michael E
Scientific Data2018Journal Article, cited 0 times
Website
head and neck squamous cell carcinoma (HNSCC)
human papillomavirus
MRI
PET-CT
CT
DICOM
A radiogenomic dataset of non-small cell lung cancer
Bakr, Shaimaa
Gevaert, Olivier
Echegaray, Sebastian
Ayers, Kelsey
Zhou, Mu
Shafiq, Majid
Zheng, Hong
Benson, Jalen Anthony
Zhang, Weiruo
Leung, Ann NC
Scientific Data2018Journal Article, cited 1 times
Website
non-small cell lung cancer (NSCLC)
Radiogenomics
An annotated test-retest collection of prostate multiparametric MRI
Fedorov, Andriy
Schwier, Michael
Clunie, David
Herz, Christian
Pieper, Steve
Kikinis, Ron
Tempany, Clare
Fennessy, Fiona
Scientific Data2018Journal Article, cited 0 times
Website
QIN-PROSTATE-Repeatability
Radiomic feature clusters and Prognostic Signatures specific for Lung and Head & Neck cancer
Parmar, C.
Leijenaar, R. T.
Grossmann, P.
Rios Velazquez, E.
Bussink, J.
Rietveld, D.
Rietbergen, M. M.
Haibe-Kains, B.
Lambin, P.
Aerts, H. J.
Sci Rep2015Journal Article, cited 0 times
Radiomics
Radiomics provides a comprehensive quantification of tumor phenotypes by extracting and mining a large number of quantitative image features. To reduce the redundancy and compare the prognostic characteristics of radiomic features across cancer types, we investigated cancer-specific radiomic feature clusters in four independent Lung and Head & Neck (H&N) cancer cohorts (878 patients in total). Radiomic features were extracted from the pre-treatment computed tomography (CT) images. Consensus clustering resulted in eleven and thirteen stable radiomic feature clusters for Lung and H&N cancer, respectively. These clusters were validated in independent external validation cohorts using the rand statistic (Lung RS = 0.92, p < 0.001; H&N RS = 0.92, p < 0.001). Our analysis indicated both common as well as cancer-specific clustering and clinical associations of radiomic features. Strongest associations with clinical parameters: Prognosis Lung CI = 0.60 +/- 0.01, Prognosis H&N CI = 0.68 +/- 0.01; Lung histology AUC = 0.56 +/- 0.03, Lung stage AUC = 0.61 +/- 0.01, H&N HPV AUC = 0.58 +/- 0.03, H&N stage AUC = 0.77 +/- 0.02. Full utilization of these cancer-specific characteristics of image features may further improve radiomic biomarkers, providing a non-invasive way of quantifying and monitoring tumor phenotypic characteristics in clinical practice.
Machine Learning methods for Quantitative Radiomic Biomarkers
Parmar, C.
Grossmann, P.
Bussink, J.
Lambin, P.
Aerts, H. J.
Sci Rep2015Journal Article, cited 178 times
Website
Radiomics
Radiomics extracts and mines a large number of medical imaging features quantifying tumor phenotypic characteristics. Highly accurate and reliable machine-learning approaches can drive the success of radiomic applications in clinical care. In this radiomic study, fourteen feature selection methods and twelve classification methods were examined in terms of their performance and stability for predicting overall survival. A total of 440 radiomic features were extracted from pre-treatment computed tomography (CT) images of 464 lung cancer patients. To ensure the unbiased evaluation of different machine-learning methods, publicly available implementations along with reported parameter configurations were used. Furthermore, we used two independent radiomic cohorts for training (n = 310 patients) and validation (n = 154 patients). We identified that the Wilcoxon-test-based feature selection method WLCX (stability = 0.84 +/- 0.05, AUC = 0.65 +/- 0.02) and the random forest (RF) classification method (RSD = 3.52%, AUC = 0.66 +/- 0.03) had the highest prognostic performance with high stability against data perturbation. Our variability analysis indicated that the choice of classification method is the most dominant source of performance variation (34.21% of total variance). Identification of optimal machine-learning methods for radiomic applications is a crucial step towards stable and clinically relevant radiomic biomarkers, providing a non-invasive way of quantifying and monitoring tumor-phenotypic characteristics in clinical practice.
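The best-performing combination reported here, Wilcoxon-test-based feature selection (WLCX) followed by a random-forest classifier, is straightforward to sketch with scipy and scikit-learn; the data, the number of retained features and the single train/test split are placeholders for the paper's more elaborate setup.

```python
# Sketch of WLCX-style feature selection (rank-sum test against the
# outcome) followed by a random-forest classifier. Placeholder data.
import numpy as np
from scipy.stats import ranksums
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 440))    # stand-in for 440 radiomic features
y = rng.integers(0, 2, size=400)   # stand-in for a survival endpoint

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# Rank features by Wilcoxon rank-sum p-value on the training split only.
pvals = np.array([ranksums(X_tr[y_tr == 0, j], X_tr[y_tr == 1, j]).pvalue
                  for j in range(X_tr.shape[1])])
keep = np.argsort(pvals)[:30]      # retain the 30 most discriminative

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_tr[:, keep], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, keep])[:, 1])
print(f"AUC = {auc:.3f}")
```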
Fully automatic GBM segmentation in the TCGA-GBM dataset: Prognosis and correlation with VASARI features
Velazquez, Emmanuel Rios
Meier, Raphael
Dunn Jr, William D
Alexander, Brian
Wiest, Roland
Bauer, Stefan
Gutman, David A
Reyes, Mauricio
Aerts, Hugo JWL
Scientific Reports2015Journal Article, cited 42 times
Website
TCGA-GBM
VASARI
Magnetic Resonance Imaging (MRI)
Segmentation
Radiomics
Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from The Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (r = 0.4-0.86). Also, the auto and manual volumes showed similar correlation with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67, 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has potential in high-throughput medical imaging research.
Deciphering Genomic Underpinnings of Quantitative MRI-based Radiomic Phenotypes of Invasive Breast Carcinoma
Zhu, Yitan
Li, Hui
Guo, Wentian
Drukker, Karen
Lan, Li
Giger, Maryellen L
Ji, Yuan
Scientific Reports2015Journal Article, cited 52 times
Website
TCGA-BRCA
Radiogenomics
Magnetic Resonance Imaging (MRI)
BREAST
Magnetic Resonance Imaging (MRI) has been routinely used for the diagnosis and treatment of breast cancer. However, the relationship between the MRI tumor phenotypes and the underlying genetic mechanisms remains under-explored. We integrated multi-omics molecular data from The Cancer Genome Atlas (TCGA) with MRI data from The Cancer Imaging Archive (TCIA) for 91 breast invasive carcinomas. Quantitative MRI phenotypes of tumors (such as tumor size, shape, margin, and blood flow kinetics) were associated with their corresponding molecular profiles (including DNA mutation, miRNA expression, protein expression, pathway gene expression and copy number variation). We found that transcriptional activities of various genetic pathways were positively associated with tumor size, blurred tumor margin, and irregular tumor shape and that miRNA expressions were associated with the tumor size and enhancement texture, but not with other types of radiomic phenotypes. We provide all the association findings as a resource for the research community (available at http://compgenome.org/Radiogenomics/). These findings pave potential paths for the discovery of genetic mechanisms regulating specific tumor phenotypes and for improving MRI techniques as potential non-invasive approaches to probe the cancer molecular status.
Reproducibility of radiomics for deciphering tumor phenotype with imaging
Zhao, Binsheng
Tan, Yongqiang
Tsai, Wei-Yann
Qi, Jing
Xie, Chuanmiao
Lu, Lin
Schwartz, Lawrence H
Scientific Reports2016Journal Article, cited 91 times
Website
Radiomics
Defining a Radiomic Response Phenotype: A Pilot Study using targeted therapy in NSCLC
Aerts, Hugo JWL
Grossmann, Patrick
Tan, Yongqiang
Oxnard, Geoffrey G
Rizvi, Naiyer
Schwartz, Lawrence H
Zhao, Binsheng
Scientific Reports2016Journal Article, cited 40 times
Website
RIDER
Radiomics
NSCLC
Lung
Medical imaging plays a fundamental role in oncology and drug development, by providing a non-invasive method to visualize tumor phenotype. Radiomics can quantify this phenotype comprehensively by applying image-characterization algorithms, and may provide important information beyond tumor size or burden. In this study, we investigated if radiomics can identify a gefitinib response-phenotype, studying high-resolution computed-tomography (CT) imaging of forty-seven patients with early-stage non-small cell lung cancer before and after three weeks of therapy. On the baseline-scan, radiomic-feature Laws-Energy was significantly predictive for EGFR-mutation status (AUC = 0.67, p = 0.03), while volume (AUC = 0.59, p = 0.27) and diameter (AUC = 0.56, p = 0.46) were not. Although no features were predictive on the post-treatment scan (p > 0.08), the change in features between the two scans was strongly predictive (significant feature AUC-range = 0.74-0.91). A technical validation revealed that the associated features were also highly stable for test-retest (mean +/- std: ICC = 0.96 +/- 0.06). This pilot study shows that radiomic data before treatment is able to predict mutation status and associated gefitinib response non-invasively, demonstrating the potential of radiomics-based phenotyping to improve the stratification and response assessment between tyrosine kinase inhibitors (TKIs) sensitive and resistant patient populations.
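Two quantities drive this analysis: the per-feature change between the baseline and follow-up scans (delta features) and the test-retest intraclass correlation used to gauge feature stability. Both can be computed as in the sketch below, where the ICC(3,1) form and all data are assumptions for illustration.

```python
# Sketch: delta-radiomics (relative feature change between scans) and a
# test-retest ICC(3,1) stability screen. All data are placeholders.
import numpy as np

def delta_features(pre, post):
    """Relative change per feature; pre/post: (patients, features) arrays."""
    pre = np.asarray(pre, dtype=float)
    return (np.asarray(post, dtype=float) - pre) / np.where(pre == 0, 1e-9, pre)

def icc_3_1(test, retest):
    """Two-way mixed, single-measure ICC for one feature over n subjects."""
    x = np.stack([np.asarray(test, float), np.asarray(retest, float)], axis=1)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between sessions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

rng = np.random.default_rng(0)
true_val = rng.normal(size=50)
test = true_val + rng.normal(0, 0.1, 50)     # test scan
retest = true_val + rng.normal(0, 0.1, 50)   # retest scan
print(f"ICC(3,1) = {icc_3_1(test, retest):.3f}")   # near 1: stable feature
print(delta_features([[2.0, 4.0]], [[3.0, 2.0]]))  # [[ 0.5 -0.5]]
```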
Non-small cell lung cancer: quantitative phenotypic analysis of CT images as a potential marker of prognosis
Song, Jiangdian
Liu, Zaiyi
Zhong, Wenzhao
Huang, Yanqi
Ma, Zelan
Dong, Di
Liang, Changhong
Tian, Jie
Scientific Reports2016Journal Article, cited 14 times
Website
Radiomics
NSCLC
Lung
Robust High-dimensional Bioinformatics Data Streams Mining by ODR-ioVFDT
Wang, Dantong
Fong, Simon
Wong, Raymond K
Mohammed, Sabah
Fiaidhi, Jinan
Wong, Kelvin KL
Scientific Reports2017Journal Article, cited 3 times
Website
LIDC-IDRI
Algorithm Development
Diagnostic markers
Health services
Preclinical research
AlphaFold accelerates artificial intelligence powered drug discovery: efficient discovery of a novel CDK20 small molecule inhibitor
Ren, Feng
Ding, Xiao
Zheng, Min
Korzinkin, Mikhail
Cai, Xin
Zhu, Wei
Mantsyzov, Alexey
Aliper, Alex
Aladinskiy, Vladimir
Cao, Zhongying
Kong, Shanshan
Long, Xi
Liu, Bonnie Hei Man
Liu, Yingtao
Naumov, Vladimir
Shneyderman, Anastasia
Ozerov, Ivan V.
Wang, Ju
Pun, Frank W.
Polykovskiy, Daniil A.
Sun, Chong
Levitt, Michael
Aspuru-Guzik, Alán
Zhavoronkov, Alex
2023Journal Article, cited 0 times
TCGA-LIHC
The application of artificial intelligence (AI) has been considered a revolutionary change in drug discovery and development. In 2020, the AlphaFold computer program predicted protein structures for the whole human genome, which has been considered a remarkable breakthrough in both AI applications and structural biology. Despite the varying confidence levels, these predicted structures could still significantly contribute to structure-based drug design of novel targets, especially the ones with no or limited structural information. In this work, we successfully applied AlphaFold to our end-to-end AI-powered drug discovery engines, including a biocomputational platform PandaOmics and a generative chemistry platform Chemistry42. A novel hit molecule against a novel target without an experimental structure was identified, starting from target selection towards hit identification, in a cost- and time-efficient manner. PandaOmics provided the protein of interest for the treatment of hepatocellular carcinoma (HCC) and Chemistry42 generated the molecules based on the structure predicted by AlphaFold, and the selected molecules were synthesized and tested in biological assays. Through this approach, we identified a small molecule hit compound for cyclin-dependent kinase 20 (CDK20) with a binding constant Kd value of 9.2 ± 0.5 μM (n = 3) within 30 days from target selection and after only synthesizing 7 compounds. Based on the available data, a second round of AI-powered compound generation was conducted and through this, a more potent hit molecule, ISM042-2-048, was discovered with an average Kd value of 566.7 ± 256.2 nM (n = 3). Compound ISM042-2-048 also showed good CDK20 inhibitory activity with an IC50 value of 33.4 ± 22.6 nM (n = 3). In addition, ISM042-2-048 demonstrated selective anti-proliferation activity in an HCC cell line with CDK20 overexpression, Huh7, with an IC50 of 208.7 ± 3.3 nM, compared to a counter screen cell line HEK293 (IC50 = 1706.7 ± 670.0 nM). This work is the first demonstration of applying AlphaFold to the hit identification process in drug discovery.
B2C3NetF2: Breast cancer classification using an end‐to‐end deep learning feature fusion and satin bowerbird optimization controlled Newton Raphson feature selection
Fatima, Mamuna
Khan, Muhammad Attique
Shaheen, Saima
Almujally, Nouf Abdullah
Wang, Shui‐Hua
CAAI Transactions on Intelligence Technology2023Journal Article, cited 0 times
Website
CBIS-DDSM
BREAST
Features extraction
Computer Aided Diagnosis (CADx)
ResNet101
Contrast enhancement
Transfer learning
Currently, the improvement in AI is mainly related to deep learning techniques that are employed for the classification, identification, and quantification of patterns in clinical images. Deep learning models show more remarkable performance than traditional methods for medical image processing tasks such as skin cancer, colorectal cancer, brain tumour, cardiac disease, breast cancer (BrC), and a few more. The manual diagnosis of medical issues always requires an expert and is also expensive. Therefore, developing computer diagnosis techniques based on deep learning is essential. Breast cancer is the most frequently diagnosed cancer in females, with a rapidly growing incidence. It is estimated that the number of patients with BrC will rise by 70% in the next 20 years. If diagnosed at a later stage, the survival rate of patients with BrC is very low; hence, early detection is essential, increasing the survival rate to 50%. A new framework for BrC classification is presented that utilises deep learning and feature optimization. The significant steps of the presented framework include (i) hybrid contrast enhancement of acquired images, (ii) data augmentation to facilitate better learning of the Convolutional Neural Network (CNN) model, (iii) a pre-trained ResNet-101 model that is utilised and modified according to the selected dataset classes, (iv) deep transfer-learning-based model training for feature extraction, (v) fusion of features using the proposed highly corrected function-controlled canonical correlation analysis approach, and (vi) optimal feature selection using the modified Satin Bowerbird Optimization controlled Newton Raphson algorithm, with final classification using 10 machine learning classifiers. The experiments of the proposed framework were carried out using the publicly available CBIS-DDSM dataset and obtained a best accuracy of 94.5% along with improved computation time. The comparison shows that the presented method surpasses current state-of-the-art approaches.
Brain tumour classification using two-tier classifier with adaptive segmentation technique
Anitha, V
Murugavalli, S
IET Computer Vision2016Journal Article, cited 46 times
Website
TCGA-GBM
Radiomics
BRAIN
Texture features
Magnetic resonance imaging (MRI)
A brain tumour is a mass of tissue formed by a gradual accumulation of anomalous cells, and it is important to classify brain tumours from magnetic resonance imaging (MRI) for treatment. Human investigation is the routine technique for brain MRI tumour detection and tumour classification. Interpretation of images is based on organised and explicit classification of brain MRI, and various techniques have been proposed. Brain tumour segmentation on MRI provides information on anatomical structures and potentially abnormal tissues that are significant for treatment; the proposed system uses the adaptive pillar K-means algorithm for segmentation, and classification is performed by a two-tier approach. In the proposed system, a self-organising map neural network first trains on the features extracted from the discrete wavelet transform blend wavelets, and the resultant filter factors are subsequently trained by the K-nearest neighbour; the testing process is likewise accomplished in two stages. The proposed two-tier classification system classifies brain tumours in a double training process, which gives preferable performance over the traditional classification method. The proposed system has been validated with real data sets, and the experimental results showed enhanced performance.
Three‐dimensional fusion of clustered and classified features for enhancement of liver and lesions from abdominal radiology images
P, Sreeja
S, Hariharan
IET Image Processing2019Journal Article, cited 0 times
TCGA-LIHC
Medical images are usually of low contrast in nature and have poor visual perception capability. Image enhancement techniques can improve significant features such as edge, texture and contrast, which are helpful for further processing. This study discusses an image fusion-based enhancement scheme suitable for enhancing the liver and lesions in abdominal radiology images. Unlike other fusion techniques, feature-based fusion is employed. The pixel-wise features selected are intensity values, gradient magnitude and local homogeneity. These pixel-wise features are clustered and classified using fuzzy C-means (FCM) and a support vector machine (SVM), respectively. FCM clusters pixel-wise features into foreground and background, edge and non-edge, as well as homogeneous and non-homogeneous regions. These two classes are applied for training and testing the SVM. The classifier output is transformed into images, and the pixel-wise features of these images are fused to form a new image. Another important aspect of this scheme is the fusion of pixel-wise features in three dimensions to form a new image. The resulting image is an RGB image with better visual perception capacity, having enhancement in both edge and texture. Pixel-level multi-dimensional fusion is capable of enhancing the maximum relevant information.
Classification of magnetic resonance images for brain tumour detection
Kurmi, Yashwant
Chaurasia, Vijayshri
IET Image Processing2020Journal Article, cited 0 times
REMBRANDT
Image segmentation of magnetic resonance images (MRI) is a crucial process for the visualisation and examination of abnormal tissues, especially during clinical analysis. The complexity and variability of tumour structure magnify the challenges in automated brain tumour detection in MRIs. This study presents an automatic lesion recognition method for MRI followed by classification. In the proposed multistage image segmentation method, region initialisation is performed using low‐level information from keypoint descriptors. A set of linear filters is used to transform the low‐level information into higher‐level image features, and the feature set and filter training data are combined to track the tumour region. The authors adopt a possibilistic model for region growing and a disparity map in the refinement process to obtain consistent boundaries. Further, features are extracted using the Fisher vector and an autoencoder. A set of handcrafted features is also extracted from a segmentation‐based localised region to train and test support vector machine and multilayer perceptron classifiers. Experiments performed on five MRI datasets confirm the superiority of the proposal over state‐of‐the‐art methods, reporting average segmentation and classification accuracies of 94.5% and 91.76%, respectively.
4× Super‐resolution of unsupervised CT images based on GAN
Li, Yunhe
Chen, Lunqiang
Li, Bo
Zhao, Huiyan
IET Image Processing2023Journal Article, cited 0 times
QIN LUNG CT
Imaging Feature
Super-resolution
Algorithm Development
Cloud computing
PyTorch
Improving the resolution of computed tomography (CT) medical images can help doctors more accurately identify lesions, which is important in clinical diagnosis. In the absence of naturally paired high- and low-resolution images, we abandoned the conventional bicubic downsampling method and instead used a dataset of single-resolution images to create near-natural high/low-resolution image pairs by designing a deep learning network and utilizing noise injection. In addition, we propose a super-resolution generative adversarial network called KerSRGAN, which includes a super-resolution generator, a super-resolution discriminator, and a super-resolution feature extractor, to achieve 4× super-resolution of CT images. The results of an experimental evaluation show that KerSRGAN achieved superior performance compared to state-of-the-art methods in a quantitative comparison of no-reference image quality indicators on the generated 4× super-resolution CT images. Moreover, in an intuitive visual comparison, the images generated by the KerSRGAN method had more precise details and better perceptual quality.
SEY‐Net: Semantic edge Y‐shaped network for pancreas segmentation
Zhou, Bangyuan
Xin, Guojiang
Liang, Hao
Ding, Changsong
IET Image Processing2024Journal Article, cited 0 times
Website
Pancreas-CT
Auto-segmentation
Computer Aided Diagnosis (CADx)
Pancreas segmentation has great significance in the computer-aided diagnosis of pancreatic diseases. The small size of the pancreas, its high variability in shape, and its blurred edges make pancreas segmentation challenging. A new model called SEY-Net is proposed to solve these problems; it is a one-stage model with multiple inputs. SEY-Net is composed of three main components. First, the edge information extraction (EIE) module is designed to improve the segmentation accuracy of the pancreas boundary. Then, SE_ResNet50 is selected as the encoder's backbone to fit the size of the pancreas. Finally, dual cross-attention is integrated into the skip connections to better focus on the variable shape of the pancreas. The experimental results show that the proposed method performs better than other existing state-of-the-art pancreas segmentation methods.
Health Vigilance for Medical Imaging Diagnostic Optimization: Automated segmentation of COVID-19 lung infection from CT images
Bourekkadi, S.
Mohamed, Chala
Nsiri, Benayad
Abdelmajid, Soulaymani
Abdelghani, Mokhtari
Brahim, Benaji
Hami, H.
Mokhtari, A.
Slimani, K.
Soulaymani, A.
E3S Web of Conferences2021Journal Article, cited 0 times
Website
CT Images in COVID-19
Python
Computed Tomography (CT)
COVID-19
LUNG
Segmentation
Computer Aided Diagnosis (CADx)
The COVID-19 disease has confronted the world with an unprecedented health crisis; faced with its rapid spread, health systems are called upon to increase their vigilance. It is therefore essential to set up a quick, automated diagnosis that can relieve pressure on health systems. Many techniques are used to diagnose COVID-19, including imaging techniques such as computed tomography (CT). In this paper, we present an automatic method for segmenting COVID-19 lung infection from CT images that can be integrated into a decision support system for the diagnosis of COVID-19. To achieve this goal, we focused on new techniques based on artificial intelligence, in particular deep convolutional neural networks, and concentrated on the encoder-decoder architectures most popular in the medical imaging community. We used an open-access data collection for artificial-intelligence-based COVID-19 CT segmentation or classification as the dataset, and implemented the proposed model in Python with the Keras framework. A short description of the model, training, validation, and predictions is given, and the result is compared with existing labelled data. Testing the trained model on new images, we obtained an area under the ROC curve of 0.884 for the predictions compared with manual expert segmentation. Finally, an overview of future work is given, including the use of the proposed model in a homogeneous medical imaging framework for clinical purposes.
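As a sketch of the kind of Keras encoder-decoder the paper describes, the following builds a small model producing a per-pixel infection mask. The 128×128 input size, filter counts, and depth are illustrative assumptions, not the authors' architecture.

```python
# Minimal Keras encoder-decoder segmentation sketch (assumed layer sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_encoder_decoder(input_shape=(128, 128, 1)):
    inp = layers.Input(shape=input_shape)
    # Encoder: two downsampling stages
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    # Decoder: mirror the encoder with transposed convolutions
    x = layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same")(x)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)   # per-pixel infection mask
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

model = build_encoder_decoder()
model.summary()
```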
Deep learning in digital pathology for personalized treatment plans of cancer patients
Wen, Zhuoyu
Wang, Shidan
Yang, Donghan M
Xie, Yang
Chen, Mingyi
Bishop, Justin
Xiao, Guanghua
2023Journal Article, cited 0 times
TIL-WSI-TCGA
Over the past decade, many new cancer treatments have been developed and made available to patients. However, in most cases, these treatments only benefit a specific subgroup of patients, making the selection of treatment for a specific patient an essential but challenging task for oncologists. Although some biomarkers were found to associate with treatment response, manual assessment is time-consuming and subjective. With the rapid developments and expanded implementation of artificial intelligence (AI) in digital pathology, many biomarkers can be quantified automatically from histopathology images. This approach allows for a more efficient and objective assessment of biomarkers, aiding oncologists in formulating personalized treatment plans for cancer patients. This review presents an overview and summary of the recent studies on biomarker quantification and treatment response prediction using hematoxylin-eosin (H&E) stained pathology images. These studies have shown that an AI-based digital pathology approach can be practical and will become increasingly important in improving the selection of cancer treatments for patients.
Computer-aided nodule assessment and risk yield risk management of adenocarcinoma: the future of imaging?
The manual examination of blood and bone marrow specimens for leukemia patients is time-consuming and limited by intra- and inter-observer variance. The development of AI algorithms for leukemia diagnostics requires high-quality sample digitization and reliable annotation of large datasets. Deep learning-based algorithms using these datasets attain human-level performance for some well-defined, clinically relevant questions, such as the blast character of cells. Methods such as multiple-instance learning allow predicting diagnoses from a collection of leukocytes but are more data-intensive. Using explainable AI methods can make the prediction process more transparent and allow users to verify the algorithm's predictions. Stability and robustness analyses are necessary for the routine application of these algorithms, and regulatory institutions are developing standards for this purpose. Integrated diagnostics, which link different diagnostic modalities, offer the promise of even greater accuracy but require more extensive and diverse datasets.
Application of Artificial Neural Networks for Prognostic Modeling in Lung Cancer after Combining Radiomic and Clinical Features
Chufal, Kundan S.
Ahmad, Irfan
Pahuja, Anjali K.
Miller, Alexis A.
Singh, Rajpal
Chowdhary, Rahul L.
Asian Journal of Oncology2019Journal Article, cited 0 times
Website
NSCLC-Radiomics
LUNG
Machine Learning
Artificial Neural Network (ANN)
Classification
Objective: This study aimed to investigate machine learning (ML) and artificial neural networks (ANNs) in the prognostic modeling of lung cancer, utilizing high-dimensional data. Materials and Methods: A computed tomography (CT) dataset of inoperable non-small cell lung carcinoma (NSCLC) patients with embedded tumor segmentation and survival status, comprising 422 patients, was selected. Radiomic data extraction was performed with the Computational Environment for Radiotherapy Research (CERR). The survival probability was first determined based on clinical features only, and then with unsupervised ML methods. Supervised ANN modeling was performed by direct and hybrid modeling, which were subsequently compared. Statistical significance was set at p < 0.05. Results: Survival analyses based on clinical features alone were not significant, except for gender. ML clustering performed on unselected radiomic and clinical data demonstrated a significant difference in survival (two-step cluster, median overall survival [mOS]: 30.3 vs. 17.2 m; p = 0.03; K-means cluster, mOS: 21.1 vs. 7.3 m; p < 0.001). Direct ANN modeling yielded better overall model accuracy with the multilayer perceptron (MLP) than with the radial basis function (RBF; 79.2 vs. 61.4%, respectively). Hybrid modeling with MLP (after feature selection with ML) resulted in an overall model accuracy of 80%. There was no difference in model accuracy between direct and hybrid modeling (p = 0.164). Conclusion: Our preliminary study supports the application of ANNs in predicting outcomes based on radiomic and clinical data.
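A minimal multilayer perceptron of the kind compared above can be set up with scikit-learn. The hidden-layer sizes and the synthetic stand-in data below are placeholders, not the study's radiomic features.

```python
# Sketch (assumed configuration): an MLP classifier on radiomic-style features.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for a radiomic + clinical feature matrix and binary outcome labels.
X, y = make_classification(n_samples=200, n_features=30, random_state=0)

mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16),
                                  max_iter=2000, random_state=0))
print("CV accuracy:", cross_val_score(mlp, X, y, cv=5).mean())
```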
Rhinological Status of Patients with Nasolacrimal Duct Obstruction
Yartsev, Vasily D.
Atkova, Eugenia L.
Rozmanov, Eugeniy O.
Yartseva, Nina D.
International Archives of Otorhinolaryngology2021Journal Article, cited 0 times
Website
OPC-Radiomics
Computed Tomography (CT)
Introduction: Studying the state of the nasal cavity and its sinuses and the morphometric parameters of the inferior nasal conchae, together with a comparative analysis of the values obtained in patients with primary (PANDO) and secondary acquired nasolacrimal duct obstruction (SALDO), is relevant. Objective: To study the rhinological status of patients with PANDO and SALDO. Methods: The present study was based on the results of computed tomography (CT) dacryocystography in patients with PANDO (n = 45) and SALDO due to exposure to radioactive iodine (n = 14). The control group included CT images of paranasal sinuses in patients with no pathology (n = 49). Rhinological status according to the Newman and Lund-Mackay scales and the volume of the inferior nasal conchae were assessed. Statistical processing included nonparametric statistical methods, the χ2 Pearson test, and the Spearman rank correlation method. Results: The difference in values on the Newman and Lund-Mackay scales between the tested groups was significant. A significant difference in Newman scale scores was revealed when comparing the results of patients with SALDO and PANDO. Comparing the Lund-Mackay scores, a significant difference was found between the results of patients with SALDO and PANDO and between the results of patients with PANDO and the control group. Conclusion: The rhinological status of patients with PANDO was worse than that of patients with SALDO and of subjects in the control group. No connection was found between the volume of the inferior nasal conchae and the development of lacrimal duct obstruction. Keywords: nasolacrimal duct; sinus; computed tomography; dacryocystography; Newman scale; Lund-Mackay scale.
Reduced lung-cancer mortality with low-dose computed tomographic screening
The National Lung Screening Trial Research Team
Aberle, D. R.
Adams, A. M.
Berg, C. D.
Black, W. C.
Clapp, J. D.
Fagerstrom, R. M.
Gareen, I. F.
Gatsonis, C.
Marcus, P. M.
Sicks, J. D.
New England Journal of Medicine2011Journal Article, cited 4992 times
Website
NLST
LUNG
LDCT
BACKGROUND: The aggressive and heterogeneous nature of lung cancer has thwarted efforts to reduce mortality from this cancer through the use of screening. The advent of low-dose helical computed tomography (CT) altered the landscape of lung-cancer screening, with studies indicating that low-dose CT detects many tumors at early stages. The National Lung Screening Trial (NLST) was conducted to determine whether screening with low-dose CT could reduce mortality from lung cancer. METHODS: From August 2002 through April 2004, we enrolled 53,454 persons at high risk for lung cancer at 33 U.S. medical centers. Participants were randomly assigned to undergo three annual screenings with either low-dose CT (26,722 participants) or single-view posteroanterior chest radiography (26,732). Data were collected on cases of lung cancer and deaths from lung cancer that occurred through December 31, 2009. RESULTS: The rate of adherence to screening was more than 90%. The rate of positive screening tests was 24.2% with low-dose CT and 6.9% with radiography over all three rounds. A total of 96.4% of the positive screening results in the low-dose CT group and 94.5% in the radiography group were false positive results. The incidence of lung cancer was 645 cases per 100,000 person-years (1060 cancers) in the low-dose CT group, as compared with 572 cases per 100,000 person-years (941 cancers) in the radiography group (rate ratio, 1.13; 95% confidence interval [CI], 1.03 to 1.23). There were 247 deaths from lung cancer per 100,000 person-years in the low-dose CT group and 309 deaths per 100,000 person-years in the radiography group, representing a relative reduction in mortality from lung cancer with low-dose CT screening of 20.0% (95% CI, 6.8 to 26.7; P=0.004). The rate of death from any cause was reduced in the low-dose CT group, as compared with the radiography group, by 6.7% (95% CI, 1.2 to 13.6; P=0.02). CONCLUSIONS: Screening with the use of low-dose CT reduces mortality from lung cancer. (Funded by the National Cancer Institute; National Lung Screening Trial ClinicalTrials.gov number, NCT00047385.)
Results of initial low-dose computed tomographic screening for lung cancer
Church, T. R.
Black, W. C.
Aberle, D. R.
Berg, C. D.
Clingan, K. L.
Duan, F.
Fagerstrom, R. M.
Gareen, I. F.
Gierada, D. S.
Jones, G. C.
Mahon, I.
Marcus, P. M.
Sicks, J. D.
Jain, A.
Baum, S.
New England Journal of Medicine2013Journal Article, cited 529 times
Website
NLST
LUNG
LDCT
BACKGROUND: Lung cancer is the largest contributor to mortality from cancer. The National Lung Screening Trial (NLST) showed that screening with low-dose helical computed tomography (CT) rather than with chest radiography reduced mortality from lung cancer. We describe the screening, diagnosis, and limited treatment results from the initial round of screening in the NLST to inform and improve lung-cancer-screening programs. METHODS: At 33 U.S. centers, from August 2002 through April 2004, we enrolled asymptomatic participants, 55 to 74 years of age, with a history of at least 30 pack-years of smoking. The participants were randomly assigned to undergo annual screening, with the use of either low-dose CT or chest radiography, for 3 years. Nodules or other suspicious findings were classified as positive results. This article reports findings from the initial screening examination. RESULTS: A total of 53,439 eligible participants were randomly assigned to a study group (26,715 to low-dose CT and 26,724 to chest radiography); 26,309 participants (98.5%) and 26,035 (97.4%), respectively, underwent screening. A total of 7191 participants (27.3%) in the low-dose CT group and 2387 (9.2%) in the radiography group had a positive screening result; in the respective groups, 6369 participants (90.4%) and 2176 (92.7%) had at least one follow-up diagnostic procedure, including imaging in 5717 (81.1%) and 2010 (85.6%) and surgery in 297 (4.2%) and 121 (5.2%). Lung cancer was diagnosed in 292 participants (1.1%) in the low-dose CT group versus 190 (0.7%) in the radiography group (stage 1 in 158 vs. 70 participants and stage IIB to IV in 120 vs. 112). Sensitivity and specificity were 93.8% and 73.4% for low-dose CT and 73.5% and 91.3% for chest radiography, respectively. CONCLUSIONS: The NLST initial screening results are consistent with the existing literature on screening by means of low-dose CT and chest radiography, suggesting that a reduction in mortality from lung cancer is achievable at U.S. screening centers that have staff experienced in chest CT. (Funded by the National Cancer Institute; NLST ClinicalTrials.gov number, NCT00047385.).
Medical image segmentation using modified fuzzy c mean based clustering
Locating diseased areas in medical images is one of the most challenging tasks in the field of image segmentation. This paper presents a new approach to image segmentation using modified fuzzy c-means (MFCM) clustering. Considering low-illumination medical images, the input image is first enhanced using the histogram equalization (HE) technique. The enhanced image is then segmented into various regions using the MFCM-based approach. Local information is employed in the objective function of MFCM to overcome the issue of noise sensitivity. After that, the membership partitioning is improved using fast membership filtering. The observed results of the proposed scheme are found suitable in terms of various evaluation parameters in the experiments.
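For reference, a plain (unmodified) fuzzy c-means iteration looks like the following; the paper's MFCM additionally injects local spatial information into the objective function and filters the memberships, which is not reproduced in this sketch.

```python
# Standard fuzzy c-means (baseline, not the paper's MFCM variant).
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, eps=1e-5, seed=0):
    """Cluster feature vectors X of shape (n_samples, n_features) into c fuzzy
    clusters with fuzzifier m; returns cluster centers and memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))   # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            return centers, U_new
        U = U_new
    return centers, U

# Hypothetical usage: cluster pixel intensities of an equalized image.
pixels = np.random.default_rng(1).random((500, 1))   # stand-in intensities
centers, memberships = fuzzy_c_means(pixels, c=3)
```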
Integrating mechanism-based modeling with biomedical imaging to build practical digital twins for clinical oncology
Wu, Chengyue
Lorenzo, Guillermo
Hormuth, David A.
Lima, Ernesto A. B. F.
Slavkova, Kalina P.
DiCarlo, Julie C.
Virostko, John
Phillips, Caleb M.
Patt, Debra
Chung, Caroline
Yankeelov, Thomas E.
2022Journal Article, cited 0 times
TCGA-GBM
Digital twins employ mathematical and computational models to virtually represent a physical object (e.g., planes and human organs), predict the behavior of the object, and enable decision-making to optimize the future behavior of the object. While digital twins have been widely used in engineering for decades, their applications to oncology are only just emerging. Due to advances in experimental techniques quantitatively characterizing cancer, as well as advances in the mathematical and computational sciences, the notion of building and applying digital twins to understand tumor dynamics and personalize the care of cancer patients has been increasingly appreciated. In this review, we present the opportunities and challenges of applying digital twins in clinical oncology, with a particular focus on integrating medical imaging with mechanism-based, tissue-scale mathematical modeling. Specifically, we first introduce the general digital twin framework and then illustrate existing applications of image-guided digital twins in healthcare. Next, we detail both the imaging and modeling techniques that provide practical opportunities to build patient-specific digital twins for oncology. We then describe the current challenges and limitations in developing image-guided, mechanism-based digital twins for oncology along with potential solutions. We conclude by outlining five fundamental questions that can serve as a roadmap when designing and building a practical digital twin for oncology and attempt to provide answers for a specific application to brain cancer. We hope that this contribution provides motivation for the imaging science, oncology, and computational communities to develop practical digital twin technologies to improve the care of patients battling cancer.
Deep learning method for brain tumor identification with multimodal 3D-MRI
Among primary brain tumours, gliomas are the most frequent of all types. Accurate and detailed delineation of tumour borders is significant for detection, treatment planning, and the discovery of risk factors. This paper presents a brain tumour segmentation system using a deep learning approach. U-Net is a type of deep learning network that has been trained to segment brain tumours; essentially, the architecture is a nested, deeply supervised encoder-decoder network with skip connections. The BraTS dataset is used as the training data for the model. On the validation dataset, the reported scores are 0.757, 0.17, and 0.89.
Radiomics study of lung tumor volume segmentation technique in contrast-enhanced Computed Tomography (CT) thorax images: A comparative study
Medical image segmentation is crucial for extracting information on tumour characteristics, including in lung cancer. To obtain macroscopic information (tumour volume) and microscopic features (radiomics), an image segmentation process is required. Various advanced segmentation algorithms are available nowadays, yet there is no single 'best segmentation technique' across medical imaging modalities. This study compared manual slice-by-slice segmentation and semi-automated segmentation for lung tumour volume measurement, together with radiomics features from shape analysis and first-order statistical measures of texture analysis. Manual slice-by-slice delineation and region-growing semi-automated segmentation using the 3D Slicer software were performed on 45 sets of contrast-enhanced Computed Tomography (CT) thorax images downloaded from The Cancer Imaging Archive (TCIA). The results showed high similarity between the manual and semi-automated segmentations, with an average Hausdorff distance (AHD) of 1.02 ± 0.71 mm, a high Dice similarity coefficient (DSC) of 0.83 ± 0.05, and a p-value of 0.997 (p > 0.05). Overall, 84.62% of the shape-analysis features and 33.33% of the first-order statistical texture measures showed no significant difference between the two segmentation methods. In conclusion, semi-automated segmentation can perform as well as manual segmentation in lung tumour volume measurement, especially in terms of the ability to extract shape-order features for lung tumour radiomics analysis.
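The two agreement measures reported above can be computed as follows. `average_hausdorff` here is a symmetric average surface distance over boundary point sets, one common reading of "AHD"; the masks and points are assumed inputs.

```python
# Segmentation-agreement metrics: Dice coefficient and average Hausdorff distance.
import numpy as np
from scipy.spatial import cKDTree

def dice(mask_a, mask_b):
    """Dice similarity coefficient (DSC) between two binary tumour masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def average_hausdorff(points_a, points_b):
    """Symmetric average surface distance between two (N, 3) boundary point
    sets, in the same physical units as the points (e.g. mm)."""
    d_ab = cKDTree(points_b).query(points_a)[0]   # each a-point to nearest b-point
    d_ba = cKDTree(points_a).query(points_b)[0]   # each b-point to nearest a-point
    return (d_ab.mean() + d_ba.mean()) / 2.0
```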
Radiomics-based low and high-grade DCE-MRI breast tumor classification with an array of SVM classifiers
Breast cancer is an extremely prevalent cancer globally and a prominent cause of cancer-related fatalities. The grade of breast cancer is a prognostic marker representing its aggressive potential. Morphologically, tumors that are well differentiated, have a highly noticeable basal membrane, and show moderate proliferation are considered low grade (Grade I & II). Tumors with a massive nucleus, irregular shape and size, prominent nucleoli, inadequate cytoplasm, and high intensity are high grade (Grade III & IV). Dynamic Contrast-Enhanced MRI (DCE-MRI) has been extensively used to assess tumors and tumor grades, with an emphasis on heterogeneity and integrated inspections. Neoadjuvant chemotherapy (NAC) for breast cancer is traditionally administered to patients with locally advanced disease and is advantageous for surgical downstaging. Generally, the histological grade and proliferation index decrease after neoadjuvant chemotherapy and are connected to the therapeutic response. Radiomics is a novel approach for discovering tumor pathophysiology-related image information and possibly a pre-operative predictor of breast cancer pathological grade. Due to the heterogeneous nature of the tumor, histological grading remains challenging for the radiologist. This work extracts radiomics-based features from the QIN BREAST and QIN BREAST-02 datasets (N=47) of the publicly available TCIA database. The extracted features are used in the classification of low- and high-grade tumors with an array of support vector machine (SVM) algorithms: Quadratic SVM, Linear SVM, Cubic SVM, and Medium Gaussian SVM. Results show that the Linear SVM achieves a test accuracy of 81.2%, an AUC of 0.75, a sensitivity of 0.85, and an F-score of 0.89, better performance than the other SVM models. Hence, radiomics-based grade differentiation using DCE-MRI in patients with breast cancer could help to determine the potential for recovery with the right treatment.
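The "array of SVMs" above maps naturally onto scikit-learn kernels; the following sketch assumes synthetic stand-in data and 5-fold cross-validation rather than the study's exact protocol.

```python
# Sketch: comparing SVM kernel variants on radiomics-style features.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for radiomic features and low-grade (0) / high-grade (1) labels.
X, y = make_classification(n_samples=47, n_features=25, random_state=0)

svms = {"Linear SVM": SVC(kernel="linear"),
        "Quadratic SVM": SVC(kernel="poly", degree=2),
        "Cubic SVM": SVC(kernel="poly", degree=3),
        "Medium Gaussian SVM": SVC(kernel="rbf", gamma="scale")}

for name, svm in svms.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), svm), X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```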
Model discovery approach enables noninvasive measurement of intra-tumoral fluid transport in dynamic MRI
Woodall, Ryan T.
Esparza, Cora C.
Gutova, Margarita
Wang, Maosen
Cunningham, Jessica J.
Brummer, Alexander B.
Stine, Caleb A.
Brown, Christine C.
Munson, Jennifer M.
Rockne, Russell C.
2024Journal Article, cited 0 times
QIN-BREAST-02
DCE-MRI
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a routine method to noninvasively quantify perfusion dynamics in tissues. The standard practice for analyzing DCE-MRI data is to fit an ordinary differential equation to each voxel. Recent advances in data science provide an opportunity to move beyond existing methods to obtain more accurate measurements of fluid properties. Here, we developed a localized convolutional function regression that enables simultaneous measurement of interstitial fluid velocity, diffusion, and perfusion in 3D. We validated the method computationally and experimentally, demonstrating accurate measurement of fluid dynamics in situ and in vivo. Applying the method to human MRIs, we observed tissue-specific differences in fluid dynamics, with an increased fluid velocity in breast cancer as compared to brain cancer. Overall, our method represents an improved strategy for studying interstitial flows and interstitial transport in tumors and patients. We expect that our method will contribute to the better understanding of cancer progression and therapeutic response.
A novel resolution independent gradient edge predictor for lossless compression of medical image sequences
Sharma, Urvashi
Sood, Meenakshi
Puthooran, Emjee
2019Journal Article, cited 0 times
RIDER Breast MRI
Digital visualization of the human body in terms of medical images with high resolution and bit depth generates a tremendous amount of data. In the field of medical diagnosis, lossless compression techniques are preferred, as they facilitate efficient archiving and transmission of medical images while avoiding false diagnosis. Among the various approaches to lossless compression of medical images, predictive coding techniques have high coding efficiency and low complexity. The Gradient Edge Detector (GED) used in predictive coding relies on a threshold value for prediction, and the choice of threshold is very important for efficient prediction; however, no specific method for threshold selection has been adopted in the literature. This paper presents an efficient prediction solution targeted at the lossless compression of volumetric medical images of 8-bit and higher bit depths, up to 16 bits. The novelty of the proposed technique lies in developing the Resolution Independent Gradient Edge Predictor (RIGED) algorithm to support 8- and 16-bit-depth medical images. In terms of entropy, the proposed model improves by 30.39% over the state-of-the-art Median Edge Detector (MED) and by 0.92% over the Gradient Adaptive Predictor (GAP) on a medical image dataset of different modalities, resolutions, and bit depths.
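As context for the comparison above, the baseline Median Edge Detector (MED) predictor from JPEG-LS/LOCO-I can be written as below. RIGED's threshold-based gradient test is described in the paper but not reproduced here; the residual loop is a simple illustrative form that leaves the first row and column unpredicted.

```python
# Baseline MED (JPEG-LS / LOCO-I) predictor and its residual image.
import numpy as np

def med_predict(a, b, c):
    """Predict a pixel from its left (a), upper (b), and upper-left (c)
    neighbours: picks min/max near an edge, planar prediction otherwise."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def med_residuals(img):
    """Prediction residuals; the entropy of these residuals is the usual
    figure of merit for lossless predictive coding."""
    img = img.astype(np.int32)
    res = np.zeros_like(img)
    for i in range(1, img.shape[0]):
        for j in range(1, img.shape[1]):
            res[i, j] = img[i, j] - med_predict(img[i, j - 1],
                                                img[i - 1, j],
                                                img[i - 1, j - 1])
    return res
```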
A Novel Distributed Matching Global and Local Fuzzy Clustering (DMGLFC) for 3D Brain Image Segmentation for Tumor Detection
Sumithra, M.
Malathi, S.
2022Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
In this paper, we propose a novel Distribution Matching Global and Local Fuzzy Clustering (DMGLFC) method for image segmentation, targeting 3D MRI brain images for tumor detection. DMGLFC estimates uncertainties with consideration of the different classes; the uncertainties are estimated from global and local entropy. For each identified voxel in the 3D brain MRI, the global entropy is estimated with a fuzzy weighted membership function, while the local entropy measurement uses spatial likelihood estimation of a fuzzifier-weighted membership function. The proposed DMGLFC segments MRI tumors effectively based on a fuzzy objective function with entropy measurement; depending upon the weighted parameters, the tumors present in the 3D images are classified with respect to the global and local entropy. The performance of the proposed algorithm is measured in terms of the Dice similarity coefficient (DSC), accuracy (Acc), sensitivity (true positive rate), specificity (true negative rate), and bit error rate (BER). Comparative analysis of the results shows that the proposed DMGLFC approach performs significantly better than existing techniques.
The impact of initial tumor bulk in DLBCL treated with DA-EPOCH-R vs. R-CHOP: a secondary analysis of Alliance/CALGB 50303
The ideal treatment paradigm for bulky diffuse large B-cell lymphoma (DLBCL) remains uncertain. We investigated the impact of tumor bulk in patients treated with systemic therapy alone through Alliance/CALGB 50303. Data from this trial were obtained from the National Cancer Institute's NCTN/NCORP Data Archive. The study assessed the size of nodal sites and estimated progression-free survival (PFS) using Cox proportional hazards models. Stratified analyses factored in International Prognostic Index (IPI) risk scores. Out of 524 patients, 155 had pretreatment scans. Using a 7.5 cm cutoff, 44% were classified as bulky. Bulk did not significantly impact PFS, whether measured continuously or at thresholds of >5 cm or >7.5 cm (p = 0.10 to p = 0.99). Stratified analyses by treatment group and IPI risk group were also non-significant. In this secondary analysis, a significant association between bulk and PFS was not identified.
The prognostic impact of upfront tumor bulk in DLBCL remains unclear. In this secondary analysis of a phase III trial comparing DA-EPOCH-R to R-CHOP, a significant association between upfront tumor bulk and PFS was not identified.
Persistent epigenetic changes in adult daughters of older mothers
Moore, Aaron M
Xu, Zongli
Kolli, Ramya T
White, Alexandra J
Sandler, Dale P
Taylor, Jack A
2019Journal Article, cited 0 times
TCGA-HNSC
Women of advanced maternal age account for an increasing proportion of live births in many developed countries across the globe. Offspring of older mothers are at an increased risk for a variety of subsequent health outcomes, including outcomes that do not manifest until childhood or adulthood. The molecular underpinnings of the association between maternal aging and offspring morbidity remain elusive. However, one possible mechanism is that maternal aging produces specific alterations in the offspring's epigenome in utero, and these epigenetic alterations persist into adulthood. We conducted an epigenome-wide association study (EWAS) of the effect of a mother's age on blood DNA methylation in 2,740 adult daughters using the Illumina Infinium HumanMethylation450 array. A false discovery rate (FDR) q-value threshold of 0.05 was used to identify differentially methylated CpG sites (dmCpGs). We identified 87 dmCpGs associated with increased maternal age. The majority (84%) of the dmCpGs had lower methylation in daughters of older mothers, with an average methylation difference of 0.6% per 5-year increase in mother's age. Thirteen genomic regions contained multiple dmCpGs. Most notably, nine dmCpGs were found in the promoter region of the gene LIM homeobox 8 (LHX8), which plays a pivotal role in female fertility. Other dmCpGs were found in genes associated with metabolically active brown fat, carcinogenesis, and neurodevelopmental disorders. We conclude that maternal age is associated with persistent epigenetic changes in daughters at genes that have intriguing links to health.
Local mesh ternary patterns: a new descriptor for MRI and CT biomedical image indexing and retrieval
Deep, G.
Kaur, L.
Gupta, S.
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization2016Journal Article, cited 3 times
Website
Algorithm Development
LIDC-IDRI
This paper proposes a new pattern-based feature called the local mesh ternary pattern for biomedical image indexing and retrieval. Standard local binary patterns (LBP) and local ternary patterns (LTP) encode the greyscale relationship between the centre pixel and its surrounding neighbours in a two-dimensional (2D) local region of an image, whereas the proposed method encodes the greyscale relationship among the neighbours of a given centre pixel along three selected directions of mesh patterns generated from the 2D image. The novelty of the proposed method is that it applies ternary patterns to the mesh patterns of an image to encode more spatial structure information, which leads to better retrieval. Experiments demonstrating the worth of the proposed algorithm have been carried out on three benchmarked biomedical databases: (i) two computed tomography (CT) lung image databases, LIDC-IDRI-CT and VIA/I–ELCAP-CT, and (ii) a brain magnetic resonance imaging (MRI) database, OASIS-MRI. The results demonstrate that the proposed method yields better performance in terms of average retrieval precision and average retrieval rate than state-of-the-art feature extraction techniques such as LBP, LTP, and the local mesh pattern.
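A standard LTP, the building block the mesh variant extends, can be sketched as follows. The threshold t=5 is an arbitrary choice, and the directional mesh encoding of the proposed method is not reproduced; this is the conventional 8-neighbour descriptor split into upper/lower binary codes.

```python
# Conventional local ternary pattern (LTP) over the 8-neighbourhood.
import numpy as np

def local_ternary_pattern(img, t=5):
    """Upper/lower LTP codes for each interior pixel of a grayscale image.
    Neighbours more than t above the centre code +1, more than t below -1,
    otherwise 0; the ternary code is split into two binary patterns."""
    img = img.astype(np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    upper = np.zeros((h - 2, w - 2), dtype=np.uint8)
    lower = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for k, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]   # shifted neighbour view
        upper |= ((nb > centre + t).astype(np.uint8) << k)
        lower |= ((nb < centre - t).astype(np.uint8) << k)
    return upper, lower
```

Histograms of the upper and lower codes are typically concatenated to form the retrieval feature vector.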
Are all shortcuts in encoder–decoder networks beneficial for CT denoising?
Chen, Junhua
Zhang, Chong
Wee, Leonard
Dekker, Andre
Bermejo, Inigo
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization2022Journal Article, cited 0 times
Website
NSCLC-Radiomics
Image denoising
Computed Tomography (CT)
Deep Learning
Denoising of CT scans has attracted the attention of many researchers in the medical image analysis domain. Encoder–decoder networks are deep learning neural networks that have become common for image denoising in recent years. Shortcuts between the encoder and decoder layers are crucial for some image-to-image translation tasks. However, are all shortcuts necessary for CT denoising? To answer this question, we set up two encoder–decoder networks representing two popular architectures and then progressively removed shortcuts from the networks from shallow to deep (forward removal) and from deep to shallow (backward removal). We used two unrelated datasets with different noise levels to test the denoising performance of these networks using two metrics, namely root mean square error and content loss. The results show that while more than half of the shortcuts are still indispensable for CT scan denoising, removing certain shortcuts leads to performance improvement for denoising. Both shallow and deep shortcuts might be removed, thus retaining sparse connections, especially when the noise level is high. Backward removal seems to have a better performance than forward removal, which means deep shortcuts have priority to be removed. Finally, we propose a hypothesis to explain this phenomenon and validate it in the experiments.
Machine learning based Breast Cancer screening: trends, challenges, and opportunities
Zizaan, Asma
Idri, Ali
2023Journal Article, cited 0 times
CBIS-DDSM
Although breast cancer (BC) deaths have decreased over time, it remains the second leading cause of cancer-related deaths among women. With the technical advancement of artificial intelligence (AI) and availability of healthcare data, many researchers have attempted to employ machine learning (ML) techniques to gain a better understanding of this disease. The present study was a systematic literature review (SLR) of the use of machine learning techniques in breast cancer screening (BCS) between 2011 and 2021. A total of 66 papers were selected and analysed to address nine criteria: year of publication, publication venue, paper type, BCS modality, empirical type, ML technique, performance, advantages and disadvantages, and dataset used. The results showed that mammography was the most frequently used BCS modality, and that classification was the most used ML objective. Moreover, of the six investigated ML techniques, convolutional neural network models scored the highest median accuracy with 96.67%, followed by adaptive boosting (88.9%).
Adding features from the mathematical model of breast cancer to predict the tumour size
Nave, Ophir
International Journal of Computer Mathematics: Computer Systems Theory2020Journal Article, cited 0 times
Website
Breast-MRI-NACT-Pilot
ISPY1/ACRIN 6657
Machine Learning
Radiomics
In this study, we combine a theoretical mathematical model with machine learning (ML) to predict tumour sizes in breast cancer. Our study is based on clinical data from 1869 women of various ages with breast cancer. To accurately predict tumour size for each woman individually, we solved our customized mathematical model for each woman, then added the solution vector of the dynamic variables in the model (in machine learning language, these are called features) to the clinical data and used a variety of machine learning algorithms. We compared the results obtained with and without the mathematical model and showed that by adding specific features from the mathematical model we were able to better predict tumour size for each woman.
Feasibility of proton dosimetry overriding planning CT with daily CBCT elaborated through generative artificial intelligence tools
Rossi, Matteo
Belotti, Gabriele
Mainardi, Luca
Baroni, Guido
Cerveri, Pietro
2024Journal Article, cited 0 times
Pancreatic-CT-CBCT-SEG
radiotherapy
Artificial Intelligence
Spiral Cone-Beam Computed Tomography
Image Enhancement
Radiotherapy commonly utilizes cone beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is deemed safe for patients, making it suitable for use at each delivered fraction. However, limitations such as a narrow field of view (FOV), beam hardening, scattered radiation artifacts, and variability in pixel intensity hinder the direct use of raw CBCT for dose recalculation during treatment. To address this issue, reliable correction techniques are necessary to remove artifacts and remap pixel intensities to Hounsfield Unit (HU) values. This study proposes a deep-learning framework for calibrating CBCT images acquired with narrow-FOV systems and demonstrates its potential use in proton treatment planning updates. A cycle-consistent generative adversarial network (cGAN) processes raw CBCT to reduce scatter and remap HU values. Monte Carlo simulation is used to generate CBCT scans, making it possible to focus solely on the algorithm's ability to reduce artifacts and cupping effects without considering intra-patient longitudinal variability, and producing a fair comparison between planning CT (pCT) and calibrated CBCT dosimetry. To showcase the viability of the approach on real-world data, experiments were also conducted using real CBCT. Tests were performed on a publicly available dataset of 40 patients who received ablative radiation therapy for pancreatic cancer. The simulated CBCT calibration led to a difference in proton dosimetry of less than 2% compared to the planning CT. The potential toxicity effect on the organs at risk decreased from about 50% (uncalibrated) to about 2% (calibrated). The gamma pass rate at 3%/2 mm improved by about 37% in replicating the prescribed dose after calibration (53.78% vs 90.26%). Real data also confirmed this, with slightly inferior performance for the same criteria (65.36% vs 87.20%). These results suggest that generative artificial intelligence brings the use of narrow-FOV CBCT scans incrementally closer to clinical translation in proton therapy planning updates.
Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans
Lassen, BC
Jacobs, C
Kuhnigk, JM
van Ginneken, B
van Rikxoort, EM
Physics in Medicine and Biology2015Journal Article, cited 25 times
Website
LIDC-IDRI
Reproducibility
LUNG
The malignancy of lung nodules is most often detected by analyzing changes of the nodule diameter in follow-up scans. A recent study showed that comparing the volume or the mass of a nodule over time is much more significant than comparing the diameter. Since the survival rate is higher when the disease is still at an early stage, it is important to detect the growth rate as soon as possible; however, manual segmentation of a volume is time-consuming. Whereas there are several well-evaluated methods for the segmentation of solid nodules, less work has been done on subsolid nodules, which actually show a higher malignancy rate than solid nodules. In this work we present a fast, semi-automatic method for segmentation of subsolid nodules. As minimal user interaction, the method expects a user-drawn stroke on the largest diameter of the nodule. First, a threshold-based region growing is performed based on intensity analysis of the nodule region and surrounding parenchyma. In the next step, the chest wall is removed by a combination of connected component analysis and convex hull calculation. Finally, attached vessels are detached by morphological operations. The method was evaluated on all nodules of the publicly available LIDC/IDRI database that were manually segmented and rated as non-solid or part-solid by four radiologists (Dataset 1) and three radiologists (Dataset 2). For these 59 nodules, the Jaccard index for the agreement of the proposed method with the manual reference segmentations was 0.52/0.50 (Dataset 1/Dataset 2), compared to an inter-observer agreement of the manual segmentations of 0.54/0.58 (Dataset 1/Dataset 2). Furthermore, the inter-observer agreement using the proposed method (i.e. different input strokes) was analyzed and gave a Jaccard index of 0.74/0.74 (Dataset 1/Dataset 2). The presented method provides satisfactory segmentation results with minimal observer effort in minimal time and can reduce the inter-observer variability for the segmentation of subsolid nodules in clinical routine.
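The threshold-based region growing and the Jaccard agreement measure above can be sketched as follows. The fixed `tolerance` is an illustrative stand-in for the paper's thresholds derived from nodule/parenchyma intensity analysis, and the chest-wall and vessel-removal steps are omitted.

```python
# Sketch: seeded region growing on a CT slice plus the Jaccard overlap metric.
import numpy as np
from skimage.segmentation import flood

def grow_from_stroke(ct_slice, seed, tolerance=130.0):
    """Grow a binary region from a seed point (row, col) on a CT slice in HU,
    including pixels within `tolerance` of the seed intensity."""
    return flood(ct_slice, seed_point=tuple(seed), tolerance=tolerance)

def jaccard(mask_a, mask_b):
    """Jaccard index between two binary masks, as reported in the study."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```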
A radiomics model from joint FDG-PET and MRI texture features for the prediction of lung metastases in soft-tissue sarcomas of the extremities
Vallières, Martin
Freeman, CR
Skamene, SR
El Naqa, I
Physics in Medicine and Biology2015Journal Article, cited 199 times
Website
Soft Tissue Sarcoma
Radiomics
Positron emission tomography (PET)
Magnetic Resonance Imaging (MRI)
This study aims at developing a joint FDG-PET and MRI texture-based model for the early evaluation of lung metastasis risk in soft-tissue sarcomas (STSs). We investigate if the creation of new composite textures from the combination of FDG-PET and MR imaging information could better identify aggressive tumours. Towards this goal, a cohort of 51 patients with histologically proven STSs of the extremities was retrospectively evaluated. All patients had pre-treatment FDG-PET and MRI scans comprised of T1-weighted and T2-weighted fat-suppression sequences (T2FS). Nine non-texture features (SUV metrics and shape features) and forty-one texture features were extracted from the tumour region of separate (FDG-PET, T1 and T2FS) and fused (FDG-PET/T1 and FDG-PET/T2FS) scans. Volume fusion of the FDG-PET and MRI scans was implemented using the wavelet transform. The influence of six different extraction parameters on the predictive value of textures was investigated. The incorporation of features into multivariable models was performed using logistic regression. The multivariable modeling strategy involved imbalance-adjusted bootstrap resampling in the following four steps leading to final prediction model construction: (1) feature set reduction; (2) feature selection; (3) prediction performance estimation; and (4) computation of model coefficients. Univariate analysis showed that the isotropic voxel size at which texture features were extracted had the most impact on predictive value. In multivariable analysis, texture features extracted from fused scans significantly outperformed those from separate scans in terms of lung metastases prediction estimates. The best performance was obtained using a combination of four texture features extracted from FDG-PET/T1 and FDG-PET/T2FS scans. This model reached an area under the receiver-operating characteristic curve of 0.984 ± 0.002, a sensitivity of 0.955 ± 0.006, and a specificity of 0.926 ± 0.004 in bootstrapping evaluations. Ultimately, lung metastasis risk assessment at diagnosis of STSs could improve patient outcomes by allowing better treatment adaptation.
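A simplified version of the bootstrap performance estimation step above can be written as follows; the out-of-bag evaluation, iteration count, and stand-in data are assumptions, not the paper's imbalance-adjusted procedure.

```python
# Sketch: bootstrap AUC estimation for a logistic regression texture model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def bootstrap_auc(X, y, n_boot=200, seed=0):
    """Fit on a resampled cohort, score on the out-of-bag patients."""
    rng = np.random.default_rng(seed)
    n, aucs = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # bootstrap sample (with replacement)
        oob = np.setdiff1d(np.arange(n), idx)       # left-out patients
        if len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
            continue                                # need both classes on each side
        model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        aucs.append(roc_auc_score(y[oob], model.predict_proba(X[oob])[:, 1]))
    return float(np.mean(aucs)), float(np.std(aucs))

# Stand-in for a texture-feature matrix and lung-metastasis labels (51 patients).
X, y = make_classification(n_samples=51, n_features=4, random_state=0)
print(bootstrap_auc(X, y))
```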
Equivariant neural networks for inverse problems
Celledoni, Elena
Ehrhardt, Matthias J
Etmann, Christian
Owren, Brynjulf
Schönlieb, Carola-Bibiane
Sherry, Ferdia
2021Journal Article, cited 0 times
LIDC-IDRI
In recent years the use of convolutional layers to encode an inductive bias (translational equivariance) in neural networks has proven to be a very fruitful idea. The successes of this approach have motivated a line of research into incorporating other symmetries into deep learning methods, in the form of group equivariant convolutional neural networks. Much of this work has been focused on roto-translational symmetry of R^d, but other examples are the scaling symmetry of R^d and rotational symmetry of the sphere. In this work, we demonstrate that group equivariant convolutional operations can naturally be incorporated into learned reconstruction methods for inverse problems that are motivated by the variational regularisation approach. Indeed, if the regularisation functional is invariant under a group symmetry, the corresponding proximal operator will satisfy an equivariance property with respect to the same group symmetry. As a result of this observation, we design learned iterative methods in which the proximal operators are modelled as group equivariant convolutional neural networks. We use roto-translationally equivariant operations in the proposed methodology and apply it to the problems of low-dose computerised tomography reconstruction and subsampled magnetic resonance imaging reconstruction. The proposed methodology is demonstrated to improve the reconstruction quality of a learned reconstruction method with a little extra computational cost at training time but without any extra cost at test time.
Enhancement of multimodality texture-based prediction models via optimization of PET and MR image acquisition protocols: a proof of concept
Vallières, Martin
Laberge, Sébastien
Diamant, André
El Naqa, Issam
Physics in Medicine & Biology2017Journal Article, cited 3 times
Website
Algorithm Development
Radiomics
Magnetic Resonance Imaging (MRI)
Computed Tomography (CT)
Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice ('span'). Simulated T 1-weighted and T 2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of [Formula: see text] in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters ([Formula: see text]), with an average AUC of [Formula: see text]. Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.
Planning, guidance, and quality assurance of pelvic screw placement using deformable image registration
Goerres, J.
Uneri, A.
Jacobson, M.
Ramsay, B.
De Silva, T.
Ketcha, M.
Han, R.
Manbachi, A.
Vogt, S.
Kleinszig, G.
Wolinsky, J. P.
Osgood, G.
Siewerdsen, J. H.
Phys Med Biol2017Journal Article, cited 4 times
Website
CT Lymph Nodes
Segmentation
Percutaneous pelvic screw placement is challenging due to narrow bone corridors surrounded by vulnerable structures and difficult visual interpretation of complex anatomical shapes in 2D x-ray projection images. To address these challenges, a system for planning, guidance, and quality assurance (QA) is presented, providing functionality analogous to surgical navigation, but based on robust 3D-2D image registration techniques using fluoroscopy images already acquired in routine workflow. Two novel aspects of the system are investigated: automatic planning of pelvic screw trajectories and the ability to account for deformation of surgical devices (K-wire deflection). Atlas-based registration is used to calculate a patient-specific plan of screw trajectories in preoperative CT. 3D-2D registration aligns the patient to CT within the projective geometry of intraoperative fluoroscopy. Deformable known-component registration (dKC-Reg) localizes the surgical device, and the combination of plan and device location is used to provide guidance and QA. A leave-one-out analysis evaluated the accuracy of automatic planning, and a cadaver experiment compared the accuracy of dKC-Reg to rigid approaches (e.g. optical tracking). Surgical plans conformed within the bone cortex by 3-4 mm for the narrowest corridor (superior pubic ramus) and >5 mm for the widest corridor (tear drop). The dKC-Reg algorithm localized the K-wire tip within 1.1 mm and 1.4 degrees and was consistently more accurate than rigid-body tracking (errors up to 9 mm). The system was shown to automatically compute reliable screw trajectories and accurately localize deformed surgical devices (K-wires). Such capability could improve guidance and QA in orthopaedic surgery, where workflow is impeded by manual planning, conventional tool trackers add complexity and cost, rigid tool assumptions are often inaccurate, and qualitative interpretation of complex anatomy from 2D projections is prone to trial-and-error with extended fluoroscopy time.
Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values
Scarpelli, M.
Eickhoff, J.
Cuna, E.
Perlman, S.
Jeraj, R.
Phys Med Biol2018Journal Article, cited 2 times
Website
Head-Neck Cetuximab
RTOG-0522
Radiation Therapy Oncology Group (RTOG)
head and neck squamous cell carcinoma
positron emission tomography (PET)
standardized uptake value (SUV)
18F-FDG
18F-FLT
Box-Cox transform
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits the utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, frequently used in practice, is to blindly apply a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. METHODS: The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (lambda) and selecting the parameter that maximized the Shapiro-Wilk p-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent (18)F-fluorodeoxyglucose ((18)F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent (18)F-Fluorothymidine ((18)F-FLT) PET scans at our institution. RESULTS: After applying the optimal Box-Cox transformations, neither the pre nor the post treatment (18)F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for (18)F-FLT PET SUV distributions (P > 0.10). For both (18)F-FDG and (18)F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both (18)F-FDG and (18)F-FLT where a log transformation was not optimal for providing normal SUV distributions. CONCLUSION: Optimization of the Box-Cox transformation offers a solution for identifying normal SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
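The lambda search described above can be implemented directly with SciPy; the grid range and the stand-in SUV values below are assumptions, not the study's settings.

```python
# Grid-search the Box-Cox lambda that maximises the Shapiro-Wilk p-value.
import numpy as np
from scipy import stats

def optimal_boxcox(suv, lambdas=np.linspace(-2, 2, 81)):
    """Pick the Box-Cox lambda whose transformed SUVs look most normal,
    judged by the Shapiro-Wilk p-value (SUVs must be strictly positive)."""
    best = max(((lmb, stats.shapiro(stats.boxcox(suv, lmbda=lmb))[1])
                for lmb in lambdas), key=lambda t: t[1])
    return best  # (lambda, Shapiro-Wilk p-value)

# Stand-in for tumor SUVmax values from a patient cohort:
suv = np.random.default_rng(0).lognormal(mean=1.0, sigma=0.5, size=57)
print(optimal_boxcox(suv))
```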
Radiogenomics of hepatocellular carcinoma: multiregion analysis-based identification of prognostic imaging biomarkers by integrating gene data—a preliminary study
Xia, Wei
Chen, Ying
Zhang, Rui
Yan, Zhuangzhi
Zhou, Xiaobo
Zhang, Bo
Gao, Xin
Physics in Medicine and Biology2018Journal Article, cited 0 times
TCGA-LIHC
Our objective was to identify prognostic imaging biomarkers for hepatocellular carcinoma in contrast-enhanced computed tomography (CECT) with biological interpretations by associating imaging features and gene modules. We retrospectively analyzed 371 patients who had gene expression profiles. For the 38 patients with CECT imaging data, automatic intra-tumor partitioning was performed, resulting in three spatially distinct subregions. We extracted a total of 37 quantitative imaging features describing intensity, geometry, and texture from each subregion. Imaging features were selected after robustness and redundancy analysis. Gene modules acquired from clustering were chosen for their prognostic significance. By constructing an association map between imaging features and gene modules with Spearman rank correlations, the imaging features that significantly correlated with gene modules were obtained. These features were evaluated with Cox's proportional hazard models and Kaplan-Meier estimates to determine their prognostic capabilities for overall survival (OS). Eight imaging features were significantly correlated with prognostic gene modules, and two of them were associated with OS. Among these, the geometry feature volume fraction of the subregion, which was significantly correlated with all prognostic gene modules representing cancer-related interpretation, was predictive of OS (Cox p = 0.022, hazard ratio = 0.24). The texture feature cluster prominence in the subregion, which was correlated with the prognostic gene module representing lipid metabolism and complement activation, also had the ability to predict OS (Cox p = 0.021, hazard ratio = 0.17). Imaging features depicting the volume fraction and textural heterogeneity in subregions have the potential to be predictors of OS with interpretable biological meaning.
Computer-aided diagnosis of lung cancer: the effect of training data sets on classification accuracy of lung nodules
Gong, Jing
Liu, Ji-Yu
Sun, Xi-Wen
Zheng, Bin
Nie, Sheng-Dong
Physics in Medicine and Biology2018Journal Article, cited 51 times
Website
NSCLC-Radiomics
This study aims to develop a computer-aided diagnosis (CADx) scheme for classification between malignant and benign lung nodules, and also assess whether CADx performance changes in detecting nodules associated with early and advanced stage lung cancer. The study involves 243 biopsy-confirmed pulmonary nodules. Among them, 76 are benign, 81 are stage I and 86 are stage III malignant nodules. The cases are separated into three data sets involving: (1) all nodules, (2) benign and stage I malignant nodules, and (3) benign and stage III malignant nodules. A CADx scheme is applied to segment lung nodules depicted on computed tomography images and we initially computed 66 3D image features. Then, three machine learning models namely, a support vector machine, naïve Bayes classifier and linear discriminant analysis, are separately trained and tested by using three data sets and a leave-one-case-out cross-validation method embedded with a Relief-F feature selection algorithm. When separately using three data sets to train and test three classifiers, the average areas under receiver operating characteristic curves (AUC) are 0.94, 0.90 and 0.99, respectively. When using the classifiers trained using data sets with all nodules, average AUC values are 0.88 and 0.99 for detecting early and advanced stage nodules, respectively. AUC values computed from three classifiers trained using the same data set are consistent without statistically significant difference (p > 0.05). This study demonstrates (1) the feasibility of applying a CADx scheme to accurately distinguish between benign and malignant lung nodules, and (2) a positive trend between CADx performance and cancer progression stage. Thus, in order to increase CADx performance in detecting subtle and early cancer, training data sets should include more diverse early stage cancer cases.
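The evaluation loop described above, leave-one-case-out cross-validation with feature selection embedded in each fold, can be sketched with scikit-learn. Relief-F itself is not part of scikit-learn, so a univariate F-test selector stands in for it here; the feature matrix and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(243, 66))    # 243 nodules x 66 3D image features
y = rng.integers(0, 2, size=243)  # 0 = benign, 1 = malignant

# Selection inside the pipeline keeps each left-out case unseen by the selector.
model = make_pipeline(SelectKBest(f_classif, k=15),
                      SVC(kernel="rbf", probability=True))
scores = cross_val_predict(model, X, y, cv=LeaveOneOut(),
                           method="predict_proba")[:, 1]
print("LOOCV AUC:", roc_auc_score(y, scores))
```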
Effectiveness of different rescanning techniques for scanned proton radiotherapy in lung cancer patients
Engwall, E
Glimelius, L
Hynning, E
Physics in Medicine and Biology2018Journal Article, cited 54 times
Website
4D-Lung
Non-Small-Cell Lung cancer
4D CT
radiotherapy
Non-small cell lung cancer (NSCLC) is a tumour type thought to be well-suited for proton radiotherapy. However, the lung region poses many problems related to organ motion and can, for actively scanned beams, induce severe interplay effects. In this study we investigate four mitigating rescanning techniques: (1) volumetric rescanning, (2) layered rescanning, (3) breath-sampled (BS) layered rescanning, and (4) continuous breath-sampled (CBS) layered rescanning. The breath-sampled methods spread the layer rescans over a full breathing cycle, resulting in an improved averaging effect at the expense of longer treatment times. In CBS, we aim at further improving the averaging by delivering as many rescans as possible within one breathing cycle. The interplay effect was evaluated for 4D robustly optimized treatment plans (with and without rescanning) for seven NSCLC patients in the treatment planning system RayStation. The optimization and final dose calculation used a Monte Carlo dose engine to account for the density heterogeneities in the lung region. A realistic treatment delivery time structure given by the IBA ScanAlgo simulation tool served as the basis for the interplay evaluation. Both slow (2.0 s) and fast (0.1 s) energy switching times were simulated. For all seven studied patients, rescanning improves the dose conformity to the target. The general trend is that the breath-sampled techniques are superior to layered and volumetric rescanning with respect to both target coverage and variability in dose to OARs. The spacing between rescans in our breath-sampled techniques is set at planning, based on the average breathing cycle length obtained in conjunction with CT acquisition. For moderately varied breathing cycle lengths between planning and delivery (up to 15%), the breath-sampled techniques still mitigate the interplay effect well. This shows the potential for smooth implementation at the clinic without additional motion monitoring equipment.
An in silico performance characterization of respiratory motion guided 4DCT for high-quality low-dose lung cancer imaging
Martin, Spencer
O’Brien, Ricky
Hofmann, Christian
Keall, Paul
Kipritidis, John
Physics in Medicine and Biology2018Journal Article, cited 0 times
4D-Lung
This work aims to characterize the performance of an improved 4DCT technique designed to overcome irregular breathing-related image artifacts. To address this, we have developed respiratory motion guided (RMG) 4DCT, which uses real-time breathing motion analysis to prospectively gate scans based on detection of irregular breathing. This is the first investigation of RMG-4DCT using a real-time software prototype, testing the hypothesis that it can reduce breathing irregularities during imaging, reduce image oversampling and improve image quality compared to a 'conventional' 4DCT protocol without breathing guidance. RMG-4DCT scans were simulated based on 100+ hours of breathing motion acquired for 20 lung cancer patients. Scan performance was quantified in terms of the beam-on time (a surrogate for imaging dose), total scan time and the breathing irregularity during imaging (via RMSE of the breathing motion during acquisition). A conventional 4DCT protocol was also implemented using the same software prototype as a direct comparator to the RMG-4DCT results. We investigated the impact of key RMG-4DCT parameters such as gating tolerance, gantry rotation time and the use of baseline drift correction. Using a representative set of algorithm parameters, RMG-4DCT achieved significant mean reductions in estimated imaging dose (-17.8%, p < 0.001) and breathing RMSE during imaging (-12.6%, p < 0.001) compared to conventional 4DCT. These improvements came with increased scan times, roughly doubled on average (104%, p < 0.001). Image quality simulations were performed using the deformable digital XCAT phantom, with image quality quantified based on the normalized cross correlation (NCC) between axial slices. RMG-4DCT demonstrated qualitative image quality improvements for three out of 10 phase bins; however, the improvement was not significant across all 10 phases (p = 0.08) at a population level. In choosing RMG-4DCT scan parameters, the trade-off between gating sensitivity and scan time may be optimized, demonstrating potential for RMG-4DCT as a viable pathway to improve clinical 4DCT imaging.
Reliable gene mutation prediction in clear cell renal cell carcinoma through multi-classifier multi-objective radiogenomics model
Chen, Xi
Zhou, Zhiguo
Hannan, Raquibul
Thomas, Kimberly
Pedrosa, Ivan
Kapur, Payal
Brugarolas, James
Mou, Xuanqin
Wang, Jing
Physics in Medicine and Biology2018Journal Article, cited 45 times
Website
TCGA-KIRP
Renal Cell
CT
radiogenomics
Genetic studies have identified associations between gene mutations and clear cell renal cell carcinoma (ccRCC). Since the complete gene mutational landscape cannot be characterized through biopsy and sequencing assays for each patient, non-invasive tools are needed to determine the mutation status for tumors. Radiogenomics may be an attractive alternative tool to identify disease genomics by analyzing features extracted from medical images. Most current radiogenomics predictive models are built based on a single classifier and trained through a single objective. However, since many classifiers are available, selecting an optimal model is challenging. On the other hand, a single objective may not be a good measure to guide model training. We proposed a new multi-classifier multi-objective (MCMO) radiogenomics predictive model. To obtain more reliable prediction results, similarity-based sensitivity and specificity were defined and considered as the two objective functions simultaneously during training. To take advantage of different classifiers, the evidential reasoning (ER) approach was used for fusing the output of each classifier. Additionally, a new similarity-based multi-objective optimization algorithm (SMO) was developed for training the MCMO to predict ccRCC-related gene mutations (VHL, PBRM1 and BAP1) using quantitative CT features. Using the proposed MCMO model, we achieved a predictive area under the receiver operating characteristic curve (AUC) over 0.85 for VHL, PBRM1 and BAP1 genes with balanced sensitivity and specificity. Furthermore, MCMO outperformed all the individual classifiers, and yielded more reliable results than other optimization algorithms and commonly used fusion strategies.
Impact of image preprocessing on the scanner dependence of multi-parametric MRI radiomic features and covariate shift in multi-institutional glioblastoma datasets
Um, Hyemin
Tixier, Florent
Bermudez, Dalton
Deasy, Joseph O
Young, Robert J
Veeraraghavan, Harini
Physics in Medicine & Biology2019Journal Article, cited 0 times
TCGA-GBM
Magnetic Resonance Imaging (MRI)
Glioblastoma Multiforme (GBM)
Radiomics
Recent advances in radiomics have enhanced the value of medical imaging in various aspects of clinical practice, but a crucial component that remains to be investigated further is the robustness of quantitative features to imaging variations and across multiple institutions. In the case of MRI, signal intensity values vary according to the acquisition parameters used, yet no consensus exists on which preprocessing techniques are favorable in reducing scanner-dependent variability of image-based features. Hence, the purpose of this study was to assess the impact of common image preprocessing methods on the scanner dependence of MRI radiomic features in multi-institutional glioblastoma multiforme (GBM) datasets. Two independent GBM cohorts were analyzed: 50 cases from the TCGA-GBM dataset and 111 cases acquired in our institution, and each case consisted of 3 MRI sequences, viz. FLAIR, T1-weighted, and T1-weighted post-contrast. Five image preprocessing techniques were examined: 8-bit global rescaling, 8-bit local rescaling, bias field correction, histogram standardization, and isotropic resampling. A total of 420 features divided into 8 categories representing texture, shape, edge, and intensity histogram were extracted. Two distinct imaging parameters were considered: scanner manufacturer and scanner magnetic field strength. Wilcoxon tests identified features robust to the considered acquisition parameters under the selected image preprocessing techniques. A machine learning-based strategy was implemented to measure the covariate shift between the analyzed datasets using features computed with the aforementioned preprocessing methods. Finally, radiomic scores (rad-scores) were constructed by identifying features relevant to patients' overall survival after eliminating those impacted by scanner variability. These were then evaluated for their prognostic significance through Kaplan-Meier and Cox hazards regression analyses. Our results demonstrate that, overall, histogram standardization contributes the most to reducing radiomic feature variability, as it was the technique that reduced the covariate shift for 3 feature categories and successfully discriminated patients into groups of different survival risks.
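The robustness screen described above can be prototyped as a per-feature Wilcoxon rank-sum test between scanner groups, keeping features that show no significant shift. A minimal sketch, assuming synthetic data and an illustrative 0.05 threshold:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
features = rng.normal(size=(161, 420))   # patients x radiomic features
vendor = rng.integers(0, 2, size=161)    # 0/1 scanner manufacturer group

robust = []
for j in range(features.shape[1]):
    a, b = features[vendor == 0, j], features[vendor == 1, j]
    _, p = mannwhitneyu(a, b, alternative="two-sided")
    if p >= 0.05:                        # no detectable vendor effect
        robust.append(j)
print(f"{len(robust)} of {features.shape[1]} features look scanner-robust")
```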
Bayesian pharmacokinetic modeling of dynamic contrast-enhanced magnetic resonance imaging: validation and application
Mittermeier, Andreas
Ertl-Wagner, Birgit
Ricke, Jens
Dietrich, Olaf
Ingrisch, Michael
Physics in Medicine and Biology2019Journal Article, cited 0 times
QIN Breast DCE-MRI
MRI
Breast
Tracer-kinetic analysis of dynamic contrast-enhanced magnetic resonance imaging data is commonly performed with the well-known Tofts model and nonlinear least squares (NLLS) regression. This approach yields point estimates of model parameters; uncertainty of these estimates can be assessed, e.g., by an additional bootstrapping analysis. Here, we present a Bayesian probabilistic modeling approach for tracer-kinetic analysis with a Tofts model, which yields posterior probability distributions of perfusion parameters and therefore promises a robust and information-enriched alternative based on a framework of probability distributions. In this manuscript, we use the Quantitative Imaging Biomarkers Alliance (QIBA) Tofts phantom to evaluate the Bayesian Tofts model (BTM) against a bootstrapped NLLS approach. Furthermore, we demonstrate how Bayesian posterior probability distributions can be employed to assess treatment response in a breast cancer DCE-MRI dataset using Cohen's d. Accuracy and precision of the BTM posterior distributions were validated and found to be in good agreement with the NLLS approaches, and assessment of therapy response with respect to uncertainty in parameter estimates was found to be excellent. In conclusion, the Bayesian modeling approach provides an elegant means to determine uncertainty via posterior distributions within a single step and provides honest information about changes in parameter estimates.
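For reference, the conventional NLLS baseline that the Bayesian model is compared against can be sketched as a standard Tofts fit with scipy. The arterial input function, timing grid and starting values below are toy assumptions, and the convolution is a simple rectangle-rule approximation of the Tofts integral.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 300, 2.0)                   # s, acquisition time grid
aif = 5.0 * (t / 60.0) * np.exp(-t / 80.0)   # toy arterial input function

def tofts(t, ktrans, ve, dt=2.0):
    """C_t(t) = Ktrans * conv(AIF, exp(-Ktrans/ve * t)), rectangle rule."""
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(aif, kernel)[: len(t)] * dt

true = tofts(t, ktrans=0.25 / 60, ve=0.3)    # Ktrans expressed in 1/s
noisy = true + np.random.default_rng(4).normal(0, 0.02, size=t.size)

(ktrans_hat, ve_hat), _ = curve_fit(tofts, t, noisy, p0=[0.1 / 60, 0.2],
                                    bounds=([1e-6, 1e-3], [1.0, 1.0]))
print(f"Ktrans = {ktrans_hat * 60:.3f} /min, ve = {ve_hat:.3f}")
```

The Bayesian variant in the paper replaces this single point estimate with a posterior distribution over (Ktrans, ve), from which uncertainty follows directly.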
A prognostic analysis method for non-small cell lung cancer based on the computed tomography radiomics
Wang, Xu
Duan, Huihong
Li, Xiaobing
Ye, Xiaodan
Huang, Gang
Nie, Shengdong
Phys Med Biol2020Journal Article, cited 0 times
NSCLC-Radiomics
Machine Learning
In order to assist doctors in arranging postoperative treatments and re-examinations for non-small cell lung cancer (NSCLC) patients, this study explored a prognostic analysis method for NSCLC based on computed tomography (CT) radiomics. The data of 173 NSCLC patients were collected retrospectively, and the clinically meaningful 3-year survival was used as the predictive limit to predict each patient's prognostic survival time range. Firstly, lung tumors were segmented and radiomics features were extracted. Secondly, a feature weighting algorithm was used to screen and optimize the extracted original feature data. Then, the selected feature data, combined with the prognostic survival of patients, were used to train machine learning classification models. Finally, a prognostic survival prediction model and radiomics prognostic factors were obtained to predict the prognostic survival time range of NSCLC patients. The classification accuracy under cross-validation was up to 88.7% in the prognostic survival analysis model. When verified on an independent data set, the model also yielded a high prediction accuracy of 79.6%. Inverse difference moment, lobulation sign and angular second moment were NSCLC prognostic factors based on radiomics. This study proved that CT radiomics features can effectively assist doctors in making more accurate prognostic survival predictions for NSCLC patients, helping to optimize treatment and re-examination so as to extend survival time.
Gross tumor volume segmentation for head and neck cancer radiotherapy using deep dense multi-modality network
Guo, Zhe
Guo, Ning
Gong, Kuang
Zhong, Shun’an
Li, Quanzheng
Physics in Medicine and Biology2019Journal Article, cited 0 times
Head-Neck-PET-CT
Deep Learning
Head and Neck
PET-CT
In radiation therapy, the accurate delineation of gross tumor volume (GTV) is crucial for treatment planning. However, it is challenging for head and neck cancer (HNC) due to the morphological complexity of various organs in the head, low target-to-background contrast and potential artifacts on conventional planning CT images. Thus, manual delineation of GTV on anatomical images is extremely time consuming and suffers from inter-observer variability that leads to planning uncertainty. With the wide use of PET/CT imaging in oncology, complementary functional and anatomical information can be utilized for tumor contouring and bring a significant advantage for radiation therapy planning. In this study, by taking advantage of multi-modality PET and CT images, we propose an automatic GTV segmentation framework based on deep learning for HNC. The backbone of this segmentation framework is based on 3D convolution with dense connections, which enables better information propagation and takes full advantage of the features extracted from multi-modality input images. We evaluate our proposed framework on a dataset including 250 HNC patients. Each patient received both planning CT and PET/CT imaging before radiation therapy (RT). GTV contours manually delineated by radiation oncologists are used as ground truth in this study. To further investigate the advantage of our proposed Dense-Net framework, we also compared it with a framework using the 3D U-Net, the state of the art in segmentation tasks. Meanwhile, for each framework, a performance comparison between single-modality input (PET or CT image) and multi-modality input (both PET/CT) was conducted. Dice coefficient, mean surface distance (MSD), 95th-percentile Hausdorff distance (HD95) and displacement of mass centroid (DMC) were calculated for quantitative evaluation. The dataset was split into train (140 patients), validation (35 patients) and test (75 patients) groups to optimize the network. Based on the results on the independent test group, our proposed multi-modality Dense-Net (Dice 0.73) shows better performance than the compared network (Dice 0.71). Furthermore, the proposed Dense-Net structure has fewer trainable parameters than the 3D U-Net, which reduces prediction variability. In conclusion, our proposed multi-modality Dense-Net enables satisfactory GTV segmentation for HNC using multi-modality images and yields superior performance to conventional methods. Our proposed method provides an automatic, fast and consistent solution for GTV segmentation and shows potential to be generally applied for radiation therapy planning of a variety of cancers (e.g. lung, sarcoma and liver).
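Three of the metrics quoted above (Dice, MSD and HD95) are straightforward to compute for binary masks. A hedged Python sketch using a Euclidean distance transform, with HD95 approximated as the 95th percentile of symmetric surface distances; the masks and voxel spacing are placeholders.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distances from the surface voxels of mask a to the surface of mask b."""
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

gt = np.zeros((32, 32, 32), bool); gt[8:20, 8:20, 8:20] = True
pred = np.zeros_like(gt);          pred[9:21, 9:21, 8:20] = True
d = np.concatenate([surface_distances(gt, pred), surface_distances(pred, gt)])
print(f"Dice={dice(gt, pred):.3f}, MSD={d.mean():.2f} mm, "
      f"HD95={np.percentile(d, 95):.2f} mm")
```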
Fast spot-scanning proton dose calculation method with uncertainty quantification using a three-dimensional convolutional neural network
Nomura, Yusuke
Wang, Jeff
Shirato, Hiroki
Shimizu, Shinichi
Xing, Lei
Physics in Medicine and Biology2020Journal Article, cited 0 times
Head-Neck-CT-Atlas
This study proposes a near-real-time spot-scanning proton dose calculation method with probabilistic uncertainty estimation using a three-dimensional convolutional neural network (3D-CNN). CT images and clinical target volume contours of 215 head and neck cancer patients were collected from a public database. 1484 and 488 plans were extracted for training and testing the 3D-CNN model, respectively. Spot beam data and single-field uniform dose (SFUD) labels were calculated for each plan using an open-source dose calculation toolkit. Variable spot data were converted into a fixed-size volume, hereafter called a 'peak map' (PM). 300 epochs of end-to-end training were performed using sets of stopping power ratio and PM as input. Moreover, transfer learning techniques were used to adapt the trained model to SFUD doses calculated with different beam parameters and a different calculation algorithm, using only 7.95% of the training data used for the base model. Finally, the accuracy of the 3D-CNN-calculated doses and the model uncertainty were reviewed with several evaluation metrics. The 3D-CNN model calculates 3D proton dose distributions accurately, with a mean absolute error of 0.778 cGyE. The predicted uncertainty is correlated with dose errors at high-contrast edges. Averaged Sørensen-Dice similarity coefficients between binarized outputs and ground truths are mostly above 80%. Once the 3D-CNN model is well trained, it can be efficiently fine-tuned for different proton doses by transfer learning techniques. Inference time for calculating one dose distribution is around 0.8 s for a plan using 1500 spot beams with a consumer-grade GPU. A novel spot-scanning proton dose calculation method using a 3D-CNN was developed. The 3D-CNN model is able to calculate 3D doses and uncertainty for any SFUD spot data and beam irradiation angles. Our proposed method should be readily extendable to other setups and plans and be useful for dose verification, image-guided proton therapy, or other applications.
Liver-ultrasound based motion modelling to estimate 4D dose distributions for lung tumours in scanned proton therapy
Giger, Alina
Krieger, Miriam
Jud, Christoph
Duetschler, Alisha
Salomir, Rares
Bieri, Oliver
Bauman, Grzegorz
Nguyen, Damien
Weber, Damien C
Lomax, Antony J
Zhang, Ye
Cattin, Philippe C
Physics in Medicine and Biology2020Journal Article, cited 0 times
4D-Lung
Motion mitigation strategies are crucial for scanned particle therapy of mobile tumours in order to prevent geometrical target miss and interplay effects. We developed a patient-specific respiratory motion model based on simultaneously acquired time-resolved volumetric MRI and 2D abdominal ultrasound images, and present its effects on 4D pencil beam scanned treatment planning and simulated dose distributions. Given an ultrasound image of the liver and the diaphragm, principal component analysis and Gaussian process regression were applied to infer dense motion information of the lungs. 4D dose calculations for scanned proton therapy were performed using the estimated and the corresponding ground truth respiratory motion; the differences were compared by dose difference volume metrics. We performed this simulation study on 10 combined CT and 4DMRI data sets where the motion characteristics were extracted from 5 healthy volunteers and fused with the anatomical CT data of two lung cancer patients. Median geometrical estimation errors below 2 mm for all data sets and maximum dose differences of [Formula: see text] = 43.2% and [Formula: see text] = 16.3% were found. Moreover, it was shown that abdominal ultrasound imaging allows monitoring of organ drift. This study demonstrated the feasibility of the proposed ultrasound-based motion modelling approach for its application in scanned proton therapy of lung tumours.
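The core of the motion model described above, PCA compression of dense motion followed by Gaussian process regression from ultrasound surrogates, can be sketched with scikit-learn. All arrays below are synthetic stand-ins for the deformation fields and ultrasound features.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
motion = rng.normal(size=(200, 3000))   # time frames x flattened motion field
surrogate = rng.normal(size=(200, 4))   # ultrasound liver/diaphragm features

# Compress dense motion into a few principal-component scores.
pca = PCA(n_components=3).fit(motion)
scores = pca.transform(motion)

# Regress the scores from the surrogate signals; GPR also gives uncertainty.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(surrogate,
                                                                 scores)
new_scores, std = gpr.predict(surrogate[:5], return_std=True)
estimated_motion = pca.inverse_transform(new_scores)
print(estimated_motion.shape)           # (5, 3000): dense motion estimates
```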
Anatomically-adaptive multi-modal image registration for image-guided external-beam radiotherapy
Zachiu, C
de Senneville, B Denis
Willigenburg, T
Voort van Zyp, J R N
de Boer, J C J
Raaymakers, B W
Ries, M
Physics in Medicine and Biology2020Journal Article, cited 0 times
4D-Lung
Image-guided radiotherapy (IGRT) allows observation of the location and shape of the tumor and organs-at-risk (OAR) over the course of a radiation cancer treatment. Such information may in turn be used for reducing geometric uncertainties during therapeutic planning, dose delivery and response assessment. However, given the multiple imaging modalities and/or contrasts potentially included within the imaging protocol over the course of the treatment, the current manual approach to determining tissue displacement may become time-consuming and error-prone. In this context, variational multi-modal deformable image registration (DIR) algorithms allow automatic estimation of tumor and OAR deformations across the acquired images. In addition, they require short computational times and a low number of input parameters, which is particularly beneficial for online adaptive applications, which require on-the-fly adaptations with the patient on the treatment table. However, the majority of such DIR algorithms assume that all structures across the entire field-of-view (FOV) undergo a similar deformation pattern. Given that various anatomical structures may behave considerably differently, this may lead to the estimation of anatomically implausible deformations at some locations, thus limiting their validity. Therefore, in this paper we propose an anatomically-adaptive variational multi-modal DIR algorithm, which employs a regionalized registration model in accordance with the local underlying anatomy. The algorithm was compared against two existing methods which employ global assumptions on the estimated deformation patterns. Compared to the existing approaches, the proposed method demonstrated improved anatomical plausibility of the estimated deformations over the entire FOV as well as overall higher accuracy. Moreover, despite the more complex registration model, the proposed approach is very fast and thus suitable for online scenarios. Therefore, future adaptive IGRT workflows may benefit from an anatomically-adaptive registration model for precise contour propagation and dose accumulation in areas showcasing considerable variations in anatomical properties.
Artificial intelligence supported single detector multi-energy proton radiography system
van der Heyden, Brent
Cohilis, Marie
Souris, Kevin
de Freitas Nascimento, Luana
Sterpin, Edmond
Physics in Medicine and Biology2021Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
Proton radiography imaging was proposed as a promising technique to evaluate internal anatomical changes, to enable pre-treatment patient alignment, and most importantly, to optimize the patient-specific CT number to stopping-power ratio conversion. The clinical implementation rate of proton radiography systems is still limited due to their complex, bulky design, together with the persistent problem of (in)elastic nuclear interactions and multiple Coulomb scattering (i.e. range mixing). In this work, a compact multi-energy proton radiography system was proposed in combination with an artificial intelligence network architecture (ProtonDSE) to remove the persistent problem of proton scatter in proton radiography. A realistic Monte Carlo model of the Proteus®One accelerator was built at 200 and 220 MeV to isolate the scattered proton signal in 236 proton radiographies of 80 digital anthropomorphic phantoms. ProtonDSE was trained to predict the proton scatter distribution at two beam energies in a 60%/25%/15% scheme for training, testing, and validation. A calibration procedure was proposed to derive the water equivalent thickness (WET) image based on the detector dose response relationship at both beam energies. ProtonDSE network performance was evaluated with quantitative metrics that showed an overall mean absolute percentage error below 1.4% ± 0.4% in our test dataset. For one example patient, detector dose to WET conversions were performed based on the total dose (ITotal), the primary proton dose (IPrimary), and the ProtonDSE-corrected detector dose (ICorrected). The WET accuracy was compared with respect to the reference WET obtained by idealistic raytracing in a manually delineated region-of-interest inside the brain. The error was determined to be 4.3% ± 4.1% for WET(ITotal), 2.2% ± 1.4% for WET(IPrimary), and 2.5% ± 2.0% for WET(ICorrected).
Calibrated uncertainty estimation for interpretable proton computed tomography image correction using Bayesian deep learning
Nomura, Yusuke
Tanaka, Sodai
Wang, Jeff
Shirato, Hiroki
Shimizu, Shinichi
Xing, Lei
Physics in Medicine and Biology2021Journal Article, cited 0 times
Head-Neck-CT-Atlas
Integrated-type proton computed tomography (pCT) measures proton stopping power ratio (SPR) images for proton therapy treatment planning, but its image quality is degraded by noise and scatter. Although several correction methods have been proposed, techniques that include estimation of uncertainty are limited. This study proposes a novel uncertainty-aware pCT image correction method using a Bayesian convolutional neural network (BCNN). A DenseNet-based BCNN was constructed to predict both a corrected SPR image and its uncertainty from a noisy SPR image. A total of 432 noisy SPR images of 6 non-anthropomorphic and 3 head phantoms were collected with Monte Carlo simulations, while true noise-free images were calculated with known geometric and chemical components. Heteroscedastic loss and deep ensemble techniques were used to estimate aleatoric and epistemic uncertainties by training 25 unique BCNN models. 200-epoch end-to-end training was performed for each model independently. Feasibility of the predicted uncertainty was demonstrated after applying two post-hoc calibrations and calculating spot-specific path length uncertainty distributions. For evaluation, the accuracy of head SPR images and water-equivalent thickness (WET) corrected by the trained BCNN models was compared with a conventional method and a non-Bayesian CNN model. BCNN-corrected SPR images reproduce noise-free images with high accuracy. Mean absolute error in test data was improved from 0.263 for uncorrected images to 0.0538 for BCNN-corrected images. Moreover, the calibrated uncertainty represents accurate confidence levels, and the BCNN-corrected calibrated WET was more accurate than the non-Bayesian CNN with high statistical significance. Computation time for calculating one image and its uncertainties with 25 BCNN models is 0.7 s with a consumer-grade GPU. Our model is able to predict accurate pCT images as well as two types of uncertainty. These uncertainties will be useful to identify potential causes of SPR errors and to develop a spot-specific range margin criterion, toward elaboration of uncertainty-guided proton therapy.
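The two uncertainty ingredients named above can be illustrated in a few lines of PyTorch: a heteroscedastic Gaussian negative log-likelihood (aleatoric) and a deep-ensemble variance split (epistemic). The loss form and the 25-member ensemble shapes are common conventions assumed here, not the authors' exact implementation.

```python
import torch

def heteroscedastic_nll(mean, log_var, target):
    # 0.5 * exp(-s) * (y - mu)^2 + 0.5 * s, with s = log sigma^2
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2
            + 0.5 * log_var).mean()

# Deep ensemble: each of M models predicts a mean and a variance per voxel.
means = torch.randn(25, 4096)       # M x voxels, placeholder model outputs
variances = torch.rand(25, 4096)
aleatoric = variances.mean(dim=0)   # average predicted noise variance
epistemic = means.var(dim=0)        # disagreement between ensemble members
total_std = torch.sqrt(aleatoric + epistemic)
print(total_std.shape)
```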
Latent space arc therapy optimization
Bice, Noah
Fakhreddine, Mohamad
Li, Ruiqi
Nguyen, Dan
Kabat, Christopher
Myers, Pamela
Papanikolaou, Niko
Kirby, Neil
Physics in Medicine and Biology2021Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
Volumetric modulated arc therapy planning is a challenging problem in high-dimensional, non-convex optimization. Traditionally, heuristics such as fluence-map-optimization-informed segment initialization use locally optimal solutions to begin the search of the full arc therapy plan space from a reasonable starting point. These routines facilitate arc therapy optimization such that clinically satisfactory radiation treatment plans can be created in a reasonable time frame. However, current optimization algorithms favor solutions near their initialization point and are slower than necessary due to plan overparameterization. In this work, arc therapy overparameterization is addressed by reducing the effective dimension of treatment plans with unsupervised deep learning. An optimization engine is then built based on low-dimensional arc representations which facilitates faster planning times.
A Bayesian approach to tissue-fraction estimation for oncological PET segmentation
Liu, Z.
Mhlanga, J. C.
Laforest, R.
Derenoncourt, P. R.
Siegel, B. A.
Jha, A. K.
Phys Med Biol2021Journal Article, cited 0 times
Website
ACRIN-NSCLC-FDG-PET
Positron Emission Tomography (PET)
Segmentation
Deep Learning
Tumor segmentation in oncological PET is challenging, a major reason being the partial-volume effects (PVEs) that arise due to low system resolution and finite voxel size. The latter results in tissue-fraction effects (TFEs), i.e. voxels contain a mixture of tissue classes. Conventional segmentation methods are typically designed to assign each image voxel as belonging to a certain tissue class. Thus, these methods are inherently limited in modeling TFEs. To address the challenge of accounting for PVEs, and in particular, TFEs, we propose a Bayesian approach to tissue-fraction estimation for oncological PET segmentation. Specifically, this Bayesian approach estimates the posterior mean of the fractional volume that the tumor occupies within each image voxel. The proposed method, implemented using a deep-learning-based technique, was first evaluated using clinically realistic 2D simulation studies with known ground truth, in the context of segmenting the primary tumor in PET images of patients with lung cancer. The evaluation studies demonstrated that the method accurately estimated the tumor-fraction areas and significantly outperformed widely used conventional PET segmentation methods, including a U-net-based method, on the task of segmenting the tumor. In addition, the proposed method was relatively insensitive to PVEs and yielded reliable tumor segmentation for different clinical-scanner configurations. The method was then evaluated using clinical images of patients with stage IIB/III non-small cell lung cancer from ACRIN 6668/RTOG 0235 multi-center clinical trial. Here, the results showed that the proposed method significantly outperformed all other considered methods and yielded accurate tumor segmentation on patient images with Dice similarity coefficient (DSC) of 0.82 (95% CI: 0.78, 0.86). In particular, the method accurately segmented relatively small tumors, yielding a high DSC of 0.77 for the smallest segmented cross-section of 1.30 cm(2). Overall, this study demonstrates the efficacy of the proposed method to accurately segment tumors in PET images.
Beyond automatic medical image segmentation—the spectrum between fully manual and fully automatic delineation
Trimpl, Michael J
Primakov, Sergey
Lambin, Philippe
Stride, Eleanor P J
Vallis, Katherine A
Gooding, Mark J
Physics in Medicine and Biology2022Journal Article, cited 0 times
NSCLC-Radiomics
Semi-automatic and fully automatic contouring tools have emerged as an alternative to fully manual segmentation to reduce time spent contouring and to increase contour quality and consistency. In particular, fully automatic segmentation has seen exceptional improvements through the use of deep learning in recent years. These fully automatic methods may not require user interactions, but the resulting contours are often not suitable to be used in clinical practice without review by the clinician. Furthermore, they need large amounts of labelled data to be available for training. This review presents alternatives to manual or fully automatic segmentation methods along the spectrum of variable user interactivity and data availability. The challenge lies in determining how much user interaction is necessary and how this user interaction can be used most effectively. While deep learning is already widely used for fully automatic tools, interactive methods are only just beginning to be transformed by it. Interaction between clinician and machine, via artificial intelligence, can go both ways, and this review presents the avenues that are being pursued to improve medical image segmentation.
LRR-CED: low-resolution reconstruction-aware convolutional encoder-decoder network for direct sparse-view CT image reconstruction
Kandarpa, V. S. S.
Perelli, A.
Bousse, A.
Visvikis, D.
Phys Med Biol2022Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
Algorithm Development
*Image Processing, Computer-Assisted/methods
Phantoms, Imaging
*Tomography, X-Ray Computed/methods
X-Rays
deep learning
sparse-view CT
Objective. Sparse-view computed tomography (CT) reconstruction has been at the forefront of research in medical imaging. Reducing the total x-ray radiation dose to the patient while preserving the reconstruction accuracy is a big challenge. The sparse-view approach is based on reducing the number of rotation angles, which leads to poor quality reconstructed images as it introduces several artifacts. These artifacts are more clearly visible in traditional reconstruction methods like the filtered-backprojection (FBP) algorithm. Approach. Over the years, several model-based iterative and more recently deep learning-based methods have been proposed to improve sparse-view CT reconstruction. Many deep learning-based methods improve FBP-reconstructed images as a post-processing step. In this work, we propose a direct deep learning-based reconstruction that exploits the information from low-dimensional scout images to learn the projection-to-image mapping. This is done by concatenating FBP scout images at multiple resolutions in the decoder part of a convolutional encoder-decoder (CED). Main results. This approach is investigated on two different networks, based on Dense Blocks and U-Net, to show that a direct mapping can be learned from a sinogram to an image. The results are compared to two post-processing deep learning methods (FBP-ConvNet and DD-Net) and an iterative method that uses a total variation (TV) regularization. Significance. This work presents a novel method that uses information from both sinogram and low-resolution scout images for sparse-view CT image reconstruction. We also generalize this idea by demonstrating results with two different neural networks. This work is in the direction of exploring deep learning across the various stages of the image reconstruction pipeline involving data correction, domain transfer and image improvement.
Effects of phase aberration on transabdominal focusing for a large aperture, low f-number histotripsy transducer
Yeats, Ellen
Gupta, Dinank
Xu, Zhen
Hall, Timothy L
Physics in Medicine & Biology2022Journal Article, cited 4 times
Website
CT Lymph Nodes
Computed Tomography (CT)
Generative models improve radiomics reproducibility in low dose CTs: a simulation study
Chen, Junhua
Zhang, Chong
Traverso, Alberto
Zhovannik, Ivan
Dekker, Andre
Wee, Leonard
Bermejo, Inigo
Physics in Medicine and Biology2021Journal Article, cited 0 times
NSCLC-Radiomics
Radiomics is an active area of research in medical image analysis; however, poor reproducibility of radiomics has hampered its application in clinical practice. This issue is especially prominent when radiomic features are calculated from noisy images, such as low dose computed tomography (CT) scans. In this article, we investigate the possibility of improving the reproducibility of radiomic features calculated on noisy CTs by using generative models for denoising. Our work concerns two types of generative models: the encoder-decoder network (EDN) and the conditional generative adversarial network (CGAN). We then compared their performance against a more traditional 'non-local means' denoising algorithm. We added noise to sinograms of full dose CTs to mimic low dose CTs with two levels of noise: low-noise CT and high-noise CT. Models were trained on high-noise CTs and used to denoise low-noise CTs without re-training. We tested the performance of our model on real data, using a dataset of same-day repeated low dose CTs in order to assess the reproducibility of radiomic features in denoised images. The EDN and the CGAN achieved similar improvements in the concordance correlation coefficients (CCC) of radiomic features, for low-noise images from 0.87 [95%CI, (0.833, 0.901)] to 0.92 [95%CI, (0.909, 0.935)] and for high-noise images from 0.68 [95%CI, (0.617, 0.745)] to 0.92 [95%CI, (0.909, 0.936)], respectively. The EDN and the CGAN improved the test-retest reliability of radiomic features (mean CCC increased from 0.89 [95%CI, (0.881, 0.914)] to 0.94 [95%CI, (0.927, 0.951)]) based on real low dose CTs. These results show that denoising using EDNs and CGANs can improve the reproducibility of radiomic features calculated from noisy CTs. Moreover, images at different noise levels can be denoised to improve reproducibility using these models without re-training, provided the noise intensity is not excessively greater than that of the high-noise CTs. To the authors' knowledge, this is the first effort to improve the reproducibility of radiomic features calculated on low dose CT scans by applying generative models.
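The reproducibility metric used throughout this abstract, Lin's concordance correlation coefficient, is short to implement. A minimal sketch on synthetic test-retest feature values:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's CCC: agreement of two measurements of the same quantity."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

rng = np.random.default_rng(6)
test = rng.normal(size=100)                     # feature on scan 1
retest = test + rng.normal(0, 0.3, size=100)    # same feature on repeat scan
print(f"CCC = {concordance_ccc(test, retest):.3f}")
```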
A deep unsupervised learning framework for the 4D CBCT artifact correction
Dong, Guoya
Zhang, Chenglong
Deng, Lei
Zhu, Yulin
Dai, Jingjing
Song, Liming
Meng, Ruoyan
Niu, Tianye
Liang, Xiaokun
Xie, Yaoqin
Physics in Medicine and Biology2022Journal Article, cited 0 times
4D-Lung
Objective. Four-dimensional cone-beam computed tomography (4D CBCT) has unique advantages in moving target localization, tracking and therapeutic dose accumulation in adaptive radiotherapy. However, the severe fringe artifacts and noise degradation caused by 4D CBCT reconstruction restrict its clinical application. We propose a novel deep unsupervised learning model to generate high-quality 4D CBCT from poor-quality 4D CBCT. Approach. The proposed model uses a contrastive loss function to preserve the anatomical structure in the corrected image. To preserve the relationship between the input and output image, we use a multilayer, patch-based method rather than operating on entire images. Furthermore, we draw negatives from within the input 4D CBCT rather than from the rest of the dataset. Main results. The results showed that the streak and motion artifacts were significantly suppressed. The spatial resolution of the pulmonary vessels and microstructure was also improved. To demonstrate the results in different directions, we provide an animation showing different views of the predicted correction image in the supplementary material. Significance. The proposed method can be integrated into any 4D CBCT reconstruction method and may be a practical way to enhance the image quality of 4D CBCT.
U-net architecture with embedded Inception-ResNet-v2 image encoding modules for automatic segmentation of organs-at-risk in head and neck cancer radiation therapy based on computed tomography scans
Siciarz, Pawel
McCurdy, Boyd
Physics in Medicine and Biology2022Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
HNSCC-3DCT-RT
Purpose. The purpose of this study was to utilize a deep learning model with an advanced inception module to automatically contour critical organs on the computed tomography (CT) scans of head and neck cancer patients who underwent radiation therapy treatment and interpret the clinical suitability of the model results through activation mapping. Materials and methods. This study included 25 critical organs that were delineated by expert radiation oncologists. Contoured medical images of 964 patients were sourced from a publicly available TCIA database. The proportion of training, validation, and testing samples for deep learning model development was 65%, 25%, and 10% respectively. The CT scans and segmentation masks were augmented with shift, scale, and rotate transformations. Additionally, medical images were pre-processed using contrast limited adaptive histogram equalization to enhance soft tissue contrast while contours were subjected to morphological operations to ensure their structural integrity. The segmentation model was based on the U-Net architecture with embedded Inception-ResNet-v2 blocks and was trained over 100 epochs with a batch size of 32 and an adaptive learning rate optimizer. The loss function combined the Jaccard Index and binary cross entropy. The model performance was evaluated with Dice Score, Jaccard Index, and Hausdorff Distances. The interpretability of the model was analyzed with guided gradient-weighted class activation mapping. Results. The Dice Score, Jaccard Index, and mean Hausdorff Distance averaged over all structures and patients were 0.82 ± 0.10, 0.71 ± 0.10, and 1.51 ± 1.17 mm respectively on the testing data sets. The Dice Scores for 86.4% of compared structures were within range of or better than published interobserver variability derived from multi-institutional studies. The average model training time was 8 h per anatomical structure. The full segmentation of head and neck anatomy by the trained network required only 6.8 s per patient. Conclusions. High accuracy obtained on a large, multi-institutional data set, short segmentation time and clinically-realistic prediction reasoning make the model proposed in this work a feasible solution for head and neck CT scan segmentation in a clinical environment.
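The CLAHE preprocessing step mentioned above can be reproduced with scikit-image; the soft-tissue window, kernel size and clip limit below are illustrative assumptions rather than the study's settings.

```python
import numpy as np
from skimage import exposure

rng = np.random.default_rng(7)
ct_slice = rng.normal(40, 20, size=(512, 512))       # toy HU-like slice

# Window to the soft-tissue range and rescale to [0, 1] before CLAHE.
ct01 = (np.clip(ct_slice, -150, 250) + 150) / 400.0
enhanced = exposure.equalize_adapthist(ct01, kernel_size=64, clip_limit=0.02)
print(enhanced.min(), enhanced.max())                # contrast-stretched [0, 1]
```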
Millisecond speed deep learning based proton dose calculation with Monte Carlo accuracy
Pastor-Serrano, O.
Perko, Z.
Phys Med Biol2022Journal Article, cited 0 times
Website
NSCLC-Radiomics-Genomics
Algorithms
*Deep Learning
Humans
Monte Carlo Method
Phantoms, Imaging
*Proton Therapy/methods
Radiotherapy Dosage
Radiotherapy Planning, Computer-Assisted/methods
*Monte Carlo
Objective. Next generation online and real-time adaptive radiotherapy workflows require precise particle transport simulations in sub-second times, which is unfeasible with current analytical pencil beam algorithms (PBA) or Monte Carlo (MC) methods. We present a deep learning based millisecond speed dose calculation algorithm (DoTA) accurately predicting the dose deposited by mono-energetic proton pencil beams for arbitrary energies and patient geometries. Approach. Given the forward-scattering nature of protons, we frame 3D particle transport as modeling a sequence of 2D geometries in the beam's eye view. DoTA combines convolutional neural networks extracting spatial features (e.g. tissue and density contrasts) with a transformer self-attention backbone that routes information between the sequence of geometry slices and a vector representing the beam's energy, and is trained to predict low noise MC simulations of proton beamlets using 80 000 different head and neck, lung, and prostate geometries. Main results. Predicting beamlet doses in 5 +/- 4.9 ms with a very high gamma pass rate of 99.37 +/- 1.17% (1%, 3 mm) compared to the ground truth MC calculations, DoTA significantly improves upon analytical pencil beam algorithms both in precision and speed. Offering MC accuracy 100 times faster than PBAs for pencil beams, our model calculates full treatment plan doses in 10-15 s depending on the number of beamlets (800-2200 in our plans), achieving a 99.70 +/- 0.14% (2%, 2 mm) gamma pass rate across 9 test patients. Significance. Outperforming all previous analytical pencil beam and deep learning based approaches, DoTA represents a new state of the art in data-driven dose calculation and can directly compete with the speed of even commercial GPU MC approaches. Providing the sub-second speed required for adaptive treatments, straightforward implementations could offer similar benefits to other steps of the radiotherapy workflow or other modalities such as helium or carbon treatments.
Weakly-supervised lesion analysis with a CNN-based framework for COVID-19
Wu, Kaichao
Jelfs, Beth
Ma, Xiangyuan
Ke, Ruitian
Tan, Xuerui
Fang, Qiang
Physics in Medicine and Biology2021Journal Article, cited 0 times
COVID-19
Humans
Lung
SARS-CoV-2
Thorax
Tomography, X-Ray Computed
Physical Sciences
Medical and Biological Physics
CT Images in COVID-19
LIDC-IDRI
Objective. Lesions of COVID-19 can be clearly visualized using chest CT images, and hence provide valuable evidence for clinicians when making a diagnosis. However, due to the variety of COVID-19 lesions and the complexity of the manual delineation procedure, automatic analysis of lesions with unknown and diverse types from a CT image remains a challenging task. In this paper we propose a weakly-supervised framework for this task requiring only a series of normal and abnormal CT images without the need for annotations of the specific locations and types of lesions. Approach. A deep learning-based diagnosis branch is employed for classification of the CT image and then a lesion identification branch is leveraged to capture multiple types of lesions. Main results. Our framework is verified on publicly available datasets and CT data collected from 13 patients of the First Affiliated Hospital of Shantou University Medical College, China. The results show that the proposed framework can achieve state-of-the-art diagnosis prediction, and the extracted lesion features are capable of distinguishing between lesions showing ground glass opacity and consolidation. Significance. The proposed approach integrates COVID-19 positive diagnosis and lesion analysis into a unified framework without extra pixel-wise supervision. Further exploration also demonstrates that this framework has the potential to discover lesion types that have not been reported and can potentially be generalized to lesion detection of other chest-based diseases.
A multi-objective based radiomics feature selection method for response prediction following radiotherapy
Pan, XiaoYing
Liu, Chen
Feng, TianHao
Qi, X Sharon
Physics in Medicine and Biology2023Journal Article, cited 0 times
OPC-Radiomics
Objective. Radiomics contains a large amount of mineable information extracted from medical images, which has important significance for treatment response prediction in personalized treatment. Radiomics analyses generally involve high dimensions and redundant features, so feature selection is essential for construction of prediction models. Approach. We proposed a novel multi-objective based radiomics feature selection method (MRMOPSO), where the number of features, sensitivity, and specificity are jointly considered as optimization objectives in feature selection. MRMOPSO innovates in three aspects: (1) a Fisher score to initialize the population and speed up convergence; (2) min-redundancy particle generation operations to reduce redundancy between radiomics features, with a truncation strategy introduced to further reduce the number of features effectively; (3) particle selection operations guided by elitism strategies to improve the local search ability of the algorithm. We evaluated the effectiveness of MRMOPSO using a multi-institution oropharyngeal cancer dataset from The Cancer Imaging Archive. 357 patients were used for model training and cross validation, and an additional 64 patients were used for evaluation. Main results. Our method achieved AUCs of 0.82 and 0.84 for cross validation and the independent dataset, respectively. Compared with classical feature selection methods, the AUC of MRMOPSO is significantly higher than the Lasso (AUC = 0.74, p-value = 0.02), minimal-redundancy-maximal-relevance criterion (mRMR) (AUC = 0.73, p-value = 0.05), F-score (AUC = 0.48, p-value < 0.01), and mutual information (AUC = 0.69, p-value < 0.01) methods. Compared to single-objective methods, the AUC of MRMOPSO is 12% higher than those of the genetic algorithm (GA) (AUC = 0.68, p-value = 0.02) and particle swarm optimization (AUC = 0.72, p-value = 0.05) methods. Compared to other multi-objective feature selection methods, the AUC of MRMOPSO is 14% higher than those of multiple objective particle swarm optimization (MOPSO) (AUC = 0.68, p-value = 0.02) and nondominated sorting genetic algorithm II (NSGA2) (AUC = 0.70, p-value = 0.03). Significance. We proposed a multi-objective based radiomics feature selection method. Compared to conventional feature reduction algorithms, the proposed algorithm effectively reduced feature dimension and achieved superior performance, with improved sensitivity and specificity, for response prediction in radiotherapy.
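The Fisher score used to initialize the MRMOPSO population (item (1) above) ranks each feature by between-class over within-class scatter. A minimal sketch on placeholder radiomics data:

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature ratio of between-class to within-class scatter."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

rng = np.random.default_rng(8)
X = rng.normal(size=(357, 200))    # patients x radiomic features
y = rng.integers(0, 2, size=357)   # treatment response labels
top = np.argsort(fisher_score(X, y))[::-1][:20]
print("highest-scoring features:", top)
```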
DADFN: dynamic adaptive deep fusion network based on imaging genomics for prediction recurrence of lung cancer
Jia, Liye
Wu, Wei
Hou, Guojie
Zhang, Yanan
Zhao, Juanjuan
Qiang, Yan
Wang, Long
Physics in Medicine and Biology2023Journal Article, cited 0 times
NSCLC Radiogenomics
Objective. Recently, imaging genomics has increasingly shown great potential for predicting postoperative recurrence in lung cancer patients. However, prediction methods based on imaging genomics have some disadvantages, such as small sample size, high-dimensional information redundancy and poor multimodal fusion efficiency. This study aims to develop a new fusion model to overcome these challenges. Approach. In this study, a dynamic adaptive deep fusion network (DADFN) model based on imaging genomics is proposed for predicting recurrence of lung cancer. In this model, a 3D spiral transformation is used to augment the dataset, which better retains the 3D spatial information of the tumor for deep feature extraction. The intersection of genes screened by LASSO, F-test and CHI-2 selection methods is used to eliminate redundant data and retain the most relevant gene features for gene feature extraction. A dynamic adaptive fusion mechanism based on the cascade idea is proposed, and multiple different types of base classifiers are integrated in each layer, which can fully utilize the correlation and diversity between multimodal information to better fuse deep features, handcrafted features and gene features. Main results. The experimental results show that the DADFN model achieves good performance, with an accuracy and AUC of 0.884 and 0.863, respectively. This indicates that the model is effective in predicting lung cancer recurrence. Significance. The proposed model has the potential to help physicians stratify the risk of lung cancer patients and can be used to identify patients who may benefit from a personalized treatment option.
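The gene screen described above, intersecting the genes kept by LASSO, an F-test and a chi-squared test, can be sketched with scikit-learn; the k values and placeholder data are assumptions, not the study's settings.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.feature_selection import SelectKBest, f_classif, chi2

rng = np.random.default_rng(9)
expr = rng.random(size=(120, 500))    # non-negative gene expression matrix
recur = rng.integers(0, 2, size=120)  # recurrence label

lasso = LassoCV(cv=5).fit(expr, recur)
keep_lasso = set(np.flatnonzero(lasso.coef_ != 0))
keep_f = set(SelectKBest(f_classif, k=50).fit(expr, recur)
             .get_support(indices=True))
keep_chi2 = set(SelectKBest(chi2, k=50).fit(expr, recur)
                .get_support(indices=True))

selected = sorted(keep_lasso & keep_f & keep_chi2)
print(f"{len(selected)} genes kept by all three filters")
```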
Accurate segmentation of head and neck radiotherapy CT scans with 3D CNNs: consistency is key
Henderson, Edward G A
Osorio, Eliana M Vasquez
van Herk, Marcel
Brouwer, Charlotte L
Steenbakkers, Roel J H M
Green, Andrew F
Physics in Medicine and Biology2023Journal Article, cited 0 times
HNSCC
Objective. Automatic segmentation of organs-at-risk in radiotherapy planning computed tomography (CT) scans using convolutional neural networks (CNNs) is an active research area. Very large datasets are usually required to train such CNN models. In radiotherapy, large, high-quality datasets are scarce and combining data from several sources can reduce the consistency of training segmentations. It is therefore important to understand the impact of training data quality on the performance of auto-segmentation models for radiotherapy. Approach. In this study, we took an existing 3D CNN architecture for head and neck CT auto-segmentation and compared the performance of models trained with a small, well-curated dataset (n = 34) and a far larger dataset (n = 185) containing less consistent training segmentations. We performed 5-fold cross-validations in each dataset and tested segmentation performance using the 95th percentile Hausdorff distance and mean distance-to-agreement metrics. Finally, we validated the generalisability of our models with an external cohort of patient data (n = 12) with five expert annotators. Main results. The models trained with the large dataset were greatly outperformed by models (of identical architecture) trained with the smaller, but higher consistency, set of training samples. Our models trained with the small dataset produce segmentations of similar accuracy as expert human observers and generalised well to new data, performing within inter-observer variation. Significance. We empirically demonstrate the importance of highly consistent training samples when training a 3D auto-segmentation model for use in radiotherapy. Crucially, it is the consistency of the training segmentations which has a greater impact on model performance than the size of the dataset used.
4D dosimetric-blood flow model: impact of prolonged fraction delivery times of IMRT on the dose to the circulating lymphocytes
Hammi, A.
Phys Med Biol2023Journal Article, cited 0 times
GLIS-RT
Humans
*Radiotherapy, Intensity-Modulated
Hemodynamics
Lymphocytes
Radiometry
Simulation
intensity-modulated radiotherapy (IMRT)
blood flow model
cerebral vasculature
circulating lymphocytes
patient-specific modeling
To investigate the impact of prolonged fraction delivery of modern intensity-modulated radiotherapy (IMRT) on the accumulated dose to the circulating blood during the course of fractionated radiation therapy, we developed a 4D dosimetric blood flow model (d-BFM) capable of continuously simulating the blood flow through the entire body of the cancer patient and scoring the accumulated dose to blood particles (BPs). We developed a semi-automatic approach that enables us to map the tortuous blood vessels of the superficial brain of individual patients directly from standard magnetic resonance imaging data. For the rest of the body, we developed a fully-fledged dynamic blood flow transfer model according to the International Commission on Radiological Protection human reference. We propose a methodology for designing a personalized d-BFM, such that it can be tailored to individual patients by adopting intra- and inter-subject variations. The entire circulatory model tracks over 43 million BPs and has a time resolution of Δt = 10^-3 s. A dynamic dose delivery model was implemented to emulate the spatially and temporally varying pattern of the dose rate during the step-and-shoot mode of IMRT. We evaluated how different configurations of the dose rate delivery and a time prolongation of fraction delivery may impact the dose received by the circulating blood (CB). Our calculations indicate that prolonging the fraction treatment time from 7 to 18 min will increase the blood volume receiving any dose (V_D>0Gy) from 36.1% to 81.5% during a single fraction. The results indicate that increasing the segment number has only a negligible effect on the irradiated blood volume when the fraction time is kept identical. We developed a novel concept of a customized 4D d-BFM that can be tailored to the hemodynamics of individual patients to quantify dose to the CB during fractionated radiotherapy. The prolonged fraction delivery and the variability of the instantaneous dose rate have a significant impact on the accumulated dose distribution during IMRT treatments. This impact should be considered during IMRT treatment design to reduce RT-induced immunosuppressive effects.
Physics in Medicine and Biology2023Journal Article, cited 0 times
TCGA-LUAD
TCGA-LUSC
Objective. Whole slide images (WSIs) play a crucial role in histopathological analysis. The extremely high resolution of WSIs makes it laborious to obtain fine-grade annotations. Hence, classifying WSIs with only slide-level labels is often cast as a multiple instance learning (MIL) problem, where a WSI is regarded as a bag and tiled into patches that are regarded as instances. The purpose of this study is to develop a novel MIL method for classifying WSIs with only slide-level labels in histopathology analysis. Approach. We propose a novel iterative MIL (IMIL) method for WSI classification in which instance representations and bag representations are learned collaboratively. In particular, IMIL iteratively finetunes the feature extractor with selected instances and corresponding pseudo labels generated by attention-based MIL pooling. Additionally, three procedures for robust training of IMIL are adopted: (1) the feature extractor is initialized by utilizing self-supervised learning methods on all instances, (2) samples for finetuning the feature extractor are selected according to the attention scores, and (3) a confidence-aware loss is applied for finetuning the feature extractor. Main results. Our proposed IMIL-SimCLR achieves the optimal classification performance on Camelyon16 and KingMed-Lung. Compared with the baseline method CLAM, IMIL-SimCLR significantly outperforms it, with a 3.71% higher average area under the curve (AUC) on Camelyon16 and a 4.25% higher average AUC on KingMed-Lung. Additionally, our proposed IMIL-ImageNet achieves the optimal classification performance on TCGA-Lung, with an average AUC of 96.55% and an accuracy of 96.76%, significantly outperforming the baseline method CLAM by 1.65% higher average AUC and 2.09% higher average accuracy, respectively. Significance. Experimental results on a public lymph node metastasis dataset, a public lung cancer diagnosis dataset and an in-house lung cancer diagnosis dataset show the effectiveness of our proposed IMIL method across different WSI classification tasks compared with other state-of-the-art MIL methods.
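For readers unfamiliar with the attention-based MIL pooling that IMIL builds on, the following is a minimal sketch of the general technique, not the authors' code; the embedding dimension, hidden size, and top-k selection are illustrative assumptions:

```python
# Sketch of attention-based MIL pooling over patch embeddings of one WSI.
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, in_dim=512, hidden_dim=128):
        super().__init__()
        # Small scoring network: one attention score per instance (patch).
        self.score = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, feats):                         # feats: (num_patches, in_dim)
        a = torch.softmax(self.score(feats), dim=0)   # attention weights over patches
        bag = (a * feats).sum(dim=0)                  # weighted sum -> bag embedding
        return bag, a.squeeze(-1)                     # weights can seed pseudo labels

feats = torch.randn(1000, 512)                        # 1000 patch embeddings
pool = AttentionMILPooling()
bag_embedding, attn = pool(feats)
top_patches = attn.topk(16).indices                   # candidates for finetuning
```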
Fast dose calculation in x-ray guided interventions by using deep learning
Villa, Mateo
Nasr, Bahaa
Benoit, Didier
Padoy, Nicolas
Visvikis, Dimitris
Bert, Julien
Physics in Medicine and Biology2023Journal Article, cited 0 times
Pancreas-CT
Objective. Patient dose estimation in x-ray-guided interventions is essential to prevent radiation-induced biological side effects. Current dose monitoring systems estimate the skin dose based on dose metrics such as the reference air kerma. However, these approximations do not take into account the exact patient morphology and organ composition. Furthermore, accurate organ dose estimation has not been proposed for these procedures. Monte Carlo simulation can accurately estimate the dose by recreating the irradiation process generated during x-ray imaging, but at a high computation time, limiting intra-operative application. This work presents a fast deep convolutional neural network trained with MC simulations for patient dose estimation during x-ray-guided interventions. Approach. We introduced a modified 3D U-Net that utilizes a patient's CT scan and the numerical values of the imaging settings as input to produce a Monte Carlo dose map. To create a dataset of dose maps, we simulated the x-ray irradiation process for the abdominal region using a publicly available dataset of 82 patient CT scans. The simulation involved varying the angulation, position, and tube voltage of the x-ray source for each scan. We additionally conducted a clinical study during endovascular abdominal aortic repairs to validate the reliability of our Monte Carlo simulation dose maps. Dose measurements were taken at four specific anatomical points on the skin and compared to the corresponding simulated doses. The proposed network was trained using a 4-fold cross-validation approach with 65 patients, and performance was evaluated on the remaining 17 patients during testing. Main results. The clinical validation demonstrated an average error within the anatomical points of 5.1%. The network yielded test errors of 11.5 ± 4.6% and 6.2 ± 1.5% for peak and average skin doses, respectively. Furthermore, the mean errors for the abdominal region and pancreas doses were 5.0 ± 1.4% and 13.1 ± 2.7%, respectively. Significance. Our network can accurately predict a personalized 3D dose map considering the current imaging settings. A short computation time was achieved, making our approach a potential solution for commercial dose monitoring and reporting systems.
Hyperparameter optimization and development of an advanced CNN-based technique for lung nodule assessment
Shivwanshi, Resham Raj
Nirala, Neelamshobha
Physics in Medicine and Biology2023Journal Article, cited 0 times
LIDC-IDRI
Objective. This paper aims to propose an advanced methodology for assessing lung nodules using automated techniques with computed tomography (CT) images to detect lung cancer at an early stage. Approach. The proposed methodology utilizes a fixed-size 3 × 3 kernel in a convolutional neural network (CNN) for relevant feature extraction. The network architecture comprises 13 layers, including six convolution layers for deep local and global feature extraction. The nodule detection architecture is enhanced by incorporating a transfer learning-based EfficientNetV_2 network (TLEV2N) to improve training performance. The classification of nodules is achieved by integrating the EfficientNet_V2 architecture of CNN for more accurate benign and malignant classification. The network architecture is fine-tuned to extract relevant features using a deep network while maintaining performance through suitable hyperparameters. Main results. The proposed method significantly reduces the false-negative rate, with the network achieving an accuracy of 97.56% and a specificity of 98.4%. Using the 3 × 3 kernel provides valuable insights into minute pixel variation and enables the extraction of information at a broader morphological level. The continuous responsiveness of the network to fine-tuned initial values allows for further optimization possibilities, leading to the design of a standardized system capable of assessing diversified thoracic CT datasets. Significance. This paper highlights the potential of non-invasive techniques for the early detection of lung cancer through the analysis of low-dose CT images. The proposed methodology offers improved accuracy in detecting lung nodules and has the potential to enhance the overall performance of early lung cancer detection. By reconfiguring the proposed method, further advancements can be made to optimize outcomes and contribute to developing a standardized system for assessing diverse thoracic CT datasets.
QS-ADN: quasi-supervised artifact disentanglement network for low-dose CT image denoising by local similarity among unpaired data
Ruan, Yuhui
Yuan, Qiao
Niu, Chuang
Li, Chen
Yao, Yudong
Wang, Ge
Teng, Yueyang
Physics in Medicine and Biology2023Journal Article, cited 0 times
LDCT-and-Projection-data
Deep learning has been successfully applied to low-dose CT (LDCT) image denoising to reduce potential radiation risk. However, the widely reported supervised LDCT denoising networks require a training set of paired images, which is expensive to obtain and cannot be perfectly simulated. Unsupervised learning utilizes unpaired data and is highly desirable for LDCT denoising. As an example, an artifact disentanglement network (ADN) relies on unpaired images and obviates the need for supervision, but the results of artifact reduction are not as good as those obtained through supervised learning. An important observation is that there is often hidden similarity among unpaired data that can be utilized. This paper introduces a new learning mode, called quasi-supervised learning, to empower ADN for LDCT image denoising. For every LDCT image, the best matched image is first found from an unpaired normal-dose CT (NDCT) dataset. Then, the matched pairs and the corresponding matching degree as prior information are used to construct and train our ADN-type network for LDCT denoising. The proposed method is different from (but compatible with) supervised and semi-supervised learning modes and can be easily implemented by modifying existing networks. The experimental results show that the method is competitive with state-of-the-art methods in terms of noise suppression and contextual fidelity. The code and working dataset are publicly available at https://github.com/ruanyuhui/ADN-QSDL.git.
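The best-match pairing step can be illustrated compactly. In this sketch, each LDCT image is paired with its most similar unpaired NDCT image; the correlation-based similarity used here is an illustrative stand-in for whatever matching criterion the paper actually uses:

```python
# Sketch of quasi-supervised pairing: match each LDCT slice to an NDCT slice.
import numpy as np

def best_matches(ldct, ndct):
    """For each LDCT slice, find the most similar unpaired NDCT slice."""
    l = (ldct - ldct.mean(1, keepdims=True)) / ldct.std(1, keepdims=True)
    n = (ndct - ndct.mean(1, keepdims=True)) / ndct.std(1, keepdims=True)
    sim = l @ n.T / l.shape[1]                # Pearson correlation matrix
    idx = sim.argmax(axis=1)                  # best NDCT match per LDCT slice
    degree = sim[np.arange(len(idx)), idx]    # matching degree as prior info
    return idx, degree

ldct = np.random.rand(32, 256 * 256)          # 32 low-dose slices (flattened)
ndct = np.random.rand(500, 256 * 256)         # unpaired normal-dose pool
pairs, degree = best_matches(ldct, ndct)
```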
Gradient-based geometry learning for fan-beam CT reconstruction
Thies, Mareike
Wagner, Fabian
Maul, Noah
Folle, Lukas
Meier, Manuela
Rohleder, Maximilian
Schneider, Linda-Sophie
Pfaff, Laura
Gu, Mingxuan
Utz, Jonas
Denzinger, Felix
Manhart, Michael
Maier, Andreas
Physics in Medicine and Biology2023Journal Article, cited 0 times
LDCT-and-Projection-data
Objective. Incorporating computed tomography (CT) reconstruction operators into differentiable pipelines has proven beneficial in many applications. Such approaches usually focus on the projection data and keep the acquisition geometry fixed. However, precise knowledge of the acquisition geometry is essential for high quality reconstruction results. In this paper, the differentiable formulation of fan-beam CT reconstruction is extended to the acquisition geometry. Approach. The CT fan-beam reconstruction is analytically derived with respect to the acquisition geometry. This allows gradient information to be propagated from a loss function on the reconstructed image into the geometry parameters. As a proof-of-concept experiment, this idea is applied to rigid motion compensation. The cost function is parameterized by a trained neural network which regresses an image quality metric from the motion-affected reconstruction alone. Main results. The algorithm improves the structural similarity index measure (SSIM) from 0.848 for the initial motion-affected reconstruction to 0.946 after compensation. It also generalizes to real fan-beam sinograms which are rebinned from a helical trajectory, where the SSIM increases from 0.639 to 0.742. Significance. Using the proposed method, we are the first to optimize an autofocus-inspired algorithm based on analytical gradients. Beyond motion compensation, we see further use cases of our differentiable method for scanner calibration or hybrid techniques employing deep models.
BAF-Net: bidirectional attention-aware fluid pyramid feature integrated multimodal fusion network for diagnosis and prognosis
Wu, H.
Peng, L.
Du, D.
Xu, H.
Lin, G.
Zhou, Z.
Lu, L.
Lv, W.
Phys Med Biol2024Journal Article, cited 0 times
Website
HNSCC
QIN-HEADNECK
TCGA-HNSC
HEAD-NECK-RADIOMICS-HN1
Tuberculosis
Humans
Prognosis
*Image Processing
Computer-Assisted/methods
Positron Emission Tomography Computed Tomography
Lung Neoplasms/diagnostic imaging
Multimodal Imaging
Head and Neck Neoplasms/diagnostic imaging
bidirectional attention-aware
deep learning
diagnosis and prognosis
fluid pyramid feature integration
multimodal fusion
Objective. To go beyond the deficiencies of the three conventional multimodal fusion strategies (i.e. input-, feature- and output-level fusion), we propose a bidirectional attention-aware fluid pyramid feature integrated fusion network (BAF-Net) with cross-modal interactions for multimodal medical image diagnosis and prognosis. Approach. BAF-Net is composed of two identical branches to preserve the unimodal features and one bidirectional attention-aware distillation stream to progressively assimilate cross-modal complements and to learn supplementary features in both bottom-up and top-down processes. Fluid pyramid connections were adopted to integrate the hierarchical features at different levels of the network, and channel-wise attention modules were exploited to mitigate cross-modal cross-level incompatibility. Furthermore, depth-wise separable convolution was introduced to fuse the cross-modal cross-level features to alleviate the increase in parameters to a great extent. The generalization abilities of BAF-Net were evaluated in terms of two clinical tasks: (1) an in-house PET-CT dataset with 174 patients for differentiation between lung cancer and pulmonary tuberculosis; (2) a public multicenter PET-CT head and neck cancer dataset with 800 patients from nine centers for overall survival prediction. Main results. On the LC-PTB dataset, improved performance was found in BAF-Net (AUC = 0.7342) compared with the input-level fusion model (AUC = 0.6825; p < 0.05), the feature-level fusion model (AUC = 0.6968; p = 0.0547), and the output-level fusion model (AUC = 0.7011; p < 0.05). On the H&N cancer dataset, BAF-Net (C-index = 0.7241) outperformed the input-, feature-, and output-level fusion models, with 2.95%, 3.77%, and 1.52% increments of C-index (p = 0.3336, 0.0479 and 0.2911, respectively). The ablation experiments demonstrated the effectiveness of all the designed modules regarding all the evaluated metrics in both datasets. Significance. Extensive experiments on two datasets demonstrated better performance and robustness of BAF-Net than the three conventional fusion strategies and PET or CT unimodal networks in terms of diagnosis and prognosis.
Optimal batch determination for improved harmonization and prognostication of multi-center PET/CT radiomics feature in head and neck cancer
Wu, Huiqin
Liu, Xiaohui
Peng, Lihong
Yang, Yuling
Zhou, Zidong
Du, Dongyang
Xu, Hui
Lv, Wenbing
Lu, Lijun
Phys Med Biol2023Journal Article, cited 0 times
Website
Objective. To determine the optimal approach for identifying and mitigating batch effects in PET/CT radiomics features, and to further improve the prognosis of patients with head and neck cancer (HNC), this study investigated the performance of three batch harmonization methods. Approach. Unsupervised harmonization identified the batch labels by K-means clustering. Supervised harmonization regarded the image acquisition factors (center, manufacturer, scanner, filter kernel) as known/given batch labels. ComBat harmonization was then implemented separately and sequentially based on the batch labels, i.e. harmonizing features among batches determined by each factor individually, or harmonizing features among batches determined by multiple factors successively. Extensive experiments were conducted to predict overall survival (OS) on public PET/CT datasets that contain 800 patients from 9 centers. Main results. In the external validation cohort, results show that compared to original models without harmonization, ComBat harmonization would be beneficial in OS prediction, with C-index of 0.687-0.740 versus 0.684-0.767. Supervised harmonization slightly outperformed unsupervised harmonization in all models (C-index: 0.692-0.767 versus 0.684-0.750). Separate harmonization outperformed sequential harmonization in the CT_m+clinic and CT_cm+clinic models with C-index of 0.752 and 0.722, respectively, while sequential harmonization involving clinical features in the PET_rs+clinic model further improved the performance, achieving the highest C-index of 0.767. Significance. Optimal batch determination, especially sequential harmonization for ComBat, holds the potential to improve the prognostic power of radiomics models in multi-center HNC datasets with PET/CT imaging.
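The unsupervised batch-determination idea can be sketched in a few lines: infer batch labels with K-means, then align each batch with a simplified location-scale adjustment. Real ComBat additionally applies empirical-Bayes shrinkage, so this is a conceptual approximation, not the authors' pipeline:

```python
# Sketch: K-means batch labels followed by per-batch location-scale alignment.
import numpy as np
from sklearn.cluster import KMeans

def harmonize(features, n_batches=3):
    # Step 1: infer batch labels from the feature values themselves.
    batches = KMeans(n_clusters=n_batches, n_init=10).fit_predict(features)
    # Step 2: align each batch to the pooled mean/std, feature by feature.
    grand_mu, grand_sd = features.mean(0), features.std(0) + 1e-8
    out = features.copy()
    for b in np.unique(batches):
        m = batches == b
        mu, sd = features[m].mean(0), features[m].std(0) + 1e-8
        out[m] = (features[m] - mu) / sd * grand_sd + grand_mu
    return out, batches

feats = np.random.rand(800, 120)   # 800 patients x 120 radiomic features
harmonized, labels = harmonize(feats)
```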
Boundary-aware semantic clustering network for segmentation of prostate zones from T2-weighted MRI
Kou, Weixuan
Marshall, Harry
Chiu, Bernard
Phys Med Biol2024Journal Article, cited 0 times
Website
ISBI-MR-Prostate-2013
boundary-aware contrastive (BAC) loss
prostate zonal segmentation
self-attention
semantic clustering attention (SCA)
Automatic segmentation of prostatic zones from MRI can improve clinical diagnosis of prostate cancer as lesions in the peripheral zone (PZ) and central gland (CG) exhibit different characteristics. Existing approaches are limited in their accuracy in localizing the edges of PZ and CG. The proposed boundary-aware semantic clustering network (BASC-Net) improves segmentation performance by learning features in the vicinity of the prostate zonal boundaries, instead of only focusing on manually segmented boundaries.

Approach: BASC-Net consists of two major components: the semantic clustering attention (SCA) module and the boundary-aware contrastive (BAC) loss. The SCA module implements a self-attention mechanism that extracts feature bases representing essential features of the inner body and boundary subregions and constructs attention maps highlighting each subregion. SCA is the first self-attention algorithm that utilizes ground truth masks to supervise the feature basis construction process. The features extracted from the inner body and boundary subregions of the same zone were integrated by BAC loss, which promotes the similarity of features extracted in the two subregions of the same zone. The BAC loss further promotes the difference between features extracted from different zones.

Main results: BASC-Net was evaluated on the NCI-ISBI 2013 Challenge and Prostate158 datasets. An inter-dataset evaluation was conducted to evaluate the generalizability of the proposed method. BASC-Net outperformed nine state-of-the-art methods in all three experimental settings, attaining Dice similarity coefficients (DSCs) of 79.9% and 88.6% for PZ and CG, respectively, in the NCI-ISBI dataset, 80.5% and 89.2% for PZ and CG, respectively, in the Prostate158 dataset, and 73.2% and 87.4% for PZ and CG, respectively, in the inter-dataset evaluation.

Significance: As PZ and CG lesions have different characteristics, the zonal boundaries segmented by BASC-Net will facilitate prostate lesion detection.
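For reference, the Dice similarity coefficient reported above can be computed as follows; the binary masks and their sizes are illustrative placeholders:

```python
# Sketch: Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(pred, truth, eps=1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

pred = np.zeros((128, 128), dtype=np.uint8); pred[30:80, 30:80] = 1
truth = np.zeros((128, 128), dtype=np.uint8); truth[40:90, 35:85] = 1
print(f"DSC = {dice(pred, truth):.3f}")
```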
The influence of anisotropy on the clinical target volume of brain tumor patients
Buti, Gregory
Ajdari, Ali
Hochreuter, Kim
Shih, Helen
Bridge, Christopher P
Sharp, Gregory C
Bortfeld, Thomas
Physics in Medicine and Biology2024Journal Article, cited 0 times
GLIS-RT
Diffusion Tensor Imaging
Radiotherapy Planning
Glioma
Objective. Current radiotherapy guidelines for glioma target volume definition recommend a uniform margin expansion from the gross tumor volume (GTV) to the clinical target volume (CTV), assuming uniform infiltration in the invaded brain tissue. However, glioma cells migrate preferentially along white matter tracts, suggesting that white matter directionality should be considered in an anisotropic CTV expansion. We investigate two models of anisotropic CTV expansion and evaluate their clinical feasibility. Approach. To incorporate white matter directionality into the CTV, a diffusion tensor imaging (DTI) atlas is used. The DTI atlas consists of water diffusion tensors that are first spatially transformed into local tumor resistance tensors, also known as metric tensors, and secondly fed to a CTV expansion algorithm to generate anisotropic CTVs. Two models of spatial transformation are considered in the first step. The first model assumes that tumor cells experience reduced resistance parallel to the white matter fibers. The second model assumes that the anisotropy of tumor cell resistance is proportional to the anisotropy observed in DTI, with an 'anisotropy weighting parameter' controlling the proportionality. The models are evaluated in a cohort of ten brain tumor patients. Main results. To evaluate the sensitivity of the model, a library of model-generated CTVs was computed by varying the resistance and anisotropy parameters. Our results indicate that the resistance coefficient had the most significant effect on the global shape of the CTV expansion by redistributing the target volume from potentially less involved gray matter to white matter tissue. In addition, the anisotropy weighting parameter proved useful in locally increasing CTV expansion in regions characterized by strong tissue directionality, such as near the corpus callosum. Significance. By incorporating anisotropy into the CTV expansion, this study is a step toward an interactive CTV definition that can assist physicians in incorporating neuroanatomy into a clinically optimized CTV.
Staging of clear cell renal cell carcinoma using random forest and support vector machine
Abstract. Kidney cancer is one of the deadliest types of cancer affecting the human body. It is regarded as the seventh most common type of cancer in men and the ninth in women. Early diagnosis of kidney cancer can improve the survival rates of many patients. Clear cell renal cell carcinoma (ccRCC) accounts for 90% of renal cancers. Although the exact cause of kidney cancer is still unknown, early diagnosis can help patients get the proper treatment at the proper time. In this paper, a novel semi-automated model is proposed for early detection and staging of clear cell renal cell carcinoma. The proposed model consists of three phases: segmentation, feature extraction, and classification. The first phase is the image segmentation phase, where images were masked to segment the kidney lobes and the masked images were then fed into a watershed algorithm to extract the tumor from the kidney. The second phase is the feature extraction phase, where the gray level co-occurrence matrix (GLCM) method was integrated with standard statistical methods to extract the feature vectors from the segmented images. The last phase is the classification phase, where the resulting feature vectors were introduced to random forest (RF) and support vector machine (SVM) classifiers. Experiments were carried out to validate the effectiveness of the proposed model using the TCGA-KIRC dataset, which contains 228 CT scans of ccRCC patients, of which 150 scans were used for training and 78 for validation. The proposed model showed an outstanding improvement of 15.12% in accuracy over previous work.
Low Dose Mammography via Deep Learning
Zhu, Guogang
Fu, Jian
Dong, Jianbing
2020Journal Article, cited 0 times
CBIS-DDSM
X-ray mammography has been widely applied to breast cancer diagnosis due to its simplicity and reliability. However, X-ray radiation can harm patients' health or even cause cancer. Low dose mammography, achieved by reducing the tube current, is an effective method to reduce radiation dose and has attracted increasing interest. In this paper, we implemented a method to improve the image quality of low dose mammography via deep learning. It is based on a convolutional neural network (CNN) and focuses on reducing the noise in low dose mammography. After training, it can obtain a high quality image from a low dose mammography image. This method is validated with experimental data sets obtained from The Cancer Imaging Archive (TCIA). It will promote the application of state-of-the-art deep learning techniques in the field of low dose mammography.
Inception Architecture for Brain Image Classification
Tamilarasi, R.
Gopinathan, S.
Journal of Physics: Conference Series2021Journal Article, cited 0 times
Website
REMBRANDT
BRAIN
Deep Learning
Classification
A non-invasive diagnostic support system for brain cancer diagnosis is presented in this study. Recently, very deep convolutional neural networks have been designed for computerized tasks such as image classification and natural language processing. One of the standard architecture designs is the Visual Geometry Group (VGG) model. It uses a large number of small convolution filters (3x3) connected serially. Before applying max pooling, convolution filters are stacked up to four layers to extract feature abstractions. The main drawbacks of going deeper are overfitting and the difficulty of updating gradient weights. These limitations are overcome by the inception module, which is wider rather than deeper. It has parallel convolution layers with 3x3, 5x5, and 1x1 filters whose outputs are concatenated, which reduces the computational complexity due to stacking. This study's experimental results show the usefulness of inception architecture for aiding brain image classification on Repository of Molecular Brain Neoplasia DaTa (REMBRANDT) Magnetic Resonance Imaging (MRI) images, with an average accuracy of 95.1%, sensitivity of 96.2%, and specificity of 94%.
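The parallel-filter design described above can be expressed in a few lines. This is a generic naive inception block, not the paper's exact network; channel counts and input size are illustrative:

```python
# Sketch of a naive inception block: parallel 1x1, 3x3 and 5x5 convolutions
# whose outputs are concatenated channel-wise (widening, not deepening).
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch, out_ch_per_branch=16):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=5, padding=2)

    def forward(self, x):
        # Branches run in parallel on the same input, then concatenate.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)

x = torch.randn(1, 1, 224, 224)        # one grayscale MRI slice
print(InceptionBlock(1)(x).shape)      # -> torch.Size([1, 48, 224, 224])
```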
Lung Images Segmentation and Classification Based on Deep Learning: A New Automated CNN Approach
Salama, Wessam M.
Aly, Moustafa H.
Elbagoury, Azza M.
Journal of Physics: Conference Series2021Journal Article, cited 0 times
Website
National Lung Screening Trial (NLST)
Computed Tomography (CT)
Computer Aided Detection (CADe)
LUNA16 Challenge
ResNet50
Lung cancer has become a significant health problem worldwide over the past decades. This paper introduces a new generalized framework for lung cancer detection in which many different strategies are explored for classification. The ResNet50 model is applied to classify CT lung images as benign or malignant. The U-Net, one of the most widely used architectures in deep learning for image segmentation, is employed to segment CT images before classification to increase system performance. Moreover, the Image Size Dependent Normalization Technique (ISDNT) and Wiener filter are utilized in the preprocessing phase to enhance the images and suppress noise. Our proposed framework, which comprises preprocessing, segmentation and classification phases, is applied to two databases: Lung Nodule Analysis 2016 (LUNA16) and the National Lung Screening Trial (NLST). Data augmentation is applied to compensate for the limited number of lung CT images and, consequently, to avoid overfitting of the deep models. The classification results show that preprocessing the CT lung images as input for the ResNet50-U-Net hybrid model achieves the best performance. The proposed model achieves 98.98% accuracy (ACC), 98.65% area under the ROC curve (AUC), 98.99% sensitivity (Se), 98.43% precision (Pr), 98.86% F1-score and 1.9876 s computational time.
Deep-learning soft-tissue decomposition in chest radiography using fast fuzzy C-means clustering with CT datasets
Jeon, Duhee
Lim, Younghwan
Lee, Minjae
Kim, Guna
Cho, Hyosung
Journal of Instrumentation2023Journal Article, cited 0 times
SPIE-AAPM Lung CT Challenge
Image denoising
Fuzzy C-means
Algorithm Development
X-ray chest classification
Chest radiography is the most routinely used X-ray imaging technique for screening and diagnosing lung and chest disease, such as lung cancer and pneumonia. However, the clinical interpretation of the hidden and obscured anatomy in chest X-ray images remains challenging because of the bony structures overlapping the lung area. Thus, multi-perspective imaging with a high radiation dose is often required. In this study, to address this problem, we propose a deep-learning soft-tissue decomposition method using fast fuzzy C-means (FFCM) clustering with computed tomography (CT) datasets. In this method, FFCM clustering is used to decompose a CT image into bone and soft-tissue components, which are synthesized into digitally reconstructed radiographs (DRRs) to obtain large amounts of X-ray decomposition datasets as ground truths for training. In the training stage, chest and soft-tissue DRRs are used as input and label data, respectively, for training the network. During testing, a chest X-ray image is fed into the trained network to output the corresponding soft-tissue image component. To verify the efficacy of the proposed method, we conducted a feasibility study on clinical CT datasets available from the AAPM Lung CT Challenge. According to our results, the proposed method effectively yielded soft-tissue decomposition from chest X-ray images; this is encouraging for reducing the visual complexity of chest X-ray images. Consequently, the findings of our feasibility study indicate that the proposed method can offer a promising outcome for this purpose.
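The clustering step at the core of this method can be sketched as plain fuzzy C-means on voxel intensities; the fast (histogram-accelerated) variant is omitted, and the Hounsfield-unit distributions below are illustrative assumptions:

```python
# Sketch of fuzzy C-means on CT intensities: two clusters (bone vs. soft tissue).
import numpy as np

def fcm(x, c=2, m=2.0, iters=50):
    u = np.random.dirichlet(np.ones(c), size=len(x))    # memberships (N, c)
    for _ in range(iters):
        w = u ** m
        centers = (w * x[:, None]).sum(0) / w.sum(0)    # weighted cluster means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-8
        u = 1.0 / (d ** (2 / (m - 1)))                  # standard FCM update
        u /= u.sum(1, keepdims=True)                    # normalize memberships
    return centers, u

hu = np.concatenate([np.random.normal(40, 30, 5000),    # soft tissue (HU)
                     np.random.normal(700, 150, 1000)]) # bone (HU)
centers, memberships = fcm(hu)
```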
Transfer learning may explain pigeons’ ability to detect cancer in histopathology
Kilim, Oz
Báskay, János
Biricz, András
Bedőházi, Zsolt
Pollner, Péter
Csabai, István
2024Journal Article, cited 0 times
DLBCL-Morphology
Ovarian Bevacizumab Response
Hungarian-Colorectal-Screening
HER2 tumor ROIs
Extraction of pulmonary vessels and tumour from plain computed tomography sequence
Theoretical tumor edge detection technique using multiple Bragg peak decomposition in carbon ion therapy
Dias, Marta Filipa Ferraz
Collins-Fekete, Charles-Antoine
Baroni, Guido
Riboldi, Marco
Seco, Joao
Biomedical Physics & Engineering Express2019Journal Article, cited 0 times
Website
LUNG
CT
Radiation Therapy
A geometry-guided multi-beamlet deep learning technique for CT reconstruction
Lu, Ke
Ren, Lei
Yin, Fang-Fang
Biomedical Physics & Engineering Express2022Journal Article, cited 0 times
LDCT-and-Projection-data
Purpose. Previous studies have proposed deep-learning techniques to reconstruct CT images from sinograms. However, these techniques employ large fully-connected (FC) layers for projection-to-image domain transformation, producing large models requiring substantial computation power, potentially exceeding the computation memory limit. Our previous work proposed a geometry-guided-deep-learning (GDL) technique for CBCT reconstruction that reduces model size and GPU memory consumption. This study further develops the technique and proposes a novel multi-beamlet deep learning (GMDL) technique of improved performance. The study compares the proposed technique with the FC layer-based deep learning (FCDL) method and the GDL technique through low-dose real-patient CT image reconstruction. Methods. Instead of using a large FC layer, the GMDL technique learns the projection-to-image domain transformation by constructing many small FC layers. In addition to connecting each pixel in the projection domain to beamlet points along the central beamlet in the image domain as GDL does, these smaller FC layers in GMDL connect each pixel to beamlets peripheral to the central beamlet based on the CT projection geometry. We compare ground truth images with low-dose images reconstructed with the GMDL, the FCDL, the GDL, and the conventional FBP methods. The images are quantitatively analyzed in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root-mean-square error (RMSE). Results. Compared to other methods, the GMDL reconstructed low-dose CT images show improved image quality in terms of PSNR, SSIM, and RMSE. The optimal number of peripheral beamlets for the GMDL technique is two beamlets on each side of the central beamlet. The model size and memory consumption of the GMDL model are less than 1/100 of those of the FCDL model. Conclusion. Compared to the FCDL method, the GMDL technique is demonstrated to be able to reconstruct real-patient low-dose CT images of improved image quality with significantly reduced model size and GPU memory requirement.
Shuffle-ResNet: Deep learning for predicting LGG IDH1 mutation from multicenter anatomical MRI sequences
Safari, Mojtaba
Beiki, Manjieh
Ameri, Ahmad
Toudeshki, Saeed Hosseini
Fatemi, Ali
Archambault, Louis
Biomedical Physics & Engineering Express2022Journal Article, cited 0 times
TCGA-LGG
TCGA-LUAD
Background and Purpose. The World Health Organization recommended incorporating gene information such as isocitrate dehydrogenase 1 (IDH1) mutation status to improve prognosis, diagnosis, and treatment of central nervous system tumors. We proposed our Shuffle Residual Network (Shuffle-ResNet) to predict the IDH1 gene mutation status of low grade glioma (LGG) tumors from multicenter anatomical magnetic resonance imaging (MRI) sequences including T2-w, T2-FLAIR, T1-w, and T1-Gd. Methods and Materials. We used a dataset of 105 patients available in The Cancer Genome Atlas LGG project, which we split into training and testing datasets. We implemented a random image patch extractor to leverage tumor heterogeneity, extracting about half a million image patches. RGB datasets were created by image concatenation. We used a random channel-shuffle layer in the ResNet architecture to improve generalization, and a 3-fold cross validation to assess the network's performance. An early stopping algorithm and learning rate scheduler were employed to automatically halt the training. Results. The early stopping algorithm terminated the training after 131, 106, and 96 epochs in folds 1, 2, and 3. The accuracy and area under the curve (AUC) of the validation dataset were 81.29% (95% CI (79.87, 82.72)) and 0.96 (95% CI (0.92, 0.98)) when we concatenated T2-FLAIR, T1-Gd, and T2-w to produce an RGB dataset. The accuracy and AUC values of the test dataset were 85.7% and 0.943. Conclusions. Our Shuffle-ResNet could predict IDH1 gene mutation status using multicenter MRI. However, its clinical application requires more investigation.
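The random channel-shuffle layer mentioned above is simple to implement; this sketch assumes shuffling is applied only in training mode, as a regularizer over the concatenated MRI channels:

```python
# Sketch of a random channel-shuffle layer for stacked MRI sequences.
import torch
import torch.nn as nn

class RandomChannelShuffle(nn.Module):
    def forward(self, x):                    # x: (batch, channels, H, W)
        if not self.training:
            return x                         # identity at inference time
        perm = torch.randperm(x.size(1), device=x.device)
        return x[:, perm]                    # permute the channel order

layer = RandomChannelShuffle()
layer.train()
x = torch.randn(4, 3, 128, 128)              # RGB stacks of MRI sequences
y = layer(x)
```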
A few-shot U-Net deep learning model for lung cancer lesion segmentation via PET/CT imaging
Protonotarios, N. E.
Katsamenis, I.
Sykiotis, S.
Dikaios, N.
Kastis, G. A.
Chatziioannou, S. N.
Metaxas, M.
Doulamis, N.
Doulamis, A.
Biomed Phys Eng Express2022Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
U-Net architectures
Convolutional Neural Networks (CNN)
Deep Learning
Segmentation
Over the past few years, positron emission tomography/computed tomography (PET/CT) imaging for computer-aided diagnosis has received increasing attention. Supervised deep learning architectures are usually employed for the detection of abnormalities, with anatomical localization, especially in the case of CT scans. However, the main limitations of the supervised learning paradigm include (i) large amounts of data required for model training, and (ii) the assumption of fixed network weights upon training completion, implying that the performance of the model cannot be further improved after training. In order to overcome these limitations, we apply a few-shot learning (FSL) scheme. Contrary to traditional deep learning practices, in FSL the model is provided with less data during training. The model then utilizes end-user feedback after training to constantly improve its performance. We integrate FSL in a U-Net architecture for lung cancer lesion segmentation on PET/CT scans, allowing for dynamic model weight fine-tuning and resulting in an online supervised learning scheme. Constant online readjustments of the model weights according to the users' feedback increase the detection and classification accuracy, especially in cases where low detection performance is encountered. Our proposed method is validated on the Lung-PET-CT-Dx TCIA database. PET/CT scans from 87 patients were included in the dataset and were acquired 60 minutes after intravenous (18)F-FDG injection. Experimental results indicate the superiority of our approach compared to other state-of-the-art methods.
Investigation of radiomics and deep convolutional neural networks approaches for glioma grading
Aouadi, S.
Torfeh, T.
Arunachalam, Y.
Paloor, S.
Riyas, M.
Hammoud, R.
Al-Hammadi, N.
Biomed Phys Eng Express2023Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
BraTS 2021
Humans
*Gastrointestinal Stromal Tumors
*Fibromatosis
Aggressive
*Glioma/diagnostic imaging
Algorithm Development
*Melanoma
Computed Tomography (CT)
benchmarking
Deep learning
glioma grading
multi-contrast MRI
Radiomics
Purpose. To determine glioma grading by applying radiomic analysis or deep convolutional neural networks (DCNN) and to benchmark both approaches on broader validation sets. Methods. Seven public datasets were considered: (1) low-grade glioma or high-grade glioma (369 patients, BraTS'20); (2) well-differentiated liposarcoma or lipoma (115, LIPO); (3) desmoid-type fibromatosis or extremity soft-tissue sarcomas (203, Desmoid); (4) primary solid liver tumors, either malignant or benign (186, LIVER); (5) gastrointestinal stromal tumors (GISTs) or intra-abdominal gastrointestinal tumors radiologically resembling GISTs (246, GIST); (6) colorectal liver metastases (77, CRLM); and (7) lung metastases of metastatic melanoma (103, Melanoma). Radiomic analysis was performed on 464 radiomic features for the BraTS'20 dataset and 2016 features for the other datasets. Random forests (RF), Extreme Gradient Boosting (XGBOOST) and a voting algorithm comprising both classifiers were tested. The parameters of the classifiers were optimized using a repeated nested stratified cross-validation process. The feature importance of each classifier was computed using the Gini index or permutation feature importance. DCNN was performed on 2D axial and sagittal slices encompassing the tumor. A balanced database was created, when necessary, using smart slice selection. ResNet50, Xception, EfficientNetB0, and EfficientNetB3 were transferred from the ImageNet application to the tumor classification task and were fine-tuned. Five-fold stratified cross-validation was performed to evaluate the models. The classification performance of the models was measured using multiple indices including area under the receiver operating characteristic curve (AUC). Results. The best radiomic approach was based on XGBOOST for all datasets, with AUC of 0.934 (BraTS'20), 0.86 (LIPO), 0.73 (LIVER), 0.844 (Desmoid), 0.76 (GIST), 0.664 (CRLM), and 0.577 (Melanoma), respectively. The best DCNN was based on EfficientNetB0, with AUC of 0.99 (BraTS'20), 0.982 (LIPO), 0.977 (LIVER), 0.961 (Desmoid), 0.926 (GIST), 0.901 (CRLM), and 0.89 (Melanoma), respectively. Conclusion. Tumor classification can be accurately determined by adapting state-of-the-art machine learning algorithms to the medical context.
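The transfer-learning recipe described (ImageNet-pretrained backbone, replaced head, fine-tuning) might look like this in PyTorch; the choice of EfficientNetB0 follows the paper, but the freezing depth and the two-class head are illustrative assumptions:

```python
# Sketch of fine-tuning an ImageNet-pretrained EfficientNetB0 for binary
# tumor classification from 2D slices treated as 3-channel inputs.
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)

# Replace the 1000-class ImageNet head with a 2-class classifier.
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)

# Optionally freeze early feature blocks and fine-tune the rest.
for p in model.features[:5].parameters():
    p.requires_grad = False

# Training then proceeds with a standard cross-entropy loop over slice batches.
```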
From pixels to prognosis: unveiling radiomics models with SHAP and LIME for enhanced interpretability
Raptis, S.
Ilioudis, C.
Theodorou, K.
Biomed Phys Eng Express2024Journal Article, cited 0 times
Website
NSCLC-Radiomics- Interobserver1
*Radiomics
*Positron Emission Tomography Computed Tomography
Radiation pneumonitis
Radiomics-based prediction models have shown promise in predicting Radiation Pneumonitis (RP), a common adverse outcome of chest irradiation. This study looks into more than just RP: it also investigates a bigger shift in the way radiomics-based models work. By integrating multi-modal radiomic data, which includes a wide range of variables collected from medical images including cutting-edge PET/CT imaging, we have developed predictive models that capture the intricate nature of illness progression. Radiomic features were extracted using PyRadiomics, encompassing intensity, texture, and shape measures. The high-dimensional dataset formed the basis for our predictive models, primarily Gradient Boosting Machines (GBM)-XGBoost, LightGBM, and CatBoost. Performance evaluation metrics, including Multi-Modal AUC-ROC, Sensitivity, Specificity, and F1-Score, underscore the superiority of the Deep Neural Network (DNN) model. The DNN achieved a remarkable Multi-Modal AUC-ROC of 0.90, indicating superior discriminatory power. Sensitivity and specificity values of 0.85 and 0.91, respectively, highlight its effectiveness in detecting positive occurrences while accurately identifying negatives. External validation datasets, comprising retrospective patient data and a heterogeneous patient population, validate the robustness and generalizability of our models. The focus of our study is the application of sophisticated model interpretability methods, namely SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations), to improve the clarity and understanding of predictions. These methods allow clinicians to visualize the effects of features and provide localized explanations for every prediction, enhancing the comprehensibility of the model. This strengthens trust and collaboration between computational technologies and medical competence. The integration of data-driven analytics and medical domain expertise represents a significant shift in the profession, advancing us from analyzing pixel-level information to gaining valuable prognostic insights.
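The SHAP workflow for a tree-based radiomics model can be sketched as below; the feature matrix and outcome labels are synthetic placeholders, not the study's data:

```python
# Sketch: SHAP interpretation of a gradient-boosted radiomics classifier.
import numpy as np
import shap
import xgboost as xgb

X = np.random.rand(200, 50)            # 200 patients x 50 radiomic features
y = np.random.randint(0, 2, 200)       # hypothetical RP outcome labels

model = xgb.XGBClassifier(n_estimators=100).fit(X, y)
explainer = shap.TreeExplainer(model)  # fast, exact SHAP for tree ensembles
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, show=False)  # global feature-effect view
```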
Advancements in prostate zone segmentation: integrating attention mechanisms into the nnU-Net framework
Prostate cancer is one of the most lethal cancers in the world. Early diagnosis is essential for successful treatment of prostate cancer. Segmentation of prostate zones in magnetic resonance images is an important task in the diagnosis of prostate cancer. Currently, the state-of-the-art method for this task is no-new U-Net. In this paper, a method to incorporate the attention U-Net architecture into no-new U-Net is proposed and compared with a classical U-Net architecture. The experimental results indicate no statistically significant difference for the proposed modification of no-new U-Net, either in the generalizability of the attention mechanism or in the ability to achieve more accurate results. Moreover, two novel workflows are proposed for prostate zonal segmentation: a workflow based on prostate segmentation, transitional zone segmentation and peripheral zone calculation, and a workflow based on separate models for peripheral zone and transitional zone segmentation. These workflows are compared with a baseline workflow using a single model for peripheral zone and transitional zone segmentation. The experimental results indicate that the separate-models workflow generalizes better than the baseline between data sets of different sources. In peripheral zone segmentation, the separate-models workflow achieves a 1.9% higher median Dice score coefficient than the baseline workflow when using the attention U-Net architecture and a 5.6% higher median Dice score coefficient when using the U-Net architecture. Moreover, in transitional zone segmentation, the separate-models workflow achieves a 0.4% higher median Dice score coefficient than the baseline workflow when using the attention U-Net architecture and a 0.7% higher median Dice score coefficient when using the U-Net architecture. Meanwhile, the peripheral zone calculation workflow generalizes worse than the baseline. In peripheral zone segmentation, it achieves a 4.6% lower median Dice score coefficient than the baseline workflow when using the attention U-Net architecture and a 3.6% lower median Dice score coefficient when using the U-Net architecture. In transitional zone segmentation, it achieves a median Dice score coefficient similar to the baseline workflow.
Integrative analysis of cross-modal features for the prognosis prediction of clear cell renal cell carcinoma
Ning, Zhenyuan
Pan, Weihao
Chen, Yuting
Xiao, Qing
Zhang, Xinsen
Luo, Jiaxiu
Wang, Jian
Zhang, Yuan
Bioinformatics2020Journal Article, cited 0 times
Website
TCGA-KIRC
Radiomics
Radiogenomics
KIDNEY
MOTIVATION: As a highly heterogeneous disease, clear cell renal cell carcinoma (ccRCC) has quite variable clinical behaviors. Prognostic biomarkers play a crucial role in stratifying patients suffering from ccRCC to avoid over- and under-treatment. Research based on hand-crafted features and single-modal data has been widely conducted to predict the prognosis of ccRCC. However, these experience-dependent methods, neglecting the synergy among multimodal data, have limited capacity to perform accurate prediction. Inspired by the complementary information among multimodal data and the successful application of convolutional neural networks (CNNs) in medical image analysis, a novel framework was proposed to improve prediction performance. RESULTS: We proposed a cross-modal feature-based integrative framework, in which deep features extracted from computed tomography/histopathological images by using CNNs were combined with eigengenes generated from functional genomic data, to construct a prognostic model for ccRCC. Results showed that our proposed model can stratify high- and low-risk subgroups with significant difference (P-value < 0.05) and outperform models based on single-modality features in the independent testing cohort [C-index, 0.808 (0.728-0.888)]. In addition, we explored the relationship between deep image features and eigengenes, and attempted to explain deep image features from the perspective of genomic data. Notably, the integrative framework is applicable to the task of prognosis prediction of other cancers with matched multimodal data. AVAILABILITY AND IMPLEMENTATION: https://github.com/zhang-de-lab/zhang-lab? from=singlemessage. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
Making head and neck cancer clinical data Findable-Accessible-Interoperable-Reusable to support multi-institutional collaboration and federated learning
Gouthamchand, Varsha
Choudhury, Ananya
Hoebers, Frank J. P.
Wesseling, Frederik W. R.
Welch, Mattea
Kim, Sejin
Kazmierska, Joanna
Dekker, Andre
Haibe-Kains, Benjamin
van Soest, Johan
Wee, Leonard
BJR|Artificial Intelligence2024Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
Head-Neck-PET-CT
OPC-Radiomics
HNSCC
Federated learning
SPARQL
RDF
Models
Radiomics
Objectives: Federated learning (FL) is a group of methodologies where statistical modelling can be performed without exchanging identifiable patient data between cooperating institutions. To realize its potential for AI development on clinical data, a number of bottlenecks need to be addressed. One of these is making data Findable-Accessible-Interoperable-Reusable (FAIR). The primary aim of this work is to show that tools making data FAIR allow consortia to collaborate on privacy-aware data exploration, data visualization, and training of models on each other's original data. Methods: We propose a "Schema-on-Read" FAIR-ification method that adapts for different (re)analyses without needing to change the underlying original data. The procedure involves (1) decoupling the contents of the data from its schema and database structure, (2) annotation with semantic ontologies as a metadata layer, and (3) readout using semantic queries. Open-source tools are given as Docker containers to help local investigators prepare their data on-premises. Results: We created a federated privacy-preserving visualization dashboard for case mix exploration of 5 distributed datasets with no common schema at the point of origin. We demonstrated robust and flexible prognostication model development and validation, linking together different data sources: clinical risk factors and radiomics. Conclusions: Our procedure leads to successful (re)use of data in FL-based consortia without the need to impose a common schema at every point of origin of data. Advances in knowledge: This work supports the adoption of FL within the healthcare AI community by sharing means to make data more FAIR.
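The "readout using semantic queries" step might look like the following rdflib sketch; the graph file and the ontology predicate are hypothetical placeholders, not the consortium's actual schema:

```python
# Sketch: Schema-on-Read readout of annotated clinical data via SPARQL.
from rdflib import Graph

g = Graph()
g.parse("local_clinical_data.ttl", format="turtle")  # annotated metadata layer

# Ask for patients and their T-stage without assuming any table schema.
# The NCIt predicate IRI below is an illustrative placeholder.
query = """
PREFIX ncit: <http://ncicb.nci.nih.gov/xml/owl/EVS/Thesaurus.owl#>
SELECT ?patient ?tstage WHERE {
    ?patient ncit:C48885 ?tstage .
}
"""
for row in g.query(query):
    print(row.patient, row.tstage)
```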
Comparing the performance of a deep learning-based lung gross tumour volume segmentation algorithm before and after transfer learning in a new hospital
Kulkarni, Chaitanya
Sherkhane, Umesh
Jaiswar, Vinay
Mithun, Sneha
Mysore Siddu, Dinesh
Rangarajan, Venkatesh
Dekker, Andre
Traverso, Alberto
Jha, Ashish
Wee, Leonard
BJR|Open2024Journal Article, cited 0 times
Website
NSCLC-Radiomics
NSCLC-Radiomics-Interobserver1
Deep Learning
Automatic Segmentation
Lung Cancer
Gross Tumor Volume (GTV)
Transfer learning
Computed Tomography (CT)
Radiotherapy
Objectives: Radiation therapy for lung cancer requires a gross tumour volume (GTV) to be carefully outlined by a skilled radiation oncologist (RO) to accurately pinpoint high radiation dose to a malignant mass while simultaneously minimizing radiation damage to adjacent normal tissues. This is manually intensive and tedious; however, it is feasible to train a deep learning (DL) neural network that could assist ROs to delineate the GTV. However, DL trained on large openly accessible data sets might not perform well when applied to a superficially similar task but in a different clinical setting. In this work, we tested the performance of a DL automatic lung GTV segmentation model trained on open-access Dutch data when used on Indian patients from a large public tertiary hospital, and hypothesized that generic DL performance could be improved for a specific local clinical context by means of modest transfer learning on a small representative local subset. Methods: X-ray computed tomography (CT) series in a public data set called "NSCLC-Radiomics" from The Cancer Imaging Archive was first used to train a DL-based lung GTV segmentation model (Model 1). Its performance was assessed using a different open-access data set (Interobserver1) of Dutch subjects plus a private Indian data set from a local tertiary hospital (Test Set 2). Another Indian data set (Retrain Set 1) was used to fine-tune the former DL model using a transfer learning method. The Indian data sets were taken from CT of a hybrid scanner based in nuclear medicine, but the GTV was drawn by skilled Indian ROs. The final (after fine-tuning) model (Model 2) was then re-evaluated in "Interobserver1" and "Test Set 2." Dice similarity coefficient (DSC), precision, and recall were used as geometric segmentation performance metrics. Results: Model 1, trained exclusively on Dutch scans, showed a significant fall in performance when tested on "Test Set 2." However, the DSC of Model 2 recovered by 14 percentage points when evaluated in the same test set. Precision and recall showed a similar rebound of performance after transfer learning, in spite of using a comparatively small sample size. Neither model showed a significant change in segmentation performance in "Interobserver1" before versus after fine-tuning. Conclusions: A large public open-access data set was used to train a generic DL model for lung GTV segmentation, but this did not perform well initially in the Indian clinical context. Using transfer learning methods, it was feasible to efficiently and easily fine-tune the generic model using only a small number of local examples from the Indian hospital. This led to a recovery of some of the geometric segmentation performance, but the tuning did not appear to affect the performance of the model in another open-access data set. Advances in knowledge: Caution is needed when using models trained on large volumes of international data in a local clinical setting, even when that training data set is of good quality. Minor differences in scan acquisition and clinician delineation preferences may result in an apparent drop in performance. However, DL models have the advantage of being efficiently "adapted" from a generic to a locally specific context, with only a small amount of fine-tuning by means of transfer learning on a small local institutional data set.
Modulation of Nogo receptor 1 expression orchestrates myelin-associated infiltration of glioblastoma
As the clinical failure of glioblastoma treatment is attributed to multiple components, including myelin-associated infiltration, assessing the molecular mechanisms underlying this process and identifying the infiltrating cells have been primary objectives in glioblastoma research. Here, we adopted radiogenomic analysis to screen for functionally relevant genes that orchestrate the process of glioma cell infiltration through myelin and promote glioblastoma aggressiveness. The receptor of the Nogo ligand (NgR1) was selected as the top candidate through Differentially Expressed Genes (DEG) and Gene Ontology (GO) enrichment analysis. Gain and loss of function studies on NgR1 elucidated its underlying molecular importance in suppressing myelin-associated infiltration in vitro and in vivo. The migratory ability of glioblastoma cells on myelin is reversibly modulated by NgR1 during the differentiation and dedifferentiation process through the deubiquitinating activity of USP1, which inhibits the degradation of ID1 to downregulate NgR1 expression. Furthermore, pimozide, a well-known antipsychotic drug, upregulates NgR1 by post-translational targeting of USP1, which sensitizes glioma stem cells to myelin inhibition and suppresses myelin-associated infiltration in vivo. In primary human glioblastoma, downregulation of NgR1 expression is associated with highly infiltrative characteristics and poor survival. Together, our findings reveal that loss of NgR1 drives myelin-associated infiltration of glioblastoma and suggest that novel therapeutic strategies aimed at reactivating expression of NgR1 will improve the clinical outcome of glioblastoma patients.
Transcriptomic and connectomic correlates of differential spatial patterning among gliomas
Unravelling the complex events driving grade-specific spatial distribution of brain tumour occurrence requires rich datasets from both healthy individuals and patients. Here, we combined open-access data from The Cancer Genome Atlas, the UK Biobank and the Allen Brain Human Atlas to disentangle how the different spatial occurrences of glioblastoma multiforme and low-grade gliomas are linked to brain network features and the normative transcriptional profiles of brain regions. From MRI of brain tumour patients, we first constructed a grade-related frequency map of the regional occurrence of low-grade gliomas and the more aggressive glioblastoma multiforme. Using associated mRNA transcription data, we derived a set of differential gene expressions from glioblastoma multiforme and low-grade glioma tissues of the same patients. By combining the resulting values with normative gene expressions from post-mortem brain tissue, we constructed a grade-related expression map indicating which brain regions express genes dysregulated in aggressive gliomas. Additionally, we derived an expression map of genes previously associated with tumour subtypes in a genome-wide association study (tumour-related genes). There were significant associations between grade-related frequency, grade-related expression and tumour-related expression maps, as well as functional brain network features (specifically, nodal strength and participation coefficient) that are implicated in neurological and psychiatric disorders. These findings identify brain network dynamics and transcriptomic signatures as key factors in regional vulnerability for glioblastoma multiforme and low-grade glioma occurrence, placing primary brain tumours within a well-established framework of neurological and psychiatric cortical alterations.
Federated learning improves site performance in multicenter deep learning without data sharing
Sarma, Karthik V
Harmon, Stephanie
Sanford, Thomas
Roth, Holger R
Xu, Ziyue
Tetreault, Jesse
Xu, Daguang
Flores, Mona G
Raman, Alex G
Kulkarni, Rushikesh
Wood, Bradford J
Choyke, Peter L
Priester, Alan M
Marks, Leonard S
Raman, Steven S
Enzmann, Dieter
Turkbey, Baris
Speier, William
Arnold, Corey W
Journal of the American Medical Informatics Association2021Journal Article, cited 0 times
PROSTATEx
OBJECTIVE: To demonstrate enabling multi-institutional training without centralizing or sharing the underlying physical data via federated learning (FL).
MATERIALS AND METHODS: Deep learning models were trained at each participating institution using local clinical data, and an additional model was trained using FL across all of the institutions.
RESULTS: We found that the FL model exhibited superior performance and generalizability to the models trained at single institutions, with an overall performance level that was significantly better than that of any of the institutional models alone when evaluated on held-out test sets from each institution and an outside challenge dataset.
DISCUSSION: The power of FL was successfully demonstrated across 3 academic institutions while avoiding the privacy risk associated with the transfer and pooling of patient data.
CONCLUSION: Federated learning is an effective methodology that merits further study to enable accelerated development of models across institutions, enabling greater generalizability in clinical use.
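The aggregation step at the heart of such FL systems is typically federated averaging (FedAvg). The sketch below shows sample-count-weighted averaging of site model weights; it is a generic illustration, not the study's implementation:

```python
# Sketch of FedAvg: weight-average site models by local sample counts.
import torch

def fedavg(state_dicts, n_samples):
    """Average model weights from several sites, weighted by data volume."""
    total = sum(n_samples)
    avg = {}
    for key in state_dicts[0]:
        avg[key] = sum(sd[key] * (n / total)
                       for sd, n in zip(state_dicts, n_samples))
    return avg

# Three sites train locally, then share only their weights (never the data).
site_models = [torch.nn.Linear(10, 2) for _ in range(3)]
global_weights = fedavg([m.state_dict() for m in site_models], [120, 80, 200])
```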
3D deep learning for detecting pulmonary nodules in CT scans
Gruetzemacher, Ross
Gupta, Ashish
Paradice, David
Journal of the American Medical Informatics Association2018Journal Article, cited 85 times
Website
LIDC-IDRI
Objective: To demonstrate and test the validity of a novel deep-learning-based system for the automated detection of pulmonary nodules.
Materials and Methods: The proposed system uses 2 3D deep learning models, 1 for each of the essential tasks of computer-aided nodule detection: candidate generation and false positive reduction. A total of 888 scans from the LIDC-IDRI dataset were used for training and evaluation.
Results: Results for candidate generation on the test data indicated a detection rate of 94.77% with 30.39 false positives per scan, while the test results for false positive reduction exhibited a sensitivity of 94.21% with 1.789 false positives per scan. The overall system detection rate on the test data was 89.29% with 1.789 false positives per scan.
Discussion: An extensive and rigorous validation was conducted to assess the performance of the proposed system. The system combines novel 3D deep neural network architectures, applying deep learning to both candidate generation and false positive reduction, and was evaluated with a substantial test dataset. The results strongly support the ability of deep learning pulmonary nodule detection systems to generalize to unseen data. The source code and trained model weights have been made available.
Conclusion: A novel deep-neural-network-based pulmonary nodule detection system is demonstrated and validated. The results provide a performance comparison of the proposed deep-learning-based system against other similar systems.
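The headline numbers in such detection studies (sensitivity and false positives per scan) reduce to simple counts over the test set; a minimal sketch, with made-up counts, follows.

```python
def detection_metrics(tp, fn, fp, n_scans):
    """Sensitivity and false positives per scan from per-dataset counts."""
    sensitivity = tp / (tp + fn)          # detected nodules / all true nodules
    fps_per_scan = fp / n_scans           # spurious detections per CT scan
    return sensitivity, fps_per_scan

# Hypothetical counts, not the study's actual tallies.
sens, fps = detection_metrics(tp=500, fn=60, fp=1020, n_scans=570)
print(f"sensitivity={sens:.2%}, FPs/scan={fps:.3f}")
```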
Predicting risk of metastases and recurrence in soft-tissue sarcomas via Radiomics and Formal Methods
Casale, R.
Varriano, G.
Santone, A.
Messina, C.
Casale, C.
Gitto, S.
Sconfienza, L. M.
Bali, M. A.
Brunese, L.
JAMIA Open2023Journal Article, cited 0 times
Website
Soft-tissue-Sarcoma
Formal Methods
Radiomics
magnetic resonance imaging
metastases
model checking
soft-tissue sarcoma
OBJECTIVE: Soft-tissue sarcomas (STSs) of the extremities are a group of malignancies arising from mesenchymal cells that may develop distant metastases or local recurrence. In this article, we propose a novel methodology aimed at predicting metastasis and recurrence risk in patients with these malignancies by evaluating magnetic resonance radiomic features that are formally verified through formal logic models. MATERIALS AND METHODS: This is a retrospective study based on a public dataset evaluating T2-weighted fat-saturated or short tau inversion recovery MRI scans, with patients having "metastases/local recurrence" (group B) or "no metastases/no local recurrence" (group A) as clinical outcomes. Once radiomic features are extracted, they are included in formal models, against which a logic property written by a radiologist and his computer-scientist coworkers is automatically verified. RESULTS: Evaluating the efficacy of Formal Methods in predicting distant metastases/local recurrence in STSs (group A vs group B), our methodology showed a sensitivity and specificity of 0.81 and 0.67, respectively; this suggests that radiomics and formal verification may be useful in predicting future metastasis or local recurrence development in soft-tissue sarcoma. DISCUSSION: The authors discuss the literature on considering Formal Methods a valid alternative to other Artificial Intelligence techniques. CONCLUSIONS: An innovative, noninvasive and rigorous methodology can be significant in predicting local recurrence and metastasis development in STSs. Future work could assess the approach in multicentric studies to extract objective disease information, enriching the connection between quantitative radiomic analysis and radiological clinical evidence.
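As a loose, simplified stand-in for the verification step (the study's actual pipeline uses dedicated model-checking tools, not shown here), a logic property over extracted radiomic features can be expressed as a predicate and checked per patient; the feature names and thresholds below are entirely hypothetical.

```python
# Hypothetical property: "high grey-level non-uniformity AND low sphericity
# implies outcome group B (metastases/local recurrence)".
def property_holds(features: dict) -> bool:
    return features["glnu"] > 120.0 and features["sphericity"] < 0.45

# Toy per-patient radiomic feature dictionaries.
patients = [
    {"glnu": 150.2, "sphericity": 0.30},
    {"glnu": 80.5, "sphericity": 0.70},
]
predicted_group = ["B" if property_holds(p) else "A" for p in patients]
print(predicted_group)  # ['B', 'A']
```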
The exceptional responders initiative: feasibility of a National Cancer Institute pilot study
Conley, Barbara A
Staudt, Lou
Takebe, Naoko
Wheeler, David A
Wang, Linghua
Cardenas, Maria F
Korchina, Viktoriya
Zenklusen, Jean Claude
McShane, Lisa M
Tricoli, James V
JNCI: Journal of the National Cancer Institute2021Journal Article, cited 5 times
Website
Exceptional Responders
Projected outcomes using different nodule sizes to define a positive CT lung cancer screening examination
Gierada, David S
Pinsky, Paul
Nath, Hrudaya
Chiles, Caroline
Duan, Fenghai
Aberle, Denise R
Journal of the National Cancer Institute2014Journal Article, cited 74 times
Website
National Lung Screening Trial (NLST)
LUNG
Computer Aided Detection (CADe)
Computed Tomography (CT)
Background: Computed tomography (CT) screening for lung cancer has been associated with a high frequency of false positive results because of the high prevalence of indeterminate but usually benign small pulmonary nodules. The acceptability of reducing false-positive rates and diagnostic evaluations by increasing the nodule size threshold for a positive screen depends on the projected balance between benefits and risks. Methods: We examined data from the National Lung Screening Trial (NLST) to estimate screening CT performance and outcomes for scans with nodules above the 4 mm NLST threshold used to classify a CT screen as positive. Outcomes assessed included screening results, subsequent diagnostic tests performed, lung cancer histology and stage distribution, and lung cancer mortality. Sensitivity, specificity, positive predictive value, and negative predictive value were calculated for the different nodule size thresholds. All statistical tests were two-sided. Results: In 64% of positive screens (11 598/18 141), the largest nodule was 7 mm or less in greatest transverse diameter. By increasing the threshold, the percentages of lung cancer diagnoses that would have been missed or delayed and of false positives that would have been avoided progressively increased, for example from 1.0% and 15.8% at a 5 mm threshold to 10.5% and 65.8% at an 8 mm threshold, respectively. The projected reductions in postscreening follow-up CT scans and invasive procedures also increased as the threshold was raised. Differences across nodule sizes in lung cancer histology and stage distribution were small but statistically significant. There were no differences across nodule sizes in survival or mortality. Conclusion: Raising the nodule size threshold for a positive screen would substantially reduce false-positive CT screenings and medical resource utilization, with a variable impact on screening outcomes.
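The trade-off described above can be reproduced in miniature by recomputing the four screening metrics while sweeping the nodule-size threshold; the data below are synthetic and the size-risk link is a crude assumption, so only the qualitative pattern is meaningful.

```python
import numpy as np

def screen_metrics(nodule_mm, has_cancer, threshold):
    """Sensitivity/specificity/PPV/NPV when a screen is positive at >= threshold."""
    positive = nodule_mm >= threshold
    tp = np.sum(positive & has_cancer)
    fp = np.sum(positive & ~has_cancer)
    fn = np.sum(~positive & has_cancer)
    tn = np.sum(~positive & ~has_cancer)
    return dict(sens=tp / (tp + fn), spec=tn / (tn + fp),
                ppv=tp / (tp + fp), npv=tn / (tn + fn))

rng = np.random.default_rng(1)
sizes = rng.gamma(shape=2.0, scale=3.0, size=5000)        # nodule diameters (mm)
cancer = rng.random(5000) < np.clip(sizes / 40, 0, 1)     # crude size-risk link
for t in (4, 5, 6, 7, 8):                                 # candidate thresholds (mm)
    print(t, screen_metrics(sizes, cancer, t))
```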
ST3GAL1-associated transcriptomic program in glioblastoma tumor growth, invasion, and prognosis
Chong, Yuk Kien
Sandanaraj, Edwin
Koh, Lynnette WH
Thangaveloo, Moogaambikai
Tan, Melanie SY
Koh, Geraldene RH
Toh, Tan Boon
Lim, Grace GY
Holbrook, Joanna D
Kon, Oi Lian
Nadarajah, M.
Ng, I.
Ng, W. H.
Tan, N. S.
Lim, K. L.
Tang, C.
Ang, B. T.
Journal of the National Cancer Institute2016Journal Article, cited 16 times
Website
REMBRANDT
Radiogenomics
BRAIN
Glioblastoma Multiforme (GBM)
BACKGROUND: Cell surface sialylation is associated with tumor cell invasiveness in many cancers. Glioblastoma is the most malignant primary brain tumor and is highly infiltrative. ST3GAL1 sialyltransferase gene is amplified in a subclass of glioblastomas, and its role in tumor cell self-renewal remains unexplored. METHODS: Self-renewal of patient glioma cells was evaluated using clonogenic, viability, and invasiveness assays. ST3GAL1 was identified from differentially expressed genes in Peanut Agglutinin-stained cells and validated in REMBRANDT (n = 390) and Gravendeel (n = 276) clinical databases. Gene set enrichment analysis revealed upstream processes. TGFbeta signaling on ST3GAL1 transcription was assessed using chromatin immunoprecipitation. Transcriptome analysis of ST3GAL1 knockdown cells was done to identify downstream pathways. A constitutively active FoxM1 mutant lacking critical anaphase-promoting complex/cyclosome ([APC/C]-Cdh1) binding sites was used to evaluate ST3Gal1-mediated regulation of FoxM1 protein. Finally, the prognostic role of ST3Gal1 was determined using an orthotopic xenograft model (3 mice groups comprising nontargeting and 2 clones of ST3GAL1 knockdown in NNI-11 [8 per group] and NNI-21 [6 per group]), and the correlation with patient clinical information. All statistical tests on patients' data were two-sided; other P values below are one-sided. RESULTS: High ST3GAL1 expression defines an invasive subfraction with self-renewal capacity; its loss of function prolongs survival in a mouse model established from mesenchymal NNI-11 (P < .001; groups of 8 in 3 arms: nontargeting, C1, and C2 clones of ST3GAL1 knockdown). ST3GAL1 transcriptomic program stratifies patient survival (hazard ratio [HR] = 2.47, 95% confidence interval [CI] = 1.72 to 3.55, REMBRANDT P = 1.92 x 10^-8; HR = 2.89, 95% CI = 1.94 to 4.30, Gravendeel P = 1.05 x 10^-11), independent of age and histology, and associates with higher tumor grade and T2 volume (P = 1.46 x 10^-4). TGFbeta signaling, elevated in mesenchymal patients, correlates with high ST3GAL1 (REMBRANDT glioma cor = 0.31, P = 2.29 x 10^-10; Gravendeel glioma cor = 0.50, P = 3.63 x 10^-20). The transcriptomic program upon ST3GAL1 knockdown enriches for mitotic cell cycle processes. FoxM1 was identified as a statistically significantly modulated gene (P = 2.25 x 10^-5) and mediates ST3Gal1 signaling via the (APC/C)-Cdh1 complex. CONCLUSIONS: The ST3GAL1-associated transcriptomic program portends poor prognosis in glioma patients and enriches for higher tumor grades of the mesenchymal molecular classification. We show that ST3Gal1-regulated self-renewal traits are crucial to the sustenance of glioblastoma multiforme growth.
The Impact of Obesity on Tumor Glucose Uptake in Breast and Lung Cancer
Leitner, Brooks P.
Perry, Rachel J.
JNCI Cancer Spectrum2020Journal Article, cited 0 times
Website
HNSCC
QIN Breast
NSCLC Radiogenomics
Anti-PD-1_Lung
TCGA-LUAD
TCGA-LUSC
Soft-tissue Sarcoma
Obesity confers an increased incidence and poorer clinical prognosis in over ten cancer types. Paradoxically, obesity provides protection from poor outcomes in lung cancer. Mechanisms for the obesity-cancer links are not fully elucidated, with altered glucose metabolism being a promising candidate. Using 18F-fluorodeoxyglucose positron emission tomography/computed tomography images from The Cancer Imaging Archive, we explored the relationship between body mass index (BMI) and glucose metabolism in several cancers. In 188 patients (BMI: 27.7, SD = 5.1, range = 17.4-49.3 kg/m2), higher BMI was associated with greater tumor glucose uptake in obesity-associated breast cancer (r = 0.36, p = 0.02), and with lower tumor glucose uptake in non-small-cell lung cancer (r = -0.26, p = 0.048), using two-sided Pearson correlations. No relationship was observed in soft-tissue sarcoma or squamous cell carcinoma. Harnessing the National Cancer Institute's open-access database, we demonstrate altered tumor glucose metabolism as a potential mechanism for the detrimental and protective effects of obesity on breast and lung cancer, respectively.
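A minimal sketch of the study's statistical test, a two-sided Pearson correlation between BMI and a tumor glucose-uptake measure, on synthetic values (the positive trend injected below is an assumption for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
bmi = rng.normal(27.7, 5.1, size=188)                 # matches reported BMI mean/SD
uptake = 0.05 * bmi + rng.normal(0, 1, size=188)      # hypothetical uptake values
r, p = stats.pearsonr(bmi, uptake)                    # two-sided by default
print(f"r={r:.2f}, p={p:.3f}")
```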
Prediction of liver Dmean for proton beam therapy using deep learning and contour-based data augmentation
Jampa-Ngern, S.
Kobashi, K.
Shimizu, S.
Takao, S.
Nakazato, K.
Shirato, H.
J Radiat Res2021Journal Article, cited 0 times
Website
TCGA-LIHC
Deep Learning
LIVER
Computed Tomography (CT)
The prediction of liver Dmean with 3-dimensional radiation treatment planning (3DRTP) is time consuming in the selection of proton beam therapy (PBT), and deep learning prediction generally requires large, tumor-specific databases. We developed a simple dose prediction tool (SDP) using deep learning and a novel contour-based data augmentation (CDA) approach and assessed its usability. We trained the SDP to predict the liver Dmean immediately. Five and two computed tomography (CT) data sets of actual patients with liver cancer were used for training and validation, respectively. Data augmentation was performed by artificially embedding 199 contours of virtual clinical target volumes (CTVs) into the CT images for each patient. The data sets of the CTVs and organs at risk (OARs) were labeled with the liver Dmean for six different treatment plans using two-dimensional calculations, assuming all tissue densities to be 1.0. The validated model was tested using 10 unlabeled CT data sets of actual patients. Contouring of only the liver and CTV was required as input. The mean relative error (MRE), mean percentage error (MPE), and regression coefficient between the planned and predicted Dmean were 0.1637, 6.6%, and 0.9455, respectively. The mean time required for the inference of liver Dmean for the six different treatment plans for a patient was 4.47+/-0.13 seconds. We conclude that the SDP is cost-effective and usable for gross estimation of liver Dmean in the clinic, although the accuracy should be improved further if liver Dmean estimates are to be compatible with 3DRTP.
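The reported error metrics are straightforward to compute once planned and predicted Dmean values are paired; the sketch below uses the usual definitions of MRE and MPE, and a least-squares slope as the regression coefficient, which may differ in detail from the paper's conventions. Dose values are hypothetical.

```python
import numpy as np

planned = np.array([18.2, 22.5, 15.1, 30.4, 25.0])    # Gy, hypothetical liver Dmean
predicted = np.array([19.0, 21.7, 16.0, 28.9, 26.1])

mre = np.mean(np.abs(predicted - planned) / planned)  # mean relative error
mpe = np.mean((predicted - planned) / planned) * 100  # mean percentage error (%)
slope = np.polyfit(planned, predicted, 1)[0]          # regression coefficient (slope)
print(mre, mpe, slope)
```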
Intensity-modulated irradiation for superficial tumors by overlapping irradiation fields using intensity modulators in accelerator-based BNCT
Sasaki, Akinori
Hu, Naonori
Takata, Takushi
Matsubayashi, Nishiki
Sakurai, Yoshinori
Suzuki, Minoru
Tanaka, Hiroki
Journal of Radiation Research2022Journal Article, cited 0 times
Website
HNSCC
Radiation Therapy
Development of optimization method for uniform dose distribution on superficial tumor in an accelerator-based boron neutron capture therapy system
Sasaki, A.
Hu, N.
Matsubayashi, N.
Takata, T.
Sakurai, Y.
Suzuki, M.
Tanaka, H.
J Radiat Res2023Journal Article, cited 0 times
Website
HNSCC
Head-Neck-CT-Atlas
3D Printed Phantom
accelerator-based boron neutron capture therapy
intensity-modulated irradiation
optimization method
uniform dose distribution
uniform thermal neutron flux
Radiation Therapy
To treat superficial tumors using accelerator-based boron neutron capture therapy (ABBNCT), a technique was investigated in which a single-neutron modulator was placed inside a collimator and irradiated with thermal neutrons. In large tumors, the dose was reduced at their edges. The objective was to generate a uniform dose distribution of therapeutic intensity. In this study, we developed a method for optimizing the shape of the intensity modulator and the irradiation time ratio to generate a uniform dose distribution for treating superficial tumors of various shapes. A computational tool was developed, which performed Monte Carlo simulations using 424 different source combinations. We determined the shape of the intensity modulator with the highest minimum tumor dose. The homogeneity index (HI), which evaluates uniformity, was also derived. To evaluate the efficacy of this method, the dose distribution of a tumor with a diameter of 100 mm and thickness of 10 mm was evaluated. Furthermore, irradiation experiments were conducted using an ABBNCT system. The thermal neutron flux distribution, which has a considerable impact on the tumor dose, showed good agreement between experiments and calculations. Moreover, the minimum tumor dose and HI improved by 20% and 36%, respectively, compared with the irradiation case wherein a single-neutron modulator was used. The proposed method improves the minimum tumor dose and uniformity. The results demonstrate the method's efficacy in ABBNCT for the treatment of superficial tumors.
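Definitions of the homogeneity index vary between studies; assuming the common (Dmax - Dmin)/Dmean form over tumor voxels (which may not be the exact variant used here), a minimal sketch is:

```python
import numpy as np

def homogeneity_index(dose_in_tumor):
    """(Dmax - Dmin) / Dmean over tumor voxels; lower means more uniform."""
    d = np.asarray(dose_in_tumor, dtype=float)
    return (d.max() - d.min()) / d.mean()

# Synthetic tumor dose samples (Gy-equivalent), for illustration only.
dose = np.random.default_rng(3).normal(30.0, 1.5, size=10_000)
print(homogeneity_index(dose))
```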
Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy
Koike, Yuhei
Akino, Yuichi
Sumida, Iori
Shiomi, Hiroya
Mizuno, Hirokazu
Yagi, Masashi
Isohashi, Fumiaki
Seo, Yuji
Suzuki, Osamu
Ogawa, Kazuhiko
J Radiat Res2019Journal Article, cited 0 times
Deep Learning
BRAIN
Magnetic Resonance Imaging (MRI)
Computed Tomography (CT)
Modality synthesis
The aim of this work is to generate synthetic computed tomography (sCT) images from multi-sequence magnetic resonance (MR) images using an adversarial network and to assess the feasibility of sCT-based treatment planning for brain radiotherapy. Datasets for 15 patients with glioblastoma were selected and 580 pairs of CT and MR images were used. T1-weighted, T2-weighted and fluid-attenuated inversion recovery MR sequences were combined to create a three-channel image as input data. A conditional generative adversarial network (cGAN) was trained using image patches. The image quality was evaluated using voxel-wise mean absolute errors (MAEs) of the CT number. For the dosimetric evaluation, 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans were generated using the original CT set and recalculated using the sCT images. The isocenter dose and dose-volume parameters were compared for 3D-CRT and VMAT plans, respectively. The equivalent path length was also compared. The mean MAEs for the whole body, soft tissue and bone region were 108.1 +/- 24.0, 38.9 +/- 10.7 and 366.2 +/- 62.0 Hounsfield units (HU), respectively. The dosimetric evaluation revealed no significant difference in the isocenter dose for 3D-CRT plans. The differences in the dose received by 2% of the volume (D2%), D50% and D98% relative to the prescribed dose were <1.0%. The overall equivalent path length was shorter than that for real CT by 0.6 +/- 1.9 mm. A treatment planning study using generated sCT detected only small, clinically negligible differences. These findings demonstrated the feasibility of generating sCT images for MR-only radiotherapy from multi-sequence MR images using cGAN.
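The region-wise MAE evaluation reduces to masked absolute differences over co-registered volumes; the sketch below uses synthetic arrays and crude HU-threshold masks as stand-ins for the study's actual registration and segmentation steps.

```python
import numpy as np

def masked_mae(ct, sct, mask):
    """Voxel-wise mean absolute error of CT numbers within a region mask."""
    return np.abs(ct[mask] - sct[mask]).mean()

rng = np.random.default_rng(4)
ct = rng.normal(0, 300, size=(32, 64, 64))     # real CT (HU), synthetic
sct = ct + rng.normal(0, 100, size=ct.shape)   # imperfect synthetic CT
bone = ct > 200                                 # crude HU-threshold region masks
soft = (ct > -100) & (ct <= 200)
body = ct > -500
print(masked_mae(ct, sct, body), masked_mae(ct, sct, soft), masked_mae(ct, sct, bone))
```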
Efficacy evaluation of 2D, 3D U-Net semantic segmentation and atlas-based segmentation of normal lungs excluding the trachea and main bronchi
Nemoto, Takafumi
Futakami, Natsumi
Yagi, Masamichi
Kumabe, Atsuhiro
Takeda, Atsuya
Kunieda, Etsuo
Shigematsu, Naoyuki
Journal of Radiation Research2020Journal Article, cited 0 times
Website
NSCLC-Radiomics
Segmentation
LUNG
This study aimed to examine the efficacy of semantic segmentation implemented by deep learning and to confirm whether this method is more effective than a commercially dominant auto-segmentation tool with regard to delineating normal lung excluding the trachea and main bronchi. A total of 232 non-small-cell lung cancer cases were examined. The computed tomography (CT) images of these cases were converted from Digital Imaging and Communications in Medicine (DICOM) Radiation Therapy (RT) formats to arrays of 32 x 128 x 128 voxels and input into both 2D and 3D U-Net, which are deep learning networks for semantic segmentation. The training, validation and test sets comprised 160, 40 and 32 cases, respectively. Dice similarity coefficients (DSCs) on the test set were evaluated for Smart Segmentation Knowledge Based Contouring (an atlas-based segmentation tool), as well as for the 2D and 3D U-Nets. The mean DSCs on the test set were 0.964 [95% confidence interval (CI), 0.960-0.968], 0.990 (95% CI, 0.989-0.992) and 0.990 (95% CI, 0.989-0.991) with Smart Segmentation, 2D and 3D U-Net, respectively. Compared with Smart Segmentation, both U-Nets presented significantly higher DSCs by the Wilcoxon signed-rank test (P < 0.01). There was no difference in mean DSC between the 2D and 3D U-Net systems. The newly devised 2D and 3D U-Net approaches were found to be more effective than the commercial auto-segmentation tool. Even the relatively shallow 2D U-Net, which does not require high-performance computational resources, was effective enough for lung segmentation. Semantic segmentation using deep learning was useful in radiation treatment planning for lung cancers.
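The DSC used throughout such comparisons is 2|A∩B|/(|A|+|B|) over binary voxel masks; a minimal sketch on arrays shaped like the study's 32 x 128 x 128 inputs (the masks below are synthetic):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((32, 128, 128), bool); pred[:, 30:90, 30:90] = True  # auto contour
ref = np.zeros_like(pred); ref[:, 35:95, 35:95] = True               # reference
print(f"DSC = {dice(pred, ref):.3f}")
```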
AI-based Prognostic Imaging Biomarkers for Precision Neurooncology: the ReSPOND Consortium
Davatzikos, C.
Barnholtz-Sloan, J. S.
Bakas, S.
Colen, R.
Mahajan, A.
Quintero, C. B.
Font, J. C.
Puig, J.
Jain, R.
Sloan, A. E.
Badve, C.
Marcus, D. S.
Choi, Y. S.
Lee, S. K.
Chang, J. H.
Poisson, L. M.
Griffith, B.
Dicker, A. P.
Flanders, A. E.
Booth, T. C.
Rathore, S.
Akbari, H.
Sako, C.
Bilello, M.
Shukla, G.
Kazerooni, A. F.
Brem, S.
Lustig, R.
Mohan, S.
Bagley, S.
Nasrallah, M.
O'Rourke, D. M.
Neuro-Oncology2020Journal Article, cited 0 times
Website
Machine Learning
Glioblastoma Multiforme (GBM)
Computer Aided Diagnosis (CADx)
Magnetic Resonance Imaging (MRI)
Radiomics
Radiomic features
BRAIN
Sexually dimorphic radiogenomic models identify distinct imaging and biological pathways that are prognostic of overall survival in glioblastoma
Beig, Niha
Singh, Salendra
Bera, Kaustav
Prasanna, Prateek
Singh, Gagandeep
Chen, Jonathan
Bamashmos, Anas Saeed
Barnett, Addison
Hunter, Kyle
Statsevych, Volodymyr
Hill, Virginia B
Varadan, Vinay
Madabhushi, Anant
Ahluwalia, Manmeet S
Tiwari, Pallavi
Neuro-Oncology2020Journal Article, cited 0 times
IvyGAP
Glioblastoma
MRI
BACKGROUND: Recent epidemiological studies have suggested that sexual dimorphism influences treatment response and prognostic outcome in glioblastoma (GBM). To this end, we sought to (i) identify distinct sex-specific radiomic phenotypes, derived from tumor subcompartments (peritumoral edema, enhancing tumor, and necrotic core) on pretreatment MRI scans, that are prognostic of overall survival (OS) in GBMs, and (ii) investigate radiogenomic associations of the MRI-based phenotypes with corresponding transcriptomic data, to identify the signaling pathways that drive sex-specific tumor biology and treatment response in GBM.
METHODS: In a retrospective setting, 313 GBM patients (male = 196, female = 117) were curated from multiple institutions for radiomic analysis, where 130 were used for training and independently validated on a cohort of 183 patients. For the radiogenomic analysis, 147 GBM patients (male = 94, female = 53) were used, with 125 patients in training and 22 cases for independent validation.
RESULTS: Cox regression models of radiomic features from gadolinium T1-weighted MRI allowed for developing more precise prognostic models, when trained separately on male and female cohorts. Our radiogenomic analysis revealed higher expression of Laws energy features that capture spots and ripple-like patterns (representative of increased heterogeneity) from the enhancing tumor region, as well as aggressive biological processes of cell adhesion and angiogenesis to be more enriched in the "high-risk" group of poor OS in the male population. In contrast, higher expressions of Laws energy features (which detect levels and edges) from the necrotic core with significant involvement of immune related signaling pathways was observed in the "low-risk" group of the female population.
CONCLUSIONS: Sexually dimorphic radiogenomic models could help risk-stratify GBM patients for personalized treatment decisions.
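A minimal sketch of the sex-stratified modelling idea, fitting separate Cox proportional hazards models per cohort with the lifelines package; the dataframe, column names, and covariates below are synthetic assumptions, not the study's feature set.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "laws_energy": rng.normal(size=200),     # stand-in radiomic feature
    "age": rng.normal(60, 10, size=200),
    "os_months": rng.exponential(15, size=200),
    "event": rng.integers(0, 2, size=200),   # 1 = death observed
    "sex": rng.choice(["M", "F"], size=200),
})
for sex, sub in df.groupby("sex"):           # train separately per sex
    cph = CoxPHFitter().fit(sub.drop(columns="sex"),
                            duration_col="os_months", event_col="event")
    print(sex, cph.concordance_index_)
```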
Combined molecular subtyping, grading, and segmentation of glioma using multi-task deep learning
van der Voort, S. R.
Incekara, F.
Wijnenga, M. M. J.
Kapsas, G.
Gahrmann, R.
Schouten, J. W.
Nandoe Tewarie, R.
Lycklama, G. J.
De Witt Hamer, P. C.
Eijgelaar, R. S.
French, P. J.
Dubbink, H. J.
Vincent, Ajpe
Niessen, W. J.
van den Bent, M. J.
Smits, M.
Klein, S.
Neuro Oncol2022Journal Article, cited 0 times
Website
Radiomics
Radiogenomics
REMBRANDT
CPTAC-GBM
Ivy GAP
Brain-Tumor-Progression
TCGA-LGG
TCGA-GBM
Deep learning
BRAIN
Segmentation
BACKGROUND: Accurate characterization of glioma is crucial for clinical decision making. A delineation of the tumor is also desirable in the initial decision stages but is time-consuming. Previously, deep learning methods have been developed that can either non-invasively predict the genetic or histological features of glioma, or that can automatically delineate the tumor, but not both tasks at the same time. Here, we present our method that can predict the molecular subtype and grade, while simultaneously providing a delineation of the tumor. METHODS: We developed a single multi-task convolutional neural network that uses the full 3D, structural, pre-operative MRI scans to predict the IDH mutation status, the 1p/19q co-deletion status, and the grade of a tumor, while simultaneously segmenting the tumor. We trained our method using a patient cohort containing 1508 glioma patients from 16 institutes. We tested our method on an independent dataset of 240 patients from 13 different institutes. RESULTS: In the independent test set we achieved an IDH-AUC of 0.90, a 1p/19q co-deletion AUC of 0.85, and a grade AUC of 0.81 (grade II/III/IV). For the tumor delineation, we achieved a mean whole tumor DICE score of 0.84. CONCLUSIONS: We developed a method that non-invasively predicts multiple, clinically relevant features of glioma. Evaluation in an independent dataset shows that the method achieves a high performance and that it generalizes well to the broader clinical population. This first-of-its-kind method opens the door to more generalizable, instead of hyper-specialized, AI methods.
Added prognostic value of 3D deep learning-derived features from preoperative MRI for adult-type diffuse gliomas
Lee, J. O.
Ahn, S. S.
Choi, K. S.
Lee, J.
Jang, J.
Park, J. H.
Hwang, I.
Park, C. K.
Park, S. H.
Chung, J. W.
Choi, S. H.
Neuro Oncol2024Journal Article, cited 0 times
TCGA-GBM
Adult
Humans
Prognosis
*Brain Neoplasms/diagnostic imaging/genetics
*Deep Learning
Retrospective Studies
*Glioma/diagnostic imaging/genetics/surgery
Magnetic Resonance Imaging/methods
Deep learning
Glioblastoma
Isocitrate dehydrogenase (IDH) mutation
Magnetic Resonance Imaging (MRI)
Survival analysis
BACKGROUND: To investigate the prognostic value of spatial features from whole-brain MRI using a three-dimensional (3D) convolutional neural network for adult-type diffuse gliomas. METHODS: In a retrospective, multicenter study, 1925 diffuse glioma patients were enrolled from 5 datasets: SNUH (n = 708), UPenn (n = 425), UCSF (n = 500), TCGA (n = 160), and Severance (n = 132). The SNUH and Severance datasets served as external test sets. Precontrast and postcontrast 3D T1-weighted, T2-weighted, and T2-FLAIR images were processed as multichannel 3D images. A 3D-adapted SE-ResNeXt model was trained to predict overall survival. The prognostic value of the deep learning-based prognostic index (DPI), a spatial feature-derived quantitative score, and established prognostic markers were evaluated using Cox regression. Model evaluation was performed using the concordance index (C-index) and Brier score. RESULTS: The MRI-only median DPI survival prediction model achieved C-indices of 0.709 and 0.677 (BS = 0.142 and 0.215) and survival differences (P < 0.001 and P = 0.002; log-rank test) for the SNUH and Severance datasets, respectively. Multivariate Cox analysis revealed DPI as a significant prognostic factor, independent of clinical and molecular genetic variables: hazard ratio = 0.032 and 0.036 (P < 0.001 and P = 0.004) for the SNUH and Severance datasets, respectively. Multimodal prediction models achieved higher C-indices than models using only clinical and molecular genetic variables: 0.783 vs. 0.774, P = 0.001, SNUH; 0.766 vs. 0.748, P = 0.023, Severance. CONCLUSIONS: The global morphologic feature derived from 3D CNN models using whole-brain MRI has independent prognostic value for diffuse gliomas. Combining clinical, molecular genetic, and imaging data yields the best performance.
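The C-index reported here measures the fraction of comparable patient pairs whose predicted ordering matches their observed survival ordering; a sketch using the lifelines reference implementation on synthetic scores:

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(6)
times = rng.exponential(20, size=300)                   # survival times (months)
risk = -np.log(times) + rng.normal(0, 0.5, size=300)    # higher risk ~ shorter survival
events = rng.random(300) < 0.7                          # ~70% observed events

# concordance_index expects scores where higher = longer survival, hence -risk.
print(concordance_index(times, -risk, events))
```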
Imaging descriptors improve the predictive power of survival models for glioblastoma patients
Mazurowski, Maciej Andrzej
Desjardins, Annick
Malof, Jordan Milton
Neuro-Oncology2013Journal Article, cited 62 times
Website
TCGA-GBM
Radiomics
Magnetic resonance imaging (MRI)
BRAIN
BACKGROUND: Because effective prediction of survival time can be highly beneficial for the treatment of glioblastoma patients, the relationship between survival time and multiple patient characteristics has been investigated. In this paper, we investigate whether the predictive power of a survival model based on clinical patient features improves when MRI features are also included in the model. METHODS: The subjects in this study were 82 glioblastoma patients for whom clinical features as well as MR imaging exams were made available by The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA). Twenty-six imaging features in the available MR scans were assessed by radiologists from the TCGA Glioma Phenotype Research Group. We used multivariate Cox proportional hazards regression to construct 2 survival models: one that used 3 clinical features (age, gender, and KPS) as the covariates and one that used both the imaging features and the clinical features as the covariates. Then, we used 2 measures to compare the predictive performance of these 2 models: area under the receiver operating characteristic curve for the 1-year survival threshold and overall concordance index. To eliminate any positive performance estimation bias, we used leave-one-out cross-validation. RESULTS: The performance of the model based on both clinical and imaging features was higher than the performance of the model based on only the clinical features, in terms of both area under the receiver operating characteristic curve (P < .01) and the overall concordance index (P < .01). CONCLUSIONS: Imaging features assessed using a controlled lexicon have additional predictive value compared with clinical features when predicting survival time in glioblastoma patients.
Multicenter imaging outcomes study of The Cancer Genome Atlas glioblastoma patient cohort: imaging predictors of overall and progression-free survival
Wangaryattawanich, Pattana
Hatami, Masumeh
Wang, Jixin
Thomas, Ginu
Flanders, Adam
Kirby, Justin
Wintermark, Max
Huang, Erich S.
Bakhtiari, Ali Shojaee
Luedi, Markus M.
Hashmi, Syed S.
Rubin, Daniel L.
Chen, James Y.
Hwang, Scott N.
Freymann, John
Holder, Chad A.
Zinn, Pascal O.
Colen, Rivka R.
Neuro-Oncology2015Journal Article, cited 40 times
Website
TCGA-GBM
VASARI
Survival
Despite an aggressive therapeutic approach, the prognosis for most patients with glioblastoma (GBM) remains poor. The aim of this study was to determine the significance of preoperative MRI variables, both quantitative and qualitative, with regard to overall and progression-free survival in GBM. We retrospectively identified 94 untreated GBM patients from the Cancer Imaging Archive who had pretreatment MRI and corresponding patient outcomes and clinical information in The Cancer Genome Atlas. Qualitative imaging assessments were based on the Visually Accessible Rembrandt Images feature-set criteria. Volumetric parameters were obtained for the specific tumor components: contrast enhancement, necrosis, and edema/invasion. Cox regression was used to assess the prognostic and survival significance of each imaging variable. Univariable Cox regression analysis demonstrated 10 imaging features and 2 clinical variables to be significantly associated with overall survival. Multivariable Cox regression analysis showed that tumor-enhancing volume (P = .03) and eloquent brain involvement (P < .001) were independent prognostic indicators of overall survival. In the multivariable Cox analysis of the volumetric features, the edema/invasion volume of more than 85 000 mm3 and the proportion of enhancing tumor were significantly correlated with higher mortality (Ps = .004 and .003, respectively). Preoperative MRI parameters have a significant prognostic role in predicting survival in patients with GBM, thus making them useful for patient stratification and endpoint biomarkers in clinical trials.
MRI features predict survival and molecular markers in diffuse lower-grade gliomas
Zhou, Hao
Vallieres, Martin
Bai, Harrison X
Su, Chang
Tang, Haiyun
Oldridge, Derek
Zhang, Zishu
Xiao, Bo
Liao, Weihua
Tao, Yongguang
Zhou, Jianhua
Zhang, Paul
Yang, Li
Neuro-Oncology2017Journal Article, cited 41 times
Website
TCGA-LGG
Lower-grade glioma (LGG)
VASARI
Radiogenomics
1p/19q co-deletion
IDH mutation
Texture analysis
Background: Previous studies have shown that MR imaging features can be used to predict survival and molecular profile of glioblastoma. However, no study of a similar type has been performed on lower-grade gliomas (LGGs). Methods: Presurgical MRIs of 165 patients with diffuse low- and intermediate-grade gliomas (histological grades II and III) were scored according to the Visually Accessible Rembrandt Images (VASARI) annotations. Radiomic models using automated texture analysis and VASARI features were built to predict isocitrate dehydrogenase 1 (IDH1) mutation, 1p/19q codeletion status, histological grade, and tumor progression. Results: Interrater analysis showed significant agreement in all imaging features scored (k = 0.703-1.000). On multivariate Cox regression analysis, no enhancement and a smooth non-enhancing margin were associated with longer progression-free survival (PFS), while a smooth non-enhancing margin was associated with longer overall survival (OS) after taking into account age, grade, tumor location, histology, extent of resection, and IDH1 1p/19q subtype. Using logistic regression and bootstrap testing evaluations, texture models were found to possess higher prediction potential for IDH1 mutation, 1p/19q codeletion status, histological grade, and progression of LGGs than VASARI features, with areas under the receiver-operating characteristic curves of 0.86 +/- 0.01, 0.96 +/- 0.01, 0.86 +/- 0.01, and 0.80 +/- 0.01, respectively. Conclusion: No enhancement and a smooth non-enhancing margin on MRI were predictive of longer PFS, while a smooth non-enhancing margin was a significant predictor of longer OS in LGGs. Textural analyses of MR imaging data predicted IDH1 mutation, 1p/19q codeletion, histological grade, and tumor progression with high accuracy.
Magnetic resonance perfusion image features uncover an angiogenic subgroup of glioblastoma patients with poor survival and better response to antiangiogenic treatment
Liu, Tiffany T.
Achrol, Achal S.
Mitchell, Lex A.
Rodriguez, Scott A.
Feroze, Abdullah
Michael Iv
Kim, Christine
Chaudhary, Navjot
Gevaert, Olivier
Stuart, Josh M.
Harsh, Griffith R.
Chang, Steven D.
Rubin, Daniel L.
Neuro-Oncology2016Journal Article, cited 15 times
Website
Radiogenomics
TCGA-GBM
Background. In previous clinical trials, antiangiogenic therapies such as bevacizumab did not show efficacy in patients with newly diagnosed glioblastoma (GBM). This may be a result of the heterogeneity of GBM, which has a variety of imaging-based phenotypes and gene expression patterns. In this study, we sought to identify a phenotypic subtype of GBM patients who have distinct tumor-image features and molecular activities and who may benefit from antiangiogenic therapies. Methods. Quantitative image features characterizing subregions of tumors and the whole tumor were extracted from preoperative and pretherapy perfusion magnetic resonance (MR) images of 117 GBM patients in 2 independent cohorts. Unsupervised consensus clustering was performed to identify robust clusters of GBM in each cohort. Cox survival and gene set enrichment analyses were conducted to characterize the clinical significance and molecular pathway activities of the clusters. The differential treatment efficacy of antiangiogenic therapy between the clusters was evaluated. Results. A subgroup of patients with elevated perfusion features was identified and was significantly associated with poor patient survival after accounting for other clinical covariates (P values <.01; hazard ratios > 3), consistently found in both cohorts. Angiogenesis and hypoxia pathways were enriched in this subgroup of patients, suggesting the potential efficacy of antiangiogenic therapy. Patients of the angiogenic subgroups pooled from both cohorts, who had chemotherapy information available, had significantly longer survival when treated with antiangiogenic therapy (log-rank P = .022). Conclusions. Our findings suggest that an angiogenic subtype of GBM patients may benefit from antiangiogenic therapy with improved overall survival.
Automatic assessment of glioma burden: A deep learning algorithm for fully automated volumetric and bi-dimensional measurement
Chang, Ken
Beers, Andrew L
Bai, Harrison X
Brown, James M
Ly, K Ina
Li, Xuejun
Senders, Joeky T
Kavouridis, Vasileios K
Boaro, Alessandro
Su, Chang
Bi, Wenya Linda
Rapalino, Otto
Liao, Weihua
Shen, Qin
Zhou, Hao
Xiao, Bo
Wang, Yinyan
Zhang, Paul J
Pinho, Marco C
Wen, Patrick Y
Batchelor, Tracy T
Boxerman, Jerrold L
Arnaout, Omar
Rosen, Bruce R
Gerstner, Elizabeth R
Yang, Li
Huang, Raymond Y
Kalpathy-Cramer, Jayashree
Neuro Oncol2019Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
Ivy GAP
Deep Learning
Glioma
Segmentation
BACKGROUND: Longitudinal measurement of glioma burden with MRI is the basis for treatment response assessment. In this study, we developed a deep learning algorithm that automatically segments abnormal FLAIR hyperintensity and contrast-enhancing tumor, quantitating tumor volumes as well as the product of maximum bi-dimensional diameters according to the Response Assessment in Neuro-Oncology (RANO) criteria (AutoRANO). METHODS: Two cohorts of patients were used for this study. One consisted of 843 pre-operative MRIs from 843 patients with low- or high-grade gliomas from four institutions; the second consisted of 713 longitudinal, post-operative MRI visits from 54 patients with newly diagnosed glioblastomas (each with two pre-treatment "baseline" MRIs) from one institution. RESULTS: The automatically generated FLAIR hyperintensity volume, contrast-enhancing tumor volume, and AutoRANO were highly repeatable for the double-baseline visits, with intraclass correlation coefficients (ICCs) of 0.986, 0.991, and 0.977, respectively, on the cohort of post-operative GBM patients. Furthermore, there was high agreement between manually and automatically measured tumor volumes, with ICC values of 0.915, 0.924, and 0.965 for pre-operative FLAIR hyperintensity, post-operative FLAIR hyperintensity, and post-operative contrast-enhancing tumor volumes, respectively. Lastly, the ICCs for comparing manually and automatically derived longitudinal changes in tumor burden were 0.917, 0.966, and 0.850 for FLAIR hyperintensity volume, contrast-enhancing tumor volume, and RANO measures, respectively. CONCLUSIONS: Our automated algorithm demonstrates potential utility for evaluating tumor burden in complex post-treatment settings, although further validation in multi-center clinical trials will be needed prior to widespread implementation.
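Repeatability in such double-baseline designs is typically summarized with an intraclass correlation coefficient; the sketch below implements one standard single-measure variant, ICC(3,1), which may differ from the exact variant used in the paper. Volumes are hypothetical.

```python
import numpy as np

def icc_3_1(y):
    """Two-way mixed, single-measure ICC(3,1); y has shape (n_subjects, k_repeats)."""
    y = np.asarray(y, float)
    n, k = y.shape
    ms_rows = k * y.mean(axis=1).var(ddof=1)                 # between-subject MS
    resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0) + y.mean()
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))        # residual MS
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

vols = np.array([[10.1, 10.3], [25.0, 24.2], [5.5, 5.6], [40.2, 41.0]])  # mL, two baselines
print(icc_3_1(vols))
```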
Neurosurgery2017Journal Article, cited 3 times
Website
TCGA-GBM
Radiomics
BRAIN
Glioblastoma Multiforme (GBM)
Decorin Expression Is Associated With Diffusion MR Phenotypes in Glioblastoma
Patel, Kunal S.
Raymond, Catalina
Yao, Jingwen
Tsung, Joseph
Liau, Linda M.
Everson, Richard
Cloughesy, Timothy F.
Ellingson, Benjamin
Neurosurgery2019Journal Article, cited 0 times
Radiogenomics
Radiomics
TCGA
INTRODUCTION: Significant evidence from multiple phase II trials has suggested that diffusion-weighted imaging estimates of the apparent diffusion coefficient (ADC) are a predictive imaging biomarker for survival benefit in recurrent glioblastoma treated with anti-VEGF therapies, including bevacizumab, cediranib, and cabozantinib. Despite this observation, the underlying mechanism linking anti-VEGF therapeutic efficacy with diffusion MR characteristics remains unknown. We hypothesized that high expression of decorin, a small proteoglycan that has been associated with sequestration of pro-angiogenic signaling as well as reduction in the viscosity of the extracellular environment, may be associated with elevated ADC. METHODS: A differential gene expression analysis was carried out in human glioblastoma samples from patients in whom preoperative diffusion imaging was obtained. ADC histogram analysis was carried out to calculate preoperative ADCL values, the average ADC in the lower distribution using a double Gaussian mixed model. The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA) databases were queried to identify diffusion imaging and levels of decorin protein expression. Patients with recurrent glioblastoma who underwent resection prospectively had targeted biopsies collected based on the ADC analysis. These samples were stained for decorin and quantified using whole-slide image analysis software. RESULTS: Differential gene expression analysis between tumors associated with high and low preoperative ADCL showed that patients with high ADCL had increased decorin gene expression. Patients from the TCGA database with elevated ADCL had a significantly higher level of decorin gene expression (P = .01). These patients had a survival advantage on log-rank analysis (P = .002). Patients with preoperative diffusion imaging had multiple targeted intraoperative biopsies stained for decorin. Patients with high ADCL had increased decorin expression on immunohistochemistry (P = .002). CONCLUSION: Increased ADCL on diffusion MR imaging is associated with high decorin expression as well as increased survival in glioblastoma. Decorin may play an important role in the diffusion MR imaging features and in anti-VEGF treatment efficacy. Decorin expression may serve as a future therapeutic target in patients with favorable diffusion MR characteristics.
A novel fully automated MRI-based deep-learning method for classification of 1p/19q co-deletion status in brain gliomas
Yogananda, Chandan Ganesh Bangalore
Shah, Bhavya R
Yu, Frank F
Pinho, Marco C
Nalawade, Sahil S
Murugesan, Gowtham K
Wagner, Benjamin C
Mickey, Bruce
Patel, Toral R
Fei, Baowei
Madhuranthakam, Ananth J
Maldjian, Joseph A
Neuro-Oncology Advances2020Journal Article, cited 0 times
LGG-1p19qDeletion
BACKGROUND: One of the most important recent discoveries in brain glioma biology has been the identification of the isocitrate dehydrogenase (IDH) mutation and 1p/19q co-deletion status as markers for therapy and prognosis. 1p/19q co-deletion is the defining genomic marker for oligodendrogliomas and confers a better prognosis and treatment response than gliomas without it. Our group has previously developed a highly accurate deep-learning network for determining IDH mutation status using T2-weighted (T2w) MRI only. The purpose of this study was to develop a similar 1p/19q deep-learning classification network.
METHODS: Multiparametric brain MRI and corresponding genomic information were obtained for 368 subjects from The Cancer Imaging Archive and The Cancer Genome Atlas. 1p/19q co-deletions were present in 130 subjects. Two-hundred and thirty-eight subjects were non-co-deleted. A T2w image-only network (1p/19q-net) was developed to perform 1p/19q co-deletion status classification and simultaneous single-label tumor segmentation using 3D-Dense-UNets. Three-fold cross-validation was performed to generalize the network performance. Receiver operating characteristic analysis was also performed. Dice scores were computed to determine tumor segmentation accuracy.
RESULTS: 1p/19q-net demonstrated a mean cross-validation accuracy of 93.46% across the 3 folds (93.4%, 94.35%, and 92.62%, SD = 0.8) in predicting 1p/19q co-deletion status with a sensitivity and specificity of 0.90 ± 0.003 and 0.95 ± 0.01, respectively and a mean area under the curve of 0.95 ± 0.01. The whole tumor segmentation mean Dice score was 0.80 ± 0.007.
CONCLUSION: We demonstrate high 1p/19q co-deletion classification accuracy using only T2w MR images. This represents an important milestone toward using MRI to predict glioma histology, prognosis, and response to treatment.
Radiogenomic modeling predicts survival-associated prognostic groups in glioblastoma
Nuechterlein, Nicholas
Li, Beibin
Feroze, Abdullah
Holland, Eric C
Shapiro, Linda
Haynor, David
Fink, James
Cimino, Patrick J
Neuro-Oncology Advances2021Journal Article, cited 0 times
Website
TCGA-GBM
radiogenomics
A functional artificial neural network for noninvasive pretreatment evaluation of glioblastoma patients
Zander, E.
Ardeleanu, A.
Singleton, R.
Bede, B.
Wu, Y.
Zheng, S.
Neurooncol Adv2022Journal Article, cited 0 times
Website
Ivy GAP
TCGA-GBM
Radiomics
Artificial Neural Network (ANN)
Radiogenomics
copy number variations
Glioblastoma
Machine Learning
Background: Pretreatment assessments for glioblastoma (GBM) patients, especially elderly or frail patients, are critical for treatment planning. However, genetic profiling with intracranial biopsy carries a significant risk of permanent morbidity. We previously demonstrated that the CUL2 gene, encoding the scaffold cullin2 protein in the cullin2-RING E3 ligase (CRL2), can predict GBM radiosensitivity and prognosis. CUL2 expression levels are closely correlated with its copy number variations (CNVs). This study aims to develop artificial neural networks (ANNs) for pretreatment evaluation of GBM patients with inputs obtainable without intracranial surgical biopsies. Methods: Public datasets including Ivy-GAP, The Cancer Genome Atlas Glioblastoma (TCGA-GBM), and the Chinese Glioma Genome Atlas (CGGA) were used for training and testing of the ANNs. T1 images from corresponding cases were studied using automated segmentation for features of heterogeneity and tumor edge contouring. A ratio comparing the surface area of tumor borders versus the total volume (SvV) was derived from the DICOM-SEG conversions of segmented tumors. The edges of these borders were detected using the Canny edge detector. Packages including Keras, PyTorch, and TensorFlow were tested to build the ANNs. A 4-layered ANN (8-8-8-2) with a binary output was built with optimal performance after extensive testing. Results: The 4-layered deep learning ANN can identify a GBM patient's overall survival (OS) cohort with 80%-85% accuracy. The ANN requires 4 inputs, including CUL2 copy number, patients' age at GBM diagnosis, Karnofsky Performance Scale (KPS), and SvV ratio. Conclusion: Quantifiable image features can significantly improve the ability of ANNs to identify a GBM patients' survival cohort. Features such as clinical measures, genetic data, and image data, can be integrated into a single ANN for GBM pretreatment evaluation.
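The described network is small enough to write down directly; below is a minimal Keras sketch of a 4-layered (8-8-8-2) ANN over the four stated inputs. The optimizer, loss, activations, and training data are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),               # CUL2 CNV, age, KPS, SvV ratio
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(2, activation="softmax"),  # binary OS-cohort output
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data just to show the training call.
X = np.random.default_rng(7).normal(size=(100, 4)).astype("float32")
y = (X[:, 0] > 0).astype("int64")
model.fit(X, y, epochs=5, verbose=0)
```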
An investigation of the conformity, feasibility, and expected clinical benefits of multiparametric MRI-guided dose painting radiotherapy in glioblastoma
Brighi, Caterina
Keall, Paul J
Holloway, Lois C
Walker, Amy
Whelan, Brendan
de Witt Hamer, Philip C
Verburg, Niels
Aly, Farhannah
Chen, Cathy
Koh, Eng-Siew
Waddington, David E J
Neuro-Oncology Advances2022Journal Article, cited 0 times
QIN GBM Treatment Response
Background: New technologies developed to improve survival outcomes for glioblastoma (GBM) continue to have limited success. Recently, image-guided dose painting (DP) radiotherapy has emerged as a promising strategy to increase local control rates. In this study, we evaluate the practical application of a multiparametric MRI model of glioma infiltration for DP radiotherapy in GBM by measuring its conformity, feasibility, and expected clinical benefits against standard of care treatment.
Methods: Maps of tumor probability were generated from perfusion/diffusion MRI data from 17 GBM patients via a previously developed model of GBM infiltration. Prescriptions for DP were linearly derived from tumor probability maps and used to develop dose optimized treatment plans. Conformity of DP plans to dose prescriptions was measured via a quality factor. Feasibility of DP plans was evaluated by dose metrics to target volumes and critical brain structures. Expected clinical benefit of DP plans was assessed by tumor control probability. The DP plans were compared to standard radiotherapy plans.
Results: The conformity of the DP plans was >90%. Compared to the standard plans, DP (1) did not affect dose delivered to organs at risk; (2) increased mean and maximum dose and improved minimum dose coverage for the target volumes; (3) reduced minimum dose within the radiotherapy treatment margins; (4) improved local tumor control probability within the target volumes for all patients.
Conclusions: A multiparametric MRI model of GBM infiltration can enable conformal, feasible, and potentially beneficial dose painting radiotherapy plans.
NS-HGlio: A Generalizable and Repeatable HGG Segmentation and Volumetric measurement AI Algorithm for the Longitudinal MRI Assessment to Inform RANO in Trials and Clinics
Abayazeed, Aly H.
Abbassy, Ahmed
Mueller, Michael
Hill, Michael
Qayati, Mohamed
Mohamed, Shady
Mekhaimar, Mahmoud
Raymond, Catalina
Dubey, Prachi
Nael, Kambiz
Rohatgi, Saurabh
Kapare, Vaishali
Kulkarni, Ashwini
Shiang, Tina
Kumar, Atul
Andratschke, Nicolaus
Willmann, Jonas
Brawanski, Alexander
De Jesus, Reordan
Tuna, Ibrahim
Fung, Steve H.
Landolfi, Joseph C.
Ellingson, Benjamin M.
Reyes, Mauricio
Neuro-Oncology Advances2022Journal Article, cited 0 times
Website
QIN GBM Treatment Response
BraTS-TCGA-GBM
BRAIN
Segmentation
Machine Learning
Convolutional Neural Network (CNN)
High grade glioma
Background: Accurate and repeatable measurement of high-grade glioma (HGG) enhancing (Enh.) and T2/FLAIR hyperintensity/edema (Ed.) is required for monitoring treatment response. 3D measurements can be used to inform the modified Response Assessment in Neuro-Oncology (mRANO) criteria. We aim to develop an HGG volumetric measurement and visualisation AI algorithm that is generalizable and repeatable. Materials and methods: A single 3D convolutional neural network (CNN), NS-HGlio, for analysing HGG on MRI was developed with 5-fold cross-validation using a retrospective (557 MRIs), multicentre (38 sites) and multivendor (32 scanners) dataset divided into training (70%), validation (20%) and testing (10%). Six neuroradiologists created the ground truth (GT). Additional internal validation (IV, three institutions) using 70 MRIs and external validation (EV, single institution) using 40 MRIs were performed through the Dice similarity coefficient (DSC) of the Enh., Ed. and Enh. + Ed. (whole lesion/WL) labels, along with repeatability testing on 14 subjects from the TCIA MGH-QIN-GBM dataset using volume correlations between timepoints. Results: IV preoperative median DSC: Enh. 0.89 (SD 0.11), Ed. 0.88 (0.28), WL 0.88 (0.11). EV preoperative median DSC: Enh. 0.82 (0.09), Ed. 0.83 (0.11), WL 0.86 (0.06). IV postoperative median DSC: Enh. 0.77 (SD 0.20), Ed. 0.78 (SD 0.09), WL 0.78 (SD 0.11). EV postoperative median DSC: Enh. 0.75 (0.21), Ed. 0.74 (0.12), WL 0.79 (0.07). Repeatability testing yielded an intraclass correlation coefficient (ICC) of 0.95 for Enh. and 0.92 for Ed. Conclusion: NS-HGlio is accurate, repeatable, and generalizable. The output can be used for visualisation, documentation, treatment response monitoring, radiation planning, intra-operative targeting, and estimation of residual tumor volume (RTV), among others.
MRI-based classification of IDH mutation and 1p/19q codeletion status of gliomas using a 2.5D hybrid multi-task convolutional neural network
Chakrabarty, Satrajit
LaMontagne, Pamela
Shimony, Joshua
Marcus, Daniel S
Sotiras, Aristeidis
Neuro-Oncology Advances2023Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Background: IDH mutation and 1p/19q codeletion status are important prognostic markers for glioma that are currently determined using invasive procedures. Our goal was to develop artificial intelligence-based methods to noninvasively determine molecular alterations from MRI.
Methods: Pre-operative MRI scans of 2648 glioma patients were collected from Washington University School of Medicine (WUSM; n = 835) and publicly available Brain Tumor Segmentation (BraTS; n = 378), LGG 1p/19q (n = 159), Ivy Glioblastoma Atlas Project (Ivy GAP; n = 41), The Cancer Genome Atlas (TCGA; n = 461), and the Erasmus Glioma Database (EGD; n = 774) datasets. A 2.5D hybrid convolutional neural network was proposed to simultaneously localize glioma and classify its molecular status by leveraging MRI imaging features and prior knowledge features from clinical records and tumor location. The models were trained on 223 and 348 cases for IDH and 1p/19q tasks, respectively, and tested on one internal (TCGA) and two external (WUSM and EGD) test sets.
Results: For IDH, the best-performing model achieved areas under the receiver operating characteristic (AUROC) of 0.925, 0.874, 0.933 and areas under the precision-recall curves (AUPRC) of 0.899, 0.702, 0.853 on the internal, WUSM, and EGD test sets, respectively. For 1p/19q, the best model achieved AUROCs of 0.782, 0.754, 0.842, and AUPRCs of 0.588, 0.713, 0.782, on those three data-splits, respectively.
Conclusions: The high accuracy of the model on unseen data showcases its generalization capabilities and suggests its potential to perform "virtual biopsy" for tailoring treatment planning and overall clinical management of gliomas.
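Both reported evaluation metrics are available in scikit-learn; a minimal sketch on synthetic labels and scores, using average precision as the usual AUPRC estimate:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(8)
y_true = rng.integers(0, 2, size=400)                           # e.g. IDH status
y_score = np.clip(y_true * 0.4 + rng.random(400) * 0.6, 0, 1)   # informative scores
print("AUROC:", roc_auc_score(y_true, y_score))
print("AUPRC:", average_precision_score(y_true, y_score))
```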
Predicting methylation class from diffusely infiltrating adult gliomas using multi-modality MRI data
Alom, Zahangir
Tran, Quynh T.
Bag, Asim K.
Lucas, John T.
Orr, Brent A.
Neuro-Oncology Advances2023Journal Article, cited 0 times
Website
TCGA-LGG
TCGA-GBM
Radiomics
Radiogenomics
IDH mutation
glioma
Magnetic Resonance Imaging (MRI)
DNA methylation profiling
Brain tumor
Classification
supervised deep neural network
Background: Radiogenomic studies of adult-type diffuse gliomas have used magnetic resonance imaging (MRI) data to infer tumor attributes, including abnormalities such as IDH-mutation status and 1p19q deletion. This approach is effective but does not generalize to tumor types that lack highly recurrent alterations. Tumors have intrinsic DNA methylation patterns and can be grouped into stable methylation classes even when lacking recurrent mutations or copy number changes. The purpose of this study was to prove the principle that a tumor's DNA-methylation class could be used as a predictive feature for radiogenomic modeling. Methods: Using a custom DNA methylation-based classification model, molecular classes were assigned to diffuse gliomas in The Cancer Genome Atlas (TCGA) dataset. We then constructed and validated machine learning models to predict a tumor's methylation family or subclass from matched multisequence MRI data, using either extracted radiomic features or the MRI images directly. Results: For models using extracted radiomic features, we demonstrated top accuracies above 90% for predicting IDH-glioma and GBM-IDHwt methylation families, IDH-mutant tumor methylation subclasses, or GBM-IDHwt molecular subclasses. Classification models utilizing MRI images directly demonstrated average accuracies of 80.6% for predicting methylation families, compared to 87.2% and 89.0% for differentiating IDH-mutated astrocytomas from oligodendrogliomas and for glioblastoma molecular subclasses, respectively. Conclusions: These findings demonstrate that MRI-based machine learning models can effectively predict the methylation class of brain tumors. Given appropriate datasets, this approach could generalize to most brain tumor types, expanding the number and types of tumors that could be used to develop radiomic or radiogenomic models.
Radiological, clinical, and molecular analyses reveal distinct subtypes of butterfly glioblastomas affecting the prognosis
Shibahara, Ichiyo
Shigeeda, Ryota
Watanabe, Takashi
Orihashi, Yasushi
Tanihata, Yoko
Fujitani, Kazuko
Handa, Hajime
Hyakutake, Yuri
Toyoda, Mariko
Inukai, Madoka
Neuro-Oncology Advances2024Journal Article, cited 0 times
Website
TCGA-GBM
CPTAC-GBM
Ivy GAP
UPENN-GBM
Optimising delineation accuracy of tumours in PET for radiotherapy planning using blind deconvolution
Guvenis, A
Koc, A
Radiation Protection Dosimetry2015Journal Article, cited 3 times
Website
Algorithm Development
Computer Assisted Detection (CAD)
Segmentation
Positron Emission Tomography (PET)
Phantom
Positron emission tomography (PET) imaging has been proven to be useful in radiotherapy planning for the determination of the metabolically active regions of tumours. Delineation of tumours, however, is a difficult task in part due to high noise levels and the partial volume effects originating mainly from the low camera resolution. The goal of this work is to study the effect of blind deconvolution on tumour volume estimation accuracy for different computer-aided contouring methods. The blind deconvolution estimates the point spread function (PSF) of the imaging system in an iterative manner in a way that the likelihood of the given image being the convolution output is maximised. In this way, the PSF of the imaging system does not need to be known. Data were obtained from a NEMA NU-2 IQ-based phantom with a GE DSTE-16 PET/CT scanner. The artificial tumour diameters were 13, 17, 22, 28 and 37 mm with a target/background ratio of 4:1. The tumours were delineated before and after blind deconvolution. Student's two-tailed paired t-test showed a significant decrease in volume estimation error (p < 0.001) when blind deconvolution was used in conjunction with computer-aided delineation methods. A manual delineation confirmation demonstrated an improvement from 26% to 16% for the artificial tumour of size 37 mm, while an improvement from 57% to 15% was noted for the small tumour of 13 mm. Therefore, it can be concluded that blind deconvolution of reconstructed PET images may be used to increase tumour delineation accuracy.
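Blind deconvolution alternates between updating the image estimate and the PSF estimate, since neither is known exactly. The toy 1-D sketch below illustrates that alternating Richardson-Lucy structure only; it is a rough approximation (note the crude PSF-support cropping), not the algorithm or implementation used in the study.

```python
import numpy as np

rng = np.random.default_rng(9)
truth = np.zeros(128); truth[40:60] = 4.0                     # toy "tumour" profile
psf_true = np.exp(-np.linspace(-3, 3, 15) ** 2)
psf_true /= psf_true.sum()
observed = np.convolve(truth, psf_true, mode="same") + 0.01 * rng.random(128)

img = np.full(128, observed.mean())                           # flat initial image
psf = np.ones(15) / 15                                        # flat initial PSF guess
for _ in range(30):                                           # alternate RL updates
    ratio = observed / np.maximum(np.convolve(img, psf, mode="same"), 1e-12)
    img = img * np.convolve(ratio, psf[::-1], mode="same")    # image update
    ratio = observed / np.maximum(np.convolve(img, psf, mode="same"), 1e-12)
    corr = np.convolve(ratio, img[::-1], mode="same")         # correlate with image
    c = len(corr) // 2
    psf = psf * corr[c - 7: c + 8]                            # crop to PSF support
    psf = np.maximum(psf, 0)
    psf /= psf.sum()                                          # keep PSF normalised
```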
Artificial intelligence: opportunities in lung cancer
Zhang, Kai
Chen, Kezhong
2021Journal Article, cited 0 times
LIDC-IDRI
Lung-PET-CT-Dx
LungCT-Diagnosis
NSCLC-Radiomics-Genomics
SPIE-AAPM Lung CT Challenge
TCGA-LUAD
TCGA-LUSC
PURPOSE OF REVIEW: In this article, we focus on the role of artificial intelligence in the management of lung cancer. We summarized commonly used algorithms, current applications and challenges of artificial intelligence in lung cancer.
RECENT FINDINGS: Feature engineering for tabular data and computer vision for image data are commonly used algorithms in lung cancer research. Furthermore, the use of artificial intelligence in lung cancer has extended to the entire clinical pathway including screening, diagnosis and treatment. Lung cancer screening mainly focuses on two aspects: identifying high-risk populations and the automatic detection of lung nodules. Artificial intelligence diagnosis of lung cancer covers imaging diagnosis, pathological diagnosis and genetic diagnosis. The artificial intelligence clinical decision-support system is the main application of artificial intelligence in lung cancer treatment. Currently, the challenges of artificial intelligence applications in lung cancer mainly focus on the interpretability of artificial intelligence models and limited annotated datasets; and recent advances in explainable machine learning, transfer learning and federated learning might solve these problems.
SUMMARY: Artificial intelligence shows great potential in many aspects of the management of lung cancer, especially in screening and diagnosis. Future studies on interpretability and privacy are needed for further application of artificial intelligence in lung cancer.
A Radiogenomic multimodal and whole-transcriptome sequencing for preoperative prediction of axillary lymph node metastasis and drug therapeutic response in breast cancer: a retrospective, machine learning And international multi-cohort study
Lai, J.
Chen, Z.
Liu, J.
Zhu, C.
Huang, H.
Yi, Y.
Cai, G.
Liao, N.
Int J Surg2024Journal Article, cited 0 times
Website
TCGA-BRCA
Duke-Breast-Cancer-MRI
Radiogenomics
Support Vector Machine (SVM)
BACKGROUND: Axillary lymph node (ALN) status serves as a crucial prognostic indicator in breast cancer (BC). The aim of this study was to construct a radiogenomic multimodal model, based on machine learning (ML) and whole-transcriptome sequencing (WTS), to accurately evaluate preoperatively the risk of ALN metastasis (ALNM) and drug therapeutic response, and to avoid unnecessary axillary surgery in BC patients. METHODS: In this study, we conducted a retrospective analysis of 1078 BC patients from The Cancer Genome Atlas (TCGA), The Cancer Imaging Archive (TCIA), and the Foshan cohort. These patients were divided into the TCIA cohort (N=103), TCIA validation cohort (N=51), Duke cohort (N=138), Foshan cohort (N=106), and TCGA cohort (N=680). Radiological features were extracted from BC radiological images, and differential gene expression was calibrated using WTS technology. A support vector machine (SVM) model was employed to screen radiological and genetic features, and a multimodal model was established based on radiogenomic and clinicopathological features to predict ALNM and stratify risk. The accuracy of the model predictions was assessed using the area under the curve (AUC), and the clinical benefit was measured using decision curve analysis (DCA). Risk stratification analysis of BC patients was performed by gene set enrichment analysis (GSEA), differential comparison of immune checkpoint gene expression, and drug sensitivity testing. RESULTS: For the prediction of ALNM, the rad-score was able to significantly differentiate between ALN- and ALN+ patients in both the Duke and Foshan cohorts (P<0.05). Similarly, the gene-score was able to significantly differentiate between ALN- and ALN+ patients in the TCGA cohort (P<0.05). The radiogenomic multimodal nomogram demonstrated satisfactory performance in the TCIA cohort (AUC 0.82, 95% CI: 0.74-0.91) and TCIA validation cohort (AUC 0.77, 95% CI: 0.63-0.91). In the risk sub-stratification analysis, there were significant differences in gene pathway enrichment between the high- and low-risk groups (P<0.05). Additionally, different risk groups may exhibit varying treatment responses to chemotherapy (including doxorubicin, methotrexate and lapatinib) (P<0.05). CONCLUSION: Overall, the radiogenomic multimodal model employs multimodal data, including radiological images, genetic and clinicopathological typing. The radiogenomic multimodal nomogram can precisely predict ALNM and drug therapeutic response in BC patients.
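The abstract names an SVM as the screening step for radiological and genetic features. A hedged sketch of one common way to do this with scikit-learn (recursive feature elimination driven by a linear SVM, on synthetic stand-in data, since the study's real features are not reproduced here):

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic stand-ins: rows are patients, columns are radiomic + gene
# features, y is ALN status (0 = ALN-, 1 = ALN+).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
y = rng.integers(0, 2, size=100)

# Recursive feature elimination using the linear SVM's coefficients.
selector = RFE(LinearSVC(C=1.0, dual=False, max_iter=5000),
               n_features_to_select=10, step=5)
selector.fit(StandardScaler().fit_transform(X), y)
print(np.flatnonzero(selector.support_))  # indices of retained features
```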
Deep learning-based multimodel prediction for disease-free survival status of patients with clear cell renal cell carcinoma after surgery: a multicenter cohort study
Chen, S.
Gao, F.
Guo, T.
Jiang, L.
Zhang, N.
Wang, X.
Zheng, J.
Int J Surg2024Journal Article, cited 0 times
TCGA-KIRC
CPTAC-CCRCC
Multimodal Imaging
Pathomics
Whole Slide Imaging (WSI)
Cell segmentation
Computed Tomography (CT)
Predictive model
BACKGROUND: Although separate analysis of individual factors can somewhat improve prognostic performance, integration of multimodal information into a single signature is necessary to stratify patients with clear cell renal cell carcinoma (ccRCC) for adjuvant therapy after surgery. METHODS: A total of 414 patients with whole slide images, computed tomography images, and clinical data from three patient cohorts were retrospectively analyzed. The authors applied deep learning and machine learning algorithms to construct three single-modality prediction models for disease-free survival of ccRCC based on whole slide images, cell segmentation, and computed tomography images, respectively. A multimodel prediction signature (MMPS) for disease-free survival was further developed by combining the three single-modality prediction models and the tumor stage/grade system. Prognostic performance of the model was also verified in two independent validation cohorts. RESULTS: The single-modality prediction models performed well in predicting the disease-free survival status of ccRCC. The MMPS achieved higher area under the curve values of 0.742, 0.917, and 0.900 in three independent patient cohorts, respectively. MMPS could distinguish patients with worse disease-free survival, with HRs of 12.90 (95% CI: 2.443-68.120, P<0.0001), 11.10 (95% CI: 5.467-22.520, P<0.0001), and 8.27 (95% CI: 1.482-46.130, P<0.0001) in the three patient cohorts. In addition, MMPS outperformed the single-modality prediction models and current clinical prognostic factors, and could also complement current risk stratification for adjuvant therapy of ccRCC. CONCLUSION: Our novel multimodel prediction analysis for disease-free survival exhibited significant improvements in prognostic prediction for patients with ccRCC. After further validation in multiple centers and regions, the multimodal system could be a practical tool for clinicians in the treatment of ccRCC patients.
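As a rough illustration of how three single-modality scores plus stage/grade might be fused into a single survival signature, here is a hedged sketch using a Cox model from the lifelines package; all column names and data are invented stand-ins, not the authors' pipeline:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "wsi_score": rng.normal(size=n),       # whole-slide-image model output
    "cell_score": rng.normal(size=n),      # cell-segmentation model output
    "ct_score": rng.normal(size=n),        # CT model output
    "stage_grade": rng.integers(1, 5, n),  # combined stage/grade covariate
    "dfs_months": rng.exponential(36, n),  # disease-free survival time
    "event": rng.integers(0, 2, n),        # 1 = recurrence/progression
})

cph = CoxPHFitter()
cph.fit(df, duration_col="dfs_months", event_col="event")
df["MMPS"] = cph.predict_partial_hazard(df)  # fused multimodel signature
```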
MRI-based radiomics approach for the prediction of recurrence-free survival in triple-negative breast cancer after breast-conserving surgery or mastectomy
Zhao, J.
Zhang, Q.
Liu, M.
Zhao, X.
Medicine (Baltimore)2023Journal Article, cited 0 times
Website
Breast-MRI-NACT-Pilot
Humans
*Mastectomy
Segmental
*Triple Negative Breast Neoplasms/diagnostic imaging/surgery
Mastectomy
Magnetic Resonance Imaging/methods
Nomograms
Retrospective Studies
To explore the value of a radiomics signature and develop a nomogram combined with a radiomics signature and clinical factors for predicting recurrence-free survival in triple-negative breast cancer patients. We enrolled 151 patients from the cancer imaging archive who underwent preoperative contrast-enhanced magnetic resonance imaging. They were assigned to training, validation and external validation cohorts. Image features with coefficients not equal to zero in the 10-fold cross-validation were selected to generate a radiomics signature. Based on the optimal cutoff value of the radiomics signature determined by maximally selected log-rank statistics, patients were stratified into high- and low-risk groups in the training and validation cohorts. Kaplan-Meier survival analysis was performed for both groups. Kaplan-Meier survival distributions in these groups were compared using log-rank tests. Univariate and multivariate Cox regression analyses were used to construct clinical and combined models. Concordance index was used to assess the predictive performance of the 3 models. Calibration of the combined model was assessed using calibration curves. Four image features were selected to generate the radiomics signature. The Kaplan-Meier survival distributions of patients in the 2 groups were significantly different in the training (P < .001) and validation cohorts (P = .001). The C-indices of the radiomics model, clinical model, and combined model in the training and validation cohorts were 0.772, 0.700, 0.878, and 0.744, 0.574, 0.777, respectively. The C-indices of the radiomics model, clinical model, and combined model in the external validation cohort were 0.778, 0.733, 0.822, respectively. The calibration curves of the combined model showed good calibration. The radiomics signature can predict recurrence-free survival of patients with triple-negative breast cancer and improve the predictive performance of the clinical model.
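The risk stratification described above (dichotomising the radiomics signature at a cutoff, then comparing Kaplan-Meier curves with a log-rank test) can be sketched with lifelines; the median cutoff below is only a stand-in for the paper's maximally selected log-rank statistic, and the data are synthetic:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Synthetic stand-ins: one radiomics signature value per patient, plus
# recurrence-free survival time (months) and event indicator.
rng = np.random.default_rng(2)
rad_score = rng.normal(size=151)
time = rng.exponential(48, size=151)
event = rng.integers(0, 2, size=151)

high = rad_score > np.median(rad_score)  # stand-in cutoff
res = logrank_test(time[high], time[~high],
                   event_observed_A=event[high],
                   event_observed_B=event[~high])
print(f"log-rank p = {res.p_value:.4f}")

km = KaplanMeierFitter()
ax = km.fit(time[high], event[high], label="high risk").plot_survival_function()
km.fit(time[~high], event[~high], label="low risk").plot_survival_function(ax=ax)
```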
Future artificial intelligence tools and perspectives in medicine
Chaddad, Ahmad
Katib, Yousef
Hassan, Lama
2021Journal Article, cited 0 times
QIN PROSTATE
PURPOSE OF REVIEW: Artificial intelligence has become popular in medical applications, specifically as a clinical support tool for computer-aided diagnosis. These tools are typically employed on medical data (i.e., images, molecular data, clinical variables, etc.) and use statistical and machine-learning methods to measure model performance. In this review, we summarize and discuss the most recent radiomic pipelines used for clinical analysis.
RECENT FINDINGS: Currently, only limited aspects of cancer management benefit from artificial intelligence, mostly computer-aided diagnosis that avoids biopsy analysis, which presents additional risks and costs. Most artificial intelligence tools are based on imaging features, known as radiomic analysis, which can be refined into predictive models from noninvasively acquired imaging data. This review explores the progress of artificial intelligence-based radiomic tools for clinical applications, with a brief description of the necessary technical steps. We also describe new radiomic approaches based on deep-learning techniques, showing how new radiomic models (deep radiomic analysis) can benefit from deep convolutional neural networks and be applied to limited data sets.
SUMMARY: Further investigations are recommended to involve deep learning in radiomic models, with additional validation steps on various cancer types.
Automatic Measurement of the Total Visceral Adipose Tissue From Computed Tomography Images by Using a Multi-Atlas Segmentation Method
BACKGROUND: The visceral adipose tissue (VAT) volume is a predictive and/or prognostic factor for many cancers. The objective of our study was to develop an automatic measurement of the whole VAT volume using a multi-atlas segmentation (MAS) method from computed tomography.
METHODS: A total of 31 sets of whole-body computed tomography volume data were used. The reference VAT volume was defined on the basis of manual segmentation (VAT_MANUAL). We developed an algorithm that automatically measured the VAT volumes using a MAS based on a nonrigid volume registration algorithm coupled with a selective and iterative method for performance level estimation (SIMPLE), called VAT_MAS-SIMPLE. The results were evaluated using intraclass correlation coefficients and Dice similarity coefficients.
RESULTS: The intraclass correlation coefficient of VAT_MAS-SIMPLE was excellent, at 0.976 (confidence interval, 0.943-0.989) (P < 0.001). The Dice similarity coefficient of VAT_MAS-SIMPLE was also good, at 0.905 (SD, 0.076).
CONCLUSIONS: This newly developed algorithm based on a MAS can accurately measure the whole abdominopelvic VAT.
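A minimal sketch of the label-fusion side of such a MAS pipeline (majority voting plus a SIMPLE-style iterative rejection of poorly agreeing atlases) is shown below; the nonrigid registration step is omitted, and the keep fraction and iteration count are assumptions:

```python
import numpy as np

def majority_vote(label_volumes):
    """Fuse binary VAT masks from already-registered atlases by voting."""
    stack = np.stack(label_volumes, axis=0)
    return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

def simple_fusion(label_volumes, n_iter=3, keep_frac=0.7):
    """SIMPLE-style fusion: iteratively discard the atlases that agree
    least with the current consensus, then re-vote."""
    labels = list(label_volumes)
    for _ in range(n_iter):
        consensus = majority_vote(labels)
        scores = [dice(lab, consensus) for lab in labels]
        order = np.argsort(scores)[::-1]
        keep = max(2, int(len(labels) * keep_frac))
        labels = [labels[i] for i in order[:keep]]
    return majority_vote(labels)

# toy usage: nine random "propagated atlas masks" on a small volume
atlases = [np.random.rand(4, 64, 64) > 0.5 for _ in range(9)]
fused = simple_fusion(atlases)
```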
Reduced Chest Computed Tomography Scan Length for Patients Positive for Coronavirus Disease 2019: Dose Reduction and Impact on Diagnostic Utility
Principi, S.
O'Connor, S.
Frank, L.
Schmidt, T. G.
J Comput Assist Tomogr2022Journal Article, cited 0 times
Website
MIDRC-RICORD-1A
LCTSC
*covid-19
*Drug Tapering
Humans
Radiation Dosage
Thorax
Tomography
X-Ray Computed/methods
METHODS: This study used the Personalized Rapid Estimation of Dose in CT (PREDICT) tool to estimate patient-specific organ doses from CT image data. The PREDICT is a research tool that combines a linear Boltzmann transport equation solver for radiation dose map generation with deep learning algorithms for organ contouring. Computed tomography images from 74 subjects in the Medical Imaging Data Resource Center-RSNA International COVID-19 Open Radiology Database data set (chest CT of adult patients positive for COVID-19), which included expert annotations including "infectious opacities," were analyzed. First, the full z-scan length of the CT image data set was evaluated. Next, the z-scan length was reduced from the left hemidiaphragm to the top of the aortic arch. Generic dose reduction based on dose length product (DLP) and patient-specific organ dose reductions were calculated. The percentage of infectious opacities excluded from the reduced z-scan length was used to quantify the effect on diagnostic utility. RESULTS: Generic dose reduction, based on DLP, was 69%. The organ dose reduction ranged from approximately 18% (breasts) to approximately 64% (bone surface and bone marrow). On average, 12.4% of the infectious opacities were not included in the reduced z-coverage, per patient, of which 5.1% were above the top of the arch and 7.5% below the left hemidiaphragm. CONCLUSIONS: Limiting z-scan length of chest CTs reduced radiation dose without significantly compromising diagnostic utility in COVID-19 patients. The PREDICT demonstrated that patient-specific organ dose reductions varied from generic dose reduction based on DLP.
Computer-aided nodule detection and volumetry to reduce variability between radiologists in the interpretation of lung nodules at low-dose screening CT
Jeon, Kyung Nyeo
Goo, Jin Mo
Lee, Chang Hyun
Lee, Youkyung
Choo, Ji Yung
Lee, Nyoung Keun
Shim, Mi-Suk
Lee, In Sun
Kim, Kwang Gi
Gierada, David S
Investigative radiology2012Journal Article, cited 51 times
Website
NLST
lung
LDCT
A Novel Deep Learning Based Computer-Aided Diagnosis System Improves the Accuracy and Efficiency of Radiologists in Reading Biparametric Magnetic Resonance Images of the Prostate
Winkel, David J.
Tong, Angela
Lou, Bin
Kamen, Ali
Comaniciu, Dorin
Disselhorst, Jonathan A.
Rodríguez-Ruiz, Alejandro
Huisman, Henkjan
Szolar, Dieter
Shabunin, Ivan
Choi, Moon Hyung
Xing, Pengyi
Penzkofer, Tobias
Grimm, Robert
von Busch, Heinrich
Boll, Daniel T.
Investigative radiology2021Journal Article, cited 0 times
PROSTATEx
OBJECTIVE: The aim of this study was to evaluate the effect of a deep learning based computer-aided diagnosis (DL-CAD) system on radiologists' interpretation accuracy and efficiency in reading biparametric prostate magnetic resonance imaging scans.
MATERIALS AND METHODS: We selected 100 consecutive prostate magnetic resonance imaging cases from a publicly available data set (PROSTATEx Challenge) with and without histopathologically confirmed prostate cancer. Seven board-certified radiologists were tasked to read each case twice in 2 reading blocks (with and without the assistance of a DL-CAD), with a separation between the 2 reading sessions of at least 2 weeks. Reading tasks were to localize and classify lesions according to Prostate Imaging Reporting and Data System (PI-RADS) v2.0 and to assign a radiologist's level of suspicion score (scale from 1-5 in 0.5 increments; 1, benign; 5, malignant). Ground truth was established by consensus readings of 3 experienced radiologists. The detection performance (receiver operating characteristic curves), variability (Fleiss κ), and average reading time without DL-CAD assistance were evaluated.
RESULTS: The average accuracy of radiologists in terms of area under the curve in detecting clinically significant cases (PI-RADS ≥4) was 0.84 (95% confidence interval [CI], 0.79-0.89), whereas the same using DL-CAD was 0.88 (95% CI, 0.83-0.94) with an improvement of 4.4% (95% CI, 1.1%-7.7%; P = 0.010). Interreader concordance (in terms of Fleiss κ) increased from 0.22 to 0.36 (P = 0.003). Accuracy of radiologists in detecting cases with PI-RADS ≥3 was improved by 2.9% (P = 0.10). The median reading time in the unaided/aided scenario was reduced by 21% from 103 to 81 seconds (P < 0.001).
CONCLUSIONS: Using a DL-CAD system increased the diagnostic accuracy in detecting highly suspicious prostate lesions and reduced both the interreader variability and the reading time.
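Interreader concordance in terms of Fleiss κ, as reported above, can be computed with statsmodels; the rating matrix below is synthetic:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# ratings: one row per case, one column per reader; entries are the
# assigned categories (synthetic PI-RADS 1-5 scores for 7 readers here).
rng = np.random.default_rng(3)
ratings = rng.integers(1, 6, size=(100, 7))

table, _ = aggregate_raters(ratings)  # case x category count table
print(f"Fleiss kappa = {fleiss_kappa(table):.3f}")
```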
Reader variability in identifying pulmonary nodules on chest radiographs from the national lung screening trial
Singh, Satinder
Gierada, David S
Pinsky, Paul
Sanders, Colleen
Fineberg, Naomi
Sun, Yanhui
Lynch, David
Nath, Hrudaya
Journal of thoracic imaging2012Journal Article, cited 4 times
Website
NLST
lung
LDCT
Cancer Screening
Performance of Deep Learning Model in Detecting Operable Lung Cancer With Chest Radiographs
Cha, Min Jae
Chung, Myung Jin
Lee, Jeong Hyun
Lee, Kyung Soo
Journal of thoracic imaging2019Journal Article, cited 0 times
LIDC-IDRI
Deep Learning
Lung
PURPOSE: The aim of this study was to evaluate the diagnostic performance of a trained deep convolutional neural network (DCNN) model for detecting operable lung cancer with chest radiographs (CXRs).
MATERIALS AND METHODS: The institutional review board approved this study. A deep learning model (DLM) based on DCNN was trained with 17,211 CXRs (5700 CT-confirmed lung nodules in 3500 CXRs and 13,711 normal CXRs), finally augmented to 600,000 images. For validation, a trained DLM was tested with 1483 CXRs with surgically resected lung cancer, marked and scored by 2 radiologists. Furthermore, diagnostic performances of DLM and 6 human observers were compared with 500 cases (200 visible T1 lung cancer on CXR and 300 normal CXRs) and analyzed using free-response receiver-operating characteristics curve (FROC) analysis.
RESULTS: The overall detection rate of DLM for resected lung cancers (27.2±14.6 mm) was a sensitivity of 76.8% (1139/1483) with a false positive per image (FPPI) of 0.3 and area under the FROC curve (AUC) of 0.732. In the comparison with human readers, DLM demonstrated a sensitivity of 86.5% at 0.1 FPPI and a sensitivity of 92% at 0.3 FPPI with an AUC of 0.899 at an FPPI range of 0.03 to 0.44 for detecting visible T1 lung cancers, which were superior to the average of 6 human readers [mean sensitivity: 78% (range, 71.6% to 82.6%) at an FPPI of 0.1 and 85% (range, 80.2% to 89.2%) at an FPPI of 0.3; AUC of 0.819 (range, 0.754 to 0.862) at an FPPI of 0.03 to 0.44].
CONCLUSIONS: A DLM has high diagnostic performance in detecting operable lung cancer with CXR, demonstrating a potential of playing a pivotal role for lung cancer screening.
A hybrid learning method for distinguishing lung adenocarcinoma and squamous cell carcinoma
Swain, Anil Kumar
Swetapadma, Aleena
Rout, Jitendra Kumar
Balabantaray, Bunil Kumar
2023Journal Article, cited 0 times
Lung-PET-CT-Dx
Purpose: The objective of the proposed work is to identify the most commonly occurring non-small cell carcinoma types, such as adenocarcinoma and squamous cell carcinoma, within the human population. Another objective of the work is to reduce the false positive rate during the classification. Design/methodology/approach: In this work, a hybrid method using convolutional neural networks (CNNs), extreme gradient boosting (XGBoost) and long short-term memory networks (LSTMs) has been proposed to distinguish between lung adenocarcinoma and squamous cell carcinoma. To extract features from non-small cell lung carcinoma images, a three-layer convolution and three-layer max-pooling-based CNN is used. A few important features have been selected from the extracted features using the XGBoost algorithm as the optimal feature set. Finally, LSTM has been used for the classification of carcinoma types. The accuracy of the proposed method is 99.57 per cent, and the false positive rate is 0.427 per cent. Findings: The proposed CNN-XGBoost-LSTM hybrid method has significantly improved the results in distinguishing between adenocarcinoma and squamous cell carcinoma. The importance of the method can be outlined as follows: it has a very low false positive rate of 0.427 per cent; it has very high accuracy, i.e. 99.57 per cent; CNN-based features provide accurate results in classifying lung carcinoma; and it has the potential to serve as an assisting aid for doctors. Practical implications: It can be used by doctors as a secondary tool for the analysis of non-small cell lung cancers. Social implications: It can help rural doctors by referring patients to specialized doctors for further analysis of lung cancer. Originality/value: In this work, a hybrid method using CNN, XGBoost and LSTM has been proposed to distinguish between lung adenocarcinoma and squamous cell carcinoma. A three-layer convolution and three-layer max-pooling-based CNN is used to extract features from the non-small cell lung carcinoma images. A few important features have been selected from the extracted features using the XGBoost algorithm as the optimal feature set. Finally, LSTM has been used for the classification of carcinoma types.
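A hedged sketch of the three-stage pipeline described above (CNN feature extraction, then XGBoost feature selection, then an LSTM classifier), with invented shapes and hyperparameters, could look like this in Keras:

```python
import numpy as np
import tensorflow as tf
from xgboost import XGBClassifier

# Hypothetical data: 64x64 grayscale patches, binary ADC-vs-SCC labels.
X_img = np.random.rand(256, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, 256)

# 1) Three-conv / three-maxpool CNN feature extractor, as in the abstract.
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
])
feats = cnn.predict(X_img, verbose=0)

# 2) XGBoost importances as the feature-selection step.
xgb = XGBClassifier(n_estimators=100).fit(feats, y)
top = np.argsort(xgb.feature_importances_)[::-1][:32]
sel = feats[:, top]

# 3) LSTM classifier over the selected features, viewed as a sequence.
lstm = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 1)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
lstm.compile("adam", "binary_crossentropy", metrics=["accuracy"])
lstm.fit(sel[..., None], y, epochs=3, verbose=0)
```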
Ensemble Learning of Multiple-View 3D-CNNs Model for Micro-Nodules Identification in CT Images
Monkam, Patrice
Qi, Shouliang
Xu, Mingjie
Li, Haoming
Han, Fangfang
Teng, Yueyang
Qian, Wei
IEEE Access2018Journal Article, cited 0 times
LIDC-IDRI
Numerous automatic systems for pulmonary nodule detection have been proposed, but very few have been devoted to micro-nodules (diameter < 3 mm), even though they are regarded as the earliest manifestations of lung cancer. Moreover, most available systems present a high false positive rate resulting from their incapability to discriminate between micro-nodules and non-nodules. Thus, this paper proposes a system to differentiate between micro-nodules and non-nodules in computed tomography (CT) images by ensemble learning of multiple-view 3-D convolutional neural networks (3D-CNNs). A total of 34,494 volumetric image samples, including 13,179 micro-nodules and 21,315 non-nodules, were acquired from the 1010 CT scans of the LIDC/IDRI database. The pulmonary nodule candidates are cropped with five different sizes: 20×20×3, 16×16×3, 12×12×3, 8×8×3, and 4×4×3. Then, five distinct 3D-CNN models are built, each implemented on one size of the nodule candidates. An extreme learning machine (ELM) network is utilized to integrate the five 3D-CNN outputs, yielding the final classification results. The performance of the proposed system is assessed in terms of accuracy, area under the curve (AUC), F-score, and sensitivity. It is found that the proposed system yields an accuracy, AUC, F-score, and sensitivity of 97.35%, 0.98, 96.42%, and 96.57%, respectively. These performances are highly superior to those of 2D-CNNs, a single 3D-CNN model, as well as those of the state-of-the-art methods implemented on the same dataset. For the ensemble method, ELM achieves better performance than majority voting, averaging, the AND operator, and an autoencoder. The results demonstrate that developing an automatic system for discriminating between micro-nodules and non-nodules in CT images is feasible, which extends lung cancer studies to micro-nodules. The combination of multiple-view 3D-CNNs and ensemble learning contributes to excellent identification performance, and this strategy may help develop other reliable clinical-decision support systems.
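The ELM fusion step, which the authors found superior to majority voting or averaging, is simple enough to sketch directly; the implementation below is a generic ELM on synthetic stand-in data, not the paper's code:

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: a random hidden layer followed by
    output weights solved in closed form via the pseudo-inverse."""
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ y  # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# cnn_probs: per-candidate micro-nodule probabilities from the five
# 3D-CNNs, one column per crop size (20x20x3 ... 4x4x3); synthetic here.
cnn_probs = np.random.rand(1000, 5)
labels = np.random.randint(0, 2, 1000).astype(float)
fused = ELM().fit(cnn_probs, labels).predict(cnn_probs) > 0.5
```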
Brain MRI Image Classification for Cancer Detection Using Deep Wavelet Autoencoder-Based Deep Neural Network
Mallick, Pradeep Kumar
Ryu, Seuc Ho
Satapathy, Sandeep Kumar
Mishra, Shruti
Nguyen, Gia Nhu
Tiwari, Prayag
IEEE Access2019Journal Article, cited 0 times
RIDER NEURO MRI
Machine Learning
Rapid growth in the area of brain imaging technologies has played a pivotal role in analyzing and opening new views of brain anatomy and function. Image processing has widespread usage in the area of medical science for improving the early detection and treatment phases. Deep neural networks (DNN), to date, have demonstrated strong performance in classification and segmentation tasks. With this in mind, this paper proposes a technique for image compression using a deep wavelet autoencoder (DWA), which blends the basic feature reduction property of the autoencoder with the image decomposition property of the wavelet transform. The combination of both substantially reduces the size of the feature set for the subsequent classification task using a DNN. A brain image dataset was taken and the proposed DWA-DNN image classifier was evaluated. The performance of the DWA-DNN classifier was compared with other existing classifiers such as autoencoder-DNN or plain DNN, and it was noted that the proposed method outperforms the existing methods.
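The wavelet-decomposition half of a DWA can be illustrated with PyWavelets; feeding the approximation band to an autoencoder (omitted here) is the feature-reduction step the abstract describes:

```python
import numpy as np
import pywt

# One DWA encoding step: a 2-D discrete wavelet transform decomposes the
# image; the low-frequency approximation (cA) is what a subsequent
# autoencoder would compress further.
img = np.random.rand(128, 128)
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
print(cA.shape)  # (64, 64): quarter-size approximation band
```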
Medical Image Retrieval Based on Convolutional Neural Network and Supervised Hashing
Cai, Yiheng
Li, Yuanyuan
Qiu, Changyan
Ma, Jie
Gao, Xurong
IEEE Access2019Journal Article, cited 0 times
Website
NSCLC-Radiomics
Pancreas-CT
RIDER NEURO MRI
TCGA-BLCA
RIDER Lung CT
Content based image retrieval (CBIR)
Convolutional Neural Network (CNN)
Deep learning
machine learning
In recent years, with extensive application in image retrieval and other tasks, a convolutional neural network (CNN) has achieved outstanding performance. In this paper, a new content-based medical image retrieval (CBMIR) framework using CNN and hash coding is proposed. The new framework adopts a Siamese network in which pairs of images are used as inputs, and a model is learned to make images belonging to the same class have similar features by using weight sharing and a contrastive loss function. In each branch of the network, CNN is adapted to extract features, followed by hash mapping, which is used to reduce the dimensionality of feature vectors. In the training process, a new loss function is designed to make the feature vectors more distinguishable, and a regularization term is added to encourage the real value outputs to approximate the desired binary values. In the retrieval phase, the compact binary hash code of the query image is achieved from the trained network and is subsequently compared with the hash codes of the database images. We experimented on two medical image datasets: the cancer imaging archive-computed tomography (TCIA-CT) and the vision and image analysis group/international early lung cancer action program (VIA/I-ELCAP). The results indicate that our method is superior to existing hash methods and CNN methods. Compared with the traditional hashing method, feature extraction based on CNN has advantages. The proposed algorithm combining a Siamese network with the hash method is superior to the classical CNN-based methods. The application of a new loss function can effectively improve retrieval accuracy.
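A hedged PyTorch sketch of the loss described above (a contrastive term over the hash-layer outputs of the two Siamese branches, plus a regularizer pushing real-valued codes toward binary values) is given below; the exact form of the authors' loss is an assumption:

```python
import torch
import torch.nn.functional as F

def siamese_hash_loss(h1, h2, same, margin=2.0, reg_weight=0.1):
    """Contrastive loss over hash-layer outputs h1, h2 of the two branches
    (`same` is 1.0 for same-class pairs, 0.0 otherwise), plus a term that
    encourages the real-valued codes to approximate +/-1 so binarization
    loses little information."""
    d = F.pairwise_distance(h1, h2)
    contrastive = same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)
    reg = (h1.abs() - 1).pow(2).mean() + (h2.abs() - 1).pow(2).mean()
    return contrastive.mean() + reg_weight * reg

def to_hash(h):
    """Binarize codes for retrieval; compare with Hamming distance."""
    return (h > 0).to(torch.uint8)
```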
Multi-Classification of Brain Tumor Images Using Deep Neural Network
Sultan, Hossam H.
Salem, Nancy M.
Al-Atabany, Walid
IEEE Access2019Journal Article, cited 0 times
REMBRANDT
Machine Learning
Brain tumor classification is a crucial task to evaluate tumors and make a treatment decision according to their classes. There are many imaging techniques used to detect brain tumors. However, MRI is commonly used due to its superior image quality and the fact that it relies on no ionizing radiation. Deep learning (DL) is a subfield of machine learning and recently has shown remarkable performance, especially in classification and segmentation problems. In this paper, a DL model based on a convolutional neural network is proposed to classify different brain tumor types using two publicly available datasets. The first classifies tumors into meningioma, glioma, and pituitary tumor. The second differentiates between the three glioma grades (Grade II, Grade III, and Grade IV). The datasets include 233 and 73 patients with a total of 3064 and 516 images on T1-weighted contrast-enhanced images for the first and second datasets, respectively. The proposed network structure achieves significant performance, with best overall accuracies of 96.13% and 98.7%, respectively, for the two studies. The results indicate the ability of the model for brain tumor multi-classification purposes.
Brain Tumor Segmentation Using Multi-Cascaded Convolutional Neural Networks and Conditional Random Field
Hu, Kai
Gan, Qinghai
Zhang, Yuan
Deng, Shuhua
Xiao, Fen
Huang, Wei
Cao, Chunhong
Gao, Xieping
IEEE Access2019Journal Article, cited 2 times
Website
BraTS
TCGA-GBM
TCGA-LGG
Convolutional Neural Network (CNN)
Magnetic Resonance Imaging (MRI)
Segmentation
Accurate segmentation of brain tumor is an indispensable component for cancer diagnosis and treatment. In this paper, we propose a novel brain tumor segmentation method based on multicascaded convolutional neural network (MCCNN) and fully connected conditional random fields (CRFs). The segmentation process mainly includes the following two steps. First, we design a multi-cascaded network architecture by combining the intermediate results of several connected components to take the local dependencies of labels into account and make use of multi-scale features for the coarse segmentation. Second, we apply CRFs to consider the spatial contextual information and eliminate some spurious outputs for the fine segmentation. In addition, we use image patches obtained from axial, coronal, and sagittal views to respectively train three segmentation models, and then combine them to obtain the final segmentation result. The validity of the proposed method is evaluated on three publicly available databases. The experimental results show that our method achieves competitive performance compared with the state-of-the-art approaches.
Enlarged Training Dataset by Pairwise GANs for Molecular-Based Brain Tumor Classification
Ge, Chenjie
Gu, Irene Yu-Hua
Jakola, Asgeir Store
Yang, Jie
IEEE Access2020Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
This paper addresses issues of brain tumor subtype classification using Magnetic Resonance Images (MRIs) from different scanner modalities such as T1-weighted, T1-weighted contrast-enhanced, T2-weighted and FLAIR images. Currently most available glioma datasets are relatively moderate in size, and often accompanied by incomplete MRIs in different modalities. To tackle the commonly encountered problems of insufficiently large brain tumor datasets and incomplete image modalities for deep learning, we propose to add augmented brain MR images to enlarge the training dataset by employing a pairwise Generative Adversarial Network (GAN) model. The pairwise GAN is able to generate synthetic MRIs across different modalities. To achieve the patient-level diagnostic result, we propose a post-processing strategy that combines the slice-level glioma subtype classification results by majority voting. A two-stage coarse-to-fine training strategy is proposed to learn the glioma features using GAN-augmented MRIs followed by real MRIs. To evaluate the effectiveness of the proposed scheme, experiments have been conducted on a brain tumor dataset for classifying glioma molecular subtypes: isocitrate dehydrogenase 1 (IDH1) mutation and IDH1 wild-type. Our results on the dataset have shown good performance (with test accuracy 88.82%). Comparisons with several state-of-the-art methods are also included.
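The patient-level post-processing step (majority voting over slice-level predictions) is straightforward to sketch:

```python
import numpy as np

def patient_level_vote(slice_preds, patient_ids):
    """Majority vote of slice-level subtype predictions per patient."""
    return {pid: np.bincount(slice_preds[patient_ids == pid]).argmax()
            for pid in np.unique(patient_ids)}

# toy example: 0 = IDH1 mutation, 1 = IDH1 wild-type
slice_preds = np.array([0, 1, 1, 1, 0, 0, 0])
patient_ids = np.array([7, 7, 7, 7, 9, 9, 9])
print(patient_level_vote(slice_preds, patient_ids))  # patient 7 -> 1, patient 9 -> 0
```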
Medical Image Classification Using a Light-Weighted Hybrid Neural Network Based on PCANet and DenseNet
Huang, Zhiwen
Zhu, Xingxing
Ding, Mingyue
Zhang, Xuming
IEEE Access2020Journal Article, cited 23 times
Website
Osteosarcoma-Tumor-Assessment
Classification
Deep Learning
Histopathology imaging features
CBIS-DDSM
Medical image classification plays an important role in disease diagnosis since it can provide important reference information for doctors. Supervised convolutional neural networks (CNNs) such as DenseNet provide versatile and effective methods for medical image classification tasks, but they require large amounts of labeled data and involve a complex and time-consuming training process. Unsupervised CNNs such as the principal component analysis network (PCANet) need no labels for training but cannot provide desirable classification accuracy. To realize accurate medical image classification in the case of a small training dataset, we have proposed a light-weighted hybrid neural network which consists of a modified PCANet cascaded with a simplified DenseNet. The modified PCANet has two stages, in which the network produces effective feature maps at each stage by convolving inputs with various learned kernels. The following simplified DenseNet with a small number of weights takes all feature maps produced by the PCANet as inputs and employs dense shortcut connections to realize accurate medical image classification. To appreciate the performance of the proposed method, experiments have been done on mammography and osteosarcoma histology images. Experimental results show that the proposed hybrid neural network is easy to train and outperforms such popular CNN models as PCANet, ResNet and DenseNet in terms of classification accuracy, sensitivity and specificity.
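The PCANet half of the hybrid can be sketched in a few lines: PCA over local image patches yields the convolution kernels for one stage. This is a generic illustration of the idea, not the authors' implementation; patch size and filter count are assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from scipy.signal import convolve2d

def pca_filters(images, k=7, n_filters=8):
    """Learn one PCANet stage: PCA over all k x k patches; the leading
    principal components become the convolution kernels."""
    patches = []
    for img in images:
        p = sliding_window_view(img, (k, k)).reshape(-1, k * k)
        patches.append(p - p.mean(axis=1, keepdims=True))  # remove patch mean
    P = np.concatenate(patches, axis=0)
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    return Vt[:n_filters].reshape(n_filters, k, k)

def pcanet_stage(images, filters):
    """Produce feature maps by convolving inputs with the learned kernels."""
    return [[convolve2d(img, f, mode="same") for f in filters]
            for img in images]

imgs = [np.random.rand(32, 32) for _ in range(10)]
maps = pcanet_stage(imgs, pca_filters(imgs))
```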
A Novel Approach to Improving Brain Image Classification Using Mutual Information-Accelerated Singular Value Decomposition
Al-Saffar, Zahraa A
Yildirim, Tülay
IEEE Access2020Journal Article, cited 0 times
Website
REMBRANDT
machine learning
Mining Domain Knowledge: Improved Framework Towards Automatically Standardizing Anatomical Structure Nomenclature in Radiotherapy
Yang, Qiming
Chao, Hongyang
Nguyen, Dan
Jiang, Steve
IEEE Access2020Journal Article, cited 0 times
Head-Neck-PET-CT
The automatic standardization of nomenclature for anatomical structures in radiotherapy (RT) clinical data is a critical prerequisite for data curation and data-driven research in the era of big data and artificial intelligence, but it is currently an unmet need. Existing methods either cannot handle cross-institutional datasets or suffer from heavy imbalance and poor-quality delineation in clinical RT datasets. To solve these problems, we propose an automated structure nomenclature standardization framework, 3D Non-local Network with Voting (3DNNV). This framework consists of an improved data processing strategy, namely adaptive sampling and adaptive cropping (ASAC) with voting, and an optimized feature extraction module. The framework simulates clinicians' domain knowledge and recognition mechanisms to identify small-volume organs at risk (OARs) with heavily imbalanced data better than other methods. We used partial data from an open-source head-and-neck cancer dataset to train the model, then tested the model on three cross-institutional datasets to demonstrate its generalizability. 3DNNV outperformed the baseline model, achieving higher average true positive rates (TPR) over all categories on the three test datasets (+8.27%, +2.39%, and +5.53%, respectively). More importantly, 3DNNV improved the F1 score on the test dataset from 28.63% to 91.17% for a small-volume OAR with only 9 training samples. The results show that 3DNNV can be applied to identify OARs, even error-prone ones. Furthermore, we discuss the limitations and applicability of the framework in practical scenarios. The framework we developed can assist in standardizing structure nomenclature to facilitate data-driven clinical research in cancer radiotherapy.
Regularized Three-Dimensional Generative Adversarial Nets for Unsupervised Metal Artifact Reduction in Head and Neck CT Images
Nakao, Megumi
Imanishi, Keiho
Ueda, Nobuhiro
Imai, Yuichiro
Kirita, Tadaaki
Matsuda, Tetsuya
IEEE Access2020Journal Article, cited 1 times
Website
Algorithm Development
Machine Learning
Head and Neck Neoplasms
The reduction of metal artifacts in computed tomography (CT) images, specifically for strong artifacts generated from multiple metal objects, is a challenging issue in medical imaging research. Although there have been some studies on supervised metal artifact reduction through the learning of synthesized artifacts, it is difficult for simulated artifacts to cover the complexity of the real physical phenomena that may be observed in X-ray propagation. In this paper, we introduce metal artifact reduction methods based on an unsupervised volume-to-volume translation learned from clinical CT images. We construct three-dimensional adversarial nets with a regularized loss function designed for metal artifacts from multiple dental fillings. The results of experiments using a CT volume database of 361 patients demonstrate that the proposed framework has an outstanding capacity to reduce strong artifacts and to recover underlying missing voxels, while preserving the anatomical features of soft tissues and tooth structures from the original images.
Weakly Supervised Deep Learning for COVID-19 Infection Detection and Classification From CT Images
Hu, Shaoping
Gao, Yuan
Niu, Zhangming
Jiang, Yinghui
Li, Lao
Xiao, Xianglu
Wang, Minhao
Fang, Evandro Fei
Menpes-Smith, Wade
Xia, Jun
Ye, Hui
Yang, Guang
IEEE Access2020Journal Article, cited 0 times
LCTSC
An outbreak of a novel coronavirus disease (i.e., COVID-19) was recorded in Wuhan, China in late December 2019, and subsequently became a pandemic around the world. Although COVID-19 is an acutely treated disease, it can also be fatal, with a case fatality rate of 4.03% in China and rates as high as 13.04% in Algeria and 12.67% in Italy (as of 8th April 2020). The onset of serious illness may result in death as a consequence of substantial alveolar damage and progressive respiratory failure. Although laboratory testing, e.g., using reverse transcription polymerase chain reaction (RT-PCR), is the gold standard for clinical diagnosis, the tests may produce false negatives. Moreover, under the pandemic situation, shortage of RT-PCR testing resources may also delay the following clinical decision and treatment. Under such circumstances, chest CT imaging has become a valuable tool for both diagnosis and prognosis of COVID-19 patients. In this study, we propose a weakly supervised deep learning strategy for detecting and classifying COVID-19 infection from CT images. The proposed method can minimise the requirements of manual labelling of CT images but still be able to obtain accurate infection detection and distinguish COVID-19 from non-COVID-19 cases. Based on the promising results obtained qualitatively and quantitatively, we can envisage a wide deployment of our developed technique in large-scale clinical studies.
On the Continuous Processing of Health Data in Edge-Fog-Cloud Computing by Using Micro/Nanoservice Composition
Sánchez-Gallegos, Dante D.
Galaviz-Mosqueda, Alejandro
Gonzalez-Compean, J. L.
Villarreal-Reyes, Salvador
Perez-Ramos, Aldo E.
Carrizales-Espinoza, Diana
Carretero, Jesus
IEEE Access2020Journal Article, cited 0 times
TCGA-LUAD
The edge, the fog, the cloud, and even the end-user's devices play a key role in the management of the health-sensitive content/data lifecycle. However, the creation and management of solutions including multiple applications executed by multiple users in multiple environments (edge, fog, and cloud) to process multiple health repositories that, at the same time, fulfill non-functional requirements (NFRs) represents a complex challenge for health care organizations. This paper presents the design, development, and implementation of an architectural model to create, on-demand, edge-fog-cloud processing structures to continuously handle big health data and, at the same time, to execute services for fulfilling NFRs. In this model, constructive and modular blocks, implemented as microservices and nanoservices, are recursively interconnected to create edge-fog-cloud processing structures as infrastructure-agnostic services. Continuity schemes create dataflows through the blocks of edge-fog-cloud structures and enforce, in an implicit manner, the fulfillment of NFRs for data arriving at and departing from each block of each edge-fog-cloud structure. To show the feasibility of this model, a prototype was built using this model, which was evaluated in a case study based on the processing of health data for supporting critical decision-making procedures in remote patient monitoring. This study considered scenarios where end-users and medical staff received insights discovered when processing electrocardiograms (ECGs) produced by sensors in wireless IoT devices, as well as where physicians received patient records (spirometry studies, ECGs and tomography images) and warnings raised when online analyzing and identifying anomalies in the analyzed ECG data. A scenario where organizations simultaneously manage multiple edge-fog-cloud structures for the processing of health data and contents delivered to internal and external staff was also studied. The evaluation of these scenarios showed the feasibility of applying this model to the building of solutions interconnecting multiple services/applications managing big health data through different environments.
A Novel Framework for Improving Pulse-Coupled Neural Networks With Fuzzy Connectedness for Medical Image Segmentation
Bai, Peirui
Yang, Kai
Min, Xiaolin
Guo, Ziyang
Li, Chang
Fu, Yingxia
Han, Chao
Lu, Xiang
Liu, Qingyi
IEEE Access2020Journal Article, cited 0 times
TCGA-LIHC
Machine Learning
A pulse-coupled neural network (PCNN) is a promising image segmentation approach that requires no training. However, it is challenging to successfully apply a PCNN to medical image segmentation due to common but difficult scenarios such as irregular object shapes, blurred boundaries, and intensity inhomogeneity. To improve this situation, a novel framework incorporating fuzzy connectedness (FC) is proposed. First, a comparative study of the traditional PCNN models is carried out to analyze the framework and firing mechanism. Then, the characteristic matrix of fuzzy connectedness (CMFC) is presented for the first time. The CMFC can provide more intensity information and spatial relationships at the pixel level, which is helpful for producing a more reasonable firing mechanism in the PCNN models. Third, by integrating the CMFC into the PCNN framework models, a construction scheme of FC-PCNN models is designed. To illustrate this concept, a general solution that can be applied to different PCNN models is developed. Next, the segmentation performances of the proposed FC-PCNN models are evaluated by comparison with the traditional PCNN models, the traditional segmentation methods, and deep learning methods. The test images include synthetic and real medical images from the Internet and three public medical image datasets. The quantitative and visual comparative analysis demonstrates that the proposed FC-PCNN models outperform the traditional PCNN models and the traditional segmentation methods and achieve competitive performance to the deep learning methods. In addition, the proposed FC-PCNN models have favorable capability to eliminate inference from surrounding artifacts.
Automatic Detection of White Blood Cancer From Bone Marrow Microscopic Images Using Convolutional Neural Networks
Kumar, Deepika
Jain, Nikita
Khurana, Aayush
Mittal, Sweta
Satapathy, Suresh Chandra
Senkerik, Roman
Hemanth, Jude D.
IEEE Access2020Journal Article, cited 0 times
SN-AM
Leukocytes, produced in the bone marrow, make up around one percent of all blood cells. Uncontrolled growth of these white blood cells leads to blood cancer. Among the different types of such cancers, the proposed study provides a robust mechanism for the classification of Acute Lymphoblastic Leukemia (ALL) and Multiple Myeloma (MM) using the SN-AM dataset. Acute lymphoblastic leukemia (ALL) is a type of cancer where the bone marrow forms too many lymphocytes. On the other hand, multiple myeloma (MM), a different kind of cancer, causes cancer cells to accumulate in the bone marrow rather than releasing them into the bloodstream, where they crowd out and prevent the production of healthy blood cells. Conventionally, the process was carried out manually by a skilled professional in a considerable amount of time. The proposed model eradicates the probability of errors in the manual process by employing deep learning techniques, namely convolutional neural networks. The model, trained on images of cells, first pre-processes the images and extracts the best features. This is followed by training the model with the optimized Dense Convolutional neural network framework (termed DCNN here) and finally predicting the type of cancer present in the cells. The model reproduced all the measurements correctly, and it recalled the samples exactly 94 times out of 100. The overall accuracy was recorded to be 97.2%, which is better than conventional machine learning methods like Support Vector Machines (SVMs), Decision Trees, Random Forests, Naive Bayes, etc. This study indicates that the DCNN model's performance is close to that of the established CNN architectures, with far fewer parameters and less computation time, tested on the retrieved dataset. Thus, the model can be used effectively as a tool for determining the type of cancer in the bone marrow.
Multi-Resolution Texture-Based 3D Level Set Segmentation
Reska, Daniel
Kretowski, Marek
IEEE Access2020Journal Article, cited 0 times
LCTSC
This article presents a novel three-dimensional level set method for the segmentation of textured volumes. The algorithm combines sparse and multi-resolution schemes to speed up computations and utilise the multi-scale nature of extracted texture features. The method’s performance is also enhanced by graphics processing unit (GPU) acceleration. The segmentation process starts with an initial surface at the coarsest resolution of the input volume and moves to progressively higher scales. The surface evolution is driven by a generalised data term that can consider multiple feature types and is not tied to specific descriptors. The proposed implementation of this approach uses features based on grey level co-occurrence matrices and discrete wavelet transform. Quantitative results from experiments performed on synthetic volumes showed a significant improvement in segmentation quality over traditional methods. Qualitative validation using real-world medical datasets, and comparison with other similar GPU-based algorithms, were also performed. In all cases, the proposed implementation provided good segmentation accuracy while maintaining competitive performance.
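The grey level co-occurrence matrix features that drive such a texture-based data term can be computed with scikit-image (the functions were named greycomatrix/greycoprops before scikit-image 0.19); a single-window 2-D example on synthetic data:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# In the multi-resolution scheme, descriptors like these would be computed
# over local windows at each scale; shown for one window for brevity.
window = (np.random.rand(16, 16) * 255).astype(np.uint8)
glcm = graycomatrix(window, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```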
A Comprehensive Review of Computer-Aided Diagnosis of Pulmonary Nodules Based on Computed Tomography Scans
Cao, Wenming
Wu, Rui
Cao, Guitao
He, Zhihai
IEEE Access2020Journal Article, cited 0 times
QIN LUNG CT
Lung cancer is one of the malignant tumor diseases with the fastest increase in morbidity and mortality, which poses a great threat to human health. Low-Dose Computed Tomography (LDCT) screening has been proved as a practical technique for improving the accuracy of pulmonary nodule detection and classification at early cancer diagnosis, which helps to reduce mortality. Therefore, with the explosive growth of CT data, it is of great clinical significance to exploit an effective Computer-Aided Diagnosis (CAD) system for radiologists on automatic nodule analysis. In this article, a comprehensive review of the application and development of CAD systems is presented. The experimental benchmarks for nodule analysis are first described and summarized, covering public datasets of lung CT scans, commonly used evaluation metrics and various medical competitions. We then introduce the main structure of a CAD system and present some efficient methodologies. For the extensive use of Convolutional Neural Network (CNN) based methods in pulmonary nodule investigations recently, we summarized the advantages of CNNs over traditional image processing methods. Besides, we mainly select the CAD systems developed by state-of-the-art CNNs with excellent performance and analyze their objectives, algorithms as well as results. Finally, research trends, existing challenges, and future directions in this field are discussed.
Blockchain for Privacy Preserving and Trustworthy Distributed Machine Learning in Multicentric Medical Imaging (C-DistriM)
Zerka, Fadila
Urovi, Visara
Vaidyanathan, Akshayaa
Barakat, Samir
Leijenaar, Ralph T. H.
Walsh, Sean
Gabrani-Juma, Hanif
Miraglio, Benjamin
Woodruff, Henry C.
Dumontier, Michel
Lambin, Philippe
IEEE Access2020Journal Article, cited 0 times
NSCLC-Radiomics
Distributed learning
Gross Tumor Volume (GTV)
Segmentation
Classification
ReLu
Convolutional Neural Network (CNN)
Cloud computing
The utility of Artificial Intelligence (AI) in healthcare strongly depends upon the quality of the data used to build models, and the confidence in the predictions they generate. Access to sufficient amounts of high-quality data to build accurate and reliable models remains problematic owing to substantive legal and ethical constraints on making clinically relevant research data available offsite. New technologies such as distributed learning offer a pathway forward, but unfortunately tend to suffer from a lack of transparency, which undermines trust in what data are used for the analysis. To address such issues, we hypothesized that a novel distributed learning approach that combines sequential distributed learning with a blockchain-based platform, namely Chained Distributed Machine learning (C-DistriM), would be feasible and would give a similar result as a standard centralized approach. C-DistriM enables health centers to dynamically participate in training distributed learning models. We demonstrate C-DistriM using the NSCLC-Radiomics open data to predict two-year lung-cancer survival. A comparison of the performance of this distributed solution, evaluated in six different scenarios, and the centralized approach showed no statistically significant difference (AUCs between central and distributed models); all DeLong tests yielded p-values >0.05. This methodology removes the need to blindly trust the computation in one specific server on a distributed learning network. This fusion of blockchain and distributed learning serves as a proof-of-concept to increase transparency, trust, and ultimately accelerate the adoption of AI in multicentric studies. We conclude that our blockchain-based model for sequential training on distributed datasets is a feasible approach and provides equivalent performance to the centralized approach.
Explainable Machine Learning for Early Assessment of COVID-19 Risk Prediction in Emergency Departments
Casiraghi, Elena
Malchiodi, Dario
Trucco, Gabriella
Frasca, Marco
Cappelletti, Luca
Fontana, Tommaso
Esposito, Alessandro Andrea
Avola, Emanuele
Jachetti, Alessandro
Reese, Justin
Rizzi, Alessandro
Robinson, Peter N.
Valentini, Giorgio
IEEE Access2020Journal Article, cited 0 times
LCTSC
Between January and October of 2020, the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus infected more than 34 million persons in a worldwide pandemic leading to over one million deaths worldwide (data from the Johns Hopkins University). Since the virus began to spread, emergency departments were busy with COVID-19 patients for whom a quick decision regarding in- or outpatient care was required. The virus can cause characteristic abnormalities in chest radiographs (CXR), but, due to the low sensitivity of CXR, additional variables and criteria are needed to accurately predict risk. Here, we describe a computerized system aimed primarily at extracting the most relevant radiological, clinical, and laboratory variables for improving patient risk prediction, and secondarily at presenting an explainable machine learning system, which may provide simple decision criteria to be used by clinicians as a support for assessing patient risk. To achieve robust and reliable variable selection, Boruta and Random Forest (RF) are combined in a 10-fold cross-validation scheme to produce a variable importance estimate not biased by the presence of surrogates. The most important variables are then selected to train a RF classifier, whose rules may be extracted, simplified, and pruned to finally build an associative tree, particularly appealing for its simplicity. Results show that the radiological score automatically computed through a neural network is highly correlated with the score computed by radiologists, and that laboratory variables, together with the number of comorbidities, aid risk prediction. The prediction performance of our approach was compared to that of generalized linear models and shown to be effective and robust. The proposed machine learning-based computational system can be easily deployed and used in emergency departments for rapid and accurate risk prediction in COVID-19 patients.
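The Boruta-plus-RF variable selection described above is available in Python via the boruta package (BorutaPy); a hedged sketch on synthetic data, with the paper's 10-fold cross-validation wrapper omitted:

```python
import numpy as np
from boruta import BorutaPy
from sklearn.ensemble import RandomForestClassifier

# X: radiological, clinical and laboratory variables; y: patient risk label.
# Synthetic stand-ins here; the study's real variables are not reproduced.
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 2, 300)

rf = RandomForestClassifier(n_jobs=-1, class_weight="balanced", max_depth=5)
boruta = BorutaPy(rf, n_estimators="auto", random_state=42)
boruta.fit(X, y)  # BorutaPy expects numpy arrays, not DataFrames
print(np.flatnonzero(boruta.support_))  # indices of confirmed variables
```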
A Joint Detection and Recognition Approach to Lung Cancer Diagnosis From CT Images With Label Uncertainty
Chenyang, L.
Chan, S. C.
IEEE Access2020Journal Article, cited 0 times
Website
LIDC-IDRI
LUNA16 Challenge
lung
radiomic features
deep learning
Automatic lung cancer diagnosis from computed tomography (CT) images requires the detection of nodule location as well as nodule malignancy prediction. This article proposes a joint network for simultaneous lung nodule detection, segmentation and classification subject to possible label uncertainty in the training set. It operates in an end-to-end manner and provides detection and classification of nodules simultaneously, together with a segmentation of the detected nodules. Both the nodule detection and classification subnetworks of the proposed joint network adopt a 3-D encoder-decoder architecture for better exploration of the 3-D data. Moreover, the classification subnetwork utilizes the features extracted from the detection subnetwork and multiscale nodule-specific features for boosting the classification performance. The former serves as valuable prior information for optimizing the more complicated 3D classification network directly to better distinguish suspicious nodules from other tissues, compared with direct backpropagation from the decoder. Experimental results show that this co-training yields better performance on both tasks. The framework is validated on the LUNA16 and LIDC-IDRI datasets, and a pseudo-label approach is proposed for addressing the label uncertainty problem due to inconsistent annotations/labels. Experimental results show that the proposed nodule detector outperforms the state-of-the-art algorithms and yields comparable performance to state-of-the-art nodule classification algorithms when classification alone is considered. Since our joint detection/recognition approach can directly detect nodules and classify their malignancy instead of performing the tasks separately, our approach is more practical for automatic cancer and nodule detection.
Deep Feature Selection and Decision Level Fusion for Lungs Nodule Classification
Ali, Imdad
Muzammil, Muhammad
Haq, Ihsan Ul
Khaliq, Amir A.
Abdullah, Suheel
IEEE Access2021Journal Article, cited 0 times
SPIE-AAPM Lung CT Challenge
The presence of pulmonary nodules indicates possible lung cancer. Computer-aided diagnosis (CAD) and classification of such nodules in CT images improve lung cancer screening. Classic CAD systems utilize a nodule detector and a feature-based classifier. In this work, we propose a decision level fusion technique to improve the performance of the CAD system for lung nodule classification. First, we evaluated the performance of Support Vector Machine (SVM) and AdaBoostM2 algorithms based on deep features from state-of-the-art transferable architectures (such as VGG-16, VGG-19, GoogLeNet, Inception-V3, ResNet-18, ResNet-50, ResNet-101 and InceptionResNet-V2). Then, we analyzed the performance of the SVM and AdaBoostM2 classifiers as a function of the deep features. We also extracted the deep features by identifying the optimal layers, which improved the performance of the classifiers. The classification accuracy increased from 76.88% to 86.28% for ResNet-101 and from 67.37% to 83.40% for GoogLeNet. Similarly, the error rate was also reduced significantly. Moreover, the results showed that SVM is more robust and efficient for deep features as compared to AdaBoostM2. The results are based on 4-fold cross-validation and are presented for the publicly available LUNGx challenge dataset. We showed that the proposed technique outperforms state-of-the-art techniques, achieving an accuracy of 90.46 ± 0.25%.
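Extracting deep features from a pretrained backbone and classifying them with an SVM, as evaluated above, can be sketched with torchvision (>= 0.13 for the weights API) and scikit-learn; the layer choice, input shapes, and data below are assumptions, not the paper's setup:

```python
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC

# Pretrained ResNet-101 with the final fully connected layer removed;
# which intermediate layer is "optimal" was determined empirically
# in the paper and is not reproduced here.
backbone = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

def deep_features(batch):
    """batch: (N, 3, 224, 224) ImageNet-normalized tensor -> (N, 2048)."""
    with torch.no_grad():
        return feature_extractor(batch).flatten(1).numpy()

X = deep_features(torch.rand(32, 3, 224, 224))  # stand-in nodule patches
y = np.random.randint(0, 2, 32)                  # stand-in labels
clf = SVC(kernel="rbf").fit(X, y)                # SVM on deep features
```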
Breast Mass Detection With Faster R-CNN: On the Feasibility of Learning From Noisy Annotations
Famouri, Sina
Morra, Lia
Mangia, Leonardo
Lamberti, Fabrizio
IEEE Access2021Journal Article, cited 0 times
CBIS-DDSM
In this work we study the impact of noise on the training of object detection networks for the medical domain, and how it can be mitigated by improving the training procedure. Annotating large medical datasets for training data-hungry deep learning models is expensive and time consuming. Leveraging information that is already collected in clinical practice, in the form of text reports, bookmarks or lesion measurements would substantially reduce this cost. Obtaining precise lesion bounding boxes through automatic mining procedures, however, is difficult. We provide here a quantitative evaluation of the effect of bounding box coordinate noise on the performance of Faster R-CNN object detection networks for breast mass detection. Varying degrees of noise are simulated by randomly modifying the bounding boxes: in our experiments, bounding boxes could be enlarged up to six times the original size. The noise is injected in the CBIS-DDSM collection, a well curated public mammography dataset for which accurate lesion location is available. We show how, due to an imperfect matching between the ground truth and the network bounding box proposals, the noise is propagated during training and reduces the ability of the network to correctly classify lesions from background. When using the standard Intersection over Union criterion, the area under the FROC curve decreases by up to 9%. A novel matching criterion is proposed to improve tolerance to noise.
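The noise-injection protocol (randomly enlarging ground-truth boxes up to six times their size) can be simulated in a few lines; whether "size" means side length or area, and the sampling distribution used, are assumptions here:

```python
import numpy as np

def enlarge_box(box, max_factor=6.0, rng=None):
    """Randomly enlarge a ground-truth box [x1, y1, x2, y2] by up to
    `max_factor` times its side lengths, keeping the centre fixed."""
    rng = rng if rng is not None else np.random.default_rng()
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    f = rng.uniform(1.0, max_factor)  # uniform scaling is an assumption
    dw, dh = w * (f - 1) / 2, h * (f - 1) / 2
    return [x1 - dw, y1 - dh, x2 + dw, y2 + dh]

print(enlarge_box([100, 100, 150, 160], rng=np.random.default_rng(5)))
```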
Discovery of a Generalization Gap of Convolutional Neural Networks on COVID-19 X-Rays Classification
Ahmed, Kaoutar Ben
Goldgof, Gregory M.
Paul, Rahul
Goldgof, Dmitry B.
Hall, Lawrence O.
IEEE Access2021Journal Article, cited 0 times
COVID-19-AR
A number of recent papers have shown experimental evidence that suggests it is possible to build highly accurate deep neural network models to detect COVID-19 from chest X-ray images. In this paper, we show that good generalization to unseen sources has not been achieved. Experiments with richer data sets than have previously been used show models have high accuracy on seen sources, but poor accuracy on unseen sources. The reason for the disparity is that the convolutional neural network model, which learns features, can focus on differences in X-ray machines or in positioning within the machines, for example. Any feature that a person would clearly rule out is called a confounding feature. Some of the models were trained on COVID-19 image data taken from publications, which may be different from raw images. Some data sets were of pediatric cases with pneumonia, whereas COVID-19 chest X-rays are almost exclusively from adults, so lung size becomes a spurious feature that can be exploited. In this work, we have eliminated many confounding features by working with as close to raw data as possible. Still, deep learned models may leverage source-specific confounders to differentiate COVID-19 from pneumonia, preventing generalization to new data sources (i.e. external sites). Our models have achieved an AUC of 1.00 on seen data sources but in the worst case only scored an AUC of 0.38 on unseen ones. This indicates that such models need further assessment/development before they can be broadly clinically deployed. An example of fine-tuning to improve performance at a new site is given.
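The seen/unseen-source experiments above amount to a leave-one-source-out protocol. A hedged sketch of that protocol follows, with synthetic placeholder features, labels, and site identifiers, and a logistic regression standing in for the paper's CNNs.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))          # stand-in image features
y = rng.integers(0, 2, size=120)        # COVID-19 vs pneumonia labels
groups = rng.integers(0, 3, size=120)   # source/site identifier per scan

# Train on all sources but one, evaluate on the held-out (unseen) source.
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    auc = roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1])
    print(f"held-out source {groups[test][0]}: AUC = {auc:.2f}")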
CT-Scan Denoising Using a Charbonnier Loss Generative Adversarial Network
Gajera, Binit
Kapil, Siddhant Raj
Ziaei, Dorsa
Mangalagiri, Jayalakshmi
Siegel, Eliot
Chapman, David
IEEE Access2021Journal Article, cited 0 times
Phantom FDA
We propose a Generative Adversarial Network (GAN) optimized for noise reduction in CT scans. The objective of CT scan denoising is to obtain higher quality imagery using lower radiation exposure to the patient. Recent work in computer vision has shown that the use of Charbonnier distance as a term in the perceptual loss of a GAN can improve the performance of image reconstruction and video super-resolution. However, a Charbonnier structural loss term has not yet been applied or evaluated for CT scan denoising. Our proposed GAN makes use of a Wasserstein adversarial loss, a pretrained VGG19 perceptual loss, as well as a Charbonnier distance structural loss. We evaluate our approach using both applied Poisson noise to simulate low-dose CT imagery and an anthropomorphic thoracic phantom at different exposure levels. Our evaluation criteria are Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) of the denoised images, and we compare the results of our method against recent state-of-the-art deep denoising GANs. In addition, we report global noise through uniform soft tissue mediums. Our findings show that incorporating the Charbonnier loss with the VGG-19 network improves denoising performance as measured by PSNR and SSIM, and that the method greatly reduces soft tissue noise to levels comparable to the NDCT scan.
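The structural term at the heart of this approach is the Charbonnier distance, a smooth, differentiable relaxation of the L1 penalty. A minimal PyTorch sketch follows; the epsilon value and the weighting against the Wasserstein adversarial and VGG-19 perceptual terms are assumptions for illustration.

import torch

def charbonnier_loss(pred, target, eps=1e-3):
    # Differentiable L1-like penalty: sqrt((x - y)^2 + eps^2), averaged.
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

denoised = torch.randn(2, 1, 64, 64, requires_grad=True)  # generator output
reference = torch.randn(2, 1, 64, 64)                     # NDCT target
loss = charbonnier_loss(denoised, reference)
loss.backward()
print(loss.item())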
Genotype-Guided Radiomics Signatures for Recurrence Prediction of Non-Small Cell Lung Cancer
Aonpong, Panyanat
Iwamoto, Yutaro
Han, Xian-Hua
Lin, Lanfen
Chen, Yen-Wei
IEEE Access2021Journal Article, cited 0 times
NSCLC Radiogenomics
Non-small cell lung cancer (NSCLC) is a serious disease with a high recurrence rate after surgery. Recently, many machine learning methods have been proposed for recurrence prediction. Methods using gene expression data achieve high accuracy but are expensive; radiomics features derived from computed tomography (CT) images are a cost-effective alternative, but their accuracy is not competitive. In this paper, we propose a genotype-guided radiomics method (GGR) for obtaining high prediction accuracy at low cost. We used a public radiogenomics dataset of NSCLC, which includes CT images and gene expression data. Our proposed method is a two-step method that uses two models. The first model is a gene estimation model, which estimates gene expression from radiomics features and deep features extracted from CT images. The second model predicts recurrence using the estimated gene expression. The GGR method is designed around hybrid features, a fusion of handcrafted and deep learning-based features. The experiments demonstrated that prediction accuracy can be improved significantly from 78.61% (existing radiomics method) and 79.09% (ResNet50) to 83.28% by the proposed GGR.
Dose-Conditioned Synthesis of Radiotherapy Dose With Auxiliary Classifier Generative Adversarial Network
Liao, Wentao
Pu, Yuehu
IEEE Access2021Journal Article, cited 0 times
Head-Neck Cetuximab
HNSCC
In recent years, there has been growing research on automatic radiotherapy planning based on artificial intelligence. Most of this work focuses on dose prediction, that is, the generation of radiation dose distribution images. Because radiotherapy planning data come in small samples, it is difficult to obtain large-scale training datasets. In this paper, we propose a model for dose-conditioned synthesis of radiotherapy dose using an Auxiliary Classifier Generative Adversarial Network (ACGAN), together with a method for customizing and synthesizing dose distribution images for specific tumor types and beam types. The generated dose distribution images are evaluated with MS-SSIM and PSNR; the results show that the image quality of the ACGAN-generated dose distributions is excellent, very close to the real data, and highly diverse, so they can be used for data augmentation of training sets for dose prediction methods.
Training Convolutional Networks for Prostate Segmentation With Limited Data
Saunders, Sara L.
Leng, Ethan
Spilseth, Benjamin
Wasserman, Neil
Metzger, Gregory J.
Bolan, Patrick J.
IEEE Access2021Journal Article, cited 0 times
Prostate-3T
PROSTATE-DIAGNOSIS
PROSTATEx
Multi-zonal segmentation is a critical component of computer-aided diagnostic systems for detecting and staging prostate cancer. Previously, convolutional neural networks such as the U-Net have been used to produce fully automatic multi-zonal prostate segmentation on magnetic resonance images (MRIs) with performance comparable to human experts, but these often require large amounts of manually segmented training data to produce acceptable results. For institutions that have limited amounts of labeled MRI exams, it is not clear how much data is needed to train a segmentation model, and which training strategy should be used to maximize the value of the available data. This work compares how the strategies of transfer learning and aggregated training using publicly available external data can improve segmentation performance on internal, site-specific prostate MR images, and evaluates how the performance varies with the amount of internal data used for training. Cross training experiments were performed to show that differences between internal and external data were impactful. Using a standard U-Net architecture, optimizations were performed to select between 2D and 3D variants, and to determine the depth of fine-tuning required for optimal transfer learning. With the optimized architecture, the performance of transfer learning and aggregated training were compared for a range of 5-40 internal datasets. The results show that both strategies consistently improve performance and produced segmentation results that are comparable to that of human experts with approximately 20 site-specific MRI datasets. These findings can help guide the development of site-specific prostate segmentation models for both clinical and research applications.
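The "depth of fine-tuning" optimization described above boils down to freezing a chosen number of pretrained encoder stages and retraining the rest on site-specific data. The sketch below illustrates this with a toy encoder; the stage naming (enc0..enc3) and the freezing rule are placeholders, since real U-Net implementations expose their blocks differently.

import torch.nn as nn

class TinyUNetEncoder(nn.Module):
    # Stand-in for a U-Net encoder with named stages enc0..enc3.
    def __init__(self):
        super().__init__()
        chans = [1, 8, 16, 32, 64]
        for i in range(4):
            self.add_module(f"enc{i}", nn.Sequential(
                nn.Conv2d(chans[i], chans[i + 1], 3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            ))

def freeze_first_stages(model, k):
    # Freeze parameters of stages enc0..enc{k-1}; leave deeper stages trainable.
    for name, param in model.named_parameters():
        stage = int(name.split(".")[0][3:])   # "enc2.0.weight" -> 2
        param.requires_grad = stage >= k

encoder = TinyUNetEncoder()        # pretend this was pretrained on external data
freeze_first_stages(encoder, k=2)  # fine-tune only the two deepest stages
print([n for n, p in encoder.named_parameters() if p.requires_grad])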
Augmented Noise Learning Framework for Enhancing Medical Image Denoising
Rai, Swati
Bhatt, Jignesh S.
Patra, S. K.
IEEE Access2021Journal Article, cited 0 times
LDCT-and-Projection-data
Deep learning attempts medical image denoising either by directly learning the noise present or by first learning the image content. We observe that residual learning (RL) often suffers from signal leakage, while dictionary learning (DL) is prone to Gibbs (ringing) artifacts. In this paper, we propose an unsupervised noise learning framework that enhances denoising by augmenting the limitations of RL with the strengths of DL and vice versa. To this end, we propose a ten-layer deep residue network (DRN) augmented with patch-based dictionaries. The input images are presented to patch-based DL to indirectly learn the noise via sparse representation, while they are given to the DRN to directly learn the noise. An optimal noise characterization is captured by iterating the DL/DRN network against the proposed loss. The denoised images are obtained by subtracting the learned noise from the available data. We show that the augmented DRN effectively handles high-frequency regions to avoid Gibbs artifacts due to DL, while the augmented DL helps reduce overfitting due to RL. Comparative experiments with many state-of-the-art methods on MRI and CT datasets (2D/3D), including low-dose CT (LDCT), were conducted on a GPU-based supercomputer. The proposed network is trained by adding different levels of Rician noise for MRI and Poisson noise for CT images, reflecting the different nature and statistical distribution of the datasets. Ablation studies demonstrate that the proposed augmented approach enhances denoising with minimal signal leakage and the fewest artifacts.
ProCDet: a new method for prostate cancer detection based on MR images
Y. Qian
Z. Zhang
B. Wang
IEEE Access2021Journal Article, cited 0 times
Website
PROSTATEx
Machine Learning
Registration
Comparison of Current Deep Convolutional Neural Networks for the Segmentation of Breast Masses in Mammograms
Anaya-Isaza, Andrés
Mera-Jiménez, Leonel
Cabrera-Chavarro, Johan Manuel
Guachi-Guachi, Lorena
Peluffo-Ordóñez, Diego
Rios-Patiño, Jorge Ivan
IEEE Access2021Journal Article, cited 0 times
CBIS-DDSM
Breast cancer causes approximately 684,996 deaths worldwide, making it the leading cause of female cancer mortality. However, these figures can be reduced with early diagnosis through mammographic imaging, allowing for the timely and effective treatment of this disease. To establish the best tools for contributing to the automatic diagnosis of breast cancer, different deep learning (DL) architectures were compared in terms of breast lesion segmentation, lesion type classification, and degree of suspicion of malignancy tests. The tasks were completed with state-of-the-art architectures and backbones. Initially, during segmentation, the base UNet, Visual Geometry Group 19 (VGG19), InceptionResNetV2, EfficientNet, MobileNetv2, ResNet, ResNeXt, MultiResUNet, linkNet-VGG19, DenseNet, SEResNet and SeResNeXt architectures were compared, where “Res” denotes a residual network. In addition, training was performed with 5 of the most advanced loss functions and validated by the Dice coefficient, sensitivity, and specificity. The proposed models achieved Dice values above 90%, with the EfficientNet architecture achieving 94.75% and 99% accuracy on the two tasks. Subsequently, classification was addressed with the ResNet50V2, VGG19, InceptionResNetV2, DenseNet121, InceptionV3, Xception and EfficientNetB7 networks. The proposed models achieved 96.97% and 97.73% accuracy through the VGG19 and ResNet50V2 networks on the lesion classification and degree of suspicion tasks, respectively. All three tasks were addressed with open-access databases, including the Digital Database for Screening Mammography (DDSM), the Mammographic Image Analysis Society (MIAS) database, and INbreast.
A Deep Learning Framework Integrating the Spectral and Spatial Features for Image-Assisted Medical Diagnostics
Ghosh, S.
Das, S.
Mallipeddi, R.
IEEE Access2021Journal Article, cited 0 times
CBIS-DDSM
Computer Aided Detection (CADe)
Radiomics
Image projection
Spectral analysis
COVID-19 detection
Medical imaging
class imbalance
deep learning
diagnostic solution
discrete cosine transform
discrete wavelet transform
saliency map
BREAST
Diabetic Retinopathy Detection
CHEST
The development of a computer-aided disease detection system to ease the long and arduous manual diagnostic process is an emerging research interest. Living through the recent outbreak of the COVID-19 virus, we propose a machine learning and computer vision based automatic diagnostic solution for detecting COVID-19 infection. Our proposed method applies to chest radiographs and uses readily available infrastructure. No studies in this direction have considered the spectral aspect of the medical images, which motivates us to investigate the role of spectral-domain information of medical images, along with the spatial content, towards improved disease detection ability. Successful integration of spatial and spectral features is demonstrated on the COVID-19 infection detection task. Our proposed method comprises three stages: feature extraction, dimensionality reduction via projection, and prediction. First, images are transformed into the spectral and spatio-spectral domains using the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT), two powerful image processing algorithms. Next, features from the spatial, spectral, and spatio-spectral domains are projected into a lower dimension through a Convolutional Neural Network (CNN), and the three types of projected features are fed to a Multilayer Perceptron (MLP) for final prediction. The combination of the three feature types yielded superior performance to any of the features used individually, indicating the presence of complementary information in the spectral domain of the chest radiograph for characterizing the considered medical condition. Moreover, saliency maps corresponding to classes representing different medical conditions demonstrate the reliability of the proposed method. The study is further extended to identify different medical conditions using diverse medical image datasets and shows the efficiency of leveraging the combined features. Altogether, the proposed method exhibits potential as a generalized and robust medical image-assisted diagnostic solution.
Feature Extraction of White Blood Cells Using CMYK-Moment Localization and Deep Learning in Acute Myeloid Leukemia Blood Smear Microscopic Images
Elhassan, Tusneem Ahmed M.
Rahim, Mohd Shafry Mohd
Swee, Tan Tian
Hashim, Siti Zaiton Mohd
Aljurf, Mahmoud
IEEE Access2022Journal Article, cited 0 times
AML-Cytomorphology_LMU
Artificial intelligence has revolutionized medical diagnosis, particularly for cancers. Acute myeloid leukemia (AML) diagnosis is a tedious protocol that is prone to human and machine errors. In several instances, it is difficult to make an accurate final decision even after careful examination by an experienced pathologist. However, computer-aided diagnosis (CAD) can help reduce the errors and time associated with AML diagnosis. White Blood Cells (WBC) detection is a critical step in AML diagnosis, and deep learning is considered a state-of-the-art approach for WBC detection. However, the accuracy of WBC detection is strongly associated with the quality of the extracted features used in training the pixel-wise classification models. CAD depends on studying the different patterns of changes associated with WBC counts and features. In this study, a new hybrid feature extraction method was developed using image processing and deep learning methods. The proposed method consists of two steps: 1) a region of interest (ROI) is extracted using the CMYK-moment localization method and 2) deep learning-based features are extracted using a CNN-based feature fusion method. Several classification algorithms are used to evaluate the significance of the extracted features. The proposed feature extraction method was evaluated using an external dataset and benchmarked against other feature extraction methods. The proposed method achieved excellent performance, generalization, and stability using all the classifiers, with overall classification accuracies of 97.57% and 96.41% using the primary and secondary datasets, respectively. This method has opened a new alternative to improve the detection of WBCs, which could lead to a better diagnosis of AML.
Data Augmentation and Transfer Learning for Brain Tumor Detection in Magnetic Resonance Imaging
Anaya-Isaza, Andres
Mera-Jimenez, Leonel
IEEE Access2022Journal Article, cited 1 times
Website
TCGA-LGG
ResNet50
Computer Aided Detection (CADe)
The exponential growth of deep learning networks has allowed us to tackle complex tasks, even in fields as complicated as medicine. However, using these models requires a large corpus of data for the networks to be highly generalizable and high-performing. In this sense, data augmentation methods are widely used strategies for training networks with small datasets, and are vital in medicine due to the limited access to data. A clear example of this is magnetic resonance imaging in pathology scans associated with cancer. In this vein, we compare the effect of several conventional data augmentation schemes on the ResNet50 network for brain tumor detection, and include our own strategy based on principal component analysis. Training was performed both from scratch and with transfer learning from the ImageNet dataset. The investigation achieved an F1 detection score of 92.34%, obtained with the ResNet50 network using the proposed method and transfer learning. We also concluded that the proposed method differs significantly from the other conventional methods (Kruskal-Wallis test, significance level 0.05).
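As a loose illustration of a PCA-based augmentation strategy (the paper's exact scheme is not spelled out in the abstract, so this is only a plausible sketch), the code below perturbs training images along the principal components of the training set with small random coefficients.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
images = rng.random((50, 32 * 32))   # 50 flattened stand-in MRI slices

pca = PCA(n_components=10).fit(images)

def pca_augment(img, scale=0.1):
    # Add a random combination of principal components to one flattened image;
    # the perturbation scale is an assumed hyperparameter.
    alphas = rng.normal(0, scale, size=pca.n_components_)
    return img + alphas @ pca.components_   # (10,) @ (10, 1024) -> (1024,)

augmented = pca_augment(images[0])
print(np.abs(augmented - images[0]).max())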
Reducing CNN Textural Bias With k-Space Artifacts Improves Robustness
Cabrera, Yaniel
Fetit, Ahmed E.
IEEE Access2022Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Convolutional neural networks (CNNs) have become the de facto algorithms of choice for semantic segmentation tasks in biomedical image processing. Yet, models based on CNNs remain susceptible to the domain shift problem, where a mismatch between source and target distributions could lead to a drop in performance. CNNs were recently shown to exhibit a textural bias when processing natural images, and recent studies suggest that this bias also extends to the context of biomedical imaging. In this paper, we focus on Magnetic Resonance Images (MRI) and investigate textural bias in the context of k-space artifacts (Gibbs, spike, and wraparound artifacts), which naturally manifest in clinical MRI scans. We show that carefully introducing such artifacts at training time can help reduce textural bias, and consequently lead to CNN models that are more robust to acquisition noise and out-of-distribution inference, including scans from hospitals not seen during training. We also present Gibbs ResUnet, a novel, end-to-end framework that automatically finds an optimal combination of Gibbs k-space stylizations and segmentation model weights. We illustrate our findings on multimodal and multi-institutional clinical MRI datasets obtained retrospectively from the Medical Segmentation Decathlon (n = 750) and The Cancer Imaging Archive (n = 243).
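Of the three artifact types, the Gibbs (truncation) artifact is the simplest to simulate for training-time augmentation: crop the image's k-space to a central window and transform back, producing ringing near sharp edges. The window fraction below is an assumed parameter.

import numpy as np

def gibbs_artifact(image, keep_fraction=0.6):
    # Zero out high spatial frequencies in k-space and reconstruct.
    k = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    mask = np.zeros_like(k)
    ch, cw = int(h * keep_fraction / 2), int(w * keep_fraction / 2)
    mask[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(k * mask)))

phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0                 # sharp-edged square
print(gibbs_artifact(phantom).min())        # negative overshoot reveals ringing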
Mediastinal Lymph Node Detection and Segmentation Using Deep Learning
Nayan, Al-Akhir
Kijsirikul, Boonserm
Iwahori, Yuji
IEEE Access2022Journal Article, cited 0 times
Lung-PET-CT-Dx
Segmentation
Algorithm Development
Lymph Nodes
Automatic lymph node (LN) segmentation and detection are critical for cancer staging. In clinical practice, computed tomography (CT) and positron emission tomography (PET) imaging are used to detect abnormal LNs. Because of low contrast and variation in nodal size and shape, LN segmentation remains a challenging task. Deep convolutional neural networks are frequently used to segment structures in medical images, but most state-of-the-art techniques degrade image resolution through pooling and convolution, and as a result the models produce unsatisfactory results. With these issues in mind, the well-established UNet++ architecture was modified with a bilinear interpolation and total generalized variation (TGV) based upsampling strategy to segment and detect mediastinal lymph nodes. The modified UNet++ maintains texture discontinuities, selects noisy areas, searches for appropriate balance points through backpropagation, and restores image resolution. Using CT images from TCIA, a 5-patient dataset, and the ELCAP public dataset, a dataset was prepared with the help of experienced medical experts. The UNet++ was trained on these datasets, and three different data combinations were used for testing. With the proposed approach, the model achieved 94.8% accuracy, 91.9% Jaccard, 94.1% recall, and 93.1% precision on COMBO_3. Performance was measured on the different datasets and compared with state-of-the-art approaches; the UNet++ model with the hybridized strategy performed better than the others.
Generative Adversarial Networks for Anomaly Detection in Biomedical Imaging: A Study on Seven Medical Image Datasets
Esmaeili, Marzieh
Toosi, Amirhosein
Roshanpoor, Arash
Changizi, Vahid
Ghazisaeedi, Marjan
Rahmim, Arman
Sabokrou, Mohammad
IEEE Access2023Journal Article, cited 0 times
C-NMC 2019
GAN
Anomaly detection (AD) is a challenging problem in computer vision. Particularly in the field of medical imaging, AD poses even more challenges due to a number of reasons, including insufficient availability of ground truth (annotated) data. In recent years, AD models based on generative adversarial networks (GANs) have made significant progress. However, their effectiveness in biomedical imaging remains underexplored. In this paper, we present an overview of using GANs for AD, as well as an investigation of state-of-the-art GAN-based AD methods for biomedical imaging and the challenges encountered in detail. We have also specifically investigated the advantages and limitations of AD methods on medical image datasets, conducting experiments using 3 AD methods on 7 medical imaging datasets from different modalities and organs/tissues. Given the highly different findings achieved across these experiments, we further analyzed the results from both data-centric and model-centric points of view. The results showed that none of the methods had a reliable performance for detecting abnormalities in medical images. Factors such as the number of training samples, the subtlety of the anomaly, and the dispersion of the anomaly in the images are among the phenomena that highly impact the performance of the AD models. The obtained results were highly variable (AUC: 0.475-0.991; Sensitivity: 0.17-0.98; Specificity: 0.14-0.97). In addition, we provide recommendations for the deployment of AD models in medical imaging and foresee important research directions.
Tissue Artifact Segmentation and Severity Assessment for Automatic Analysis Using WSI
Hossain, Shakhawat
Shahriar, Galib Muhammad
Syeed, M. M. Mahbubul
Uddin, Mohammad Faisal
Hasan, Mahady
Hossain, Sakir
Bari, Rubina
IEEE Access2023Journal Article, cited 0 times
Post-NAT-BRCA
Traditionally, pathological analysis and diagnosis are performed by an expert manually examining glass-slide specimens under a microscope. The whole slide image (WSI) is the digital specimen produced from the glass slide. WSIs enabled specimens to be viewed on a computer screen and led to computational pathology, where computer vision and artificial intelligence are used for automated analysis and diagnosis. With current computational advances, an entire WSI can be analyzed autonomously without human supervision. However, the analysis can fail or lead to a wrong diagnosis if the WSI is affected by tissue artifacts such as tissue folds or air bubbles, depending on their severity. Existing artifact detection methods rely on experts for severity assessment to eliminate artifact-affected regions from the analysis. This process is time-consuming and exhausting, and it undermines the goal of automated analysis; removing artifacts without evaluating their severity could instead discard diagnostically important data. Therefore, it is necessary to detect artifacts and then assess their severity automatically. In this paper, we propose a system that incorporates severity evaluation with artifact detection using convolutional neural networks (CNN). The proposed system uses DoubleUNet to segment artifacts and an ensemble network of six fine-tuned CNN models to determine severity. This method outperformed the current state-of-the-art in accuracy by 9% for artifact segmentation and achieved a strong correlation of 97% with the pathologist's evaluation for severity assessment. The robustness of the system was demonstrated using our proposed heterogeneous dataset, and practical usability was ensured by integrating it with an automated analysis system.
Improved Abdominal Multi-Organ Segmentation via 3D Boundary-Constrained Deep Neural Networks
Irshad, Samra
Gomes, Douglas P. S.
Kim, Seong Tae
IEEE Access2023Journal Article, cited 0 times
Pancreas-CT
Algorithm Development
Automatic Segmentation
Quantitative assessment of the abdominal region from CT scans requires the accurate delineation of abdominal organs. Therefore, automatic abdominal image segmentation has been the subject of intensive research for the past two decades. Recently, deep learning-based methods have resulted in state-of-the-art performance for 3D abdominal CT segmentation. However, the complex characterization of abdominal organs with weak boundaries prevents deep learning methods from accurate segmentation. Specifically, the voxels on the boundary of organs are more vulnerable to misprediction due to highly-varying intensities. This paper proposes a method for improved abdominal image segmentation by leveraging organ-boundary prediction as a complementary task. We train 3D encoder-decoder networks to simultaneously segment the abdominal organs and their boundaries via multi-task learning. We explore two network topologies based on the extent of weights shared between the two tasks within a unified multi-task framework. In the first topology, the whole-organ prediction task and the boundary detection task share all the layers in the network except for the last task-specific layers. The second topology employs a single shared encoder but two separate task-specific decoders. The effectiveness of utilizing the organs' boundary information for abdominal multi-organ segmentation is evaluated on two publicly available abdominal CT datasets: Pancreas-CT and the BTCV dataset. The improvements shown in segmentation results reveal the advantage of the multi-task training that forces the network to pay attention to ambiguous boundaries of organs. A maximum relative improvement of 3.5% and 3.6% is observed in Mean Dice Score for the Pancreas-CT and BTCV datasets, respectively.
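A minimal sketch of the boundary-as-auxiliary-task idea follows: boundary targets are derived from the segmentation mask by subtracting a morphological erosion (implemented here with a max-pool trick), and the total loss sums the organ and boundary terms. The shared-encoder wiring and equal loss weighting are assumptions for illustration.

import torch
import torch.nn.functional as F

def boundary_target(mask):
    # Boundary = mask minus its erosion; mask is (N,1,D,H,W) in {0,1}.
    eroded = -F.max_pool3d(-mask, kernel_size=3, stride=1, padding=1)
    return (mask - eroded).clamp(0, 1)

mask = torch.zeros(1, 1, 16, 32, 32)
mask[..., 4:12, 8:24, 8:24] = 1.0
seg_logits = torch.randn_like(mask, requires_grad=True)  # organ head output
bnd_logits = torch.randn_like(mask, requires_grad=True)  # boundary head output

loss = (F.binary_cross_entropy_with_logits(seg_logits, mask)
        + F.binary_cross_entropy_with_logits(bnd_logits, boundary_target(mask)))
loss.backward()
print(loss.item())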
Failure to Achieve Domain Invariance With Domain Generalization Algorithms: An Analysis in Medical Imaging
Korevaar, Steven
Tennakoon, Ruwan
Bab-Hadiashar, Alireza
IEEE Access2023Journal Article, cited 0 times
MIDRC-RICORD-1A
MIDRC-RICORD-1B
One prominent issue in the application of deep learning is the failure to generalize to data that lies on a different distribution to the training data. While many methods have been proposed to address this, prior work has shown that when operating under the same conditions most algorithms perform almost equally. As such, more work needs to be done to validate past and future methods before they are put into important scenarios like medical imaging. Our work analyses eight domain generalization algorithms across four important medical imaging classification datasets along with three standard natural image classification problems to discover the differences in how these methods operate in these different contexts. We assess these algorithms in terms of generalization capability, domain invariance, and representational sensitivity. Through this, we show that despite the differences between domain and content variations between natural and medical imaging there is little deviation in the operation of each method between natural images and medical images. Additionally, we show that all tested algorithms retain significant amounts of domain-specific information in their feature representations despite explicit training to remove it. Thus, revealing the failure point of all these methods is a lack of class-discriminative features extracted from out-of-distribution data. While these results show that methods that work well on natural imaging work similarly in medical imaging, no method outperforms baseline methods, highlighting the continuing gap of achieving adequate domain generalization. Similarly, the results also question the efficacy of optimizing for domain invariant representations as a method for generalizing to unseen domains.
Circular LSTM for Low-Dose Sinograms Inpainting
Kuo, Chin
Wei, Tzu-Ti
Chen, Jen-Jee
Tseng, Yu-Chee
IEEE Access2023Journal Article, cited 0 times
LDCT-and-Projection-data
Image denoising
Graphics Processing Units (GPU)
Unsupervised learning
Computed tomography (CT) is usually accompanied by a long scanning time and substantial patient radiation exposure. Sinograms are the basis for constructing CT scans; however, continuous sinograms may highly overlap, resulting in extra radiation exposure. This paper proposes a deep learning model to inpaint a sparse-view sinogram sequence. Because a sinogram sequence around the human body is circular in nature, we propose a circular LSTM (CirLSTM) architecture that feeds position-relevant information to our model. To evaluate the performance of our proposed method, we compared the results of our inpainted sinograms with ground truth sinograms using evaluation metrics, including the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). The SSIM values for both our proposed method and the state-of-the-art method range from 0.998 to 0.999, indicating that the prediction of structures is not challenging for either method. Our proposed CirLSTM achieves PSNR values ranging from 49 to 52, outperforming all the other compared methods. These results demonstrate the feasibility of using only interleaved sinograms to construct a complete sinogram sequence and to generate high-quality CT images. Furthermore, we validated the proposed model across different body portions and CT machine models. The results show that CirLSTM outperforms all other methods in both the across-body segment validation and across-machine validation scenarios.
SAM-UNETR: Clinically Significant Prostate Cancer Segmentation Using Transfer Learning From Large Model
Alzate-Grisales, Jesus Alejandro
Mora-Rubio, Alejandro
García-García, Francisco
Tabares-Soto, Reinel
De La Iglesia-Vayá, Maria
IEEE Access2023Journal Article, cited 0 times
PROSTATEx
prostate
Deep Learning
Prostate cancer (PCa) is one of the leading causes of cancer-related mortality among men worldwide. Accurate and efficient segmentation of clinically significant prostate cancer (csPCa) regions from magnetic resonance imaging (MRI) plays a crucial role in diagnosis, treatment planning, and monitoring of the disease, however, this is a challenging task even for the specialized clinicians. This study presents SAM-UNETR, a novel model for segmenting csPCa regions from MRI images. SAM-UNETR combines a transformer-encoder from the Segment Anything Model (SAM), a versatile segmentation model trained on 11 million images, with a residual-convolution decoder inspired by UNETR. The model uses multiple image modalities and applies prostate zone segmentation, normalization, and data augmentation as preprocessing steps. The performance of SAM-UNETR is compared with three other models using the same strategy and preprocessing. The results show that SAM-UNETR achieves superior reliability and accuracy in csPCa segmentation, especially when using transfer learning for the image encoder. This demonstrates the adaptability of large-scale models for different tasks. SAM-UNETR attains a Dice Score of 0.467 and an AUROC of 0.77 for csPCa prediction.
An Artificial Intelligence Based Approach Toward Predicting Mortality in Head and Neck Cancer Patients With Relation to Smoking and Clinical Data
Dhariwal, Naman
Hariprasad, Rithvik
Sundari, L. Mohana
IEEE Access2023Journal Article, cited 0 times
RADCURE
Head and neck cancers are among the most common cancers in the world, affecting the mouth, throat, and tongue regions of the human body. Lifestyle factors such as smoking and tobacco have long been associated with the generation of cancerous cells in the body. This paper presents a novel approach towards extracting the correlation between these lifestyle factors and head and neck cancers, supported by crucial cancer attributes such as tumor-node-metastasis staging and human papillomavirus status. Mortality prediction algorithms for head and neck cancers will help doctors pre-determine the factors that are most crucial and help deliver specialized, targeted treatments. The paper used eight machine learning and four deep learning hyper-parameter-tuned models to predict the mortality rate associated with head and neck cancer. The maximum accuracy of 98.8% was achieved by the gradient boosting algorithm. The feature importances of smoking and human papillomavirus positivity using the same classifier were approximately 4% and 2.5%, respectively. The most influential factor in mortality prediction was the duration of follow-up from diagnosis to the last contact date, with 40.8% importance. Quantitative results from the area under the receiver operating characteristic curve substantiate the classifiers' performance, with a maximum value of 0.99 for gradient boosting. This paper is bound to impact many medical professionals by helping them predict the mortality of cancer patients and aid appropriate treatments.
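A hedged sketch of the core pipeline, a gradient boosting classifier over clinical features with feature importances inspected afterwards, is given below. The feature names and synthetic data are placeholders, not the RADCURE variables or the paper's hyper-parameters.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.integers(0, 2, n),     # smoking status
    rng.integers(0, 2, n),     # HPV positivity
    rng.normal(24, 12, n),     # follow-up duration (months)
    rng.integers(1, 5, n),     # T stage
])
# Synthetic mortality label loosely tied to follow-up and smoking.
y = (X[:, 2] + 6 * X[:, 0] * rng.random(n) < 24).astype(int)

clf = GradientBoostingClassifier().fit(X, y)
for name, imp in zip(["smoking", "hpv", "follow_up", "t_stage"],
                     clf.feature_importances_):
    print(f"{name}: {imp:.3f}")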
Comparative Study on Architecture of Deep Neural Networks for Segmentation of Brain Tumor using Magnetic Resonance Images
Preetha, R.
Priyadarsini, M. Jasmine Pemeena
Nisha, J. S.
IEEE Access2023Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
REMBRANDT
TCGA-GBM
TCGA-LGG
This comparative study analyzes state-of-the-art works on brain tumor segmentation from images acquired by Magnetic Resonance Imaging (MRI), along with their performance. First, the architectures of convolutional neural networks (CNNs) and the variants of the U-shaped Network (U-Net), a kind of Deep Neural Network (DNN), are compared and their differences highlighted. The publicly available MRI datasets, specifically Brain Tumor Segmentation (BraTS), are also discussed. Next, the tumor segmentation performance of various methods in the literature is compared using parameters such as Dice score and Hausdorff distance (95th percentile). The study concludes that U-Net based architectures on the BraTS-2019 dataset outperform other CNN based architectures.
Kidney tumor segmentation from MR images has become a pivotal research area in kidney cancer diagnosis and treatment planning. Accurate and efficient segmentation enables precise tumor localization, treatment planning, and monitoring of disease progression. Prior studies have demonstrated the remarkable capability of the U-Net architecture for semantic segmentation of kidney tumors. A recent variant of the U-Net architecture, known as 3D-CU-Net, was specifically designed with fully-connected dense skip connections to tackle kidney tumor segmentation challenges related to network depth invariance, segmentation errors, and enforced feature fusion. While the 3D-CU-Net model demonstrated improved effectiveness in kidney tumor segmentation, it still exhibits significant limitations, including challenges in precise localization, fixed feature selection, image diversity, limited contextual information, and computational complexity. To address these limitations, this paper introduces the Attention 3D-CU-Net as a novel variant. An attention-based mechanism is seamlessly integrated with the 3D-CU-Net, prioritizing informative features to enhance segmentation accuracy by concentrating on selective regions. This approach significantly improves the model's performance, particularly in challenging cases. The proposed model is evaluated on the TCGA-KIRC dataset, a widely used benchmark for kidney tumor segmentation. In comparative experiments, results were evaluated using metrics such as IoU, DSC, and accuracy. The Attention 3D-CU-Net model outperforms the baseline 3D-CU-Net and U-Net with notably higher scores: IoU (0.92), DSC (0.94), and accuracy (0.96).
Markerless Lung Tumor Localization From Intraoperative Stereo Color Fluoroscopic Images for Radiotherapy
Yan, Yongxuan
Fujii, Fumitake
Shiinoki, Takehiro
Liu, Shengping
IEEE Access2024Journal Article, cited 0 times
Website
QIN LUNG CT
MATLAB
Softmax
U-Net
Motion correction
Fluoroscopy
Radiotherapy
Computer Aided Detection (CADe)
Transfer learning
Accurately determining tumor regions from stereo fluoroscopic images during radiotherapy is a challenging task. As a result, high-density fiducial markers are implanted around tumors in clinical practice as internal surrogates of the tumor, which carries associated surgical risks. This study was conducted to achieve lung tumor localization without the use of fiducial markers. We propose training a cascade U-net system to perform color-to-grayscale conversion, enhancement, bone suppression, and tumor detection to determine the precise tumor region. We generated Digitally Reconstructed Radiographs (DRRs) and tumor labels from 4D planning CT images as training data. An improved maximum projection algorithm and a novel color-to-gray conversion algorithm are proposed to improve the quality of the generated training data. Training the bone suppression model with both bone-enhanced and bone-suppressed DRRs improves its bone suppression performance. The mean peak signal-to-noise ratios on the test sets of the trained translation and bone suppression models are 39.284 ± 0.034 dB and 37.713 ± 0.724 dB, respectively. The results indicate that our proposed markerless tumor localization method is applicable in seven out of ten cases; in applicable cases, the centroid position error of the tumor detection model is less than 1.13 mm; and the tumor center motion trajectories calculated with the proposed network coincide closely with the motion trajectories of implanted fiducial markers in over 60% of captured groups, providing a promising direction for markerless tumor localization and tracking methods.
X2V: 3D Organ Volume Reconstruction From a Planar X-Ray Image With Neural Implicit Methods
Guven, Gokce
Ates, Hasan F.
Ugurdag, H. Fatih
IEEE Access2024Journal Article, cited 0 times
Website
National Lung Screening Trial (NLST)
Organ segmentation
Algorithm Development
In this work, an innovative approach is proposed for three-dimensional (3D) organ volume reconstruction from a single planar X-ray, namely the X2V network. Such capability holds pivotal clinical potential, especially in real-time image-guided radiotherapy, computer-aided surgery, and patient follow-up sessions. Traditional methods for 3D volume reconstruction from X-rays often require statistical 3D organ templates, which are employed in 2D/3D registration. However, these methods may not accurately account for the variation in organ shapes across different subjects. Our X2V model overcomes this problem by leveraging neural implicit representation. A vision transformer model is integrated as an encoder network, specifically designed to direct and enhance attention to particular regions within the X-ray image. The reconstructed meshes exhibit a topology similar to the ground truth organ volume, demonstrating the ability of X2V to accurately capture the 3D structure from a 2D image. The effectiveness of X2V is evaluated on lung X-rays using several metrics, including volumetric Intersection over Union (IoU). X2V outperforms the state-of-the-art method in the literature for lungs (DeepOrganNet) by about 7-9%, achieving IoUs of 0.892-0.942 versus DeepOrganNet's 0.815-0.888.
Multivariate Technique for Detecting Variations in High-Dimensional Imagery
Sanusi, Ridwan A.
Ajadi, Jimoh Olawale
Abbasi, Saddam Akber
Dauda, Taofik O.
Adegoke, Nurudeen A.
IEEE Access2024Journal Article, cited 0 times
Website
AML-Cytomorphology_LMU
Pathomics
Algorithm Development
The field of immunology requires refined techniques to identify detailed cellular variance in high-dimensional images. Current methods mainly capture general immune cell proportion variations and often overlook specific deviations in individual patient samples from group baseline. We introduce a simple technique that integrates Hotelling’s T2 statistic with random projection (RP) methods, specifically designed to identify changes in immune cell composition in high-dimensional images. Uniquely, our method provides deeper insights into individual patient samples, allowing for a clearer understanding of group differences. We assess the efficacy of the technique across various RPs: Achlioptas (AP), plus-minus one (PM), Li, and normal projections (NP), considering shift size, dimension reduction, and image dimensions. Simulations reveal variable detection performances across RPs, with PM outperforming and Li lagging. Practical tests using single-cell images of basophils (BAS) and promyelocytes (PMO) emphasise their utility for individualised detection. Our approach elevates high-dimensional image data analysis, particularly for identifying shifts in immune cell composition. This breakthrough potentially transforms healthcare practitioners’ cellular interpretation of the immune landscape, promoting personalised patient care, and reshaping the discernment of diverse patient immune cell samples.
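The following sketch combines a plus-minus-one (PM) random projection with Hotelling's T2 statistic: a high-dimensional sample is projected to low dimension and scored against the group baseline. Dimension sizes, the simulated shift, and the projection scaling are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
d, k, n = 2000, 20, 60
baseline = rng.normal(size=(n, d))   # group samples (e.g., image-derived features)
sample = rng.normal(size=d) + 0.2    # one shifted patient sample

R = rng.choice([-1.0, 1.0], size=(d, k)) / np.sqrt(k)  # PM projection matrix
Z, z = baseline @ R, sample @ R

mu = Z.mean(axis=0)
S = np.cov(Z, rowvar=False)
diff = z - mu
t2 = float(diff @ np.linalg.solve(S, diff))  # Hotelling's T2 in projected space
print(f"T2 = {t2:.1f}")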
Local Cross-View Transformers and Global Representation Collaborating for Mammogram Classification
Wu, Wenna
Rong, Qi
Lu, Zhentai
IEEE Access2024Journal Article, cited 0 times
CMMD
When analyzing screening mammography images, radiologists compare multiple views of the same breast to help improve the detection rate of lesions and reduce the incidence of false-positive results. Therefore, to make the deep learning-based mammography computer-aided detection/diagnosis (CAD) system meet the radiologists’ requirements for accuracy and generality, the construction of deep learning models needs to mimic manual analysis and consider the correlation between different views of the same breast. In this paper, we propose the Local Cross-View Transformers and Global Representation Collaborating for Mammogram Classification (LCVT-GR) model. The model uses different view images to train in an end-to-end manner. In this model, the global and local representations of mammogram images are analyzed in parallel using the global-local parallel analysis method. To validate the effectiveness of our method, we conducted comparison experiments and ablation experiments on two publicly available datasets, Mini-DDSM and CMMD. The results of the comparison experiments show that our method achieves better results compared with existing advanced methods, with greater improvements in both AUC-ROC and AUC-PR assessment metrics. The results of the ablation experiments show that our model architecture is scientific and effective and achieves a good trade-off between computational cost and model performance.
Enhanced Lung Cancer Detection and TNM Staging Using YOLOv8 and TNMClassifier: An Integrated Deep Learning Approach for CT Imaging
Wehbe, Alaa
Dellepiane, Silvana
Minetti, Irene
IEEE Access2024Journal Article, cited 0 times
Website
NSCLC-Radiomics
Lung-PET-CT-Dx
Classification
LUNG
This paper introduces an advanced method for lung cancer subtype classification and detection using the latest version of YOLO, tailored for the analysis of CT images. Given the increasing mortality rates associated with lung cancer, early and accurate diagnosis is crucial for effective treatment planning. The proposed method employs single-shot object detection to precisely identify and classify various types of lung cancer, including Squamous Cell Carcinoma (SCC), Adenocarcinoma (ADC), and Small Cell Carcinoma (SCLC). A publicly available dataset was utilized to evaluate the performance of YOLOv8. Experimental outcomes underscore the system’s effectiveness, achieving an impressive mean Average Precision (mAP) of 97.1%. The system demonstrates the capability to accurately identify and categorize diverse lung cancer subtypes with a high degree of accuracy. For instance, the YOLOv8 Small model outperforms others with a precision of 96.1% and a detection speed of 0.22 seconds, surpassing other object detection models based on two-stage detection approaches. Building on these results, we further developed a comprehensive TNM classification system. Features extracted from the YOLO backbone were reduced using Principal Component Analysis (PCA) to enhance computational efficiency. These reduced features were then fed into a custom TNMClassifier, a neural network designed to classify the Tumor, Node, and Metastasis (TNM) stages. The TNMClassifier architecture comprises fully connected layers and dropout layers to prevent overfitting, achieving an accuracy of 98% in classifying the TNM stages. Additionally, we tested the YOLOv8 Small model on another dataset, the Lung3 dataset from the Cancer Imaging Archive (TCIA). This testing yielded a recall of 0.91, further validating the model’s effectiveness in accurately identifying lung cancer cases. The integrated system of YOLO for subtype detection and the TNMClassifier for stage classification shows significant potential to assist healthcare professionals in expediting and refining diagnoses, thereby contributing to improved patient health outcomes.
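A minimal sketch of the staging stage described above: backbone features are reduced with PCA and classified by a small multilayer perceptron. The feature dimension, component count, and layer sizes are illustrative assumptions, and sklearn's MLPClassifier (whose L2 penalty stands in for the paper's dropout layers) replaces the custom TNMClassifier.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 512))    # stand-in YOLO backbone features
stages = rng.integers(0, 4, size=200)  # stand-in TNM stage labels

model = make_pipeline(
    PCA(n_components=64),              # reduce before the classifier head
    MLPClassifier(hidden_layer_sizes=(128, 64), alpha=1e-3, max_iter=500),
)
model.fit(feats, stages)
print(model.predict(feats[:5]))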
Automatic Segmentation and Shape, Texture-based Analysis of Glioma Using Fully Convolutional Network
Lower-grade glioma is a type of brain tumor usually found in the human brain and spinal cord. Early detection and accurate diagnosis of lower-grade glioma can reduce the fatal risk to affected patients. An essential step in lower-grade glioma analysis is MRI image segmentation. Manual segmentation is time-consuming and depends on the expertise of the pathologist. In this study, three deep-learning-based automatic segmentation models were used to segment the tumor-affected region from MRI slices; the segmentation accuracies of the three models, U-Net, FCN, and U-Net with a ResNeXt50 backbone, were 80%, 84%, and 91%, respectively. Two shape-based features (angular standard deviation, marginal fluctuation) and six texture-based features (entropy, local binary pattern, homogeneity, contrast, correlation, energy) were extracted from the segmented images to find associations with seven existing genomic data types. A significant association was found between the microRNA cluster genomic data type and the texture-based feature entropy, and between the RNA sequence cluster genomic data type and the shape-based feature angular standard deviation; in both cases the Fisher exact test p-values were below 0.05.
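The association test used above can be reproduced on a 2x2 contingency table relating a binarized image feature (say, high/low entropy) to a genomic cluster label. The counts below are invented purely for illustration.

from scipy.stats import fisher_exact

#                cluster A  cluster B
table = [[12, 3],    # high-entropy tumors
         [4, 11]]    # low-entropy tumors
odds_ratio, p_value = fisher_exact(table)
# Interpret p < 0.05 as a significant feature-genotype association.
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")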
3D medical image denoising using 3D block matching and low-rank matrix completion
3D denoising is one of the most significant tools in medical imaging and has been widely studied in the literature. However, most existing 3D medical data denoising algorithms assume additive white Gaussian noise. In this work, we propose an efficient 3D medical data denoising method that can handle a mixture of noise types. Our method is based on a modified 2D Adaptive Rood Pattern Search (ARPS) [1] and low-rank matrix completion, as follows. Noisy 3D data are processed blockwise; for each processed 3D block we find similar 3D blocks in the data, using overlapped 3D patches to further lower the computational complexity. The similar 3D blocks are then stacked together, and unreliable voxels are replaced using a fast matrix completion method [2]. Experimental results show that the proposed method robustly removes mixed noise from 3D medical data.
Using Machine Learning Applied to Radiomic Image Features for Segmenting Tumour Structures
Lung cancer (LC) was the predicted leading cause of Australian cancer fatalities in 2018 (around 9,200 deaths). Non-Small Cell Lung Cancer (NSCLC) tumours with larger amounts of heterogeneity have been linked to a worse outcome. Medical imaging is widely used in oncology and non-invasively collects data about the whole tumour. The field of radiomics uses these medical images to extract quantitative image features and promises further understanding of the disease at the time of diagnosis, during treatment and in follow up. It is well known that manual and semi-automatic tumour segmentation methods are subject to inter-observer variability, which reduces confidence in the treatment region and extent of disease. This leads to tumour under- and over-estimation, which can impact on treatment outcome and treatment-induced morbidity. This research aims to use radiomic features centred at each pixel to segment the location of the lung tumour on Computed Tomography (CT) scans. To achieve this objective, a Decision Tree (DT) model was trained using sampled CT data from eight patients. The data consisted of 25 pixel-based texture features calculated from four Gray Level Matrices (GLMs) describing the region around each pixel. The model was assessed on an unseen patient through both a confusion matrix and interpretation of the segment. The findings showed that the model accurately (AUROC = 83.9%) predicts tumour location within the test data, concluding that pixel-based textural features likely contribute to segmenting the lung tumour. The prediction displayed a strong representation of the manually segmented Region of Interest (ROI), which is considered the ground truth for the purpose of this research.
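In the same spirit, the sketch below computes gray-level co-occurrence (GLCM) texture features and trains a decision tree, on small patches rather than per-pixel neighbourhoods to keep the example short. The offsets, angles, gray-level count, and property list are assumptions; the study's 25 features come from four gray-level matrices, not GLCM alone.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def texture_features(patch):
    # Co-occurrence statistics over one small neighbourhood.
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=16, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

# Synthetic patches: "tumour" patches are noisier than uniform "background" ones.
patches = [rng.integers(0, 16, (9, 9), dtype=np.uint8) for _ in range(40)] + \
          [np.full((9, 9), 8, dtype=np.uint8) for _ in range(40)]
labels = [1] * 40 + [0] * 40

X = np.array([texture_features(p) for p in patches])
clf = DecisionTreeClassifier().fit(X, labels)
print(f"training accuracy: {clf.score(X, labels):.2f}")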
Deep learning models for classifying cancer and COVID-19 lung diseases
Using Computed Tomography (CT) images to detect lung diseases is both hard and time-consuming for humans. In the past few years, Artificial Intelligence (AI), and especially deep learning models, have produced impressive results compared with classical methods in many fields. Many researchers are now developing deep learning mechanisms to improve the performance of lung disease screening with CT images. In this work, deep learning-based models such as DarkNet-53 (the backbone of YOLO-v3), ResNet50, and VGG19 were applied to classify CT images of patients with Corona Virus disease (COVID-19) or lung cancer. Each model's performance is presented, analyzed, and compared. The dataset used in the study came from two sources: the large-scale CT dataset for lung cancer diagnosis (Lung-PET-CT-Dx) for lung cancer CT images, and the International COVID-19 Open Radiology Dataset (RICORD) for COVID-19 CT images. DarkNet-53 outperformed the other models, achieving 100% accuracy, while the accuracies for ResNet50 and VGG19 were 80% and 77%, respectively.
Survival Analysis in Lung Cancer: A Comparative Study of Different Approaches Using NSCLC-Radiomics (Lung1) Data
Lung cancer is one of the most prevalent and dangerous types of cancer, and its survival rate is lower than that of many other cancers. Survival analyses are vital for accurately estimating the expected time to death of lung cancer patients. Since these analyses are generally based on the judgment of clinicians, their accuracy is often a matter of debate. This highlights the importance of performing survival analysis with statistical or machine learning systems that use clinical data and/or images obtained from medical imaging devices. In this study, we applied various methods to analyze clinical records from a publicly available lung cancer database. We approached survival analysis as both a survival regression and a survival classification problem, conducting comparative analyses using multiple techniques. We also conducted a detailed examination of numerous survival analysis studies related to lung cancer. Our findings revealed that incorporating the number of slices containing the tumor area in patients' CT images (GTV1-SliceNum) as an extra feature significantly enhanced survival analysis performance. Additionally, we observed that the presence of censored observations negatively impacted survival classification performance.
Malignant nodule detection on lung CT scan images with kernel RX-algorithm
Roozgard, A.
Cheng, S.
Hong, Liu
2012Conference Proceedings, cited 24 times
Website
LIDC-IDRI
Algorithm Development
Computer Aided Detection (CADe)
In this paper, we present a nonlinear anomaly detector called the kernel RX-algorithm and apply it to CT images for malignant nodule detection. Malignant nodule detection is very similar to anomaly detection in military imaging applications, where the RX-algorithm has been successfully applied. We modified the original RX-algorithm so that it can be applied to anomaly detection in CT images. Moreover, using the kernel trick, we mapped the data to a high-dimensional space to obtain a kernelized RX-algorithm that outperforms the original RX-algorithm. Preliminary results of applying the kernel RX-algorithm to annotated public-access databases suggest that the proposed method may provide a means for early detection of malignant nodules.
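A compact sketch of the linear RX detector underlying the paper's kernelized variant: each pixel's feature vector is scored by its Mahalanobis distance from the background statistics. The kernel version additionally maps features to a high-dimensional space via the kernel trick; only the linear form is shown here, on synthetic data.

import numpy as np

def rx_scores(features):
    # features: (num_pixels, num_bands); returns one anomaly score per pixel.
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized
    centered = features - mu
    return np.einsum("ij,jk,ik->i", centered, cov_inv, centered)

rng = np.random.default_rng(0)
background = rng.normal(0, 1, size=(500, 5))
nodule = rng.normal(4, 1, size=(5, 5))          # injected anomalies
scores = rx_scores(np.vstack([background, nodule]))
print("highest-scoring pixels:", np.argsort(scores)[-5:])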
A Deep Learning-based cropping technique to improve segmentation of prostate's peripheral zone
Zaridis, Dimitris
Mylona, Eugenia
Tachos, Nikolaos
Marias, Kostas
Tsiknakis, Manolis
Fotiadis, Dimitios I.
2021Conference Paper, cited 0 times
PROSTATEx
Segmentation
Automatic segmentation of the prostate peripheral zone on Magnetic Resonance Images (MRI) is a necessary but challenging step for accurate prostate cancer diagnosis. Deep learning (DL) based methods, such as U-Net, have recently been developed to segment the prostate and its sub-regions. Nevertheless, class imbalance in the image labels, where background pixels dominate the region to be segmented, may severely hamper segmentation performance. In the present work, we propose a DL-based preprocessing pipeline for segmenting the peripheral zone of the prostate that crops away unnecessary information without making a priori assumptions about the location of the region of interest. The effect of DL-based cropping on segmentation performance was compared to standard center-cropping using three state-of-the-art DL networks, namely U-net, Bridged U-net and Dense U-net. The proposed method achieved improvements of 24%, 12% and 15% for the U-net, Bridged U-net and Dense U-net, respectively, in terms of Dice score.
Scale-Space DCE-MRI Radiomics Analysis Based on Gabor Filters for Predicting Breast Cancer Therapy Response
Radiomics-based studies have created unprecedented momentum in computational medical imaging in recent years by significantly advancing and empowering correlational and predictive quantitative studies in numerous clinical applications. An important element of this exciting field of research, especially in oncology, is multi-scale texture analysis, since it can effectively describe tissue heterogeneity, which is highly informative for clinical diagnosis and prognosis. There are, however, several concerns about the plethora of radiomics features used in the literature, especially their performance consistency across studies. Since many studies use software packages that yield multi-scale texture features, it makes sense to investigate the scale-space performance of candidate texture biomarkers, under the hypothesis that significant texture markers may have more persistent scale-space performance. To this end, this study proposes a methodology for extracting Gabor multi-scale and multi-orientation texture DCE-MRI radiomics for predicting breast cancer complete response to neoadjuvant therapy. More specifically, a Gabor filter bank was created using four different orientations and ten different scales, and first-order and second-order texture features were extracted for each scale-orientation data representation. The performance of all these features was evaluated under a generalized repeated cross-validation framework in a scale-space fashion using extreme gradient boosting classifiers.
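A minimal sketch of the scale-orientation Gabor feature extraction follows: a kernel bank over several frequencies and four orientations, keeping first-order statistics of each response as features. The frequency values are assumptions, and only four scales are used here instead of the study's ten.

import numpy as np
from scipy.signal import fftconvolve
from skimage.filters import gabor_kernel

rng = np.random.default_rng(0)
image = rng.random((64, 64))                 # stand-in DCE-MRI slice

features = []
for frequency in (0.05, 0.1, 0.2, 0.4):      # scales (cycles/pixel)
    for theta in np.arange(4) * np.pi / 4:   # 0, 45, 90, 135 degrees
        kernel = gabor_kernel(frequency, theta=theta)
        response = fftconvolve(image, np.real(kernel), mode="same")
        features += [response.mean(), response.std()]  # first-order stats

print(f"{len(features)} Gabor features")     # 4 scales x 4 orientations x 2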
DCE-MRI based Breast Intratumor Heterogeneity Analysis via Dual Attention Deep Clustering Network and its Application in Molecular Typing
Increasing attention is being paid to precise, personalized treatment of breast cancer, a leading threat to women's lives. Studying breast intratumor heterogeneity is important for the diagnosis, analysis and therapy of tumors. In this paper, we propose a DCE-MRI dynamic-mode, self-supervised dual attention deep clustering network (DADCN) to achieve precise, individualized segmentation of the breast intratumor heterogeneity region. The specific representations learned by the graph attention network are combined with the deep abstract features extracted by the deep convolutional neural network, and the structural information of breast tumor voxels is mined by propagation on the graph. The model is self-supervised by dual relative loss and residual loss, and the clustering graph is measured by a graph cut loss. We also employ Pearson, Spearman and Kendall analyses to evaluate the degree of correlation between clustering results and intratumor heterogeneity as represented by molecular typing. We ultimately find that the degree of intratumor heterogeneity can be determined automatically via segmentation of the heterogeneity region, enabling noninvasive, individualized prediction of breast cancer molecular typing. The number of clusters in the breast intratumor heterogeneity region is an independent biomarker for the diagnosis of benign and malignant tumors and the prediction of basal-like molecular typing.
Challenges in predicting glioma survival time in multi-modal deep networks
Prediction of cancer survival time is of considerable interest in medicine as it leads to better patient care and reduces health care costs. In this study, we propose a multi-path multimodal neural network that predicts Glioblastoma Multiforme (GBM) survival time at a 14-month threshold. We obtained image, gene expression, and SNP variants from whole-exome sequences, all from The Cancer Genome Atlas portal, for a total of 126 patients. We perform a 10-fold cross-validation experiment on each of the data sources separately as well as on the model with all data combined. From post-contrast T1 MRI data, we used 3D scans and 2D slices that we selected manually to show the tumor region. We find that the model combining 2D MRI slices and genomic data gives the highest accuracies over individual sources, but by a modest margin. We see considerable variation in accuracies across the 10 folds, and our model achieves 100% accuracy on the training data but lags behind in test accuracy. With dropout, our training accuracy falls considerably. This shows that predicting glioma survival time is a challenging task, though it is unclear whether this is also a symptom of insufficient data. A clear direction here is to augment our data, which we plan to explore with generative models. Overall, we present a novel multi-modal network that incorporates SNP, gene expression, and MRI image data for glioma survival time prediction.
An Attention Based Deep Learning Model for Direct Estimation of Pharmacokinetic Maps from DCE-MRI Images
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a useful imaging technique that can quantitatively measure pharmacokinetic (PK) parameters to characterize the microvasculature of tissues. Typically, the PK parameters are extracted by fitting the MR signal intensity of each pixel over the time series with the nonlinear least-squares method. The main disadvantage is that there are thousands of voxels in a single MR slice, so fitting every voxel to obtain the PK parameters is very time-consuming. Recently, deep learning methods based on convolutional neural networks (CNNs) and Long Short-Term Memory (LSTM) networks have been applied to directly estimate the PK parameters from the acquired DCE-MRI image-temporal series. However, effectively extracting discriminative spatial and temporal features within DCE-MRI for the estimation of PK parameters remains a challenging problem, due to the large intensity variation of tissue images across temporal phases of DCE-MRI during the injection of contrast agents. In this work, we propose an attention-based deep learning model for the estimation of PK parameters, which improves estimation performance by focusing on dominant spatial and temporal characteristics. Specifically, a temporal frame attention block (FAB) and a channel/spatial attention block (CSAB) are separately designed to focus on dominant features in specific temporal phases, channels and spatial areas. Experimental results on clinical DCE-MRI from the open-source RIDER-NEURO dataset, with quantitative and qualitative evaluation, demonstrate that the proposed method outperforms previously reported CNN-based and LSTM-based deep learning models for the estimation of PK maps, and an ablation study demonstrates the effectiveness of the proposed attention-based modules. In addition, visualization of the attention mechanism reveals interesting findings that are consistent with clinical interpretation.
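For context, the conventional voxel-wise fitting that such direct-estimation models aim to replace could be sketched as below: a standard Tofts model fitted with nonlinear least squares via SciPy. The arterial input function `cp`, initial values, and bounds are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def tofts(t, ktrans, ve, cp):
    """Standard Tofts model: tissue curve as the convolution of the
    arterial input function cp(t) with an exponential impulse response."""
    dt = t[1] - t[0]
    irf = ktrans * np.exp(-(ktrans / ve) * t)
    return np.convolve(cp, irf)[: len(t)] * dt

def fit_voxel(t, ct, cp):
    """Least-squares fit of (Ktrans, ve) for one voxel's time curve ct."""
    model = lambda t, ktrans, ve: tofts(t, ktrans, ve, cp)
    (ktrans, ve), _ = curve_fit(model, t, ct, p0=[0.1, 0.2],
                                bounds=([1e-4, 1e-3], [5.0, 1.0]))
    return ktrans, ve
```

Repeating `fit_voxel` over the thousands of voxels in a slice is exactly the bottleneck the abstract describes.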
Local-Whole-Focus: Identifying Breast Masses and Calcified Clusters on Full-Size Mammograms
Huang, Jun
Xiao, He
Wang, Qingfeng
Liu, Zhiqin
Chen, Bo
Wang, Yaobin
Zhang, Ping
Zhou, Ying
2022Conference Paper, cited 0 times
CBIS-DDSM
Algorithm Development
Transfer learning
Deep Learning
Automatic detection
Computer Aided Detection (CADe)
The detection of breast masses and calcified clusters on mammograms is critical for early diagnosis and treatment to improve the survival of breast cancer patients. In this study, we propose a local-whole-focus pipeline to automatically identify breast masses and calcified clusters on full-size mammograms, working from local breast tissues to the whole mammograms and then focusing on the lesion areas. We first train a deep model to learn the fine features of breast masses and calcified clusters on local breast tissues, and then transfer the well-trained deep model to identify breast masses and calcified clusters on full-size mammograms with image-level annotations. We also highlight the areas of the breast masses and calcified clusters in mammograms to visualize the identification results. We evaluated the proposed local-whole-focus pipeline on a public dataset, CBIS-DDSM (Curated Breast Imaging Subset of Digital Database for Screening Mammography), and a private dataset, MY-Mammo (Mianyang central hospital mammograms). The experimental results showed that a DenseNet embedded with squeeze-and-excitation (SE) blocks achieved competitive results in identifying breast masses and calcified clusters on full-size mammograms. The highlighted areas of the breast masses and calcified clusters on the entire mammograms can also explain model decision making, which is important in practical medical applications.
UDA-CT: A General Framework for CT Image Standardization
Selim, Md
Zhang, Jie
Fei, Baowei
Lewis, Matthew
Zhang, Guo-Qiang
Chen, Jin
2022Conference Paper, cited 0 times
Lung-PET-CT-Dx
Deep Learning
adversarial training
Algorithm Development
Large-scale CT image studies often suffer from a lack of homogeneity in radiomic characteristics because images are acquired with scanners from different vendors or with different reconstruction algorithms. We propose a deep learning-based framework called UDA-CT to tackle this homogeneity issue by leveraging both paired and unpaired images. Using UDA-CT, CT images can be standardized both across different acquisition protocols on the same scanner and across scanners from different vendors using similar protocols. UDA-CT incorporates recent advances in deep learning, including domain adaptation and adversarial augmentation. It includes a unique training-batch design that integrates nonstandard images and their adversarial variations to enhance model generalizability. The experimental results show that UDA-CT significantly improves the performance of cross-scanner image standardization by utilizing both paired and unpaired data.
MLLCD: A Meta Learning-based Method for Lung Cancer Diagnosis Using Histopathology Images
Lung cancer is a leading cause of death, and an accurate early diagnosis can improve a patient's survival chances. Histopathological images are essential for cancer diagnosis. With the development of deep learning in the past decade, many scholars have used deep learning to learn the features of histopathological images and achieve lung cancer classification. However, deep learning requires a large quantity of annotated data to train a model to a good classification effect, and collecting many annotated pathological images is time-consuming and expensive. Faced with this scarcity of pathological data, we present a meta-learning method for lung cancer diagnosis (called MLLCD). In detail, MLLCD works in three steps. First, we preprocess all data using bilinear interpolation. We then design the base learner, which unites a convolutional neural network (CNN) and a transformer to distill local and global features of pathology images at different resolutions. Finally, we train and update the base learner with a model-agnostic meta-learning (MAML) algorithm. Clinical Proteomic Tumor Analysis Consortium (CPTAC) cancer patient data demonstrate that our proposed model achieves a receiver operating characteristic (ROC) value of 0.94 for lung cancer diagnosis.
Fast wavelet based image characterization for content based medical image retrieval
Health care centers and hospitals maintain large collections of medical images. Medical images produced by different modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and X-rays, have increased tremendously with the advent of the latest image acquisition technologies. Retrieving clinical images of interest from these large data sets is a challenging and demanding task. In this paper, a fast wavelet-based medical image retrieval system is proposed that can aid physicians in the identification and analysis of medical images. The image signature is calculated using kurtosis and standard deviation as features. A possible use case: when a radiologist has some suspicion about a diagnosis and wants further case histories, the acquired clinical images (e.g. MRI images of the brain) are sent as a query to the content-based medical image retrieval system, which is tuned to retrieve the most relevant images for the query. The proposed system is computationally efficient and more accurate in terms of the quality of retrieved images.
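A minimal sketch of such a wavelet signature, assuming PyWavelets and illustrative wavelet/level choices (the paper's exact configuration is not given here):

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def wavelet_signature(image, wavelet="db4", level=3):
    """Compact image signature: kurtosis and standard deviation of
    every wavelet subband, in the spirit of the retrieval system."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # coeffs[0] is the approximation band; the rest are (cH, cV, cD) tuples
    bands = [coeffs[0]] + [band for lvl in coeffs[1:] for band in lvl]
    sig = []
    for band in bands:
        sig.extend([kurtosis(band, axis=None), band.std()])
    return np.array(sig)
```

Query and database signatures would then be compared with a simple distance (e.g., Euclidean) to rank and return the most relevant images.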
Tumor Segmentation in Brain MRI: U-Nets versus Feature Pyramid Network
Manifestations of brain tumors can trigger various psychiatric symptoms. Brain tumor detection can efficiently solve or reduce the chances of occurrence of diseases such as Alzheimer's disease, dementia-based disorders, multiple sclerosis and bipolar disorder. In this paper, we propose a segmentation-based approach to detect brain tumors in MRI. We provide a comparative study between two different U-Net architectures (U-Net: baseline and U-Net: ResNeXt50 backbone) and a Feature Pyramid Network (FPN) that are trained and validated on the TCGA-LGG dataset of 3,929 images. The U-Net architecture with ResNeXt50 backbone achieves the best Dice coefficient of 0.932, while the baseline U-Net and FPN achieve Dice coefficients of 0.846 and 0.899, respectively. The results obtained from U-Net with ResNeXt50 backbone outperform previous works.
3D Deep Learning for Anatomical Structure Segmentation in Multiple Imaging Modalities
Villarini, B.
Asaturyan, H.
Kurugol, S.
Afacan, O.
Bell, J. D.
Thomas, E. L.
Proc IEEE Int Symp Comput Based Med Syst2021Journal Article, cited 3 times
Website
Pancreas-CT
3D deep learning
CADx system
anatomical structure
multi-modal imaging
Segmentation
Accurate, quantitative segmentation of anatomical structures in radiological scans, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), can produce significant biomarkers and can be integrated into computer-aided diagnosis (CADx) systems to support the interpretation of medical images from multi-protocol scanners. However, there are serious challenges towards developing robust automated segmentation techniques, including high variation in anatomical structure and size, the presence of edge-based artefacts, and heavy uncontrolled breathing that can produce blurred motion-based artefacts. This paper presents a novel computing approach for automatic organ and muscle segmentation in medical images from multiple modalities by harnessing the advantages of deep learning techniques in a two-part process. (1) A 3D encoder-decoder, Rb-UNet, builds a localisation model and a 3D Tiramisu network generates a boundary-preserving segmentation model for each target structure; (2) the fully trained Rb-UNet predicts a 3D bounding box encapsulating the target structure of interest, after which the fully trained Tiramisu model performs segmentation to reveal detailed organ or muscle boundaries. The proposed approach is evaluated on six different datasets, including MRI, Dynamic Contrast Enhanced (DCE) MRI and CT scans targeting the pancreas, liver, kidneys and psoas-muscle, and achieves mean Dice similarity coefficients (DSC) that surpass or are comparable with the state-of-the-art. A qualitative evaluation performed by two independent radiologists verified the preservation of detailed organ and muscle boundaries.
A fully automated deep learning pipeline to assess muscle mass in brain tumor patients
Background: Brain tumors are the leading cause of cancer death in the under-40s. The commonest malignant brain tumor is glioblastoma multiforme (GBM), with less than 5% 5-year survival. Low skeletal muscle mass is associated with poor survival in cancer and is measurable on routine imaging, but manual muscle mass quantification is time-consuming and susceptible to interrater inconsistency. In patients with brain tumors, the thickness of the temporalis muscle acts as a proxy. We present a fully automated deep learning-based system for temporalis muscle quantification, a skeletal muscle mass surrogate, on MRI of the head. Methods: MRI scans of 330 patients were obtained from four different datasets. Two 2D U-Nets were trained, one to segment the eyeballs and the other to segment the temporalis muscle, and used to quantify the cross-sectional areas of the eyeballs and temporalis muscle. The eyeball segmentation was used to choose a consistent level at which to assess temporalis muscle mass. We assessed segmentation accuracy using Dice and Hausdorff scores, and assessed the system's ability to choose the correct slice of the MRI by comparing it with a manual choice of slice. Results: The trained models predict eyeball and temporalis muscle segmentations with good accuracy: mean Dice scores are 0.90±0.03 and 0.90±0.05, and Hausdorff distances are 2.88±0.60 and 1.89±0.35, respectively. The automatic pipeline chooses slices for segmentation that are identical or close to the manual choice in 96.1% of cases. Conclusions: We have developed an end-to-end system that uses two independently trained U-Nets, first segmenting the eyeball and using that as a reference point to pick the correct MRI slice, on which the second U-Net measures temporalis cross-sectional area. This allows automated processing of head MRI scans to measure temporalis muscle mass, which has previously been shown to correlate with body muscle mass and survival in several cancer types.
Radiomics Software Tools: A comparative Analysis on Breast Cancer
Radiomics is an emerging and promising field that describes visual information from medical images by means of numerical features. Several radiomics software tools are available in the literature, but they return different features and make dissimilar calculations. Choosing one tool over another is not easy, so a comparison for classification tasks is required. This paper compares three of these frameworks (3D Slicer, LIFEx and MaZda) on breast cancer data. In this analysis, we tested the features extracted by each tool, using different pre-processing techniques and machine learning algorithms, to classify lesions as benign or malignant on more than 350 records. Two different projections were considered: craniocaudal (183 records) and mediolateral oblique (172 records). The results demonstrated that 3D Slicer obtained the best performance in the craniocaudal projection, while MaZda and LIFEx are more appropriate for the mediolateral oblique projection. The results are very promising for classification tasks, exceeding 85% in F1-score.
Alternative Tool for the Diagnosis of Diseases Through Virtual Reality
Virtual reality (VR) presents simulated objects or scenes to reproduce situations in a way similar to the real thing. In medicine, processing and 3D reconstruction of medical images is an important step in VR. We propose a methodology for processing medical images, segmenting organs, reconstructing structures in 3D and representing them in a VR environment, in order to provide the specialist with an alternative tool for the analysis of medical images. We present an image segmentation method based on area differentiation and other image processing techniques; 3D reconstruction was performed with the isosurface method. Different studies show the benefits of VR applied to clinical practice, including its uses as an educational tool. A VR environment was created to be visualized with VR glasses; it can serve as an alternative tool for identifying and visualizing COVID-19-affected lungs through medical image processing and subsequent 3D reconstruction.
An End-to-end Image Feature Representation Model of Pulmonary Nodules
Hu, Jinqiao
2022Conference Paper, cited 0 times
LIDC-IDRI
Computer Aided Detection (CADe)
LUNG
Support Vector Machine (SVM)
Convolutional Neural Network (CNN)
Swarm
Deep Learning
Lung cancer is a cancer with a high mortality rate; if it can be detected early, the mortality rate can be greatly reduced. Lung nodule detection based on CT or MRI is a common method of detecting early lung cancer. Computer vision technology is widely used for image processing and classification of pulmonary nodules, but because the distinction between pulmonary nodule areas and surrounding non-nodule areas is not obvious, general image processing methods can extract only superficial features of pulmonary nodule images, so detection accuracy cannot be further improved. In this paper, we propose an end-to-end model that constructs feature representations for lung nodule image classification based on local and global features. First, local patch regions are selected and associated with relatively intact tissue, and then local and global features are extracted from each region. The deep model then produces high-level abstract representations that describe the image objects. Test results on standard datasets show that the proposed method has advantages on several evaluation metrics.
Survival analysis of pre-operative GBM patients by using quantitative image features
This paper presents a preliminary study of the relationship between overall and progression-free survival time and multiple imaging features of patients with glioblastoma. Results showed that specific imaging features have significant prognostic value for predicting survival time in glioblastoma patients.
Investigating Radiological Diagnosis Through Smartphone-Based Virtual Reality Applications: A User Study
This study investigates the utilization of low-cost three-dimensional visualization devices, specifically virtual reality applications on smartphones, to enhance radiological diagnosis processes. Radiology, a critical field in medicine, often faces challenges such as external illumination and poor ergonomic conditions during diagnostic procedures. Virtual reality technology has emerged as a potential solution to address these issues. The research conducted user studies with radiologists and medical physicists to assess the effectiveness and usability of the virtual reality application. Feedback from participants indicated positive perceptions of the virtual environment, 3D model quality, and interaction with the application. The study also evaluated geometric transformations on the 3D model, highlighting the importance of user-friendly controls. Overall, the findings suggest that virtual reality technology on smartphones holds promise in supporting radiological diagnosis by providing an immersive and efficient tool for medical professionals.
Brain Tumor Detection Using ResNet Architectures
Sevak, Mayur
Dwivedi, Vedvyas
Shraddha Patel, Shraddha
Pandya, Rahul
Shah, Vatsalkumar Vipulkumar
2023Conference Paper, cited 0 times
TCGA-LGG
BRAIN
Algorithm Development
Magnetic Resonance Imaging (MRI)
Deep Learning
Computer Aided Detection (CADe)
A tumor arises from the uncontrolled, rapid growth of cells in a particular part of the body, which then also affects nearby healthy cells. If not treated at an early stage, it may prove fatal. Despite many important efforts and promising results, accurate predictive testing and classification of these tumors remains a daunting task. Important cues for identifying this uncontrolled cell growth come from changes in the cancer's area, shape, and volume. The primary focus of this research is a technique for early detection and recognition of tumor growth from MRIs, so that decisions about detection and the start of treatment can be made sooner. This research aims at detecting tumors in the human brain using a dataset of MRI scans from about 110 patients with lower-grade gliomas. A deep learning approach based on the ResNet architecture is implemented for detecting these tumors in the dataset. After preprocessing and training with segmented tumor masks, the model performed with significant accuracy.
Lung Nodule Classification Using Deep Features in CT Images
Kumar, Devinder
Wong, Alexander
Clausi, David A
2015Conference Proceedings, cited 114 times
Website
LIDC-IDRI
Computer Aided Diagnosis (CADx)
Early detection of lung cancer can help sharply decrease the lung cancer mortality rate, which accounts for more than 17% of all cancer-related deaths. A large number of cases are encountered by radiologists on a daily basis for initial diagnosis. Computer-aided diagnosis (CAD) systems can assist radiologists by offering a "second opinion" and making the whole process faster. We propose a CAD system which uses deep features extracted from an autoencoder to classify lung nodules as either malignant or benign. We use 4303 instances containing 4323 nodules from the National Cancer Institute (NCI) Lung Image Database Consortium (LIDC) dataset to obtain an overall accuracy of 75.01% with a sensitivity of 83.35% and a false-positive rate of 0.39/patient over 10-fold cross-validation.
Learning Multi-Class Segmentations From Single-Class Datasets
Multi-class segmentation has recently achieved significant performance in natural images and videos. This achievement is due primarily to the public availability of large multi-class datasets. However, in certain domains, such as biomedical images, obtaining sufficient multi-class annotations is a laborious and often impossible task, and only single-class datasets are available. While existing segmentation research in such domains uses private multi-class datasets or focuses on single-class segmentations, we propose a unified, highly efficient framework for robust simultaneous learning of multi-class segmentations by combining single-class datasets and utilizing a novel way of conditioning a convolutional network for the purpose of segmentation. We demonstrate various ways of incorporating the conditional information, perform an extensive evaluation, and show compelling multi-class segmentation performance on biomedical images, which outperforms current state-of-the-art solutions (by up to 2.7%). Unlike current solutions, which are meticulously tailored to particular single-class datasets, we utilize datasets from a variety of sources. Furthermore, we show the applicability of our method to natural images and evaluate it on the Cityscapes dataset. We further discuss other possible applications of our proposed framework.
Automated Segmentation of Prostate MR Images Using Prior Knowledge Enhanced Random Walker
In this paper, we propose a hybrid encryption scheme to transmit medical image datasets securely in radiology networks. The proposed methodology uses the RSA (Rivest-Shamir-Adleman) encryption technique, the XOR technique, and a digitally reconstructed radiograph (DRR) image computed from the 3D volume of MRI scan images. As a first step, the volume of interest (VOI) was segmented, and the DRR image was computed on the segmented volume in the sagittal direction. The pixels of the DRR image were XORed with all the image slices. All the images and the DRR image were encrypted separately using the RSA technique and transmitted. At the receiver, the XOR was applied to all the received images, the original slices were retained, the VOI was segmented again, and the DRR was recomputed. The received DRR and the recomputed DRR were then compared for changes in image content through histogram comparison, MSE, and mean absolute deviation. Data integrity violation was tested by adding an image, deleting an image, and modifying the pixels of an image before sending it. The method was applied to fifty (n=50) samples, and in all the test cases performed, it identified the data integrity violations correctly.
Mobile-based Application for COVID-19 Detection from Lung X-Ray Scans with Artificial Neural Networks (ANN)
In early 2020, the World Health Organization (WHO) identified a novel coronavirus referred to as SARS-CoV-2, which causes the now commonly known COVID-19 disease. COVID-19 was shortly thereafter characterized as a pandemic. Countries around the globe have been severely affected, and the disease has accumulated a total of over 200 million cases and more than five million deaths in the past two years. Symptoms associated with COVID-19 vary greatly in severity: some of those infected are asymptomatic, while others experience critical disease with life-threatening complications. In this paper, a mobile-based application has been created to classify COVID-19 versus non-COVID-19 lungs given a chest X-ray (CXR) image. A variety of artificial neural networks (ANNs), including our baseline model, InceptionV3, MobileNetV2, MobileNetV3, VGG16, and VGG19, were tested to see which would provide optimal results. We conclude that MobileNetV3 gives the best test accuracy, 95.49%, and is a lightweight model suitable for a mobile-based application.
Variational Quantum Denoising Technique for Medical Images
A novel variational restoration framework for medical images corrupted by quantum, or Poisson, noise is proposed in this paper. The approach uses a variational scheme that leads to a nonlinear fourth-order PDE-based model. That partial differential equation model is then solved numerically by developing a consistent finite difference-based approximation scheme converging to its variational solution. The obtained numerical algorithm successfully removes quantum noise from medical images, preserves their details, and outperforms other shot-noise filtering solutions.
An Adversarial Network Embedded with Attention Mechanism for Pancreas Segmentation
Pancreas segmentation plays an important role in the diagnosis of pancreatic diseases and related complications. However, accurately segmenting the pancreas from computed tomography (CT) images tends to be challenging due to the limited proportion and irregular shape of the pancreas in the abdominal CT volume. To address this issue, we propose an adversarial network embedded with an attention mechanism for pancreas segmentation. The generative adversarial network helps retain spatial information for segmentation by capturing high-dimensional data distributions through the competition between the discriminator and the generator. Furthermore, the attention mechanism enhances the interdependency among pixels, thereby capturing contextual information for segmentation. Experimental results show that our proposed model achieves competitive performance compared with most pancreas segmentation methods.
Leukemia Classification Using EfficientNetB5: A Deep Learning Approach
Alshoraihy, Aseel
Ibrahim, Anagheem
Issa, Housam Hasan Bou
2024Conference Paper, cited 0 times
C-NMC 2019
Pathomics
Deep Learning
Blood cancer
Convolutional Neural Network (CNN)
Classification
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
Leukemia is a critical disease that requires early and accurate diagnosis. It is a type of blood cancer that mainly occurs when the bone marrow produces extra white blood cells; the disease affects adults and is a common cancer type among children. This paper presents a deep learning approach using EfficientNetB5 to classify leukemia using data from The Cancer Imaging Archive (TCIA) comprising more than 10,000 images from 118 patients. The resulting confusion matrix can inform further research on cancer diagnosis.
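A minimal Keras transfer-learning sketch of the kind of EfficientNetB5 classifier the abstract describes; the input size, two-class head, dropout rate, and frozen backbone are assumptions for this sketch, not the authors' configuration.

```python
import tensorflow as tf

def build_leukemia_classifier(num_classes=2, img_size=456):
    """EfficientNetB5 backbone (ImageNet weights) with a new
    classification head; 456 is the network's default input size."""
    base = tf.keras.applications.EfficientNetB5(
        include_top=False, weights="imagenet",
        input_shape=(img_size, img_size, 3), pooling="avg")
    base.trainable = False  # optionally unfreeze top blocks to fine-tune
    inputs = tf.keras.Input((img_size, img_size, 3))
    x = base(inputs, training=False)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```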
Brain Tumor Classification Using MRI Images with K-Nearest Neighbor Method
Ramdlon, Rafi Haidar
Martiana Kusumaningtyas, Entin
Karlita, Tita
2019Conference Proceedings, cited 0 times
Algorithm Development
BRAIN
Classification
Accurate diagnosis of tumor type from MRI results is required to establish appropriate medical treatment. MRI results can be examined computationally using the K-Nearest Neighbor method, a basic classification technique in image processing. The tumor classification system is designed to detect tumor and edema in T1 and T2 image sequences, and to label and classify the tumor type. The system interprets only axial sections of the MRI results, which are classified into three classes: Astrocytoma, Glioblastoma, and Oligodendroglioma. To detect the tumor area, basic image processing techniques are employed, comprising image enhancement, image binarization, morphological operations, and watershed segmentation. Tumor classification is applied after shape feature extraction on the segmented region. The tumor classification accuracy obtained was 89.5 percent, providing clearer and more specific information for tumor detection.
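A minimal scikit-learn sketch of the final classification step; the shape features, sample data, and choice of k are placeholders for illustration, not the paper's values.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder shape features extracted from segmented tumor regions
# (e.g., area, perimeter, eccentricity, solidity, extent).
X = np.random.rand(60, 5)
y = np.random.choice(
    ["Astrocytoma", "Glioblastoma", "Oligodendroglioma"], size=60)

knn = KNeighborsClassifier(n_neighbors=5)  # k is an assumed value
knn.fit(X, y)
print(knn.predict(X[:3]))  # predicted tumor classes for new cases
```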
Lung Segmentation and Nodule Detection in Computed Tomography Scan using a Convolutional Neural Network Trained Adversarially using Turing Test Loss
Sathish, Rakshith
Sathish, Rachana
Sethuraman, Ramanathan
Sheet, Debdoot
2020Conference Proceedings, cited 0 times
LIDC-IDRI
Lung cancer is the most common form of cancer worldwide, with a high mortality rate. Early detection of pulmonary nodules by screening with a low-dose computed tomography (CT) scan is crucial for effective clinical management. Nodules symptomatic of malignancy occupy about 0.0125-0.025% of the volume in a patient's CT scan. Manual screening of all slices is a tedious task and presents a high risk of human error. To tackle this problem we propose a computationally efficient two-stage framework. In the first stage, a convolutional neural network (CNN) trained adversarially using a Turing test loss segments the lung region. In the second stage, patches sampled from the segmented region are classified to detect the presence of nodules. The proposed method is experimentally validated on the LUNA16 challenge dataset with a Dice coefficient of 0.984±0.0007 for 10-fold cross-validation.
Metal Artifacts Reduction in CT Scans using Convolutional Neural Network with Ground Truth Elimination
Mai, Q.
Wan, J. W. L.
Annu Int Conf IEEE Eng Med Biol Soc2020Journal Article, cited 0 times
Website
HNSCC
Computed Tomography (CT)
Image denoising
Metal artifacts are very common in CT scans, since metal insertion or replacement is performed to enhance certain functions or mechanisms of a patient's body. These streak artifacts can severely degrade CT image quality and consequently influence a clinician's diagnosis. Many existing supervised learning methods for this problem assume the availability of clean image data, i.e., images free of metal artifacts, at the part with the metal implant. In clinical practice, however, such clean images do not usually exist, so these supervised methods have no clinical support. We focus on reducing streak artifacts in hip scans and propose a convolutional neural network-based method that eliminates the need for clean images of the implant region during model training. The idea is to use scans of the parts near the hip for model training. Our method is able to suppress the artifacts in corrupted images, greatly improve image quality, and preserve the details of surrounding tissues, without using any clean hip scans. We apply our method to clinical CT hip scans from multiple patients and obtain artifact-free images of high quality.
Conditional Generative Adversarial Networks for low-dose CT image denoising aiming at preservation of critical image content
Kusters, K. C.
Zavala-Mondragon, L. A.
Bescos, J. O.
Rongen, P.
de With, P. H. N.
van der Sommen, F.
Annu Int Conf IEEE Eng Med Biol Soc2021Journal Article, cited 0 times
LDCT-and-Projection-data
Generative Adversarial Network (GAN)
Algorithms
Humans
*Image Processing
Computer-Assisted
Signal-To-Noise Ratio
*Tomography
X-Ray Computed
X-ray Computed Tomography (CT) is an imaging modality in which patients are exposed to potentially harmful ionizing radiation. To limit patient risk, reduced-dose protocols are desirable, but they inherently lead to an increased noise level in the reconstructed CT scans. Consequently, noise reduction algorithms are indispensable in the reconstruction processing chain. In this paper, we propose to leverage a conditional Generative Adversarial Network (cGAN) model to translate CT images from low to routine dose. However, when aiming to produce realistic images, such generative models may alter critical image content. Therefore, we propose to employ a frequency-based separation of the input prior to applying the cGAN model, in order to limit the cGAN to high-frequency bands while leaving low-frequency bands untouched. The results of the proposed method are compared to a state-of-the-art model, both within the cGAN setting and in a single-network setting. The proposed method generates visually superior results compared to the single-network model and the plain cGAN model in terms of texture quality and preservation of fine structural details. It also appeared that the PSNR, SSIM and TV metrics are less informative than a careful visual evaluation of the results. The obtained results demonstrate the relevance of separating the input image into desired and undesired content, rather than blindly denoising entire images. This study shows promising results for further investigation of generative models towards a reliable deep learning-based noise reduction algorithm for low-dose CT acquisition.
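A minimal sketch of the frequency-based separation idea, assuming a Gaussian low-pass split (the paper's exact filter design is not specified here); only the high band would be passed through the denoising generator, with the untouched low band added back afterwards.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_frequency_bands(ct_slice, sigma=3.0):
    """Split a CT slice into low- and high-frequency bands.
    sigma is an assumed cutoff; larger values move more content
    into the low band that the cGAN will leave untouched."""
    low = gaussian_filter(ct_slice.astype(float), sigma=sigma)
    high = ct_slice - low
    return low, high

# Recombination after a (hypothetical) trained generator G:
#   low, high = split_frequency_bands(noisy_slice)
#   denoised = low + G(high)
```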
Improved Genotype-Guided Deep Radiomics Signatures for Recurrence Prediction of Non-Small Cell Lung Cancer
Aonpong, P.
Iwamoto, Y.
Han, X. H.
Lin, L.
Chen, Y. W.
Annu Int Conf IEEE Eng Med Biol Soc2021Journal Article, cited 0 times
NSCLC Radiogenomics
Radiomic features
*Carcinoma
Non-Small-Cell Lung/diagnostic imaging/genetics
Genotype
LUNG
Humans
*Lung Neoplasms/diagnostic imaging/genetics
Tomography
X-Ray Computed
Non-small cell lung cancer (NSCLC) is a type of lung cancer that has a high recurrence rate after surgery. Precise preoperative prediction of NSCLC recurrence can contribute to suitable treatment planning. Many studies have been conducted to predict the recurrence of NSCLC based on computed tomography (CT) images or genetic data: CT images are inexpensive but less accurate, while genetic data are more expensive but highly accurate. In this study, we propose genotype-guided radiomics methods, called GGR and GGR_Fusion, to build a higher-accuracy prediction model that requires only CT images. GGR is a two-step method consisting of two models: a gene estimation model using deep learning and a recurrence prediction model using the estimated genes. We further propose an improved model based on GGR, called GGR_Fusion, which uses the features extracted by the gene estimation model to enhance the recurrence prediction model. The experiments showed that prediction performance can be improved significantly from 78.61% accuracy, AUC=0.66 (existing radiomics method) and 79.09% accuracy, AUC=0.68 (deep learning method) to 83.28% accuracy, AUC=0.77 with the proposed GGR and 84.39% accuracy, AUC=0.79 with the proposed GGR_Fusion. Clinical Relevance - This study improved preoperative NSCLC recurrence prediction accuracy from 78.61% with the conventional method to 84.39% with our proposed method, using only CT images.
Spatiotemporal learning of dynamic positron emission tomography data improves diagnostic accuracy in breast cancer
Inglese, Marianna
Duggento, Andrea
Boccato, Tommaso
Ferrante, Matteo
Toschi, Nicola
2022Conference Proceedings, cited 0 times
ACRIN-FLT-Breast
Positron emission tomography (PET) can reveal metabolic activity in a voxelwise manner. PET analysis is commonly performed in a static manner by analyzing the standardized uptake value (SUV) obtained from the plateau region of PET acquisitions. A dynamic PET acquisition can provide a map of the spatiotemporal concentration of the tracer in vivo, hence conveying information about radiotracer delivery to tissue, its interaction with the target and washout. Therefore, tissue-specific biochemical properties are embedded in the shape of time activity curves (TACs), which are generally used for kinetic analysis. Conventionally, TACs are employed along with information about blood plasma activity concentration, i.e., the arterial input function (AIF), and specific compartmental models to obtain a full quantitative analysis of PET data. The main drawback of this approach is the need for invasive procedures requiring arterial blood sample collection during the whole PET scan. In this paper, we address the challenge of improving PET diagnostic accuracy through an alternative approach based on the analysis of time signal intensity patterns. Specifically, we demonstrate the diagnostic potential of tissue TACs provided by dynamic PET acquisition using various deep learning models. Our framework is shown to outperform the discriminative potential of classical SUV analysis, hence paving the way for more accurate PET-based lesion discrimination without additional acquisition time or invasive procedures. Clinical Relevance- The diagnostic accuracy of dynamic PET data exploited by deep-learning based time signal intensity pattern analysis is superior to that of static SUV imaging.
An Optimized U-Net for Unbalanced Multi-Organ Segmentation
Berzoini, Raffaele
Colombo, Aurora A.
Bardini, Susanna
Conelli, Antonello
D'Arnese, Eleonora
Santambrogio, Marco D.
2022Conference Proceedings, cited 0 times
CT-ORG
Medical practice is shifting towards the automation and standardization of the most repetitive procedures to speed up the time-to-diagnosis. Semantic segmentation represents a critical stage in identifying a broad spectrum of regions of interest within medical images: it identifies relevant objects by assigning each image pixel a value representing predetermined classes. Despite the relative ease of visually locating organs in the human body, automated multi-organ segmentation is hindered by the variety of organ shapes and dimensions and by computational resources. Within this context, we propose BIONET, a U-Net-based Fully Convolutional Network for efficient semantic segmentation of abdominal organs. BIONET deals with the unbalanced data distribution arising from the physiological conformation of the considered organs, reaching good accuracy across variable organ dimensions with low variance, a Weighted Global Dice Score of 93.74 ± 1.1%, and an inference speed of 138 frames per second. Clinical Relevance - This work establishes a starting point for developing an automatic tool for semantic segmentation of variable-sized organs within the abdomen, reaching considerable accuracy on both small and large organs with low variability.
A Cascaded Deep Learning Framework for Segmentation of Nuclei in Digital Histology Images
Saednia, Khadijeh
Tran, William T.
Sadeghi-Naini, Ali
2022Conference Proceedings, cited 0 times
Post-NAT-BRCA
Accurate segmentation of nuclei is an essential step in the analysis of digital histology images for diagnostic and prognostic applications. Despite recent advances in automated frameworks for nuclei segmentation, this task is still challenging. Specifically, detecting small nuclei in large-scale histology images and accurately delineating the borders of touching nuclei is complicated even for advanced deep neural networks. In this study, a cascaded deep learning framework is proposed to segment nuclei accurately in digitized microscopy images of histology slides. A U-Net based model with a customized pixel-wise weighted loss function is adopted in the proposed framework, followed by a U-Net based model with a VGG16 backbone and a soft Dice loss function. The model was pretrained on the Post-NAT-BRCA public dataset before training and independent evaluation on the MoNuSeg dataset. The cascaded model outperformed the other state-of-the-art models with an AJI of 0.72 and an F1-score of 0.83 on the MoNuSeg test set.
Compressibility variations of JPEG2000 compressed computed tomography
Compression is increasingly used in medical applications to enable efficient and universally accessible electronic health records. However, lossy compression introduces artifacts that can alter diagnostic accuracy, interfere with image processing algorithms and cause liability issues in cases of diagnostic error. Compression guidelines were introduced to mitigate these issues and foster the use of modern compression algorithms with diagnostic imaging. However, these guidelines are usually defined as maximum compression ratios for each imaging protocol and do not take into account compressibility variations due to image content. In this paper we evaluate the compressibility of thousands of computed tomography slices of an anthropomorphic thoracic phantom acquired with different parameters. We show that exposure, slice thickness and reconstruction filters have a significant impact on compressibility, suggesting that guidelines based solely on compression ratios may be inadequate.
A statistical method for lung tumor segmentation uncertainty in PET images based on user inference
Radiomic analysis of multi-contrast brain MRI for the prediction of survival in patients with glioblastoma multiforme
Chaddad, Ahmad
Desrosiers, Christian
Toews, Matthew
2016Conference Proceedings, cited 11 times
Website
Radiomics
BRAIN
Glioblastoma Multiforme (GBM)
Machine Learning
Magnetic Resonance Imaging (MRI)
Image texture features are effective at characterizing the microstructure of cancerous tissues. This paper proposes predicting the survival times of glioblastoma multiforme (GBM) patients using texture features extracted from multi-contrast brain MRI images. Texture features are derived locally from contrast enhancement, necrosis and edema regions in T1-weighted post-contrast and fluid-attenuated inversion-recovery (FLAIR) MRIs, based on the gray-level co-occurrence matrix representation. A statistical analysis based on the Kaplan-Meier method and log-rank test is used to identify the texture features related to the overall survival of GBM patients. Results are presented on a dataset of 39 GBM patients. For FLAIR images, four features (Energy, Correlation, Variance and Inverse of Variance) from contrast enhancement regions and a feature (Homogeneity) from edema regions were shown to be associated with survival times (p-value < 0.01). Likewise, in T1-weighted images, three features (Energy, Correlation, and Variance) from contrast enhancement regions were found to be useful for predicting the overall survival of GBM patients. These preliminary results show the advantages of texture analysis in predicting the prognosis of GBM patients from multi-contrast brain MRI.
Using Multi-level Convolutional Neural Network for Classification of Lung Nodules on CT images
Lyu, Juan
Ling, Sai Ho
2018Conference Proceedings, cited 0 times
LIDC-IDRI
Lung cancer is one of the four major cancers in the world. Accurate diagnosis of lung cancer at an early stage plays an important role in increasing the survival rate. Computed Tomography (CT) is an effective method to help doctors detect lung cancer. In this paper, we developed a multi-level convolutional neural network (ML-CNN) to investigate the problem of lung nodule malignancy classification. ML-CNN consists of three CNNs for extracting multi-scale features from lung nodule CT images. Furthermore, we flatten the output of the last pooling layer into a one-dimensional vector for every level and then concatenate them; this strategy helps improve the performance of our model. The ML-CNN is applied to ternary classification of lung nodules (benign, indeterminate and malignant). The experimental results show that our ML-CNN achieves 84.81% accuracy without any additional hand-crafted preprocessing algorithm, the best result for ternary classification.
Deep multi-modality collaborative learning for distant metastases predication in PET-CT soft-tissue sarcoma studies
Peng, Yige
Bi, Lei
Guo, Yuyu
Feng, Dagan
Fulham, Michael
Kim, Jinman
2019Conference Proceedings, cited 0 times
Soft-tissue Sarcoma
Deep learning
collaborative learning
Towards Efficient Segmentation and Classification of White Blood Cell Cancer Using Deep Learning
White blood cell cancer is a plasma cell cancer that starts in the bone marrow and leads to the formation of abnormal plasma cells. Medical examiners must be exceedingly careful when diagnosing myeloma cells; moreover, because the final judgment depends on human perception, there is a chance that the conclusion may be incorrect. This study is noteworthy because it creates a software-assisted way of recognizing and identifying myeloma cells in bone marrow scans. A Mask Region-based Convolutional Neural Network (Mask-RCNN) is utilized for recognition, while EfficientNet B3 is used for detection. The mean Average Precision (mAP) of Mask-RCNN is 93%, whereas EfficientNet B3 is 95% accurate. According to the findings of this study, the Mask-RCNN model can identify multiple myeloma, and EfficientNet B3 can distinguish between myeloma and non-myeloma cells.
Malignancy prediction by using characteristic-based fuzzy sets: A preliminary study
In this paper we examine a technique for developing prognostic image characteristics, termed radiomics, for non-small cell lung cancer based on an analysis of the tumour edge region. Texture features were extracted from the rind of the tumour in a publicly available 3D CT data set to predict two-year survival. The derived models were compared against previous methods that train radiomic signatures descriptive of the whole tumour volume. Radiomic features derived solely from regions external, but neighbouring, to the tumour were shown to also have prognostic value. Using additional texture features from the outer rind together with the tumour volume yields a 3% increase in accuracy over previous approaches for predicting two-year survival, compared to the volume without the rind. This indicates that while the centre of the tumour is currently the main clinical target for radiotherapy treatment, the tissue immediately around the tumour is also clinically important.
An Image Processing Tool for Efficient Feature Extraction in Computer-Aided Detection Systems
In this paper, we present an image processing tool that supports efficient image feature extraction and pre-processing developed in the context of a computer-aided detection (CAD) system for lung cancer nodule detection from CT images. We outline the main functionalities of the tool, which implements a number of novel methods for handling image pre-processing and feature extraction tasks. In particular, we describe an efficient way to compute the run-length feature, a photometric feature describing the texture of an image.
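For illustration, a straightforward (not the paper's optimized) computation of a horizontal gray-level run-length matrix, the texture representation underlying the run-length feature, might look like the following; the quantization scheme and level count are assumptions.

```python
import numpy as np

def glrlm_horizontal(image, levels=8):
    """Gray-level run-length matrix along rows: entry (g, r-1) counts
    runs of length r at quantized gray level g."""
    q = np.floor(image.astype(float) / (image.max() + 1e-9) * levels)
    q = np.clip(q.astype(int), 0, levels - 1)
    glrlm = np.zeros((levels, q.shape[1]), dtype=np.int64)
    for row in q:
        start = 0
        for i in range(1, len(row) + 1):
            # close a run at a value change or at the end of the row
            if i == len(row) or row[i] != row[start]:
                glrlm[row[start], i - start - 1] += 1
                start = i
    return glrlm
```

Scalar run-length features (short-run emphasis, long-run emphasis, etc.) are then simple weighted sums over this matrix.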
Removing Mixture Noise from Medical Images Using Block Matching Filtering and Low-Rank Matrix Completion
An accurate and timely diagnosis is of utmost importance when it comes to treating brain tumors effectively. To facilitate this process, we have developed a brain tumor classification approach that employs transfer learning with a pre-trained EfficientNet V2 model. Our dataset comprises brain tumor images categorized into four distinct labels: glioma, meningioma, pituitary tumor, and normal. As our base model, we employed the EfficientNet V2 variants B0, B1, B2, and B3. To adapt the model to our label categories, we modified the final layer and retrained it on our dataset, optimizing with Adam's algorithm and the categorical cross-entropy loss function. We conducted experiments in multiple stages involving dataset randomization, pre-processing, model training, and evaluation, using appropriate metrics to assess the accuracy and loss on test data and visualizing the loss and accuracy curves throughout training. This process yielded impressive accuracy and loss rates on test data and led to successful classification of brain tumors with the EfficientNet V2 B0, B1, B2, and B3 variants. Additionally, a confusion matrix allowed us to assess the classification ability for each tumor category. This research has the potential to enhance medical diagnosis by utilizing transfer learning techniques and pre-trained models; we hope this approach can help detect and treat brain tumors in their early stages, ultimately leading to better patient outcomes.
Singular value decomposition using block least mean square method for image denoising and compression
Image denoising is a well-documented part of image processing. It has long posed a challenge for researchers, and there is no dearth of proposed solutions; obtaining a denoised image that is otherwise identical to the original remains an elusive goal. In this paper, we combine the block least mean square (BLMS) algorithm, which maximizes the peak signal-to-noise ratio (PSNR), with singular value decomposition (SVD), so as to approach perfect reconstruction. The results show that the combination of these methods is computationally simple and efficient, and as such is an effective approach to the problem.
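A minimal NumPy sketch of the SVD half of the idea, truncating small singular values to suppress noise; the rank is a user choice, and the BLMS adaptive-filtering stage of the paper is not reproduced here.

```python
import numpy as np

def svd_denoise(image, rank):
    """Rank-truncated SVD reconstruction: keeping only the largest
    singular values suppresses noise spread across the small ones,
    at the cost of some fine detail."""
    u, s, vt = np.linalg.svd(image.astype(float), full_matrices=False)
    s[rank:] = 0.0                      # discard small singular values
    return u @ np.diag(s) @ vt
```

In practice the rank trades off PSNR against blurring, which is why pairing the truncation with an adaptive filter such as BLMS is attractive.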
A Novel Brain Tumor Segmentation Approach Based on Deep Convolutional Neural Network and Level Set
Wang, Jingjing
Gao, Jun
Ren, Jinwen
Zhao, Yanhua
Zhang, Liren
2020Conference Paper, cited 0 times
BraTS-TCGA-GBM
Algorithm Development
Segmentation
BraTS 2019
Deep convolutional neural network (DCNN)
In recent years, deep convolutional neural networks (DCNNs) have achieved great success in brain tumor segmentation. However, segmentation results from DCNNs exhibit artifacts in border regions. To address this problem, we propose a hybrid model combining DCNNs with traditional segmentation methods. First, we use U-Net and ResU-Net networks for coarse segmentation; to deepen the network and improve performance, we add residual modules to U-Net to form the ResU-Net. Second, we use a level set for fine segmentation of the tumor boundary, taking the intersection of the coarse segmentation outputs of U-Net and ResU-Net as the input of the level set module. The aim of taking this intersection is to provide better initialization for the level set algorithm and to accelerate the evolution of the level set functions. The proposed approach is validated on the BraTS 2018 challenge dataset. The metrics used to evaluate the segmentation results are Dice, specificity, sensitivity, and Hausdorff distance (HD). We compare our approach with U-Net, ResU-Net and several other methods; the experimental results indicate that our approach outperforms these deep networks.
A Probabilistic Model for Segmentation of Ambiguous 3D Lung Nodule
Many medical imaging domains suffer from inherent ambiguities. A feasible approach to resolving the ambiguity of lung nodules in the segmentation task is to learn a distribution over segmentations given a 2D lung nodule image. However, a lung nodule's 3D structure contains dense spatial information that should clearly help resolve this ambiguity, and so far no one has studied it. To this end, we propose a probabilistic generative segmentation model consisting of a V-Net and a conditional variational autoencoder. The proposed model obtains the 3D spatial information of the lung nodule with the V-Net and learns a density model over segmentations. It can efficiently produce multiple plausible semantic lung nodule segmentation hypotheses to assist radiologists in making further diagnoses under the present ambiguity. We evaluate our method on the publicly available LIDC-IDRI dataset and achieve a new state-of-the-art result of 0.231±0.005 in D2GED. This result demonstrates the effectiveness and importance of leveraging the 3D spatial information of lung nodules for such problems. Code is available at: https://github.com/jiangjiangxiaolong/PV-Net.
Image Correction in Emission Tomography Using Deep Convolution Neural Network
Suzuki, T
Kudo, H
2019Conference Proceedings, cited 0 times
Soft-tissue Sarcoma
Image denoising
We propose a new approach using a Deep Convolutional Neural Network (DCNN) to correct for image degradations due to statistical noise and photon attenuation in Emission Tomography (ET). The proposed approach first reconstructs an image by standard Filtered Backprojection (FBP) without correcting for the degradations, and then inputs the degraded image into the DCNN to obtain an improved image. We consider two different scenarios: the first inputs an ET image only into the DCNN, whereas the second inputs a pair of the degraded ET image and a CT/MRI image to improve correction accuracy. Simulation results demonstrate that both scenarios improve image quality compared to FBP without correction and, in particular, that the accuracy of the second scenario is comparable to that of standard iterative reconstructions such as the Maximum Likelihood Expectation Maximization (MLEM) and Ordered-Subsets EM (OSEM) methods. The proposed method outputs an image in a very short time, because it does not rely on iterative computations.
A Pipeline for Lung Tumor Detection and Segmentation from CT Scans Using Dilated Convolutional Neural Networks
Lung cancer is the most prevalent cancer worldwide, with about 230,000 new cases every year. Most cases go undiagnosed until it is too late, especially in developing countries and remote areas, and early detection is key to beating cancer. Towards this end, the work presented here proposes an automated pipeline for lung tumor detection and segmentation from 3D lung CT scans from the NSCLC-Radiomics dataset, along with a new dilated hybrid-3D convolutional neural network architecture for tumor segmentation. First, a binary classifier selects the CT slices that may contain parts of a tumor. The selected slices are then passed to the segmentation model, which extracts feature maps from each 2D slice using dilated convolutions and fuses the stacked maps through 3D convolutions, incorporating the 3D structural information present in the CT volume into the output. Lastly, the segmentation masks are passed through a post-processing block that cleans them up with morphological operations. The proposed segmentation model outperformed contemporary models such as LungNet and U-Net: its average and median Dice coefficients on the test set were 65.7% and 70.39%, respectively, while the next best model, LungNet, achieved 62.67% and 66.78%.
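A hedged PyTorch sketch of the hybrid idea, dilated 2D convolutions per slice followed by 3D fusion, is shown below; the channel widths and dilation rates are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class HybridDilated3D(nn.Module):
    def __init__(self):
        super().__init__()
        # Per-slice 2D path with growing dilation (receptive field expands cheaply).
        self.slice_features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=4, dilation=4), nn.ReLU(),
        )
        # Fuse the stacked per-slice maps across the slice axis with 3D convolutions.
        self.fuse3d = nn.Sequential(
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 1),  # per-voxel tumor logit
        )

    def forward(self, volume):  # volume: (B, 1, D, H, W)
        b, _, d, h, w = volume.shape
        slices = volume.permute(0, 2, 1, 3, 4).reshape(b * d, 1, h, w)
        maps = self.slice_features(slices).reshape(b, d, 16, h, w)
        return self.fuse3d(maps.permute(0, 2, 1, 3, 4))  # (B, 1, D, H, W)
```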
Deep Learning for Automatic Identification of Nodule Morphology Features and Prediction of Lung Cancer
Lung cancer is the most common and deadly cancer in the world, and a correct prognosis affects patient survival. The most important sign for early diagnosis is the appearance of nodules in CT scans. Hospital diagnosis involves two steps: (1) detecting nodules in the CT scan, and (2) evaluating the morphological features of the nodules to give a diagnostic result. In this work, we propose an automatic lung cancer prognosis system with three steps: (1) two models, one based on a convolutional neural network (CNN) and the other on a recurrent neural network (RNN), are trained to detect nodules in CT scans; (2) CNNs are trained to score nine morphological features of each nodule; and (3) a logistic regression from feature values to cancer probability is trained using an XGBoost model. In addition, we analyze which features are important for cancer prediction. Overall, we achieved 82.39% accuracy for lung cancer prediction. The logistic regression analysis shows that the diameter, spiculation, and lobulation features are useful for reducing false positives.
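A hedged sketch of the final step only, gradient-boosted classification from the nine morphology scores to a cancer label (the data are placeholders, and the feature names beyond the three reported are assumptions):

```python
import numpy as np
from xgboost import XGBClassifier

features = ["diameter", "spiculation", "lobulation", "margin", "sphericity",
            "texture", "calcification", "subtlety", "internal_structure"]
X = np.random.rand(200, len(features))   # placeholder morphology scores from step 2
y = np.random.randint(0, 2, 200)         # placeholder cancer labels

model = XGBClassifier(objective="binary:logistic", n_estimators=100, max_depth=3)
model.fit(X, y)
# Feature importances indicate which morphology scores drive the prediction.
print(dict(zip(features, model.feature_importances_)))
```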
Predicting the Stage of Non-small Cell Lung Cancer with Divergence Neural Network Using Pre-treatment Computed Tomography
Determining the stage of non-small cell lung cancer (NSCLC) is important for treatment and prognosis. Staging requires professional interpretation of imaging, so we aimed to automate the process with deep learning (DL). We propose an end-to-end DL method that uses pre-treatment computed tomography images to classify early- and advanced-stage NSCLC. DL models were developed and tested using training (n = 58), validation (n = 7), and testing (n = 17) cohorts obtained from public domains. The network consists of three parts: an encoder, a decoder, and a classification layer. The encoder and decoder are trained to reconstruct the original images, while the classification layers, ending in a dense layer, are trained to separate early- from advanced-stage patients. Other machine learning-based approaches were compared. In the test cohort, our model achieved an accuracy of 0.8824, a sensitivity of 1.0, a specificity of 0.6, and an area under the curve (AUC) of 0.7333, compared with AUCs of 0.5500 - 0.7167 for the other approaches. Our DL model for classifying NSCLC patients into early and advanced stages showed promising results and could be useful in future NSCLC research.
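A minimal PyTorch sketch of the described three-part design (encoder, decoder, dense classification head); the layer sizes are assumptions:

```python
import torch.nn as nn

class StageNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(  # trained to reconstruct the input CT slice
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2))
        self.classifier = nn.Sequential(  # dense head: early vs. advanced stage
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)  # reconstruction + stage logits

# Training would combine a reconstruction loss (e.g. MSE) with cross-entropy on the stage.
```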
Extraction of Tumour in Breast MRI using Joint Thresholding and Segmentation – A Study
Breast Cancer (BC) is a harsh condition that largely affects women. Because of its significance, a range of procedures is available for early detection and treatment to save the patient. Clinical diagnosis of BC is performed using (i) image-supported detection and (ii) Core-Needle-Biopsy (CNB) assisted confirmation. The proposed work aims to develop a computerized scheme to detect the Breast-Tumor-Section (BTS) in breast MRI slices. It implements a joint thresholding and segmentation methodology to enhance and extract the BTS from 2D MRI slices: tri-level thresholding based on the Slime-Mould-Algorithm and Shannon's-Entropy (SMA+SE) enhances the BTS, and Watershed-Segmentation (WS) extracts it. After extraction, the BTS is compared against the ground-truth image and the necessary Image-Performance-Values (IPV) are computed. In this work the axial, coronal, and sagittal slices of 2D breast MRI are examined separately and the attained results are presented.
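The following sketch mirrors the joint thresholding-plus-watershed pipeline; scikit-image's multi-Otsu is used as a stand-in for the SMA-optimized Shannon-entropy thresholds, and the brightest-class ROI rule is an assumption:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_multiotsu
from skimage.segmentation import watershed

def extract_bts(slice_2d):
    # Tri-level thresholding enhances the slice into three intensity classes.
    thresholds = threshold_multiotsu(slice_2d, classes=3)
    enhanced = np.digitize(slice_2d, bins=thresholds)
    mask = enhanced == 2                       # brightest class as tumor candidate
    # Watershed extracts the tumor section from distance-transform markers.
    distance = ndi.distance_transform_edt(mask)
    markers, _ = ndi.label(distance > 0.5 * distance.max())
    return watershed(-distance, markers, mask=mask)
```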
Deep Learning Based Approach for Multiple Myeloma Detection
Multiple myeloma is caused by abnormal growth of plasma cells in the bone marrow. The most commonly used diagnostic method is bone marrow aspiration, where the aspirate slide images are either inspected visually or passed to digital image-processing software for the detection of myeloma cells. The current work explores the effectiveness of deep learning-based object detection/segmentation algorithms, namely Mask R-CNN and U-Net, for the detection of multiple myeloma. Manual polygon annotation of the dataset was performed using the VGG image annotation software. The models were trained while monitoring the training and validation loss per epoch, and the best model was selected based on minimal validation loss. Comparing the two models shows that Mask R-CNN produces more competitive results than U-Net and addresses most of the challenges in multiple myeloma segmentation.
False Positive Reduction in Mammographic Mass Detection Using Image Representations for Textural Analysis
Breast cancer is a prominent disease affecting women and is associated with a low survival rate. Mammography is a widely accepted and adopted modality for diagnosing breast cancer. Challenges in early detection include the poor contrast of mammograms, the complex nature of abnormalities, and the difficulty of interpreting dense tissues. Computer-Aided Diagnosis (CAD) schemes help radiologists improve sensitivity by rendering an objective diagnosis, in addition to reducing the time and cost involved. Conventional methods for automated diagnosis extract handcrafted features from a Region of Interest (ROI) and classify them with Machine Learning (ML) techniques. The main challenge in CAD is the high false-positive rate, which adds to patient anxiety. This paper proposes a new CAD scheme for reducing the number of false positives in mammographic mass detection using a Deep Learning (DL) method. A Convolutional Neural Network (CNN) is a promising candidate for efficiently eliminating false positives in mammographic mass detection. More specifically, image representations rich in textural information, including Hilbert's image representation and the forest fire model, are given as input to a CNN for mammogram classification. The proposed system outperforms an ML approach based on handcrafted features extracted from the same image representations; in particular, the forest fire-CNN combination achieves an accuracy as high as 96%.
Content dependent intra mode selection for medical image compression using HEVC
This paper presents a method for complexity reduction in medical image encoding that exploits the structure of medical images. The amount of texture detail and structure in a medical image depends on the modality used to capture it and on the body part captured. The proposed approach was evaluated using the Computed Radiography (CR) modality, commonly known as x-ray imaging, and three body parts. The method reduces both the number of CU partitions evaluated and the number of intra prediction modes for each evaluated partition. Evaluation using the HEVC reference software (HM) 16.4 with lossless intra coding shows an average reduction of 52.47% in encoding time with a negligible penalty of up to a 0.22% increase in compressed file size.
Spatial-channel attention-based stochastic neighboring embedding pooling and long-short-term memory for lung nodules classification
Saihood, Ahmed
Karshenas, Hossein
Nilchi, Ahmad Reza Naghsh
2022Conference Paper, cited 0 times
LIDC-IDRI
SPIE LungX Challenge
Radiomics
Algorithm Development
LUNG
Classification
Handling lesion size and location variance in lung nodules is one of the main shortcomings of traditional convolutional neural networks (CNNs): the pooling layers reduce the resolution of the feature maps, losing small local details that must then be processed by the following layers. In this article, we propose a new pooling method based on stochastic neighboring embedding (SNE-pooling) that can handle the long-range dependencies of lung nodules. Further, we propose an attention-based SNE-pooling model that performs both spatial and channel attention. Experimental results on the LIDC and LUNGx datasets show that the attention-based SNE-pooling model significantly improves performance over the state of the art.
Classification of COVID-19 and Nodule in CT Images using Deep Convolutional Neural Network
Distinguishing between coronavirus disease 2019 (COVID-19) and nodules, an early indicator of lung cancer, in Computed Tomography (CT) images has challenged radiologists since COVID-19 was declared a pandemic. The similarity between the two findings creates diagnostic dilemmas and may lead to misdiagnosis, so manual classification is less efficient than automated classification. This paper proposes an automated approach to distinguish COVID-19 infections from nodules in CT images. Convolutional Neural Networks (CNNs) have significantly improved automated image classification, particularly for medical images. Accordingly, we propose a refined CNN-based architecture with modified network layers to reduce complexity, and we use data augmentation to overcome the shortage of training data. In our method, a Multi-Layer Perceptron (MLP) classifies the feature vectors extracted from denoised input images by the convolutional layers into two main classes: COVID-19 infections and nodules. To the best of our knowledge, other state-of-the-art methods can classify only one of these two classes. Compared to those counterparts, our proposed method performs promisingly, with an accuracy of 97.80%.
Image Segmentation and Pre-Processing for Lung Cancer Detection in Humans Based on Deep-Learning
Singh, Drishti
Singh, Jaspreet
2023Conference Paper, cited 0 times
SPIE-AAPM Lung CT Challenge
Computer Aided Detection (CADe)
Algorithm Development
Classification
Among cancers and their linked disorders, lung cancer is consistently ranked among the top causes of mortality. The primary diagnostic method is scan analysis of the patient's lungs, which may involve MRI, CT, or X-ray images. Given the wide variety of imaging techniques that can be applied to a patient's lungs, automated classification of lung cancer is a challenging task. Machine learning, deep learning, and image processing methods have shown significant promise for the classification and identification of lung cancer. In this research, we demonstrate a successful strategy for detecting and classifying malignant and benign lung cancer CT scan images. The proposed method first processes the images with image-processing techniques and then applies supervised learning algorithms: we extract statistical and textural features and feed them to multiple classifiers. We utilize seven distinct classifiers: KNN, SVM, multinomial naive Bayes, decision tree, SGD (stochastic gradient descent), MLP (multi-layer perceptron), and random forest. The classifiers were trained and tested on a dataset containing both benign and malignant lung cancer images. The results show the highest accuracy, approximately 88 percent, for the MLP classifier.
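A hedged sketch of the comparison stage with the seven named classifiers on extracted feature vectors (the data here are random placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(100, 20)          # placeholder statistical/textural features
y = np.random.randint(0, 2, 100)     # benign / malignant labels

classifiers = {
    "KNN": KNeighborsClassifier(), "SVM": SVC(),
    "MNB": MultinomialNB(),        # requires non-negative features
    "DT": DecisionTreeClassifier(), "SGD": SGDClassifier(),
    "MLP": MLPClassifier(max_iter=1000), "RF": RandomForestClassifier(),
}
for name, clf in classifiers.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```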
Brain Tumor Extraction from MRI Using Clustering Methods and Evaluation of Their Performance
In this paper, we consider the extraction of brain tumors from MRI (Magnetic Resonance Imaging) images using K-means, fuzzy c-means, and region-growing clustering methods. After extraction, various parameters describing the performance of the clustering methods, as well as parameters describing the tumor, are calculated. MRI is a non-invasive method that provides a view of the structural features of body tissues at very high resolution (typically on a 100 μm scale), so it is advantageous to base brain tumor detection and segmentation on MRI. This work is a step towards replacing the manual identification and separation of tumor structures in brain MRI with computer-aided techniques, which would add great value in terms of accuracy, reproducibility, diagnosis, and treatment planning. The tumor separated from the original image is referred to as the Region of Interest (ROI) and the remaining portion as the Non-Region of Interest (NROI).
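A minimal sketch of K-means-based ROI extraction from a single MRI slice (the cluster count and the brightest-cluster rule are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_roi(slice_2d, n_clusters=4):
    """Cluster pixel intensities and return the brightest cluster as the ROI."""
    intensities = slice_2d.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(intensities)
    labels = labels.reshape(slice_2d.shape)
    means = [slice_2d[labels == k].mean() for k in range(n_clusters)]
    return labels == int(np.argmax(means))   # ROI mask; everything else is NROI
```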
Prior-aware Neural Network for Partially-Supervised Multi-Organ Segmentation
Zhou, Yuyin
Li, Zhe
Bai, Song
Wang, Chong
Chen, Xinlei
Han, Mei
Fishman, Elliot
Yuille, Alan L.
2019Conference Paper, cited 0 times
Pancreas-CT
Accurate multi-organ abdominal CT segmentation is essential to many clinical applications such as computer-aided intervention. As data annotation requires massive human labor from experienced radiologists, training data are commonly only partially labeled; e.g., pancreas datasets have only the pancreas labeled, with the rest marked as background. However, these background labels can be misleading in multi-organ segmentation, since the "background" usually contains other organs of interest. To address the background ambiguity in these partially-labeled datasets, we propose the Prior-aware Neural Network (PaNN), which explicitly incorporates anatomical priors on abdominal organ sizes, guiding the training process with domain-specific knowledge. More specifically, PaNN assumes that the average organ size distributions in the abdomen should approximate their empirical distributions, prior statistics obtained from the fully-labeled dataset. As our training objective is difficult to optimize directly using stochastic gradient descent, we reformulate it in a min-max form and optimize it via the stochastic primal-dual gradient algorithm. PaNN achieves state-of-the-art performance on the MICCAI 2015 challenge "Multi-Atlas Labeling Beyond the Cranial Vault", a competition on organ segmentation in the abdomen, with an average Dice score of 84.97%, surpassing the prior art by a large margin of 3.27%.
An Investigative Study of Shallow, Deep and Dense Learning Models for Breast Cancer Detection based on Microcalcifications
Murthy, D. Sudarsana
Prasad, V. Siva
Aman, K.
Kumar Reddy, Madduru Poojith
Madhavi, K. Reddy
Sunitha, Gurram
2022Conference Paper, cited 0 times
CBIS-DDSM
Convolutional Neural Network (CNN)
Radiomic features
Early cancer diagnosis, detection, and treatment remain a mammoth task because of many challenges, such as social and cultural myths, economic conditions, access to healthcare services, healthcare practices, and the availability of expert oncologists. Mammography is a successful screening method for breast cancer detection and captures multiple findings such as masses and microcalcifications. Microcalcifications may indicate breast cancer in its early stages and are considered to play a crucial role in early diagnosis. In this paper, we undertake an investigative study of breast cancer classification by automated learning from mammography images with microcalcifications. Three types of convolutional neural architectures (shallow ResNet101, deep VGG101, and dense DenseNet101 learning models) are employed in this investigative study, contributing to the objective of rapid and early breast cancer diagnosis. To improve the accuracy of the learning models, the features extracted from microcalcifications are fed to them. We experimented with varying hyperparameter setups and recorded the optimal performance of each of the three models. Among them, ResNet101 demonstrated the best performance, 94.2% in benign versus malignant classification, and also the best time complexity. The dense model DenseNet101 was more sensitive and specific in classifying breast cancer from microcalcifications. VGG101 performed well, with nearly optimal results compared to ResNet101, at 93.6%.
Glioma Brain Tumor Classification using Transfer Learning
Brain cancer is caused by a population of abnormal glial cells growing in the brain. The number of patients with brain cancer is increasing with the aging population, making it a worldwide health problem. The objective of this paper is to develop a method to detect brain tissues affected by cancer, especially grade-4 tumors such as Glioblastoma multiforme (GBM). GBM is one of the most malignant cancerous brain tumors, as it grows fast and is likely to spread to other parts of the brain. In this paper, Naïve Bayes classification is utilized to accurately recognize a tumor region containing all spreading cancerous tissues. A brain MRI database, preprocessing, morphological operations, pixel subtraction, maximum entropy thresholding, statistical feature extraction, and a Naïve Bayes classifier-based prediction algorithm are used in this research. The goal is to detect the tumor area in different brain MRI images and to predict whether the detected area is a tumor or not. Compared to other methods, this method can properly detect tumors located in different regions of the brain, including the middle region (aligned with eye level), which is its significant advantage. When tested on 50 MRI images, the method achieves an 81.25% detection rate on tumor images and a 100% detection rate on non-tumor images, with an overall accuracy of 94%.
A scheme for patient study retrieval from 3D brain MR volumes
The paper presents a pipeline for case retrieval in magnetic resonance (MR) brain volumes acquired from biomedical image sensors. The proposed framework takes as input a patient study consisting of MR brain image slices and outputs similar patient case studies present in the brain MR volume database. The query slice pertains to a new case, while the output slices belong to previous case histories stored in the database. The framework could be of immense help to medical practitioners: it might prove a useful diagnostic aid for the medical expert and also serve as a teaching aid for students and researchers in the medical field. Apart from diagnosis, radiologists can use the tumor location to retrieve past case studies relevant to the present patient study, which can aid in the treatment of patients. The similarity distance employed in this work is the three-dimensional Hausdorff distance, which is significant because it takes into account the spatial location of the tumors. The preliminary results are encouraging, and the scheme could therefore be adapted to various modalities and pathologies.
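A short sketch of the retrieval metric, the symmetric 3D Hausdorff distance between two segmented tumors, using SciPy's directed variant:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_3d(tumor_a, tumor_b):
    """tumor_a, tumor_b: (N, 3) arrays of tumor voxel coordinates."""
    return max(directed_hausdorff(tumor_a, tumor_b)[0],
               directed_hausdorff(tumor_b, tumor_a)[0])

# Lower distance means more similar tumor locations, hence a better retrieval match.
```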
Glioma/Glioblastoma Detection in Brain MRI using Pre-trained Deep-Learning Scheme
Convolutional Neural Network (CNN)-supported medical image examination is widely accepted due to its reputation and improved accuracy. The experimental outcome obtained with a deep learning scheme (DLS) and a chosen classifier achieves better detection results than traditional and machine-learning methods. The proposed research examines the performance of pre-trained VGG16 and VGG19 schemes in detecting brain tumour grade (Glioma/Glioblastoma) using different pooling methods. Classification is performed using SoftMax with five-fold cross-validation, and the results are compared and presented. The brain tumour images considered in this study are collected from The Cancer Imaging Archive (TCIA) dataset. This work considered 2000 axial-plane images (1000 Glioma and 1000 Glioblastoma) with dimensions of 224×224×3 pixels, and the attained results are compared. The experimental outcome achieved with Python® confirms that VGG16 with average pooling provides better classification accuracy (>99%) with a Decision Tree (DT) than the other methods considered.
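A hedged Keras sketch of the best-performing configuration named above, VGG16 with average pooling feeding a decision tree (the data are placeholders and input preprocessing is omitted):

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from sklearn.tree import DecisionTreeClassifier

# Pre-trained VGG16 as a frozen feature extractor with global average pooling.
base = VGG16(weights="imagenet", include_top=False, pooling="avg",
             input_shape=(224, 224, 3))

images = np.random.rand(8, 224, 224, 3)   # placeholder 224x224x3 MRI slices
labels = np.random.randint(0, 2, 8)       # Glioma vs. Glioblastoma

features = base.predict(images)           # (8, 512) pooled deep features
clf = DecisionTreeClassifier().fit(features, labels)
```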
Predicting Lung Cancer Survival Time Using Deep Learning Techniques
Lung cancer is one of the most commonly diagnosed cancers. Most studies find that lung cancer patients survive up to 5 years after the cancer is found. An accurate prognosis is the most critical aspect of the clinical decision-making process, and predicting patients' survival time helps healthcare professionals make treatment recommendations. In this paper, we used various deep learning methods to predict the survival time (in days) of Non-Small Cell Lung Cancer (NSCLC) patients, evaluated on a clinical and radiomics dataset extracted from computed tomography (CT) images of 300 patients. The concordance index (C-index) was used to evaluate the models. Among the deep learning approaches we applied, the best accuracy, 70.05% on the OWKIN task, was obtained with a Multilayer Perceptron (MLP), which outperforms the baseline model provided by the OWKIN task organizers.
Multi-scale features exist widely in biomedical images. For example, the scale of lesions may vary greatly across diseases. Effective representation of multi-scale features is essential for fully perceiving and understanding objects, which underpins model performance. However, in biomedical image tasks, insufficient data may prevent models from effectively capturing multi-scale features. In this paper, we propose the Feature Pyramid Block (FPB), a novel structure that improves multi-scale feature representation within a single convolutional layer and can be easily plugged into existing convolutional networks. Experiments on public biomedical image datasets show consistent performance improvement with FPB. Furthermore, FPB converges faster and has lower computational costs, proving the high efficiency of our method.
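A hedged PyTorch sketch of a pyramid-style block in this spirit, parallel dilated branches inside one layer, concatenated and re-projected with a residual connection; the branch widths and dilation rates are assumptions, not the paper's FPB specification:

```python
import torch
import torch.nn as nn

class PyramidBlock(nn.Module):
    def __init__(self, channels):  # channels must be divisible by 4
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels // 4, 3, padding=d, dilation=d)
            for d in (1, 2, 4, 8)                      # multi-scale receptive fields
        ])
        self.project = nn.Conv2d(channels, channels, 1)  # fuse back to input width

    def forward(self, x):
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return self.project(torch.relu(multi_scale)) + x  # residual, so it plugs in
```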
Adversarial EM For Partially-Supervised Image-Quality Enhancement: Application To Low-Dose PET Imaging
For image-quality enhancement, typical deep neural networks (DNNs) use large training sets and full supervision, but they generalize poorly to out-of-distribution (OOD) images exhibiting degradations absent during training. Also, having pairs of corresponding images at the desired quality and low quality becomes infeasible in many scenarios in medical image analysis. We propose a novel adversarial-learning framework for DNN-based image-quality enhancement which also incorporates variational modeling in latent space using expectation maximization (EM). Our EM framework extends to partially supervised learning that relaxes the quality requirement for reference images-used for DNN-loss computation during training-to a range in between the input/low quality and the desired/high quality. Results on two public datasets of positron-emission tomography show our framework’s benefits in generalizing to OOD images and visualizing DNN-output uncertainty while learning without full supervision.
Multi-Modal Medical Image Fusion for Non-Small Cell Lung Cancer Classification
The early detection and nuanced subtype classification of non-small cell lung cancer (NSCLC), a predominant cause of cancer mortality worldwide, is a critical and complex issue. In this paper, we introduce an innovative integration of multi-modal data, synthesizing fused medical imaging (CT and PET scans) with clinical health records and genomic data. This unique fusion methodology leverages advanced machine learning models, notably MedClip and BEiT, for sophisticated image feature extraction, setting a new standard in computational oncology. Our research surpasses existing approaches, as evidenced by a substantial enhancement in NSCLC detection and classification precision. The results showcase notable improvements across key performance metrics, including accuracy, precision, recall, and F1-score. Specifically, our leading multi-modal classifier model records an impressive accuracy of 94.04%. We believe that our approach has the potential to transform NSCLC diagnostics, facilitating earlier detection and more effective treatment planning and, ultimately, leading to superior patient outcomes in lung cancer care.
User-guided graph reduction for fast image segmentation
Graph-based segmentation methods such as the random walker (RW) are known to be computationally expensive. For high resolution images, user interaction with the algorithm is significantly affected. This paper introduces a novel seeding approach for graph-based segmentation that reduces computation time. Instead of marking foreground and background pixels, the user roughly marks the object boundary forming separate regions. The image pixels are then grouped into a hierarchy of increasingly large layers based on their distance from these markings. Next, foreground and background seeds are automatically generated according to the hierarchical layers of each region. The highest layers of the hierarchy are ignored leading to a significant graph reduction. Finally, validation experiments based on multiple automatically generated input seeds were carried out on a variety of medical images. Results show a significant gain in time for high resolution images using the new approach.
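For reference, the baseline being accelerated looks roughly like this with scikit-image's random walker; the seed layout here is an arbitrary placeholder:

```python
import numpy as np
from skimage.segmentation import random_walker

image = np.random.rand(256, 256)          # placeholder high-resolution slice
markers = np.zeros_like(image, dtype=np.uint8)
markers[100:110, 100:110] = 1             # automatically generated foreground seeds
markers[:10, :] = 2                       # automatically generated background seeds

# Every unseeded pixel becomes a graph node, which is what makes RW expensive;
# the proposed hierarchy-based reduction shrinks exactly this graph.
labels = random_walker(image, markers, beta=130)
```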
Fast Super-Resolution in MRI Images Using Phase Stretch Transform, Anchored Point Regression and Zero-Data Learning
Medical imaging is fundamentally challenging due to absorption and scattering in tissues and the need to minimize the patient's exposure to harmful radiation. Common problems are low spatial resolution, limited dynamic range, and low contrast. These predicaments have fueled interest in enhancing medical images through digital post-processing. In this paper, we propose and demonstrate an algorithm for real-time inference suitable for edge computing. Our locally adaptive learned filtering technique, named Phase Stretch Anchored Regression (PhSAR), combines the Phase Stretch Transform for extracting local features in visually impaired images with clustered anchored points representing the image feature space and fast regression-based learning. In contrast with the recent widely used deep neural networks for image super-resolution, our algorithm achieves significantly faster inference with less hallucination of image details, and it is interpretable. Tests on brain MRI images using zero-data learning reveal its robustness, with explicit PSNR improvement and lower latency compared to relevant benchmarks.
Breast Lesion Segmentation in DCE-MRI using Multi-Objective Clustering with NSGA-II
Breast cancer causes the most deaths of any cancer in women. Early detection and diagnosis leading to early treatment can save lives. Computer-assisted methodologies for breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) segmentation can help radiologists and doctors diagnose the disease and plan further treatment. In this article, we propose a breast DCE-MRI segmentation method using a hard-clustering technique with the Non-dominated Sorting Genetic Algorithm (NSGA-II). The well-known cluster validity metrics DB-index and Dunn-index are utilized as objective functions in the NSGA-II algorithm. Noise and intensity inhomogeneities are removed from the MRI in a preprocessing step, as these artifacts affect the segmentation process. After segmentation, the lesions are separated and finally localized in the MRI. The devised method is applied to segment 10 sagittal T2-weighted fat-suppressed DCE-MRI scans of the breast; a comparative study shows that it outperforms the K-means algorithm both quantitatively and qualitatively.
Automatic fissure detection in CT images based on the genetic algorithm
Lung cancer is one of the most frequently occurring cancers and has a very low five-year survival rate. Computer-aided diagnosis (CAD) helps reduce the burden on radiologists and improve the accuracy of abnormality detection during CT image interpretation. Owing to rapid developments in scanner technology, the volume of medical imaging data keeps growing, and CAD systems always require automated segmentation of the target organ region. Although the analysis of lung fissures provides important information for treatment, extracting fissures automatically from CT values remains a challenge because the appearance of lung fissures is fuzzy and indefinite. Since the oblique fissures are visualized more easily than other fissures on chest CT images, they are used to check the exact localization of lesions. In this paper, we propose a fully automatic fissure detection method based on a genetic algorithm to identify the oblique fissures. When tested on 87 slices, the method identified the oblique fissures in the right and left lungs with accuracy rates of 97% and 86%, respectively.
Pancreatic Carcinoma Detection with Publicly available Radiological Images: A Systematic Analysis
Chhikara, Jasmine
Goel, Nidhi
Rathee, Neeru
2022Conference Paper, cited 0 times
Pancreas-CT
CPTAC-PDA
Medical Segmentation Decathlon 2021
Deep Learning
Computer Aided Diagnosis (CADx)
Pancreatic carcinoma is the fifth deadliest malignancy worldwide and accounts for a large share of total cancer mortality every year. The main cause of the high mortality and minimal survival rate is the delayed detection of abnormal cell growth in the pancreatic regions of diagnosed patients. In recent years, researchers have put effort into the early detection of pancreatic carcinoma in radiological imaging scans of the whole abdomen. In this paper, the authors systematically review the data reported and the various works done on publicly available imaging datasets of pancreatic cancer. The analyzed datasets are Pancreas-Computed Tomography and Clinical Proteomic Tumor Analysis Consortium Pancreatic Ductal Adenocarcinoma from The Cancer Imaging Archive, and Pancreas Tumor from the Medical Segmentation Decathlon online repository. The review reports incidences by age group, clinical history, physical condition, pathological findings, tumor nature, region, stage, and tumor size of the examined patients. The outcomes of the categorized subjects will aid academicians, research scholars, and industrialists in understanding the propagation of pancreatic cancer for early detection in computer-aided systems.
AI Based Classification Framework For Cancer Detection Using Brain MRI Images
Brain imaging technologies play an important role in medical diagnosis by providing new views of brain anatomy, giving greater insight into brain condition and function. Image processing is used in medical science to assist the early detection and treatment of life-critical illness. In this paper, cancer detection from brain magnetic resonance imaging (MRI) images using a combination of a convolutional neural network (CNN) and a sparse stacked autoencoder is presented. This combination is found to significantly improve the accuracy and effectiveness of the classification process. The proposed method is coded in MATLAB and verified on a dataset of 120 MRI images. The results show that the proposed classifier is very effective in classifying and grading brain tumor MRI images.
Classification of LGG/GBM Brain Tumor in MRI Using Deep-Learning Schemes: A Study
Brain abnormalities require immediate medical attention, including diagnosis and treatment. One of the most severe brain disorders is the brain tumor, and magnetic resonance imaging (MRI) is frequently used for clinical-level screening of these illnesses. In this work, a deep learning strategy is implemented to categorize brain MRI images into low-grade gliomas (LGG) and glioblastoma multiforme (GBM). The steps in this scheme are: (i) data gathering and 3D-to-2D conversion; (ii) deep feature mining using a selected scheme; (iii) binary classification using SoftMax; and (iv) comparative analysis of selected deep learning techniques to determine the best model for further refinement. The LGG/GBM images are gathered from The Cancer Imaging Archive (TCIA) database. The results demonstrate that max-pooling offers higher accuracy than average-pooling based models; the performance of the created scheme is validated using both average- and max-pooling. Among the chosen models, VGG16 performs best for the LGG/GBM detection task.
U-Net based Pancreas Segmentation from Computed Tomography Images
Delineation of the pancreas from computed tomography (CT) scans is burdensome owing to its anatomic variation in size and shape and its position relative to adjacent organs. This work explores the U-Net architecture for delineating the pancreas from abdominal CT volumes. U-Net was trained for varying numbers of epochs to identify the optimal learning environment for better segmentation performance. Trained for pancreas segmentation on a CT dataset from The Cancer Imaging Archive (TCIA), U-Net achieved a Dice similarity coefficient of 0.8138, an intersection over union of 0.7962, and a boundary F1 score of 0.8036.
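A minimal sketch of the headline metric, the Dice similarity coefficient between a predicted mask and the ground truth:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())
```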
A 3D semi-automated co-segmentation method for improved tumor target delineation in 3D PET/CT imaging
The planning of radiotherapy is increasingly based on multi-modal imaging techniques such as positron emission tomography (PET)-computed tomography (CT), since PET/CT provides not only anatomical but also functional assessment of the tumor. In this work, we propose a novel co-segmentation method, utilizing both the PET and CT images, to localize the tumor. The method constructs the segmentation problem as minimization of a Markov random field model, which encapsulates features from both imaging modalities. The minimization problem can then be solved by the maximum flow algorithm, based on graph cuts theory. The proposed tumor delineation algorithm was validated in both a phantom, with a high-radiation area, and in patient data. The obtained results show significant improvement compared to existing segmentation methods, with respect to various qualitative and quantitative metrics.
Breast MRI Registration Using Metaheuristic Algorithms
Ten percent of women worldwide suffer from breast cancer during their lives. Breast MRI registration is an important task for aligning pre- and post-contrast MR images for diagnosis and for classifying a cancer as benign or malignant using pharmacokinetic analysis. It is also essential for aligning images taken at various time intervals to isolate small-interval lesions, and it is useful for monitoring various cancer therapies. The emphasis of image registration algorithms has also shifted from control-point-based semi-automated techniques to sophisticated voxel-based automated techniques that use mutual information as a similarity measure. In this manuscript, breast MRI registration using the Multi-Verse Optimization (MVO) algorithm and the Student Psychology Based Optimization (SPBO) algorithm is proposed; MVO and SPBO are metaheuristic optimization algorithms that we apply to register breast MR images. We consider 40 pairs of pre- and post-contrast breast MR images and register them using the MVO and SPBO algorithms. The results of the SPBO-based registration method are compared with those of the MVO-based method, and the experiments show that the SPBO-based method statistically outperforms the MVO-based method in registering breast MR images.
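A short sketch of the similarity measure the metaheuristics maximize, histogram-based mutual information between the fixed and moving images (the bin count is an assumption):

```python
import numpy as np

def mutual_information(fixed, moving, bins=64):
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint intensity distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)      # marginals
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

# An optimizer (MVO or SPBO here) searches transform parameters maximizing this value.
```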
Kidney MRI Segmentation for Lesion Detection Using Clustering with Slime Mould Algorithm
Both the incidence and mortality rates of kidney cancer are increasing worldwide. Imaging examinations followed by effective systemic therapies can reduce the mortality rate. In this article, a new method to segment kidney MRI for lesion detection is developed using a hard-clustering technique with the Slime Mould Algorithm (SMA). First, a new partitional (hard) clustering technique is developed using SMA, which searches for the optimal cluster centers for segmentation. In the preprocessing steps of the proposed method, noise and intensity inhomogeneities are removed from the MR images, as these artifacts affect the segmentation process. Regions of Interest (ROIs) are selected, and the clustering is carried out using the SMA-based clustering technique. After the clustering, i.e., segmentation, the lesions are separated from the segmented images and finally localized in the MR images as postprocessing steps. The quantitative results are measured with the well-known Dunn-index cluster validity measure and compared with those of the K-means algorithm. Both the quantitative and qualitative (i.e., visual) results show that the proposed method performs better than K-means.
Brain Tumor Segmentation based on Knowledge Distillation and Adversarial Training
3D MRI brain tumor segmentation is a reliable method for disease diagnosis and future treatment planning. Early on, brain tumor segmentation was mostly done manually, but manual segmentation of 3D MRI brain tumors requires professional anatomical knowledge and may be inaccurate. In this paper, we propose a 3D MRI brain tumor segmentation architecture based on the encoder-decoder structure. Specifically, we introduce knowledge distillation and adversarial training, which compress the model and improve its accuracy and robustness. Furthermore, we obtain soft targets by training multiple teacher networks and then apply them to the student network. Finally, we evaluate our method on the challenging BraTS dataset, where our proposed model outperforms state-of-the-art methods.
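A hedged sketch of the distillation component, KL divergence between averaged teacher soft targets and the student's predictions (the temperature value is an assumption):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits_list, T=2.0):
    """Match the student to the mean of the teachers' temperature-softened outputs."""
    soft_targets = torch.stack(
        [F.softmax(t / T, dim=1) for t in teacher_logits_list]).mean(0)
    log_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * T * T
```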
Lung nodule detection in CT images using deep convolutional neural networks
Golan, Rotem
Jacob, Christian
Denzinger, Jörg
2016Conference Proceedings, cited 26 times
Website
LIDC-IDRI
Radiomics
Computer Aided Detection (CADe)
Computed Tomography (CT)
Early detection of lung nodules in thoracic Computed Tomography (CT) scans is of great importance for the successful diagnosis and treatment of lung cancer. Due to improvements in screening technologies, and an increased demand for their use, radiologists are required to analyze an ever increasing amount of image data, which can affect the quality of their diagnoses. Computer-Aided Detection (CADe) systems are designed to assist radiologists in this endeavor. Here, we present a CADe system for the detection of lung nodules in thoracic CT images. Our system is based on (1) the publicly available Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, which contains 1018 thoracic CT scans with nodules of different shape and size, and (2) a deep Convolutional Neural Network (CNN), which is trained, using the back-propagation algorithm, to extract valuable volumetric features from the input data and detect lung nodules in sub-volumes of CT images. Considering only those test nodules that have been annotated by four radiologists, our CADe system achieves a sensitivity (true positive rate) of 78.9% with 20 false positives (FPs) per scan, or a sensitivity of 71.2% with 10 FPs per scan. This is achieved without using any segmentation or additional FP reduction procedures, both of which are commonly used in other CADe systems. Furthermore, our CADe system is validated on a larger number of lung nodules compared to other studies, which increases the variation in their appearance, and therefore, makes their detection by a CADe system more challenging.
Computer-aided detection of Pulmonary Nodules based on SVM in thoracic CT images
Eskandarian, Parinaz
Bagherzadeh, Jamshid
2015Conference Proceedings, cited 12 times
Website
LIDC-IDRI
Computer-aided diagnosis of solitary pulmonary nodules in X-ray CT images enables the early detection of lung cancer. In this study, a computer-aided system for detecting pulmonary nodules on CT scans, based on a support vector machine classifier, is provided for diagnosing solitary pulmonary nodules. First, data mining techniques reduce the volume of data; then the chest area is partitioned, suspicious nodules are identified, and finally the nodules are detected. Compared with threshold-based methods, the support vector machine classifier describes lung areas more accurately for classification. In this study, the false-positive rate is reduced by combining thresholding with the support vector machine classifier. Experimental results on data from 147 patients in the LIDC lung image database show that the proposed system achieves a sensitivity of 89.9% with 3.9 false positives per scan, demonstrating good performance compared to previous systems.
Enhancing Brain Tumor Classification: A Comparative Study of Single-Model and Multi-Model Fusion Approaches
Brain tumors are among the leading causes of cancer death worldwide. Deep learning has been successful in tasks such as classification, but it is limited by reliance on a single imaging modality: a single modality may yield high performance yet is unreliable for accurate treatment and diagnosis. This study aims to improve brain tumor classification using deep learning and multi-modality fusion techniques. The study employs three fusion approaches: image-level fusion, feature-level fusion, and wavelet-based fusion. Extensive experiments were conducted on the BRATS2020 dataset. Initially, we train and evaluate the performance of 21 baseline models, encompassing 20 CNN-based architectures alongside a vision transformer model. We then identify the highest-performing model within each class for fusion. Furthermore, inspired by the baseline results, we feed each modality to its respective best-performing model and fuse the outputs for multi-modality model-level fusion. Finally, we employ wavelet-based fusion to optimize information integration, applying the Discrete Wavelet Transform to our dataset. Model-level fusion outperformed image fusion across all evaluation metrics, by 1% accuracy, 4.7% precision, 6.6% recall, and 0.7% F1-score.
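A hedged sketch of the wavelet-based fusion step, combining two modalities' DWT subbands; the max-rule for detail coefficients and averaging for approximations are common conventions assumed here, not necessarily the paper's rules:

```python
import numpy as np
import pywt

def wavelet_fuse(mod_a, mod_b, wavelet="db1"):
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(mod_a, wavelet)
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(mod_b, wavelet)
    fused = (
        (ca_a + ca_b) / 2,                           # average the approximations
        tuple(np.where(np.abs(x) > np.abs(y), x, y)  # keep the stronger detail
              for x, y in ((ch_a, ch_b), (cv_a, cv_b), (cd_a, cd_b))),
    )
    return pywt.idwt2(fused, wavelet)
```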
Brain tumour segmentation is a crucial task in medical imaging that involves identifying and delineating the boundaries of tumour tissues in the brain from MRI scans. Accurate segmentation plays an indispensable role in the diagnosis, treatment planning, and monitoring of patients with brain tumours. This study presents a novel approach to address the class imbalance prevalent in brain tumour segmentation using a shared-encoder multi-class segmentation framework. The proposed method involves training a single encoder class learner and multiple decoder class learners, which are designed to learn feature representation of a certain class subset, in addition to a shared encoder between them that extracts common features across all classes. The outputs of the complement-class learners are combined and propagated to a meta-learner to obtain the final segmentation map. The authors evaluate their method on a publicly available brain tumour segmentation dataset (BraTS20) and assess performance against the 2D U-Net model trained on all classes using standard evaluation metrics for multi-class semantic segmentation. The IoU and DSC scores for the proposed architecture stands at 0.644 and 0.731, respectively, as compared to 0.604 and 0.690 obtained by the base models. Furthermore, our model exhibits significant performance boosts in individual classes, as evidenced by the DSC scores of 0.588, 0.734, and 0.684 for the necrotic tumour core, peritumoral edema, and the GD-enhancing tumour classes, respectively. In contrast, the 2D-Unet model yields DSC scores of 0.554, 0.699, and 0.641 for the same classes, respectively. The approach exhibits notable performance gains in segmenting the T1-Gd class, which not only poses a formidable challenge in terms of segmentation but also holds paramount clinical significance for radiation therapy.
A Pre-study on the Layer Number Effect of Convolutional Neural Networks in Brain Tumor Classification
Convolutional Neural Networks significantly influenced the revolution in Artificial Intelligence and Deep Learning and have become a basic model for image classification. However, CNNs can be applied in different architectures and have many parameters that require several experiments to reach optimal results in applications. The number of images used, the input size of the images, and the number of layers and their parameters are the main factors that directly affect a model's success. In this study, seven CNN architectures with different numbers of convolutional and dense layers were applied to the Brain Tumor Progression dataset. The architectures were designed by gradually decreasing and increasing the layers, and their performance was analyzed using five-fold cross-validation. The results show that deeper architectures can reduce performance in binary classification tasks by up to 7%, and models with the fewest layers achieved the best sensitivity. Overall, networks with two convolutional and two fully connected layers produced superior results, depending on the filter and neuron number adjustments within their layers. These results may help researchers determine an initial architecture in binary classification studies.
On How to Push Efficient Medical Semantic Segmentation to the Edge: the SENECA approach
Berzoini, Raffaele
D'Arnese, Eleonora
Conficconi, Davide
2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)2022Journal Article, cited 0 times
CT-ORG
Segmentation
Graphics Processing Units (GPU)
U-Net
Semantic segmentation is the process of assigning each input image pixel a value representing a class, enabling the clustering of pixels into object instances. It is a widely employed computer vision task in fields such as autonomous driving and medical image analysis. In medical practice in particular, semantic segmentation identifies different regions of interest within an image, such as different organs or anomalies like tumors. Fully Convolutional Networks (FCNs) have been employed to solve semantic segmentation in different fields and have found their way into the medical one. In this context, the low contrast among semantically different areas, constraints on energy consumption, and limited computation resources increase the complexity and limit their adoption in daily practice. Based on these considerations, we propose SENECA to bring medical semantic segmentation to the edge with high energy efficiency and low segmentation time while preserving accuracy. We reached a throughput of 335.4 ± 0.34 frames per second on the FPGA, 4.65× better than its GPU counterpart, with a global Dice score of 93.04% ± 0.07 and a 12.7× improvement in energy efficiency with respect to the GPU.
3D CNN-BN: A Breakthrough in Colorectal Cancer Detection with Deep Learning Technique
We propose a patient-specific arterial input function (AIF) and tracer kinetic (TK) model-driven network, termed AIF-TK-net, to rapidly estimate the extended Tofts-Kety kinetic model parameters in DCE-MRI. The network maps an input comprising an image patch of the DCE time series and the patient-specific AIF to the output image patch of the TK parameters. We leverage the open-source NEURO-RIDER database of brain tumor DCE-MRI scans to train our network. Once trained, our model rapidly infers the TK maps of unseen DCE-MRI images, on the order of 0.34 sec/slice for a 256x256x65 time-series volume on an NVIDIA GeForce GTX 1080 Ti GPU. We show its utility on high-time-resolution DCE-MRI datasets where significant variability in AIFs across patients exists, and we demonstrate that the proposed AIF-TK-net considerably improves TK parameter estimation accuracy compared to a network that does not utilize the patient AIF.
Liver Segmentation in CT with MRI Data: Zero-Shot Domain Adaptation by Contour Extraction and Shape Priors
In this work we address the problem of domain adaptation for segmentation tasks with deep convolutional neural networks. We focus on managing the domain shift from MRI to CT volumes on the example of 3D liver segmentation. Domain adaptation between modalities is particularly of practical importance, as different hospital departments usually tend to use different imaging modalities and protocols in their clinical routine. Thus, training a model with source data from one department may not be sufficient for application in another institution. Most adaptation strategies make use of target domain samples and often additionally incorporate the corresponding ground truths from the target domain during the training process. In contrast to these approaches, we investigate the possibility of training our model solely on source domain data sets, i.e. we apply zero-shot domain adaptation. To compensate the missing target domain data, we use prior knowledge about both modalities to steer the model towards more general features during the training process. We particularly make use of fixed Sobel kernels to enhance contour information and apply anatomical priors, learned separately by a convolutional autoencoder. Although we completely discard including the target domain in the training process, our proposed approach improves a vanilla U-Net implementation drastically and yields promising segmentation results.
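A minimal sketch of the contour-enhancement idea, fixed Sobel filtering appended as an extra input channel (the exact channel layout is an assumption):

```python
import numpy as np
from scipy import ndimage as ndi

def with_contours(volume):
    """Stack a fixed-Sobel gradient magnitude onto the intensity volume."""
    grads = [ndi.sobel(volume.astype(float), axis=a) for a in range(3)]
    magnitude = np.sqrt(sum(g * g for g in grads))   # modality-agnostic contours
    return np.stack([volume, magnitude])             # (2, D, H, W) network input
```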
Spatial Decomposition For Robust Domain Adaptation In Prostate Cancer Detection
The utility of high-quality imaging of Prostate Cancer (PCa) using 3.0 Tesla MRI (versus 1.5 Tesla) is well established, yet a vast majority of MRI units across many countries are 1.5 Tesla. Recently, Deep Learning has been applied successfully to augment radiological interpretation of medical images. However, training such models requires very large amount of data, and often the models do not generalize well to data with different acquisition parameters. To address this, we introduce domain standardization, a novel method that enables image synthesis between domains by separating anatomy- and modality-related factors of images. Our results show an improved PCa classification with an AUC of 0.75 compared to traditional transfer learning methods. We envision domain standardization to be applied as a promising tool towards enhancing the interpretation of lower resolution MRI images, reducing the barriers of the potential uptake of deep models for jurisdictions with smaller populations.
Predicting Mutation Status and Recurrence Free Survival in Non-Small Cell Lung Cancer: A Hierarchical CT Radiomics – Deep Learning Approach
Non-Small Cell Lung Cancer (NSCLC) is the world's leading cause of cancer deaths. A significant portion of these patients develop recurrence despite curative resection. Prognostic modeling of recurrence free survival in NSCLC has been attempted using computed tomography (CT) imaging features. Radiomic features have also been used to identify mutation subtypes in various cancers, however, the implications of such features on eventual patient outcome are unclear. Studies have shown that genetic mutation subtypes in lung cancers (KRAS and EGFR) have imaging correlates that can be detected using radiomic features from CT scans. In this study, we provide a degree of interpretability to quantitative imaging features predictive of mutation status by demonstrating their association with recurrence free survival using a hierarchical CT radiomics - deep learning pipeline.
Enhanced-Quality GAN (EQ-GAN) on Lung CT Scans: Toward Truth and Potential Hallucinations
Lung Computed Tomography (CT) scans are extensively used to screen lung diseases. Strategies such as large slice spacing and low-dose CT scans are often preferred to reduce radiation exposure and therefore the risk for patients' health. The counterpart is a significant degradation of image quality and/or resolution. In this work we investigate a generative adversarial network (GAN) for lung CT image enhanced-quality (EQ). Our EQ-GAN is trained on a high-quality lung CT cohort to recover the visual quality of scans degraded by blur and noise. The capability of our trained GAN to generate EQ CT scans is further illustrated on two test cohorts. Results confirm gains in visual quality metrics, remarkable visual enhancement of vessels, airways and lung parenchyma, as well as other enhancement patterns that require further investigation. We also compared automatic lung lobe segmentation on original versus EQ scans. Average Dice scores vary between lobes, can be as low as 0.3 and EQ scans enable segmentation of some lobes missed in the original scans. This paves the way to using EQ as pre-processing for lung lobe segmentation, further research to evaluate the impact of EQ to add robustness to airway and vessel segmentation, and to investigate anatomical details revealed in EQ scans.
Lung Cancer Identification via Deep Learning: A Multi-Stage Workflow
Lung cancer diagnosis involves different screening exams concluding with a biopsy. Although it is among the most diagnosed, lung cancer is characterized by a very high mortality rate caused by its aggressive nature. Though a swift identification is essential, the current procedure requires multiple physicians to visually inspect many images, leading to a lengthy analysis time. In this context, to support the radiologists and automate such repetitive processes, Deep Learning (DL) techniques have found their way as helpful diagnosis support tools. With this work, we propose an end-to-end multi-step framework for lung cancer localization within routinely acquired Computed Tomography images. The framework is composed of a first step of lung segmentation, followed by a patch classification model, and ends with a mass segmentation module. Lung segmentation reaches an accuracy of 99.6% even when considerable damages are present, while the patch classifier achieves a sensitivity of 85.48% in identifying patches containing masses. Finally, we evaluate the end-to-end framework for mass segmentation, which proves to be the most challenging task reaching a mean Dice coefficient of 68.56%.
Multi-Class Brain Tumor Segmentation via 3D and 2D Neural Networks
Brain tumor segmentation is an important and time-consuming part of the usual clinical diagnosis process. Multi-class segmentation of different tumor types is a challenging task, due to differences in shape, size, location, and scanner parameters. Many 2D and 3D convolutional neural network architectures have been proposed to address this problem with significant success. The 2D approach is generally faster and more popular for such problems, but 3D models can simultaneously improve segmentation quality: accounting for context along the sagittal plane lets the network learn three-dimensional features, at the cost of computationally expensive 3D operations that increase training time and decrease inference speed. In this paper, we compare the 2D and 3D approaches on two MRI datasets: one from the BraTS 2020 competition and a private Siberian brain tumor dataset. In each dataset, every scan comprises four sequences (T1, T1C, T2, and T2-FLAIR) annotated by two certified neuro-radiologist specialists. The datasets differ in dimension, grade set, and tumor type. Numerical comparison was performed using the Dice score, and we provide a case-by-case analysis of the samples that caused the most difficulty for the models. The results demonstrate that 3D methods significantly outperform 2D ones while remaining robust to data source and type, bringing us a little closer to AI-assisted diagnosis.
Prognostic value of multimodal MRI tumor features in Glioblastoma multiforme using textural features analysis
Upadhaya, Taman
Morvan, Yannick
Stindel, Eric
Reste, Le
Hatt, Mathieu
2015Conference Proceedings, cited 12 times
Website
TCGA-GBM
Radiomics
BRAIN
Support Vector Machine (SVM)
Image-derived features (“radiomics”) are increasingly being considered for patient management in (neuro)oncology and radiotherapy. In Glioblastoma multiforme (GBM), simple features are often used by clinicians in clinical practice, such as the size of the tumor or the relative sizes of the necrosis and active tumor. First-order statistics provide limited characterization power because they do not incorporate spatial information and thus cannot differentiate patterns. In this work, we present the methodological framework for building a prognostic model based on heterogeneity textural features of multimodal MRI sequences (T1, T1-contrast, T2 and FLAIR) in GBM. The proposed workflow consists of i) registering the available 3D multimodal MR images and segmenting the tumor volume, ii) extracting image features such as heterogeneity metrics and shape indices, and iii) building a prognostic model using a Support Vector Machine by selecting, ranking and combining optimal features. We present preliminary results obtained for the classification of 40 patients into short (≤ 15 months) or long (> 15 months) overall survival, validated using leave-one-out cross-validation. Our results suggest that several textural features in each MR sequence have prognostic value in GBM, a classification accuracy of 90% (sensitivity 85%, specificity 95%) being obtained by combining both T1 sequences. Future work will consist of i) adding more patients for validation using training and testing groups, ii) considering additional features, iii) building a fully multimodal MRI model by combining features from more than two sequences, iv) considering survival as a continuous variable, and v) combining image-derived features with clinical and histopathological data to build an even more accurate model.
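Step iii) of this workflow, an SVM validated with leave-one-out cross-validation, maps naturally onto scikit-learn. A minimal sketch, assuming a placeholder feature matrix of 40 patients with synthetic textural features and survival labels (not the study's data):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))    # placeholder: 40 patients x 12 textural features
y = rng.integers(0, 2, size=40)  # placeholder: short (0) vs long (1) survival

# Standardize features, then classify with an RBF-kernel SVM,
# validated by leave-one-out cross-validation as in the abstract.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.2%}")
```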
Two-stage fusion set selection in multi-atlas-based image segmentation
Conventional multi-atlas-based segmentation demands pairwise full-fledged registration between each atlas image and the target image, which leads to high computational cost and poses a great challenge in the new era of big data. On the other hand, only the most relevant atlases should contribute to the final label fusion. In this work, we introduce a two-stage fusion set selection method that first trims the atlas collection into an augmented subset based on a low-cost registration and a preliminary relevance metric, followed by a further refinement based on a full-fledged registration and the corresponding relevance metric. A statistical inference model is established to relate the preliminary and refined relevance metrics, and a proper augmented subset size is derived from it. Empirical evidence supported the inference model, and end-to-end performance assessment demonstrated the proposed scheme to be computationally efficient without compromising segmentation accuracy.
Information theory optimization based feature selection in breast mammography lesion classification
Uthoff, Johanna
Sieren, Jessica C.
2018Conference Paper, cited 0 times
CBIS-DDSM
Feature Extraction
Radiomics
Segmentation
low set co-information
Deep Learning
Tanh activation function
BREAST
Quantitative imaging features of intensity, texture, and shape were extracted from breast lesions and surrounding tissue in 287 mammograms (150 malignant, 137 benign). A feature-set reduction method to remove highly intra-correlated features was devised using k-medoids clustering and k-fold cross-validation. A novel feature selection method using information theory was introduced, which builds a feature set for classification by determining a group of class-informative features with low set co-information. An artificial neural network was built from the selected feature set using 10 hidden-layer nodes and the tanh activation function. The resulting computer-aided diagnosis tool achieved a training accuracy of 96.2%, sensitivity of 97.6%, specificity of 95.2%, and area under the curve of 0.971, along with 97.1% sensitivity and 94.9% specificity on a blinded validation set.
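The classifier described here (one hidden layer of 10 nodes, tanh activation) corresponds closely to scikit-learn's MLPClassifier, used below as a stand-in for the paper's custom network. A hedged sketch with synthetic placeholder features in place of the extracted intensity/texture/shape descriptors:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(287, 30))    # placeholder intensity/texture/shape features
y = rng.integers(0, 2, size=287)  # placeholder labels: benign (0) vs malignant (1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# One hidden layer of 10 nodes with tanh activation, as described in the abstract.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(10,), activation="tanh",
                  max_iter=2000, random_state=0),
)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2%}")
```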
Big biomedical image processing hardware acceleration: A case study for K-means and image filtering
Most hospitals today are dealing with the big data problem, as they generate and store petabytes of patient records, most of which are in the form of medical imaging, such as pathological images, CT scans and X-rays, in their datacenters. Analyzing such large amounts of biomedical imaging data to enable discovery and guide physicians in personalized care is becoming an important focus of data mining and machine learning algorithms developed for biomedical informatics (BMI). Algorithms developed for BMI heavily rely on complex and computationally intensive machine learning and data mining methods to learn from large data. The high processing demand of big biomedical imaging data has given rise to implementations on high-end server platforms running software ecosystems optimized for dealing with large amounts of data, including Apache Hadoop and Apache Spark. However, efficiently processing such large amounts of imaging data with computationally intensive learning methods is a challenging problem on state-of-the-art high-performance computing server architectures. To address this challenge, in this paper, we introduce a scalable and efficient hardware acceleration method using low-cost commodity FPGAs interfaced with a server architecture through a high-speed interface. We present a full end-to-end implementation of big data image processing and machine learning applications in a heterogeneous CPU+FPGA architecture. We develop the MapReduce implementation of K-means and Laplacian filtering in the Hadoop Streaming environment, which allows developing mapper functions in non-Java languages suited for interfacing with an FPGA-based hardware accelerating environment. We accelerate the mapper functions through hardware+software (HW+SW) co-design, with a full implementation of the HW+SW mappers on the Zynq FPGA platform. The results show promising kernel speedups of up to 27× for large image datasets. This translates to 7.8× and 1.8× speedups in an end-to-end Hadoop MapReduce implementation of the K-means and Laplacian filtering algorithms, respectively.
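Hadoop Streaming allows mappers to be arbitrary executables that read stdin and write key/value pairs to stdout, which is what makes the hardware hand-off described above possible. A hedged sketch of a K-means assignment mapper in Python; the centroids, input format, and cluster count below are illustrative assumptions, not details from the paper:

```python
#!/usr/bin/env python3
"""Hadoop Streaming mapper: assign each input vector to its nearest K-means centroid.

Assumed input: one comma-separated feature vector per line on stdin.
Output: "<centroid_id>\t<vector>" so a reducer can recompute the centroids.
"""
import sys
import numpy as np

# In a real job the current centroids would arrive via the distributed cache;
# here they are hard-coded placeholders for a 3-feature, K=3 problem.
CENTROIDS = np.array([[0.0, 0.0, 0.0],
                      [128.0, 128.0, 128.0],
                      [255.0, 255.0, 255.0]])

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    x = np.array([float(v) for v in line.split(",")])
    k = int(np.argmin(np.linalg.norm(CENTROIDS - x, axis=1)))
    print(f"{k}\t{line}")
```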
Modeling and Operator Control of a Robotic Tool for Bidirectional Manipulation in Targeted Prostate Biopsy
Padasdao, B.
Batsaikhan, Z.
Lafreniere, S.
Rabiei, M.
Konh, B.
Int Symp Med Robot2022Journal Article, cited 0 times
Prostate-MRI-US-Biopsy
Mechanism design
Medical robotics
Medical robots and systems
Steerable catheters/needles
Surgical robotics
This work introduces the design, manipulation, and operator control of a bidirectional robotic tool for minimally invasive targeted prostate biopsy. The robotic tool is intended to serve as a compliant flexure section of active biopsy needles. The design comprises a flexure section fabricated on a nitinol tube that enables bidirectional bending via actuation of two internal tendons. The statics of the flexure section is presented and validated with experimental data. Finally, the capability of the robotic tool to reach targeted positions inside the prostate gland is evaluated.
Deep Learning–based Method for Denoising and Image Enhancement in Low-Field MRI
Deep learning has proven successful in a variety of medical image processing applications, including denoising and artifact removal. This is of particular interest for low-field Magnetic Resonance Imaging (MRI), which is promising for its affordability, compact footprint, and reduced shielding requirements, but inherently suffers from a low signal-to-noise ratio. In this work, we propose a method of simulating scanner-specific images from publicly available 1.5T and 3T databases of MR images, using a signal encoding matrix incorporating explicitly modeled imaging gradients and fields. We apply a stacked U-Net architecture to reduce noise from the system and remove artifacts due to the inhomogeneous B0 field, nonlinear gradients, undersampling of k-space and image reconstruction, in order to enhance low-field MR images. The final network is applied as a post-processing step following image reconstruction to phantom and human images acquired on a 60-67 mT MR scanner and demonstrates promising qualitative and quantitative improvements to overall image quality.
Hiding privacy and clinical information in medical images using QR code
This study aims to hide patients' private details in DICOM files by embedding QR code images of the same size using a steganographic technique. The proposed method is based on the properties of the discrete cosine transform (DCT) of the DICOM images to embed a QR code image. The proposed approach includes two main parts: the embedding of data and the extraction procedure. Moreover, the embedded QR code is reconstructed blindly from the stego DICOM without the presence of the original DICOM file. The performance of the proposed approach was tested using the TCIA COVID-19 Dataset and evaluated in terms of Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM) and Bit Error Rate (BER) values. The simulation results achieved high PSNR values ranging between 63.47 dB and 81.97 dB after embedding a QR code image within a DICOM image of the same size.
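The quoted PSNR, SSIM, and BER figures are standard image-fidelity metrics. A minimal sketch computing them between a cover and a stego image with scikit-image; the arrays and the pixel-level perturbation below are placeholders standing in for the paper's DCT-domain embedding:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
cover = rng.integers(0, 4096, size=(512, 512)).astype(np.uint16)  # placeholder frame
stego = cover.copy()
stego[::8, ::8] += 1          # stand-in perturbation for the DCT-domain QR embedding

psnr = peak_signal_noise_ratio(cover, stego, data_range=4095)
ssim = structural_similarity(cover, stego, data_range=4095)
ber = float(np.mean(cover != stego))  # crude pixel-level proxy; the paper measures
                                      # BER on the recovered QR bits instead
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}, BER = {ber:.4f}")
```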
A visual analytics approach using the exploration of multidimensional feature spaces for content-based medical image retrieval
Kumar, Ashnil
Nette, Falk
Klein, Karsten
Fulham, Michael
Kim, Jinman
IEEE Journal of Biomedical and Health Informatics2014Journal Article, cited 27 times
Website
LIDC-IDRI
Content based medical image retrieval
Improved False Positive Reduction by Novel Morphological Features for Computer-Aided Polyp Detection in CT Colonography
Ren, Yacheng
Ma, Jingchen
Xiong, Junfeng
Chen, Yi
Lu, Lin
Zhao, Jun
IEEE Journal of Biomedical and Health Informatics2018Journal Article, cited 3 times
Website
Algorithm Development
LungCT-Diagnosis
Convolutional Neural Network (CNN)
Deep learning
low-dose CT
sparse-view CT
view interpolation
Investigating the role of model-based and model-free imaging biomarkers as early predictors of neoadjuvant breast cancer therapy outcome
Kontopodis, Eleftherios
Venianaki, Maria
Manikis, George C
Nikiforaki, Katerina
Salvetti, Ovidio
Papadaki, Efrosini
Papadakis, Georgios Z
Karantanas, Apostolos H
Marias, Kostas
IEEE J Biomed Health Inform2019Journal Article, cited 0 times
QIN Breast
Breast
DCE-MRI
Imaging biomarkers (IBs) play a critical role in the clinical management of breast cancer (BRCA) patients throughout the cancer continuum for screening, diagnosis and therapy assessment especially in the neoadjuvant setting. However, certain model-based IBs suffer from significant variability due to the complex workflows involved in their computation, whereas model-free IBs have not been properly studied regarding clinical outcome. In the present study, IBs from 35 BRCA patients who received neoadjuvant chemotherapy (NAC) were extracted from dynamic contrast enhanced MR imaging (DCE-MRI) data with two different approaches, a model-free approach based on pattern recognition (PR), and a model-based one using pharmacokinetic compartmental modeling. Our analysis found that both model-free and model-based biomarkers can predict pathological complete response (pCR) after the first cycle of NAC. Overall, 8 biomarkers predicted the treatment response after the first cycle of NAC, with statistical significance (p-value<0.05), and 3 at the baseline. The best pCR predictors at first follow-up, achieving high AUC and sensitivity and specificity more than 50%, were the hypoxic component with threshold2 (AUC 90.4%) from the PR method, and the median value of kep (AUC 73.4%) from the model-based approach. Moreover, the 80th percentile of ve achieved the highest pCR prediction at baseline with AUC 78.5%. The results suggest that model-free DCE-MRI IBs could be a more robust alternative to complex, model-based ones such as kep and favor the hypothesis that the PR image-derived hypoxic image component captures actual tumor hypoxia information able to predict BRCA NAC outcome.
Bi-Modality Medical Image Synthesis Using Semi-Supervised Sequential Generative Adversarial Networks
Yang, Xin
Lin, Yi
Wang, Zhiwei
Li, Xin
Cheng, Kwang-Ting
IEEE Journal of Biomedical and Health Informatics2019Journal Article, cited 0 times
PROSTATEx
Prostate
MRI
Machine Learning
In this paper, we propose a bi-modality medical image synthesis approach based on a sequential generative adversarial network (GAN) and semi-supervised learning. Our approach consists of two generative modules that synthesize images of the two modalities in a sequential order. A method for measuring the synthesis complexity is proposed to automatically determine the synthesis order in our sequential GAN. Images of the modality with a lower complexity are synthesized first, and the counterparts with a higher complexity are generated later. Our sequential GAN is trained end-to-end in a semi-supervised manner. In supervised training, the joint distribution of bi-modality images is learned from real paired images of the two modalities by explicitly minimizing the reconstruction losses between the real and synthetic images. To avoid overfitting limited training images, in unsupervised training, the marginal distribution of each modality is learned based on unpaired images by minimizing the Wasserstein distance between the distributions of real and fake images. We comprehensively evaluate the proposed model using two synthesis tasks based on three types of evaluation metrics and user studies. Visual and quantitative results demonstrate the superiority of our method to the state-of-the-art methods, with reasonable visual quality and clinical significance. Code is made publicly available at https://github.com/hust-linyi/Multimodal-Medical-Image-Synthesis.
Variation-Aware Federated Learning With Multi-Source Decentralized Medical Image Data
Yan, Z.
Wicaksana, J.
Wang, Z.
Yang, X.
Cheng, K. T.
IEEE J Biomed Health Inform2021Journal Article, cited 69 times
Website
PROSTATEx
Federated learning
CycleGAN
Synthetic images
Classification
Magnetic Resonance Imaging (MRI)
Humans
Male
*Privacy
*Prostatic Neoplasms
Privacy concerns make it infeasible to construct a large medical image dataset by fusing small ones from different sources/institutions. Therefore, federated learning (FL) becomes a promising technique to learn from multi-source decentralized data with privacy preservation. However, the cross-client variation problem in medical image data would be the bottleneck in practice. In this paper, we propose a variation-aware federated learning (VAFL) framework, where the variations among clients are minimized by transforming the images of all clients onto a common image space. We first select one client with the lowest data complexity to define the target image space and synthesize a collection of images through a privacy-preserving generative adversarial network, called PPWGAN-GP. Then, a subset of those synthesized images, which effectively capture the characteristics of the raw images and are sufficiently distinct from any raw image, is automatically selected for sharing with other clients. For each client, a modified CycleGAN is applied to translate its raw images to the target image space defined by the shared synthesized images. In this way, the cross-client variation problem is addressed with privacy preservation. We apply the framework for automated classification of clinically significant prostate cancer and evaluate it using multi-source decentralized apparent diffusion coefficient (ADC) image data. Experimental results demonstrate that the proposed VAFL framework stably outperforms the current horizontal FL framework. As VAFL is independent of deep learning architectures for classification, we believe that the proposed framework is widely applicable to other medical image classification tasks.
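The PPWGAN-GP generator named above builds on the WGAN-GP objective, whose distinguishing term is a gradient penalty on interpolated samples. A hedged PyTorch sketch of that term (the critic, batch shapes, and penalty weight are assumptions, not details from the paper):

```python
import torch

def gradient_penalty(critic, real, fake, weight=10.0):
    """WGAN-GP term: push the critic's gradient norm toward 1 on interpolates."""
    batch = real.size(0)
    eps = torch.rand(batch, 1, 1, 1, device=real.device)  # assumes NCHW images
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0].view(batch, -1)
    return weight * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# Critic step (sketch): loss = fake_scores.mean() - real_scores.mean()
#                              + gradient_penalty(critic, real_batch, fake_batch)
```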
Reconstruction-Assisted Feature Encoding Network for Histologic Subtype Classification of Non-Small Cell Lung Cancer
Li, Haichun
Song, Qilong
Gui, Dongqi
Wang, Minghui
Min, Xuhong
Li, Ao
IEEE Journal of Biomedical and Health Informatics2022Journal Article, cited 0 times
NSCLC-Radiomics
Accurate histological subtype classification between adenocarcinoma (ADC) and squamous cell carcinoma (SCC) using computed tomography (CT) images is of great importance to assist clinicians in determining treatment and therapy plans for non-small cell lung cancer (NSCLC) patients. Although current deep learning approaches have achieved promising progress in this field, they are often difficult to capture efficient tumor representations due to inadequate training data, and in consequence show limited performance. In this study, we propose a novel and effective reconstruction-assisted feature encoding network (RAFENet) for histological subtype classification by leveraging an auxiliary image reconstruction task to enable extra guidance and regularization for enhanced tumor feature representations. Different from existing reconstruction-assisted methods that directly use generalizable features obtained from shared encoder for primary task, a dedicated task-aware encoding module is utilized in RAFENet to perform refinement of generalizable features. Specifically, a cascade of cross-level non-local blocks are introduced to progressively refine generalizable features at different levels with the aid of lower-level task-specific information, which can successfully learn multi-level task-specific features tailored to histological subtype classification. Moreover, in addition to widely adopted pixel-wise reconstruction loss, we introduce a powerful semantic consistency loss function to explicitly supervise the training of RAFENet, which combines both feature consistency loss and prediction consistency loss to ensure semantic invariance during image reconstruction. Extensive experimental results show that RAFENet effectively addresses the difficult issues that cannot be resolved by existing reconstruction-based methods and consistently outperforms other state-of-the-art methods on both public and in-house NSCLC datasets. Supplementary material is available at https://github.com/lhch1994/Rafenet_sup_material.
Customized Federated Learning for Multi-Source Decentralized Medical Image Classification
Wicaksana, J.
Yan, Z.
Yang, X.
Liu, Y.
Fan, L.
Cheng, K. T.
IEEE J Biomed Health Inform2022Journal Article, cited 4 times
Website
PROSTATEx
Algorithm Development
Federated learning
Male
Humans
*Deep Learning
Privacy
*Skin Diseases
PROSTATE
The performance of deep networks for medical image analysis is often constrained by limited medical data, which is privacy-sensitive. Federated learning (FL) alleviates the constraint by allowing different institutions to collaboratively train a federated model without sharing data. However, the federated model is often suboptimal with respect to the characteristics of each client's local data. Instead of training a single global model, we propose Customized FL (CusFL), for which each client iteratively trains a client-specific/private model based on a federated global model aggregated from all private models trained in the immediate previous iteration. Two overarching strategies employed by CusFL lead to its superior performance: 1) the federated model is mainly for feature alignment and thus only consists of feature extraction layers; 2) the federated feature extractor is used to guide the training of each private model. In that way, CusFL allows each client to selectively learn useful knowledge from the federated model to improve its personalized model. We evaluated CusFL on multi-source medical image datasets for the identification of clinically significant prostate cancer and the classification of skin lesions.
A Decision Support System for the Identification of Metastases of Metastatic Melanoma Using Whole-Body FDG PET/CT Images
Vagenas, Theodoros P.
Economopoulos, Theodore L.
Sachpekidis, Christos
Dimitrakopoulou-Strauss, Antonia
Pan, Leyun
Provata, Astero
Matsopoulos, George K.
IEEE Journal of Biomedical and Health Informatics2022Journal Article, cited 0 times
FDG-PET-CT-Lesions
Metastatic Melanoma (MM) is an aggressive type of cancer which produces metastases throughout the body with very poor survival rates. Recent advances in immunotherapy have shown promising results for controlling the disease's progression. Due to the often rapid progression, fast and accurate diagnosis and treatment response assessment are vital for the whole patient management. These procedures require accurate, whole-body tumor identification. This can be offered by the imaging modality Positron Emission Tomography (PET)/Computed Tomography (CT) with the radiotracer 18F-Fluorodeoxyglucose (FDG). However, manual segmentation of PET/CT images is a very time-consuming and labor-intensive procedure that requires expert knowledge. Most of the previously published segmentation techniques focus on a specific type of tumor or part of the body and require a great amount of manually labeled data, which is, however, difficult to obtain for MM. Multimodal analysis of PET/CT is also crucial because FDG-PET contains only the functional information of tumors, which can be complemented by the anatomical information of CT. In this paper, we propose a whole-body segmentation framework capable of efficiently identifying the highly heterogeneous tumor lesions of MM from whole-body 3D FDG-PET/CT images. The proposed decision support system begins with an Ensemble Unsupervised Segmentation of regions of high FDG uptake based on Fuzzy C-means and a custom region growing algorithm. Then, a region classification model based on radiomics features and Neural Networks classifies these regions as tumors or not. Experimental results showed high performance in the identification of MM lesions, with sensitivity 83.68%, specificity 91.82%, F1-score 75.42%, AUC 94.16% and balanced accuracy 87.75%, which was also supported by the public dataset evaluation.
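The first, unsupervised stage combines fuzzy C-means with region growing. A compact numpy sketch of the standard fuzzy C-means updates, applied to 1-D intensities as an illustration (the cluster count, fuzzifier m, and synthetic SUV-like data are assumptions):

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy C-means on 1-D intensities: returns centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                                    # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)               # fuzzy-weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9  # distances to each center
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0)                                # normalized membership update
    return centers, u

# Toy example: separate a high-uptake tail from background in SUV-like values.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(1.0, 0.2, 500), rng.normal(6.0, 0.8, 50)])
centers, u = fuzzy_c_means(x)
print("cluster centers:", np.sort(centers))
```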
Domain-Aware Dual Attention for Generalized Medical Image Segmentation on Unseen Domains
Lai, Huilin
Luo, Ye
Li, Bo
Zhang, Guokai
Lu, Jianwei
IEEE Journal of Biomedical and Health Informatics2023Journal Article, cited 0 times
ISBI-MR-Prostate-2013
Recently, there has been significant progress in medical image segmentation utilizing deep learning techniques. However, these achievements largely rely on the supposition that the source and target domain data are identically distributed, and the direct application of related methods without addressing the distribution shift results in dramatic degradation in realistic clinical environments. Current approaches concerning the distribution shift either require the target domain data in advance for adaptation, or focus only on the distribution shift across domains while ignoring the intra-domain data variation. This paper proposes a domain-aware dual attention network for the generalized medical image segmentation task on unseen target domains. To alleviate the severe distribution shift between the source and target domains, an Extrinsic Attention (EA) module is designed to learn image features with knowledge originating from multi-source domains. Moreover, an Intrinsic Attention (IA) module is also proposed to handle the intra-domain variation by individually modeling the pixel-region relations derived from an image. The EA and IA modules complement each other well in terms of modeling the extrinsic and intrinsic domain relationships, respectively. To validate the model effectiveness, comprehensive experiments are conducted on various benchmark datasets, including the prostate segmentation in magnetic resonance imaging (MRI) scans and the optic cup/disc segmentation in fundus images. The experimental results demonstrate that our proposed model effectively generalizes to unseen domains and exceeds the existing advanced approaches.
NERONE: The Fast Way to Efficiently Execute Your Deep Learning Algorithm At the Edge
Berzoini, R.
D'Arnese, E.
Conficconi, D.
Santambrogio, M. D.
IEEE J Biomed Health Inform2023Journal Article, cited 0 times
Website
CT-ORG
Graphics Processing Units (GPU)
Segmentation
Classification
Semantic segmentation and classification are pivotal in many clinical applications, such as radiation dose quantification and surgery planning. While manually labeling images is highly time-consuming, the advent of Deep Learning (DL) has introduced a valuable alternative. Nowadays, DL model inference is run on Graphics Processing Units (GPUs), which are power-hungry devices and therefore not the most suitable solution in constrained environments, where Field Programmable Gate Arrays (FPGAs) become an appealing alternative given their remarkable performance-per-watt ratio. Unfortunately, FPGAs are hard to use for non-experts, and the creation of tools to open their employment to the computer vision community is still limited. For these reasons, we propose NERONE, which allows end users to seamlessly benefit from FPGA acceleration and energy efficiency without modifying their DL development flows. To prove the capability of NERONE to cover different network architectures, we have developed four models, one for each of the chosen datasets (three for segmentation and one for classification), and we deployed them, thanks to NERONE, on three different embedded FPGA-powered boards, achieving top average energy efficiency improvements of 3.4x and 1.9x against a mobile and a datacenter GPU device, respectively.
RCPS: Rectified Contrastive Pseudo Supervision for Semi-Supervised Medical Image Segmentation
Zhao, Xiangyu
Qi, Zengxin
Wang, Sheng
Wang, Qian
Wu, Xuehai
Mao, Ying
Zhang, Lichi
IEEE Journal of Biomedical and Health Informatics2023Journal Article, cited 0 times
Pancreas-CT
Medical image segmentation methods are generally designed to be fully supervised to guarantee model performance, which requires a significant number of expert-annotated samples that are high-cost and laborious. Semi-supervised image segmentation can alleviate the problem by utilizing a large number of unlabeled images along with limited labeled images. However, learning a robust representation from numerous unlabeled images remains challenging due to potential noise in pseudo labels and insufficient class separability in feature space, which undermines the performance of current semi-supervised segmentation approaches. To address the issues above, we propose a novel semi-supervised segmentation method named Rectified Contrastive Pseudo Supervision (RCPS), which combines a rectified pseudo supervision and voxel-level contrastive learning to improve the effectiveness of semi-supervised segmentation. In particular, we design a novel rectification strategy for the pseudo supervision method based on uncertainty estimation and consistency regularization to reduce the noise influence in pseudo labels. Furthermore, we introduce a bidirectional voxel contrastive loss in the network to ensure intra-class consistency and inter-class contrast in feature space, which increases class separability in the segmentation. The proposed RCPS segmentation method has been validated on two public datasets and an in-house clinical dataset. Experimental results reveal that the proposed method yields better segmentation performance compared with the state-of-the-art methods in semi-supervised medical image segmentation. The source code is available at https://github.com/hsiangyuzhao/RCPS.
Double Transformer Super-Resolution for Breast Cancer ADC Images
Yang, Ying
Xiang, Tao
Lv, Xiao
Li, Lihua
Lui, Lok Ming
Zeng, Tieyong
IEEE Journal of Biomedical and Health Informatics2024Journal Article, cited 0 times
ACRIN-6698
Breast
Diffusion MRI
Radiomics
Diffusion-weighted imaging (DWI) has been extensively explored in guiding the clinical management of patients with breast cancer. However, due to limited resolution, accurately characterizing tumors using DWI and the corresponding apparent diffusion coefficient (ADC) is still a challenging problem. In this paper, we aim to address the issue of super-resolution (SR) of ADC images and evaluate the clinical utility of SR-ADC images through radiomics analysis. To this end, we propose a novel double transformer-based network (DTformer) to enhance the resolution of ADC images. More specifically, we propose a symmetric U-shaped encoder-decoder network with two different types of transformer blocks, named UTNet, to extract deep features for super-resolution. The basic backbone of UTNet is composed of a locally-enhanced Swin transformer block (LeSwin-T) and a convolutional transformer block (Conv-T), which are responsible for capturing long-range dependencies and local spatial information, respectively. Additionally, we introduce a residual upsampling network (RUpNet) to expand image resolution by leveraging initial residual information from the original low-resolution (LR) images. Extensive experiments show that DTformer achieves superior SR performance. Moreover, radiomics analysis reveals that improving the resolution of ADC images is beneficial for tumor characteristic prediction, such as histological grade and human epidermal growth factor receptor 2 (HER2) status.
BTSSPro: Prompt-Guided Multimodal Co-Learning for Breast Cancer Tumor Segmentation and Survival Prediction
Li, Wei
Liu, Tianyu
Feng, Feiyan
Yu, Shengpeng
Wang, Hong
Sun, Yanshen
IEEE Journal of Biomedical and Health Informatics2024Journal Article, cited 0 times
Breast-MRI-NACT-Pilot
ISPY1
Early detection significantly enhances patients' survival rates by identifying tumors in their initial stages through medical imaging. However, prevailing methodologies encounter challenges in extracting comprehensive information from diverse modalities, thereby exacerbating semantic disparities and overlooking critical task correlations, consequently compromising the accuracy of prognosis predictions. Moreover, clinical insights emphasize the advantageous sharing of parameters between tumor segmentation and survival prediction for enhanced prognostic accuracy. This paper proposes a novel model, BTSSPro, designed to concurrently address B reast cancer T umor S egmentation and S urvival prediction through a Pro mpt-guided multi-modal co-learning framework. Technologically, our approach involves the extraction of tumor-specific discriminative features utilizing shared dual attention (SDA) blocks, which amalgamate spatial and channel information from breast MR images. Subsequently, we employ a guided fusion module (GFM) to seamlessly integrate the Electronic Health Record (EHR) vector into the extracted tumor-related discriminative feature representations. This integration prompts the model's feature selection to align more closely with real-world scenarios. Finally, a feature harmonic unit (FHU) is introduced to coordinate the transformer encoder and CNN decoder, thus reducing semantic differences. Remarkably, BTSSPro achieved a C-index of 0.968 and Dice score of 0.715 on the Breast MRI-NACT-Pilot dataset and a C-index of 0.807 and Dice score of 0.791 on the ISPY1 dataset, surpassing the previous state-of-the-art methods.
A Physiological-Informed Generative Model for Improving Breast Lesion Classification in Small DCE-MRI Datasets
Gravina, Michela
Maddaluno, Massimo
Marrone, Stefano
Sansone, Mario
Fusco, Roberta
Granata, Vincenza
Petrillo, Antonella
Sansone, Carlo
IEEE Journal of Biomedical and Health Informatics2024Journal Article, cited 0 times
Website
Advanced-MRI-Breast-Lesions
CNN Approach for Predicting Survival Outcome of Patients With COVID-19
Chaddad, Ahmad
Tanougast, Camel
2023Journal Article, cited 0 times
COVID-19-NY-SBU
Coronavirus disease 2019 (COVID-19) has become even more challenging with the emergence of new variants. The number of patients seeking treatment has increased significantly, putting tremendous pressure on hospitals and healthcare systems. Given the potential of artificial intelligence (AI) to help clinicians improve personalized medicine for COVID-19, we propose a deep learning model based on 1-D and 3-D convolutional neural networks (CNNs) to predict the survival outcome of COVID-19 patients. Our model consists of two CNN channels that operate on CT scans and the corresponding clinical variables. Specifically, each patient data set consists of CT images and the corresponding 44 clinical variables, used as the 3-D CNN and 1-D CNN inputs, respectively. This model aims to combine imaging and clinical features to distinguish short-term from long-term survival. Our models demonstrate higher performance metrics compared to state-of-the-art models, with an area under the receiver operating characteristic curve of 91.44%–91.60% versus 84.36%–88.10% and an accuracy of 83.39%–84.47% versus 79.06%–81.94% in predicting the survival groups of patients with COVID-19. Based on the findings, the combined clinical and imaging features in the deep CNN model can be used as a prognostic tool and help to distinguish censored and uncensored cases of COVID-19.
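A hedged PyTorch sketch of the two-channel idea: a small 3-D CNN over the CT volume fused with a 1-D CNN over the 44 clinical variables for binary survival-group prediction. All layer sizes and the input resolution are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TwoBranchSurvivalNet(nn.Module):
    """Fuses a 3-D CNN over CT volumes with a 1-D CNN over clinical variables."""
    def __init__(self, n_clinical=44):
        super().__init__()
        self.ct_branch = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),      # -> (B, 16)
        )
        self.clin_branch = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),      # -> (B, 8)
        )
        self.head = nn.Linear(16 + 8, 2)                # short- vs long-term survival

    def forward(self, ct, clinical):
        feats = torch.cat(
            [self.ct_branch(ct), self.clin_branch(clinical.unsqueeze(1))], dim=1)
        return self.head(feats)

model = TwoBranchSurvivalNet()
logits = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 44))
print(logits.shape)  # torch.Size([2, 2])
```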
Improvement of Image Classification by Multiple Optical Scattering
Gao, Xinyu
Li, Yi
Qiu, Yanqing
Mao, Bangning
Chen, Miaogen
Meng, Yanlong
Zhao, Chunliu
Kang, Juan
Guo, Yong
Shen, Changyu
2021Journal Article, cited 0 times
C-NMC 2019
Multiple optical scattering occurs when light propagates in a non-uniform medium. During multiple scattering, images are distorted and the spatial information they carry becomes scrambled. However, the image information is not lost; it persists in the form of speckle patterns (SPs). In this study, we built an optical random scattering system based on a liquid crystal display (LCD) and an RGB laser source. We found that image classification can be improved with the help of random scattering, which acts as a feedforward neural network that extracts features from the image. Together with a ridge classifier deployed on a computer, we achieved excellent classification accuracy, higher than 94%, for a variety of datasets covering medical, agricultural, environmental protection and other fields. In addition, the proposed optical scattering system has the advantages of high speed, low power consumption, and miniaturization, which make it suitable for deployment in edge computing applications.
Context Dependent Fuzzy Associated Statistical Model for Intensity Inhomogeneity Correction from Magnetic Resonance Images
Subudhi, BN
Veerakumar, T
Esakkirajan, S
Ghosh, A
IEEE Journal of Translational Engineering in Health and Medicine2019Journal Article, cited 0 times
Website
PROSTATE-DIAGNOSIS
NCI-ISBI 2013 Challenge: Automated Segmentation of Prostate Structures
Prostate-3T
Image denoising
In this article, a novel context-dependent, fuzzy-set-associated statistical model for intensity inhomogeneity correction of Magnetic Resonance Images (MRI) is proposed. The observed MRI is considered to be affected by intensity inhomogeneity, which is assumed to be a multiplicative quantity. In the proposed scheme, intensity inhomogeneity correction and MRI segmentation are treated as a combined task. The maximum a posteriori probability (MAP) estimation principle is explored to solve this problem. A fuzzy-set-associated Gibbs Markov random field (MRF) is used to model the spatio-contextual information of an MRI. It is observed that the MAP estimate of the MRF model does not yield good results with any local searching strategy, as it gets trapped in a local optimum. Hence, we have exploited the advantage of a variable neighborhood searching (VNS) based iterative global convergence criterion for MRF-MAP estimation. The effectiveness of the proposed scheme is established by testing it on different MRIs. Three performance evaluation measures are considered to evaluate the performance of the proposed scheme against existing state-of-the-art techniques. Simulation results establish the effectiveness of the proposed technique.
Radiomics Based Bayesian Inversion Method for Prediction of Cancer and Pathological Stage
Shakir, Hina
Khan, Tariq
Rasheed, Haroon
Deng, Yiming
IEEE Journal of Translational Engineering in Health and Medicine2021Journal Article, cited 0 times
NSCLC-Radiomics
OBJECTIVE: To develop a Bayesian inversion framework on longitudinal chest CT scans which can perform efficient multi-class classification of lung cancer.
METHODS: While the unavailability of a large number of training medical images impedes the performance of lung cancer classifiers, purpose-built deep networks have also not performed well in multi-class classification. The presented framework employs a particle filtering approach to address the non-linear behaviour of radiomic features towards benign and cancerous (stages I, II, III, IV) nodules and performs efficient multi-class classification (benign, early stage cancer, advanced stage cancer) in terms of a posterior probability function. A joint likelihood function incorporating diagnostic radiomic features is formulated which can compute the likelihood of cancer and its pathological stage. The proposed research study also investigates and validates diagnostic features to discriminate accurately between early stage (I, II) and advanced stage (III, IV) cancer.
RESULTS: The proposed stochastic framework achieved 86% accuracy on the benchmark database, which is better than other prominent cancer detection methods.
CONCLUSION: The presented classification framework can aid radiologists in accurate interpretation of lung CT images at an early stage and can lead to timely medical treatment of cancer patients.
Deep Survival Analysis With Clinical Variables for COVID-19
Chaddad, Ahmad
Hassan, Lama
Katib, Yousef
Bouridane, Ahmed
IEEE Journal of Translational Engineering in Health and Medicine2023Journal Article, cited 0 times
COVID-19-NY-SBU
OBJECTIVE: Millions of people have been affected by coronavirus disease 2019 (COVID-19), which has caused millions of deaths around the world. Artificial intelligence (AI) plays an increasing role in all areas of patient care, including prognostics. This paper proposes a novel predictive model based on one dimensional convolutional neural networks (1D CNN) to use clinical variables in predicting the survival outcome of COVID-19 patients.
METHODS AND PROCEDURES: We have considered two scenarios for survival analysis: 1) univariate analysis using the log-rank test and Kaplan-Meier estimator, and 2) combining all clinical variables (n = 44) to predict short-term versus long-term survival. We considered the random forest (RF) model as a baseline model, compared to our proposed 1D CNN, in predicting survival groups.
RESULTS: Our experiments using the univariate analysis show that nine clinical variables are significantly associated with the survival outcome with corrected p < 0.05. Our approach of 1D CNN shows a significant improvement in performance metrics compared to the RF and the state-of-the-art techniques (i.e., 1D CNN) in predicting the survival group of patients with COVID-19.
CONCLUSION: Our model has been tested using clinical variables, where the performance is found promising. The 1D CNN model could be a useful tool for detecting the risk of mortality and developing treatment plans in a timely manner.
CLINICAL IMPACT: The findings indicate that using both Heparin and Exnox for treatment is typically the most useful factor in predicting a patient's chances of survival from COVID-19. Moreover, our predictive model shows that the combination of AI and clinical data can be applied to point-of-care services through fast-learning healthcare systems.
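The univariate scenario (Kaplan-Meier estimator plus log-rank test) is directly reproducible with the lifelines library. A minimal sketch on synthetic cohorts; durations, censoring indicators, and the group split are placeholders, not the study's data:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Placeholder cohorts: survival durations (days) and event flags (1 = death observed).
t_a, e_a = rng.exponential(30.0, 60), rng.integers(0, 2, 60)
t_b, e_b = rng.exponential(60.0, 60), rng.integers(0, 2, 60)

kmf = KaplanMeierFitter()
kmf.fit(t_a, event_observed=e_a, label="variable present")
print("median survival (group A):", kmf.median_survival_time_)

# Log-rank test for a survival difference between the two groups.
res = logrank_test(t_a, t_b, event_observed_A=e_a, event_observed_B=e_b)
print(f"log-rank p-value: {res.p_value:.4f}")
```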
Development of Clinically-Informed 3D Tumor Models for Microwave Imaging Applications
Computed Tomography (CT) scans are used in medical imaging diagnosis as they provide detailed cross-sectional images of the human body by making use of X-rays. X-ray radiation as part of medical diagnosis poses health risks to patients, leading experts to opt for low doses of radiation when possible. In accordance with European Directives, ionising radiation doses for medical purposes are to be kept as low as reasonably achievable (ALARA). While reduced radiation is beneficial from a health perspective, it impacts the quality of the images, as the noise in the images increases, reducing the radiologist's confidence in diagnosis. Various low-dose CT (LDCT) image denoising strategies available in the literature attempt to resolve this conflict. However, current models face problems such as over-smoothed results and loss of detailed information. Consequently, the quality of LDCT images after denoising remains an important problem. The models presented in this work use deep learning techniques that are modified and trained for this problem. The results show that the best model in terms of image quality achieved a peak signal-to-noise ratio (PSNR) of 19.5 dB, a structural similarity index measure (SSIM) of 0.7153 and a root mean square error (RMSE) of 43.34, performing the required operations in an average time of 4843.80 s. Furthermore, tests at different dose levels were performed to assess the robustness of the best-performing models.
Progesterone Receptor Status Analysis in Breast Cancer Patients using DCE- MR Images and Gabor Derived Anisotropy Index
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Gabor filter
Hormone receptors play a key role in female breast cancers as predictive biomarkers. The breast cancer subtype with Progesterone receptor (PgR) expression is one of the important hormone-receptor subtypes in predicting prognosis and evaluating the Neoadjuvant Chemotherapy (NAC) treatment response. PgR (-) breast cancers are associated with a higher response to NAC compared to PgR (+) breast cancer patients. Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is the widely used imaging modality in assessing the NAC response in patients. However, evaluating the treatment response of PgR breast cancers is complicated and challenging, since breast cancers with positive receptor statuses respond differently to NAC. Therefore, in this work, an attempt has been made to differentiate PgR (+) and PgR (-) breast cancer patients under NAC using the Gabor-derived Anisotropy Index (AI). A total of 50 PgR (+) and 63 PgR (-) DCE-MR images at 4 time points of NAC treatment are considered from the openly available I-SPY1 collection of the TCIA database. AI is calculated within the PgR status groups from Gabor energies that are acquired after designing a Gabor filter bank with 5 scales and 7 orientations. Results demonstrate that the AI values can significantly differentiate PgR (+) and PgR (-) breast cancer patients (p ≤ 0.05) under NAC. The mean AI values are observed to be higher in PgR (+) patients (4.14E+10 ± 1.17E+11) than in PgR (-) patients (1.95E+10 ± 8.06E+10). AI could statistically differentiate visit 1 and visit 4 of NAC treatment in both PgR status groups, with p-values of 0.0246 and 0.0387, respectively. Further, the percentage difference in the mean value of AI is observed to be higher in PgR (-) subjects between visit 1 vs 4, visit 2 vs 4, visit 1 vs 3, and visit 2 vs 3 compared to PgR (+) subjects. Hence, AI could be used as a single index value in assessing the treatment response in both PgR (+) and PgR (-) subjects.
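A hedged sketch of the Gabor pipeline with scikit-image: a 5-scale, 7-orientation filter bank, mean energy per band, and one plausible anisotropy index. The frequency spacing and the AI formula below are assumptions, since the abstract does not define them:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gabor_kernel

def gabor_energies(image, n_scales=5, n_orient=7):
    """Mean Gabor energy per (scale, orientation) over a 5x7 filter bank."""
    energies = np.zeros((n_scales, n_orient))
    for s in range(n_scales):
        freq = 0.05 + 0.08 * s            # assumed frequency spacing per scale
        for o in range(n_orient):
            kernel = gabor_kernel(frequency=freq, theta=o * np.pi / n_orient)
            real = ndi.convolve(image, np.real(kernel), mode="wrap")
            imag = ndi.convolve(image, np.imag(kernel), mode="wrap")
            energies[s, o] = np.mean(real ** 2 + imag ** 2)
    return energies

def anisotropy_index(energies):
    """One plausible AI: orientation-energy dispersion, summed over scales."""
    return float(np.sum(energies.max(axis=1) / (energies.mean(axis=1) + 1e-12)))

image = np.random.default_rng(0).random((64, 64))  # placeholder DCE-MR patch
print(f"AI = {anisotropy_index(gabor_energies(image)):.3f}")
```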
Improving Generalizability to Out-of-Distribution Data in Radiogenomic Models to Predict IDH Mutation Status in Glioma Patients
multi-parametric magnetic resonance imaging (multi-parametric MRI)
Glioma
Radiogenomics offers a potential virtual and non-invasive biopsy, which is very promising in cases where genomic testing is not available or possible. However, radiogenomics models often lack generalizability, and performance degradation on unseen data caused by differences in MRI sequence parameters, MRI manufacturers, and scanners makes this issue worse. Therefore, selecting the radiomic features to be included in the model is of paramount importance, as a proper feature selection may lead to robustness and generalizability of the models on unseen data. This study developed and assessed a novel unsupervised, yet biologically based, feature selection method capable of improving the performance of radiogenomic models on unseen data. We assessed 63 low-grade glioma and glioblastoma multiforme patients acquired in 4 different institutions/centers and publicly available in The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA). Radiomics features were extracted from multiparametric MRI images (pre-contrast T1-weighted - T1w, post-contrast T1-weighted - cT1w, T2-weighted - T2w, and FLAIR) and different regions of interest (enhancing tumor, non-enhancing tumor/necrosis, and edema). The proposed method was compared with an embedded feature selection approach commonly used in radiomics/radiogenomics studies by leaving data from one center out as an independent held-out test set and tuning the model with the data from the remaining centers. The performance of the proposed method was consistently better on all test sets, showing that it improves robustness and generalizability to out-of-distribution data.
2D and 2.5D Pancreas and Tumor Segmentation in Heterogeneous CT Images of PDAC Patients
From Handcrafted to Deep-Learning-Based Cancer Radiomics
Afshar, Parnian
Mohammadi, Arash
Plataniotis, Konstantinos N.
Oikonomou, Anastasia
Benali, Habib
2019Journal Article, cited 0 times
LIDC-IDRI
Recent advancements in signal processing (SP) and machine learning, coupled with electronic medical record keeping in hospitals and the availability of extensive sets of medical images through internal/external communication systems, have resulted in a recent surge of interest in radiomics. Radiomics, an emerging and relatively new research field, refers to extracting semiquantitative and/or quantitative features from medical images with the goal of developing predictive and/or prognostic models. In the near future, it is expected to be a critical component for integrating image-derived information used for personalized treatment. The conventional radiomics workflow is typically based on extracting predesigned features (also referred to as handcrafted or engineered features) from a segmented region of interest (ROI). Nevertheless, recent advancements in deep learning have inspired trends toward deep-learning-based radiomics (DLRs) (also referred to as discovery radiomics). In addition to the advantages of these two approaches, there are also hybrid solutions that exploit the potential of multiple data sources. Considering the variety of approaches to radiomics, further improvements require a comprehensive and integrated sketch, which is the goal of this article. This article provides a unique interdisciplinary perspective on radiomics by discussing state-of-the-art SP solutions in the context of radiomics.
Constructing 3D-Printable CAD Models of Prostates from MR Images
This paper describes the development of a procedure to generate patient-specific, three-dimensional (3D) solid models of prostates (and related anatomy) from magnetic resonance (MR) images. The 3D models are rendered in STL file format which can be physically printed or visualized on a holographic display system. An example is presented in which a 3D model is printed following this procedure.
Design of a Patient-Specific Radiotherapy Treatment Target
This paper describes the design of a patient-specific, radiotherapy quality assurance target that can be used to verify a treatment plan by measurement of actual dosage. Starting from a patient's (segmented) MR images, a physical model containing insertable cartridges for holding dosimeters is printed in 3D. Dosimeters can be located at specific locations of interest (e.g., tumor, nerve bundles, urethra). The model (dosimeter insert) can be placed into a pelvis 'shell' and subjected to a specified treatment plan. A design for the dosimeter insert can be efficiently fabricated using rapid prototyping techniques.
Automatic MRI Breast tumor Detection using Discrete Wavelet Transform and Support Vector Machines
Every human has the right to live a healthy life free of serious diseases. Cancer is among the most serious diseases facing humans and can lead to death, so definitive solutions are needed to eliminate these diseases and protect people from them. Breast cancer is considered one of the most dangerous types of cancer facing women in particular. Early examination should be performed periodically, and the diagnosis must be sensitive and effective to preserve women's lives. There are various types of breast cancer images, but magnetic resonance imaging (MRI) has become one of the most important modalities for breast cancer detection. In this work, a new method is proposed to detect breast cancer using MRI images that are preprocessed with a 2D median filter. Features are extracted from the images using the discrete wavelet transform (DWT) and reduced to 13 features. Then, a support vector machine (SVM) is used to detect whether a tumor is present. Simulation results were obtained using MRI datasets extracted from the standard breast MRI database known as the "Reference Image Database to Evaluate Response (RIDER)". The proposed method achieved an accuracy of 98.03% on the available MRI database, with a processing time of 0.894 seconds for all stages. The obtained results demonstrate the superiority of the proposed system over those available in the literature.
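The described pipeline (2-D DWT features followed by an SVM) can be sketched with PyWavelets and scikit-learn. The wavelet choice, decomposition level, summary statistics, and synthetic "slices" below are assumptions; the paper reduces its features to 13, which this sketch does not replicate exactly:

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(image, wavelet="db4", level=2):
    """Mean and std of every 2-D DWT sub-band as a compact feature vector."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    feats = []
    for c in coeffs:
        bands = [c] if isinstance(c, np.ndarray) else list(c)  # cA vs (cH, cV, cD)
        for band in bands:
            feats += [band.mean(), band.std()]
    return np.array(feats)

rng = np.random.default_rng(0)
X, y = [], []
for i in range(40):                      # synthetic "slices": tumors get a bright blob
    img = rng.random((64, 64))
    if i % 2:
        img[24:40, 24:40] += 2.0
    X.append(dwt_features(img))
    y.append(i % 2)
X, y = np.array(X), np.array(y)

clf = SVC(kernel="rbf").fit(X[:30], y[:30])
print(f"toy hold-out accuracy: {clf.score(X[30:], y[30:]):.2%}")
```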
Vector quantization-based automatic detection of pulmonary nodules in thoracic CT images
Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists to identify lung lesions at an early stage. In this paper, we propose a novel CADe system for lung nodule detection based on a vector quantization (VQ) approach. Compared to existing CADe systems, the extraction of the lungs from the chest CT image is fully automatic, and the detection and segmentation of initial nodule candidates (INCs) within the lung volume is fast and accurate due to the self-adaptive nature of the VQ algorithm. False positives among the detected INCs are reduced by rule-based pruning in combination with a feature-based support vector machine classifier. We validate the proposed approach on 60 CT scans from a publicly available database. Preliminary results show that our CADe system detects nodules with a sensitivity of 90.53% at a specificity level of 86.00%.
Zonal Segmentation of Prostate T2W-MRI using Atrous Convolutional Neural Network
Khan, Zia
Yahya, Norashikin
Alsaih, Khaled
Meriaudeau, Fabrice
2019Conference Paper, cited 0 times
PROSTATE
Segmentation
The number of prostate cancer cases is steadily increasing, especially with the rising ageing population. It is reported that the 5-year relative survival rate for men with stage 1 prostate cancer is almost 99%; hence, early detection will significantly improve treatment planning and increase the survival rate. Magnetic resonance imaging (MRI) is a common imaging modality for the diagnosis of prostate cancer. MRI provides good visualization of soft tissue and enables better lesion detection and staging of prostate cancer. The main challenge in prostate whole-gland segmentation is the blurry boundary between the central gland (CG) and peripheral zone (PZ), which can lead to differential diagnosis, since there are substantial differences in the occurrence and characteristics of cancer in the two zones. To enhance the diagnosis of the prostate gland, we implemented the DeepLabV3+ semantic segmentation approach to segment the prostate into zones. DeepLabV3+ achieved significant results in segmentation of prostate MRI by applying several parallel atrous convolutions with different rates. The CNN-based semantic segmentation approach is trained and tested on the NCI-ISBI 1.5T and 3T MRI datasets consisting of 40 patients. Performance evaluation based on the Dice similarity coefficient (DSC) of the DeepLab-based segmentation is compared with two other CNN-based semantic segmentation techniques: FCN and PSNet. Results show that prostate segmentation using DeepLabV3+ performs better than FCN and PSNet, with an average DSC of 70.3% in the PZ and 88% in the CG zone. This indicates the significant contribution made by the atrous convolution layers in producing better prostate segmentation results.
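The core of DeepLabV3+ is a set of parallel atrous (dilated) convolutions at different rates, the ASPP module. A minimal PyTorch sketch of that idea; channel counts and rates are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MiniASPP(nn.Module):
    """Parallel atrous (dilated) 3x3 convolutions at several rates, as in DeepLabV3+."""
    def __init__(self, in_ch=64, out_ch=64, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different receptive field; concatenation mixes scales.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

feat = torch.randn(1, 64, 32, 32)   # placeholder encoder feature map
print(MiniASPP()(feat).shape)       # torch.Size([1, 64, 32, 32])
```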
Analysis of Neoadjuvant Treatment Response in Breast Cancer Using Deep Networks
Breast cancer is the most common type of cancer among women, and it has the highest death rate among all types of cancer. Pre-surgical (neoadjuvant) treatment can improve the patient's prognosis, but it is impossible to predict whether the patient will respond. Previous studies were conducted to find characteristics capable of associating neoadjuvant treatment with the patient's response, some using conventional radiomics and others using convolutional networks, mostly using private databases. In this work, a deep learning model was proposed using images and clinical data from a public database (Duke Breast Cancer MRI) capable of extracting characteristics of breast MRI images and associating the attributes with prognosis. The 300 selected patients were divided between training, validation using cross-validation, and testing. Using quantitative analysis of results generated from the trained model, it was concluded that the proposed model can classify patients that achieved a complete response to neoadjuvant treatment. The results demonstrated superior accuracy when compared to the study by Cain et al. in the same database, with a mean AUC (area under the curve) of 0.70 to 0.82 and a mean accuracy of 70% for testing. The proposed model obtained competitive results compared to the literature in public databases, but a further study should be conducted to validate the method in another database.
Breast cancer is one of the most common cancers in women. Typically, the course of the disease is asymptomatic in the early stages of breast cancer. Breast imaging examinations allow early detection of the cancer, which is associated with increased chances of a complete cure. There are many breast imaging techniques, such as mammography (MM), ultrasound imaging (US), positron-emission tomography (PET), computed tomography (CT), and magnetic resonance imaging (MRI). These imaging techniques differ in terms of effectiveness, price, type of physical phenomenon, impact on the patient, and availability. In this paper, we focus on MRI and compare three breast lesion segmentation algorithms that were tested on the publicly available QIN Breast DCE-MRI database. The obtained Dice and Jaccard index values favor the segmentation based on the k-means algorithm.
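A hedged sketch of the k-means variant of such a comparison: cluster voxel intensities into background and lesion, then score the result with Dice and Jaccard indices. The synthetic image and ground truth below are placeholders for DCE-MRI data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.normal(0.2, 0.05, (64, 64))           # placeholder DCE-MRI slice
image[20:36, 20:36] += 0.6                        # synthetic enhancing lesion
truth = np.zeros(image.shape, dtype=bool)
truth[20:36, 20:36] = True

# Cluster voxel intensities into two groups; the brighter cluster is the lesion.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(image.reshape(-1, 1))
lesion = km.labels_.reshape(image.shape) == np.argmax(km.cluster_centers_.ravel())

inter = np.logical_and(lesion, truth).sum()
dice = 2.0 * inter / (lesion.sum() + truth.sum())
jaccard = inter / np.logical_or(lesion, truth).sum()
print(f"Dice = {dice:.3f}, Jaccard = {jaccard:.3f}")
```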
Computer-aided detection of brain tumors using image processing techniques
Computer-aided detection applications have made significant contributions to the medical world with today's technology. In this study, the detection of brain tumors in magnetic resonance images was performed. This study proposes a computer-aided detection system based on morphological reconstruction and rule-based detection of tumors using the morphological features of the regions of interest. The steps involved in this study are: the pre-processing stage, the segmentation stage, the identification of the region of interest, and the detection of tumors. With these methods applied to 497 magnetic resonance image slices of 10 patients, the computer-aided detection system achieved 84.26% accuracy.
The analysis of Magnetic Resonance Images plays an important role in the definitive detection of brain tumors. The shape, location and size of a tumor are examined by a radiology specialist to diagnose and plan treatment. Under an intense work pace, it is not possible to obtain results quickly, and unnoticed information can be recovered by an image processing algorithm. In this study, database images collected from REMBRANDT were cleared of noise, converted to gray level with the Karhunen-Loeve transform, and segmented with a Potts Markov random field model. This hybrid algorithm minimizes data loss, contrast and noise problems. After the segmentation stage, shape and statistical analyses are performed to obtain a feature vector describing the region of interest. The images are classified as containing a tumor or not. The algorithm can recognize the presence of a tumor with 100% accuracy and the tumor's area with 95% accuracy. The results are reported to help specialists.
DeepMMSA: A Novel Multimodal Deep Learning Method for Non-small Cell Lung Cancer Survival Analysis
Lung cancer is the leading cause of cancer death worldwide, and delayed diagnosis and poor prognosis are critical reasons for these deaths. With their accelerated development, deep learning techniques have been applied successfully and extensively in many real-world applications, including health sectors such as medical image interpretation and disease diagnosis. By engaging more modalities in the processing of information, multimodal learning can extract better features and improve predictive ability. Conventional methods for lung cancer survival analysis normally utilize clinical data and only provide a statistical probability. To improve survival prediction accuracy and help prognostic decision-making in clinical practice for medical experts, we propose, for the first time, a multimodal deep learning framework for non-small cell lung cancer (NSCLC) survival analysis, named DeepMMSA. This framework leverages CT images in combination with clinical data, enabling the abundant information held within medical images to be associated with lung cancer survival information. We validate our model on data from 422 NSCLC patients from The Cancer Imaging Archive (TCIA). Experimental results support our hypothesis that there is an underlying relationship between prognostic information and radiomic images. Moreover, quantitative results show that our method surpasses state-of-the-art methods by 4% in concordance.
Synthetic minority image over-sampling technique: How to improve AUC for glioblastoma patient survival prediction
Real-world datasets are often imbalanced, with an important class having many fewer examples than other classes. In medical data, normal examples typically greatly outnumber disease examples. A classifier learned from imbalanced data will tend to be very good at predicting examples in the larger (normal) class, yet the smaller (disease) class is typically of more interest. Imbalance is usually dealt with at the feature-vector level (creating synthetic feature vectors or discarding examples from the larger class) or by assigning differential costs to errors. Here, we introduce a novel method for over-sampling minority-class examples at the image level, rather than the feature-vector level. Our method was applied to the problem of Glioblastoma patient survival group prediction. Synthetic minority-class examples were created by adding Gaussian noise to original medical images from the minority class. Uniform local binary pattern (LBP) histogram features were then extracted from the original and synthetic image examples and used with a random forests classifier. Experimental results show the new method (Image SMOTE) increased minority-class predictive accuracy and also the AUC (area under the receiver operating characteristic curve), compared to using the imbalanced dataset directly or to creating synthetic feature vectors.
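A minimal sketch of this image-level over-sampling (Gaussian noise on minority images, uniform LBP histograms, a random forest) follows; image sizes, the noise level, and the number of synthetic copies are illustrative, not the paper's settings:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def synthesize(images, n_copies=3, noise_sigma=5.0):
    """Image SMOTE: new minority examples = original image + Gaussian noise."""
    return [img + rng.normal(0.0, noise_sigma, img.shape)
            for img in images for _ in range(n_copies)]

def lbp_histogram(image, P=8, R=1.0):
    """Uniform LBP histogram feature vector for one 2D image."""
    codes = local_binary_pattern(image.astype(np.float64), P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# toy data: 40 majority images and 8 minority images (64x64, grayscale)
majority = [rng.normal(100, 20, (64, 64)) for _ in range(40)]
minority = [rng.normal(140, 20, (64, 64)) for _ in range(8)]
minority_aug = minority + synthesize(minority)      # balanced at the image level

X = np.array([lbp_histogram(im) for im in majority + minority_aug])
y = np.array([0] * len(majority) + [1] * len(minority_aug))
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```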
Evaluation of Malignancy of Lung Nodules from CT Image Using Recurrent Neural Network
Wang, Weilun
Chakraborty, Goutam
2019Journal Article, cited 0 times
LIDC-IDRI
The efficacy of cancer treatment depends largely on early detection and correct prognosis. This is all the more important for pulmonary cancer, where detection is based on identifying malignant nodules in Computed Tomography (CT) scans of the lung. There are two problems in making a correct decision about malignancy: (1) at an early stage, the nodule is small (5 to 10 mm long), and since a CT scan covers a volume of about 30 cm × 30 cm × 40 cm, manually searching for nodules takes a very long time (approximately 10 minutes for an expert); (2) there are benign nodules as well as nodules due to other ailments such as bronchitis, pneumonia, and tuberculosis, so identifying whether a nodule is carcinogenic requires long experience and expertise. In recent years, several works have classified lung cancer using not only the CT scan image but also other features causing or related to cancer. In all recent works on CT image analysis, 3-D Convolutional Neural Networks (CNNs) are used to identify cancerous nodules; in spite of various preprocessing steps used to improve training efficiency, 3-D CNNs are extremely slow. The aim of this work is to improve training efficiency by proposing a new deep NN model. It consists of a hierarchical (sliced) structure of recurrent neural networks (RNNs), where different layers of the hierarchy can be trained simultaneously, decreasing training time. In addition, selective attention (alignment) during training improves the convergence rate. The results show a 3-fold increase in training efficiency compared with recent state-of-the-art work using 3-D CNNs.
Medical image retrieval using hybrid wavelet network classifier
Nowadays, the amount of imaging data is rapidly increasing with the widespread dissemination of picture archiving in medical systems. Effective image retrieval systems are required to manage these complex and large image databases. For clinical applications, indexing medical images has become an essential and effective tool that assists monitoring, diagnosis, and therapy. CBIR (Content-Based Image Retrieval) is one of the possible solutions for managing these databases effectively. Achieving this application requires two key tasks: indexing medical images and classification. Accordingly, in this work medical images are indexed and classified using a wavelet network classifier (WNC) based on the fast wavelet transform (FWT), chosen for its robustness and its pertinent results in the classification domain.
Isolation of Prostate Gland in T1-Weighted Magnetic Resonance Images using Computer Vision
Family physicians rarely see malignant bone cancer because it is hard to find, and most bone tumors are benign. Classifying Osteosarcoma histopathological images is very time-consuming and complicated for the pathologist. Osteosarcoma is typically classified into Viable, Non-viable, and Non-tumor classes, but intra-class variation and inter-class similarity make this a complex task. This paper used the Random Forest (RF) machine learning algorithm, which efficiently and accurately classifies Osteosarcoma into Viable, Non-viable, and Non-tumor classes. The Random Forest method gives a classification accuracy of 92.40%, a sensitivity of 85.44%, and a specificity of 93.38%, with AUC = 0.95.
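A hedged sketch of the three-class Random Forest setup follows, with synthetic features standing in for the histopathology descriptors (the abstract does not specify the feature set, so everything below is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# synthetic stand-in for extracted histopathology features
X, y = make_classification(n_samples=600, n_features=64, n_informative=20,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# per-class precision/recall stand in for the reported sensitivity/specificity
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["viable", "non-viable", "non-tumor"]))
```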
Performance Analysis of Low and High-Grade Breast Tumors Using DCE MR Images and LASSO Feature Selection
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Algorithm Development
HER2
BREAST
Breast cancer is a complex genetic disease with diverse morphological and biological characteristics. Generally, the grade of a breast tumor is a prognostic factor and a representation of its potential aggressiveness. Presently, Dynamic Contrast-Enhanced MRI (DCE-MRI) has gained a predominant role in assessing tumor grades and vascular physiology. However, due to tumor heterogeneity, tumor-grade classification is still a daunting challenge for radiologists. Therefore, to unburden the tumor grading process, a study was carried out with 638 patients taken from the Duke-Breast-Cancer-MRI database (431 low-grade and 207 high-grade). Clinicopathological characteristics such as ER receptor status, PR receptor status, HER2, Pathological Complete Response (PCR or non-PCR), menopausal status, and bilateral status showed high significance (p < 0.00001, < 0.00001, 0.0023, < 0.00001, 0.0262, and 0.0045, respectively). The LASSO (Least Absolute Shrinkage and Selection Operator) feature selection model selected 8 optimal features out of a set of 529 features (from Duke-Breast-Cancer-MRI). The selected features were used to classify high-grade and low-grade tumors with a collection of classifiers: Linear Support Vector Machine (LSVM), Logistic Regression (LR), Linear Discriminant Analysis (LDA), Gaussian Naïve Bayes (GNB), k-Nearest Neighbors (KNN), and Random Forest (RF). The LSVM and LR showed the best performance metrics among all classifiers. Hence, the classification results suggest that histological grade prediction using radiomics would aid clinical management and prognosis.
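As a concrete illustration of this kind of pipeline, the sketch below runs LASSO-based feature selection followed by cross-validated classifiers on synthetic stand-in data. The matrix shape mirrors the 638 x 529 radiomics table; everything else (including letting LassoCV pick the regularization strength) is an assumption, not the authors' configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# synthetic stand-in: 638 patients x 529 radiomic features, ~431 vs ~207 split
X, y = make_classification(n_samples=638, n_features=529, n_informative=30,
                           weights=[0.675], random_state=0)
X = StandardScaler().fit_transform(X)

# LASSO drives most coefficients to exactly zero; keep the survivors
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print(f"{selected.size} features kept")

for clf in (LinearSVC(dual=False), LogisticRegression(max_iter=1000)):
    auc = cross_val_score(clf, X[:, selected], y, cv=5, scoring="roc_auc")
    print(type(clf).__name__, round(auc.mean(), 3))
```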
A Virtual Spine Construction Algorithm for a Patient-Specific Pedicle Screw Surgical Simulators
Syamlan, Adlina
Mampaey, Tuur
Fathurachman,
Denis, Kathleen
Poorten, Emmanuel Vander
Tjahjowidodo, Tegoeh
2022Conference Paper, cited 0 times
COVID-19-NY-SBU
Segmentation
Algorithm Development
U-Net
LUNG
SPINE
This paper presents an underlying study of virtual spine construction as part of a surgical simulator. The goal is to create a patient-specific segmentation and rendering algorithm covering two aspects, namely geometric modelling and material property estimation. The spines are isolated from the CT scan data using an in-house segmentation algorithm based on the U-Net architecture and are then rendered using the marching cubes algorithm. Two rendering parameters (step size and voxel size) are tuned to give the best visual result. The material properties are extracted from the gray-scale values of the CT scan. The developed algorithm is benchmarked against an open-source segmentation software.
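The rendering step (binary mask to triangle mesh via marching cubes, with step size and voxel spacing as the tunable parameters) can be sketched with scikit-image; the dummy mask, spacing, and step size below are illustrative, not the paper's values:

```python
import numpy as np
from skimage import measure

mask = np.zeros((64, 64, 64), dtype=np.uint8)   # stand-in for the U-Net output
mask[20:44, 20:44, 20:44] = 1                   # dummy "vertebra" block

verts, faces, normals, values = measure.marching_cubes(
    mask.astype(float), level=0.5,
    spacing=(1.0, 1.0, 2.5),   # voxel size, normally read from the CT header
    step_size=2)               # larger step -> coarser mesh, faster rendering
print(verts.shape, faces.shape)
```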
Context-Aware Self-Supervised Learning of Whole Slide Images
Aryal, Milan
Soltani, Nasim Yahya
2024Journal Article, cited 0 times
TCGA-KICH
TCGA-KIRC
TCGA-KIRP
Machine Learning
Presenting whole slide images (WSIs) as a graph will enable a more efficient and accurate learning framework for cancer diagnosis. Because a single WSI consists of billions of pixels and computational pathology lacks the vast annotated datasets typically required, learning from WSIs using standard deep learning approaches such as convolutional neural networks (CNNs) is challenging. Additionally, down-sampling WSIs may lead to the loss of data that is essential for cancer detection. A novel two-stage learning technique is presented in this work. Since context, such as topological features in the tumor surroundings, may hold important information for cancer grading and diagnosis, a graph representation capturing all dependencies among regions in the WSI is very intuitive. A graph convolutional network (GCN) is deployed to include context from the tumor and adjacent tissues, and self-supervised learning is used to enhance training through unlabeled data. More specifically, the entire slide is presented as a graph, where the nodes correspond to patches from the WSI. The proposed framework is then tested using WSIs from prostate and kidney cancers.
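A hedged sketch of the graph formulation follows: patch embeddings become node features, spatial adjacency becomes edges, and a small GCN aggregates context across neighboring patches before slide-level pooling. It assumes torch_geometric and invents all sizes; it is not the authors' implementation:

```python
import torch
from torch_geometric.nn import GCNConv, global_mean_pool

class PatchGCN(torch.nn.Module):
    def __init__(self, in_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.conv1(x, edge_index))   # mix context from neighbors
        x = torch.relu(self.conv2(x, edge_index))
        return self.head(global_mean_pool(x, batch))  # one logit vector per slide

# toy graph: 4 patches with 512-d embeddings, chain adjacency, one slide
x = torch.randn(4, 512)
edge_index = torch.tensor([[0, 1, 2, 1, 2, 3], [1, 2, 3, 0, 1, 2]])
batch = torch.zeros(4, dtype=torch.long)
logits = PatchGCN()(x, edge_index, batch)
```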
Automated segmentation refinement of small lung nodules in CT scans by local shape analysis
Diciotti, Stefano
Lombardo, Simone
Falchini, Massimo
Picozzi, Giulia
Mascalchi, Mario
IEEE Trans Biomed Eng2011Journal Article, cited 68 times
Website
Radiomics
Segmentation
Computer Aided Diagnosis (CADx)
LUNG
LIDC-IDRI
One of the most important problems in the segmentation of lung nodules in CT imaging arises from possible attachments occurring between nodules and other lung structures, such as vessels or pleura. In this report, we address the problem of vessel attachments by proposing an automated correction method applied to an initial rough segmentation of the lung nodule. The method is based on a local shape analysis of the initial segmentation making use of 3-D geodesic distance map representations. The correction method has the advantage that it locally refines the nodule segmentation along recognized vessel attachments only, without modifying the nodule boundary elsewhere. The method was tested using a simple initial rough segmentation, obtained by a fixed image thresholding. The validation of the complete segmentation algorithm was carried out on small lung nodules identified in the ITALUNG screening trial and on small nodules of the lung image database consortium (LIDC) dataset. In fully automated mode, 217/256 (84.8%) lung nodules of ITALUNG and 139/157 (88.5%) individual marks of lung nodules of LIDC were correctly outlined, and an excellent reproducibility was also observed. By using an additional interactive mode, based on a controlled manual interaction, 233/256 (91.0%) lung nodules of ITALUNG and 144/157 (91.7%) individual marks of lung nodules of LIDC were overall correctly segmented. The proposed correction method could also be usefully applied to any existing nodule segmentation algorithm for improving the segmentation quality of juxta-vascular nodules.
Intensity Augmentation to Improve Generalizability of Breast Segmentation Across Different MRI Scan Protocols
Hesse, Linde S.
Kuling, Grey
Veta, Mitko
Martel, Anne L.
2021Journal Article, cited 0 times
QIN Breast DCE-MRI
OBJECTIVE: The segmentation of the breast from the chest wall is an important first step in the analysis of breast magnetic resonance images. 3D U-Nets have been shown to obtain high segmentation accuracy and appear to generalize well when trained on one scanner type and tested on another scanner, provided that a very similar MR protocol is used. There has, however, been little work addressing the problem of domain adaptation when image intensities or patient orientation differ markedly between the training set and an unseen test set. In this work we aim to address this domain shift problem.
METHOD: We propose to apply extensive intensity augmentation in addition to geometric augmentation during training. We explored both style transfer and a novel intensity remapping approach as intensity augmentation strategies. For our experiments, we trained a 3D U-Net on T1-weighted scans. We tested our network on T2-weighted scans from the same dataset as well as on an additional independent test set acquired with a T1-weighted TWIST sequence and a different coil configuration.
RESULTS: By applying intensity augmentation we increased segmentation performance for the T2-weighted scans from a Dice of 0.71 to 0.88. This performance is very close to the baseline performance of training with T2-weighted scans (0.92). On the T1-weighted dataset we obtained a performance increase from 0.77 to 0.85.
CONCLUSION: Our results show that the proposed intensity augmentation increases segmentation performance across different datasets.
SIGNIFICANCE: The proposed method can improve whole breast segmentation of clinical MR scans acquired with different protocols.
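Of the two strategies explored, intensity remapping lends itself to a compact sketch: each training volume's intensities are passed through a random smooth monotonic curve, so the network cannot rely on a fixed intensity profile. The curve construction below is an illustration, not the paper's exact algorithm:

```python
import numpy as np

def random_intensity_remap(volume, n_knots=6, rng=None):
    """Apply a random monotonic intensity curve to one volume (illustrative)."""
    rng = rng or np.random.default_rng()
    lo, hi = volume.min(), volume.max()
    xs = np.linspace(0.0, 1.0, n_knots)
    ys = np.sort(rng.uniform(0.0, 1.0, n_knots))   # sorted knots -> monotonic map
    normed = (volume - lo) / (hi - lo + 1e-8)       # normalize to [0, 1]
    remapped = np.interp(normed, xs, ys)            # piecewise-linear remapping
    return remapped * (hi - lo) + lo                # restore original range

augmented = random_intensity_remap(np.random.rand(32, 64, 64))
```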
Technical validation of multi-section robotic bronchoscope with first person view control for transbronchial biopsies of peripheral lung
Masaki, F.
King, F.
Kato, T.
Tsukada, H.
Colson, Y. L.
Hata, N.
IEEE Trans Biomed Eng2021Journal Article, cited 0 times
Website
Phantom FDA
Model
This study aims to validate the advantage of a new engineering method for maneuvering a multi-section robotic bronchoscope with first-person-view control in transbronchial biopsy. Six physician operators were recruited and tasked with driving a manual and a robotic bronchoscope to the peripheral area of patient-derived lung phantoms. The metrics collected were the furthest airway generation the bronchoscope reached, the force exerted on the phantoms, and the NASA Task Load Index. The furthest airway generations the physicians reached using the manual and robotic bronchoscopes were 6.6 +/- 1.2 and 6.7 +/- 0.8, respectively. The robotic bronchoscope successfully reached the 5th generation into the peripheral area of the airway, while the manual bronchoscope typically failed earlier, around the 3rd generation. More force was exerted on the airway with the manual bronchoscope (0.24 +/- 0.20 N) than with the robotic bronchoscope (0.18 +/- 0.22 N, p < 0.05). The manual bronchoscope imposed more physical demand than the robotic bronchoscope by NASA-TLX score (55 +/- 24 vs 19 +/- 16, p < 0.05). These results indicate that a robotic bronchoscope facilitates advancement to the peripheral airway with less physical demand on physician operators. The metrics collected in this study are expected to serve as a benchmark for the future development of robotic bronchoscopes.
ORRN: An ODE-Based Recursive Registration Network for Deformable Respiratory Motion Estimation With Lung 4DCT Images
Liang, X.
Lin, S.
Liu, F.
Schreiber, D.
Yip, M.
IEEE Trans Biomed Eng2023Journal Article, cited 3 times
Website
4D-Lung
Humans
*Lung Neoplasms
Four-Dimensional Computed Tomography/methods
Lung/diagnostic imaging
Motion correction
Respiratory Rate
Algorithms
Registration
Deep Learning
OBJECTIVE: Deformable Image Registration (DIR) plays a significant role in quantifying deformation in medical data. Recent deep learning methods have shown promising accuracy and speedup for registering a pair of medical images. However, in 4D (3D + time) medical data, organ motion, such as respiratory motion and heart beating, cannot be effectively modeled by pair-wise methods, as they were optimized for image pairs and do not account for the organ motion patterns present in 4D data. METHODS: This article presents ORRN, an Ordinary Differential Equations (ODE)-based recursive image registration network. Our network learns to estimate time-varying voxel velocities for an ODE that models deformation in 4D image data. It adopts a recursive registration strategy to progressively estimate a deformation field through ODE integration of voxel velocities. RESULTS: We evaluate the proposed method on two publicly available lung 4DCT datasets, DIRLab and CREATIS, for two tasks: 1) registering all images to the extreme inhale image for 3D+t deformation tracking and 2) registering extreme exhale to inhale phase images. Our method outperforms other learning-based methods in both tasks, producing the smallest Target Registration Error of 1.24 mm and 1.26 mm, respectively. Additionally, it produces less than 0.001% unrealistic image folding, and the computation speed is less than 1 s for each CT volume. CONCLUSION: ORRN demonstrates promising registration accuracy, deformation plausibility, and computation efficiency on group-wise and pair-wise registration tasks. SIGNIFICANCE: It has significant implications in enabling fast and accurate respiratory motion estimation for treatment planning in radiation therapy or robot motion planning in thoracic needle insertion.
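The core recursion can be sketched with a fixed-step explicit Euler integrator: the displacement field is built up by repeatedly adding dt times the predicted voxel velocities. The constant-velocity "network" below is a stand-in for the learned model; ORRN's actual solver and architecture are not reproduced here:

```python
import torch

def integrate_deformation(velocity_net, shape, n_steps=8, dt=1.0 / 8):
    """Euler integration of time-varying voxel velocities: u <- u + dt * v(u, t)."""
    disp = torch.zeros(1, 3, *shape)          # displacement field, zero at t = 0
    t = torch.zeros(1)
    for _ in range(n_steps):
        v = velocity_net(disp, t)             # predicted voxel velocities at time t
        disp = disp + dt * v
        t = t + dt
    return disp

# toy "network": constant drift of 0.5 voxels in every component
velocity_net = lambda disp, t: torch.full_like(disp, 0.5)
u = integrate_deformation(velocity_net, (4, 4, 4))
print(u[0, :, 0, 0, 0])   # ~0.5 per component after unit time
```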
Prediction of Glioma Grade Using Intratumoral and Peritumoral Radiomic Features From Multiparametric MRI Images
Cheng, J.
Liu, J.
Yue, H.
Bai, H.
Pan, Y.
Wang, J.
IEEE/ACM Trans Comput Biol Bioinform2022Journal Article, cited 26 times
Website
BraTS 2019
Algorithms
*Glioma/diagnostic imaging
Humans
Magnetic Resonance Imaging/methods
*Multiparametric Magnetic Resonance Imaging
Retrospective Studies
The accurate prediction of glioma grade before surgery is essential for treatment planning and prognosis. Since the gold standard (i.e., biopsy) for grading gliomas is both highly invasive and expensive, there is a need for a noninvasive and accurate method. In this study, we proposed a novel radiomics-based pipeline incorporating intratumoral and peritumoral features extracted from preoperative mpMRI scans to accurately and noninvasively predict glioma grade. To address the unclear peritumoral boundary, we designed an algorithm to capture the peritumoral region with a specified radius. The mpMRI scans of 285 patients derived from a multi-institutional study were adopted. A total of 2153 radiomic features were calculated separately from intratumoral volumes (ITVs) and peritumoral volumes (PTVs) on mpMRI scans, and then refined using LASSO and mRMR feature ranking methods. The top-ranking radiomic features were entered into the classifiers to build radiomic signatures for predicting glioma grade. The prediction performance was evaluated with five-fold cross-validation on a patient-level split. The radiomic signatures utilizing the features of ITV and PTV both show a high accuracy in predicting glioma grade, with AUCs reaching 0.968. By incorporating the features of ITV and PTV, the AUC of the IPTV radiomic signature can be increased to 0.975, which outperforms the state-of-the-art methods. Additionally, our proposed method was further demonstrated to have strong generalization performance in an external validation dataset with 65 patients. The source code of our implementation is made publicly available at https://github.com/chengjianhong/glioma_grading.git.
Feature-Sensitive Deep Convolutional Neural Network for Multi-Instance Breast Cancer Detection
Wang, Yan
Zhang, Lei
Shu, Xin
Feng, Yangqin
Yi, Zhang
Lv, Qing
2022Journal Article, cited 0 times
CBIS-DDSM
To obtain a well-performing computer-aided detection model for breast cancer, one usually needs an effective and efficient algorithm and a well-labeled dataset to train it. In this paper, a multi-instance mammography clinic dataset was first constructed. Each case in the dataset includes a varying number of instances captured from different views; each case is labeled according to its pathological report, and all instances of one case share one label. Nevertheless, instances captured from different views may contribute at different levels to the category of the target case. Motivated by this observation, a feature-sensitive deep convolutional neural network trained in an end-to-end manner is proposed to detect breast cancer. The proposed method first uses a pre-trained model with some custom layers to extract image features. It then adopts a feature-fusion module that learns to compute the weight of each feature vector, giving the different instances of each case different sensitivity in the classifier. Lastly, a classifier module classifies the fused features. Experimental results on both our constructed clinic dataset and two public datasets demonstrate the effectiveness of the proposed method.
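The fusion step described above can be sketched as a small attention-style module: a learned scorer weights each view's feature vector before pooling. This is a hedged illustration of weighted multi-instance fusion, not the authors' architecture; all dimensions are invented:

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, feat_dim=256, n_classes=2):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, 1)        # per-instance relevance score
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):                       # feats: (n_instances, feat_dim)
        w = torch.softmax(self.scorer(feats), dim=0)  # normalized instance weights
        fused = (w * feats).sum(dim=0)              # weighted case-level feature
        return self.classifier(fused)

logits = WeightedFusion()(torch.randn(4, 256))      # e.g., 4 views of one case
```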
An Efficient Detection and Classification of Acute Leukemia using Transfer Learning and Orthogonal Softmax Layer-based Model
Das, P. K.
Sahoo, B.
Meher, S.
IEEE/ACM Trans Comput Biol Bioinform2022Journal Article, cited 0 times
Website
C_NMC_2019
Blood cancer
Pathomics
Support Vector Machine (SVM)
Image color analysis
Transfer learning
Acute lymphoblastic leukemia
acute myelogenous leukemia
Classification
Orthogonal softMax layer (OSL)
For the early diagnosis of hematological disorders like blood cancer, microscopic analysis of blood cells is very important. Traditional deep CNNs lead to overfitting when trained on small medical image datasets such as ALLIDB1, ALLIDB2, and ASH. This paper proposes a new and effective model for classifying and detecting Acute Lymphoblastic Leukemia (ALL) or Acute Myelogenous Leukemia (AML) that delivers excellent performance on small medical datasets. We propose a novel Orthogonal SoftMax Layer (OSL)-based acute leukemia detection model that consists of ResNet18-based deep feature extraction followed by efficient OSL-based classification. OSL is integrated with ResNet18 to improve classification performance by making the weight vectors orthogonal to each other; it thus combines the benefits of ResNet (residual learning and identity mapping) with those of OSL-based classification (improved feature discrimination capability and computational efficiency). Furthermore, we introduce extra dropout and ReLU layers into the architecture to achieve a faster network with enhanced performance. Performance is verified on the standard ALLIDB1, ALLIDB2, and C_NMC_2019 datasets for efficient ALL detection and on the ASH dataset for effective AML detection. The experimental results demonstrate the superiority of the proposed model over competing models.
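The orthogonality idea can be illustrated with a soft penalty that pushes the classification head's class weight vectors toward mutual orthogonality. The published OSL enforces orthogonality by construction; this regularizer is a simplified stand-in, and the 512-dimensional features and three classes are assumptions:

```python
import torch
import torch.nn as nn

def orthogonality_penalty(fc: nn.Linear) -> torch.Tensor:
    """Sum of squared off-diagonal entries of W W^T (zero iff rows orthogonal)."""
    W = fc.weight                                   # (n_classes, feat_dim)
    gram = W @ W.t()
    off_diag = gram - torch.diag(torch.diag(gram))
    return off_diag.pow(2).sum()

head = nn.Linear(512, 3)                            # hypothetical 3-class head
feats = torch.randn(8, 512)                         # stand-in deep features
loss = nn.functional.cross_entropy(head(feats), torch.randint(0, 3, (8,)))
loss = loss + 1e-3 * orthogonality_penalty(head)    # encourage orthogonal weights
loss.backward()
```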
Patient Graph Deep Learning to Predict Breast Cancer Molecular Subtype
Furtney, Isaac
Bradley, Ray
Kabuka, Mansur R.
2023Journal Article, cited 0 times
ACRIN-6698
ISPY2
TCGA-BRCA
Breast cancer is a heterogeneous disease consisting of a diverse set of genomic mutations and clinical characteristics. The molecular subtypes of breast cancer are closely tied to prognosis and therapeutic treatment options. We investigate using deep graph learning on a collection of patient factors from multiple diagnostic disciplines to better represent breast cancer patient information and predict molecular subtype. Our method models breast cancer patient data into a multi-relational directed graph with extracted feature embeddings to directly represent patient information and diagnostic test results. We develop a radiographic image feature extraction pipeline to produce vector representation of breast cancer tumors in DCE-MRI and an autoencoder-based genomic variant embedding method to map variant assay results to a low-dimensional latent space. We leverage related-domain transfer learning to train and evaluate a Relational Graph Convolutional Network to predict the probabilities of molecular subtypes for individual breast cancer patient graphs. Our work found that utilizing information from multiple multimodal diagnostic disciplines improved the model's prediction results and produced more distinct learned feature representations for breast cancer patients. This research demonstrates the capabilities of graph neural networks and deep learning feature representation to perform multimodal data fusion and representation in the breast cancer domain.
Ultra-Fast 3D GPGPU Region Extractions for Anatomy Segmentation
Region extraction is ubiquitous in anatomy segmentation, and region growing is one such method: starting from an initial seed point, it grows a region of interest until all valid voxels are checked, thereby producing an object segmentation. Although widely used, it is computationally expensive because of its sequential approach. In this paper, we present a parallel, high-performance alternative to region growing using GPGPU capability. The idea is to approximate the requirements of region growing within an algorithm using a parallel connected-component labeling (CCL) solution. To showcase this, we selected a typical lung segmentation problem based on region growing. On the CPU, the sequential approach consists of 3D region growing inside a mask created by applying a threshold. On the GPU, the parallel alternative is to apply parallel CCL and select the biggest region of interest. We evaluated our approach on 45 clinical chest CT scans from the LIDC data in the TCIA repository. Relative to the CPU, our CUDA-based GPU implementation delivered an average performance improvement of approximately 240×. The speed-up is so pronounced that the method can even be applied to 4D lung segmentation at 6 fps.
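For reference, the core operation the paper parallelizes (threshold, label connected components, keep the largest) has a compact CPU expression with scipy; the CUDA-parallel CCL itself is not reproduced here, and the -400 HU threshold is illustrative:

```python
import numpy as np
from scipy import ndimage

def largest_component(ct_volume, threshold=-400):
    """Threshold a CT volume, label connected components, keep the biggest one."""
    mask = ct_volume < threshold                  # low-attenuation (air/lung) voxels
    labels, n = ndimage.label(mask)               # sequential CCL on the CPU
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)       # largest region of interest

lung_mask = largest_component(np.random.randint(-1000, 400, (32, 64, 64)))
```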
Large-Scale Hierarchical Medical Image Retrieval Based on a Multilevel Convolutional Neural Network
Presently, with advancements in medical imaging modalities, various imaging methods are widely used in clinics. To efficiently assess and manage the images, in this paper a content-based medical image retrieval (CBMIR) system is suggested as a clinical tool. A global medical image database was established by collecting data from more than ten countries and dozens of sources, schools, and laboratories. The database has more than 536,294 medical images, covering 14 imaging modalities, 40 organs, and 52 diseases. A multilevel convolutional neural network (MLCNN) using hierarchical progressive feature learning is then proposed to perform hierarchical medical image retrieval across multiple levels of image modality, organ, and disease. At each classification level, a dense block is trained through labeled classification. As the epochs increase, four training stages simultaneously train the three levels with different loss-function weights. The trained features are then used in the CBMIR system. The results show that the MLCNN on a representative dataset achieves a mAP of 0.86, higher than the 0.71 achieved by ResNet152 in the literature. Applying hierarchical progressive feature learning achieves a 12%-16% performance improvement in CNNs and outperforms the Vision Transformer with only 63% of the training time. The proposed representative image selection and multilevel architecture improve the efficiency and precision of retrieval over large-scale medical image databases.
A Fast Nearest Neighbor Search Scheme Over Outsourced Encrypted Medical Images
Medical imaging is crucial for medical diagnosis, and the sensitive nature of medical images necessitates rigorous security and privacy solutions. In a cloud-based medical system for Healthcare Industry 4.0, medical images should be encrypted prior to being outsourced. However, processing queries over encrypted data without first decrypting it is challenging and impractical at present. In this paper, we propose a secure and efficient scheme to find the exact nearest neighbor over encrypted medical images. Instead of calculating the full Euclidean distance for every candidate, we reject candidates by computing a lower bound of the Euclidean distance that depends on the mean and standard deviation of the data. Unlike most existing schemes, our scheme obtains the exact nearest neighbor rather than an approximate result. We then evaluate our proposed approach to demonstrate its utility.
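The rejection test rests on a cheap lower bound: for d-dimensional vectors x and y with means mu and (population) standard deviations sigma, ||x - y||^2 >= d((mu_x - mu_y)^2 + (sigma_x - sigma_y)^2). The plaintext sketch below shows the pruning logic only; the paper's contribution is doing this over encrypted data, which is not reproduced here:

```python
import numpy as np

def exact_nn(query, database):
    """Exact nearest neighbor with mean/std lower-bound pruning (plaintext)."""
    d = query.size
    q_mu, q_sd = query.mean(), query.std()
    best, best_d2 = None, np.inf
    for i, x in enumerate(database):
        lb = d * ((q_mu - x.mean()) ** 2 + (q_sd - x.std()) ** 2)
        if lb >= best_d2:
            continue                       # rejected without the full distance
        d2 = np.sum((query - x) ** 2)      # exact squared Euclidean distance
        if d2 < best_d2:
            best, best_d2 = i, d2
    return best, best_d2

rng = np.random.default_rng(0)
idx, dist2 = exact_nn(rng.random(256), [rng.random(256) for _ in range(100)])
```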
A Deep Generative Model-Integrated Framework for Three-Dimensional Time-Difference Electrical Impedance Tomography
Zhang, Ke
Wang, Lu
Guo, Rui
Lin, Zhichao
Li, Maokun
Yang, Fan
Xu, Shenheng
Abubakar, Aria
IEEE Transactions on Instrumentation and Measurement2022Journal Article, cited 0 times
QIN LUNG CT
Image reconstruction
Image quality
Algorithm Development
The time-difference image reconstruction problem of electrical impedance tomography (EIT) refers to reconstructing the conductivity change in a human body part between two time points using the boundary impedance measurements. Conventionally, the problem can be formulated as a linear inverse problem. However, due to the physical property of the forward process, the inverse problem is seriously ill-posed. As a result, traditional regularized least-squares-based methods usually produce low-resolution images that are difficult to interpret. This work proposes a framework that uses a deep generative model to constrain the unknown conductivity. Specifically, this framework allows the inclusion of a constraint that describes a mathematical relationship between the generative model and the unknown conductivity. The resultant constrained minimization problem is solved using an extended alternating direction method of multipliers (ADMM). The effectiveness of the framework is demonstrated by the example of three-dimensional time-difference chest EIT imaging. Numerical experiment shows a significant improvement of the image quality compared with total variation-regularized least-squares method (PSNR is improved by 4.3% for 10% noise and 4.6% for 30% noise; SSIM is improved by 4.8% for 10% noise and 6.0% for 30% noise). Human experiments show improved correlation between the reconstructed images and images from reference techniques.
Local Wavelet Pattern: A New Feature Descriptor for Image Retrieval in Medical CT Databases
Dubey, Shiv Ram
Singh, Satish Kumar
Singh, Rajat Kumar
IEEE Trans Image Process2015Journal Article, cited 52 times
Website
wavelet
Computed Tomography (CT)
A new image feature description based on the local wavelet pattern (LWP) is proposed in this paper to characterize medical computed tomography (CT) images for content-based CT image retrieval. In the proposed work, the LWP is derived for each pixel of the CT image by utilizing the relationship of the center pixel with the local neighboring information. In contrast to the local binary pattern, which only considers the relationship between a center pixel and its neighboring pixels, the presented approach first utilizes the relationship among the neighboring pixels using local wavelet decomposition and finally considers their relationship with the center pixel. A center pixel transformation scheme is introduced to match the range of the center value with the range of the local wavelet decomposed values. Moreover, the introduced local wavelet decomposition scheme is centrally symmetric and suitable for CT images. The novelty of this paper is twofold: 1) encoding local neighboring information with local wavelet decomposition, and 2) computing the LWP using local wavelet decomposed values and transformed center pixel values. We tested the performance of our method on three CT image databases in terms of precision and recall. We also compared the proposed LWP descriptor with other state-of-the-art local image descriptors, and the experimental results suggest that the proposed method outperforms the others for CT image retrieval.
A Deep Learning Reconstruction Framework for Differential Phase-Contrast Computed Tomography With Incomplete Data
Fu, Jian
Dong, Jianbing
Zhao, Feng
2019Journal Article, cited 0 times
CT Lymph Nodes
Machine Learning
Differential phase-contrast computed tomography (DPC-CT) is a powerful analysis tool for soft-tissue and low-atomic-number samples. Limited by practical implementation conditions, DPC-CT with incomplete projections occurs quite often. Conventional reconstruction algorithms face difficulty with incomplete data: they usually involve complicated parameter-selection operations, are sensitive to noise, and are time-consuming. In this paper, we report a new deep learning reconstruction framework for incomplete-data DPC-CT. It tightly couples a deep neural network with the DPC-CT reconstruction algorithm in the domain of DPC projection sinograms. The estimated result is not an artifact caused by the incomplete data, but a complete phase-contrast projection sinogram. After training, the framework is fixed and can be used to reconstruct the final DPC-CT images for a given incomplete projection sinogram. Taking sparse-view, limited-view, and missing-view DPC-CT as examples, the framework is validated and demonstrated with synthetic and experimental data sets. Compared with other methods, our framework achieves the best imaging quality at a faster speed and with fewer parameters. This work supports the application of state-of-the-art deep learning theory in the field of DPC-CT.
One-Pass Multi-Task Networks With Cross-Task Guided Attention for Brain Tumor Segmentation
Zhou, Chenhong
Ding, Changxing
Wang, Xinchao
Lu, Zhentai
Tao, Dacheng
2020Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Class imbalance has emerged as one of the major challenges for medical image segmentation. The model cascade (MC) strategy, a popular scheme, significantly alleviates the class imbalance issue via running a set of individual deep models for coarse-to-fine segmentation. Despite its outstanding performance, however, this method leads to undesired system complexity and also ignores the correlation among the models. To handle these flaws in the MC approach, we propose in this paper a light-weight deep model, i.e., the One-pass Multi-task Network (OM-Net), to solve class imbalance better than MC does, while requiring only one-pass computation for brain tumor segmentation. First, OM-Net integrates the separate segmentation tasks into one deep model, which consists of shared parameters to learn joint features, as well as task-specific parameters to learn discriminative features. Second, to more effectively optimize OM-Net, we take advantage of the correlation among tasks to design both an online training data transfer strategy and a curriculum learning-based training strategy. Third, we further propose sharing prediction results between tasks, which enables us to design a cross-task guided attention (CGA) module. By following the guidance of the prediction results provided by the previous task, CGA can adaptively recalibrate channel-wise feature responses based on the category-specific statistics. Finally, a simple yet effective post-processing method is introduced to refine the segmentation results of the proposed attention network. Extensive experiments are conducted to demonstrate the effectiveness of the proposed techniques. Most impressively, we achieve state-of-the-art performance on the BraTS 2015 testing set and BraTS 2017 online validation set. Using these proposed approaches, we also won joint third place in the BraTS 2018 challenge among 64 participating teams. The code will be made publicly available at https://github.com/chenhong-zhou/OM-Net.
Analysis of Computed Tomography Images of Lung Cancer Patients with The Marker Controlled Based Method
Erkoc, Merve
Icer, Semra
2022Conference Paper, cited 0 times
RIDER Lung PET-CT
LIDC-IDRI
NSCLC-Radiomics
NSCLC-Radiomics-Genomics
Radiomics
Segmentation
In this study, we aimed to obtain the tumor region from computed tomography images, after a number of pre-processing steps, using marker-controlled watershed segmentation. For this purpose, tumor segmentation was performed using four different data sets. Segmentation success was analyzed with the Jaccard index in terms of similarity to the reference images. The index averaged 0.8231 for the RIDER Lung CT dataset, 0.8365 for the Lung 1 dataset, 0.8578 for the Lung 3 dataset, and 0.8641 for the LIDC-IDRI dataset. Our current work on the practical and successful segmentation of lung tumors is promising for the next steps.
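A marker-controlled watershed of the kind described above can be sketched with scikit-image; the marker construction (erosion of a threshold mask for foreground, a looser threshold for background) and the threshold values are illustrative, not the paper's pre-processing:

```python
import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_tumor(slice2d, threshold=0.6):
    """Marker-controlled watershed on one CT slice (illustrative markers)."""
    gradient = sobel(slice2d)                       # flood the gradient image
    markers = np.zeros_like(slice2d, dtype=np.int32)
    markers[slice2d < threshold * 0.5] = 1          # confident background
    fg = ndimage.binary_erosion(slice2d > threshold, iterations=3)
    markers[fg] = 2                                 # confident tumor core
    return watershed(gradient, markers) == 2

def jaccard(pred, ref):
    """Similarity to a reference mask, as used to score the segmentations."""
    union = np.logical_or(pred, ref).sum()
    return np.logical_and(pred, ref).sum() / union if union else 1.0

sl = np.random.rand(128, 128)
print(jaccard(segment_tumor(sl), sl > 0.6))
```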
MRI Based High-Grade and Low-Grade Brain Gliomas Classification by Using Video Vision Transformers
Gliomas, which constitute the majority of brain tumors, are among the most common and aggressive tumors of the central nervous system. Since gliomas are often difficult to treat and significantly affect patients' quality of life and survival, successful classification of glioma grade is crucial. In this study, we used the Video Vision Transformer (ViViT) approach, known for its success in object detection problems, to classify gliomas as high-grade gliomas (HGG) or low-grade gliomas (LGG) using MR images obtained from the well-known brain tumor database BraTS-2019. Instead of utilizing all MR images, the model was trained with fewer images that efficiently represent the dataset. The proposed method performs end-to-end feature extraction and classification, eliminating preprocessing steps such as merging different sequences or segmenting the tumor, which typically add extra processing load. We also utilized a novel loss function to address false positive (FP) and false negative (FN) cases in binary classification, and the obtained results were evaluated with the AUC metric, which is widely used for imbalanced data sets. To the best of our knowledge, this is the first study to apply the ViViT architecture to the classification of glioma grades. We obtained promising results (AUC: 0.9410), which suggest significant potential for future research in this area.
Automatic Blood-Cell Classification via Convolutional Neural Networks and Transfer Learning
Soto-Ayala, Luis Claudio
Cantoral-Ceballos, Jose Antonio
2021Journal Article, cited 0 times
AML-Cytomorphology_LMU
The evaluation and diagnosis of cancer-related diseases can be complex and lengthy, exacerbated by manual analyses based on techniques that may take copious amounts of time. In the last decade, different tools have been created to detect, analyze, and classify different types of cancer in humans. However, there is still a lack of tools or models to automate the analysis of human cells to determine the presence of cancer. Such a model has the potential to improve early detection and prevention of these diseases, leading to more timely medical diagnoses. In this research, we present our current effort to develop a deep learning model capable of identifying blood-cell anomalies. Our results show an accuracy that meets or exceeds the current state of the art, particularly achieving a lower false-negative rate than previous efforts reported.
Prostate segmentation: An efficient convex optimization approach with axial symmetry using 3-D TRUS and MR images
Qiu, Wu
Yuan, Jing
Ukwatta, Eranga
Sun, Yue
Rajchl, Martin
Fenster, Aaron
Medical Imaging, IEEE Transactions on2014Journal Article, cited 58 times
Website
QIN PROSTATE
Improving Computer-Aided Detection Using Convolutional Neural Networks and Random View Aggregation
Roth, Holger R.
Lu, Le
Liu, Jiamin
Yao, Jianhua
Seff, Ari
Cherry, Kevin
Kim, Lauren
Summers, Ronald M.
IEEE Transactions on Medical Imaging2015Journal Article, cited 740 times
Website
CT Lymph Nodes
Machine Learning
Automated computer-aided detection (CADe) has been an important tool in clinical practice and research. State-of-the-art methods often show high sensitivities at the cost of high false-positive (FP) rates per patient. We design a two-tiered coarse-to-fine cascade framework that first operates a candidate generation system at sensitivities of ∼100% but at high FP levels. By leveraging existing CADe systems, coordinates of regions or volumes of interest (ROI or VOI) are generated and function as input for a second tier, which is our focus in this study. In this second stage, we generate 2D (two-dimensional) or 2.5D views via sampling through scale transformations, random translations, and rotations. These random views are used to train deep convolutional neural network (ConvNet) classifiers. In testing, the ConvNets assign class (e.g., lesion, pathology) probabilities for a new set of random views that are then averaged to compute a final per-candidate classification probability. This second tier behaves as a highly selective process to reject difficult false positives while preserving high sensitivities. The methods are evaluated on three data sets: 59 patients for sclerotic metastasis detection, 176 patients for lymph node detection, and 1,186 patients for colonic polyp detection. Experimental results show the ability of ConvNets to generalize well to different medical imaging CADe applications and scale elegantly to various data sets. Our proposed methods improve performance markedly in all cases. Sensitivities improved from 57% to 70%, 43% to 77%, and 58% to 75% at 3 FPs per patient for sclerotic metastases, lymph nodes, and colonic polyps, respectively.
Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique
Greenspan, Hayit
van Ginneken, Bram
Summers, Ronald M
IEEE Transactions on Medical Imaging2016Journal Article, cited 395 times
Website
Pancreas-CT
CT Lymph Nodes
Extended Modality Propagation: Image Synthesis of Pathological Cases
Cordier N
Delingette H
Le M
Ayache N
IEEE Transactions on Medical Imaging2016Journal Article, cited 18 times
Website
Algorithm Development
Brain modeling
Image generation
Image segmentation
Magnetic resonance imaging (MRI)
Pathology
Training
Tumors
generative model
glioma
medical image simulation
modality synthesis
multi-atlas
patch-based
Lossless Compression of Medical Images Using 3-D Predictors
Lucas, Luís F. R.
Rodrigues, Nuno M. M.
da Silva Cruz, Luis A.
de Faria, Sérgio M. M.
IEEE Transactions on Medical Imaging2017Journal Article, cited 106 times
Website
REMBRANDT
RIDER Breast MRI
RIDER NEURO MRI
TCGA-ESCA
TCGA-BLCA
LungCT-Diagnosis
Magnetic Resonance Imaging
This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CTs or MRIs. The proposed method, referred to as 3-D-MRP, is based on the principle of minimum rate predictors (MRPs), one of the state-of-the-art lossless compression technologies in the data compression literature. The main features of the proposed method include the use of 3-D predictors, 3-D block octree partitioning and classification, volume-based optimization, and support for 16-bit-depth images. Experimental results demonstrate the efficiency of the 3-D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8- and 16-bit-depth contents, respectively, when compared with JPEG-LS, JPEG2000, CALIC, and HEVC, as well as other proposals based on the MRP algorithm.
Automated Detection of Clinically Significant Prostate Cancer in mp-MRI Images Based on an End-to-End Deep Neural Network
Wang, Z.
Liu, C.
Cheng, D.
Wang, L.
Yang, X.
Cheng, K. T.
IEEE Trans Med Imaging2018Journal Article, cited 127 times
Website
Algorithm Development
PROSTATEx
Humans
Image Interpretation, Computer-Assisted/*methods
Magnetic Resonance Imaging/*methods
Male
*Neural Networks, Computer
Prostate/diagnostic imaging
Prostatic Neoplasms/*diagnostic imaging
ROC Curve
Automated methods for detecting clinically significant (CS) prostate cancer (PCa) in multi-parameter magnetic resonance images (mp-MRI) are of high demand. Existing methods typically employ several separate steps, each of which is optimized individually without considering the error tolerance of other steps. As a result, they could either involve unnecessary computational cost or suffer from errors accumulated over steps. In this paper, we present an automated CS PCa detection system, where all steps are optimized jointly in an end-to-end trainable deep neural network. The proposed neural network consists of concatenated subnets: 1) a novel tissue deformation network (TDN) for automated prostate detection and multimodal registration and 2) a dual-path convolutional neural network (CNN) for CS PCa detection. Three types of loss functions, i.e., classification loss, inconsistency loss, and overlap loss, are employed for optimizing all parameters of the proposed TDN and CNN. In the training phase, the two nets mutually affect each other and effectively guide registration and extraction of representative CS PCa-relevant features to achieve results with sufficient accuracy. The entire network is trained in a weakly supervised manner by providing only image-level annotations (i.e., presence/absence of PCa) without exact priors of lesions' locations. Compared with most existing systems which require supervised labels, e.g., manual delineation of PCa lesions, it is much more convenient for clinical usage. Comprehensive evaluation based on fivefold cross validation using 360 patient data demonstrates that our system achieves a high accuracy for CS PCa detection, i.e., a sensitivity of 0.6374 and 0.8978 at 0.1 and 1 false positives per normal/benign patient.
Automatic Multi-organ Segmentation on Abdominal CT with Dense V-networks
Gibson E
Giganti F
Hu Y
Bonmati E
Bandula S
Gurusamy K
Davidson B
Pereira S
Clarkson M
Barratt D
IEEE Transactions on Medical Imaging2018Journal Article, cited 14 times
Website
Computed tomography (CT)
Image segmentation
KIDNEY
LIVER
PANCREAS
Three-dimensional displays
Abdominal CT
Deep learning
Duodenum
ESOPHAGUS
Gallbladder
Gastrointestinal tract
Segmentation
Spleen
STOMACH
Algorithm Development
Knowledge-based Collaborative Deep Learning for Benign-Malignant Lung Nodule Classification on Chest CT
Xie, Yutong
Xia, Yong
Zhang, Jianpeng
Song, Yang
Feng, Dagan
Fulham, Michael
Cai, Weidong
IEEE Transactions on Medical Imaging2018Journal Article, cited 0 times
LIDC-IDRI
SPIE-AAPM Lung CT Challenge
Machine Learning
The accurate identification of malignant lung nodules on chest CT is critical for the early detection of lung cancer, which also offers patients the best chance of cure. Deep learning methods have recently been successfully introduced to computer vision problems, although substantial challenges remain in the detection of malignant nodules due to the lack of large training data sets. In this paper, we propose a multi-view knowledge-based collaborative (MV-KBC) deep model to separate malignant from benign nodules using limited chest CT data. Our model learns 3-D lung nodule characteristics by decomposing a 3-D nodule into nine fixed views. For each view, we construct a knowledge-based collaborative (KBC) submodel, where three types of image patches are designed to fine-tune three pre-trained ResNet-50 networks that characterize the nodules' overall appearance, voxel, and shape heterogeneity, respectively. We jointly use the nine KBC submodels to classify lung nodules with an adaptive weighting scheme learned during the error back propagation, which enables the MV-KBC model to be trained in an end-to-end manner. The penalty loss function is used for better reduction of the false negative rate with a minimal effect on the overall performance of the MV-KBC model. We tested our method on the benchmark LIDC-IDRI data set and compared it to the five state-of-the-art classification approaches. Our results show that the MV-KBC model achieved an accuracy of 91.60% for lung nodule classification with an AUC of 95.70%. These results are markedly superior to the state-of-the-art approaches.
Augmentation of CBCT Reconstructed From Under-Sampled Projections Using Deep Learning
Jiang, Zhuoran
Chen, Yingxuan
Zhang, Yawei
Ge, Yun
Yin, Fang-Fang
Ren, Lei
IEEE Transactions on Medical Imaging2019Journal Article, cited 0 times
4D-Lung
Cone-Beam Computed Tomography
Deep Learning
Edges tend to be over-smoothed in total variation (TV) regularized under-sampled images. In this paper, the symmetric residual convolutional neural network (SR-CNN), a deep learning based model, was proposed to enhance the sharpness of edges and detailed anatomical structures in under-sampled cone-beam computed tomography (CBCT). For training, CBCT images were reconstructed using a TV-based method from limited projections simulated from the ground truth CT, and were fed into SR-CNN, which was trained to learn a restoring pattern from under-sampled images to the ground truth. For testing, under-sampled CBCT was reconstructed using TV regularization and was then augmented by SR-CNN. Performance of SR-CNN was evaluated using phantom and patient images of various disease sites acquired at different institutions, both qualitatively and quantitatively, using structural similarity (SSIM) and peak signal-to-noise ratio (PSNR). SR-CNN substantially enhanced image details in the TV-based CBCT across all experiments. In the patient study using real projections, SR-CNN augmented CBCT images reconstructed from as few as 120 half-fan projections to image quality comparable to the reference fully-sampled FDK reconstruction using 900 projections. In the tumor localization study, improvements in tumor localization accuracy were made by the SR-CNN augmented images compared with the conventional FDK and TV-based images. SR-CNN demonstrated robustness against noise levels and projection-number reductions, and generalization across various disease sites and datasets from different institutions. Overall, the SR-CNN-based image augmentation technique was efficient and effective in considerably enhancing edges and anatomical structures in under-sampled 3D/4D-CBCT, which can be very valuable for image-guided radiotherapy.
Automated Muscle Segmentation from Clinical CT using Bayesian U-Net for Personalized Musculoskeletal Modeling
Hiasa, Yuta
Otake, Yoshito
Takao, Masaki
Ogawa, Takeshi
Sugano, Nobuhiko
Sato, Yoshinobu
IEEE Trans Med Imaging2019Journal Article, cited 2 times
Website
Algorithm Development
Computed Tomography (CT)
Segmentation
We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891+/-0.016 (mean+/-std) and an average symmetric surface distance (ASD) of 0.994+/-0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method, which resulted in 0.845+/-0.031 DC and 1.556+/-0.444 mm ASD. We evaluated the validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and segmentation failure. One application of the uncertainty metric in active learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
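The Monte Carlo dropout inference described above can be sketched as follows: dropout layers are kept stochastic at test time, the network is run T times, and the per-voxel variance across runs serves as the uncertainty metric. The tiny stand-in network below is an assumption; any segmentation model containing dropout layers fits:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, volume: torch.Tensor, T: int = 20):
    model.eval()
    for m in model.modules():                 # re-enable only the dropout layers
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()
    probs = torch.stack([torch.softmax(model(volume), dim=1) for _ in range(T)])
    return probs.mean(0), probs.var(0)        # mean prediction, uncertainty map

# toy stand-in for the Bayesian U-Net (2 classes, one dropout layer)
net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Dropout3d(0.5), nn.Conv3d(8, 2, 1))
pred, uncertainty = mc_dropout_predict(net, torch.randn(1, 1, 16, 16, 16))
```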
Generalizing Deep Learning for Medical Image Segmentation to Unseen Domains via Deep Stacked Transformation
Zhang, Ling
Xu, Daguang
Xu, Ziyue
Wang, Xiaosong
Yang, Dong
Sanford, Thomas
Harmon, Stephanie
Turkbey, Baris
Wood, Bradford J
Roth, Holger
Myronenko, Andriy
IEEE Trans Med Imaging2020Journal Article, cited 0 times
Website
ISBI-MR-Prostate-2013
Segmentation
Deep Learning
Recent advances in deep learning for medical image segmentation demonstrate expert-level accuracy. However, application of these models in clinically realistic environments can result in poor generalization and decreased accuracy, mainly due to the domain shift across different hospitals, scanner vendors, imaging protocols, patient populations, etc. Common transfer learning and domain adaptation techniques have been proposed to address this bottleneck. However, these solutions require data (and annotations) from the target domain to retrain the model, and are therefore restrictive in practice for widespread model deployment. Ideally, we wish to have a trained (locked) model that can work uniformly well across unseen domains without further training. In this paper, we propose a deep stacked transformation approach for domain generalization. Specifically, a series of n stacked transformations are applied to each image during network training. The underlying assumption is that the “expected” domain shift for a specific medical imaging modality could be simulated by applying extensive data augmentation on a single source domain, and consequently, a deep model trained on the augmented “big” data (BigAug) could generalize well on unseen domains. We exploit four surprisingly effective, but previously understudied, image-based characteristics for data augmentation to overcome the domain generalization problem. We train and evaluate the BigAug model (with n = 9 transformations) on three different 3D segmentation tasks (prostate gland, left atrium, left ventricle) covering two medical imaging modalities (MRI and ultrasound) involving eight publicly available challenge datasets. The results show that when training on a relatively small dataset (n = 10~32 volumes, depending on the size of the available datasets) from a single source domain: (i) BigAug models degrade an average of 11% (Dice score change) from source to unseen domain, substantially better than conventional augmentation (degrading 39%) and a CycleGAN-based domain adaptation method (degrading 25%); (ii) BigAug is better than “shallower” stacked transforms (i.e., those with fewer transforms) on unseen domains and demonstrates modest improvement over conventional augmentation on the source domain; (iii) after training with BigAug on one source domain, performance on an unseen domain is similar to training a model from scratch on that domain when using the same number of training samples. When training on large datasets (n = 465 volumes) with BigAug, (iv) application to unseen domains reaches the performance of state-of-the-art fully supervised models that are trained and tested on their source domains. These findings establish a strong benchmark for the study of domain generalization in medical imaging and can be generalized to the design of highly robust deep segmentation models for clinical deployment.
Fourier Properties of Symmetric-Geometry Computed Tomography and Its Linogram Reconstruction With Neural Network
Zhang, Tao
Zhang, Li
Chen, Zhiqiang
Xing, Yuxiang
Gao, Hewei
IEEE Transactions on Medical Imaging2020Journal Article, cited 0 times
Pancreas-CT
In this work, we investigate the Fourier properties of a symmetric-geometry computed tomography (SGCT) with linearly distributed source and detector in a stationary configuration. A linkage between the 1D Fourier Transform of a weighted projection from SGCT and the 2D Fourier Transform of a deformed object is established in a simple mathematical form (i.e., the Fourier slice theorem for SGCT). Based on its Fourier slice theorem and its unique data sampling in the Fourier space, a Linogram-based Fourier reconstruction method is derived for SGCT. We demonstrate that the entire Linogram reconstruction process can be embedded as known operators into an end-to-end neural network. As a learning-based approach, the proposed Linogram-Net has capability of improving CT image quality for non-ideal imaging scenarios, a limited-angle SGCT for instance, through combining weights learning in the projection domain and loss minimization in the image domain. Numerical simulations and physical experiments on an SGCT prototype platform showed that our proposed Linogram-based method can achieve accurate reconstruction from a dual-SGCT scan and can greatly reduce computational complexity when compared with the filtered backprojection type reconstruction. The Linogram-Net achieved accurate reconstruction when projection data are complete and significantly suppressed image artifacts from a limited-angle SGCT scan mimicked by using a clinical CT dataset, with the average CT number error in the selected regions of interest reduced from 67.7 Hounsfield Units (HU) to 28.7 HU, and the average normalized mean square error of overall images reduced from 4.21e-3 to 2.65e-3.
Combined Spiral Transformation and Model-Driven Multi-Modal Deep Learning Scheme for Automatic Prediction of TP53 Mutation in Pancreatic Cancer
Chen, Xiahan
Lin, Xiaozhu
Shen, Qing
Qian, Xiaohua
IEEE Transactions on Medical Imaging2021Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
Pancreatic cancer is a malignant form of cancer with one of the worst prognoses. The poor prognosis and resistance to therapeutic modalities have been linked to TP53 mutation. Pathological examinations, such as biopsies, cannot be frequently performed in clinical practice; therefore, noninvasive and reproducible methods are desired. However, automatic prediction methods based on imaging have drawbacks such as poor 3D information utilization, small sample sizes, and ineffective multi-modal fusion. In this study, we proposed a model-driven multi-modal deep learning scheme to overcome these challenges. A spiral transformation algorithm was developed to obtain 2D images from 3D data, with the transformed image inheriting and retaining the spatial correlation of the original texture and edge information. The spiral transformation can be used to effectively exploit the 3D information with fewer computational resources and to conveniently augment the data size with high quality. Moreover, model-driven items were designed to introduce prior knowledge into the deep learning framework for multi-modal fusion. The model-driven strategy and spiral transformation-based data augmentation improve performance when the sample size is small. A bilinear pooling module was introduced to improve the performance of fine-grained prediction. The experimental results show that the proposed model gives the desired performance in predicting TP53 mutation in pancreatic cancer, providing a new approach for noninvasive gene prediction. The proposed methodologies of spiral transformation and model-driven deep learning can also be used by the artificial intelligence community dealing with oncological applications. Our source code with a demo will be released at https://github.com/SJTUBME-QianLab/SpiralTransform.
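As a rough illustration of mapping a 3D volume to a 2D image along a spiral, the following sketch samples a volume along a spherical spiral at increasing radii; the parameterization and sampling density are assumptions for demonstration, not the paper's derivation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def spiral_transform(vol, n_turns=8, n_samples=256, n_radii=64):
    """Unroll a 3D volume into a 2D image by sampling along a spherical
    spiral at increasing radii. Hypothetical parameterization for
    illustration; the paper derives its own spiral and spacing."""
    cz, cy, cx = (np.asarray(vol.shape) - 1) / 2.0
    r_max = min(vol.shape) / 2.0 - 1
    t = np.linspace(0, 1, n_samples)              # position along the spiral
    theta = t * n_turns * 2 * np.pi               # azimuth winds n_turns times
    phi = t * np.pi                               # polar angle sweeps 0..pi
    out = np.zeros((n_radii, n_samples), dtype=vol.dtype)
    for i, r in enumerate(np.linspace(0, r_max, n_radii)):
        z = cz + r * np.cos(phi)
        y = cy + r * np.sin(phi) * np.sin(theta)
        x = cx + r * np.sin(phi) * np.cos(theta)
        out[i] = map_coordinates(vol, np.stack([z, y, x]), order=1)
    return out  # each row traces one spherical shell of the volume

vol = np.random.rand(64, 64, 64).astype(np.float32)
img2d = spiral_transform(vol)
```

Because neighboring spiral samples come from neighboring voxels, the 2D rows preserve local texture and edge continuity, which is what makes the transformed images usable by ordinary 2D networks.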
Multi-Atlas Image Soft Segmentation via Computation of the Expected Label Value
Aganj, Iman
Fischl, Bruce
IEEE Transactions on Medical Imaging2021Journal Article, cited 0 times
Pancreas-CT
The use of multiple atlases is common in medical image segmentation. This typically requires deformable registration of the atlases (or the average atlas) to the new image, which is computationally expensive and susceptible to entrapment in local optima. We propose to instead consider the probability of all possible atlas-to-image transformations and compute the expected label value (ELV), thereby not relying merely on the transformation deemed "optimal" by the registration method. Moreover, we do so without actually performing deformable registration, thus avoiding the associated computational costs. We evaluate our ELV computation approach by applying it to brain, liver, and pancreas segmentation on datasets of magnetic resonance and computed tomography images.
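The core idea, averaging atlas labels over many candidate transformations weighted by how plausible each transformation is, can be sketched in a few lines. The toy below restricts the transformation space to 2D translations and uses an exponential intensity-matching weight; both choices are assumptions for illustration, whereas the paper integrates over a much richer transformation space.

```python
import numpy as np
from scipy.ndimage import shift

def expected_label_value(image, atlas_img, atlas_lab, max_shift=5, beta=50.0):
    """Toy ELV: weight every translation of the atlas by how well its
    intensities match the target image, then average the shifted labels
    with those weights (no deformable registration is performed)."""
    weights, labels = [], []
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            moved_img = shift(atlas_img, (dy, dx), order=1)
            moved_lab = shift(atlas_lab.astype(float), (dy, dx), order=1)
            mse = np.mean((moved_img - image) ** 2)
            weights.append(np.exp(-beta * mse))   # p(T | image), unnormalized
            labels.append(moved_lab)
    w = np.array(weights)
    w /= w.sum()
    return np.tensordot(w, np.stack(labels), axes=1)  # soft label map in [0,1]

rng = np.random.default_rng(0)
img = rng.random((64, 64))
atlas_img = rng.random((64, 64))
atlas_lab = atlas_img > 0.5
soft_segmentation = expected_label_value(img, atlas_img, atlas_lab)
```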
Label-Free Segmentation of COVID-19 Lesions in Lung CT
Yao, Qingsong
Xiao, Li
Liu, Peihang
Zhou, S. Kevin
IEEE Transactions on Medical Imaging2021Journal Article, cited 0 times
COVID-19
COVID-19 Testing
Humans
Lung
SARS-CoV-2
Tomography
X-Ray Computed
Information and Computing Sciences
Artificial Intelligence
LIDC-IDRI
Scarcity of annotated images hampers the building of automated solutions for reliable COVID-19 diagnosis and evaluation from CT. To alleviate the burden of data annotation, we herein present a label-free approach for segmenting COVID-19 lesions in CT via voxel-level anomaly modeling that mines out the relevant knowledge from normal CT lung scans. Our modeling is inspired by the observation that the parts of tracheae and vessels, which lie in the high-intensity range to which lesions also belong, exhibit strong patterns. To facilitate the learning of such patterns at a voxel level, we synthesize 'lesions' using a set of simple operations and insert the synthesized 'lesions' into normal CT lung scans to form training pairs, from which we learn a normalcy-recognizing network (NormNet) that recognizes normal tissues and separates them from possible COVID-19 lesions. Our experiments on three different public datasets validate the effectiveness of NormNet, which conspicuously outperforms a variety of unsupervised anomaly detection (UAD) methods.
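A minimal sketch of the training-pair synthesis step, assuming Gaussian-smoothed noise blobs as the 'simple operations' (the paper's exact synthesis recipe differs):

```python
import numpy as np
from scipy import ndimage

def synthesize_lesion(normal_ct, rng, hu_range=(-600, 100)):
    """Toy training-pair synthesis: threshold smoothed noise into random
    blobs, give them lesion-like intensities, and paste them into a
    normal lung CT. Labels are known by construction, so a network can
    learn to separate normal tissue from 'lesions' without manual
    annotation. Blob shapes and HU ranges here are assumptions."""
    noise = ndimage.gaussian_filter(rng.random(normal_ct.shape), sigma=4)
    mask = noise > np.quantile(noise, 0.995)          # sparse random blobs
    texture = ndimage.gaussian_filter(rng.random(normal_ct.shape), sigma=1)
    lesion_hu = hu_range[0] + texture * (hu_range[1] - hu_range[0])
    corrupted = np.where(mask, lesion_hu, normal_ct)
    return corrupted, mask                            # image, voxel-level label

rng = np.random.default_rng(0)
ct = rng.normal(-800, 50, (64, 128, 128))             # stand-in lung scan
image, label = synthesize_lesion(ct, rng)
```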
X-Ray Scatter Estimation Using Deep Splines
Roser, Philipp
Birkhold, Annette
Preuhs, Alexander
Syben, Christopher
Felsner, Lina
Hoppe, Elisabeth
Strobel, Norbert
Kowarschik, Markus
Fahrig, Rebecca
Maier, Andreas
IEEE Transactions on Medical Imaging2021Journal Article, cited 0 times
CT Lymph Nodes
HNSCC-3DCT-RT
X-ray scatter compensation is a very desirable technique in flat-panel X-ray imaging and cone-beam computed tomography. State-of-the-art U-net based scatter removal approaches yielded promising results. However, as there are no physics' constraints applied to the output of the U-Net, it cannot be ruled out that it yields spurious results. Unfortunately, in the context of medical imaging, those may be misleading and could lead to wrong conclusions. To overcome this problem, we propose to embed B-splines as a known operator into neural networks. This inherently constrains their predictions to well-behaved and smooth functions. In a study using synthetic head and thorax data as well as real thorax phantom data, we found that our approach performed on par with U-net when comparing both algorithms based on quantitative performance metrics. However, our approach not only reduces runtime and parameter complexity, but we also found it much more robust to unseen noise levels. While the U-net responded with visible artifacts, the proposed approach preserved the X-ray signal's frequency characteristics.
Multi-Task Fusion for Improving Mammography Screening Data Classification
Wimmer, Maria
Sluiter, Gert
Major, David
Lenis, Dimitrios
Berg, Astrid
Neubauer, Theresa
Bühler, Katja
IEEE Transactions on Medical Imaging2022Journal Article, cited 0 times
CBIS-DDSM
Machine learning and deep learning methods have become essential for computer-assisted prediction in medicine, with a growing number of applications also in the field of mammography. Typically these algorithms are trained for a specific task, e.g., the classification of lesions or the prediction of a mammogram's pathology status. To obtain a comprehensive view of a patient, models which were all trained for the same task(s) are subsequently ensembled or combined. In this work, we propose a pipeline approach, where we first train a set of individual, task-specific models and subsequently investigate the fusion thereof, which is in contrast to the standard model ensembling strategy. We fuse model predictions and high-level features from deep learning models with hybrid patient models to build stronger predictors on patient level. To this end, we propose a multi-branch deep learning model which efficiently fuses features across different tasks and mammograms to obtain a comprehensive patient-level prediction. We train and evaluate our full pipeline on public mammography data, i.e., DDSM and its curated version CBIS-DDSM, and report an AUC score of 0.962 for predicting the presence of any lesion and 0.791 for predicting the presence of malignant lesions on patient level. Overall, our fusion approaches improve AUC scores significantly by up to 0.04 compared to standard model ensembling. Moreover, by providing not only global patient-level predictions but also task-specific model results that are related to radiological features, our pipeline aims to closely support the reading workflow of radiologists.
Shadow-Consistent Semi-Supervised Learning for Prostate Ultrasound Segmentation
Xu, Xuanang
Sanford, Thomas
Turkbey, Baris
Xu, Sheng
Wood, Bradford J.
Yan, Pingkun
IEEE Transactions on Medical Imaging2022Journal Article, cited 0 times
Prostate-MRI-US-Biopsy
Prostate segmentation in transrectal ultrasound (TRUS) images is an essential prerequisite for many prostate-related clinical procedures, but it remains a long-standing problem due to the challenges caused by low image quality and shadow artifacts. In this paper, we propose a Shadow-consistent Semi-supervised Learning (SCO-SSL) method with two novel mechanisms, namely shadow augmentation (Shadow-AUG) and shadow dropout (Shadow-DROP), to tackle this challenging problem. Specifically, Shadow-AUG enriches training samples by adding simulated shadow artifacts to the images to make the network robust to shadow patterns. Shadow-DROP enforces the segmentation network to infer the prostate boundary using the neighboring shadow-free pixels. Extensive experiments were conducted on two large clinical datasets (a public dataset containing 1,761 TRUS volumes and an in-house dataset containing 662 TRUS volumes). In the fully supervised setting, a vanilla U-Net equipped with our Shadow-AUG and Shadow-DROP outperforms state-of-the-art methods with statistical significance. In the semi-supervised setting, even with only 20% labeled training data, our SCO-SSL method still achieves highly competitive performance, suggesting great clinical value in relieving the labor of data annotation. Source code is released at https://github.com/DIAL-RPI/SCO-SSL.
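A toy version of shadow augmentation might look like the following, where a smooth attenuation band simulates an acoustic shadow; the profile shape and parameter ranges are assumptions, not the paper's implementation.

```python
import numpy as np

def shadow_aug(us_image, rng, min_width=0.05, max_width=0.2, strength=0.8):
    """Attenuate a random vertical band of a B-mode image with a smooth
    profile so it fades in at the band edges, mimicking an acoustic
    shadow cast from the transducer face downward."""
    h, w = us_image.shape
    width = int(w * rng.uniform(min_width, max_width))
    start = rng.integers(0, w - width)
    profile = np.ones(w)
    band = np.hanning(width) * rng.uniform(0.5, strength)
    profile[start:start + width] -= band          # dip in transmitted intensity
    return us_image * profile[None, :]            # same shadow along each column

rng = np.random.default_rng(0)
frame = rng.random((256, 320))                    # stand-in ultrasound frame
shadowed = shadow_aug(frame, rng)
```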
RECISTSup: Weakly-Supervised Lesion Volume Segmentation Using RECIST Measurement
Wang, H.
Yi, F.
Wang, J.
Yi, Z.
Zhang, H.
IEEE Trans Med Imaging2022Journal Article, cited 0 times
CT Lymph Nodes
Algorithm Development
*Image Processing
Computer-Assisted/methods
Radiography
Response Evaluation Criteria in Solid Tumors
Lesion volume segmentation in medical imaging is an effective tool for assessing lesion/tumor sizes and monitoring changes in growth. Since manual segmentation of lesion volume is not only time-consuming but also requires radiological experience, current practices rely on an imprecise surrogate called response evaluation criteria in solid tumors (RECIST). Although RECIST measurement is coarse compared with voxel-level annotation, it can reflect the lesion's location, length, and width, making it possible to segment lesion volume directly from RECIST measurements. In this study, a novel weakly-supervised method called RECISTSup is proposed to automatically segment lesion volume via RECIST measurement. Based on the RECIST measurement, a new RECIST measurement propagation algorithm is proposed to generate pseudo masks, which are then used to train the segmentation networks. Due to the spatial prior knowledge provided by the RECIST measurement, two new losses are also designed to make full use of it. In addition, the automatically segmented lesion results are used to supervise the model training iteratively, further improving segmentation performance. A series of experiments is carried out on three datasets to evaluate the proposed method, including ablation experiments, comparisons with various methods, annotation cost analyses, and visualization of results. Experimental results show that the proposed RECISTSup achieves state-of-the-art results compared with other weakly-supervised methods. The results also demonstrate that RECIST measurement can produce performance similar to voxel-level annotation while significantly reducing the annotation cost.
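To illustrate how a RECIST measurement (long- and short-axis endpoints) can seed a pseudo mask, here is a simple ellipse-fitting sketch using scikit-image; the paper's propagation algorithm refines masks well beyond this plain ellipse.

```python
import numpy as np
from skimage.draw import ellipse

def recist_to_pseudo_mask(long_axis, short_axis, shape):
    """Toy pseudo-mask from a RECIST measurement: fit an ellipse whose
    major/minor axes are the measured long/short diameters. A simple
    stand-in for the paper's propagation algorithm."""
    (r1, c1), (r2, c2) = long_axis
    (r3, c3), (r4, c4) = short_axis
    center_r, center_c = (r1 + r2) / 2, (c1 + c2) / 2
    a = np.hypot(r2 - r1, c2 - c1) / 2            # semi-major (long axis)
    b = np.hypot(r4 - r3, c4 - c3) / 2            # semi-minor (short axis)
    rotation = np.arctan2(c2 - c1, r2 - r1)       # long-axis orientation
    mask = np.zeros(shape, dtype=np.uint8)
    rr, cc = ellipse(center_r, center_c, a, b, shape=shape, rotation=rotation)
    mask[rr, cc] = 1
    return mask

# Hypothetical endpoint coordinates on a 128x128 slice.
mask = recist_to_pseudo_mask(((40, 30), (80, 90)), ((70, 45), (50, 75)),
                             (128, 128))
```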
Noise Reduction in CT Using Learned Wavelet-Frame Shrinkage Networks
Zavala-Mondragon, L. A.
Rongen, P.
Bescos, J. O.
de With, P. H. N.
van der Sommen, F.
IEEE Trans Med Imaging2022Journal Article, cited 0 times
LDCT-and-Projection-data
Image denoising
*Image Processing
Computer-Assisted
*Neural Networks
Computer
Signal-To-Noise Ratio
Tomography
X-Ray Computed
Encoding-decoding (ED) CNNs have demonstrated state-of-the-art performance for noise reduction over the past years. This has triggered the pursuit of better understanding the inner workings of such architectures, which has led to the theory of deep convolutional framelets (TDCF), revealing important links between signal processing and CNNs. Specifically, the TDCF demonstrates that ReLU CNNs induce low-rankness, since these models often do not satisfy the redundancy necessary to achieve perfect reconstruction (PR). In contrast, this paper explores CNNs that do meet the PR conditions. We demonstrate that in this type of CNN, soft shrinkage and PR can be assumed. Furthermore, based on our explorations we propose the learned wavelet-frame shrinkage network, or LWFSN, and its residual counterpart, the rLWFSN. The ED path of the (r)LWFSN complies with the PR conditions, while the shrinkage stage is based on the linear expansion of thresholds proposed by Blu and Luisier. In addition, the LWFSN has only a fraction of the training parameters (<1%) of conventional CNNs, very small inference times, and a low memory footprint, while still achieving performance close to state-of-the-art alternatives, such as the tight frame (TF) U-Net and FBPConvNet, in low-dose CT denoising.
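For contrast with the learned, scale-dependent thresholds in the LWFSN, the classical fixed-threshold version of wavelet-frame soft shrinkage can be written in a few lines with PyWavelets:

```python
import numpy as np
import pywt

def wavelet_soft_shrink(img, wavelet="haar", level=2, t=0.05):
    """Classical (non-learned) wavelet-frame soft shrinkage: decompose,
    soft-threshold the detail coefficients, reconstruct. The LWFSN
    replaces the fixed threshold t with learned thresholds inside a
    PR-compliant encoder-decoder."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    shrunk = [coeffs[0]]                          # approximation left intact
    for details in coeffs[1:]:
        shrunk.append(tuple(pywt.threshold(d, t, mode="soft")
                            for d in details))
    return pywt.waverec2(shrunk, wavelet)

noisy = np.random.rand(128, 128) + 0.1 * np.random.randn(128, 128)
denoised = wavelet_soft_shrink(noisy)
```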
Domain and Content Adaptive Convolution based Multi-Source Domain Generalization for Medical Image Segmentation
Hu, S.
Liao, Z.
Zhang, J.
Xia, Y.
IEEE Trans Med Imaging2022Journal Article, cited 0 times
ISBI-MR-Prostate-2013
MIDRC-RICORD-1a
Radiomics
Algorithm Development
Transfer learning
COVID-19
PROSTATE
LUNG
The domain gap caused mainly by variable medical image quality renders a major obstacle on the path between training a segmentation model in the lab and applying the trained model to unseen clinical data. To address this issue, domain generalization methods have been proposed; however, they usually use static convolutions and are less flexible. In this paper, we propose a multi-source domain generalization model based on the domain and content adaptive convolution (DCAC) for the segmentation of medical images across different modalities. Specifically, we design the domain adaptive convolution (DAC) module and content adaptive convolution (CAC) module and incorporate both into an encoder-decoder backbone. In the DAC module, a dynamic convolutional head is conditioned on the predicted domain code of the input to make our model adapt to the unseen target domain. In the CAC module, a dynamic convolutional head is conditioned on the global image features to make our model adapt to the test image. We evaluated the DCAC model against the baseline and four state-of-the-art domain generalization methods on the prostate segmentation, COVID-19 lesion segmentation, and optic cup/optic disc segmentation tasks. Our results not only indicate that the proposed DCAC model outperforms all competing methods on each segmentation task but also demonstrate the effectiveness of the DAC and CAC modules. Code is available at https://git.io/DCAC.
Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning
Hering, Alessa
Hansen, Lasse
Mok, Tony C. W.
Chung, Albert C. S.
Siebert, Hanna
Häger, Stephanie
Lange, Annkristin
Kuckertz, Sven
Heldmann, Stefan
Shao, Wei
Vesal, Sulaiman
Rusu, Mirabela
Sonn, Geoffrey
Estienne, Théo
Vakalopoulou, Maria
Han, Luyi
Huang, Yunzhi
Yap, Pew-Thian
Brudfors, Mikael
Balbastre, Yaël
Joutard, Samuel
Modat, Marc
Lifshitz, Gal
Raviv, Dan
Lv, Jinxin
Li, Qiang
Jaouen, Vincent
Visvikis, Dimitris
Fourcade, Constance
Rubeaux, Mathieu
Pan, Wentao
Xu, Zhe
Jian, Bailiang
De Benetti, Francesca
Wodzinski, Marek
Gunnarsson, Niklas
Sjölund, Jens
Grzech, Daniel
Qiu, Huaqi
Li, Zeju
Thorley, Alexander
Duan, Jinming
Großbröhmer, Christoph
Hoopes, Andrew
Reinertsen, Ingerid
Xiao, Yiming
Landman, Bennett
Huo, Yuankai
Murphy, Keelin
Lessmann, Nikolas
van Ginneken, Bram
Dalca, Adrian V.
Heinrich, Mattias P.
IEEE Transactions on Medical Imaging2023Journal Article, cited 0 times
TCGA-KIRC
TCGA-KIRP
TCGA-LIHC
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results of over 65 individual method submissions from more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state-of-the-art of medical image registration. This paper describes datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to new state-of-the-art performance. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
Linearized Analysis of Noise and Resolution for DL-Based Image Generation
Xu, Jingyan
Noo, Frederic
IEEE Transactions on Medical Imaging2023Journal Article, cited 0 times
Pancreas-CT
Deep-learning (DL) based CT image generation methods are often evaluated using RMSE and SSIM. By contrast, conventional model-based image reconstruction (MBIR) methods are often evaluated using image properties such as resolution, noise, and bias. Calculating such image properties requires time-consuming Monte Carlo (MC) simulations. For MBIR, linearized analysis using a first-order Taylor expansion has been developed to characterize noise and resolution without MC simulations. This inspired us to investigate whether linearization can be applied to DL networks to enable efficient characterization of resolution and noise. We used FBPConvNet as an example DL network and performed extensive numerical evaluations, including both computer simulations and real CT data. Our results showed that network linearization works well under normal exposure settings. For such applications, linearization can characterize image noise and resolution without running MC simulations. We provide with this work the computational tools to implement network linearization. The efficiency and ease of implementation of network linearization can hopefully popularize physics-related image quality measures for DL applications. Our methodology is general; it allows flexible compositions of DL nonlinear modules and linear operators such as filtered backprojection (FBP). For the latter, we develop a generic method for computing the covariance images that are needed for network linearization.
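The underlying trick, propagating noise through the network's first-order Taylor expansion with Jacobian-vector products, can be sketched in PyTorch as follows; the tiny network and probe-based estimator are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def linearized_noise_std(net, x0, sigma=0.02, n_probe=32):
    """Estimate a per-pixel output noise map around a working point x0
    using the linearization f(x0 + e) ~ f(x0) + J e. Each Jacobian-vector
    product propagates one noise realization through the linearized
    network; the empirical std over probes approximates the output noise
    without full Monte Carlo runs of the reconstruction pipeline."""
    net.eval()
    outs = []
    for _ in range(n_probe):
        e = sigma * torch.randn_like(x0)          # input noise realization
        _, jvp_out = torch.autograd.functional.jvp(net, (x0,), (e,))
        outs.append(jvp_out)
    return torch.stack(outs).std(dim=0)           # linearized noise map

net = torch.nn.Sequential(torch.nn.Conv2d(1, 8, 3, padding=1),
                          torch.nn.ReLU(),
                          torch.nn.Conv2d(8, 1, 3, padding=1))
noise_map = linearized_noise_std(net, torch.rand(1, 1, 64, 64))
```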
Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging
Yan, Rui
Qu, Liangqiong
Wei, Qingyue
Huang, Shih-Cheng
Shen, Liyue
Rubin, Daniel L.
Xing, Lei
Zhou, Yuyin
IEEE Transactions on Medical Imaging2023Journal Article, cited 0 times
MIDRC-RICORD-1C
The collection and curation of large-scale medical datasets from multiple institutions is essential for training accurate deep learning models, but privacy concerns often hinder data sharing. Federated learning (FL) is a promising solution that enables privacy-preserving collaborative learning among different institutions, but it generally suffers from performance deterioration due to heterogeneous data distributions and a lack of quality labeled data. In this paper, we present a robust and label-efficient self-supervised FL framework for medical image analysis. Our method introduces a novel Transformer-based self-supervised pre-training paradigm that pre-trains models directly on decentralized target task datasets using masked image modeling, to facilitate more robust representation learning on heterogeneous data and effective knowledge transfer to downstream models. Extensive empirical results on simulated and real-world medical imaging non-IID federated datasets show that masked image modeling with Transformers significantly improves the robustness of models against various degrees of data heterogeneity. Notably, under severe data heterogeneity, our method, without relying on any additional pre-training data, achieves an improvement of 5.06%, 1.53% and 4.58% in test accuracy on retinal, dermatology and chest X-ray classification compared to the supervised baseline with ImageNet pre-training. In addition, we show that our federated self-supervised pre-training methods yield models that generalize better to out-of-distribution data and perform more effectively when fine-tuning with limited labeled data, compared to existing FL algorithms. The code is available at https://github.com/rui-yan/SSL-FL.
Multi-Modal Learning for Predicting the Genotype of Glioma
Wei, Yiran
Chen, Xi
Zhu, Lei
Zhang, Lipei
Schönlieb, Carola-Bibiane
Price, Stephen
Li, Chao
IEEE Transactions on Medical Imaging2023Journal Article, cited 0 times
IvyGAP
TCGA-GBM
TCGA-LGG
The isocitrate dehydrogenase (IDH) gene mutation is an essential biomarker for the diagnosis and prognosis of glioma. It is promising to better predict glioma genotype by integrating focal tumor image and geometric features with brain network features derived from MRI. Convolutional neural networks show reasonable performance in predicting IDH mutation but cannot learn from non-Euclidean data, e.g., geometric and network data. In this study, we propose a multi-modal learning framework using three separate encoders to extract features of the focal tumor image, tumor geometry, and global brain networks. To mitigate the limited availability of diffusion MRI, we develop a self-supervised approach to generate brain networks from anatomical multi-sequence MRI. Moreover, to extract tumor-related features from the brain network, we design a hierarchical attention module for the brain network encoder. Further, we design a bi-level multi-modal contrastive loss to align the multi-modal features and tackle the domain gap between the focal tumor and the global brain. Finally, we propose a weighted population graph to integrate the multi-modal features for genotype prediction. Experimental results on the testing set show that the proposed model outperforms the baseline deep learning models. The ablation experiments validate the performance of different components of the framework. The visualized interpretation corresponds to clinical knowledge with further validation. In conclusion, the proposed learning framework provides a novel approach for predicting the genotype of glioma.
An Efficient Deep Neural Network to Classify Large 3D Images With Small Objects
Park, Jungkyu
Chłędowski, Jakub
Jastrzębski, Stanisław
Witowski, Jan
Xu, Yanqi
Du, Linda
Gaddam, Sushma
Kim, Eric
Lewin, Alana
Parikh, Ujas
Plaunova, Anastasia
Chen, Sardius
Millet, Alexandra
Park, James
Pysarenko, Kristine
Patel, Shalin
Goldberg, Julia
Wegener, Melanie
Moy, Linda
Heacock, Laura
Reig, Beatriu
Geras, Krzysztof J.
IEEE Transactions on Medical Imaging2024Journal Article, cited 0 times
Breast-Cancer-Screening-DBT
Breast
Mammography
Machine Learning
3D imaging enables accurate diagnosis by providing spatial information about organ anatomy. However, using 3D images to train AI models is computationally challenging because they consist of 10x or 100x more pixels than their 2D counterparts. To be trained with high-resolution 3D images, convolutional neural networks resort to downsampling them or projecting them to 2D. We propose an effective alternative, a neural network that enables efficient classification of full-resolution 3D medical images. Compared to off-the-shelf convolutional neural networks, our network, 3D Globally-Aware Multiple Instance Classifier (3D-GMIC), uses 77.98%-90.05% less GPU memory and 91.23%-96.02% less computation. While it is trained only with image-level labels, without segmentation labels, it explains its predictions by providing pixel-level saliency maps. On a dataset collected at NYU Langone Health, including 85,526 patients with full-field 2D mammography (FFDM), synthetic 2D mammography, and 3D mammography, 3D-GMIC achieves an AUC of 0.831 (95% CI: 0.769-0.887) in classifying breasts with malignant findings using 3D mammography. This is comparable to the performance of GMIC on FFDM (0.816, 95% CI: 0.737-0.878) and synthetic 2D (0.826, 95% CI: 0.754-0.884), which demonstrates that 3D-GMIC successfully classified large 3D images despite focusing computation on a smaller percentage of its input compared to GMIC. Therefore, 3D-GMIC identifies and utilizes extremely small regions of interest from 3D images consisting of hundreds of millions of pixels, dramatically reducing associated computational challenges. 3D-GMIC generalizes well to BCS-DBT, an external dataset from Duke University Hospital, achieving an AUC of 0.848 (95% CI: 0.798-0.896).
Deep Generative Adversarial Reinforcement Learning for Semi-Supervised Segmentation of Low-Contrast and Small Objects in Medical Images
Xu, C.
Zhang, T.
Zhang, D.
Zhang, D.
Han, J.
IEEE Trans Med Imaging2024Journal Article, cited 0 times
Website
Pancreas-CT
Image Segmentation
Generative Adversarial Network (GAN)
BRAIN
LIVER
PANCREAS
Organ segmentation
Deep reinforcement learning (DRL) has demonstrated impressive performance in medical image segmentation, particularly for low-contrast and small medical objects. However, current DRL-based segmentation methods face limitations due to the optimization of error propagation in two separate stages and the need for a significant amount of labeled data. In this paper, we propose a novel deep generative adversarial reinforcement learning (DGARL) approach that, for the first time, enables end-to-end semi-supervised medical image segmentation in the DRL domain. DGARL ingeniously establishes a pipeline that integrates DRL and generative adversarial networks (GANs) to optimize both detection and segmentation tasks holistically while mutually enhancing each other. Specifically, DGARL introduces two innovative components to facilitate this integration in semi-supervised settings. First, a task-joint GAN with two discriminators links the detection results to the GAN's segmentation performance evaluation, allowing simultaneous joint evaluation and feedback. This ensures that DRL and GAN can be directly optimized based on each other's results. Second, a bidirectional exploration DRL integrates backward exploration and forward exploration to ensure the DRL agent explores the correct direction when forward exploration is disabled due to lack of explicit rewards. This mitigates the issue of unlabeled data being unable to provide rewards and rendering DRL unexplorable. Comprehensive experiments on three generalization datasets, comprising a total of 640 patients, demonstrate that our novel DGARL achieves 85.02% Dice (an improvement of at least 1.91%) for brain tumors, 73.18% Dice (an improvement of at least 4.28%) for liver tumors, and 70.85% Dice (an improvement of at least 2.73%) for the pancreas compared to the ten most recent advanced methods. These results attest to the superiority of DGARL. Code is available on GitHub.
PROST-Net: A Deep Learning Approach to Support Real-Time Fusion in Prostate Biopsy
Palladino, Luigi
Maris, Bogdan
Antonelli, Alessandro
Fiorini, Paolo
IEEE Transactions on Medical Robotics and Bionics2022Journal Article, cited 0 times
Website
Prostate-MRI-US-Biopsy
Multi-modal imaging
Segmentation
PROSTATE
Prostate biopsy fusion systems employ manual segmentation of the prostate before the procedure; the image registration is therefore static. To pave the way for dynamic fusion, we introduce PROST-Net, a deep learning (DL) based method to segment the prostate in real-time. The algorithm works in three steps: first, it detects the presence of the prostate; second, it defines a region of interest around it, discarding the other pixels of the image; and third, it performs the segmentation. This approach reduces the amount of data to be processed during segmentation and allows the prostate to be contoured regardless of the image modality (e.g., magnetic resonance (MRI) or ultrasound (US)) and, in the case of US, regardless of the geometric disposition of the sensor array (e.g., linear or convex). PROST-Net produced a mean Dice similarity coefficient of 86% on US images and 77% on MRI images and outperformed other CNN-based techniques. PROST-Net is integrated into a robotic system, PROST, for trans-perineal fusion biopsy. The robot with PROST-Net offers the potential to track the prostate in real-time, thus reducing human errors during the biopsy procedure.
Wearable Mechatronic Ultrasound-Integrated AR Navigation System for Lumbar Puncture Guidance
Jiang, Baichuan
Wang, Liam
Xu, Keshuai
Hossbach, Martin
Demir, Alican
Rajan, Purnima
Taylor, Russell H.
Moghekar, Abhay
Foroughi, Pezhman
Kazanzides, Peter
Boctor, Emad M.
IEEE Transactions on Medical Robotics and Bionics2023Journal Article, cited 0 times
COVID-19-NY-SBU
As one of the most commonly performed spinal interventions in routine clinical practice, lumbar punctures are usually done with only hand palpation and trial-and-error. Failures can prolong procedure time and introduce complications such as cerebrospinal fluid leaks and headaches. Therefore, an effective needle insertion guidance method is desired. In this work, we present a complete lumbar puncture guidance system with the integration of (1) a wearable mechatronic ultrasound imaging device, (2) volume-reconstruction and bone surface estimation algorithms and (3) two alternative augmented reality user interfaces for needle guidance, including a HoloLens-based and a tablet-based solution. We conducted a quantitative evaluation of the end-to-end navigation accuracy, which shows that our system can achieve an overall needle navigation accuracy of 2.83 mm and 2.76 mm for the Tablet-based and the HoloLens-based solutions, respectively. In addition, we conducted a preliminary user study to qualitatively evaluate the effectiveness and ergonomics of our system on lumbar phantoms. The results show that users were able to successfully reach the target in an average of 1.12 and 1.14 needle insertion attempts for Tablet-based and HoloLens-based systems, respectively, exhibiting the potential to reduce the failure rates of lumbar puncture procedures with the proposed lumbar-puncture guidance.
Evaluate the Malignancy of Pulmonary Nodules Using the 3D Deep Leaky Noisy-OR Network
Liao, Fangzhou
Liang, Ming
Li, Zhe
Hu, Xiaolin
Song, Sen
IEEE Trans Neural Netw Learn Syst2017Journal Article, cited 15 times
Website
Radiomics
LUNG
Computer Assisted Detection (CAD)
Deep Learning
Automatically diagnosing lung cancer from computed tomography scans involves two steps: detecting all suspicious lesions (pulmonary nodules) and evaluating the whole-lung/pulmonary malignancy. Currently, there are many studies about the first step, but few about the second. Since the existence of a nodule does not definitely indicate cancer, and the morphology of a nodule has a complicated relationship with cancer, the diagnosis of lung cancer demands careful investigation of every suspicious nodule and integration of information from all nodules. We propose a 3-D deep neural network to solve this problem. The model consists of two modules. The first is a 3-D region proposal network for nodule detection, which outputs all suspicious nodules for a subject. The second selects the top five nodules based on detection confidence, evaluates their cancer probabilities, and combines them with a leaky noisy-OR gate to obtain the probability of lung cancer for the subject. The two modules share the same backbone network, a modified U-Net. The overfitting caused by the shortage of training data is alleviated by training the two modules alternately. The proposed model won first place in the Data Science Bowl 2017 competition.
Classical self-supervised networks suffer from convergence problems and reduced segmentation accuracy due to forceful termination. Quantum neural network models are often described in terms of qubits, or bi-level quantum bits. In this article, a novel self-supervised shallow learning network model exploiting the sophisticated three-level qutrit-inspired quantum information system, referred to as the quantum fully self-supervised neural network (QFS-Net), is presented for automated segmentation of brain magnetic resonance (MR) images. The QFS-Net model comprises a trinity of qutrit layers interconnected through parametric Hadamard gates using an eight-connected second-order neighborhood-based topology. The nonlinear transformation of the qutrit states allows the underlying quantum neural network model to encode the quantum states, thereby enabling a faster self-organized counterpropagation of these states between the layers without supervision. The suggested QFS-Net model is tailored to and extensively validated on the Cancer Imaging Archive (TCIA) dataset collected from the Nature repository. The experimental results are also compared with state-of-the-art supervised (U-Net and URes-Net architectures) models and the self-supervised QIS-Net model and its classical counterpart. The results show promising segmentation of tumors in terms of Dice similarity and accuracy with minimal human intervention and computational resources. The proposed QFS-Net is also investigated on natural gray-scale images from the Berkeley segmentation dataset and yields promising segmentation outcomes, demonstrating the robustness of the QFS-Net model.
NeRP: Implicit Neural Representation Learning With Prior Embedding for Sparsely Sampled Image Reconstruction
Shen, Liyue
Pauly, John
Xing, Lei
2022Journal Article, cited 0 times
Brain-Tumor-Progression
Image reconstruction is an inverse problem that solves for a computational image based on sampled sensor measurement. Sparsely sampled image reconstruction poses additional challenges due to limited measurements. In this work, we propose a methodology of implicit Neural Representation learning with Prior embedding (NeRP) to reconstruct a computational image from sparsely sampled measurements. The method differs fundamentally from previous deep learning-based image reconstruction approaches in that NeRP exploits the internal information in an image prior and the physics of the sparsely sampled measurements to produce a representation of the unknown subject. No large-scale data is required to train the NeRP except for a prior image and sparsely sampled measurements. In addition, we demonstrate that NeRP is a general methodology that generalizes to different imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). We also show that NeRP can robustly capture the subtle yet significant image changes required for assessing tumor progression.
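A bare-bones sketch of the prior-embedding idea: fit a coordinate MLP to the prior image, then fine-tune the same weights on sparse samples of the new scan. The positional encoding, architecture, and identity forward model below are simplifying assumptions; NeRP couples the network to the physics of the actual sampling operator.

```python
import torch

class CoordMLP(torch.nn.Module):
    """Implicit representation: maps (y, x) coordinates to intensity.
    A stand-in for NeRP's network, simplified for illustration."""
    def __init__(self, n_freq=8, width=128):
        super().__init__()
        self.n_freq = n_freq
        self.net = torch.nn.Sequential(
            torch.nn.Linear(4 * n_freq, width), torch.nn.ReLU(),
            torch.nn.Linear(width, width), torch.nn.ReLU(),
            torch.nn.Linear(width, 1))

    def forward(self, xy):
        freqs = 2.0 ** torch.arange(self.n_freq) * torch.pi
        enc = torch.cat([torch.sin(xy[..., None] * freqs),
                         torch.cos(xy[..., None] * freqs)], dim=-1)
        return self.net(enc.flatten(-2))

def fit(model, coords, targets, steps, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((model(coords).squeeze(-1) - targets) ** 2)
        loss.backward()
        opt.step()

# Prior embedding: fit the MLP to the prior image, then fine-tune the same
# weights on the new, sparsely sampled measurements only.
H = W = 64
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
                        indexing="ij")
coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)
prior = torch.rand(H * W)                      # stand-in for the prior scan
model = CoordMLP()
fit(model, coords, prior, steps=200)           # embed the prior
idx = torch.randperm(H * W)[: H * W // 10]     # 10% sparse samples
fit(model, coords[idx], prior[idx], steps=100) # adapt to new measurements
```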
3D Pyramid Pooling Network for Abdominal MRI Series Classification
Zhu, Zhe
Mittendorf, Amber
Shropshire, Erin
Allen, Brian
Miller, Chad
Bashir, Mustafa R.
Mazurowski, Maciej A.
2022Journal Article, cited 0 times
TCGA-LUAD
Recognizing and organizing different series in an MRI examination is important both for clinical review and research, but it is poorly addressed by the current generation of picture archiving and communication systems (PACSs) and post-processing workstations. In this paper, we study the problem of using deep convolutional neural networks for automatic classification of abdominal MRI series to one of many series types. Our contributions are three-fold. First, we created a large abdominal MRI dataset containing 3717 MRI series including 188,665 individual images, derived from liver examinations. 30 different series types are represented in this dataset. The dataset was annotated by consensus readings from two radiologists. Both the MRIs and the annotations were made publicly available. Second, we proposed a 3D pyramid pooling network, which can elegantly handle abdominal MRI series with varied sizes of each dimension, and achieved state-of-the-art classification performance. Third, we performed the first ever comparison between the algorithm and the radiologists on an additional dataset and had several meaningful findings.
AbdomenCT-1K: Is Abdominal Organ Segmentation a Solved Problem?
Ma, Jun
Zhang, Yao
Gu, Song
Zhu, Cheng
Ge, Cheng
Zhang, Yichi
An, Xingle
Wang, Congcong
Wang, Qiyuan
Liu, Xin
Cao, Shucheng
Zhang, Qi
Liu, Shangqing
Wang, Yunpeng
Li, Yuhui
He, Jian
Yang, Xiaoping
2022Journal Article, cited 0 times
Pancreas-CT
With the unprecedented developments in deep learning, automatic segmentation of main abdominal organs seems to be a solved problem as state-of-the-art (SOTA) methods have achieved comparable results with inter-rater variability on many benchmark datasets. However, most of the existing abdominal datasets only contain single-center, single-phase, single-vendor, or single-disease cases, and it is unclear whether the excellent performance can generalize on diverse datasets. This paper presents a large and diverse abdominal CT organ segmentation dataset, termed AbdomenCT-1K, with more than 1000 (1K) CT scans from 12 medical centers, including multi-phase, multi-vendor, and multi-disease cases. Furthermore, we conduct a large-scale study for liver, kidney, spleen, and pancreas segmentation and reveal the unsolved segmentation problems of the SOTA methods, such as the limited generalization ability on distinct medical centers, phases, and unseen diseases. To advance the unsolved problems, we further build four organ segmentation benchmarks for fully supervised, semi-supervised, weakly supervised, and continual learning, which are currently challenging and active research topics. Accordingly, we develop a simple and effective method for each benchmark, which can be used as out-of-the-box methods and strong baselines. We believe the AbdomenCT-1K dataset will promote future in-depth research towards clinical applicable abdominal organ segmentation methods.
Faber: A Hardware/SoftWare Toolchain for Image Registration
D'Arnese, Eleonora
Conficconi, Davide
Sozzo, Emanuele Del
Fusco, Luigi
Sciuto, Donatella
Santambrogio, Marco Domenico
IEEE Transactions on Parallel and Distributed Systems2023Journal Article, cited 0 times
Website
CPTAC-LUAD
Algorithm Development
Image Registration
Graphics Processing Units (GPU)
Image registration is a well-defined computation paradigm widely applied to align one or more images to a target image. This paradigm, which builds upon three main components, is particularly compute-intensive and represents many image processing pipelines’ bottlenecks. State-of-the-art solutions leverage hardware acceleration to speed up image registration, but they are usually limited to implementing a single component. We present Faber, an open-source HW/SW CAD toolchain tailored to image registration. The Faber toolchain comprises HW/SW highly-tunable registration components, supports users with different expertise in building custom pipelines, and automates the design process. In this direction, Faber provides both default settings for entry-level users and latency and resource models to guide HW experts in customizing the different components. Finally, Faber achieves from 1.5× to 54× in speedup and from 2× to 177× in energy efficiency against state-of-the-art tools on a Xeon Gold.
Artifact Reduction for Sparse-view CT using Deep Learning with Band Patch
Okamoto, Takayuki
Ohnishi, Takashi
Haneishi, Hideaki
IEEE Transactions on Radiation and Plasma Medical Sciences2022Journal Article, cited 1 times
Website
LDCT-and-Projection-data
Image denoising
Computed Tomography (CT)
Sparse-view computed tomography (CT), an imaging technique that reduces the number of projections, can reduce the total scan duration and radiation dose. However, sparse data sampling causes streak artifacts on images reconstructed with analytical algorithms. In this paper, we propose an artifact reduction method for sparse-view CT using deep learning. We developed a light-weight fully convolutional network to estimate a fully sampled sinogram from a sparse-view sinogram by enlargement in the vertical direction. Furthermore, we introduced the band patch, a rectangular region cropped in the vertical direction, as an input image for the network based on the sinogram’s characteristics. Comparison experiments using a swine rib dataset of micro-CT scans and a chest dataset of clinical CT scans were conducted to compare the proposed method, improved U-net from a previous study, and the U-net with band patches. The experimental results showed that the proposed method achieved the best performance and the U-net with band patches had the second-best result in terms of accuracy and prediction time. In addition, the reconstructed images of the proposed method suppressed streak artifacts while preserving the object’s structural information. We confirmed that the proposed method and band patch are useful for artifact reduction for sparse-view CT.
Spatiotemporal Learning of Dynamic Positron Emission Tomography Data Improves Diagnostic Accuracy in Breast Cancer
Inglese, Marianna
Ferrante, Matteo
Duggento, Andrea
Boccato, Tommaso
Toschi, Nicola
IEEE Transactions on Radiation and Plasma Medical Sciences2023Journal Article, cited 0 times
ACRIN-FLT-Breast
Positron emission tomography (PET) is a noninvasive imaging technology able to assess the metabolic or functional state of healthy and/or pathological tissues. In clinical practice, PET data are usually acquired statically and normalized for the evaluation of the standardized uptake value (SUV). In contrast, dynamic PET acquisitions provide information about radiotracer delivery to tissue, its interaction with the target, and its physiological washout. The shape of the time activity curves (TACs) embeds tissue-specific biochemical properties. Conventionally, TACs are employed along with information about blood plasma activity concentration, i.e., the arterial input function, and tracer-specific compartmental models to obtain a full quantitative analysis of PET data. This method’s primary disadvantage is the requirement for invasive arterial blood sample collection throughout the whole PET scan. In this study, we employ a variety of deep learning models to illustrate the diagnostic potential of dynamic PET acquisitions of varying lengths for discriminating breast cancer lesions in the absence of arterial blood sampling compared to static PET only. Our findings demonstrate that the use of TACs, even in the absence of arterial blood sampling and even when using only a share of all timeframes available, outperforms the discriminative ability of conventional SUV analysis.
Homomorphic-Encrypted Volume Rendering
Mazza, Sebastian
Patel, Daniel
Viola, Ivan
IEEE Trans Vis Comput Graph2020Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
Image generation
Computationally demanding tasks are typically calculated in dedicated data centers, and real-time visualizations also follow this trend. Some rendering tasks, however, require the highest level of confidentiality so that no other party, besides the owner, can read or see the sensitive data. Here we present a direct volume rendering approach that performs volume rendering directly on encrypted volume data by using the homomorphic Paillier encryption algorithm. This approach ensures that the volume data and rendered image are uninterpretable to the rendering server. Our volume rendering pipeline introduces novel approaches for encrypted-data compositing, interpolation, and opacity modulation, as well as simple transfer function design, where each of these routines maintains the highest level of privacy. We present performance and memory overhead analysis that is associated with our privacy-preserving scheme. Our approach is open and secure by design, as opposed to secure through obscurity. Owners of the data only have to keep their secure key confidential to guarantee the privacy of their volume data and the rendered images. Our work is, to our knowledge, the first privacy-preserving remote volume-rendering approach that does not require that any server involved be trustworthy; even in cases when the server is compromised, no sensitive data will be leaked to a foreign party.
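The enabling property is Paillier's additive homomorphism: a server can sum encrypted voxels and scale them by plaintext weights without ever decrypting. A toy ray-compositing example, assuming the python-paillier (phe) package; the paper's full pipeline (interpolation, opacity modulation, transfer functions) is far richer.

```python
from phe import paillier  # python-paillier; assumed available

# Client generates keys and encrypts its voxel samples along one ray.
pub, priv = paillier.generate_paillier_keypair(n_length=1024)
ray_voxels = [0.12, 0.40, 0.33, 0.05]              # client-side volume samples
enc_voxels = [pub.encrypt(v) for v in ray_voxels]  # sent to the server

# Server composites with public plaintext weights; it only ever sees
# ciphertexts, yet addition and scalar multiplication carry through.
weights = [0.4, 0.3, 0.2, 0.1]
enc_pixel = sum(w * c for w, c in zip(weights, enc_voxels))

# Client decrypts the composited pixel: 0.4*0.12 + ... = ~0.239.
print(priv.decrypt(enc_pixel))
```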
Predicting the Grade of Clear Cell Renal Cell Carcinoma from CT Images Using Random Subspace-KNN and Random Forest Classifiers
Accurate and non-invasive determination of the International Society of Urological Pathology (ISUP) based tumor grade is important for the effective management of patients with clear cell renal cell carcinoma (cc-RCC). In this study, the radiomic analysis of 3D computed tomography (CT) images are used to determine ISUP grades of cc-RCC patients by exploring machine learning (ML) methods that can address small ISUP grade image datasets. 143 cc-RCC patient studies from The Cancer Imaging Archive (TCIA) USA were used in the study. 1133 radiomic features were extracted from the normalized 3D segmented CT images. Correlation coefficient analysis, Random Forest feature importance analysis and backward elimination methods were used consecutively to reduce the number of features. 15 out of 1133 features were selected. A k-nearest neighbors (KNN) classifier with random subspaces and a Random Forest classifier were implemented. Model performances were evaluated independently on the unused 20% of the original imbalanced data. ISUP grades were predicted by a KNN classifier under random subspaces with an accuracy of 90% and area under the curve (AUC) of 0.88 using the test data. Grades were predicted by a Random Forest classifier with an accuracy of 83% and AUC of 0.80 using the test data. In conclusion, ensemble classifiers can be used to predict the ISUP grade of cc-RCC tumors from CT images with sufficient reliability. Larger datasets and new types of features are currently being investigated.
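A random subspace-KNN ensemble of the kind described can be assembled with scikit-learn's BaggingClassifier by resampling features rather than samples. The sketch below (assuming scikit-learn >= 1.2 for the estimator argument) uses stand-in data and illustrative hyperparameters, not the study's tuned values.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Each KNN in the ensemble sees a random subset of the radiomic features,
# which stabilizes KNN on small, high-dimensional data. X would hold the
# 15 selected features for the 143 patients; here it is random stand-in data.
rng = np.random.default_rng(0)
X = rng.normal(size=(143, 15))
y = rng.integers(0, 2, size=143)                 # stand-in ISUP grade labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=0)
clf = BaggingClassifier(estimator=KNeighborsClassifier(n_neighbors=5),
                        n_estimators=50,
                        max_features=0.6,        # random subspace per learner
                        bootstrap=False,         # vary features, not samples
                        bootstrap_features=True,
                        random_state=0).fit(X_tr, y_tr)
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```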
Conditional Generative Adversarial Refinement Networks for Unbalanced Medical Image Semantic Segmentation
In the medical field, there is a demand for high-speed transmission and efficient storage of medical images between healthcare organizations; image compression techniques are therefore essential. In this study, we conducted an experimental comparison between two well-known lossless algorithms, the lossless Discrete Cosine Transform (DCT) and the lossless Haar Wavelet Transform (HWT), covering three datasets that contain different types of medical images (MRI, CT, and gastrointestinal endoscopic images) in different formats (PNG, JPG, and TIF). According to the conducted experiments, in terms of compressed image size and compression ratio, DCT outperforms HWT on the PNG and TIF formats, which represent the grey-scale CT and color MRI images. On the JPG format, which represents the gastrointestinal endoscopic color images, DCT performs well on grey-scale images, whereas HWT outperforms DCT on color images. However, HWT outperforms DCT in compression time across all image types and formats.
A tool for lung nodules analysis based on segmentation and morphological operation
Optimizing Convolutional Neural Network by Hybridized Elephant Herding Optimization Algorithm for Magnetic Resonance Image Classification of Glioma Brain Tumor Grade
Gliomas belong to the group of the most frequent types of brain tumors, and in their early stages an exact diagnosis is extremely difficult to obtain; even the most experienced doctors rely on magnetic resonance imaging to diagnose them. Convolutional neural networks can classify such images with high accuracy, but reaching that accuracy requires careful calibration of the network hyperparameters, a task that consumes substantial computational time and energy. In this paper, a metaheuristic method is proposed to automatically search for near-optimal convolutional neural network hyperparameter values, based on a hybridized version of the elephant herding optimization swarm intelligence metaheuristic. The hybridized elephant herding optimization is incorporated for convolutional neural network hyperparameter tuning to develop a system for automatic and instantaneous classification of glioma brain tumor grades from magnetic resonance imaging. A comparative analysis was performed against other methods tested on the same problem instance, and the results proved the superiority of the approach proposed in this paper.
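For readers unfamiliar with the base metaheuristic, here is a bare-bones elephant herding optimization loop applied to a toy hyperparameter objective; the clan operators follow the standard formulation, and the paper's hybridization adds further search mechanisms not reproduced here.

```python
import numpy as np

def eho_minimize(f, bounds, n_clans=4, clan_size=8, iters=50,
                 alpha=0.5, beta=0.1, seed=0):
    """Bare-bones elephant herding optimization: within each clan,
    elephants move toward the clan matriarch (best member); the matriarch
    is updated toward the clan centre; the worst elephant is replaced by
    a random position (male separation). f maps a hyperparameter vector
    to a validation loss to be minimized."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, (n_clans, clan_size, dim))
    for _ in range(iters):
        for c in range(n_clans):
            clan = pop[c]
            fit = np.array([f(x) for x in clan])
            best, worst = np.argmin(fit), np.argmax(fit)
            matriarch = clan[best].copy()
            clan += alpha * rng.random((clan_size, 1)) * (matriarch - clan)
            clan[best] = beta * clan.mean(axis=0) + (1 - beta) * matriarch
            clan[worst] = rng.uniform(lo, hi)     # male leaves the clan
            pop[c] = np.clip(clan, lo, hi)
    flat = pop.reshape(-1, dim)
    return flat[np.argmin([f(x) for x in flat])]

# Toy use: tune (learning_rate, dropout) against a surrogate loss; in the
# paper the objective would be a network's validation error.
best = eho_minimize(lambda x: (x[0] - 0.01) ** 2 + (x[1] - 0.3) ** 2,
                    (np.array([1e-4, 0.0]), np.array([0.1, 0.9])))
```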
A review of medical image data augmentation techniques for deep learning applications
Chlap, Phillip
Min, Hang
Vandenberg, Nym
Dowling, Jason
Holloway, Lois
Haworth, Annette
2021Journal Article, cited 0 times
LIDC-IDRI
Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on deep learning-based algorithms. While the models these algorithms produce can significantly outperform more traditional machine learning methods, they rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets are not typically available, which is often the case when working with medical images. Data augmentation aims to generate additional data on which the model is trained, and it has been shown to improve performance when the model is validated on a separate unseen dataset. As this approach has become commonplace, we conducted a systematic review of the literature in which data augmentation was utilised on medical images (limited to CT and MRI) to train a deep learning model, in order to characterise the types of data augmentation techniques used in state-of-the-art deep learning models. Articles were categorised into basic, deformable, deep learning, or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give insight into these techniques and confidence in the validity of the models produced.
Evaluating the performance of a deep learning‐based computer‐aided diagnosis (DL‐CAD) system for detecting and characterizing lung nodules: Comparison with the performance of double reading by radiologists
Li, Li
Liu, Zhou
Huang, Hua
Lin, Meng
Luo, Dehong
Thoracic cancer2018Journal Article, cited 0 times
Website
LIDC-IDRI
LDCT
deep learning
Computer aided diagnosis
Computer aided detection
NLST
Combined use of radiomics and artificial neural networks for the three‐dimensional automatic segmentation of glioblastoma multiforme
de los Reyes, Alexander Mulet
Lord, Victoria Hyde
Buemi, Maria Elena
Gandía, Daniel
Déniz, Luis Gómez
Alemán, Maikel Noriega
Suárez, Cecilia
Expert Systems2024Journal Article, cited 0 times
Website
TCGA-GBM
BraTS 2020
Artificial Neural Network (ANN)
Automatic Segmentation
Glioblastoma Multiforme (GBM)
Radiomics
Glioblastoma multiforme (GBM) is the most prevalent and aggressive primary brain tumour and has the worst prognosis in adults. Currently, the automatic segmentation of this kind of tumour is being intensively studied. Here, the automatic three-dimensional segmentation of the GBM is achieved along with its related subzones (active tumour, inner necrosis, and peripheral oedema). Preliminary segmentations were first defined based on the four basic magnetic resonance imaging modalities and classic image processing methods (multi-threshold Otsu, Chan-Vese active contours, and morphological erosion). After an automatic gap-filling post-processing step, these preliminary segmentations were combined and corrected by a supervised artificial neural network of multilayer perceptron type with a hidden layer of 80 neurons, fed by 30 selected radiomic features of gray intensity and texture. Network classification has an overall accuracy of 83.9%, while the complete combined algorithm achieves average Dice similarity coefficients of 89.3%, 80.7%, 79.7%, and 66.4% for the entire region of interest, active tumour, oedema, and necrosis segmentations, respectively. These values are in the range of the best reported in the literature, yet with better Hausdorff distances and lower computational costs. The results presented here show that it is possible to achieve the automatic segmentation of this kind of tumour with traditional radiomics. This has relevant clinical potential at the time of diagnosis, precision radiotherapy planning, or post-treatment response evaluation.
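The classical building blocks of the preliminary segmentations are all available in scikit-image; a minimal sketch on a single 2D slice (iteration counts and class counts are illustrative assumptions):

```python
import numpy as np
from skimage.filters import threshold_multiotsu
from skimage.segmentation import morphological_chan_vese
from skimage.morphology import binary_erosion

def preliminary_masks(slice2d):
    """The classical trio used to seed the network: multi-threshold Otsu
    for intensity classes, morphological Chan-Vese for a region contour,
    and erosion to trim the boundary. The paper combines such masks
    across the four MRI modalities before the MLP correction step."""
    thresholds = threshold_multiotsu(slice2d, classes=3)
    otsu_mask = slice2d > thresholds[-1]          # keep the brightest class
    cv_mask = morphological_chan_vese(slice2d, 35,
                                      init_level_set=otsu_mask) > 0
    return otsu_mask, binary_erosion(cv_mask)

rng = np.random.default_rng(0)
slice2d = rng.random((96, 96))                    # stand-in MRI slice
otsu_mask, refined = preliminary_masks(slice2d)
```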
Deep learning‐based aggregate analysis to identify cut‐off points for decision‐making in pancreatic cancer detection
Dzemyda, Gintautas
Kurasova, Olga
Medvedev, Viktor
Šubonienė, Aušra
Gulla, Aistė
Samuilis, Artūras
Jagminas, Džiugas
Strupas, Kęstutis
Expert Systems2024Journal Article, cited 0 times
Pancreas-CT
Computed tomography
Deep Learning
This study addresses the problem of detecting pancreatic cancer by classifying computed tomography (CT) images into cancerous and non-cancerous classes using the proposed deep learning-based aggregate analysis framework. The application of deep learning, as a branch of machine learning and artificial intelligence, to specific medical challenges can lead to the early detection of diseases, thus accelerating the process towards timely and effective intervention. Classification requires a reasonable choice of an optimal cut-off point, which is used as a threshold for evaluating the model results. The choice of this point is key to ensuring efficient evaluation of the classification results, which directly affects diagnostic accuracy. A significant aspect of this research is the incorporation of private CT images from Vilnius University Hospital Santaros Klinikos, combined with publicly available datasets. To investigate the capabilities of the deep learning-based framework and to maximize pancreatic cancer diagnostic performance, experimental studies were carried out combining data from different sources. Classification accuracy metrics such as the Youden index, the (0,1)-criterion, the Matthews correlation coefficient, the F1 score, LR+, LR-, balanced accuracy, and g-mean were used to find the optimal cut-off point in order to balance sensitivity and specificity. By carefully analyzing and comparing the obtained results, we aim to develop a reliable system that will not only improve the accuracy of pancreatic cancer detection but also have wider application in the early diagnosis of other malignancies.
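As an example of the cut-off selection step, the Youden index can be computed directly from the ROC curve with scikit-learn; the labels and scores below are stand-ins.

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(y_true, y_score):
    """Pick the probability threshold maximizing Youden's J
    (J = sensitivity + specificity - 1) on the ROC curve; the study
    compares this against several other cut-off criteria such as the
    (0,1)-criterion, MCC, and F1."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr
    return thresholds[np.argmax(j)], j.max()

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)                              # stand-in labels
y_score = np.clip(y_true * 0.3 + rng.random(500) * 0.7, 0, 1) # toy scores
cutoff, j = youden_cutoff(y_true, y_score)
print(f"optimal cut-off {cutoff:.3f}, Youden J {j:.3f}")
```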
Magnetic resonance imaging of in vitro urine flow in single and tandem stented ureters subject to extrinsic ureteral obstruction
Dror, I.
Harris, T.
Kalchenko, V.
Shilo, Y.
Berkowitz, B.
Int J Urol2022Journal Article, cited 1 times
Website
OBJECTIVE: To quantify the relative volumetric flows in stent and ureter lumina, as a function of stent size and configuration, in both unobstructed and externally obstructed stented ureters. METHODS: Magnetic resonance imaging was used to measure flow in stented ureters using a phantom kidney model. Volumetric flow in the stent and ureter lumina were determined along the stented ureters, for each of four single stent sizes (4.8F, 6F, 7F, and 8F), and for tandem (6F and 7F) configurations. Measurements were made in the presence of a fully encircling extrinsic ureteral obstruction as well as in benchmark cases with no extrinsic ureteral obstruction. RESULTS: Under no obstruction, the relative contribution of urine flow in single stents is 1-10%, while the relative contributions to flow are ~6 and ~28% for tandem 6F and 7F, respectively. In the presence of an extrinsic ureteral obstruction and single stents, all urine passes within the stent lumen near the extrinsic ureteral obstruction. For tandem 6F and 7F stents under extrinsic ureteral obstruction, relative volumetric flows in the two stent lumina are ~73% and ~81%, respectively, with the remainder passing through the ureter lumen. CONCLUSIONS: Magnetic resonance imaging demonstrates that with no extrinsic ureteral obstruction, minimal urine flow occurs within a stent. Stent lumen flow is significant in the presence of extrinsic ureteral obstruction, in the vicinity of the extrinsic ureteral obstruction. For tandem stents subjected to extrinsic ureteral obstruction, urine flow also occurs in the ureter lumen between the stents, which can reduce the likelihood of kidney failure even in the case of both stent lumina being occluded.
A Three-Dimensional-Printed Patient-Specific Phantom for External Beam Radiation Therapy of Prostate Cancer
Lee, Christopher L
Dietrich, Max C
Desai, Uma G
Das, Ankur
Yu, Suhong
Xiang, Hong F
Jaffe, C Carl
Hirsch, Ariel E
Bloch, B Nicolas
Journal of Engineering and Science in Medical Diagnostics and Therapy2018Journal Article, cited 0 times
Website
Prostate Cancer
3D Printed Phantom
Radiation Therapy
Direct three-dimensional segmentation of prostate glands with nnU-Net
Wang, R.
Chow, S. S. L.
Serafin, R. B.
Xie, W.
Han, Q.
Baraznenok, E.
Lan, L.
Bishop, K. W.
Liu, J. T. C.
J Biomed Opt2024Journal Article, cited 0 times
Website
PCa_Bx_3Dpathology
Male
Humans
*Prostate/diagnostic imaging
*Prostatic Neoplasms/diagnostic imaging
Biopsy
Coloring Agents
Eosine Yellowish-(YS)
biomedical image processing
computational three-dimensional pathology
deep learning
gland segmentation
prostate cancer
SIGNIFICANCE: In recent years, we and others have developed non-destructive methods to obtain three-dimensional (3D) pathology datasets of clinical biopsies and surgical specimens. For prostate cancer risk stratification (prognostication), standard-of-care Gleason grading is based on examining the morphology of prostate glands in thin 2D sections. This motivates us to perform 3D segmentation of prostate glands in our 3D pathology datasets for the purposes of computational analysis of 3D glandular features that could offer improved prognostic performance. AIM: To facilitate prostate cancer risk assessment, we developed a computationally efficient and accurate deep learning model for 3D gland segmentation based on open-top light-sheet microscopy datasets of human prostate biopsies stained with a fluorescent analog of hematoxylin and eosin (H&E). APPROACH: For 3D gland segmentation based on our H&E-analog 3D pathology datasets, we previously developed a hybrid deep learning and computer vision-based pipeline, called image translation-assisted segmentation in 3D (ITAS3D), which required a complex two-stage procedure and tedious manual optimization of parameters. To simplify this procedure, we use the 3D gland-segmentation masks previously generated by ITAS3D as training datasets for a direct end-to-end deep learning-based segmentation model, nnU-Net. The inputs to this model are 3D pathology datasets of prostate biopsies rapidly stained with an inexpensive fluorescent analog of H&E and the outputs are 3D semantic segmentation masks of the gland epithelium, gland lumen, and surrounding stromal compartments within the tissue. RESULTS: nnU-Net demonstrates remarkable accuracy in 3D gland segmentations even with limited training data. Moreover, compared with the previous ITAS3D pipeline, nnU-Net operation is simpler and faster, and it can maintain good accuracy even with lower-resolution inputs. CONCLUSIONS: Our trained DL-based 3D segmentation model will facilitate future studies to demonstrate the value of computational 3D pathology for guiding critical treatment decisions for patients with prostate cancer.
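Since the reported accuracy of such segmentation models is usually summarized by the Dice similarity coefficient, a minimal sketch of that metric for binary 3D masks may be useful here; it is a generic implementation, not code from the paper.

```python
import numpy as np

def dice_3d(pred, truth):
    """Dice similarity coefficient between two binary 3D masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * overlap / denom if denom else 1.0
```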
Associating spatial diversity features of radiologically defined tumor habitats with epidermal growth factor receptor driver status and 12-month survival in glioblastoma: methods and preliminary investigation
Lee, Joonsang
Narang, Shivali
Martinez, Juan J
Rao, Ganesh
Rao, Arvind
Journal of Medical Imaging2015Journal Article, cited 15 times
Website
TCGA-GBM
Radiogenomics
Radiomics
Magnetic Resonance Imaging (MRI)
We analyzed the spatial diversity of tumor habitats, regions with distinctly different intensity characteristics of a tumor, using various measurements of habitat diversity within tumor regions. These features were then used for investigating the association with a 12-month survival status in glioblastoma (GBM) patients and for the identification of epidermal growth factor receptor (EGFR)-driven tumors. T1 postcontrast and T2 fluid attenuated inversion recovery images from 65 GBM patients were analyzed in this study. A total of 36 spatial diversity features were obtained based on pixel abundances within regions of interest. Performance in both the classification tasks was assessed using receiver operating characteristic (ROC) analysis. For association with 12-month overall survival, area under the ROC curve was 0.74 with confidence intervals [0.630 to 0.858]. The sensitivity and specificity at the optimal operating point ([Formula: see text]) on the ROC were 0.59 and 0.75, respectively. For the identification of EGFR-driven tumors, the area under the ROC curve (AUC) was 0.85 with confidence intervals [0.750 to 0.945]. The sensitivity and specificity at the optimal operating point ([Formula: see text]) on the ROC were 0.76 and 0.83, respectively. Our findings suggest that these spatial habitat diversity features are associated with these clinical characteristics and could be a useful prognostic tool for magnetic resonance imaging studies of patients with GBM.
Prediction of clinical phenotypes in invasive breast carcinomas from the integration of radiomics and genomics data
Guo, Wentian
Li, Hui
Zhu, Yitan
Lan, Li
Yang, Shengjie
Drukker, Karen
Morris, Elizabeth
Burnside, Elizabeth
Whitman, Gary
Giger, Maryellen L
Ji, Y.
TCGA Breast Phenotype Research Group
Journal of Medical Imaging2015Journal Article, cited 57 times
Website
TCGA-BRCA
Breast
Radiogenomics
Genomic and radiomic imaging profiles of invasive breast carcinomas from The Cancer Genome Atlas and The Cancer Imaging Archive were integrated and a comprehensive analysis was conducted to predict clinical outcomes using the radiogenomic features. Variable selection via LASSO and logistic regression were used to select the most-predictive radiogenomic features for the clinical phenotypes, including pathological stage, lymph node metastasis, and status of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2). Cross-validation with receiver operating characteristic (ROC) analysis was performed and the area under the ROC curve (AUC) was employed as the prediction metric. Higher AUCs were obtained in the prediction of pathological stage, ER, and PR status than for lymph node metastasis and HER2 status. Overall, the prediction performances by genomics alone, radiomics alone, and combined radiogenomics features showed statistically significant correlations with clinical outcomes; however, improvement on the prediction performance by combining genomics and radiomics data was not found to be statistically significant, most likely due to the small sample size of 91 cancer cases with 38 radiomic features and 144 genomic features.
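A miniature, hedged version of the pipeline sketched in this abstract, L1 (LASSO-style) penalized logistic regression scored by cross-validated AUC, might look as follows; the arrays are synthetic placeholders with the study's stated dimensions, not its data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(91, 38 + 144))   # 91 cases; radiomic + genomic features
y = rng.integers(0, 2, 91)            # e.g., ER status (synthetic here)

# The L1 penalty performs embedded variable selection, as LASSO does.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```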
Bolus arrival time and its effect on tissue characterization with dynamic contrast-enhanced magnetic resonance imaging
Mehrtash, Alireza
Gupta, Sandeep N
Shanbhag, Dattesh
Miller, James V
Kapur, Tina
Fennessy, Fiona M
Kikinis, Ron
Fedorov, Andriy
Journal of Medical Imaging2016Journal Article, cited 6 times
Website
QIN Prostate
Algorithm Development
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
PROSTATE
BREAST
Matching the bolus arrival time (BAT) of the arterial input function (AIF) and tissue residue function (TRF) is necessary for accurate pharmacokinetic (PK) modeling of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). We investigated the sensitivity of the volume transfer constant (Ktrans) and extravascular extracellular volume fraction (ve) to BAT and compared the results of four automatic BAT measurement methods in characterization of prostate and breast cancers. Variation in delay between AIF and TRF resulted in a monotonic trend in Ktrans and ve values. The results of automatic BAT estimators for clinical data were all comparable except for one BAT estimation method. Our results indicate that inaccuracies in BAT measurement can lead to variability among DCE-MRI PK model parameters, diminish the quality of model fit, and produce fewer valid voxels in a region of interest. Although the selection of the BAT method did not affect the direction of change in the treatment assessment cohort, we suggest that BAT measurement methods must be used consistently in the course of longitudinal studies to control measurement variability.
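To make the role of the BAT concrete: in the standard Tofts model, the tissue curve is a convolution of the AIF with an exponential kernel, and a mismatched arrival time shifts the AIF against the tissue residue function. The sketch below fits Ktrans, ve, and a BAT delay jointly on synthetic data; it is illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 300, 151)                   # seconds, uniform sampling
aif = 5.0 * (t / 30.0) * np.exp(1 - t / 30.0)  # toy gamma-variate AIF

def tofts(t, ktrans, ve, bat):
    cp = np.interp(t - bat, t, aif, left=0.0)  # shift the AIF by the BAT
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

ct = tofts(t, 0.2, 0.3, 10.0) + np.random.normal(0, 0.01, t.size)
(ktrans, ve, bat), _ = curve_fit(tofts, t, ct, p0=(0.1, 0.2, 0.0))
```

Fixing `bat` at a wrong value and refitting only `ktrans` and `ve` is a quick way to observe the kind of monotonic parameter bias the study describes.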
Special Section Guest Editorial: LUNGx Challenge for computerized lung nodule classification: reflections and lessons learned
Armato, Samuel G
Hadjiiski, Lubomir
Tourassi, Georgia D
Drukker, Karen
Giger, Maryellen L
Li, Feng
Redmond, George
Farahani, Keyvan
Kirby, Justin S
Clarke, Laurence P
Journal of Medical Imaging2015Journal Article, cited 20 times
Website
The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants' computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. Ten groups applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists' AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. The continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community.
Multisite concordance of apparent diffusion coefficient measurements across the NCI Quantitative Imaging Network
Newitt, David C
Malyarenko, Dariya
Chenevert, Thomas L
Quarles, C Chad
Bell, Laura
Fedorov, Andriy
Fennessy, Fiona
Jacobs, Michael A
Solaiyappan, Meiyappan
Hectors, Stefanie
Taouli, B.
Muzi, M.
Kinahan, P. E. E.
Schmainda, K. M.
Prah, M. A.
Taber, E. N.
Kroenke, C.
Huang, W.
Arlinghaus, L.
Yankeelov, T. E.
Cao, Y.
Aryal, M.
Yen, Y.-F.
Kalpathy-Cramer, J.
Shukla-Dave, A.
Fung, M.
Liang, J.
Boss, M.
Hylton, N.
Journal of Medical Imaging2017Journal Article, cited 6 times
Website
QIN
DCE-MRI
Performance analysis of a computer-aided detection system for lung nodules in CT at different slice thicknesses
Narayanan, B. N.
Hardie, R. C.
Kebede, T. M.
J Med Imaging (Bellingham)2018Journal Article, cited 2 times
Website
LIDC-IDRI
Algorithm Development
computed tomography
computer-aided detection
downsampling
lung nodules
slice thickness
We study the performance of a computer-aided detection (CAD) system for lung nodules in computed tomography (CT) as a function of slice thickness. In addition, we propose and compare three different training methodologies for utilizing nonhomogeneous thickness training data (i.e., composed of cases with different slice thicknesses). These methods are (1) aggregate training using the entire suite of data at their native thickness, (2) homogeneous subset training that uses only the subset of training data that matches each testing case, and (3) resampling all training and testing cases to a common thickness. We believe this study has important implications for how CT is acquired, processed, and stored. We make use of 192 CT cases acquired at a thickness of 1.25 mm and 283 cases at 2.5 mm. These data are from the publicly available Lung Nodule Analysis 2016 dataset. In our study, CAD performance at 2.5 mm is comparable with that at 1.25 mm and is much better than at higher thicknesses. Also, resampling all training and testing cases to 2.5 mm provides the best performance among the three training methods compared in terms of accuracy, memory consumption, and computational time.
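The third strategy, resampling everything to a common slice thickness, reduces to a one-line interpolation along the slice axis. A minimal sketch, assuming axis 0 is the slice axis and linear interpolation suffices:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_thickness(volume, thickness_mm, target_mm=2.5):
    # Interpolate along z only; in-plane resolution is left unchanged.
    return zoom(volume, (thickness_mm / target_mm, 1.0, 1.0), order=1)

vol = np.random.rand(120, 512, 512)        # stand-in for 1.25 mm slices
vol_25 = resample_thickness(vol, 1.25)     # ~60 slices at 2.5 mm
```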
Computer-aided detection of lung nodules: a review
Shaukat, Furqan
Raja, Gulistan
Frangi, Alejandro F.
Journal of Medical Imaging2019Journal Article, cited 0 times
LCTSC
RIDER Lung PET-CT
We present an in-depth review and analysis of salient methods for computer-aided detection of lung nodules. We evaluate the current methods for detecting lung nodules using literature searches with selection criteria based on validation dataset types, nodule sizes, numbers of cases, types of nodules, extracted features in traditional feature-based classifiers, sensitivity, and false positives (FP)/scans. Our review shows that current detection systems are often optimized for particular datasets and can detect only one or two types of nodules. We conclude that, in addition to achieving high sensitivity and reduced FP/scans, strategies for detecting lung nodules must detect a variety of nodules with high precision to improve the performances of the radiologists. To the best of our knowledge, ours is the first review of the effectiveness of feature extraction using traditional feature-based classifiers. Moreover, we discuss deep-learning methods in detail and conclude that features must be appropriately selected to improve the overall accuracy of the system. We present an analysis of current schemes and highlight constraints and future research areas.
Automatic mass detection in mammograms using deep convolutional neural networks
Agarwal, Richa
Diaz, Oliver
Lladó, Xavier
Yap, Moi Hoon
Martí, Robert
Journal of Medical Imaging2019Journal Article, cited 0 times
Website
CBIS-DDSM
BREAST
Mammography
Computer Aided Detection (CADe)
machine learning
With recent advances in the field of deep learning, the use of convolutional neural networks (CNNs) in medical imaging has become very encouraging. The aim of our paper is to propose a patch-based CNN method for automated mass detection in full-field digital mammograms (FFDM). In addition to evaluating CNNs pretrained with the ImageNet dataset, we investigate the use of transfer learning for a particular domain adaptation. First, the CNN is trained using a large public database of digitized mammograms (CBIS-DDSM dataset), and then the model is transferred and tested onto the smaller database of digital mammograms (INbreast dataset). We evaluate three widely used CNNs (VGG16, ResNet50, InceptionV3) and show that the InceptionV3 obtains the best performance for classifying the mass and nonmass breast region for CBIS-DDSM. We further show the benefit of domain adaptation between the CBIS-DDSM (digitized) and INbreast (digital) datasets using the InceptionV3 CNN. Mass detection evaluation follows a fivefold cross-validation strategy using free-response operating characteristic curves. Results show that the transfer learning from CBIS-DDSM obtains a substantially higher performance with the best true positive rate (TPR) of 0.98 ± 0.02 at 1.67 false positives per image (FPI), compared with transfer learning from ImageNet with TPR of 0.91 ± 0.07 at 2.1 FPI. In addition, the proposed framework improves upon mass detection results described in the literature on the INbreast database, in terms of both TPR and FPI.
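The transfer-learning setup described here, an ImageNet-pretrained backbone with a new binary head for mass versus non-mass patches, can be outlined in a few lines of Keras. This is a schematic sketch (the input size, optimizer, and fine-tuning schedule are assumptions, and data loading is omitted), not the authors' code.

```python
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # warm-up phase: train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # mass vs. non-mass
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
# Train on CBIS-DDSM patches first, then unfreeze and fine-tune on INbreast.
```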
Breast MRI radiomics for the pretreatment prediction of response to neoadjuvant chemotherapy in node-positive breast cancer patients
Drukker, Karen
Edwards, Alexandra
Doyle, Christopher
Papaioannou, John
Kulkarni, Kirti
Giger, Maryellen L.
Journal of Medical Imaging2019Journal Article, cited 0 times
ISPY1
The purpose of this study was to evaluate breast MRI radiomics in predicting, prior to any treatment, the response to neoadjuvant chemotherapy (NAC) in patients with invasive lymph node (LN)-positive breast cancer for two tasks: (1) prediction of pathologic complete response and (2) prediction of post-NAC LN status. Our study included 158 patients, with 19 showing post-NAC complete pathologic response (pathologic TNM stage T0,N0,MX) and 139 showing incomplete response. Forty-two patients were post-NAC LN-negative, and 116 were post-NAC LN-positive. We further analyzed prediction of response by hormone receptor subtype of the primary cancer (77 hormone receptor-positive, 39 HER2-enriched, 38 triple negative, and 4 cancers with unknown receptor status). Only pre-NAC MRIs underwent computer analysis, initialized by an expert breast radiologist indicating index cancers and metastatic axillary sentinel LNs on DCE-MRI images. Forty-nine computer-extracted radiomics features were obtained, both for the primary cancers and for the metastatic sentinel LNs. Since the dataset contained MRIs acquired at 1.5 T and at 3.0 T, we eliminated features affected by magnet strength using the Mann-Whitney U-test with the null-hypothesis that 1.5 T and 3.0 T samples were selected from populations having the same distribution. Bootstrapping and ROC analysis were used to assess performance of individual features in the two classification tasks. Eighteen features appeared unaffected by magnet strength. Pre-NAC tumor features generally appeared uninformative in predicting response to therapy. In contrast, some pre-NAC LN features were able to predict response: two pre-NAC LN features were able to predict pathologic complete response (area under the ROC curve (AUC) up to 0.82 [0.70; 0.88]), and another two were able to predict post-NAC LN-status (AUC up to 0.72 [0.62; 0.77]), respectively. In the analysis by a hormone receptor subtype, several potentially useful features were identified for predicting response to therapy in the hormone receptor-positive and HER2-enriched cancers.
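The magnet-strength screen used in this study is a straightforward application of the Mann-Whitney U test: a feature is retained only if the 1.5 T and 3.0 T samples are not significantly different. A hedged sketch with synthetic feature values:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def field_strength_robust(values_15t, values_30t, alpha=0.05):
    _, p = mannwhitneyu(values_15t, values_30t, alternative="two-sided")
    return p >= alpha   # fail to reject: feature treated as unaffected

rng = np.random.default_rng(2)
keep = field_strength_robust(rng.normal(0, 1, 60), rng.normal(0, 1, 98))
```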
Classification of brain tumor isocitrate dehydrogenase status using MRI and deep learning
Nalawade, S.
Murugesan, G. K.
Vejdani-Jahromi, M.
Fisicaro, R. A.
Bangalore Yogananda, C. G.
Wagner, B.
Mickey, B.
Maher, E.
Pinho, M. C.
Fei, B.
Madhuranthakam, A. J.
Maldjian, J. A.
J Med Imaging (Bellingham)2019Journal Article, cited 0 times
Website
BRAIN
Convolutional Neural Network (CNN)
IDH mutation
Classification
Computer Aided Detection (CADe)
Isocitrate dehydrogenase (IDH) mutation status is an important marker in glioma diagnosis and therapy. We propose an automated pipeline for noninvasively predicting IDH status using deep learning and T2-weighted (T2w) magnetic resonance (MR) images with minimal preprocessing (N4 bias correction and normalization to zero mean and unit variance). T2w MR images and genomic data were obtained from The Cancer Imaging Archive dataset for 260 subjects (120 high-grade and 140 low-grade gliomas). A fully automated two-dimensional densely connected model was trained to classify IDH mutation status on 208 subjects and tested on another held-out set of 52 subjects using fivefold cross validation. Data leakage was avoided by ensuring subject separation during the slice-wise randomization. Mean classification accuracy of 90.5% was achieved for each axial slice in predicting the three classes of no tumor, IDH mutated, and IDH wild type. Test accuracy of 83.8% was achieved in predicting IDH mutation status for individual subjects on the test dataset of 52 subjects. We demonstrate a deep learning method to predict IDH mutation status using T2w MRI alone. Radiologic imaging studies using deep learning methods must address data leakage (subject duplication) in the randomization process to avoid upward bias in the reported classification accuracy.
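The data-leakage safeguard highlighted in the conclusion, keeping all slices of a subject on one side of the split, corresponds to grouped cross-validation. A minimal sketch with placeholder arrays:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.random.rand(1000, 64)          # one row per axial slice (placeholder)
y = np.random.randint(0, 3, 1000)     # no tumor / IDH mutated / IDH wild type
subjects = np.random.randint(0, 260, 1000)

for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=subjects):
    # No subject may contribute slices to both sides of the split.
    assert not set(subjects[train_idx]) & set(subjects[test_idx])
```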
Evaluation of data augmentation via synthetic images for improved breast mass detection on mammograms using deep learning
Cha, K. H.
Petrick, N.
Pezeshk, A.
Graff, C. G.
Sharma, D.
Badal, A.
Sahiner, B.
J Med Imaging (Bellingham)2020Journal Article, cited 1 times
Website
CBIS-DDSM
BREAST
Phantom
Computer Assisted Detection (CAD)
We evaluated whether using synthetic mammograms for training data augmentation may reduce the effects of overfitting and increase the performance of a deep learning algorithm for breast mass detection. Synthetic mammograms were generated using in silico procedural analytic breast and breast mass modeling algorithms followed by simulated x-ray projections of the breast models into mammographic images. In silico breast phantoms containing masses were modeled across the four BI-RADS breast density categories, and the masses were modeled with different sizes, shapes, and margins. A Monte Carlo-based x-ray transport simulation code, MC-GPU, was used to project the three-dimensional phantoms into realistic synthetic mammograms. 2000 mammograms with 2522 masses were generated to augment a real data set during training. From the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set, we used 1111 mammograms (1198 masses) for training, 120 mammograms (120 masses) for validation, and 361 mammograms (378 masses) for testing. We used faster R-CNN for our deep learning network with pretraining from ImageNet using the Resnet-101 architecture. We compared the detection performance when the network was trained using different percentages of the real CBIS-DDSM training set (100%, 50%, and 25%), and when these subsets of the training set were augmented with 250, 500, 1000, and 2000 synthetic mammograms. Free-response receiver operating characteristic (FROC) analysis was performed to compare performance with and without the synthetic mammograms. We generally observed an improved test FROC curve when training with the synthetic images compared to training without them, and the amount of improvement depended on the number of real and synthetic images used in training. Our study shows that enlarging the training data with synthetic samples can increase the performance of deep learning systems.
Reproducibility of radiomic features using network analysis and its application in Wasserstein k-means clustering
Oh, Jung Hun
Apte, Aditya P.
Katsoulakis, Evangelia
Riaz, Nadeem
Hatzoglou, Vaios
Yu, Yao
Mahmood, Usman
Veeraraghavan, Harini
Pouryahya, Maryam
Iyer, Aditi
Shukla-Dave, Amita
Tannenbaum, Allen
Lee, Nancy Y.
Deasy, Joseph O.
Journal of Medical Imaging2021Journal Article, cited 0 times
Website
HNSCC
Radiomics
Reproducibility
Machine Learning
3D Printed Phantom
Purpose: The goal of this study is to develop innovative methods for identifying radiomic features that are reproducible over varying image acquisition settings. Approach: We propose a regularized partial correlation network to identify reliable and reproducible radiomic features. This approach was tested on two radiomic feature sets generated using two different reconstruction methods on computed tomography (CT) scans from a cohort of 47 lung cancer patients. The largest common network component between the two networks was tested on phantom data consisting of five cancer samples. To further investigate whether radiomic features found can identify phenotypes, we propose a k-means clustering algorithm coupled with the optimal mass transport theory. This approach following the regularized partial correlation network analysis was tested on CT scans from 77 head and neck squamous cell carcinoma (HNSCC) patients in the Cancer Imaging Archive (TCIA) and validated using an independent dataset. Results: A set of common radiomic features was found in relatively large network components between the two partial correlation networks resulting from a cohort of lung cancer patients. The reliability and reproducibility of those radiomic features were further validated on phantom data using the Wasserstein distance. Further analysis using the network-based Wasserstein k-means algorithm on the TCIA HNSCC data showed that the resulting clusters separate tumor subsites as well as HPV status, and this was validated on an independent dataset. Conclusion: We showed that a network-based analysis enables identifying reproducible radiomic features, and use of the selected set of features can enhance clustering results.
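For intuition, a regularized partial-correlation network over radiomic features can be estimated from the precision matrix of a sparse covariance fit. The sketch below uses scikit-learn's graphical lasso as one standard estimator; the paper's exact regularization may differ, and the data here are synthetic.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

X = np.random.normal(size=(47, 30))   # cases x radiomic features (synthetic)
prec = GraphicalLassoCV().fit(X).precision_

# Partial correlations from the precision matrix Theta:
# rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj)
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
edges = np.abs(partial_corr) > 0.1    # adjacency of the feature network
```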
Improving performance and generalizability in radiogenomics: a pilot study for prediction of IDH1/2 mutation status in gliomas with multicentric data
Santinha, J.
Matos, C.
Figueiredo, M.
Papanikolaou, N.
J Med Imaging (Bellingham)2021Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
Algorithm Development
Radiomics
Radiogenomics
IDH1/2 mutation status
Purpose: Radiogenomics offers a potential virtual and noninvasive biopsy. However, radiogenomics models often suffer from generalizability issues, which cause a performance degradation on unseen data. In MRI, differences in the sequence parameters, manufacturers, and scanners make this generalizability issue worse. Such image acquisition information may be used to define different environments and select robust and invariant radiomic features associated with the clinical outcome that should be included in radiomics/radiogenomics models. Approach: We assessed 77 low-grade glioma and glioblastoma multiforme patients publicly available in TCGA and TCIA. Radiomic features were extracted from multiparametric MRI images (T1-weighted, contrast-enhanced T1-weighted, T2-weighted, and fluid-attenuated inversion recovery) and different regions-of-interest (enhancing tumor, nonenhancing tumor/necrosis, and edema). A method developed to find variables that are part of causal structures was used for feature selection and compared with an embedded feature selection approach commonly used in radiomics/radiogenomics studies, across two different scenarios: (1) leaving data from one center out as an independent held-out test set and tuning the model with the data from the remaining centers and (2) using stratified partitioning to obtain the training and held-out test sets. Results: In scenario (1), the performance of the proposed methodology versus the traditional embedded method was AUC: 0.75 [0.25; 1.00] versus 0.83 [0.50; 1.00], Sens.: 0.67 [0.20; 0.93] versus 0.67 [0.20; 0.93], Spec.: 0.75 [0.30; 0.95] versus 0.75 [0.30; 0.95], and MCC: 0.42 [0.19; 0.68] versus 0.42 [0.19; 0.68] for center 1 as the held-out test set. The performance of both methods for center 2 as the held-out test set was AUC: 0.64 [0.36; 0.91] versus 0.55 [0.27; 0.82], Sens.: 0.00 [0.00; 0.73] versus 0.00 [0.00; 0.73], Spec.: 0.82 [0.52; 0.94] versus 0.91 [0.62; 0.98], and MCC: -0.13 [-0.38; -0.04] versus -0.09 [-0.38; -0.02], whereas for center 3 it was AUC: 0.80 [0.62; 0.95] versus 0.89 [0.56; 0.96], Sens.: 0.86 [0.48; 0.97] versus 0.86 [0.48; 0.97], Spec.: 0.72 [0.54; 0.85] versus 0.79 [0.61; 0.90], and MCC: 0.47 [0.41; 0.53] versus 0.55 [0.48; 0.60]. For center 4, the performance of both methods was AUC: 0.77 [0.51; 1.00] versus 0.75 [0.47; 0.97], Sens.: 0.53 [0.30; 0.75] versus 0.00 [0.00; 0.15], Spec.: 0.71 [0.35; 0.91] versus 0.86 [0.48; 0.97], and MCC: 0.23 [0.16; 0.31] versus -0.32 [-0.46; -0.20]. In scenario (2), the performance of these methods was AUC: 0.89 [0.71; 1.00] versus 0.79 [0.58; 0.94], Sens.: 0.86 [0.80; 0.92] versus 0.43 [0.15; 0.74], Spec.: 0.87 [0.62; 0.96] versus 0.87 [0.62; 0.96], and MCC: 0.70 [0.60; 0.77] versus 0.33 [0.24; 0.42]. Conclusions: This proof-of-concept study demonstrated good performance by the proposed feature selection method in the majority of the studied scenarios, as it promotes the robustness of the features included in the models, and hence their generalizability, when imaging data from different scanners or sequence parameters are used.
Quality control of radiomic features using 3D-printed CT phantoms
Mahmood, U.
Apte, A.
Kanan, C.
Bates, D. D. B.
Corrias, G.
Mannelli, L.
Oh, J. H.
Erdi, Y. E.
Nguyen, J.
Deasy, J. O.
Shukla-Dave, A.
J Med Imaging (Bellingham)2021Journal Article, cited 0 times
Website
RIDER LUNG CT
Computed Tomography (CT)
Quantitative imaging
Radiomics
PHANTOM
Purpose: The lack of standardization in quantitative radiomic measures of tumors seen on computed tomography (CT) scans is generally recognized as an unresolved issue. To develop reliable clinical applications, radiomics must be robust across different CT scan modes, protocols, software, and systems. We demonstrate how custom-designed phantoms, imprinted with human-derived patterns, can provide a straightforward approach to validating longitudinally stable radiomic signature values in a clinical setting. Approach: Described herein is a prototype process to design an anatomically informed 3D-printed radiomic phantom. We used a multimaterial, ultra-high-resolution 3D printer with voxel printing capabilities. Multiple tissue regions of interest (ROIs), from four pancreas tumors, one lung tumor, and a liver background, were extracted from digital imaging and communication in medicine (DICOM) CT exam files and were merged together to develop a multipurpose, circular radiomic phantom (18 cm diameter and 4 cm width). The phantom was scanned 30 times using standard clinical CT protocols to test repeatability. Features that have been found to be prognostic for various diseases were then investigated for their repeatability and reproducibility across different CT scan modes. Results: The structural similarity index between the segment used from the patients' DICOM image and the phantom CT scan was 0.71. The coefficient of variation for all assessed radiomic features was <1.0% across 30 repeat scans of the phantom. The percent deviation (pDV) from the baseline value, which was the mean feature value determined from repeat scans, increased with the application of the lung convolution kernel, changes to the voxel size, and increases in the image noise. Gray level co-occurrence features, contrast, dissimilarity, and entropy were particularly affected by different scan modes, presenting with pDV > +/-15%. Conclusions: Previously discovered prognostic and popular radiomic features are variable in practice and need to be interpreted with caution or excluded from clinical implementation. Voxel-based 3D printing can reproduce tissue morphology seen on CT exams. We believe that this is a flexible, yet practical, way to design custom phantoms to validate and compare radiomic metrics longitudinally, over time, and across systems.
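The repeatability statistics quoted above reduce to a few lines: the coefficient of variation across repeat scans and the percent deviation (pDV) of each scan from the repeat-scan mean. A sketch with a synthetic feature series:

```python
import numpy as np

feature_values = np.random.normal(100.0, 0.5, 30)  # one feature, 30 repeat scans

cv = 100.0 * feature_values.std(ddof=1) / feature_values.mean()
baseline = feature_values.mean()                   # repeat-scan mean
pdv = 100.0 * (feature_values - baseline) / baseline
```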
Automatic zonal segmentation of the prostate from 2D and 3D T2-weighted MRI and evaluation for clinical use
Hamzaoui, D.
Montagne, S.
Renard-Penna, R.
Ayache, N.
Delingette, H.
J Med Imaging (Bellingham)2022Journal Article, cited 0 times
PROSTATEx
Deep learning
inter-rater variability
Magnetic Resonance Imaging (MRI)
PROSTATE
Automatic Segmentation
Purpose: An accurate zonal segmentation of the prostate is required for prostate cancer (PCa) management with MRI. Approach: The aim of this work is to present UFNet, a deep learning-based method for automatic zonal segmentation of the prostate from T2-weighted (T2w) MRI. It takes into account the image anisotropy, includes both spatial and channelwise attention mechanisms, and uses loss functions to enforce prostate partition. The method was applied on a private multicentric three-dimensional T2w MRI dataset and on the public two-dimensional T2w MRI dataset ProstateX. To assess the model performance, the structures segmented by the algorithm on the private dataset were compared with those obtained by seven radiologists of various experience levels. Results: On the private dataset, we obtained a Dice score (DSC) of 93.90 +/- 2.85 for the whole gland (WG), 91.00 +/- 4.34 for the transition zone (TZ), and 79.08 +/- 7.08 for the peripheral zone (PZ). Results were significantly better than those of the other networks compared (p < 0.05). On ProstateX, we obtained a DSC of 90.90 +/- 2.94 for WG, 86.84 +/- 4.33 for TZ, and 78.40 +/- 7.31 for PZ. These results are similar to state-of-the-art results and, on the private dataset, are consistent with those obtained by radiologists. Zonal locations and sectorial positions of lesions annotated by radiologists were also preserved. Conclusions: Deep learning-based methods can provide an accurate zonal segmentation of the prostate leading to a consistent zonal location and sectorial position of lesions, and can therefore be used as an assistive tool for PCa diagnosis.
Efficient multiscale fully convolutional UNet model for segmentation of 3D lung nodule from CT image
Agnes, S. A.
Anitha, J.
J Med Imaging (Bellingham)2022Journal Article, cited 0 times
LIDC-IDRI
Convolutional Neural Network (CNN)
Deep learning
maxout aggregation
multiscale fully convolutional UNet
semantic segmentation
3D segmentation
Purpose: Segmentation of lung nodules in chest CT images is essential for image-driven lung cancer diagnosis and follow-up treatment planning. Manual segmentation of lung nodules is subjective because the approach depends on the knowledge and experience of the specialist. We proposed a multiscale fully convolutional three-dimensional UNet (MF-3D UNet) model for automatic segmentation of lung nodules in CT images. Approach: The proposed model employs two strategies, fusion of multiscale features with Maxout aggregation and trainable downsampling, to improve the performance of nodule segmentation in 3D CT images. The fusion of multiscale (fine and coarse) features with the Maxout function allows the model to retain the most important features while suppressing the low-contribution features. The trainable downsampling process is used instead of fixed pooling-based downsampling. Results: The performance of the proposed MF-3D UNet model is examined by evaluating the model with CT scans obtained from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. A quantitative and visual comparative analysis of the proposed work with various customized UNet models is also presented. The comparative analysis shows that the proposed model yields reliable segmentation results compared with other methods. The experimental result of 3D MF-UNet shows encouraging results in the segmentation of different types of nodules, including juxta-pleural, solitary pulmonary, and non-solid nodules, with an average Dice similarity coefficient of 0.83 +/- 0.05 , and it outperforms other CNN-based segmentation models. Conclusions: The proposed model accurately segments the nodules using multiscale feature aggregation and trainable downsampling approaches. Also, 3D operations enable precise segmentation of complex nodules using inter-slice connections.
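The Maxout-style fusion of fine and coarse features amounts to an element-wise maximum over feature maps brought to a common resolution, keeping the strongest response per voxel. A hedged PyTorch sketch (shapes and names are assumptions, not the paper's code):

```python
import torch

def maxout_fuse(fine, coarse):
    """fine, coarse: (N, C, D, H, W) feature maps at a common resolution."""
    return torch.maximum(fine, coarse)  # keep the stronger activation per voxel

fine = torch.randn(1, 32, 16, 64, 64)
coarse = torch.randn(1, 32, 16, 64, 64)  # assumed upsampled beforehand
fused = maxout_fuse(fine, coarse)
```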
Contour interpolation by deep learning approach
Zhao, C.
Duan, Y.
Yang, D.
J Med Imaging (Bellingham)2022Journal Article, cited 0 times
NSCLC-Radiomics
Pancreas-CT
HNSCC-3D-CT-RT
LiTS
Deep learning
Segmentation
LUNG
LIVER
STOMACH
PURPOSE: Contour interpolation is an important tool for expediting manual segmentation of anatomical structures. The process allows users to manually contour on discontinuous slices and then automatically fill in the gaps, therefore saving time and efforts. The most used conventional shape-based interpolation (SBI) algorithm, which operates on shape information, often performs suboptimally near the superior and inferior borders of organs and for the gastrointestinal structures. In this study, we present a generic deep learning solution to improve the robustness and accuracy for contour interpolation, especially for these historically difficult cases. APPROACH: A generic deep contour interpolation model was developed and trained using 16,796 publicly available cases from 5 different data libraries, covering 15 organs. The network inputs were a 128 x 128 x 5 image patch and the two-dimensional contour masks for the top and bottom slices of the patch. The outputs were the organ masks for the three middle slices. The performance was evaluated on both dice scores and distance-to-agreement (DTA) values. RESULTS: The deep contour interpolation model achieved a dice score of 0.95 +/- 0.05 and a mean DTA value of 1.09 +/- 2.30 mm , averaged on 3167 testing cases of all 15 organs. In a comparison, the results by the conventional SBI method were 0.94 +/- 0.08 and 1.50 +/- 3.63 mm , respectively. For the difficult cases, the dice score and DTA value were 0.91 +/- 0.09 and 1.68 +/- 2.28 mm by the deep interpolator, compared with 0.86 +/- 0.13 and 3.43 +/- 5.89 mm by SBI. The t-test results confirmed that the performance improvements were statistically significant ( p < 0.05 ) for all cases in dice scores and for small organs and difficult cases in DTA values. Ablation studies were also performed. CONCLUSIONS: A deep learning method was developed to enhance the process of contour interpolation. It could be useful for expediting the tasks of manual segmentation of organs and structures in the medical images.
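For reference, the conventional SBI baseline that the deep interpolator is compared against is commonly implemented by linearly interpolating signed distance maps of the two bounding contours and thresholding at zero. A generic sketch of one common variant, not necessarily the exact baseline used:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(mask):
    return edt(mask) - edt(~mask)   # positive inside, negative outside

def sbi_interpolate(mask_top, mask_bottom, alpha=0.5):
    d = ((1 - alpha) * signed_distance(mask_top)
         + alpha * signed_distance(mask_bottom))
    return d > 0                    # mask at fractional position alpha

top = np.zeros((64, 64), bool); top[20:40, 20:40] = True
bottom = np.zeros((64, 64), bool); bottom[25:45, 25:45] = True
middle = sbi_interpolate(top, bottom)
```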
Predicting recurrence risks in lung cancer patients using multimodal radiomics and random survival forests
Christie, J. R.
Daher, O.
Abdelrazek, M.
Romine, P. E.
Malthaner, R. A.
Qiabi, M.
Nayak, R.
Napel, S.
Nair, V. S.
Mattonen, S. A.
J Med Imaging (Bellingham)2022Journal Article, cited 0 times
NSCLC Radiogenomics
Computed Tomography (CT)
lung cancer
machine learning
Positron Emission Tomography (PET)
Radiomics
PURPOSE: We developed a model integrating multimodal quantitative imaging features from tumor and nontumor regions, qualitative features, and clinical data to improve the risk stratification of patients with resectable non-small cell lung cancer (NSCLC). APPROACH: We retrospectively analyzed 135 patients [mean age, 69 years (43 to 87, range); 100 male patients and 35 female patients] with NSCLC who underwent upfront surgical resection between 2008 and 2012. The tumor and peritumoral regions on both preoperative CT and FDG PET-CT and the vertebral bodies L3 to L5 on FDG PET were segmented to assess the tumor and bone marrow uptake, respectively. Radiomic features were extracted and combined with clinical and CT qualitative features. A random survival forest model was developed using the top-performing features to predict the time to recurrence/progression in the training cohort ( n = 101 ), validated in the testing cohort ( n = 34 ) using the concordance, and compared with a stage-only model. Patients were stratified into high- and low-risks of recurrence/progression using Kaplan-Meier analysis. RESULTS: The model, consisting of stage, three wavelet texture features, and three wavelet first-order features, achieved a concordance of 0.78 and 0.76 in the training and testing cohorts, respectively, significantly outperforming the baseline stage-only model results of 0.67 ( p < 0.005 ) and 0.60 ( p = 0.008 ), respectively. Patients at high- and low-risks of recurrence/progression were significantly stratified in both the training ( p < 0.005 ) and the testing ( p = 0.03 ) cohorts. CONCLUSIONS: Our radiomic model, consisting of stage and tumor, peritumoral, and bone marrow features from CT and FDG PET-CT significantly stratified patients into low- and high-risk of recurrence/progression.
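A skeletal version of the survival model described here, assuming the scikit-survival package, fits a random survival forest and scores it by concordance; the features and outcomes below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(3)
X = rng.normal(size=(135, 7))   # stage + selected radiomic features (synthetic)
y = np.array(
    [(bool(e), t) for e, t in zip(rng.integers(0, 2, 135),
                                  rng.uniform(1, 60, 135))],
    dtype=[("event", bool), ("time", float)])  # recurrence/progression outcome

rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X, y)
c_index = concordance_index_censored(y["event"], y["time"], rsf.predict(X))[0]
```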
Anonymization and validation of three-dimensional volumetric renderings of computed tomography data using commercially available T1-weighted magnetic resonance imaging-based algorithms
Patel, Rahil
Provenzano, Destie
Loew, Murray
Journal of Medical Imaging2023Journal Article, cited 0 times
ACRIN-FMISO-Brain
CPTAC-LSCC
LDCT-and-Projection-data
Purpose: Previous studies have demonstrated that three-dimensional (3D) volumetric renderings of magnetic resonance imaging (MRI) brain data can be used to identify patients using facial recognition. We have shown that facial features can be identified on simulation-computed tomography (CT) images for radiation oncology and mapped to face images from a database. We aim to determine whether CT images can be anonymized using anonymization software that was designed for T1-weighted MRI data.
Approach: Our study examines (1) the ability of off-the-shelf anonymization algorithms to anonymize CT data and (2) the ability of facial recognition algorithms to identify whether faces could be detected from a database of facial images. Our study generated 3D renderings from 57 head CT scans from The Cancer Imaging Archive database. Data were anonymized using AFNI (deface, reface, and 3Dskullstrip) and FSL's BET. Anonymized data were compared to the original renderings and passed through facial recognition algorithms (VGG-Face, FaceNet, DLib, and SFace) using a facial database (labeled faces in the wild) to determine what matches could be found.
Results: Our study found that all modules were able to process CT data and that AFNI's 3Dskullstrip and FSL's BET data consistently showed lower reidentification rates compared to the original.
Conclusions: The results from this study highlight the potential usage of anonymization algorithms as a clinical standard for deidentifying brain CT data. Our study demonstrates the importance of continued vigilance for patient privacy in publicly shared datasets and the importance of continued evaluation of anonymization methods for CT data.
Automatic localization of target vertebrae in spine surgery using fast CT-to-fluoroscopy (3D-2D) image registration
Localization of target vertebrae is an essential step in minimally invasive spine surgery, with conventional methods relying on "level counting" - i.e., manual counting of vertebrae under fluoroscopy starting from readily identifiable anatomy (e.g., the sacrum). The approach requires an undesirable amount of radiation and time, and is prone to counting errors due to the similar appearance of vertebrae in projection images; wrong-level surgery occurs in 1 of every ~3000 cases. This paper proposes a method to automatically localize target vertebrae in x-ray projections using 3D-2D registration between preoperative CT (in which vertebrae are preoperatively labeled) and intraoperative fluoroscopy. The registration uses an intensity-based approach with a gradient-based similarity metric and the CMA-ES algorithm for optimization. Digitally reconstructed radiographs (DRRs) and a robust similarity metric are computed on GPU to accelerate the process. Evaluation in clinical CT data included 5,000 PA and LAT projections randomly perturbed to simulate human variability in the setup of a mobile intraoperative C-arm. The method demonstrated 100% success for the PA view (projection error: 0.42 mm) and 99.8% success for the LAT view (projection error: 0.37 mm). The initial GPU implementation provided automatic target localization within about 3 seconds, with further improvement underway via multi-GPU computation. The ability to automatically label vertebrae in fluoroscopy promises to streamline surgical workflow, improve patient safety, and reduce wrong-site surgeries, especially in large patients for whom manual methods are time consuming and error prone.
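The optimization loop at the heart of this registration can be sketched with the `cma` package; `render_drr` and `gradient_similarity` below are toy stand-ins for the paper's GPU DRR renderer and gradient-based similarity metric, so treat this as an assumption-laden outline rather than the method itself.

```python
import numpy as np
import cma
from scipy.ndimage import shift

def render_drr(volume, pose):
    # Toy stand-in for a GPU DRR: parallel projection plus an in-plane
    # translation driven by the first two pose parameters.
    return shift(volume.sum(axis=0), (pose[0], pose[1]), order=1)

def gradient_similarity(a, b):
    # Toy gradient-correlation surrogate for the paper's metric.
    ga, gb = np.gradient(a)[0].ravel(), np.gradient(b)[0].ravel()
    return float(np.corrcoef(ga, gb)[0, 1])

ct_volume = np.random.rand(32, 64, 64)
fluoro = render_drr(ct_volume, [3.0, -2.0, 0, 0, 0, 0])  # "observed" image

def cost(pose):                      # pose = (tx, ty, tz, rx, ry, rz)
    return -gradient_similarity(render_drr(ct_volume, pose), fluoro)

es = cma.CMAEvolutionStrategy(6 * [0.0], 2.0, {"maxiter": 50})
while not es.stop():
    poses = es.ask()
    es.tell(poses, [cost(p) for p in poses])
best_pose = es.result.xbest
```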
Measurement of spiculation index in 3D for solitary pulmonary nodules in volumetric lung CT images
Development of algorithms for computer-aided diagnosis (CAD) schemes is growing rapidly to assist the radiologist in medical image interpretation. Texture analysis of computed tomography (CT) scans is an important preliminary stage in computerized detection systems and classification for lung cancer. Among different types of image feature analysis, Haralick texture with a variety of statistical measures has been used widely in image texture description. The extraction of texture feature values is essential for use by a CAD system, especially in classification of normal and abnormal tissue on cross-sectional CT images. This paper aims to compare experimental results using texture extraction and different machine learning methods in the classification of normal and abnormal tissues in lung CT images. The machine learning methods involved in this assessment are the Artificial Immune Recognition System (AIRS), Naive Bayes, Decision Tree (J48), and Backpropagation Neural Network. AIRS is found to provide high accuracy (99.2%) and sensitivity (98.0%) in the assessment. For experiment and testing purposes, publicly available datasets in the Reference Image Database to Evaluate Therapy Response (RIDER) are used as study cases.
The quest for 'diagnostically lossless' medical image compression: a comparative study of objective quality metrics for compressed medical images
Kowalik-Urbaniak, Ilona
Brunet, Dominique
Wang, Jiheng
Koff, David
Smolarski-Koff, Nadine
Vrscay, Edward R
Wallace, Bill
Wang, Zhou
2014Conference Proceedings, cited 0 times
Image Compression
BRAIN
JPEG2000
Computed Tomography (CT)
Our study, involving a collaboration with radiologists (DK,NSK) as well as a leading international developer of medical imaging software (AGFA), is primarily concerned with improved methods of assessing the diagnostic quality of compressed medical images and the investigation of compression artifacts resulting from JPEG and JPEG2000. In this work, we compare the performances of the Structural Similarity quality measure (SSIM), MSE/PSNR, compression ratio CR and JPEG quality factor Q, based on experimental data collected in two experiments involving radiologists. An ROC and Kolmogorov-Smirnov analysis indicates that compression ratio is not always a good indicator of visual quality. Moreover, SSIM demonstrates the best performance, i.e., it provides the closest match to the radiologists' assessments. We also show that a weighted Youden index and curve fitting method can provide SSIM and MSE thresholds for acceptable compression ratios.
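The comparison between SSIM and MSE/PSNR is easy to reproduce with scikit-image; the sketch below scores a degraded copy of a test image with both metrics (additive noise stands in for compression artifacts, since an encoder is beside the point here).

```python
import numpy as np
from skimage import data
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

original = data.camera().astype(np.float64)
degraded = original + np.random.normal(0, 10, original.shape)

ssim = structural_similarity(original, degraded, data_range=255)
psnr = peak_signal_noise_ratio(original, degraded, data_range=255)
```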
A novel computer-aided detection system for pulmonary nodule identification in CT images
Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists to identify lung lesions at an early stage. In this paper, we propose a novel approach for CADe of lung nodules using a two-stage vector quantization (VQ) scheme. The first-stage VQ aims to extract the lungs from the chest volume, while the second-stage VQ is designed to extract initial nodule candidates (INCs) within the lung volume. Then rule-based expert filtering is employed to prune obvious FPs from INCs, and the commonly used support vector machine (SVM) classifier is adopted to further reduce the FPs. The proposed system was validated on 100 CT scans randomly selected from the 262 scans that have at least one juxta-pleural nodule annotation in the publicly available database - Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The two-stage VQ only missed 2 out of the 207 nodules at agreement level 1, and the INCs detection for each scan took about 30 seconds on average. Expert filtering reduced FPs more than 18 times, while maintaining a sensitivity of 93.24%. As it is trivial to distinguish INCs attached to the pleural wall from those that are not, we investigated the feasibility of training different SVM classifiers to further reduce FPs from these two kinds of INCs. Experimental results indicated that SVM classification over the entire set of INCs was preferable, where the optimal operating point of our CADe system achieved a sensitivity of 89.4% at a specificity of 86.8%.
Smooth extrapolation of unknown anatomy via statistical shape models
We propose a novel non-invasive brain tumor type classification using Multi-fractal Detrended Fluctuation Analysis (MFDFA) [1] in structural magnetic resonance (MR) images. This preliminary work investigates the efficacy of the MFDFA features, along with our novel texture feature known as multi-fractional Brownian motion (mBm) [2], in classifying (grading) brain tumors as High Grade (HG) and Low Grade (LG). Based on prior performance, Random Forest (RF) [3] is employed for tumor grading using two different datasets, BRATS-2013 [4] and BRATS-2014 [5]. Quantitative scores such as precision, recall, and accuracy are obtained using the confusion matrix. On average, 90% precision and 85% recall from the inter-dataset cross-validation confirm the efficacy of the proposed method.
Phenotypic characterization of glioblastoma identified through shape descriptors
This paper proposes quantitatively describing the shape of glioblastoma (GBM) tissue phenotypes as a set of shape features derived from segmentations, for the purposes of discriminating between GBM phenotypes and monitoring tumor progression. GBM patients were identified from The Cancer Genome Atlas, and quantitative MR imaging data were obtained from The Cancer Imaging Archive. Three GBM tissue phenotypes are considered, including necrosis, active tumor, and edema/invasion. Volumetric tissue segmentations are obtained from registered T1-weighted (T1-WI) postcontrast and fluid-attenuated inversion recovery (FLAIR) MRI modalities. Shape features are computed from respective tissue phenotype segmentations, and a Kruskal-Wallis test was employed to select features capable of classification with a significance level of p < 0.05. Several classifier models are employed to distinguish phenotypes, where a leave-one-out cross-validation was performed. Eight features were found statistically significant for classifying GBM phenotypes with p < 0.05; orientation is uninformative. Quantitative evaluations show the SVM results in the highest classification accuracy of 87.50%, sensitivity of 94.59%, and specificity of 92.77%. In summary, the shape descriptors proposed in this work show high performance in predicting GBM tissue phenotypes. They are thus closely linked to morphological characteristics of GBM phenotypes and could potentially be used in a computer-assisted labeling system.
GBM heterogeneity characterization by radiomic analysis of phenotype anatomical planes
Glioblastoma multiforme (GBM) is the most common malignant primary tumor of the central nervous system, characterized among other traits by rapid metastasis. Three tissue phenotypes closely associated with GBMs, namely, necrosis (N), contrast enhancement (CE), and edema/invasion (E), exhibit characteristic patterns of texture heterogeneity in magnetic resonance images (MRI). In this study, we propose a novel model to characterize GBM tissue phenotypes using gray level co-occurrence matrices (GLCM) in three anatomical planes. The GLCM encodes local image patches in terms of informative, orientation-invariant texture descriptors, which are used here to sub-classify GBM tissue phenotypes. Experiments demonstrate the model on MRI data of 41 GBM patients, obtained from The Cancer Genome Atlas (TCGA). Intensity-based automatic image registration is applied to align corresponding pairs of fixed T1-weighted (T1-WI) post-contrast and fluid-attenuated inversion recovery (FLAIR) images. GBM tissue regions are then segmented using the 3D Slicer tool. Texture features are computed from 12 quantifier functions operating on GLCM descriptors, which are generated from MRI intensities within segmented GBM tissue regions. Various classifier models are used to evaluate the effectiveness of texture features for discriminating between GBM phenotypes. Results based on T1-WI scans showed a phenotype classification accuracy of over 88.14%, a sensitivity of 85.37%, and a specificity of 96.1%, using the linear discriminant analysis (LDA) classifier. This model has the potential to provide important characteristics of tumors, which can be used for the sub-classification of GBM phenotypes.
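GLCM quantifiers of the kind listed here (contrast, dissimilarity, and so on) are available in scikit-image; a generic sketch over a synthetic patch, assuming scikit-image >= 0.19 for the `graycomatrix` spelling:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = (np.random.rand(64, 64) * 63).astype(np.uint8)  # stand-in ROI
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=64, symmetric=True, normed=True)

# Averaging over angles yields orientation-invariant descriptors.
contrast = graycoprops(glcm, "contrast").mean()
dissimilarity = graycoprops(glcm, "dissimilarity").mean()
```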
Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data
Anirudh, Rushil
Thiagarajan, Jayaraman J
Bremer, Timo
Kim, Hyojin
2016Conference Proceedings, cited 33 times
Website
Convolutional Neural Network (CNN)
LUNG
Pulmonary nodule detection using a cascaded SVM classifier
Automatic detection of lung nodules from chest CT has been researched intensively over the last decades resulting also in several commercial products. However, solutions are adopted only slowly into daily clinical routine as many current CAD systems still potentially miss true nodules while at the same time generating too many false positives (FP). While many earlier approaches had to rely on rather few cases for development, larger databases become now available and can be used for algorithmic development. In this paper, we address the problem of lung nodule detection via a cascaded SVM classifier. The idea is to sequentially perform two classification tasks in order to select from an extremely large pool of potential candidates the few most likely ones. As the initial pool is allowed to contain thousands of candidates, very loose criteria could be applied during this pre-selection. In this way, the chances that a true nodule is falsely rejected as a candidate are reduced significantly. The final algorithm is trained and tested on the full LIDC/IDRI database. Comparison is done against two previously published CAD systems. Overall, the algorithm achieved sensitivity of 0.859 at 2.5 FP/volume where the other two achieved sensitivity values of 0.321 and 0.625, respectively. On low dose data sets, only slight increase in the number of FP/volume was observed, while the sensitivity was not affected.
Decision forests for learning prostate cancer probability maps from multiparametric MRI
Magnetic Resonance Imaging (MRI) is the standard of care in the clinic for diagnosis and follow-up of Soft Tissue Sarcomas (STS), which presents an opportunity to explore the heterogeneity inherent in these rare tumors. Tumor heterogeneity is a challenging problem to quantify and has been shown to exist at many scales, from genomic to radiomic, existing both within an individual tumor, between tumors from the same primary in the same patient, and across different patients. In this paper, we propose a method which focuses on spatially distinct sub-regions or habitats in the diagnostic MRI of patients with STS by using pixel signal intensity. Habitat characteristics likely represent areas of differing underlying biology within the tumor, and delineation of these differences could provide clinically relevant information to aid in selecting a therapeutic regimen (chemotherapy or radiation). To quantify tumor heterogeneity, we first assay intra-tumoral segmentations based on signal intensity and then build a spatial mapping scheme from various MRI modalities. Finally, we predict clinical outcomes, using in this paper the appearance of distant metastasis - the most clinically meaningful endpoint. After tumor segmentation into high and low signal intensities, a set of quantitative imaging features based on signal intensity is proposed to represent variation in habitat characteristics. This set of features is utilized to predict metastasis in a cohort of STS patients. We show that this framework, using only pre-therapy MRI, predicts the development of metastasis in STS patients with 72.41% accuracy, providing a starting point for a number of clinical hypotheses.
Predicting response before initiation of neoadjuvant chemotherapy in breast cancer using new methods for the analysis of dynamic contrast enhanced MRI (DCE MRI) data
The pharmacokinetic parameters derived from dynamic contrast enhanced (DCE) MRI have shown promise as biomarkers for tumor response to therapy. However, standard methods of analyzing DCE MRI data (Tofts model) require high temporal resolution, high signal-to-noise ratio (SNR), and the Arterial Input Function (AIF). Such models produce reliable biomarkers of response only when a therapy has a large effect on the parameters. We recently reported a method that addresses these limitations, the Linear Reference Region Model (LRRM). Similar to other reference region models, the LRRM needs no AIF. Additionally, the LRRM is more accurate and precise than standard methods at low SNR and slow temporal resolution, suggesting LRRM-derived biomarkers could be better predictors. Here, the LRRM, Non-linear Reference Region Model (NRRM), Linear Tofts model (LTM), and Non-linear Tofts Model (NLTM) were used to estimate the RKtrans between muscle and tumor (or the Ktrans for Tofts) and the tumor kep,TOI for 39 breast cancer patients who received neoadjuvant chemotherapy (NAC). These parameters and the receptor statuses of each patient were used to construct cross-validated predictive models to classify patients as complete pathological responders (pCR) or non-complete pathological responders (non-pCR) to NAC. Model performance was evaluated using area under the ROC curve (AUC). The AUC for receptor status alone was 0.62, while the best performances using predictors from the LRRM, NRRM, LTM, and NLTM were AUCs of 0.79, 0.55, 0.60, and 0.59, respectively. This suggests that the LRRM can be used to predict response to NAC in breast cancer.
Radiogenomics of glioblastoma: a pilot multi-institutional study to investigate a relationship between tumor shape features and tumor molecular subtype
Glioblastoma (GBM) is the most common primary brain tumor characterized by very poor survival. However, while some patients survive only a few months, some might live for multiple years. Accurate prognosis of survival and stratification of patients allows for making more personalized treatment decisions and moves treatment of GBM one step closer toward the paradigm of precision medicine. While some molecular biomarkers are being investigated, medical imaging remains significantly underutilized for prognostication in GBM. In this study, we investigated whether computer analysis of tumor shape can contribute toward accurate prognosis of outcomes. Specifically, we applied computer algorithms to extract 5 shape features from magnetic resonance imaging (MRI) for 22 GBM patients. Then, we determined whether each one of the features can accurately distinguish between patients with good and poor outcomes. We found that one of the 5 analyzed features showed prognostic value for survival. The prognostic feature describes how well the 3D tumor shape fills its minimum bounding ellipsoid. Specifically, for low values (less than or equal to the median) the proportion of patients that survived more than a year was 27%, while for high values (higher than the median) the proportion of patients with survival of more than 1 year was 82%. The difference was statistically significant (p < 0.05) even though the number of patients analyzed in this pilot study was low. We concluded that computerized, 3D analysis of tumor shape in MRI may strongly contribute to accurate prognostication and stratification of patients for therapy in GBM.
Prognosis classification in glioblastoma multiforme using multimodal MRI derived heterogeneity textural features: impact of pre-processing choices
Computed tomography (CT) imaging is a sensitive and specific lung cancer screening tool for the high-risk population and has shown promise for the detection of lung cancer. This study proposes an automatic methodology for detecting and segmenting lung nodules from CT images. The proposed method begins with thorax segmentation, lung extraction, and reconstruction of the original shape of the parenchyma using morphology operations. Next, a multi-scale Hessian-based vesselness filter is applied to extract the vasculature within the lungs. The vasculature mask is subtracted from the lung region segmentation mask to extract 3D regions representing candidate pulmonary nodules. Finally, the remaining structures are classified as nodules using shape and intensity features, which together are used to train an artificial neural network. Up to 75% sensitivity and 98% specificity were achieved for detection of lung nodules in our testing dataset, with an overall accuracy of 97.62%±0.72% using 11 selected features as input to the neural network classifier, based on 4-fold cross-validation. Receiver operating characteristic analysis for identifying nodules revealed an area under the curve of 0.9476.
Automatic lung nodule classification with radiomics approach
Ma, Jingchen
Wang, Qian
Ren, Yacheng
Hu, Haibo
Zhao, Jun
2016Conference Proceedings, cited 10 times
Website
LUNG
Classification
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
Lung cancer is the leading cause of cancer death. Malignant lung nodules have extremely high mortality, while some benign nodules need no treatment at all; accurate discrimination between benign and malignant nodules is therefore necessary. Although an additional invasive biopsy or a second CT scan 3 months later may currently help radiologists make this judgment, easier diagnostic approaches are urgently needed. In this paper, we propose a novel CAD method to distinguish benign from malignant lung nodules directly from CT images, which can not only improve the efficiency of tumor diagnosis but also greatly decrease the pain and risk patients face during biopsy collection. Briefly, following the state-of-the-art radiomics approach, 583 features were first extracted to measure the nodules' intensity, shape, heterogeneity, and multi-frequency information. These features were then analyzed with a Random Forest classifier to distinguish benign from malignant nodules. Our proposed scheme was tested on all 79 CT scans with diagnosis data available in The Cancer Imaging Archive (TCIA), which contain 127 nodules, each annotated by at least one of four radiologists participating in the project. The method achieved 82.7% accuracy in classifying malignant primary lung nodules versus benign nodules. We believe it would bring much value to routine lung cancer diagnosis in CT imaging and provide improved decision support at much lower cost.
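A minimal sketch of the Random Forest classification step, assuming a precomputed nodule-by-feature matrix; the data here are synthetic placeholders with the dimensions quoted in the abstract:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(127, 583))      # 127 nodules x 583 radiomic features
y = rng.integers(0, 2, size=127)     # 1 = malignant, 0 = benign (synthetic)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print("mean cross-validated accuracy:", acc.mean())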
HEVC/H.265 is a cutting-edge topic in digital video compression, roughly halving the required bandwidth compared with the previous H.264 standard. Telemedicine services, and medical video applications in general, can benefit from these video encoding advances. However, HEVC is computationally expensive to implement. In this paper, a method for reducing HEVC complexity in the medical environment is proposed. The sequences typically processed in this context contain several homogeneous regions. By leveraging these regions, it is possible to simplify the HEVC encoding flow while maintaining high quality. In comparison with the HM16.2 reference software, the encoding time is reduced by up to 75% with negligible quality loss. Moreover, the algorithm is straightforward to implement on any hardware platform.
Computer Simulation of Low-dose CT with Clinical Lung Image Database: a preliminary study
Predictive capabilities of statistical learning methods for lung nodule malignancy classification using diagnostic image features: an investigation using the Lung Image Database Consortium dataset
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
LASSO
Understanding the key radiogenomic associations for breast cancer between DCE-MRI and micro-RNA expressions is the foundation for the discovery of radiomic features as biomarkers for assessing tumor progression and prognosis. We conducted a study to analyze the radiogenomic associations for breast cancer using the TCGA-TCIA data set. The core idea that tumor etiology is a function of the behavior of miRNAs is used to build the regression models. The associations based on regression are analyzed for three study outcomes: diagnosis, prognosis, and treatment. The diagnosis group consists of miRNAs associated with clinicopathologic features of breast cancer and significant aberration of expression in breast cancer patients. The prognosis group consists of miRNAs which are closely associated with tumor suppression and regulation of cell proliferation and differentiation. The treatment group consists of miRNAs that contribute significantly to the regulation of metastasis, thereby having the potential to be part of therapeutic mechanisms. As a first step, important miRNA expressions were identified and their ability to classify the clinical phenotypes based on the study outcomes was evaluated using the area under the ROC curve (AUC) as a figure-of-merit. The key mappings between the selected miRNAs and radiomic features were determined using least absolute shrinkage and selection operator (LASSO) regression analysis within a two-loop leave-one-out cross-validation strategy. These key associations indicated a number of radiomic features from DCE-MRI to be potential biomarkers for the three study outcomes.
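A minimal sketch of the LASSO selection idea inside a leave-one-out loop, with synthetic stand-ins for the radiomic features and a single miRNA expression; the paper used a two-loop strategy and its own regularization choice, so the alpha below is an assumption:

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 100))    # 60 patients x 100 radiomic features (synthetic)
y = rng.normal(size=60)           # one miRNA expression per patient (synthetic)

selected = np.zeros(X.shape[1])
for train, test in LeaveOneOut().split(X):
    model = Lasso(alpha=0.1).fit(X[train], y[train])
    selected += model.coef_ != 0  # count how often each feature survives

print("features selected in >= 55/60 folds:", np.where(selected >= 55)[0])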
Radiogenomic analysis of hypoxia pathway reveals computerized MRI descriptors predictive of overall survival in Glioblastoma
Lung cancer is one of the most common causes of cancer death worldwide. Its survival rate is low, mainly due to late diagnosis. With the hardware advancements in computed tomography (CT) technology, it is now possible to capture high-resolution images of the lung region. However, CT must be complemented by efficient algorithms to detect lung cancer in its earlier stages from the acquired images. To this end, we propose a two-step algorithm for early detection of lung cancer. Given the CT image, we first extract a patch centered on the nodule and segment the lung nodule region. We propose to use the Otsu method followed by morphological operations for the segmentation. This step enables accurate segmentation owing to the data-driven threshold. Unlike other methods, we perform the segmentation without using the complete contour information of the nodule. In the second step, a deep convolutional neural network (CNN) classifies the nodule present in the segmented patch as malignant or benign. Accurate segmentation of even a tiny nodule followed by CNN-based classification enables the early detection of lung cancer. Experiments have been conducted using 6306 CT images from the LIDC-IDRI database. We achieved a test accuracy of 84.13%, with sensitivity and specificity of 91.69% and 73.16%, respectively, clearly outperforming state-of-the-art algorithms.
General purpose radiomics for multi-modal clinical research
In this paper we present an integrated software solution targeting clinical researchers for discovering relevant radiomic biomarkers, covering the entire value chain of clinical radiomics research. Its intention is to make this kind of research possible even for less experienced scientists. The solution provides means to create, collect, manage, and statistically analyze patient cohorts consisting of potentially multimodal 3D medical imaging data, associated volume of interest annotations, and radiomic features. Volumes of interest can be created by an extensive set of semi-automatic segmentation tools. Radiomic feature computation relies on the de facto standard library PyRadiomics and ensures comparability and reproducibility of carried-out studies. Tabular cohort studies containing the radiomics of the volumes of interest can be managed directly within the software solution. The integrated statistical analysis capabilities introduce an additional layer of abstraction, allowing non-experts to benefit from radiomics research as well. There are ready-to-use methods for clustering, uni- and multivariate statistics, and machine learning to be applied to the collected cohorts. They are validated in two case studies: first, on a subset of the publicly available NSCLC-Radiomics data collection containing pretreatment CT scans of 317 non-small cell lung cancer (NSCLC) patients, and second, on the Lung Image Database Consortium imaging study with diagnostic and lung cancer screening CT scans including 2,753 distinct lesions from 870 patients. Integrated software solutions with optimized workflows like the one presented, and further developments thereof, may play an important role in making precision medicine come to life in clinical environments.
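Since feature computation here relies on PyRadiomics, a minimal extraction call looks like the following sketch; the file paths are placeholders for a SimpleITK-readable image and mask:

from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()                 # all default feature classes
features = extractor.execute("patient01_ct.nrrd", "patient01_roi.nrrd")
for name, value in features.items():
    if not name.startswith("diagnostics"):    # skip provenance entries
        print(name, value)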
3D convolution neural networks for molecular subtype prediction in glioblastoma multiforme
Image processing algorithms based on deep learning techniques are being developed for a wide range of medical applications. Processed medical images are typically evaluated with the same kind of image similarity metrics used for natural scenes, disregarding the medical task for which the images are intended. We propose a computational framework to estimate the clinical performance of image processing algorithms using virtual clinical trials. The proposed framework may provide an alternative method for regulatory evaluation of non-linear image processing algorithms. To illustrate this application of virtual clinical trials, we evaluated three algorithms to compute synthetic mammograms from digital breast tomosynthesis (DBT) scans based on convolutional neural networks previously used for denoising low dose computed tomography scans. The inputs to the networks were one or more noisy DBT projections, and the networks were trained to minimize the difference between the output and the corresponding high dose mammogram. DBT and mammography images simulated with the Monte Carlo code MC-GPU using realistic breast phantoms were used for network training and validation. The denoising algorithms were tested in a virtual clinical trial by generating 3000 synthetic mammograms from the public VICTRE dataset of simulated DBT scans. The detectability of a calcification cluster and a spiculated mass present in the images was calculated using an ensemble of 30 computational channelized Hotelling observers. The signal detectability results, which took into account anatomic and image reader variability, showed that the visibility of the mass was not affected by the post-processing algorithm, but that the resulting slight blurring of the images severely impacted the visibility of the calcification cluster. The evaluation of the algorithms using the pixel-based metrics peak signal to noise ratio and structural similarity in image patches was not able to predict the reduction in performance in the detectability of calcifications. These two metrics are computed over the whole image and do not consider any particular task, and might not be adequate to estimate the diagnostic performance of the post-processed images.
Correlative hierarchical clustering-based low-rank dimensionality reduction of radiomics-driven phenotype in non-small cell lung cancer
Background: Lung cancer is one of the most common cancers in the United States and the most fatal, with 142,670 deaths in 2019. Accurately determining tumor response is critical to clinical treatment decisions, ultimately impacting patient survival. To better differentiate between non-small cell lung cancer (NSCLC) responders and non-responders to therapy, radiomic analysis is emerging as a promising approach to identify associated imaging features undetectable by the human eye. However, the plethora of variables extracted from an image may actually undermine the performance of computer-aided prognostic assessment, a phenomenon known as the curse of dimensionality. In the present study, we show that correlation-driven hierarchical clustering improves high-dimensional radiomics-based feature selection and dimensionality reduction, ultimately predicting overall survival in NSCLC patients. Methods: To select features from high-dimensional radiomics data, a correlation-incorporated hierarchical clustering algorithm automatically categorizes features into several groups. The truncation distance in the resulting dendrogram is used to control the categorization of the features, initiating low-rank dimensionality reduction in each cluster and providing descriptive features for Cox proportional hazards (CPH)-based survival analysis. Using a publicly available non-small cell lung cancer (NSCLC) radiogenomic dataset of 204 patients' CT images, 429 established radiomics features were extracted. Low-rank dimensionality reduction via principal component analysis (PCA) was employed (k=1, n<1) to find the representative components of each cluster of features and calculate cluster robustness using the relative weighted consistency metric. Results: Hierarchical clustering categorized radiomic features into several groups without prior initialization of cluster numbers, using the correlation distance metric to truncate the resulting dendrogram at different distances. The dimensionality was reduced from 429 to 67 features (for a truncation distance of 0.1). The robustness within the features in clusters varied from -1.12 to -30.02 for truncation distances of 0.1 to 1.8, respectively, indicating that robustness decreases with increasing truncation distance, when a smaller number of feature classes (i.e., clusters) is selected. The best multivariate CPH survival model had a C-statistic of 0.71 for a truncation distance of 0.1, outperforming conventional PCA approaches by 0.04, even when the same number of principal components was considered for feature dimensionality. Conclusions: The truncation distance of the correlative hierarchical clustering algorithm is directly associated with the robustness of the selected feature clusters and can effectively reduce feature dimensionality while improving outcome prediction.
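A minimal sketch of the clustering-plus-PCA pipeline described above, with synthetic data standing in for the 204-patient, 429-feature matrix; the correlation-distance definition and averaging linkage are assumptions for illustration:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(204, 429))                    # patients x radiomic features

corr = np.corrcoef(X.T)
dist = 1.0 - np.abs(corr)                          # correlation distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=0.1, criterion="distance")  # truncation distance 0.1

components = []                                    # one PC per feature cluster
for c in np.unique(labels):
    cols = np.where(labels == c)[0]
    components.append(PCA(n_components=1).fit_transform(X[:, cols]).ravel())
X_reduced = np.column_stack(components)
print(X_reduced.shape)                             # patients x n_clusters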
Transfer learning with multiple convolutional neural networks for soft tissue sarcoma MRI classification
Long short-term memory networks predict breast cancer recurrence in analysis of consecutive MRIs acquired during the course of neoadjuvant chemotherapy
The purpose of this study was to assess long short-term memory networks in the prediction of recurrence-free survival in breast cancer patients using features extracted from MRIs acquired during the course of neoadjuvant chemotherapy. In the I-SPY1 dataset, up to 4 MRI exams were available per patient, acquired at pre-treatment, early-treatment, inter-regimen, and pre-surgery time points. Breast cancers were automatically segmented and 8 features describing kinetic curve characteristics were extracted. We assessed the performance of long short-term memory networks in the prediction of recurrence-free survival status at 2 years and at 5 years post-surgery. For these predictions, we analyzed MRIs from women who had at least 2 (or 5) years of recurrence-free follow-up or experienced recurrence or death within that timeframe: 157 women and 73 women, respectively. One approach used features extracted from all available exams and the other used features extracted only from exams prior to the second cycle of neoadjuvant chemotherapy. The areas under the ROC curve in the prediction of recurrence-free survival status at 2 years post-surgery were 0.80, 95% confidence interval [0.68; 0.88], and 0.75 [0.62; 0.83] for networks trained with all 4 available exams and with only the 'early' exams, respectively. Hazard ratios at the lowest, median, and highest quartile cut-points were 6.29 [2.91; 13.62], 3.27 [1.77; 6.03], 1.65 [0.83; 3.27] and 2.56 [1.20; 5.48], 3.01 [1.61; 5.66], 2.30 [1.14; 4.67], respectively. Long short-term memory networks were able to predict recurrence-free survival in breast cancer patients, even when analyzing only MRIs acquired 'early on' during neoadjuvant treatment.
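A minimal PyTorch sketch (not the authors' architecture) of an LSTM over per-exam kinetic feature vectors; the dimensions follow the abstract (up to 4 exams, 8 features per exam), everything else is an assumption:

import torch
import torch.nn as nn

class RecurrenceLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, n_exams, n_features)
        _, (h, _) = self.lstm(x)                 # final hidden state per patient
        return torch.sigmoid(self.head(h[-1]))   # P(recurrence or death)

model = RecurrenceLSTM()
exams = torch.randn(16, 4, 8)                    # 16 patients, 4 MRI exams each
print(model(exams).shape)                        # (16, 1)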
Accurately identifying vertebral levels in large datasets
Elton, Daniel C.
Sandfort, Veit
Pickhardt, Perry J.
Summers, Ronald M.
2020Conference Proceedings, cited 0 times
CT Lymph Nodes
The vertebral levels of the spine provide a useful coordinate system when making measurements of plaque, muscle, fat, and bone mineral density. Correctly classifying vertebral levels with high accuracy is challenging due to the similar appearance of each vertebra, the curvature of the spine, and the possibility of anomalies such as fractured vertebrae, implants, lumbarization of the sacrum, and sacralization of L5. The goal of this work is to develop a system that can accurately and robustly identify the L1 level in large heterogeneous datasets. The first approach we study uses a 3D U-Net to segment the L1 vertebra directly, using the entire scan volume to provide context. We also tested models for two-class segmentation of L1 and T12 and three-class segmentation of L1, T12, and the rib attached to T12. By increasing the number of training examples to 249 scans using pseudo-segmentations from an in-house segmentation tool, we were able to achieve 98% accuracy in identifying the L1 vertebra, with an average error of 4.5 mm in the craniocaudal direction. We next developed an algorithm which performs iterative instance segmentation and classification of the entire spine with a 3D U-Net. We found the instance-based approach yielded better segmentations of nearly the entire spine, but had lower classification accuracy for L1.
Transferring CT image biomarkers from fibrosing idiopathic interstitial pneumonia to COVID-19 analysis
Fetita, Catalin
Rennotte, Simon
Latrasse, Marjorie
Tapu, Ruxandra
Maury, Mathilde
Mocanu, Bogdan
Nunes, Hilario
Brillet, Pierre-Yves
2021Conference Proceedings, cited 0 times
CT Images in COVID-19
Fibrosing idiopathic interstitial pneumonia (fIIP) is a subclass of interstitial lung diseases that leads to fibrosis in a continuous and irreversible process of lung function decay. Patients with fIIP require regular quantitative follow-up with CT, and several image biomarkers have already been proposed to grade pathology severity and attempt to predict evolution. Among them are the spatial extent of the diseased lung parenchyma and airway and vascular remodeling markers. COVID-19 (Cov-19) presents several similarities with fIIP and is moreover suspected to evolve to fIIP in 10-30% of severe cases. The main differences between Cov-19 and fIIP are the presence of peripheral ground-glass opacities and little or no fibrosis in the lung, as well as the absence of airway remodeling. This paper proposes a preliminary study investigating how existing image markers for fIIP, namely texture classification and vascular remodeling, may apply to Cov-19 phenotyping. In addition, since for some patients the fIIP/Cov-19 follow-up protocol requires CT acquisitions at both full inspiration and full expiration, this information could also be exploited to extract additional knowledge for each individual case. We hypothesize that taking the two respiratory phases into account to analyze breathing parameters through interpolation and registration might contribute to better phenotyping of the pathology. This preliminary study, conducted on a small number of patients (eight Cov-19 patients of different severity degrees, two fIIP patients, and one control), shows great potential for the selected CT image markers.
CNN-based CT denoising with an accurate image domain noise insertion technique
Convolutional neural network (CNN)-based CT denoising methods have attracted great interest for improving the image quality of low-dose CT (LDCT) images. However, CNNs require a large amount of paired data consisting of normal-dose CT (NDCT) and LDCT images, which are generally not available. In this work, we aim to synthesize paired data from NDCT images with an accurate image domain noise insertion technique and investigate its effect on the denoising performance of a CNN. Fan-beam CT images were reconstructed using extended cardiac-torso phantoms with Poisson noise added to the projection data to simulate NDCT and LDCT. We estimated local noise power spectra and a variance map from an NDCT image using information on photon statistics and reconstruction parameters. We then synthesized image domain noise by filtering white Gaussian noise with the local noise power spectrum and scaling it with the variance map. The CNN architecture was U-Net, and the loss function was a weighted sum of mean squared error, perceptual loss, and adversarial loss. The CNN was trained with NDCT and LDCT (CNN-Ideal) or NDCT and synthesized LDCT (CNN-Proposed). To evaluate denoising performance, we measured the root mean squared error (RMSE), structural similarity index (SSIM), noise power spectrum (NPS), and modulation transfer function (MTF). The MTF was estimated from the edge spread function of a circular object with 12 mm diameter and 60 HU contrast. Denoising results from CNN-Ideal and CNN-Proposed show no significant difference in any metric, providing high scores in RMSE and SSIM compared to NDCT and NPS shapes similar to that of NDCT.
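A minimal numpy sketch of the noise synthesis step: white Gaussian noise is shaped in the frequency domain by the square root of a noise power spectrum (here a flat, synthetic placeholder instead of a locally estimated NPS) and scaled pixel-wise by a variance map:

import numpy as np

rng = np.random.default_rng(0)
shape = (256, 256)
nps = np.ones(shape)                                   # placeholder NPS (flat)
var_map = np.linspace(1, 4, shape[0])[:, None] * np.ones(shape)

white = rng.normal(size=shape)
shaped = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(nps)).real
shaped /= shaped.std()                                 # unit variance before scaling
noise = shaped * np.sqrt(var_map)                      # local variance from the map

ndct = np.zeros(shape)                                 # stand-in for a real NDCT image
ldct_sim = ndct + noise                                # simulated low-dose image
print(noise[:3].var(axis=1).round(1))                  # row variances follow var_map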
Practical applications of machine learning in imaging trials
Hesterman, Jacob Y.
Greenblatt, Elliot
Novicki, Andrew
Ghayoor, Ali
Wellman, Tyler
Avants, Brian
2021Conference Proceedings, cited 0 times
NaF PROSTATE
Machine learning and deep learning are ubiquitous across a wide variety of scientific disciplines, including medical imaging. An overview of multiple application areas along the imaging chain where deep learning methods are utilized in discovery and clinical quantitative imaging trials is presented. Example application areas along the imaging chain include quality control, preprocessing, segmentation, and scoring. Within each area, one or more specific applications is demonstrated, such as automated structural brain MRI quality control assessment in a core lab environment, super-resolution MRI preprocessing for neurodegenerative disease quantification in translational clinical trials, and multimodal PET/CT tumor segmentation in prostate cancer trials. The quantitative output of these algorithms is described, including their impact on decision making and relationship to traditional read-based methods. Development and deployment of these techniques for use in quantitative imaging trials presents unique challenges. The interplay between technical and scientific domain knowledge required for algorithm development is highlighted. The infrastructure surrounding algorithm deployment is critical, given regulatory, method robustness, computational, and performance considerations. The sensitivity of a given technique to these considerations and thus complexity of deployment is task- and phase-dependent. Context is provided for the infrastructure surrounding these methods, including common strategies for data flow, storage, access, and dissemination as well as application-specific considerations for individual use cases.
Neural image compression for non-small cell lung cancer subtype classification in H&E stained whole-slide images
Aswolinskiy, Witali
Tellez, David
Raya, Gabriel
van der Woude, Lieke
Looijen-Salamon, Monika
van der Laak, Jeroen
Grunberg, Katrien
Ciompi, Francesco
2021Conference Proceedings, cited 0 times
Pathomics
Convolutional Neural Network (CNN)
TCGA-LUAD
CPTAC-LUAD
TCGA-LUSC
Snake-based interactive tooth segmentation for 3D mandibular meshes
Mandibular meshes segmented from computerized tomography (CT) images contain rich information about dentition conditions; this information impairs the performance of shape completion algorithms relying on such data, but can benefit virtual planning for oral reconstructive surgeries. To locate the alveolar process and remove the dentition area, we propose a semiautomatic method using non-rigid registration, an active contour model, and constructive solid geometry (CSG) operations. An easy-to-use interactive tool is developed allowing users to adjust the tooth crown contour position. A validation study and a comparison study were conducted for method evaluation. In the validation study, we removed teeth from 28 models acquired from Vancouver General Hospital (VGH) and ran a shape completion test. In terms of the 95th percentile Hausdorff distance (HD95), using edentulous models produced significantly better predictions of the premorbid shapes of diseased mandibles than using models with inconsistent dentition conditions (Z = −2.484, p = 0.01). The volumetric Dice score (DSC) showed no significant difference. In the second study, we compared the proposed method to manual removal in terms of manual processing time, symmetric HD95, and symmetric root mean square deviation (RMSD). The results indicate that our method reduced manual processing time by 40% on average and approached the accuracy of manual tooth segmentation, which is promising enough to warrant further efforts toward clinical usage. This work forms the basis of a useful tool for coupling jaw reconstruction and restorative dentition in patient treatment planning.
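For reference, the HD95 metric used here can be sketched as a simple nearest-neighbour computation over two surface point sets (an illustrative implementation, not the authors' code):

import numpy as np
from scipy.spatial import cKDTree

def hd95(points_a, points_b):
    d_ab = cKDTree(points_b).query(points_a)[0]   # A -> B surface distances
    d_ba = cKDTree(points_a).query(points_b)[0]   # B -> A surface distances
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

a = np.random.default_rng(0).normal(size=(500, 3))
b = a + 0.05                                      # slightly shifted copy
print(hd95(a, b))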
Classification of COVID-19 in chest radiographs: assessing the impact of imaging parameters using clinical and simulated images
As computer-aided diagnostics develop to address new challenges in medical imaging, including emerging diseases such as COVID-19, initial development is hampered by the availability of imaging data. Deep learning algorithms are particularly notorious for performance that tends to improve in proportion to the amount of available data. Simulated images, as available through advanced virtual trials, may present an alternative in data-constrained applications. We begin with our previously trained COVID-19 x-ray classification model (denoted CVX) that leveraged additional training with existing pre-pandemic chest radiographs to improve classification performance on a set of COVID-19 chest radiographs. The CVX model achieves demonstrably better performance on clinical images compared to an equivalent model that applies standard transfer learning from ImageNet weights. The higher-performing CVX model is then shown to generalize effectively to a set of simulated COVID-19 images, both in quantitative comparisons of AUCs between clinical and simulated image sets and in a qualitative sense, where saliency map patterns are consistent across sets. We then stratify the classification results in simulated images to examine dependencies on imaging parameters when patient features are held constant. Simulated images show promise in optimizing imaging parameters for accurate classification in data-constrained applications.
A comparison study of deep learning designs for improving low-dose CT denoising
Wang, Vincent
Wei, Alice
Tan, Jiaxing
Lu, Siming
Cao, Weiguo
Gao, Yongfeng
2021Conference Proceedings, cited 0 times
LDCT-and-Projection-data
Low-dose denoising is an effective method that utilizes the power of CT for screening while avoiding high radiation exposure. Several studies have reported the feasibility of deep learning-based denoising, but none have explored the influence of different network designs on denoising performance. In this work, we explored the impact of three commonly adopted network design concepts in denoising: (1) network structure, (2) residual learning, and (3) training loss. We evaluated network performance using a dataset containing 76 real patient scans from the Mayo Clinic low-dose CT dataset. Experimental results demonstrated that residual blocks and residual learning are recommended in the design, while pooling is not. In addition, among the classical training losses, the mean absolute error (L1) loss outperforms the mean squared error (MSE) loss.
Centerline detection and estimation of pancreatic duct from abdominal CT images
Purpose: The aim of this work is to automatically detect and estimate the centerline of the pancreatic duct accurately. The proposed method uses four different algorithms to track the pancreatic duct, one for each of four pancreatic zone types. Method: The pancreatic duct was divided into 4 zones: Zone A has a clearly delineated pancreatic duct, Zone B is obscured, Zone C runs from the last visible segment to the pancreas tail, and Zone D extends from the head of the pancreas to the first visible point. The pancreatic duct is obscured in regions of lengths from 10-40 mm. The proposed method combines a deep learning CNN for duct segmentation with Dijkstra's routing algorithm for centerline estimation in Zones A and B. In Zones C and D, the centerline was estimated using geometric information. The reference standard for the pancreatic duct was determined from non-obscured data by skilled technologists. Results: Zone A, which used the neural network method, had a success rate of 94%. In Zone B, the difference was <3 mm when the obscured interval was 10-40 mm. In Zones C and D, the distances between the computer-estimated pancreas head and tail points and the operator-determined anatomical points were 10 mm and 19 mm, respectively. Optimal characteristic cost functions for each zone allow the natural centerline to be estimated even in obscured regions. The new algorithms increased the average visible centerline length by 146% with a calculation time of <40 seconds.
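A minimal sketch of the shortest-path idea behind the Zone A/B step: scikit-image's route_through_array finds a minimum-cost path (Dijkstra-style) through a cost image. The cost image and endpoints below are toy placeholders, not the paper's zone-specific cost functions:

import numpy as np
from skimage.graph import route_through_array

cost = np.full((64, 64), 10.0)       # expensive background
cost[32, :] = 1.0                    # a cheap horizontal "duct" corridor
path, total_cost = route_through_array(
    cost, start=(32, 0), end=(32, 63), fully_connected=True)
print(len(path), total_cost)         # the path follows the cheap corridor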
Method for compressing DICOM images with bit-normalization and video CODECs
The constant increase in the volume of data generated by various medical modalities has raised concerns regarding the space needed for storage. Although storage and network bandwidth costs are decreasing, medical data production grows faster, thus forcing an increase in spending. Image compression and decompression techniques can address this challenge while preserving all clinically relevant information. This research evaluates a lossy method that combines an adaptive normalization for each DICOM slice with volume compression using a video CODEC. Similarity metrics show that the best result in these tests was achieved by the method combining the normalization function and H.264 with parameters FPS 60 and bitrate 120, with images in PNG format, where SSIM and CC reached the maximum value (1.00), PSNR was 77.02, and the compression ratio (CR) was 5.46, twice the CR of JPEG-LS and J2K.
Multi-institutional evaluation of a deep learning model for fully automated detection of aortic aneurysms in contrast and non-contrast CT
Xie, Yiting
Graf, Benedikt
Farzam, Parisa
Baker, Brian
Lamoureux, Christine
Sitek, Arkadiusz
2022Conference Proceedings, cited 0 times
CPTAC-PDA
TCGA-BLCA
TCGA-STAD
We developed and validated a research-only deep learning (DL) based automatic algorithm to detect thoracic and abdominal aortic aneurysms on contrast and non-contrast CT images and compared its performance with assessments obtained from retrospective radiology reports. The DL algorithm was developed using 556 CT scans. Manual annotations of aorta centerlines and cross-sectional aorta boundaries were created to train the algorithm. Aorta segmentation and aneurysm detection performances were evaluated on 2263 retrospective CT scans (154 thoracic and 176 abdominal aneurysms). Evaluation was performed by comparing the automatically detected aneurysm status to the aneurysm status reported in the radiology reports and the AUC was reported. In addition, a quantitative evaluation was performed to compare the automatically measured aortic diameters to manual diameters on a subset of 59 CT scans. Pearson correlation coefficient was used. For aneurysm detection, the AUC was 0.95 for thoracic aneurysm detection (95% confidence region [0.93, 0.97]) and 0.94 for abdominal aneurysm detection (95% confidence region [0.92, 0.96]). For aortic diameter measurement, the Pearson correlation coefficient was 0.973 (p<0.001).
4D radiomics in dynamic contrast-enhanced MRI: prediction of pathological complete response and systemic recurrence in triple-negative breast cancer
Caballo, Marco
Sanderink, Wendelien B. G.
Han, Luyi
Gao, Yuan
Athanasiou, Alexandra
Mann, Ritse M.
2022Conference Proceedings, cited 0 times
Duke-Breast-Cancer-MRI
We developed a four-dimensional (4D) radiomics approach for the analysis of breast cancer on dynamic contrast-enhanced (DCE) MRI scans. This approach quantifies 348 features related to kinetics, enhancement heterogeneity, and time-dependent textural variation in 4D (3D over time) from the tumors and the peritumoral regions, leveraging both spatial and temporal image information. The potential of these features was studied for two clinical applications: the prediction of pathological complete response (pCR) to neoadjuvant chemotherapy (NAC), and of systemic recurrence (SR) in triple-negative (TN) breast cancers. For this, 72 pretreatment images of TN cancers (19 achieving pCR, 14 recurrence events), retrieved from a publicly available dataset (The Cancer Imaging Archive, Duke-Breast-Cancer-MRI dataset), were used. For both clinical problems, radiomic features were extracted from each case and used to develop a machine learning logistic regression model for outcome prediction. The model was trained and validated in a supervised leave-one-out cross-validation fashion, with the input feature space reduced through statistical analysis and forward selection to prevent overfitting. The model was tested using the area under the receiver operating characteristic (ROC) curve (AUC), and statistical significance was assessed using the associated 95% confidence interval estimated through bootstrapping. The model achieved an AUC of 0.80 and 0.86, respectively, for pCR and SR prediction. Both AUC values were statistically significant (p<0.05, adjusted for repeated testing). In conclusion, the developed approach could quantify relevant imaging biomarkers from TN breast cancers in pretreatment DCE-MRI images. These biomarkers were promising in the prediction of pCR to NAC and of SR.
Iterative ComBat methods for harmonization of radiomic features
Horng, Hannah
Singh, Apurva
Yousefi, Bardia
Cohen, Eric A.
Haghighi, Babak
Katz, Sharyn
Noël, Peter B.
Shinohara, Russell T.
Kontos, Despina
2022Conference Proceedings, cited 0 times
NSCLC-Radiomics-Genomics
Background: ComBat is a promising harmonization method for radiomic features, but it cannot harmonize by multiple batch effects simultaneously and shows reduced performance in the setting of bimodal distributions and unknown clinical/batch variables. In this study, we develop and evaluate two iterative ComBat approaches (Nested and Nested+GMM ComBat) to address these limitations and improve radiomic feature harmonization performance. Methods: In Nested ComBat, radiomic features are sequentially harmonized by multiple batch effects, with the order determined by the permutation yielding the smallest number of features with statistically significant differences due to batch effects. In Nested+GMM ComBat, a Gaussian mixture model is used to identify, from the observed feature distributions, a scan grouping associated with a latent variable, which is added as a batch effect to Nested ComBat. These approaches were used to harmonize differences associated with contrast enhancement, spatial resolution due to reconstruction kernel, and manufacturer in radiomic datasets generated by using CaPTk and PyRadiomics to extract features from lung CT datasets (Lung3 and Radiogenomics). Differences due to batch effects in the original data and in data harmonized with standard ComBat, Nested ComBat, and Nested+GMM ComBat were assessed. Results: Nested ComBat exhibits similar or better performance compared to standard ComBat, likely due to bimodal feature distributions. Nested+GMM ComBat successfully harmonized features with bimodal distributions and in most cases showed superior harmonization performance compared to Nested and standard ComBat. Conclusions: Our findings show that Nested ComBat can harmonize by multiple batch effects and that Nested+GMM ComBat can improve the harmonization of bimodal features.
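A minimal sketch of the GMM step in Nested+GMM ComBat: a two-component Gaussian mixture is fit to a bimodal feature, and the component assignment serves as the extra latent batch variable (the ComBat harmonization itself is omitted; data are synthetic):

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
feature = np.concatenate([rng.normal(0, 1, 150),    # mode 1 of a bimodal feature
                          rng.normal(5, 1, 150)])   # mode 2

gmm = GaussianMixture(n_components=2, random_state=0)
latent_batch = gmm.fit_predict(feature.reshape(-1, 1))
print(np.bincount(latent_batch))   # sizes of the two inferred scan groupings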
3D residual convolutional neural network for low dose CT denoising
Zamyatin, Alexander A.
Yu, Leiming
Rozas, David
2022Conference Proceedings, cited 0 times
LDCT-and-Projection-data
CT continues to be one of the most widely used medical imaging modalities. Concerns about the long-term effects of x-ray radiation on patients have led to efforts to reduce the x-ray dose imparted during CT exams. Lowering CT dose results in a lower signal-to-noise ratio in CT data, which lowers CT image quality (IQ). Deep learning algorithms have shown competitive denoising results against state-of-the-art image-based denoising approaches. Among these, deep residual networks have demonstrated effectiveness for edge-preserving noise reduction and imaging performance improvement compared to traditional edge-preserving filters. The previously published Residual Encoder-Decoder Convolutional Neural Network (RED-CNN) showed significant achievements in noise suppression, structural preservation, and lesion detection. However, its 2D architecture makes it unsuitable for thin-slice and reformatted (sagittal, coronal) imaging. In this work, we present a novel 3D RED-CNN architecture, evaluate the effect of model parameters on performance and IQ, and show steps to improve optimization convergence. We use standard imaging metrics (SSIM, PSNR) to assess imaging performance and compare to previously published algorithms. Compared to 2D RED-CNN, our proposed 3D RED-CNN produces higher quality 3D results, as shown by reformatted (sagittal, coronal) views, while maintaining all advantages of the original RED-CNN in axial imaging.
Sparse capsule networks for informative representation learning in digital pathology
McNeil, Matthew
Anil, Cem
Martel, Anne
2022Conference Proceedings, cited 0 times
Post-NAT-BRCA
Digital pathology involves the digitization of high quality tissue biopsies on microscope slides to be used by physicians for patient diagnosis and prognosis. These slides have become exciting avenues for deep learning applications to improve care. Despite this, labels are difficult to produce and thus remain rare. In this work, we create a sparse capsule network with a spatial broadcast decoder to perform representation learning on segmented nuclei patches extracted from the BreastPathQ dataset. This produced a disentangled latent space for factors such as rotation, and logistic regression classifiers trained on the latent space performed well.
King Abdullah International Medical Research Center (KAIMRC)’s breast cancer big images data set
Almazroa, Ahmed A.
Bin Saleem, Ghaida
Alotaibi, Aljoharah
Almasloukh, Mudhi
Al Otaibi, Um Klthoum
Al Balawi, Wejdan
Alabdulmajeed, Ghufran
Alamri, Suhailah
Alsomaie, Barrak
Fahim, Mohammed
Alluhaydan, Najd
Almatar, Hessa
Park, Brian J.
Deserno, Thomas M.
2022Conference Paper, cited 0 times
BREAST-DIAGNOSIS
CBIS-DDSM
ACRIN 6698
ACRIN 6698/I-SPY2 Breast DWI
BMMR2 Challenge
BCS-DBT
BREAST
The purpose of this project is to prepare an image data set for developing AI systems that serve breast cancer screening and diagnosis research. Early detection can have a positive impact on decreasing mortality, as it offers more options for successful intervention and for therapies that reduce the chance of malignant and metastatic progression. Six students, one research technologist, and one consultant in radiology collected the images and the patients' information. The images were extracted from three imaging modalities: Hologic 3D mammography systems, Philips and SuperSonic ultrasound machines, and GE and Philips MRI machines. The cases were graded by a trained radiologist. A total of 3085 DICOM format images were collected for the period 2008-2020 from 890 female patients aged 18 to 85. The largest portion of the data is dedicated to mammograms (51.3%), followed by ultrasound (31.7%) and MRI exams (17%). There were 593 malignant cases and 2492 benign cases. Diagnoses were confirmed by biopsy after the mammogram and ultrasound exams. Data will continue to be collected in the future to serve the artificial intelligence research field and the public health community. Updated information about the data will be available at: https://kaimrc.med.sa/?page_id=11767072
Quality or quantity: toward a unified approach for multi-organ segmentation in body CT
Tushar, Fakrul Islam
Nujaim, Husam
Fu, Wanyi
Abadi, Ehsan
Mazurowski, Maciej A.
Segars, William P.
Samei, Ehsan
Lo, Joseph Y.
2022Conference Proceedings, cited 0 times
CT-ORG
Organ segmentation of medical images is a key step in virtual imaging trials. However, organ segmentation datasets are limited in terms of quality (because labels cover only a few organs) and quantity (since case numbers are limited). In this study, we explored the tradeoffs between quality and quantity. Our goal is to create a unified approach for multi-organ segmentation of body CT, which will facilitate the creation of large numbers of accurate virtual phantoms. Initially, we compared two segmentation architectures, 3D U-Net and DenseVNet, which were trained using XCAT data fully labeled with 22 organs, and chose the 3D U-Net as the better performing model. We used the XCAT-trained model to generate pseudo-labels for the CT-ORG dataset, which has only 7 organs segmented. We performed two experiments: first, we trained a 3D U-Net model on the XCAT dataset, representing quality data, and tested it on both the XCAT and CT-ORG datasets; second, we trained the 3D U-Net after including the CT-ORG dataset in the training set to have more quantity. Performance improved for segmentation of the organs with true labels in both datasets and degraded when relying on pseudo-labels. When organs were labeled in both datasets, Exp-2 improved average DSC in XCAT and CT-ORG by 1. This demonstrates that quality data is the key to improving the model's performance.
Radiomic texture feature descriptor to distinguish recurrent brain tumor from radiation necrosis using multimodal MRI
Sadique, M. S.
Temtam, A.
Lappinen, E.
Iftekharuddin, K. M.
2022Conference Proceedings, cited 0 times
ACRIN-DSC-MR-Brain
IvyGAP
TCGA-GBM
Despite multimodal aggressive treatment with chemo-radiation-therapy and surgical resection, Glioblastoma Multiforme (GBM) may recur, which is known as recurrent brain tumor (rBT). There are several instances where benign and malignant pathologies appear very similar on radiographic imaging. One such illustration is radiation necrosis (RN), a moderately benign effect of radiation treatment, which is visually almost indistinguishable from rBT on structural magnetic resonance imaging (MRI). There is hence a need to identify reliable non-invasive quantitative measurements on routinely acquired brain MRI scans: pre-contrast T1-weighted (T1), post-contrast T1-weighted (T1Gd), T2-weighted (T2), and T2 Fluid Attenuated Inversion Recovery (FLAIR), that can accurately distinguish rBT from RN. In this work, sophisticated radiomic texture features are used to distinguish rBT from RN on multimodal MRI for disease characterization. First, stochastic multiresolution radiomic descriptors that capture voxel-level textural and structural heterogeneity, as well as intensity and histogram features, are extracted. Subsequently, these features are used in a machine learning setting to characterize rBT versus RN from the four MRI sequences, with 155 imaging slices for 30 GBM cases (12 RN, 18 rBT). To reduce bias in accuracy estimation, our model is implemented using leave-one-out cross-validation (LOOCV) and stratified 5-fold cross-validation with a Random Forest classifier. Our model offers mean accuracy of 0.967 ± 0.180 for LOOCV and 0.933 ± 0.082 for stratified 5-fold cross-validation using multiresolution texture features for discrimination of rBT from RN in this study. Our findings suggest that sophisticated texture features may offer better discrimination between rBT and RN in MRI compared to other works in the literature.
Deep variational clustering framework for self-labeling large-scale medical images
One of the most promising approaches for unsupervised learning is combining deep representation learning and deep clustering. Recent studies propose to simultaneously learn representations using deep neural networks and perform clustering by defining a clustering loss on top of embedded features. Unsupervised image clustering naturally requires good feature representations to capture the distribution of the data and subsequently differentiate data points from one another. Among existing deep learning models, the generative variational autoencoder explicitly learns the data-generating distribution in a latent space. We propose a Deep Variational Clustering (DVC) framework for unsupervised representation learning and clustering of large-scale medical images. DVC simultaneously learns the multivariate Gaussian posterior through the probabilistic convolutional encoder and the likelihood distribution with the probabilistic convolutional decoder, while optimizing cluster label assignment. Here, the learned multivariate Gaussian posterior captures the latent distribution of a large set of unlabeled images. Then, we perform unsupervised clustering on top of the variational latent space using a clustering loss. In this approach, the probabilistic decoder helps prevent the distortion of data points in the latent space and preserves the local structure of the data-generating distribution. The training can be considered a self-training process that refines the latent space while iteratively optimizing cluster assignments. We evaluated our proposed framework on three public datasets representing different medical imaging modalities. Our experimental results show that our proposed framework generalizes better across different datasets and achieves compelling results on several medical imaging benchmarks. Thus, our approach offers potential advantages over conventional deep unsupervised learning in real-world applications. The source code of the method and of all the experiments is available publicly at: https://github.com/csfarzin/DVC
Lesion detection in digital breast tomosynthesis: method, experiences and results of participating to the DBTex challenge
The paper presents a framework for the detection of mass-like lesions in 3D digital breast tomosynthesis. It consists of several steps, including pre- and post-processing, and a main detection block based on a Faster R-CNN deep learning network. In addition to the framework, the paper describes different training steps to achieve better performance, including transfer learning using both mammographic and DBT data. The presented approach obtained third place in the recent DBT lesion detection challenge, DBTex, and was the top-performing approach that did not use an ensemble-based method.
Abdominal CT pancreas segmentation using multi-scale convolution with aggregated transformations
Yang, Jin
Marcus, Daniel S.
Sotiras, Aristeidis
Iftekharuddin, Khan M.
Chen, Weijie
2023Conference Paper, cited 0 times
Pancreas-CT
Algorithm Development
Segmentation
Convolutional neural networks (CNNs) are a popular choice for medical image segmentation. However, they may be challenged by the large inter-subject variation in organ shapes and sizes due to CNNs typically employing convolutions with fixed-sized local receptive fields. To address this limitation, we proposed multi-scale aggregated residual convolution (MARC) and iterative multi-scale aggregated residual convolution (iMARC) to capture finer and richer features at various scales. Our goal is to improve single convolutions’ representation capabilities. This is achieved by employing convolutions with varying-sized receptive fields, combining multiple convolutions into a deeper one, and dividing single convolutions into a set of channel-independent sub-convolutions. These implementations result in an increase in their depth, width, and cardinality. The proposed MARC and iMARC can be easily integrated into general CNN architectures and trained end-to-end. To evaluate the improvements of MARC and iMARC on CNNs’ segmentation capabilities, we integrated MARC and iMARC into a standard 2D U-Net architecture for pancreas segmentation on abdominal computed tomography (CT) images. The results showed that our proposed MARC and iMARC enhanced the representation capabilities of single convolutions, resulting in improved segmentation performance with lower computational complexity.
Prostate Gleason score prediction via MRI using capsule network
Li, Yuheng
Wang, Jing
Hu, Mingzhe
Patel, Pretesh
Mao, Hui
Liu, Tian
Yang, Xiaofeng
Iftekharuddin, Khan M.
Chen, Weijie
2023Conference Paper, cited 0 times
Prostate-MRI-US-Biopsy
Computer Aided Diagnosis (CADx)
Magnetic Resonance Imaging (MRI)
Classification
Convolutional Neural Network (CNN)
PROSTATE
Magnetic resonance imaging (MRI) is a non-invasive modality for diagnosing prostate carcinoma (PCa), and deep learning has gained increasing interest for MR image analysis. We propose a novel 3D Capsule Network to perform low-grade vs. high-grade PCa classification. The proposed network uses Efficient CapsNet as its backbone and consists of three main components: 3D convolutional blocks, depthwise separable 3D convolution, and self-attention routing. The network employs convolutional blocks to extract high-level features, which form primary capsules via depthwise separable convolution operations. A self-attention mechanism routes primary capsules to higher-level capsules, and finally a PCa grade is assigned. The proposed 3D Capsule Network was trained and tested using a public dataset of 529 patients diagnosed with PCa. A baseline 3D CNN method was also tested for comparison. Our Capsule Network achieved 85% accuracy and 0.87 AUC, while the baseline CNN achieved 80% accuracy and 0.84 AUC. The superior performance of the Capsule Network demonstrates its feasibility for PCa grade classification from prostate MRI and shows its potential for assisting clinical decision-making.
A comparison of U-Net series for CT pancreas segmentation
Zheng, Linya
Li, Ji
Zhang, Fan
Shi, Hong
Chen, Yinran
Luo, Xiongbiao
2023Conference Proceedings, cited 0 times
Pancreas-CT
The large variation in pancreas shape and location and the complex background of many neighboring tissues make pancreas segmentation difficult, hindering the early detection and diagnosis of pancreatic diseases. The U-Net family has achieved great success in various medical image processing tasks such as segmentation and classification. This work comparatively evaluates 2D U-Net, 2D U-Net++, and 2D U-Net3+ for CT pancreas segmentation. We also modify the U-Net series by replacing standard convolutions with depthwise separable convolutions (DWC). Without DWC, U-Net3+ works better than the other two networks and achieves an average Dice similarity coefficient of 0.7555. Interestingly, we find that U-Net plus a simple DWC module works better than U-Net++ with its redesigned dense skip connections and U-Net3+ with its full-scale skip connections and deep supervision, obtaining an average Dice similarity coefficient of 0.7613. Moreover, adding DWC to the U-Net series significantly reduces the number of training parameters from (39.4M, 47.2M, 27.0M) to (14.3M, 18.4M, 3.15M), respectively, while also improving Dice similarity compared to standard convolution.
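A minimal PyTorch sketch of the depthwise separable convolution (DWC) that replaces standard convolution here: a per-channel spatial convolution followed by a 1x1 pointwise convolution, shown in 2D to match the 2D networks compared above (channel sizes are illustrative):

import torch
import torch.nn as nn

class DepthwiseSeparableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # groups=in_ch makes the spatial convolution act per channel
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 64, 128, 128)
print(DepthwiseSeparableConv2d(64, 128)(x).shape)   # (1, 128, 128, 128)

The parameter saving is the point: a standard 3x3 convolution needs in_ch * out_ch * 9 weights, while the DWC pair needs only in_ch * 9 + in_ch * out_ch, which matches the large reductions reported above.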
Unsupervised learning of healthy anatomy for anomaly detection in brain CT scans
Walluscheck, Sina
Canalini, Luca
Klein, Jan
Heldmann, Stefan
2023Conference Proceedings, cited 0 times
CPTAC-GBM
Automatic detection of abnormalities to assist radiologists in acute and screening scenarios has become a particular focus in medical imaging research. Various approaches have been proposed for the detection of anomalies in magnetic resonance (MR) data, but very little work has been done for computed tomography (CT). To our knowledge, there is no satisfactory approach for anomaly detection in brain CT images. We present a novel unsupervised deep learning approach that generates a normal (anomaly-free) representation of CT head scans, which we use to discriminate between healthy and abnormal images. In the first step, we train a GAN on 1000 healthy CT scans to generate normal head images. Subsequently, we attach an encoder to the generator and train the resulting autoencoder network to reconstruct healthy anatomy from new input images. The autoencoder is pre-trained with generated images using a perceptual loss function. When applied to abnormal scans, the reconstructed healthy output is then used to detect anomalies by computing the mean squared error between the input and output images. We evaluate our slice-wise anomaly detection on 250 test images including hemorrhages and tumors. Our approach achieves an area under the receiver operating characteristic curve (AUC) of 0.90 with 85.8% sensitivity and 85.5% precision, without requiring large training data sets or labeled anomaly data. Our method therefore discriminates between normal and abnormal CT scans with good accuracy.
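A minimal sketch of the scoring step described above; `reconstruct` is a hypothetical stand-in for the trained autoencoder, and the threshold is illustrative:

import numpy as np

def anomaly_scores(slices, reconstruct, threshold):
    # per-slice MSE between input and "healthy" reconstruction
    scores = np.array([np.mean((s - reconstruct(s)) ** 2) for s in slices])
    return scores, scores > threshold   # scores and abnormal flags

rng = np.random.default_rng(0)
slices = rng.normal(size=(10, 128, 128))
toy_reconstruct = lambda s: s * 0.9     # stand-in for the real autoencoder
print(anomaly_scores(slices, toy_reconstruct, threshold=0.005)[1])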
Decision region analysis to deconstruct the subgroup influence on AI/ML predictions
Burgon, Alexis
Petrick, Nicholas
Sahiner, Berkman
Pennello, Gene
Samala, Ravi K.
2023Conference Proceedings, cited 0 times
COVID-19-NY-SBU
MIDRC-RICORD-1C
Assessing the generalizability of deep learning algorithms based on the size and diversity of the training data is not trivial. This study uses the mapping of samples in the image data space to the decision regions in the prediction space to understand how different subgroups in the data impact the neural network learning process and affect model generalizability. Using vicinal distribution-based linear interpolation, a plane of the decision region space spanned by a random 'triplet' of three images can be constructed. Analyzing these decision regions for many random triplets can provide insight into the relationships between distinct subgroups. In this study, a contrastive self-supervised approach is used to develop a 'base' classification model trained on a large chest x-ray (CXR) dataset. The base model is fine-tuned on COVID-19 CXR data to predict image acquisition technology (computed radiography (CR) or digital radiography (DX)) and patient sex (male (M) or female (F)). Decision region analysis shows that the model's image acquisition technology decision space is dominated by CR, regardless of the acquisition technology of the base images. Similarly, the female class dominates the decision space. This study shows that decision region analysis has the potential to provide insights into subgroup diversity, sources of imbalance in the data, and model generalizability.
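A minimal sketch of constructing the interpolation plane for one triplet: points on the plane are convex (barycentric) combinations of the three images, and a classifier is queried at each point. The classifier here is a toy stand-in:

import numpy as np

def decision_region(img_a, img_b, img_c, predict, steps=10):
    preds = {}
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            a, b = i / steps, j / steps
            c = 1.0 - a - b                    # barycentric weights sum to 1
            x = a * img_a + b * img_b + c * img_c
            preds[(i, j)] = predict(x)         # class at this plane point
    return preds

rng = np.random.default_rng(0)
imgs = rng.normal(size=(3, 64, 64))
toy_predict = lambda x: int(x.mean() > 0)      # stand-in classifier
grid = decision_region(*imgs, toy_predict)
print(sum(grid.values()), "of", len(grid), "plane points in class 1")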
Anonymization and validation of 3-dimensional volumetric renderings of Computed Tomography (CT) data using commercially available T1W MRI-based algorithms
Patel, Rahil
Provenzano, Destie
Loew, Murray
2023Conference Proceedings, cited 0 times
CPTAC-LSCC
Previous studies have demonstrated that 3-dimensional (3D) volumetric renderings of magnetic resonance imaging (MRI) brain scan data can be used to identify patients using facial recognition algorithms. We have shown that facial features can be identified on SIM-CT (simulation computed tomography) images for radiation oncology and mapped to face images from a database. We now seek to determine whether CT images can be anonymized using anonymization software designed for T1W MRI data. Our study examines (1) the ability of off-the-shelf anonymization algorithms to anonymize CT data, and (2) the ability of facial recognition algorithms to then match the processed faces against a database of facial images. This study generated 3D renderings from open-source CT scans of two patients from The Cancer Imaging Archive (TCIA) database. Data were then anonymized using AFNI (deface, reface, 3Dskullstrip) and FSL (deface and BET). Anonymized data were compared to the original renderings and also passed through facial recognition algorithms (Face_compare, VGG-Face, Facenet, DLib, and SFace) using a publicly available face database (Labeled Faces in the Wild) to determine what matches could be found. Our study found that all modules were able to process CT data in addition to T1W and T2W data and that data were successfully anonymized by AFNI's 3Dskullstrip and FSL's BET: they did not match the control image across all facial recognition algorithms. Our study demonstrates the importance of continued vigilance for patient privacy in publicly shared datasets and the importance of evaluating anonymization methods for CT data.
Multi-modality GLCM image texture feature for segmentation and tissue classification
Andrade, Diego
Gifford, Howard C.
Das, Mini
2023Conference Proceedings, cited 0 times
Duke-Breast-Cancer-MRI
Humans and computer observer models often rely on feature analysis from a single imaging modality. We will examine the benefits of new features that assist in image classification and detection of malignancies in MRI and X-Ray tomographic images. While the image formation principles differ between these modalities, there are common features, like contrast, that are often employed by humans (radiologists) in each of them when making decisions. We will examine other features that may not be as well understood or explored, such as grey level co-occurrence matrix (GLCM) texture features. As preliminary data, we show here the utility of some of these features along with classification methods aided by Gaussian mixture models (GMM) and fuzzy C-Means dimensionality reduction. GLCM maps characterize the image texture and provide a numerical and spatial description of the texture signatures present in an image. We will present pathways for using these in tissue classification, segmentation, and the development of task-based assessments.
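A minimal sketch of GLCM texture extraction, using scikit-image as an assumed tooling choice (the abstract does not name an implementation):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Quantize a 2-D image patch to a small number of gray levels first;
# the GLCM grows with the number of levels.
patch = (np.random.rand(64, 64) * 32).astype(np.uint8)  # stand-in ROI

# Co-occurrence at distance 1 in four directions, symmetric and normalized.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=32, symmetric=True, normed=True)

# Haralick-style scalar features, averaged over the four directions.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```

Feature vectors of this kind can then feed the GMM or fuzzy C-Means stages mentioned in the abstract.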
A multi-view feature decomposition deep learning method for lung cancer histology classification
Gao, Heng
Wang, Minghui
Li, Haichun
Liu, Zhaodi
Liang, Wei
Li, Ao
2023Conference Proceedings, cited 0 times
NSCLC Radiogenomics
NSCLC-Radiomics
Accurate classification of squamous cell carcinoma (SCC) and adenocarcinoma (ADC) using computed tomography (CT) images is of great significance to guide treatment for patients with non-small cell lung cancer (NSCLC). Although existing deep learning methods have made promising progress in this area, they do not fully exploit tumor information to learn discriminative representations. In this study, we propose a multi-view feature decomposition deep learning method for lung cancer histology classification. Different from existing multi-view methods that directly fuse features extracted from different views, we propose a feature decomposition module (FDM) to decompose the features of axial, coronal and sagittal views into common and specific features through an attention mechanism. To constrain this feature decomposition, a feature similarity loss is introduced to encourage the common features obtained from different views to be similar to each other. Moreover, to assure the effectiveness of feature decomposition, we design a cross-reconstruction loss which requires each view to be reconstructed from its own specific features and the other views' common features. After this feature decomposition, comprehensive representations of tumors can be obtained by efficiently integrating common features to improve the classification performance. Experimental results demonstrate that our method outperforms other state-of-the-art methods.
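The two constraints can be sketched as PyTorch losses. This is an illustrative reading of the abstract: the shared `decoder` module, the view pairing, and the tensor shapes are assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def similarity_loss(commons):
    """Encourage the 'common' features from different views to agree.
    commons: list of [B, D] tensors, one per view (axial/coronal/sagittal)."""
    loss, n = 0.0, 0
    for i in range(len(commons)):
        for j in range(i + 1, len(commons)):
            loss = loss + F.mse_loss(commons[i], commons[j])
            n += 1
    return loss / n

def cross_reconstruction_loss(decoder, specifics, commons, views):
    """Reconstruct each view from its own specific feature plus another
    view's common feature; 'decoder' is a hypothetical shared module."""
    loss = 0.0
    for i, target in enumerate(views):
        j = (i + 1) % len(views)  # borrow a different view's common part
        recon = decoder(torch.cat([specifics[i], commons[j]], dim=1))
        loss = loss + F.mse_loss(recon, target)
    return loss / len(views)
```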
MC-Net: multi-scale Swin transformer and complementary self-attention fusion network for pancreas segmentation
The pancreas lies deep in the abdominal cavity, and its structure and relationships to adjacent organs are complex, which makes it difficult to delineate accurately. To solve the problem of automatic segmentation of pancreatic tissue in CT images, we apply the multi-scale design of convolutional neural networks to the Transformer, and propose a Multi-Scale Swin Transformer and Complementary Self-Attention Fusion Network for Pancreas Segmentation. Specifically, the multi-scale Swin Transformer module constructs different receptive fields through different window sizes to obtain multi-scale information, and the different features of the encoder and decoder are effectively fused through a complementary self-attention fusion module. In comparative experiments on the NIH-TCIA dataset, our method improves Dice, sensitivity, and IOU by 3.9%, 6.4%, and 5.3%, respectively, over the baseline, outperforming current state-of-the-art medical image segmentation methods.
Predicting glioma IDH mutation using multi-parametric MRI and fractal analysis
This study aims to investigate the effectiveness of applying fractal analysis to pre-operative MRI images for prediction of glioma IDH mutation status. IDH mutation has been shown to be associated with better prognosis and more therapeutic options for patients, so predicting it before surgery can provide useful information for planning the proper treatments. This study utilized the UCSF-PDGM dataset from The Cancer Imaging Archive. We used the modified box counting method to compute the fractal dimension (FD) of segmented tumor regions in pre- and post-contrast T1-weighted MRI. The results showed that the FD provided clear differentiation between tumor grades, with higher FD correlated to higher tumor grade. Additionally, FD demonstrated clear separation between IDH wildtype and IDH mutated tumors. Enhanced differentiation based on FD was observed with post-contrast T1-weighted images. Significant p-values from the Wilcoxon rank sum test validated the potential of using fractal analysis. The area under the ROC curve (AUC) for IDH mutation prediction reached 0.88 for both pre- and post-contrast T1-weighted images. In conclusion, this study shows fractal analysis is a promising technique for glioma IDH mutation prediction. Future work will include studies using more advanced MRI imaging contrasts as well as combinations of multi-parametric images.
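A minimal box-counting sketch of the FD estimate on a binary tumor mask, assuming 2-D slices; the paper's "modified" box counting may differ in detail.

```python
import numpy as np

def fractal_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Box-counting estimate of the fractal dimension of a non-empty
    binary 2-D tumor mask: count occupied boxes N(s) at each scale s
    and fit log N(s) ~ -FD * log s."""
    counts = []
    for s in box_sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((blocks.sum(axis=(1, 3)) > 0).sum())
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope  # FD is the negated slope of the log-log fit
```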
A full pipeline to analyze lung histopathology images
Borras Ferris, Lluis
Püttmann, Simon
Marini, Niccolò
Vatrano, Simona
Fragetta, Filippo
Caputo, Alessandro
Ciompi, Francesco
Atzori, Manfredo
Müller, Henning
Tomaszewski, John E.
Ward, Aaron D.
2024Conference Paper, cited 0 times
TCGA-LUSC
TCGA-LUAD
Whole Slide Imaging (WSI)
Cell segmentation
Classification
Pathomics
Self-supervised
Histopathology involves the analysis of tissue samples to diagnose several diseases, such as cancer. The analysis of tissue samples is a time-consuming procedure, performed manually by medical experts, namely pathologists. Computational pathology aims to develop automatic methods to analyze Whole Slide Images (WSI), which are digitized histopathology images, and has shown accurate performance in image analysis. Although the amount of available WSIs is increasing, the capacity of medical experts to manually analyze samples is not expanding proportionally. This paper presents a fully automatic pipeline to classify lung cancer WSIs, considering four classes: Small Cell Lung Cancer (SCLC), non-small cell lung cancer divided into LUng ADenocarcinoma (LUAD) and LUng Squamous cell Carcinoma (LUSC), and normal tissue. The pipeline includes a self-supervised algorithm for pre-training the model and Multiple Instance Learning (MIL) for WSI classification. The model is trained with 2,226 WSIs and obtains an AUC of 0.8558 ± 0.0051 and a weighted f1-score of 0.6537 ± 0.0237 for the 4-class classification on the test set. The capability of the model to generalize was evaluated by testing it on the public The Cancer Genome Atlas (TCGA) dataset on LUAD and LUSC classification. In this task, the model obtained an AUC of 0.9433 ± 0.0198 and a weighted f1-score of 0.7726 ± 0.0438.
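The MIL classification stage can be sketched with attention-based pooling over patch embeddings (a common MIL formulation in the style of Ilse et al.; the paper's exact architecture and dimensions are not specified here and are assumed):

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-based MIL pooling over the patch embeddings of one WSI;
    all dimensions are illustrative."""
    def __init__(self, dim=512, hidden=128, n_classes=4):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.head = nn.Linear(dim, n_classes)

    def forward(self, patches):                        # patches: [N, dim]
        a = torch.softmax(self.attn(patches), dim=0)   # weight per patch
        slide = (a * patches).sum(dim=0)               # slide-level embedding
        return self.head(slide), a

# bag = torch.randn(1000, 512)  # embeddings from a self-supervised encoder
# logits, attention = AttentionMIL()(bag)
```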
Evaluation of few-shot detection of head and neck anatomy in CT
The detection of anatomical structures in medical imaging data plays a crucial role as a preprocessing step for various downstream tasks. It, however, poses a significant challenge due to highly variable appearances and intensity values within medical imaging data. In addition, there is a scarcity of annotated datasets in medical imaging, due to high costs and the requirement for specialized knowledge. These limitations motivate researchers to develop automated and accurate few-shot object detection approaches. While there are general-purpose deep learning models available for detecting objects in natural images, the applicability of these models to medical imaging data remains uncertain and needs to be validated. To address this, we carry out an unbiased evaluation of the state-of-the-art few-shot object detection methods for detecting head and neck anatomy in CT images. In particular, we choose Query Adaptive Few-Shot Object Detection (QA-FewDet), Meta Faster R-CNN, and Few-Shot Object Detection with Fully Cross-Transformer (FCT) methods and apply each model to detect various anatomical structures using novel datasets containing only a few images, ranging from 1- to 30-shot, during the fine-tuning stage. Our experimental results, carried out under the same setting, demonstrate that few-shot object detection methods can accurately detect anatomical structures, showing promising potential for integration into the clinical workflow.
An augmented reality and high-speed optical tracking system for laparoscopic surgery
While minimally invasive laparoscopic surgery can help reduce blood loss, reduce hospital time, and shorten recovery time compared to open surgery, it has the disadvantages of limited field of view and difficulty in locating subsurface targets. Our proposed solution applies an augmented reality (AR) system to overlay pre-operative images, such as those from magnetic resonance imaging (MRI), onto the target organ in the user’s real-world environment. Our system can provide critical information regarding the location of subsurface lesions to guide surgical procedures in real time. An infrared motion tracking camera system was employed to obtain real-time position data of the patient and surgical instruments. To perform hologram registration, fiducial markers were used to track and map virtual coordinates to the real world. In this study, phantom models of each organ were constructed to test the reliability and accuracy of the AR-guided laparoscopic system. Root mean square error (RMSE) was used to evaluate the targeting accuracy of the laparoscopic interventional procedure. Our results demonstrated a registration error of 2.42 ± 0.79 mm and a procedural targeting error of 4.17 ± 1.63 mm using our AR-guided laparoscopic system that will be further refined for potential clinical procedures.
The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans
Armato III, Samuel G
McLennan, Geoffrey
Bidaut, Luc
McNitt-Gray, Michael F
Meyer, Charles R
Reeves, Anthony P
Zhao, Binsheng
Aberle, Denise R
Henschke, Claudia I
Hoffman, Eric A
Kazerooni, E. A.
MacMahon, H.
Van Beeke, E. J.
Yankelevitz, D.
Biancardi, A. M.
Bland, P. H.
Brown, M. S.
Engelmann, R. M.
Laderach, G. E.
Max, D.
Pais, R. C.
Qing, D. P.
Roberts, R. Y.
Smith, A. R.
Starkey, A.
Batrah, P.
Caligiuri, P.
Farooqi, A.
Gladish, G. W.
Jude, C. M.
Munden, R. F.
Petkovska, I.
Quint, L. E.
Schwartz, L. H.
Sundaram, B.
Dodd, L. E.
Fenimore, C.
Gur, D.
Petrick, N.
Freymann, J.
Kirby, J.
Hughes, B.
Casteele, A. V.
Gupte, S.
Sallamm, M.
Heath, M. D.
Kuhn, M. H.
Dharaiya, E.
Burns, R.
Fryd, D. S.
Salganicoff, M.
Anand, V.
Shreter, U.
Vastagh, S.
Croft, B. Y.
Medical Physics2011Journal Article, cited 546 times
Website
LIDC-IDRI
Computer Aided Diagnosis (CADx)
LUNG
PURPOSE: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. METHODS: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. RESULTS: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings. CONCLUSIONS: The LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.
Equating quantitative emphysema measurements on different CT image reconstructions
Bartel, Seth T
Bierhals, Andrew J
Pilgram, Thomas K
Hong, Cheng
Schechtman, Kenneth B
Conradi, Susan H
Gierada, David S
Medical Physics2011Journal Article, cited 15 times
Website
National Lung Screening Trial (NLST)
LUNG
LDCT
PURPOSE: To mathematically model the relationship between CT measurements of emphysema obtained from images reconstructed using different section thicknesses and kernels and to evaluate the accuracy of the models for converting measurements to those of a reference reconstruction. METHODS: CT raw data from the lung cancer screening examinations of 138 heavy smokers were reconstructed at 15 different combinations of section thickness and kernel. An emphysema index was quantified as the percentage of the lung with attenuation below -950 HU (EI950). Linear, quadratic, and power functions were used to model the relationship between EI950 values obtained with a reference 1 mm, medium smooth kernel reconstruction and values from each of the other 14 reconstructions. Preferred models were selected using the corrected Akaike information criterion (AICc), coefficients of determination (R2), and residuals (conversion errors), and cross-validated by a jackknife approach using the leave-one-out method. RESULTS: The preferred models were power functions, with model R2 values ranging from 0.949 to 0.998. The errors in converting EI950 measurements from other reconstructions to the 1 mm, medium smooth kernel reconstruction in leave-one-out testing were less than 3.0 index percentage points for all reconstructions, and less than 1.0 index percentage point for five reconstructions. Conversion errors were related in part to image noise, emphysema distribution, and attenuation histogram parameters. Conversion inaccuracy related to increased kernel sharpness tended to be reduced by increased section thickness. CONCLUSIONS: Image reconstruction-related differences in quantitative emphysema measurements were successfully modeled using power functions.
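A sketch of the emphysema index and the power-function conversion model described above; the paired measurements below are made-up stand-ins for the study's data, included only to make the fit runnable.

```python
import numpy as np
from scipy.optimize import curve_fit

def emphysema_index(lung_hu):
    """EI950: percentage of lung voxels below -950 HU."""
    return 100.0 * np.mean(lung_hu < -950)

# Power-function conversion between two reconstructions, y = a * x**b,
# fitted on paired EI950 measurements (illustrative values only).
power = lambda x, a, b: a * np.power(x, b)
x = np.array([2.0, 5.0, 10.0, 20.0, 35.0])   # EI950, sharper-kernel recon
y = np.array([1.1, 3.4, 7.8, 17.0, 31.5])    # EI950, reference recon
(a, b), _ = curve_fit(power, x, y, p0=(1.0, 1.0))
print(f"converted EI950 = {a:.3f} * x**{b:.3f}")
```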
A fully automatic extraction of magnetic resonance image features in glioblastoma patients
Zhang, Jing
Barboriak, Daniel P
Hobbs, Hasan
Mazurowski, Maciej A
Medical Physics2014Journal Article, cited 21 times
Website
TCGA-GBM
BRAIN
Glioblastoma Multiforme (GBM)
Algorithm Development
PURPOSE: Glioblastoma is the most common malignant brain tumor. It is characterized by low median survival time and high survival variability. Survival prognosis for glioblastoma is very important for optimized treatment planning. Imaging features observed in magnetic resonance (MR) images were shown to be a good predictor of survival. However, manual assessment of MR features is time-consuming and can be associated with a high inter-reader variability as well as inaccuracies in the assessment. In response to this limitation, the authors proposed and evaluated a computer algorithm that extracts important MR image features in a fully automatic manner. METHODS: The algorithm first automatically segmented the available volumes into a background region and four tumor regions. Then, it extracted ten features from the segmented MR imaging volumes, some of which were previously indicated as predictive of clinical outcomes. To evaluate the algorithm, the authors compared the extracted features for 73 glioblastoma patients to the reference standard established by manual segmentation of the tumors. RESULTS: The experiments showed that their algorithm was able to extract most of the image features with moderate to high accuracy. High correlation coefficients between the automatically extracted value and reference standard were observed for the tumor location, minor and major axis length as well as tumor volume. Moderately high correlation coefficients were also observed for proportion of enhancing tumor, proportion of necrosis, and thickness of enhancing margin. The correlation coefficients for all these features were statistically significant (p < 0.0001). CONCLUSIONS: The authors proposed and evaluated an algorithm that, given a set of MR volumes of a glioblastoma patient, is able to extract MR image features that correlate well with their reference standard. Future studies will evaluate how well the computer-extracted features predict survival.
Low-complexity atlas-based prostate segmentation by combining global, regional, and local metrics
Xie, Qiuliang
Ruan, Dan
Medical Physics2014Journal Article, cited 15 times
Website
QIN PROSTATE
PURPOSE: To improve the efficiency of atlas-based segmentation without compromising accuracy, and to demonstrate the validity of the proposed method on MRI-based prostate segmentation application. METHODS: Accurate and efficient automatic structure segmentation is an important task in medical image processing. Atlas-based methods, as the state-of-the-art, provide good segmentation at the cost of a large number of computationally intensive nonrigid registrations, for anatomical sites/structures that are subject to deformation. In this study, the authors propose to utilize a combination of global, regional, and local metrics to improve the accuracy yet significantly reduce the number of required nonrigid registrations. The authors first perform an affine registration to minimize the global mean squared error (gMSE) to coarsely align each atlas image to the target. Subsequently, a target-specific regional MSE (rMSE), demonstrated to be a good surrogate for dice similarity coefficient (DSC), is used to select a relevant subset from the training atlas. Only within this subset are nonrigid registrations performed between the training images and the target image, to minimize a weighted combination of gMSE and rMSE. Finally, structure labels are propagated from the selected training samples to the target via the estimated deformation fields, and label fusion is performed based on a weighted combination of rMSE and local MSE (lMSE) discrepancy, with proper total-variation-based spatial regularization. RESULTS: The proposed method was applied to a public database of 30 prostate MR images with expert-segmented structures. The authors' method, utilizing only eight nonrigid registrations, achieved a performance with a median/mean DSC of over 0.87/0.86, outperforming the state-of-the-art full-fledged atlas-based segmentation approach, whose median/mean DSC was 0.84/0.82 when applied to the same data set. CONCLUSIONS: The proposed method requires a fixed number of nonrigid registrations, independent of atlas size, providing desirable scalability especially important for a large or growing atlas. When applied to prostate segmentation, the method achieved better performance than the state-of-the-art atlas-based approaches, with significant improvement in computation efficiency. The proposed rationale of utilizing jointly global, regional, and local metrics, based on the information characteristic and surrogate behavior for registration and fusion subtasks, can be extended naturally to similarity metrics beyond MSE, such as correlation or mutual information types.
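The gMSE/rMSE ranking step can be sketched in a few lines of NumPy (an illustrative reading of the method; the registrations themselves are omitted):

```python
import numpy as np

def global_mse(atlas_img, target_img):
    """gMSE over the whole image, minimized during affine alignment."""
    return np.mean((atlas_img - target_img) ** 2)

def regional_mse(atlas_img, target_img, roi_mask):
    """rMSE restricted to a target-specific region of interest, used as a
    cheap surrogate for DSC when ranking affinely aligned atlases."""
    return np.mean((atlas_img[roi_mask] - target_img[roi_mask]) ** 2)

def select_atlas_subset(atlases, target, roi_mask, k=8):
    """Rank affinely registered atlas images by rMSE and keep the k best,
    so costly nonrigid registration runs only on this subset."""
    scores = [regional_mse(a, target, roi_mask) for a in atlases]
    return np.argsort(scores)[:k]
```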
Medical Physics2015Journal Article, cited 4 times
Website
Algorithm Development
Image registration
PROSTATE
Magnetic Resonance Imaging (MRI)
PURPOSE: T2-weighted magnetic resonance imaging (MRI) is commonly used for anatomical visualization in the pelvis area, such as the prostate, with high soft-tissue contrast. MRI can also provide functional information such as diffusion-weighted imaging (DWI) which depicts the molecular diffusion processes in biological tissues. The combination of anatomical and functional imaging techniques is widely used in oncology, e.g., for prostate cancer diagnosis and staging. However, acquisition-specific distortions as well as physiological motion lead to misalignments between T2 and DWI and consequently to a reduced diagnostic value. Image registration algorithms are commonly employed to correct for such misalignment. METHODS: The authors compare the performance of five state-of-the-art nonrigid image registration techniques for accurate image fusion of DWI with T2. RESULTS: Image data of 20 prostate patients with cancerous lesions or cysts were acquired. All registration algorithms were validated using intensity-based as well as landmark-based techniques. CONCLUSIONS: The authors' results show that the "fast elastic image registration" provides the most accurate results with a target registration error of 1.07 +/- 0.41 mm at minimum execution times of 11 +/- 1 s.
Effect of color visualization and display hardware on the visual assessment of pseudocolor medical images
Zabala-Travers, Silvina
Choi, Mina
Cheng, Wei-Chung
Badano, Aldo
Medical Physics2015Journal Article, cited 4 times
Website
Magnetic Resonance Imaging (MRI)
Image processing
PURPOSE: Even though the use of color in the interpretation of medical images has increased significantly in recent years, the ad hoc manner in which color is handled and the lack of standard approaches have been associated with suboptimal and inconsistent diagnostic decisions with a negative impact on patient treatment and prognosis. The purpose of this study is to determine if the choice of color scale and display device hardware affects the visual assessment of patterns that have the characteristics of functional medical images. METHODS: Perfusion magnetic resonance imaging (MRI) was the basis for designing and performing experiments. Synthetic images resembling brain dynamic-contrast enhanced MRI consisting of scaled mixtures of white, lumpy, and clustered backgrounds were used to assess the performance of a rainbow ("jet"), a heated black-body ("hot"), and a gray ("gray") color scale with display devices of different quality on the detection of small changes in color intensity. The authors used a two-alternative, forced-choice design where readers were presented with 600 pairs of images. Each pair consisted of two images of the same pattern flipped along the vertical axis with a small difference in intensity. Readers were asked to select the image with the highest intensity. Three differences in intensity were tested on four display devices: a medical-grade three-million-pixel display, a consumer-grade monitor, a tablet device, and a phone. RESULTS: The estimates of percent correct show that jet outperformed hot and gray in the high and low range of the color scales for all devices with a maximum difference in performance of 18% (confidence intervals: 6%, 30%). Performance with hot was different for high and low intensity, comparable to jet for the high range, and worse than gray for lower intensity values. Similar performance was seen between devices using jet and hot, while gray performance was better for handheld devices. Time of performance was shorter with jet. CONCLUSIONS: Our findings demonstrate that the choice of color scale and display hardware affects the visual comparative analysis of pseudocolor images. Follow-up studies in clinical settings are being considered to confirm the results with patient images.
Intratumor heterogeneity (ITH) profoundly affects therapeutic responses and clinical outcomes. However, the widespread methods for assessing ITH based on genomic sequencing or pathological slides, which rely on limited tissue samples, may lead to inaccuracies due to potential sampling biases. Using a newly established multicenter breast cancer radio-multiomic dataset (n = 1474) encompassing radiomic features extracted from dynamic contrast-enhanced magnetic resonance images, we formulated a noninvasive radiomics methodology to effectively investigate ITH. Imaging ITH (IITH) was associated with genomic and pathological ITH, predicting poor prognosis independently in breast cancer. Through multiomic analysis, we identified activated oncogenic pathways and metabolic dysregulation in high-IITH tumors. Integrated metabolomic and transcriptomic analyses highlighted ferroptosis as a vulnerability and potential therapeutic target of high-IITH tumors. Collectively, this work emphasizes the superiority of radiomics in capturing ITH. Furthermore, we provide insights into the biological basis of IITH and propose therapeutic targets for breast cancers with elevated IITH.
An anatomic transcriptional atlas of human glioblastoma
Glioblastoma is an aggressive brain tumor that carries a poor prognosis. The tumor's molecular and cellular landscapes are complex, and their relationships to histologic features routinely used for diagnosis are unclear. We present the Ivy Glioblastoma Atlas, an anatomically based transcriptional atlas of human glioblastoma that aligns individual histologic features with genomic alterations and gene expression patterns, thus assigning molecular information to the most important morphologic hallmarks of the tumor. The atlas and its clinical and genomic database are freely accessible online data resources that will serve as a valuable platform for future investigations of glioblastoma pathogenesis, diagnosis, and treatment.
Clinically relevant modeling of tumor growth and treatment response
Yankeelov, Thomas E
Atuegwu, Nkiruka
Hormuth, David
Weis, Jared A
Barnes, Stephanie L
Miga, Michael I
Rericha, Erin C
Quaranta, Vito
Science Translational Medicine2013Journal Article, cited 70 times
Website
Algorithm Development
Diffusion-weighted MRI
Dynamic Contrast-Enhanced (DCE)-MRI
Positron emission tomography (PET)
BREAST
Models
Current mathematical models of tumor growth are limited in their clinical application because they require input data that are nearly impossible to obtain with sufficient spatial resolution in patients even at a single time point--for example, extent of vascularization, immune infiltrate, ratio of tumor-to-normal cells, or extracellular matrix status. Here we propose the use of emerging, quantitative tumor imaging methods to initialize a new generation of predictive models. In the near future, these models could be able to forecast clinical outputs, such as overall response to treatment and time to progression, which will provide opportunities for guided intervention and improved patient care.
Magnetic resonance image features identify glioblastoma phenotypic subtypes with distinct molecular pathway activities
Itakura, Haruka
Achrol, Achal S
Mitchell, Lex A
Loya, Joshua J
Liu, Tiffany
Westbroek, Erick M
Feroze, Abdullah H
Rodriguez, Scott
Echegaray, Sebastian
Azad, Tej D
Science Translational Medicine2015Journal Article, cited 90 times
Website
TCGA-GBM
MRI
radiomic features
Improving breast cancer diagnostics with deep learning for MRI
Witowski, Jan
Heacock, Laura
Reig, Beatriu
Kang, Stella K
Lewin, Alana
Pysarenko, Kristine
Patel, Shalin
Samreen, Naziya
Rudnicki, Wojciech
Łuczyńska, Elżbieta
Science Translational Medicine2022Journal Article, cited 0 times
Website
Duke-Breast-Cancer-MRI
TCGA-BRCA
Deep Learning
breast cancer
Cancer Digital Slide Archive: an informatics resource to support integrated in silico analysis of TCGA pathology data
Gutman, David A
Cobb, Jake
Somanna, Dhananjaya
Park, Yuna
Wang, Fusheng
Kurc, Tahsin
Saltz, Joel H
Brat, Daniel J
Cooper, Lee AD
Kong, Jun
Journal of the American Medical Informatics Association2013Journal Article, cited 70 times
Website
TCGA-GBM
TCGA-BRCA
Digital pathology
Data integration
BACKGROUND: The integration and visualization of multimodal datasets is a common challenge in biomedical informatics. Several recent studies of The Cancer Genome Atlas (TCGA) data have illustrated important relationships between morphology observed in whole-slide images, outcome, and genetic events. The pairing of genomics and rich clinical descriptions with whole-slide imaging provided by TCGA presents a unique opportunity to perform these correlative studies. However, better tools are needed to integrate the vast and disparate data types. OBJECTIVE: To build an integrated web-based platform supporting whole-slide pathology image visualization and data integration. MATERIALS AND METHODS: All images and genomic data were directly obtained from the TCGA and National Cancer Institute (NCI) websites. RESULTS: The Cancer Digital Slide Archive (CDSA) produced is accessible to the public (http://cancer.digitalslidearchive.net) and currently hosts more than 20,000 whole-slide images from 22 cancer types. DISCUSSION: The capabilities of CDSA are demonstrated using TCGA datasets to integrate pathology imaging with associated clinical, genomic and MRI measurements in glioblastomas and can be extended to other tumor types. CDSA also allows URL-based sharing of whole-slide images, and has preliminary support for directly sharing regions of interest and other annotations. Images can also be selected on the basis of other metadata, such as mutational profile, patient age, and other relevant characteristics. CONCLUSIONS: With the increasing availability of whole-slide scanners, analysis of digitized pathology images will become increasingly important in linking morphologic observations with genomic and clinical endpoints.
Multiomics profiling reveals the benefits of gamma-delta (γδ) T lymphocytes for improving the tumor microenvironment, immunotherapy efficacy and prognosis in cervical cancer
Li, J.
Cao, Y.
Liu, Y.
Yu, L.
Zhang, Z.
Wang, X.
Bai, H.
Zhang, Y.
Liu, S.
Gao, M.
Lu, C.
Li, C.
Guan, Y.
Tao, Z.
Wu, Z.
Chen, J.
Yuan, Z.
J Immunother Cancer2024Journal Article, cited 0 times
Website
TCGA-CESC
Humans
*Uterine Cervical Neoplasms/genetics/therapy
Tumor Microenvironment
Multiomics
Immunotherapy
Prognosis
Biostatistics
Genital Neoplasms
Female
T-Lymphocytes
Radiogenomics
PyRadiomics
BACKGROUND: As an unconventional subpopulation of T lymphocytes, γδ T cells can recognize antigens independently of major histocompatibility complex restrictions. Recent studies have indicated that γδ T cells play contrasting roles in tumor microenvironments, promoting tumor progression in some cancers (e.g., gallbladder and leukemia) while suppressing it in others (e.g., lung and gastric). γδ T cells are mainly enriched in peripheral mucosal tissues. As the cervix is a mucosa-rich tissue, the role of γδ T cells in cervical cancer warrants further investigation. METHODS: We employed a multiomics strategy that integrated abundant data from single-cell and bulk transcriptome sequencing, whole exome sequencing, genotyping array, immunohistochemistry, and MRI. RESULTS: Heterogeneity was observed in the level of γδ T-cell infiltration in cervical cancer tissues, mainly associated with the tumor somatic mutational landscape. Indeed, γδ T cells play a beneficial role in the prognosis of patients with cervical cancer. First, γδ T cells exert direct cytotoxic effects in the tumor microenvironment of cervical cancer through the dynamic evolution of cellular states at both poles. Second, higher levels of γδ T-cell infiltration also shape a microenvironment of immune activation with cancer-suppressive properties. We found that these intricate features can be observed by MRI-based radiomics models to non-invasively assess γδ T-cell proportions in tumor tissues in patients. Importantly, patients with high infiltration levels of γδ T cells may be more amenable to immunotherapies, including immune checkpoint inhibitors and autologous tumor-infiltrating lymphocyte therapies, than to chemoradiotherapy. CONCLUSIONS: γδ T cells play a beneficial role in antitumor immunity in cervical cancer. The abundance of γδ T cells in cervical cancerous tissue is associated with higher response rates to immunotherapy.
Acute lymphoblastic leukemia classification using persistent homology
Acute Lymphoblastic Leukemia (ALL) is a prevalent form of childhood blood cancer characterized by the proliferation of immature white blood cells that rapidly replace normal cells in the bone marrow. The exponential growth of these leukemic cells can be fatal if not treated promptly. Classifying lymphoblasts and healthy cells poses a significant challenge, even for domain experts, due to their morphological similarities. Automated computer analysis of ALL can provide substantial support in this domain and potentially save numerous lives. In this paper, we propose a novel classification approach that involves analyzing shapes and extracting topological features of ALL cells. We employ persistent homology to capture these topological features. Our technique accurately and efficiently detects and classifies leukemia blast cells, achieving a recall of 98.2% and an F1-score of 94.6%. This approach has the potential to significantly enhance leukemia diagnosis and therapy.
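A minimal sketch using the open-source ripser.py package to turn cell-boundary point clouds into persistence-based feature vectors; the summary statistics and the downstream classifier below are illustrative choices, not the paper's pipeline.

```python
import numpy as np
from ripser import ripser                      # pip install ripser
from sklearn.ensemble import RandomForestClassifier

def topological_features(points, maxdim=1):
    """Persistence lifetimes of a cell-boundary point cloud (shape [N, 2]),
    summarized into a fixed-length vector of simple statistics per
    homology dimension (H0 components, H1 loops)."""
    dgms = ripser(points, maxdim=maxdim)['dgms']
    feats = []
    for dgm in dgms:
        finite = dgm[np.isfinite(dgm[:, 1])]
        life = finite[:, 1] - finite[:, 0]     # death minus birth
        feats += [life.sum(),
                  life.max(initial=0.0),
                  life.mean() if life.size else 0.0]
    return np.array(feats)

# Hypothetical usage: 'cell_contours' is a list of boundary point clouds.
# X = np.stack([topological_features(c) for c in cell_contours])
# RandomForestClassifier().fit(X, labels)   # classifier choice is illustrative
```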
MRI to MGMT: predicting methylation status in glioblastoma patients using convolutional recurrent neural networks
Glioblastoma Multiforme (GBM), a malignant brain tumor, is among the most lethal of all cancers. Temozolomide is the primary chemotherapy treatment for patients diagnosed with GBM. The methylation status of the promoter or the enhancer regions of the O6-methylguanine methyltransferase (MGMT) gene may impact the efficacy and sensitivity of temozolomide, and hence may affect overall patient survival. Microscopic genetic changes may manifest as macroscopic morphological changes in the brain tumors that can be detected using magnetic resonance imaging (MRI), which can serve as noninvasive biomarkers for determining methylation of MGMT regulatory regions. In this research, we use a compendium of brain MRI scans of GBM patients collected from The Cancer Imaging Archive (TCIA) combined with methylation data from The Cancer Genome Atlas (TCGA) to predict the methylation state of the MGMT regulatory regions in these patients. Our approach relies on a bi-directional convolutional recurrent neural network architecture (CRNN) that leverages the spatial aspects of these 3-dimensional MRI scans. Our CRNN obtains an accuracy of 67% on the validation data and 62% on the test data, with precision and recall both at 67%, suggesting the existence of MRI features that may complement existing markers for GBM patient stratification and prognosis. We have additionally presented our model via a novel neural network visualization platform, which we have developed to improve interpretability of deep learning MRI-based classification models.
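An illustrative PyTorch reading of a bi-directional convolutional recurrent architecture over MRI slices; the layer sizes, depth, and recurrent cell are assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class SliceCRNN(nn.Module):
    """A small CNN encodes each axial slice; a bi-directional GRU then
    aggregates along the through-plane axis to predict MGMT status."""
    def __init__(self, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())         # -> [*, 32]
        self.rnn = nn.GRU(32, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 2)                 # methylated vs not

    def forward(self, vol):                 # vol: [B, n_slices, 1, H, W]
        b, s = vol.shape[:2]
        feats = self.cnn(vol.flatten(0, 1)).view(b, s, -1)
        _, h = self.rnn(feats)              # h: [2, B, hidden]
        return self.fc(torch.cat([h[0], h[1]], dim=1))
```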
Tele-Operated MRI-Guided Needle Insertion for Prostate Interventions
Moreira, Pedro
Kuil, Leanne
Dias, Pedro
Borra, Ronald
Misra, Sarthak
2019Journal Article, cited 0 times
PROSTATE-DIAGNOSIS
Prostate cancer is one of the leading causes of death in men. Prostate interventions using magnetic resonance imaging (MRI) benefit from high tissue contrast compared to other imaging modalities. The Minimally Invasive Robotics In An MRI environment (MIRIAM) robot is an MRI-compatible system able to steer different types of needles towards a point of interest using MRI guidance. However, clinicians can be reluctant to give the robot total control of the intervention. This work integrates a haptic device in the MIRIAM system to allow input from the clinician during the insertion. A shared control architecture is achieved by letting the clinician control the insertion depth via the haptic device, while the robotic system controls the needle orientation. The clinician receives haptic feedback based on the insertion depth and tissue characteristics. Four control laws relating the motion of the master robot (haptic device) to the motion of the slave robot (MIRIAM robot) are presented and evaluated. Quantitative and qualitative results from 20 human subjects demonstrate that the squared-velocity control law is the most suitable option for our application. Additionally, a pre-operative target localization algorithm is presented in order to provide the robot with the target location. The target localization and reconstruction algorithms are validated in phantom and patient images with an average dice similarity coefficient (DSC) of 0.78. The complete system is validated through experiments by inserting a needle towards a target within the MRI scanner. Four human subjects performed the experiment, achieving an average targeting error of 3.4 mm.
Teleoperated and Automated Control of a Robotic Tool for Targeted Prostate Biopsy
Padasdao, Blayton
Lafreniere, Samuel
Rabiei, Mahsa
Batsaikhan, Zolboo
Konh, Bardia
2023Journal Article, cited 0 times
Prostate-MRI-US-Biopsy
This work presents a robotic tool with bidirectional manipulation and control capabilities for targeted prostate biopsy interventions. Targeted prostate biopsy is an effective image-guided technique that results in detection of significant cancer with fewer cores and a lower number of unnecessary biopsies compared to systematic biopsy. The robotic tool comprises a compliant flexure section fabricated on a nitinol tube that enables bidirectional bending via actuation of two internal tendons, and a biopsy mechanism for extraction of tissue samples. The kinematic and static models of the compliant flexure section, as well as teleoperated and automated control of the robotic tool, are presented and validated with experiments. It was shown that the controller can force the tip of the robotic tool to follow sinusoidal set-point positions with reasonable accuracy in air and inside a phantom tissue. Finally, the capability of the robotic tool to bend, reach targeted positions inside a phantom tissue, and extract a biopsy sample is evaluated.
Safer Motion Planning of Steerable Needles via a Shaft-to-Tissue Force Model
Bentley, Michael
Rucker, Caleb
Reddy, Chakravarthy
Salzman, Oren
Kuntz, Alan
2023Journal Article, cited 0 times
LCTSC
Steerable needles are capable of accurately targeting difficult-to-reach clinical sites in the body. By bending around sensitive anatomical structures, steerable needles have the potential to reduce the invasiveness of many medical procedures. However, inserting these needles with curved trajectories increases the risk of tissue damage due to perpendicular forces exerted on the surrounding tissue by the needle’s shaft, potentially resulting in lateral shearing through tissue. Such forces can cause significant tissue damage, negatively affecting patient outcomes. In this work, we derive a tissue and needle force model based on a Cosserat string formulation, which describes the normal forces and frictional forces along the shaft as a function of the planned needle path, friction model and parameters, and tip piercing force. We propose this new force model and associated cost function as a safer and more clinically relevant metric than those currently used in motion planning for steerable needles. We fit and validate our model through physical needle robot experiments in a gel phantom. We use this force model to define a bottleneck cost function for motion planning and evaluate it against the commonly used path-length cost function in hundreds of randomly generated three-dimensional (3D) environments. Plans generated with our force-based cost show a 62% reduction in the peak modeled tissue force with only a 0.07% increase in length on average compared to using the path-length cost in planning. Additionally, we demonstrate planning with our force-based cost function in a lung tumor biopsy scenario from a segmented computed tomography (CT) scan. By directly minimizing the modeled needle-to-tissue force, our method may reduce patient risk and improve medical outcomes from steerable needle interventions.
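A toy version of a force-based bottleneck cost over a planned path: the real model derives shaft forces from a Cosserat string formulation with friction, whereas this proxy simply scales local path curvature by a hypothetical tissue stiffness `k_tissue`.

```python
import numpy as np

def bottleneck_force_cost(path, k_tissue=1.0):
    """Approximate the lateral (normal) force per unit length as
    proportional to the local curvature of the planned needle path
    (path: [N, 3] waypoints), and score a plan by its peak value."""
    d1 = np.gradient(path, axis=0)
    d2 = np.gradient(d1, axis=0)
    speed = np.linalg.norm(d1, axis=1)
    curvature = np.linalg.norm(np.cross(d1, d2), axis=1) / np.maximum(speed ** 3, 1e-9)
    normal_force = k_tissue * curvature     # stiffness-scaled proxy
    return normal_force.max()               # bottleneck (peak) cost

# A planner would prefer the candidate path minimizing this peak force,
# rather than the shortest path.
```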
Arbitrary Scale Super-Resolution for Medical Images
Zhu, Jin
Tan, Chuan
Yang, Junwei
Yang, Guang
Lio', Pietro
2021Journal Article, cited 0 times
CT Images in COVID-19
Single image super-resolution (SISR) aims to obtain a high-resolution output from one low-resolution image. Currently, deep learning-based SISR approaches have been widely discussed in medical image processing, because of their potential to achieve high-quality, high spatial resolution images without the cost of additional scans. However, most existing methods are designed for scale-specific SR tasks and are unable to generalize over magnification scales. In this paper, we propose an approach for medical image arbitrary-scale super-resolution (MIASSR), in which we couple meta-learning with generative adversarial networks (GANs) to super-resolve medical images at arbitrary scales of magnification. Compared to state-of-the-art SISR algorithms on single-modal magnetic resonance (MR) brain images (OASIS-brains) and multi-modal MR brain images (BraTS), MIASSR achieves comparable fidelity performance and the best perceptual quality with the smallest model size. We also employ transfer learning to enable MIASSR to tackle SR tasks of new medical modalities, such as cardiac MR images (ACDC) and chest computed tomography images (COVID-CT). The source code of our work is also public. Thus, MIASSR has the potential to become a new foundational pre-/post-processing step in clinical image analysis tasks such as reconstruction, image quality enhancement, and segmentation.
U-Net Model-Based Classification and Description of Brain Tumor in MRI Images
Tunga, P. Prakash
Singh, Vipula
Aditya, V. Sri
Subramanya, N.
International Journal of Image and Graphics2021Journal Article, cited 0 times
BraTS-TCGA-GBM
In this paper, we discuss the classification of brain tumors in Magnetic Resonance Imaging (MRI) images using the U-Net model, then evaluate parameters that indicate the performance of the model. We also discuss the extraction of the tumor region from the brain image and the description of the tumor regarding its position and size. Here, we consider the case of gliomas, one of the types of brain tumors, which are common and can be fatal depending on their position and growth. U-Net is a Convolutional Neural Network (CNN) model with a U-shaped architecture. MRI is a non-invasive technique that provides good soft-tissue contrast, so this imaging method can be beneficial for the detection and description of brain tumors. Manual delineation of tumors from brain MRI is laborious, time-consuming and can vary from expert to expert. Our work forms a computer-aided technique which is relatively fast and reproducible, with accuracy very much on par with the ground truth. The results of the work can be used for treatment planning and further processing related to storage or transmission of images.
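For readers unfamiliar with the architecture, a two-level U-Net can be sketched as follows; channel counts and depth are illustrative, and the paper's network is deeper.

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Contracting path, bottleneck, and expanding path with one skip
    connection; input H and W must be even."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc = double_conv(1, 32)
        self.down = nn.MaxPool2d(2)
        self.mid = double_conv(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = double_conv(64, 32)      # 64 = 32 (skip) + 32 (upsampled)
        self.out = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):                   # x: [B, 1, H, W]
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([e, self.up(m)], dim=1))
        return self.out(d)                  # per-pixel class logits
```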
IDH-Based Radiogenomic Characterization of Glioma Using Local Ternary Pattern Descriptor Integrated with Radiographic Features and Random Forest Classifier
Gore, Sonal
Jagtap, Jayant
International Journal of Image and Graphics2021Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
Random forest classifier
Radiogenomics
BRAIN
Magnetic Resonance Imaging (MRI)
Mutations in the family of Isocitrate Dehydrogenase (IDH) genes occur early in oncogenesis, especially in glioma brain tumors. Molecular diagnosis of glioma using machine learning has attracted growing attention over the last couple of years. The development of molecular-level predictive approaches carries great potential in the radiogenomic field, but more focused efforts are needed to develop such approaches. This study aims to develop an integrative genomic diagnostic method to assess the utility of textures combined with other radiographic and clinical features for IDH classification of glioma into IDH mutant and IDH wild type. A random forest classifier is used for classification of a combined set of clinical features and radiographic features extracted from axial T2-weighted Magnetic Resonance Imaging (MRI) images of low- and high-grade glioma. This radiogenomic analysis is performed on The Cancer Genome Atlas (TCGA) data of 74 IDH-mutant and 104 IDH-wild-type patients. Texture features are extracted using the uniform, rotation-invariant Local Ternary Pattern (LTP) method. Other features, such as shape, first-order statistics, image contrast-based features, and clinical data like age and histologic grade, are combined with LTP features for IDH discrimination. The proposed random forest-assisted model achieved an accuracy of 85.89% in a multivariate analysis of the integrated set of feature descriptors, using the Glioblastoma and Low-Grade Glioma datasets available from The Cancer Imaging Archive (TCIA). Such an integrated feature analysis using LTP textures and other descriptors can effectively predict the molecular class of glioma as IDH mutant or wild type.
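A basic 3x3 LTP sketch follows; the paper uses a uniform, rotation-invariant variant, which this simplified version omits, and the random forest step is indicated only in comments.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ltp_histograms(img, t=5):
    """Basic 3x3 Local Ternary Pattern: each neighbor is coded +1/0/-1
    against the center value +/- t, then split into 'upper' and 'lower'
    binary patterns whose normalized histograms serve as texture features."""
    c = img[1:-1, 1:-1].astype(np.int32)
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for k, (dy, dx) in enumerate(offsets):
        n = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx].astype(np.int32)
        upper |= (n > c + t).astype(np.int32) << k
        lower |= (n < c - t).astype(np.int32) << k
    h_u = np.bincount(upper.ravel(), minlength=256)
    h_l = np.bincount(lower.ravel(), minlength=256)
    return np.concatenate([h_u, h_l]) / upper.size

# Hypothetical usage: stack LTP histograms per tumor ROI, append clinical
# columns (age, grade), and fit the classifier on IDH labels.
# X = np.stack([ltp_histograms(roi) for roi in tumor_rois])
# RandomForestClassifier(n_estimators=500).fit(X, y)
```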
Region of Interest-Based Coding Technique of Medical Images Using Varying Grading of Compression
Sharma, Urvashi
Sood, Meenakshi
Puthooran, Emjee
International Journal of Image and Graphics2020Journal Article, cited 0 times
RIDER Breast MRI
A region of interest (ROI)-based compression method for medical image datasets is required to maintain the quality of the diagnostically important region of the image. It is preferable to compress the diagnostically important region in a lossless manner and the remaining portion of the image with a near-lossless method, achieving high compression efficiency without any compromise in quality. A predictive ROI-based compression for volumetric CT medical images is proposed in this paper; resolution-independent gradient edge detection (RIGED) and block adaptive arithmetic encoding (BAAE) are employed on the ROI part for prediction and encoding, reducing inter-pixel and coding redundancy. For the non-ROI portion, RIGED with an optimal threshold value, a quantizer with an optimal number of levels, and BAAE with an optimal block size are utilized for compression. Volumetric 8-bit and 16-bit standard CT image datasets are utilized for the evaluation of the proposed technique, and results are validated on real-time CT images collected from a hospital. The performance of the proposed technique in terms of BPP outperforms existing techniques such as JPEG 2000, M-CALIC, JPEG-LS, CALIC and JP3D by 20.31%, 19.87%, 17.77%, 15.58% and 13.66%, respectively.
Integration of Dynamic Multi-Atlas and Deep Learning Techniques to Improve Segmentation of the Prostate in MR Images
Moradi, Hamid
Foruzan, Amir Hossein
International Journal of Image and Graphics2021Journal Article, cited 0 times
Website
ISBI-MR-Prostate-2013
Deep Learning
Segmentation
Magnetic Resonance Imaging (MRI)
Accurate delineation of the prostate in MR images is an essential step for treatment planning and volume estimation of the organ. Prostate segmentation is a challenging task due to the organ's variable size and shape; moreover, neighboring tissues have low contrast with the prostate. In this paper, we propose a robust and precise automatic algorithm to define the prostate's boundaries in MR images. First, we find the prostate's ROI by a deep neural network and decrease the input image's size. Next, a dynamic multi-atlas-based approach obtains the initial segmentation of the prostate. A watershed algorithm improves the initial segmentation at the next stage. Finally, an SSM algorithm keeps the result in the domain of allowable prostate shapes. The quantitative evaluation of 74 prostate volumes demonstrated that the proposed method yields a mean Dice coefficient of 0.83 +/- 0.05. In comparison with recent research, our algorithm is robust against shape and size variations.
Computer-Aided Classification of Cell Lung Cancer Via PET/CT Images Using Convolutional Neural Network
El Hamdi, Dhekra
Elouedi, Ines
Slim, Ihsen
International Journal of Image and Graphics2023Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
LUNG
Computed Tomography (CT)
Positron Emission Tomography (PET)
Convolutional Neural Network (CNN)
Classification
Imaging features
Lung cancer is the leading cause of cancer-related death worldwide. Therefore, early diagnosis remains essential to allow access to appropriate curative treatment strategies. This paper presents a novel approach to assess the ability of Positron Emission Tomography/Computed Tomography (PET/CT) images, in association with artificial intelligence techniques, to classify lung cancer. In this work, we have built a multi-output Convolutional Neural Network (CNN) as a tool to assist the staging of patients with lung cancer. The TNM staging system as well as histologic subtype classification were adopted as references. The VGG16 network is applied to the PET/CT images to extract the most relevant features, which are then passed to a three-branch classifier for Tumor (T) staging, Nodal (N) staging, and histologic subtype classification. Experimental results demonstrate that our CNN model achieves good results in T and N staging and histology classification. The proposed architecture classified the tumor size with a high accuracy of 0.94 and an area under the curve (AUC) of 0.97 when tested on the Lung-PET-CT-Dx dataset. It also yielded high performance for N staging, with an accuracy of 0.98. Moreover, our approach achieved better accuracy than state-of-the-art methods in histologic classification.
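A minimal Keras sketch of a VGG16 backbone with three classification branches; the class counts, head sizes, and input handling (slices stacked to three channels) are placeholders, not the paper's configuration.

```python
import tensorflow as tf

# VGG16 backbone on PET/CT slices replicated/stacked to 3 channels;
# three softmax heads for T stage, N stage, and histologic subtype.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3), pooling="avg")
x = tf.keras.layers.Dense(256, activation="relu")(base.output)
t_out = tf.keras.layers.Dense(4, activation="softmax", name="t_stage")(x)
n_out = tf.keras.layers.Dense(4, activation="softmax", name="n_stage")(x)
h_out = tf.keras.layers.Dense(3, activation="softmax", name="histology")(x)

model = tf.keras.Model(base.input, [t_out, n_out, h_out])
model.compile(optimizer="adam",
              loss={"t_stage": "sparse_categorical_crossentropy",
                    "n_stage": "sparse_categorical_crossentropy",
                    "histology": "sparse_categorical_crossentropy"},
              metrics=["accuracy"])
```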
Analysis of a feature-deselective neuroevolution classifier (FD-NEAT) in a computer-aided lung nodule detection system for CT images
Systems for Computer-Aided Detection (CAD), specifically for lung nodule detection, have received increasing attention in recent years. This is in tandem with the observation that patients who are diagnosed with early-stage lung cancer and who undergo curative resection have a much better prognosis. In this paper, we analyze the performance of a novel feature-deselective neuroevolution method called FD-NEAT, which retains relevant features derived from CT images and evolves neural networks that perform well for combined feature selection and classification. Network performance is analyzed based on radiologists' ratings of various lung nodule characteristics defined in the LIDC database. The analysis shows that the FD-NEAT classifier agrees well with the radiologists' perception in almost all the defined nodule characteristics, and shows that FD-NEAT evolves networks that are less complex than the fixed-topology ANN in terms of number of connections.
Medical image thresholding using WQPSO and maximum entropy
Image thresholding is an important image segmentation method for finding objects of interest. Maximum entropy is an image thresholding method that exploits the entropy of the gray-level distribution of the image. The performance of this method can be improved by using swarm intelligence techniques such as Particle Swarm Optimization (PSO) and Quantum PSO (QPSO). QPSO has attracted the research community's attention due to its simplicity, easy implementation, and fast convergence. The convergence of QPSO is faster than that of PSO, and global convergence is guaranteed. In this paper, we propose a new combination of mean-updated QPSO, referred to as weighted QPSO (WQPSO), with maximum entropy to find optimal thresholds for magnetic resonance images (MRI). This method outperforms other existing methods in the literature in terms of convergence speed and accuracy.
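The maximum-entropy (Kapur) objective itself is easy to state; the sketch below scans all thresholds exhaustively, which is exactly the search a QPSO/WQPSO variant would replace on larger problems.

```python
import numpy as np

def kapur_threshold(img, levels=256):
    """Maximum-entropy (Kapur) threshold for a non-negative integer image
    (e.g., uint8): choose t maximizing the sum of the entropies of the
    background and foreground gray-level distributions."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, levels - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 < 1e-12 or w1 < 1e-12:
            continue                         # skip degenerate splits
        p0, p1 = p[:t] / w0, p[t:] / w1
        h = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0])) \
            - np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```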
Automatic GPU memory management for large neural models in TensorFlow
Deep learning models are becoming larger and will not fit in the limited memory of accelerators such as GPUs for training. Though many methods have been proposed to solve this problem, they are rather ad hoc in nature and difficult to extend and integrate with other techniques. In this paper, we tackle the problem in a formal way to provide a strong foundation for supporting large models. We propose a method of formally rewriting the computational graph of a model where swap-out and swap-in operations are inserted to temporarily store intermediate results on CPU memory. By introducing a categorized topological ordering for simulating graph execution, the memory consumption of a model can be easily analyzed by using operation distances in the ordering. As a result, the problem of fitting a large model into a memory-limited accelerator is reduced to the problem of reducing operation distances in a categorized topological ordering. We then show how to formally derive swap-out and swap-in operations from an existing graph and present rules to optimize the graph. Finally, we propose a simulation-based auto-tuning to automatically find suitable graph-rewriting parameters for the best performance. We developed a module in TensorFlow, called LMS, by which we successfully trained ResNet-50 with a 4.9x larger mini-batch size and 3D U-Net with a 5.6x larger image resolution.
Semantic learning machine improves the CNN-Based detection of prostate cancer in non-contrast-enhanced MRI
Lapa, Paulo
Gonçalves, Ivo
Rundo, Leonardo
Castelli, Mauro
2019Conference Proceedings, cited 0 times
SPIE-AAPM PROSTATEx Challenge
Convolutional Neural Network (CNN)
Considering that Prostate Cancer (PCa) is the most frequently diagnosed tumor in Western men, considerable attention has been devoted to computer-assisted PCa detection approaches. However, this task still represents an open research question. In clinical practice, multiparametric Magnetic Resonance Imaging (MRI) is becoming the most used modality, aiming at defining biomarkers for PCa. In recent years, deep learning techniques have boosted performance in prostate MR image analysis and classification. This work explores the use of the Semantic Learning Machine (SLM) neuroevolution algorithm to replace the backpropagation algorithm commonly used in the last fully-connected layers of Convolutional Neural Networks (CNNs). We analyzed the non-contrast-enhanced multispectral MRI sequences included in the PROSTATEx dataset, namely: T2-weighted, Proton Density weighted, and Diffusion Weighted Imaging. The experimental results show that the SLM significantly outperforms XmasNet, a state-of-the-art CNN. In particular, with respect to XmasNet, the SLM achieves higher classification accuracy (without either pre-training the underlying CNN or relying on backpropagation) as well as a speed-up of one order of magnitude.
Feature Extraction and Analysis for Lung Nodule Classification using Random Forest
NextMed, Augmented and Virtual Reality platform for 3D medical imaging visualization: Explanation of the software platform developed for 3D models visualization related with medical images using Augmented and Virtual Reality technology
Visualizing radiological results with more advanced techniques than those currently available, such as Augmented Reality and Virtual Reality technologies, represents a great advance for medical professionals, as it removes the need to mentally reconstruct 3D anatomy when interpreting medical images. The problem is that applying these techniques requires segmenting the anatomical areas of interest, which currently involves human intervention. The Nextmed project is presented as a complete solution that includes DICOM image import, automatic segmentation of certain anatomical structures, 3D mesh generation of the segmented area, and a visualization engine with Augmented Reality and Virtual Reality, all delivered through different software platforms that have been implemented and are detailed here, including results obtained from real patients. We will focus on the visualization platform, which uses both Augmented and Virtual Reality technologies to allow medical professionals to work with 3D model representations of medical images in a new way, taking advantage of these technologies.
Superpixel Region Merging Based on Deep Network for Medical Image Segmentation
Liu, Hui
Wang, Haiou
Wu, Yan
Xing, Lei
2020Journal Article, cited 0 times
RIDER Lung CT
Automatic and accurate semantic segmentation of pathological structures in medical images is challenging because of noise, deformable pathological shapes, and low contrast between soft tissues. Classical superpixel-based classification algorithms suffer from edge leakage due to the complexity and heterogeneity inherent in medical images. Therefore, we propose a deep U-Net with superpixel region merging incorporated for edge enhancement to facilitate and optimize segmentation. Our approach combines three innovations: (1) unlike purely deep learning-based image segmentation, the segmentation evolves from superpixel region merging via U-Net training, acquiring rich semantic information in addition to gray-level similarity; (2) a bilateral filtering module is adopted at the beginning of the network to eliminate external noise and enhance soft-tissue contrast at pathology edges; and (3) a normalization layer is inserted after the convolutional layer at each feature scale to prevent overfitting and increase sensitivity to model parameters. The model was validated on lung CT, brain MR, and coronary CT datasets. Different superpixel methods and cross-validation show the effectiveness of this architecture. The hyperparameter settings were empirically explored to achieve a good trade-off between performance and efficiency, with a four-layer network achieving the best precision, recall, F-measure, and running speed. It was demonstrated that our method outperformed state-of-the-art networks, including FCN-16s, SegNet, PSPNet, DeepLabv3, and the traditional U-Net, both quantitatively and qualitatively. Source code for the complete method is available at https://github.com/Leahnawho/Superpixel-network.
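The superpixel region-merging idea can be sketched as follows; this assumes a recent scikit-image (the channel_axis argument), replaces the paper's learned merging criterion with a simple gray-level threshold, and uses a greedy union-find merge, so it illustrates the concept rather than the published method.

```python
# Sketch: over-segment with SLIC, then greedily merge adjacent superpixels whose
# mean intensities are close (stand-in for the deep-network merging criterion).
import numpy as np
from skimage import data, segmentation

img = data.camera().astype(float) / 255.0          # stand-in for a CT slice
labels = segmentation.slic(img, n_segments=400, compactness=0.1,
                           channel_axis=None, start_label=0)
means = np.array([img[labels == k].mean() for k in range(labels.max() + 1)])

parent = list(range(len(means)))                   # union-find over superpixels
def find(a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

pairs = set()                                      # 4-connected adjacent labels
pairs.update(zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()))
pairs.update(zip(labels[:-1, :].ravel(), labels[1:, :].ravel()))
for a, b in pairs:
    if a != b and abs(means[find(a)] - means[find(b)]) < 0.05:
        parent[find(a)] = find(b)                  # greedy merge of close regions

merged = np.vectorize(find)(labels)
print(f"superpixels: {len(means)} -> merged regions: {len(np.unique(merged))}")
```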
PanelNet: A Novel Deep Neural Network for Predicting Collective Diagnostic Ratings by a Panel of Radiologists for Pulmonary Nodules
Reducing the misdiagnosis rate is a central concern in modern medicine. In clinical practice, group-based collective diagnosis is frequently exercised to curb the misdiagnosis rate. However, little effort has been dedicated to emulating the collective intelligence behind this group-based decision-making practice in computer-aided diagnosis research. To fill this gap, this study introduces a novel deep neural network, PanelNet, that computationally models and reproduces the collective diagnosis capability demonstrated by a group of medical experts. To experimentally explore the validity of the new solution, we apply PanelNet to one of the key tasks in radiology: assessing malignancy ratings of pulmonary nodules. For each nodule and a given panel, PanelNet predicts the statistical distribution of malignancy ratings collectively judged by the panel of radiologists. Extensive experimental results consistently demonstrate that PanelNet outperforms multiple state-of-the-art computer-aided diagnosis methods applicable to the collective diagnostic task. To the best of our knowledge, no other collective computer-aided diagnosis method grounded in modern machine learning has been previously proposed. By design, PanelNet can also be applied to model collective diagnosis processes for other diseases.
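The core modeling target (a distribution of panel ratings rather than a single label) can be sketched with soft-label cross-entropy; the tiny softmax regression, feature matrix, and rating histograms below are invented stand-ins for the deep network and real reader data.

```python
# Sketch: predict the distribution of malignancy ratings (1-5) a reader panel
# would assign, by fitting softmax regression against soft (histogram) targets.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))                       # nodule features (stand-in)
counts = rng.integers(0, 5, size=(200, 5)) + 1       # per-nodule rating histograms
T = counts / counts.sum(axis=1, keepdims=True)       # normalized soft targets

W = np.zeros((16, 5))
for _ in range(500):                                 # gradient descent on CE loss
    logits = X @ W
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (P - T) / len(X)                # softmax CE gradient, soft labels

kl = np.mean(np.sum(T * np.log((T + 1e-9) / (P + 1e-9)), axis=1))
print(f"mean KL(target || predicted) = {kl:.3f}")
```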
A Framework for Customizable FPGA-based Image Registration Accelerators
Image Registration is a highly compute-intensive optimization procedure that determines the geometric transformation to align a floating image to a reference one. Generally, the registration targets are images taken from different time instances, acquisition angles, and/or sensor types. Several methodologies are employed in the literature to address the limiting factors of this class of algorithms, among which hardware accelerators seem the most promising solution to boost performance. However, most hardware implementations are either closed-source or tailored to a specific context, limiting their application to different fields. For these reasons, we propose an open-source hardware-software framework to generate a configurable architecture for the most compute-intensive part of registration algorithms, namely the similarity metric computation. This metric is Mutual Information, a well-known measure from information theory used in several optimization procedures. Through different design parameter configurations, we explore several design choices of our highly customizable architecture and validate it on multiple FPGAs. We evaluated various architectures against an optimized Matlab implementation on an Intel Xeon Gold, reaching a speedup of up to 2.86x, and showing remarkable performance and power efficiency against other state-of-the-art approaches.
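The similarity metric the accelerator targets is straightforward to state in software; a minimal reference sketch of Mutual Information from a joint histogram (bin count and test images are arbitrary):

```python
# Sketch: Mutual Information between a reference and a floating image, estimated
# from their joint intensity histogram.
import numpy as np

def mutual_information(ref, flt, bins=64):
    joint, _, _ = np.histogram2d(ref.ravel(), flt.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
a = rng.random((128, 128))
print(mutual_information(a, a))                       # self-MI: maximal
print(mutual_information(a, rng.random((128, 128))))  # independent: near zero
```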
x4 Super-Resolution Analysis of Magnetic Resonance Imaging based on Generative Adversarial Network without Supervised Images
Magnetic resonance imaging (MRI) is widely used in clinical medical auxiliary diagnosis. In acquiring images by MRI machines, patients usually need to be exposed to harmful radiation. The radiation dose can be reduced by reducing the resolution of MRI images. This paper analyzes the super-resolution of low-resolution MRI images based on a deep learning algorithm to ensure the pixel quality of the MRI image required for medical diagnosis, and then reconstructs high-resolution MRI images as an alternative method to reduce radiation dose. This paper studies how to improve the resolution of low-dose MRI by 4 times through super-resolution analysis based on deep learning, without other available information. This paper constructs a dataset close to natural low-/high-resolution image pairs through degradation kernel estimation and noise injection, and constructs a two-layer generative adversarial network based on the design ideas of ESRGAN, PatchGAN, and VGG-19. Tests show that our method outperforms EDSR, RCAN, and ESRGAN on no-reference image quality metrics.
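The unsupervised data-construction step can be sketched as a degradation pipeline; a Gaussian blur stands in for the estimated kernel, and the scale, sigma, and noise level are illustrative choices rather than the paper's estimates.

```python
# Sketch: build pseudo low-/high-resolution training pairs by degrading real
# images with a blur kernel, downsampling, and injecting noise.
import numpy as np
from scipy.ndimage import gaussian_filter

def make_lr(hr, scale=4, blur_sigma=1.5, noise_sigma=0.02, rng=None):
    rng = rng or np.random.default_rng()
    blurred = gaussian_filter(hr, sigma=blur_sigma)          # degradation kernel
    lr = blurred[::scale, ::scale]                           # x4 downsampling
    lr = lr + rng.normal(scale=noise_sigma, size=lr.shape)   # noise injection
    return np.clip(lr, 0.0, 1.0)

hr = np.random.default_rng(0).random((256, 256))             # stand-in MR slice
lr = make_lr(hr)
print(hr.shape, "->", lr.shape)                              # (256, 256) -> (64, 64)
```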
Automated Analysis of Blood Smear Images for Leukemia Detection: A Comprehensive Review
Mittal, Ajay
Dhalla, Sabrina
Gupta, Savita
Gupta, Aastha
2022Journal Article, cited 0 times
AML-Cytomorphology_LMU
CPTAC-AML
SN-AM
Leukemia, the malignancy of blood-forming tissues, becomes fatal if not detected in the early stages. It is detected through a blood smear test that involves the morphological analysis of the stained blood slide. The manual microscopic examination of slides is tedious, time-consuming, error-prone, and subject to inter-observer and intra-observer bias. Several computerized methods to automate this task have been developed to alleviate these problems during the past few years. However, no exclusive comprehensive review of these methods has been presented to date. Such a review shall be highly beneficial for novice readers interested in pursuing research in this domain. This article fills the void by presenting a comprehensive review of 149 papers detailing the methods used to analyze blood smear images and detect leukemia. The primary focus of the review is on presenting the underlying techniques used and their reported performance, along with their merits and demerits. It also enumerates the research issues that have been satisfactorily solved and open challenges still existing in the domain.
Deep Machine Learning Histopathological Image Analysis for Renal Cancer Detection
Koo, Jia Chun
Hum, Yan Chai
Lai, Khin Wee
Yap, Wun-She
Manickam, Swaminathan
Tee, Yee Kai
2022Conference Paper, cited 0 times
CPTAC-CCRCC
Histopathology
Deep Learning
Classification
Python
Transfer learning
Renal cancer is one of the top causes of cancer-related deaths among men globally. Early detection of renal cancer is crucial because it can significantly improve the probability of survival. However, assessing histopathological renal tissue is labor-intensive and traditionally done manually by a pathologist, which leads to a high possibility of misdetection and/or misdiagnosis, especially in the early stages, and is prone to inter-pathologist variation. Automatic histopathological diagnosis of renal cancer can greatly reduce this bias and provide accurate characterization of disease, even though pathology and microscopy are highly complex. This paper investigated the use of deep learning methods to develop a binary histopathological image classification model (cancer or normal). 783 whole slide images of renal tissue were processed into patches using the PyHIST tool at 5x magnification before being fed to the deep learning models. Five pre-trained deep learning architectures, namely VGG, ResNet, DenseNet, MobileNet, and EfficientNet, were trained with transfer learning on the CPTAC-CCRCC dataset and their performance was evaluated. EfficientNetB0 achieved state-of-the-art accuracy (97%), specificity (94%), F1-score (98%), and AUC (96%), but slightly inferior recall (98%), when compared with the best published results in the literature. These findings show that the proposed deep learning approach can effectively classify histopathological images of renal tissue into tumor and non-tumor classes, making pathology diagnosis more efficient and less labor-intensive.
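A transfer-learning setup of this kind can be sketched in a few lines of PyTorch; the frozen backbone, replaced head, batch shapes, and hyperparameters below are illustrative assumptions, not the authors' training recipe, and the pretrained weights are downloaded by torchvision on first use.

```python
# Sketch: pre-trained EfficientNet-B0 with its classifier head replaced for
# binary (tumor vs. normal) histology patch classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
for p in model.parameters():          # freeze the pre-trained backbone
    p.requires_grad = False
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # new head

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)       # a batch of histology patches (stand-in)
y = torch.randint(0, 2, (8,))
loss = criterion(model(x), y)         # one illustrative training step
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.3f}")
```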
Automatic lung segmentation in CT scans using guided filtering
Revy, Gabor
Hadhazi, Daniel
Hullam, Gabor
2022Conference Paper, cited 0 times
LCTSC
Organ segmentation
Segmentation
Algorithm Development
The segmentation of the lungs in chest CT scans is a crucial step in computer-aided diagnosis. Current algorithms designed to solve this problem usually utilize a model of some form. To build a sufficiently robust model, a very large amount of diverse data is required, which is not always available. In this work, we propose a novel model-free algorithm for lung segmentation. Our segmentation pipeline consists of expert algorithms, some of which are improved versions of previously known methods, and a novel application of the guided filter method. Our system achieves an IoU (intersection over union) value of 0.9236 ± 0.0290 (mean±std) and a DSC (Dice similarity coefficient) of 0.9601 ± 0.0158 on the LCTSC dataset. These results indicate that our segmentation pipeline can be a viable solution in certain applications.
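For reference, a minimal single-channel guided filter (He et al.) of the kind such a pipeline applies can be written with box filters; the radius and eps values are illustrative, and the image/mask pair is synthetic.

```python
# Sketch: guided filter for edge-aware refinement of a rough lung mask, guided
# by the CT intensities; implemented with uniform (box) filters.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)              # local linear model: q = a*I + b
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

rng = np.random.default_rng(0)
img = rng.random((128, 128))                # stand-in for a CT slice
mask = (img > 0.5).astype(float)            # rough mask to be refined
refined = guided_filter(img, mask)          # edge-aware smoothing of the mask
print(refined.min(), refined.max())
```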
ICP Algorithm Based Liver Rigid Registration Method Using Liver and Liver Vessel Surface Mesh
Kim, Soohyun
Koo, Kyoyeong
Park, Taeyong
Lee, Jeongjin
2023Conference Paper, cited 0 times
TCGA-LIHC
HCC-TACE-Seg
LIVER
Hepatocellular carcinoma (HCC)
computed Tomography (CT)
Image Registration
Segmentation
Organ segmentation
Vasculature
Computer Aided Diagnosis (CADx)
To improve the survival rate of hepatocellular carcinoma (HCC) patients, early diagnosis and treatment are essential. Early diagnosis of HCC often involves comparing and analyzing hundreds of computed tomography (CT) images, which is subjective and time-consuming. In this paper, we propose a liver rigid registration method using liver and liver vessel surface meshes to enable fast and objective diagnosis of HCC. The proposed method segments the liver and liver vessel regions from abdominal CT images, generates surface meshes, and performs liver rigid registration based on the Iterative Closest Point (ICP) algorithm using the generated meshes. We evaluate the accuracy of the proposed method through experiments; the performance evaluations demonstrate its potential for rapid and objective early diagnosis and treatment of HCC.
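A minimal rigid ICP of the kind named above alternates nearest-neighbor correspondence with a closed-form Kabsch/SVD update; the random point clouds below stand in for liver/vessel mesh vertices, and iteration count and convergence checks are simplified.

```python
# Sketch: rigid ICP with a KD-tree for correspondences and SVD for the update.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)               # nearest-neighbor correspondences
        d = dst[idx]
        mu_s, mu_d = moved.mean(0), d.mean(0)
        H = (moved - mu_s).T @ (d - mu_d)        # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ S @ U.T                  # optimal rotation (Kabsch)
        R, t = R_step @ R, R_step @ t + mu_d - R_step @ mu_s
    return R, t

rng = np.random.default_rng(0)
dst = rng.random((500, 3))                       # "fixed" mesh vertices
src = dst - 0.1                                  # translated copy as moving mesh
R, t = icp(src, dst)
print("recovered translation:", np.round(t, 3))  # expect ~[0.1, 0.1, 0.1]
```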
Hephaestus: Codesigning and Automating 3D Image Registration on Reconfigurable Architectures
Sorrentino, Giuseppe
Venere, Marco
Conficconi, Davide
D’Arnese, Eleonora
Santambrogio, Marco Domenico
2023Journal Article, cited 0 times
CPTAC-LUAD
Healthcare is a pivotal research field, and medical imaging is crucial in many applications. Therefore, finding new architectural and algorithmic solutions would benefit highly repetitive image processing procedures. One of the most complex tasks in this regard is image registration, which finds the optimal geometric alignment among 3D image stacks and is widely employed in healthcare and robotics. Given the high computational demand of such a procedure, hardware accelerators are a promising real-time and energy-efficient solution, but they are complex to design and integrate within software pipelines. Therefore, this work presents an automation framework called Hephaestus that generates efficient 3D image registration pipelines combined with reconfigurable accelerators. Moreover, to alleviate the burden on the software, we codesign software-programmable accelerators that can adapt at run time to the image volume dimensions. Hephaestus features a cross-platform abstraction layer that enables transparent deployment on high-performance and embedded systems. Embedded devices, being memory constrained, are a particularly challenging setting for the computational complexity of 3D image registration; they require further attention and tailoring of the accelerators and the registration application to reach satisfactory results. Therefore, with Hephaestus we also propose an approximation mechanism that enables such devices to perform 3D image registration and even, in some cases, achieve the accuracy of the high-performance ones. Overall, Hephaestus demonstrates a maximum speedup of 1.85× and an efficiency improvement of 2.35× with respect to the state of the art, and a maximum speedup of 2.51× and an efficiency improvement of 2.76× against our software baseline, while attaining state-of-the-art accuracy on 3D registrations.
A Systematic Collection of Medical Image Datasets for Deep Learning
Li, Johann
Zhu, Guangming
Hua, Cong
Feng, Mingtao
Bennamoun, Basheer
Li, Ping
Lu, Xiaoyuan
Song, Juan
Shen, Peiyi
Xu, Xu
Mei, Lin
Zhang, Liang
Shah, Syed Afaq Ali
Bennamoun, Mohammed
2023Journal Article, cited 0 times
AAPM-RT-MAC
Brain-Tumor-Progression
BREAST-DIAGNOSIS
ISBI-MR-Prostate-2013
Lung-PET-CT-Dx
Prostate-3T
PROSTATE-DIAGNOSIS
The astounding success made by artificial intelligence in healthcare and other fields proves that it can achieve human-like performance. However, success always comes with challenges. Deep learning algorithms are data dependent and require large datasets for training. Many junior researchers face a lack of data for a variety of reasons. Medical image acquisition, annotation, and analysis are costly, and their usage is constrained by ethical restrictions. They also require several other resources, such as professional equipment and expertise. That makes it difficult for novice and non-medical researchers to have access to medical data. Thus, as comprehensively as possible, this article provides a collection of medical image datasets with their associated challenges for deep learning research. We have collected the information of approximately 300 datasets and challenges mainly reported between 2007 and 2020 and categorized them into four categories: head and neck, chest and abdomen, pathology and blood, and others. The purpose of our work is to provide a list, as up-to-date and complete as possible, that can be used as a reference to easily find the datasets for medical image analysis and the information related to these datasets.
Automated Whole-Body Tumor Segmentation and Prognosis of Cancer on PET/CT
Automatic characterization of malignant disease is an important clinical need to facilitate early detection and treatment of cancer. A deep semi-supervised transfer learning approach was developed for automated whole-body tumor segmentation and prognosis on positron emission tomography (PET)/computed tomography (CT) scans using limited annotations. This study analyzed five datasets consisting of 408 prostate-specific membrane antigen (PSMA) PET/CT scans of prostate cancer patients and 611 18F-fluorodeoxyglucose (18F-FDG) PET/CT scans of lung, melanoma, lymphoma, head and neck, and breast cancer patients. Transfer learning generalized the segmentation task across PSMA and 18F-FDG PET/CT. Imaging measures quantifying molecular tumor burden were extracted from the predicted segmentations. Prognostic risk models were developed and evaluated on follow-up clinical measures, Kaplan-Meier survival analysis, and response assessment for patients with prostate, head and neck, and breast cancers, respectively. The proposed approach demonstrated accurate tumor segmentation and prognosis on PET/CT of patients across six cancer types.
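The prognostic-evaluation step mentioned above (Kaplan-Meier analysis stratified by an imaging-derived risk score) can be sketched as follows; this assumes the lifelines package, and the risk grouping, survival times, and censoring are synthetic stand-ins, not the study's data.

```python
# Sketch: Kaplan-Meier curves and a log-rank test for two imaging-derived risk
# groups (e.g., high vs. low molecular tumor burden).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
risk = rng.random(120) > 0.5                       # high vs. low risk group
T = rng.exponential(scale=np.where(risk, 12, 24))  # months to event (synthetic)
E = rng.random(120) < 0.7                          # 1 = event observed, 0 = censored

km = KaplanMeierFitter()
for grp, name in [(~risk, "low risk"), (risk, "high risk")]:
    km.fit(T[grp], E[grp], label=name)
    print(name, "median survival:", km.median_survival_time_)

res = logrank_test(T[risk], T[~risk], E[risk], E[~risk])
print(f"log-rank p = {res.p_value:.4f}")
```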
KiT-RT: An Extendable Framework for Radiative Transfer and Therapy
Kusch, Jonas
Schotthöfer, Steffen
Stammer, Pia
Wolters, Jannick
Xiao, Tianbai
2023Journal Article, cited 0 times
Lung-PET-CT-Dx
In this article, we present Kinetic Transport Solver for Radiation Therapy (KiT-RT), an open source C++-based framework for solving kinetic equations in therapy applications, available at https://github.com/CSMMLab/KiT-RT. This software framework aims to provide a collection of classical deterministic solvers for unstructured meshes that allow for easy extendability. Therefore, KiT-RT is a convenient base to test new numerical methods in various applications and compare them against conventional solvers. The implementation includes spherical harmonics, minimal entropy, neural minimal entropy, and discrete ordinates methods. Solution characteristics and efficiency are presented through several test cases ranging from radiation transport to electron radiation therapy. Due to the variety of included numerical methods and easy extendability, the presented open source code is attractive for both developers, who want a basis to build their numerical solvers, and users or application engineers, who want to gain experimental insights without directly interfering with the codebase.
Chest CT Cinematic Rendering of SARS-CoV-2 Pneumonia
Necker, F. N.
Scholz, M.
Radiology2021Journal Article, cited 0 times
Website
MIDRC-RICORD-1a
LUNG
The SARS-CoV-2 pandemic has spread rapidly throughout the world since the first reported infection in Wuhan, China. Despite the introduction of vaccines for this important viral infection, there remains a significant public health risk as the virus continues to mutate. While it remains unknown whether these new mutations will evade the current vaccines, it is possible that we may be living with this infection for many years to come as it becomes endemic. Cinematic rendering of CT images is a new way to show the three-dimensionality of the various densities contained in volumetric CT data. We show an example of PCR-positive SARS-CoV-2 pneumonia using this new technique (Figure; Movie [online]). This case is from the RSNA RICORD dataset (1, 2). It shows the typical presentation of SARS-CoV-2 pneumonia, with clearly seen ground-glass subpleural opacities (Figure). The higher attenuation of lung tissue filled with fluid results in these areas appearing patchy or spongy.
Pancreatic Cancer Detection on CT Scans with Deep Learning: A Nationwide Population-based Study
Chen, P. T.
Wu, T.
Wang, P.
Chang, D.
Liu, K. L.
Wu, M. S.
Roth, H. R.
Lee, P. C.
Liao, W. C.
Wang, W.
Radiology2022Journal Article, cited 5 times
Website
Pancreas-CT
Segmentation
Classification
Computer Aided Detection (CADe)
Deep Learning
Background Approximately 40% of pancreatic tumors smaller than 2 cm are missed at abdominal CT. Purpose To develop and to validate a deep learning (DL)-based tool able to detect pancreatic cancer at CT. Materials and Methods Retrospectively collected contrast-enhanced CT studies in patients diagnosed with pancreatic cancer between January 2006 and July 2018 were compared with CT studies of individuals with a normal pancreas (control group) obtained between January 2004 and December 2019. An end-to-end tool comprising a segmentation convolutional neural network (CNN) and a classifier ensembling five CNNs was developed and validated in the internal test set and a nationwide real-world validation set. The sensitivities of the computer-aided detection (CAD) tool and radiologist interpretation were compared using the McNemar test. Results A total of 546 patients with pancreatic cancer (mean age, 65 years +/- 12 [SD], 297 men) and 733 control subjects were randomly divided into training, validation, and test sets. In the internal test set, the DL tool achieved 89.9% (98 of 109; 95% CI: 82.7, 94.9) sensitivity and 95.9% (141 of 147; 95% CI: 91.3, 98.5) specificity (area under the receiver operating characteristic curve [AUC], 0.96; 95% CI: 0.94, 0.99), without a significant difference (P = .11) in sensitivity compared with the original radiologist report (96.1% [98 of 102]; 95% CI: 90.3, 98.9). In a test set of 1473 real-world CT studies (669 malignant, 804 control) from institutions throughout Taiwan, the DL tool distinguished between CT malignant and control studies with 89.7% (600 of 669; 95% CI: 87.1, 91.9) sensitivity and 92.8% specificity (746 of 804; 95% CI: 90.8, 94.5) (AUC, 0.95; 95% CI: 0.94, 0.96), with 74.7% (68 of 91; 95% CI: 64.5, 83.3) sensitivity for malignancies smaller than 2 cm. Conclusion The deep learning-based tool enabled accurate detection of pancreatic cancer on CT scans, with reasonable sensitivity for tumors smaller than 2 cm. (c) RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Aisen and Rodrigues in this issue.
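The reader-vs-model comparison reported above uses the McNemar test on paired sensitivities; a minimal sketch with statsmodels, where the 2x2 discordance counts are invented for illustration, not the study's data:

```python
# Sketch: McNemar's test on paired binary outcomes (same cases read by the DL
# tool and by the radiologist).
from statsmodels.stats.contingency_tables import mcnemar

# rows: DL tool correct / wrong; cols: radiologist correct / wrong (invented)
table = [[90, 8],
         [4, 7]]
result = mcnemar(table, exact=True)     # exact binomial version for small counts
print(f"McNemar statistic = {result.statistic}, p = {result.pvalue:.4f}")
```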
Predicting Microvascular Invasion in Hepatocellular Carcinoma Using CT-based Radiomics Model
Xia, T. Y.
Zhou, Z. H.
Meng, X. P.
Zha, J. H.
Yu, Q.
Wang, W. L.
Song, Y.
Wang, Y. C.
Tang, T. Y.
Xu, J.
Zhang, T.
Long, X. Y.
Liang, Y.
Xiao, W. B.
Ju, S. H.
Radiology2023Journal Article, cited 3 times
Website
TCGA-LIHC
Algorithm Development
Radiomics
RNA sequencing
Humans
Middle Aged
*Carcinoma
Hepatocellular/diagnostic imaging/genetics
*Liver Neoplasms/diagnostic imaging/genetics
Retrospective Studies
Neoplasm Invasiveness/pathology
Tomography
X-Ray Computed/methods
Background Prediction of microvascular invasion (MVI) may help determine treatment strategies for hepatocellular carcinoma (HCC). Purpose To develop a radiomics approach for predicting MVI status based on preoperative multiphase CT images and to identify MVI-associated differentially expressed genes. Materials and Methods Patients with pathologically proven HCC from May 2012 to September 2020 were retrospectively included from four medical centers. Radiomics features were extracted from tumors and peritumor regions on preoperative registration or subtraction CT images. In the training set, these features were used to build five radiomics models via logistic regression after feature reduction. The models were tested using internal and external test sets against a pathologic reference standard to calculate area under the receiver operating characteristic curve (AUC). The optimal AUC radiomics model and clinical-radiologic characteristics were combined to build the hybrid model. The log-rank test was used in the outcome cohort (Kunming center) to analyze early recurrence-free survival and overall survival based on high versus low model-derived score. RNA sequencing data from The Cancer Image Archive were used for gene expression analysis. Results A total of 773 patients (median age, 59 years; IQR, 49-64 years; 633 men) were divided into the training set (n = 334), internal test set (n = 142), external test set (n = 141), outcome cohort (n = 121), and RNA sequencing analysis set (n = 35). The AUCs from the radiomics and hybrid models, respectively, were 0.76 and 0.86 for the internal test set and 0.72 and 0.84 for the external test set. Early recurrence-free survival (P < .01) and overall survival (P < .007) can be categorized using the hybrid model. Differentially expressed genes in patients with findings positive for MVI were involved in glucose metabolism. Conclusion The hybrid model showed the best performance in prediction of MVI. (c) RSNA, 2023 Supplemental material is available for this article. See also the editorial by Summers in this issue.
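The modeling recipe described here (many radiomics features, feature reduction, then logistic regression scored by AUC) can be sketched with scikit-learn; the features, MVI labels, and the L1 penalty strength below are synthetic stand-ins rather than the study's pipeline.

```python
# Sketch: L1-penalized feature reduction + logistic regression + AUC, the common
# radiomics modeling pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 120))                  # 120 radiomics features (synthetic)
y = (X[:, :5].sum(axis=1) + rng.normal(size=300) > 0).astype(int)  # MVI status

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)

# The L1 penalty drives most coefficients to zero, acting as feature reduction.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(scaler.transform(X_tr), y_tr)
kept = int(np.sum(clf.coef_ != 0))
auc = roc_auc_score(y_te, clf.predict_proba(scaler.transform(X_te))[:, 1])
print(f"features kept: {kept}/120, test AUC = {auc:.3f}")
```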
MRI-based Quantification of Intratumoral Heterogeneity for Predicting Treatment Response to Neoadjuvant Chemotherapy in Breast Cancer
Shi, Z.
Huang, X.
Cheng, Z.
Xu, Z.
Lin, H.
Liu, C.
Chen, X.
Liu, C.
Liang, C.
Lu, C.
Cui, Y.
Han, C.
Qu, J.
Shen, J.
Liu, Z.
Radiology2023Journal Article, cited 0 times
Duke-Breast-Cancer-MRI
ISPY2
Algorithm Development
Imaging biomarker
*Neoadjuvant Therapy
*Breast Neoplasms/diagnostic imaging/drug therapy
Retrospective Studies
Magnetic Resonance Imaging (MRI)
Odds Ratio
Background Breast cancer is highly heterogeneous, resulting in different treatment responses to neoadjuvant chemotherapy (NAC) among patients. A noninvasive quantitative measure of intratumoral heterogeneity (ITH) may be valuable for predicting treatment response. Purpose To develop a quantitative measure of ITH on pretreatment MRI scans and test its performance for predicting pathologic complete response (pCR) after NAC in patients with breast cancer. Materials and Methods Pretreatment MRI scans were retrospectively acquired in patients with breast cancer who received NAC followed by surgery at multiple centers from January 2000 to September 2020. Conventional radiomics (hereafter, C-radiomics) and intratumoral ecological diversity features were extracted from the MRI scans, and output probabilities of imaging-based decision tree models were used to generate a C-radiomics score and ITH index. Multivariable logistic regression analysis was used to identify variables associated with pCR, and significant variables, including clinicopathologic variables, C-radiomics score, and ITH index, were combined into a predictive model for which performance was assessed using the area under the receiver operating characteristic curve (AUC). Results The training data set was comprised of 335 patients (median age, 48 years [IQR, 42-54 years]) from centers A and B, and 590, 280, and 384 patients (median age, 48 years [IQR, 41-55 years]) were included in the three external test data sets. Molecular subtype (odds ratio [OR] range, 4.76-8.39 [95% CI: 1.79, 24.21]; all P < .01), ITH index (OR, 30.05 [95% CI: 8.43, 122.64]; P < .001), and C-radiomics score (OR, 29.90 [95% CI: 12.04, 81.70]; P < .001) were independently associated with the odds of achieving pCR. The combined model showed good performance for predicting pCR to NAC in the training data set (AUC, 0.90) and external test data sets (AUC range, 0.83-0.87). Conclusion A model that combined an index created from pretreatment MRI-based imaging features quantitating ITH, C-radiomics score, and clinicopathologic variables showed good performance for predicting pCR to NAC in patients with breast cancer. (c) RSNA, 2023 Supplemental material is available for this article. See also the editorial by Rauch in this issue.
The Image Biomarker Standardization Initiative: Standardized Convolutional Filters for Reproducible Radiomics and Enhanced Clinical Insights
Whybra, P.
Zwanenburg, A.
Andrearczyk, V.
Schaer, R.
Apte, A. P.
Ayotte, A.
Baheti, B.
Bakas, S.
Bettinelli, A.
Boellaard, R.
Boldrini, L.
Buvat, I.
Cook, G. J. R.
Dietsche, F.
Dinapoli, N.
Gabrys, H. S.
Goh, V.
Guckenberger, M.
Hatt, M.
Hosseinzadeh, M.
Iyer, A.
Lenkowicz, J.
Loutfi, M. A. L.
Lock, S.
Marturano, F.
Morin, O.
Nioche, C.
Orlhac, F.
Pati, S.
Rahmim, A.
Rezaeijo, S. M.
Rookyard, C. G.
Salmanpour, M. R.
Schindele, A.
Shiri, I.
Spezi, E.
Tanadini-Lang, S.
Tixier, F.
Upadhaya, T.
Valentini, V.
van Griethuysen, J. J. M.
Yousefirizi, F.
Zaidi, H.
Muller, H.
Vallieres, M.
Depeursinge, A.
Radiology2024Journal Article, cited 1 times
Website
Soft-Tissue-Sarcoma
Humans
*Radiomics
Reproducibility of Results
Biomarkers
*Image Processing
Computer-Assisted
Multimodal Imaging
Filters are commonly used to enhance specific structures and patterns in images, such as vessels or peritumoral regions, to enable clinical insights beyond the visible image using radiomics. However, their lack of standardization restricts reproducibility and clinical translation of radiomics decision support tools. In this special report, teams of researchers who developed radiomics software participated in a three-phase study (September 2020 to December 2022) to establish a standardized set of filters. The first two phases focused on finding reference filtered images and reference feature values for commonly used convolutional filters: mean, Laplacian of Gaussian, Laws and Gabor kernels, separable and nonseparable wavelets (including decomposed forms), and Riesz transformations. In the first phase, 15 teams used digital phantoms to establish 33 reference filtered images of 36 filter configurations. In phase 2, 11 teams used a chest CT image to derive reference values for 323 of 396 features computed from filtered images using 22 filter and image processing configurations. Reference filtered images and feature values for Riesz transformations were not established. Reproducibility of standardized convolutional filters was validated on a public data set of multimodal imaging (CT, fluorodeoxyglucose PET, and T1-weighted MRI) in 51 patients with soft-tissue sarcoma. At validation, reproducibility of 486 features computed from filtered images using nine configurations x three imaging modalities was assessed using the lower bounds of 95% CIs of intraclass correlation coefficients. Out of 486 features, 458 were found to be reproducible across nine teams with lower bounds of 95% CIs of intraclass correlation coefficients greater than 0.75. In conclusion, eight filter types were standardized with reference filtered images and reference feature values for verifying and calibrating radiomics software packages. A web-based tool is available for compliance checking.
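One of the standardized filter families above, the Laplacian of Gaussian, is easy to sketch; the input image and scale set are arbitrary, and only simple first-order statistics are computed on the filtered images.

```python
# Sketch: Laplacian-of-Gaussian response maps at several scales, as used to
# derive filtered-image radiomics features.
import numpy as np
from scipy.ndimage import gaussian_laplace

img = np.random.default_rng(0).random((128, 128))   # stand-in for a CT slice
for sigma in (1.0, 2.0, 4.0):                       # fine to coarse structure scales
    response = gaussian_laplace(img, sigma=sigma)
    # Example first-order features computed on the filtered image:
    print(f"sigma={sigma}: mean={response.mean():.4f}, var={response.var():.4f}")
```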
The national lung screening trial: overview and study design
National Lung Screening Trial Research Team
Radiology2011Journal Article, cited 760 times
Website
NLST
lung
LDCT
Evaluation of reader variability in the interpretation of follow-up CT scans at lung cancer screening
Singh, Satinder
Pinsky, Paul
Fineberg, Naomi S
Gierada, David S
Garg, Kavita
Sun, Yanhui
Nath, P Hrudaya
Radiology2011Journal Article, cited 47 times
Website
NLST
lung
LDCT
Cancer Screening
Quantitative CT assessment of emphysema and airways in relation to lung cancer risk
Gierada, David S
Guniganti, Preethi
Newman, Blake J
Dransfield, Mark T
Kvale, Paul A
Lynch, David A
Pilgram, Thomas K
Radiology2011Journal Article, cited 41 times
Website
NLST
Non-small cell lung cancer: identifying prognostic imaging biomarkers by leveraging public gene expression microarray data--methods and preliminary results
Gevaert, Olivier
Xu, Jiajing
Hoang, Chuong D
Leung, Ann N
Xu, Yue
Quon, Andrew
Rubin, Daniel L
Napel, Sandy
Plevritis, Sylvia K
Radiology2012Journal Article, cited 187 times
Website
Radiogenomics
LUNG
PET/CT
Non Small Cell Lung Cancer (NSCLC)
Metagenomics/ methods
Microarray Analysis
PURPOSE: To identify prognostic imaging biomarkers in non-small cell lung cancer (NSCLC) by means of a radiogenomics strategy that integrates gene expression and medical images in patients for whom survival outcomes are not available by leveraging survival data in public gene expression data sets. MATERIALS AND METHODS: A radiogenomics strategy for associating image features with clusters of coexpressed genes (metagenes) was defined. First, a radiogenomics correlation map is created for a pairwise association between image features and metagenes. Next, predictive models of metagenes are built in terms of image features by using sparse linear regression. Similarly, predictive models of image features are built in terms of metagenes. Finally, the prognostic significance of the predicted image features are evaluated in a public gene expression data set with survival outcomes. This radiogenomics strategy was applied to a cohort of 26 patients with NSCLC for whom gene expression and 180 image features from computed tomography (CT) and positron emission tomography (PET)/CT were available. RESULTS: There were 243 statistically significant pairwise correlations between image features and metagenes of NSCLC. Metagenes were predicted in terms of image features with an accuracy of 59%-83%. One hundred fourteen of 180 CT image features and the PET standardized uptake value were predicted in terms of metagenes with an accuracy of 65%-86%. When the predicted image features were mapped to a public gene expression data set with survival outcomes, tumor size, edge shape, and sharpness ranked highest for prognostic significance. CONCLUSION: This radiogenomics strategy for identifying imaging biomarkers may enable a more rapid evaluation of novel imaging modalities, thereby accelerating their translation to personalized medicine.
Genomic mapping and survival prediction in glioblastoma: molecular subclassification strengthened by hemodynamic imaging biomarkers
Jain, Rajan
Poisson, Laila
Narang, Jayant
Gutman, David
Scarpace, Lisa
Hwang, Scott N
Holder, Chad
Wintermark, Max
Colen, Rivka R
Kirby, Justin
Freymann, John
Brat, Daniel J
Jaffe, Carl
Mikkelsen, Tom
Radiology2013Journal Article, cited 99 times
Website
Radiomics
Glioblastoma Multiforme (GBM)
Magnetic Resonance Imaging (MRI)
molecular subtype
PURPOSE: To correlate tumor blood volume, measured by using dynamic susceptibility contrast material-enhanced T2*-weighted magnetic resonance (MR) perfusion studies, with patient survival and determine its association with molecular subclasses of glioblastoma (GBM). MATERIALS AND METHODS: This HIPAA-compliant retrospective study was approved by institutional review board. Fifty patients underwent dynamic susceptibility contrast-enhanced T2*-weighted MR perfusion studies and had gene expression data available from the Cancer Genome Atlas. Relative cerebral blood volume (rCBV) (maximum rCBV [rCBV(max)] and mean rCBV [rCBV(mean)]) of the contrast-enhanced lesion as well as rCBV of the nonenhanced lesion (rCBV(NEL)) were measured. Patients were subclassified according to the Verhaak and Phillips classification schemas, which are based on similarity to defined genomic expression signature. We correlated rCBV measures with the molecular subclasses as well as with patient overall survival by using Cox regression analysis. RESULTS: No statistically significant differences were noted for rCBV(max), rCBV(mean) of contrast-enhanced lesion or rCBV(NEL) between the four Verhaak classes or the three Phillips classes. However, increased rCBV measures are associated with poor overall survival in GBM. The rCBV(max) (P = .0131) is the strongest predictor of overall survival regardless of potential confounders or molecular classification. Interestingly, including the Verhaak molecular GBM classification in the survival model clarifies the association of rCBV(mean) with patient overall survival (hazard ratio: 1.46, P = .0212) compared with rCBV(mean) alone (hazard ratio: 1.25, P = .1918). Phillips subclasses are not predictive of overall survival nor do they affect the predictive ability of rCBV measures on overall survival. CONCLUSION: The rCBV(max) measurements could be used to predict patient overall survival independent of the molecular subclasses of GBM; however, Verhaak classifiers provided additional information, suggesting that molecular markers could be used in combination with hemodynamic imaging biomarkers in the future.;
MR Imaging Predictors of Molecular Profile and Survival: Multi-institutional Study of the TCGA Glioblastoma Data Set
Gutman, David A
Cooper, Lee A D
Hwang, Scott N
Holder, Chad A
Gao, Jingjing
Aurora, Tarun D
Dunn, William D Jr
Scarpace, Lisa
Mikkelsen, Tom
Jain, Rajan
Wintermark, Max
Jilwan, Manal
Raghavan, Prashant
Huang, Erich
Clifford, Robert J
Mongkolwat, Pattanasak
Kleper, Vladimir
Freymann, John
Kirby, Justin
Zinn, Pascal O
Moreno, Carlos S
Jaffe, Carl
Colen, Rivka
Rubin, Daniel L
Saltz, Joel
Flanders, Adam
Brat, Daniel J
Radiology2013Journal Article, cited 217 times
Website
TCGA-GBM
Radiomics
Radiogenomics
Glioblastoma Multiforme (GBM)
BRAIN
PURPOSE: To conduct a comprehensive analysis of radiologist-made assessments of glioblastoma (GBM) tumor size and composition by using a community-developed controlled terminology of magnetic resonance (MR) imaging visual features as they relate to genetic alterations, gene expression class, and patient survival. MATERIALS AND METHODS: Because all study patients had been previously deidentified by the Cancer Genome Atlas (TCGA), a publicly available data set that contains no linkage to patient identifiers and that is HIPAA compliant, no institutional review board approval was required. Presurgical MR images of 75 patients with GBM with genetic data in the TCGA portal were rated by three neuroradiologists for size, location, and tumor morphology by using a standardized feature set. Interrater agreements were analyzed by using the Krippendorff alpha statistic and intraclass correlation coefficient. Associations between survival, tumor size, and morphology were determined by using multivariate Cox regression models; associations between imaging features and genomics were studied by using the Fisher exact test. RESULTS: Interrater analysis showed significant agreement in terms of contrast material enhancement, nonenhancement, necrosis, edema, and size variables. Contrast-enhanced tumor volume and longest axis length of tumor were strongly associated with poor survival (respectively, hazard ratio: 8.84, P = .0253, and hazard ratio: 1.02, P = .00973), even after adjusting for Karnofsky performance score (P = .0208). Proneural class GBM had significantly lower levels of contrast enhancement (P = .02) than other subtypes, while mesenchymal GBM showed lower levels of nonenhanced tumor (P < .01). CONCLUSION: This analysis demonstrates a method for consistent image feature annotation capable of reproducibly characterizing brain tumors; this study shows that radiologists' estimations of macroscopic imaging features can be combined with genetic alterations and gene expression subtypes to provide deeper insight to the underlying biologic properties of GBM subsets.
National lung screening trial: variability in nodule detection rates in chest CT studies
Pinsky, P. F.
Gierada, D. S.
Nath, P. H.
Kazerooni, E.
Amorosa, J.
Radiology2013Journal Article, cited 43 times
Website
NLST
lung
LDCT
Cancer Screening
PURPOSE: To characterize the variability in radiologists' interpretations of computed tomography (CT) studies in the National Lung Screening Trial (NLST) (including assessment of false-positive rates [FPRs] and sensitivity), to examine factors that contribute to variability, and to evaluate trade-offs between FPRs and sensitivity among different groups of radiologists. MATERIALS AND METHODS: The HIPAA-compliant NLST was approved by the institutional review board at each screening center; all participants provided informed consent. NLST radiologists reported overall screening results, nodule-specific findings, and recommendations for diagnostic follow-up. A noncalcified nodule of 4 mm or larger constituted a positive screening result. The FPR was defined as the rate of positive screening examinations in participants without a cancer diagnosis within 1 year. Descriptive analyses and mixed-effects models were utilized. The average odds ratio (OR) for a false-positive result across all pairs of radiologists was used as a measure of variability. RESULTS: One hundred twelve radiologists at 32 screening centers each interpreted 100 or more NLST CT studies, interpreting 72 160 of 75 126 total NLST CT studies in aggregate. The mean FPR for radiologists was 28.7% +/- 13.7 (standard deviation), with a range of 3.8%-69.0%. The model yielded an average OR of 2.49 across all pairs of radiologists and an OR of 1.83 for pairs within the same screening center. Mean FPRs were similar for academic versus nonacademic centers (27.9% and 26.7%, respectively) and for centers inside (25.0%) versus outside (28.7%) the U.S. "histoplasmosis belt." Aggregate sensitivity was 96.5% for radiologists with FPRs higher than the median (27.1%), compared with 91.9% for those with FPRs lower than the median (P = .02). CONCLUSION: There was substantial variability in radiologists' FPRs. Higher FPRs were associated with modestly higher sensitivity.
CT Colonography: External Clinical Validation of an Algorithm for Computer-assisted Prone and Supine Registration
Boone, Darren J
Halligan, Steve
Roth, Holger R
Hampshire, Tom E
Helbren, Emma
Slabaugh, Greg G
McQuillan, Justine
McClelland, Jamie R
Hu, Mingxing
Punwani, Shonit
Radiology2013Journal Article, cited 5 times
Website
CT COLONOGRAPHY
Image registration
Computer Assisted Detection (CAD)
PURPOSE: To perform external validation of a computer-assisted registration algorithm for prone and supine computed tomographic (CT) colonography and to compare the results with those of an existing centerline method. MATERIALS AND METHODS: All contributing centers had institutional review board approval; participants provided informed consent. A validation sample of CT colonographic examinations of 51 patients with 68 polyps (6-55 mm) was selected from a publicly available, HIPAA compliant, anonymized archive. No patients were excluded because of poor preparation or inadequate distension. Corresponding prone and supine polyp coordinates were recorded, and endoluminal surfaces were registered automatically by using a computer algorithm. Two observers independently scored three-dimensional endoluminal polyp registration success. Results were compared with those obtained by using the normalized distance along the colonic centerline (NDACC) method. Pairwise Wilcoxon signed rank tests were used to compare gross registration error and McNemar tests were used to compare polyp conspicuity. RESULTS: Registration was possible in all 51 patients, and 136 paired polyp coordinates were generated (68 polyps) to test the algorithm. Overall mean three-dimensional polyp registration error (mean +/- standard deviation, 19.9 mm +/- 20.4) was significantly less than that for the NDACC method (mean, 27.4 mm +/- 15.1; P = .001). Accuracy was unaffected by colonic segment (P = .76) or luminal collapse (P = .066). During endoluminal review by two observers (272 matching tasks, 68 polyps, prone to supine and supine to prone coordinates), 223 (82%) polyp matches were visible (120 degrees field of view) compared with just 129 (47%) when the NDACC method was used (P < .001). By using multiplanar visualization, 48 (70%) polyps were visible after scrolling +/- 15 mm in any multiplanar axis compared with 16 (24%) for NDACC (P < .001). CONCLUSION: Computer-assisted registration is more accurate than the NDACC method for mapping the endoluminal surface and matching the location of polyps in corresponding prone and supine CT colonographic acquisitions.
Outcome prediction in patients with glioblastoma by using imaging, clinical, and genomic biomarkers: focus on the nonenhancing component of the tumor
Jain, R.
Poisson, L. M.
Gutman, D.
Scarpace, L.
Hwang, S. N.
Holder, C. A.
Wintermark, M.
Rao, A.
Colen, R. R.
Kirby, J.
Freymann, J.
Jaffe, C. C.
Mikkelsen, T.
Flanders, A.
Radiology2014Journal Article, cited 86 times
Website
Radiogenomics
VASARI
BRAIN
Genomics
Glioblastoma
Magnetic Resonance Imaging (MRI)
PURPOSE: To correlate patient survival with morphologic imaging features and hemodynamic parameters obtained from the nonenhancing region (NER) of glioblastoma (GBM), along with clinical and genomic markers. MATERIALS AND METHODS: An institutional review board waiver was obtained for this HIPAA-compliant retrospective study. Forty-five patients with GBM underwent baseline imaging with contrast material-enhanced magnetic resonance (MR) imaging and dynamic susceptibility contrast-enhanced T2*-weighted perfusion MR imaging. Molecular and clinical predictors of survival were obtained. Single and multivariable models of overall survival (OS) and progression-free survival (PFS) were explored with Kaplan-Meier estimates, Cox regression, and random survival forests. RESULTS: Worsening OS (log-rank test, P = .0103) and PFS (log-rank test, P = .0223) were associated with increasing relative cerebral blood volume of NER (rCBVNER), which was higher with deep white matter involvement (t test, P = .0482) and poor NER margin definition (t test, P = .0147). NER crossing the midline was the only morphologic feature of NER associated with poor survival (log-rank test, P = .0125). Preoperative Karnofsky performance score (KPS) and resection extent (n = 30) were clinically significant OS predictors (log-rank test, P = .0176 and P = .0038, respectively). No genomic alterations were associated with survival, except patients with high rCBVNER and wild-type epidermal growth factor receptor (EGFR) mutation had significantly poor survival (log-rank test, P = .0306; area under the receiver operating characteristic curve = 0.62). Combining resection extent with rCBVNER marginally improved prognostic ability (permutation, P = .084). Random forest models of presurgical predictors indicated rCBVNER as the top predictor; also important were KPS, age at diagnosis, and NER crossing the midline. A multivariable model containing rCBVNER, age at diagnosis, and KPS can be used to group patients with more than 1 year of difference in observed median survival (0.49-1.79 years). CONCLUSION: Patients with high rCBVNER and NER crossing the midline and those with high rCBVNER and wild-type EGFR mutation showed poor survival. In multivariable survival models, however, rCBVNER provided unique prognostic information that went above and beyond the assessment of all NER imaging features, as well as clinical and genomic features.
Glioblastoma Multiforme: Exploratory Radiogenomic Analysis by Using Quantitative Image Features
Gevaert, Olivier
Mitchell, Lex A
Achrol, Achal S
Xu, Jiajing
Echegaray, Sebastian
Steinberg, Gary K
Cheshier, Samuel H
Napel, Sandy
Zaharchuk, Greg
Plevritis, Sylvia K
Radiology2014Journal Article, cited 151 times
Website
TCGA-GBM
Radiomics
Radiomic features
Radiogenomics
IDH mutation
Glioblastoma Multiforme (GBM)
VASARI
Computer Aided Detection (CADe)
Purpose: To derive quantitative image features from magnetic resonance (MR) images that characterize the radiographic phenotype of glioblastoma multiforme (GBM) lesions and to create radiogenomic maps associating these features with various molecular data. Materials and Methods: Clinical, molecular, and MR imaging data for GBMs in 55 patients were obtained from the Cancer Genome Atlas and the Cancer Imaging Archive after local ethics committee and institutional review board approval. Regions of interest (ROIs) corresponding to enhancing necrotic portions of tumor and peritumoral edema were drawn, and quantitative image features were derived from these ROIs. Robust quantitative image features were defined on the basis of an intraclass correlation coefficient of 0.6 for a digital algorithmic modification and a test-retest analysis. The robust features were visualized by using hierarchic clustering and were correlated with survival by using Cox proportional hazards modeling. Next, these robust image features were correlated with manual radiologist annotations from the Visually Accessible Rembrandt Images (VASARI) feature set and GBM molecular subgroups by using nonparametric statistical tests. A bioinformatic algorithm was used to create gene expression modules, defined as a set of coexpressed genes together with a multivariate model of cancer driver genes predictive of the module's expression pattern. Modules were correlated with robust image features by using the Spearman correlation test to create radiogenomic maps and to link robust image features with molecular pathways. Results: Eighteen image features passed the robustness analysis and were further analyzed for the three types of ROIs, for a total of 54 image features. Three enhancement features were significantly correlated with survival, 77 significant correlations were found between robust quantitative features and the VASARI feature set, and seven image features were correlated with molecular subgroups (P < .05 for all). A radiogenomics map was created to link image features with gene expression modules and allowed linkage of 56% (30 of 54) of the image features with biologic processes. Conclusion: Radiogenomic approaches in GBM have the potential to predict clinical and molecular characteristics of tumors noninvasively.
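The radiogenomic-map construction (pairwise Spearman correlations between image features and gene-expression modules) can be sketched as follows; the feature and module matrices are synthetic stand-ins, and no multiple-comparison correction is applied in this toy version.

```python
# Sketch: pairwise Spearman correlations between quantitative image features and
# gene-expression module scores, the basis of a radiogenomic map.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
image_feats = rng.normal(size=(55, 18))     # 55 patients x 18 robust image features
modules = rng.normal(size=(55, 10))         # 55 patients x 10 gene modules

rho = np.zeros((18, 10))
pval = np.zeros((18, 10))
for i in range(18):
    for j in range(10):
        rho[i, j], pval[i, j] = spearmanr(image_feats[:, i], modules[:, j])

sig = np.argwhere(pval < 0.05)              # entries of the radiogenomic map
print(f"{len(sig)} significant feature-module links (uncorrected p < .05)")
```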
Radiogenomic Analysis of Breast Cancer: Luminal B Molecular Subtype Is Associated with Enhancement Dynamics at MR Imaging
Mazurowski, Maciej A
Zhang, Jing
Grimm, Lars J
Yoon, Sora C
Silber, James I
Radiology2014Journal Article, cited 88 times
Website
TCGA-BRCA
Radiogenomics
Computer Aided Detection (CADe)
Classification
Purpose: To investigate associations between breast cancer molecular subtype and semiautomatically extracted magnetic resonance (MR) imaging features.; Materials and Methods: Imaging and genomic data from the Cancer Genome Atlas and the Cancer Imaging Archive for 48 patients with breast cancer from four institutions in the United States were used in this institutional review board approval-exempt study. Computer vision algorithms were applied to extract 23 imaging features from lesions indicated by a breast radiologist on MR images. Morphologic, textural, and dynamic features were extracted. Molecular subtype was determined on the basis of genomic analysis. Associations between the imaging features and molecular subtype were evaluated by using logistic regression and likelihood ratio tests. The analysis controlled for the age of the patients, their menopausal status, and the orientation of the MR images (sagittal vs axial).; Results: There is an association (P = .0015) between the luminal B subtype and a dynamic contrast material-enhancement feature that quantifies the relationship between lesion enhancement and background parenchymal enhancement. Cancers with a higher ratio of lesion enhancement rate to background parenchymal enhancement rate are more likely to be luminal B subtype.; Conclusion: The luminal B subtype of breast cancer is associated with MR imaging features that relate the enhancement dynamics of the tumor and the background parenchyma. (C) RSNA, 2014
Glioblastoma: Imaging Genomic Mapping Reveals Sex-specific Oncogenic Associations of Cell Death
Colen, Rivka R
Wang, Jixin
Singh, Sanjay K
Gutman, David A
Zinn, Pascal O
Radiology2014Journal Article, cited 36 times
Website
TCGA-GBM
Radiogenomics
PURPOSE: To identify the molecular profiles of cell death as defined by necrosis volumes at magnetic resonance (MR) imaging and uncover sex-specific molecular signatures potentially driving oncogenesis and cell death in glioblastoma (GBM). MATERIALS AND METHODS: This retrospective study was HIPAA compliant and had institutional review board approval, with waiver of the need to obtain informed consent. The molecular profiles for 99 patients (30 female patients, 69 male patients) were identified from the Cancer Genome Atlas, and quantitative MR imaging data were obtained from the Cancer Imaging Archive. Volumes of necrosis at MR imaging were extracted. Differential gene expression profiles were obtained in those patients (including male and female patients separately) with high versus low MR imaging volumes of tumor necrosis. Ingenuity Pathway Analysis was used for messenger RNA-microRNA interaction analysis. A histopathologic data set (n = 368; 144 female patients, 224 male patients) was used to validate the MR imaging findings by assessing the amount of cell death. A connectivity map was used to identify therapeutic agents potentially targeting sex-specific cell death in GBM. RESULTS: Female patients showed significantly lower volumes of necrosis at MR imaging than male patients (6821 vs 11 050 mm(3), P = .03). Female patients, unlike male patients, with high volumes of necrosis at imaging had significantly shorter survival (6.5 vs 14.5 months, P = .01). Transcription factor analysis suggested that cell death in female patients with GBM is associated with MYC, while that in male patients is associated with TP53 activity. Additionally, a group of therapeutic agents that can potentially be tested to target cell death in a sex-specific manner was identified. CONCLUSION: The results of this study suggest that cell death in GBM may be driven by sex-specific molecular pathways.
Prognostic Imaging Biomarkers in Glioblastoma: Development and Independent Validation on the Basis of Multiregion and Quantitative Analysis of MR Images
Cui, Yi
Tha, Khin Khin
Terasaka, Shunsuke
Yamaguchi, Shigeru
Wang, Jeff
Kudo, Kohsuke
Xing, Lei
Shirato, Hiroki
Li, Ruijiang
Radiology2015Journal Article, cited 45 times
Website
TCGA-GBM
Computer Aided Diagnosis (CADx)
Segmentation
PURPOSE: To develop and independently validate prognostic imaging biomarkers for predicting survival in patients with glioblastoma on the basis of multiregion quantitative image analysis. MATERIALS AND METHODS: This retrospective study was approved by the local institutional review board, and informed consent was waived. A total of 79 patients from two independent cohorts were included. The discovery and validation cohorts consisted of 46 and 33 patients with glioblastoma from the Cancer Imaging Archive (TCIA) and the local institution, respectively. Preoperative T1-weighted contrast material-enhanced and T2-weighted fluid-attenuation inversion recovery magnetic resonance (MR) images were analyzed. For each patient, we semiautomatically delineated the tumor and performed automated intratumor segmentation, dividing the tumor into spatially distinct subregions that demonstrate coherent intensity patterns across multiparametric MR imaging. Within each subregion and for the entire tumor, we extracted quantitative imaging features, including those that fully capture the differential contrast of multimodality MR imaging. A multivariate sparse Cox regression model was trained by using TCIA data and tested on the validation cohort. RESULTS: The optimal prognostic model identified five imaging biomarkers that quantified tumor surface area and intensity distributions of the tumor and its subregions. In the validation cohort, our prognostic model achieved a concordance index of 0.67 and significant stratification of overall survival by using the log-rank test (P = .018), which outperformed conventional prognostic factors, such as age (concordance index, 0.57; P = .389) and tumor volume (concordance index, 0.59; P = .409). CONCLUSION: The multiregion analysis presented here establishes a general strategy to effectively characterize intratumor heterogeneity manifested at multimodality imaging and has the potential to reveal useful prognostic imaging biomarkers in glioblastoma.
Radiomics: Images are more than pictures, they are data
Gillies, Robert J
Kinahan, Paul E
Hricak, Hedvig
Radiology2015Journal Article, cited 694 times
Website
Radiomics
Imaging features
BRAIN
LUNG
PROSTATE
BLADDER
BREAST
In the past decade, the field of medical image analysis has grown exponentially, with an increased number of pattern recognition tools and an increase in data set sizes. These advances have facilitated the development of processes for high-throughput extraction of quantitative features that result in the conversion of images into mineable data and the subsequent analysis of these data for decision support; this practice is termed radiomics. This is in contrast to the traditional practice of treating medical images as pictures intended solely for visual interpretation. Radiomic data contain first-, second-, and higher-order statistics. These data are combined with other patient data and are mined with sophisticated bioinformatics tools to develop models that may potentially improve diagnostic, prognostic, and predictive accuracy. Because radiomics analyses are intended to be conducted with standard of care images, it is conceivable that conversion of digital images to mineable data will eventually become routine practice. This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer.
Incremental Prognostic Value of ADC Histogram Analysis over MGMT Promoter Methylation Status in Patients with Glioblastoma
Choi, Yoon Seong
Ahn, Sung Soo
Kim, Dong Wook
Chang, Jong Hee
Kang, Seok-Gu
Kim, Eui Hyun
Kim, Se Hoon
Rim, Tyler Hyungtaek
Lee, Seung-Koo
Radiology2016Journal Article, cited 18 times
Website
Radiogenomics
Glioblastoma Multiforme (GBM)
Purpose: To investigate the incremental prognostic value of apparent diffusion coefficient (ADC) histogram analysis over oxygen 6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status in patients with glioblastoma and the correlation between ADC parameters and MGMT status. Materials and Methods: This retrospective study was approved by the institutional review board, and informed consent was waived. A total of 112 patients with glioblastoma were divided into training (74 patients) and test (38 patients) sets. Overall survival (OS) and progression-free survival (PFS) were analyzed with ADC parameters, MGMT status, and other clinical factors. Multivariate Cox regression models with and without ADC parameters were constructed. Model performance was assessed with c index and receiver operating characteristic curve analyses for 12- and 16-month OS and 12-month PFS in the training set and validated in the test set. ADC parameters were compared according to MGMT status for the entire cohort. Results: By using ADC parameters, the c indices and diagnostic accuracies for 12- and 16-month OS and 12-month PFS in the models showed significant improvement, with the exception of c indices in the models for PFS (P < .05 for all) in the training set. In the test set, the diagnostic accuracy was improved by using ADC parameters and was significant, with the 25th and 50th percentiles of ADC for 16-month OS (P = .040 and P = .047) and the 25th percentile of ADC for 12-month PFS (P = .026). No significant correlation was found between ADC parameters and MGMT status. Conclusion: ADC histogram analysis had incremental prognostic value over MGMT promoter methylation status in patients with glioblastoma. (c) RSNA, 2016. Online supplemental material is available for this article.
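For readers who want to reproduce histogram features of this kind, here is a minimal sketch, assuming a NumPy ADC map and a boolean tumor mask with hypothetical file names.

```python
# ADC histogram descriptors over a tumor mask: percentiles plus shape statistics.
import numpy as np
from scipy import stats

adc = np.load("adc_map.npy")          # ADC map, e.g., in 10^-6 mm^2/s
mask = np.load("tumor_mask.npy").astype(bool)

voxels = adc[mask]
features = {
    "adc_p25": np.percentile(voxels, 25),   # 25th percentile, as reported above
    "adc_p50": np.percentile(voxels, 50),   # 50th percentile (median)
    "adc_mean": voxels.mean(),
    "adc_skewness": stats.skew(voxels),
    "adc_kurtosis": stats.kurtosis(voxels),
}
print(features)
```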
MR Imaging Radiomics Signatures for Predicting the Risk of Breast Cancer Recurrence as Given by Research Versions of MammaPrint, Oncotype DX, and PAM50 Gene Assays
Li, Hui
Zhu, Yitan
Burnside, Elizabeth S
Drukker, Karen
Hoadley, Katherine A
Fan, Cheng
Conzen, Suzanne D
Whitman, Gary J
Sutton, Elizabeth J
Net, Jose M
Radiology, 2016. Journal Article, cited 103 times
Website
TCGA-Breast-Radiogenomics
radiomics
radiogenomics
Lung cancer deaths in the National Lung Screening Trial attributed to nonsolid nodules
Yip, Rowena
Yankelevitz, David F
Hu, Minxia
Li, Kunwei
Xu, Dong Ming
Jirapatnakul, Artit
Henschke, Claudia I
Radiology, 2016. Journal Article, cited 0 times
NLST
Non-solid nodules
lung
Cancer Screening
Radiogenomics of High-Grade Serous Ovarian Cancer: Multireader Multi-Institutional Study from the Cancer Genome Atlas Ovarian Cancer Imaging Research Group
Vargas, Hebert Alberto
Huang, Erich P
Lakhman, Yulia
Ippolito, Joseph E
Bhosale, Priya
Mellnick, Vincent
Shinagare, Atul B
Anello, Maria
Kirby, Justin
Fevrier-Sullivan, Brenda
Radiology, 2017. Journal Article, cited 3 times
Website
TCGA-OV
high-grade serous ovarian cancer
recurrent focal DNA
TP53 mutations
transcoelomic spread
transcriptomic profiles
Heterogeneous Enhancement Patterns of Tumor-adjacent Parenchyma at MR Imaging Are Associated with Dysregulated Signaling Pathways and Poor Survival in Breast Cancer
Wu, Jia
Li, Bailiang
Sun, Xiaoli
Cao, Guohong
Rubin, Daniel L
Napel, Sandy
Ikeda, Debra M
Kurian, Allison W
Li, Ruijiang
Radiology, 2017. Journal Article, cited 9 times
Website
TCGA-BRCA
Synergy of Sex Differences in Visceral Fat Measured with CT and Tumor Metabolism Helps Predict Overall Survival in Patients with Renal Cell Carcinoma
Nguyen, Gerard K
Mellnick, Vincent M
Yim, Aldrin Kay-Yuen
Salter, Amber
Ippolito, Joseph E
Radiology, 2018. Journal Article, cited 1 time
Website
TCGA-KIRC
CT
gene expression
KIDNEY
Precision Medicine and Radiogenomics in Breast Cancer: New Approaches toward Diagnosis and Treatment
Pinker, Katja
Chin, Joanne
Melsaether, Amy N
Morris, Elizabeth A
Moy, Linda
Radiology, 2018. Journal Article, cited 7 times
Website
breast cancer
TCGA
radiogenomics
Classification of CT pulmonary opacities as perifissural nodules: reader variability
Schreuder, Anton
van Ginneken, Bram
Scholten, Ernst T
Jacobs, Colin
Prokop, Mathias
Sverzellati, Nicola
Desai, Sujal R
Devaraj, Anand
Schaefer-Prokop, Cornelia M
Radiology, 2018. Journal Article, cited 3 times
Website
NLST
lung
LDCT
Cancer Screening
Bone Marrow and Tumor Radiomics at (18)F-FDG PET/CT: Impact on Outcome Prediction in Non-Small Cell Lung Cancer
Mattonen, Sarah A
Davidzon, Guido A
Benson, Jalen
Leung, Ann N C
Vasanawala, Minal
Horng, George
Shrager, Joseph B
Napel, Sandy
Nair, Viswam S.
Radiology, 2019. Journal Article, cited 0 times
NSCLC Radiogenomics
Non-Small Cell Lung Cancer (NSCLC)
Radiomics
Segmentation
Least absolute shrinkage and selection operator (LASSO)
MATLAB
Classification
Background Primary tumor maximum standardized uptake value is a prognostic marker for non-small cell lung cancer. In the setting of malignancy, bone marrow activity from fluorine 18-fluorodeoxyglucose (FDG) PET may be informative for clinical risk stratification. Purpose To determine whether integrating FDG PET radiomic features of the primary tumor, tumor penumbra, and bone marrow identifies lung cancer disease-free survival more accurately than clinical features alone. Materials and Methods Patients were retrospectively analyzed from two distinct cohorts collected between 2008 and 2016. Each tumor, its surrounding penumbra, and bone marrow from the L3-L5 vertebral bodies was contoured on pretreatment FDG PET/CT images. There were 156 bone marrow and 512 tumor and penumbra radiomic features computed from the PET series. Randomized sparse Cox regression by least absolute shrinkage and selection operator identified features that predicted disease-free survival in the training cohort. Cox proportional hazards models were built and locked in the training cohort, then evaluated in an independent cohort for temporal validation. Results There were 227 patients analyzed; 136 for training (mean age, 69 years +/- 9 [standard deviation]; 101 men) and 91 for temporal validation (mean age, 72 years +/- 10; 91 men). The top clinical model included stage; adding tumor region features alone improved outcome prediction (log likelihood, -158 vs -152; P = .007). Adding bone marrow features continued to improve performance (log likelihood, -158 vs -145; P = .001). The top model integrated stage, two bone marrow texture features, one tumor with penumbra texture feature, and two penumbra texture features (concordance, 0.78; 95% confidence interval: 0.70, 0.85; P < .001). This fully integrated model was a predictor of poor outcome in the independent cohort (concordance, 0.72; 95% confidence interval: 0.64, 0.80; P < .001) and a binary score stratified patients into high and low risk of poor outcome (P < .001). Conclusion A model that includes pretreatment fluorine 18-fluorodeoxyglucose PET texture features from the primary tumor, tumor penumbra, and bone marrow predicts disease-free survival of patients with non-small cell lung cancer more accurately than clinical features alone. (c) RSNA, 2019 Online supplemental material is available for this article.
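One way to derive a penumbra region like the one analyzed above is to dilate the tumor mask and subtract the tumor; this is a plausible reading, not the authors' exact definition, and the rim width and file names are assumptions.

```python
# Build a peritumoral "penumbra" rim around a 3D tumor mask.
import numpy as np
from scipy import ndimage

tumor = np.load("tumor_mask.npy").astype(bool)       # hypothetical PET-space mask
pet = np.load("pet_suv.npy")                         # SUV volume aligned to mask

dilated = ndimage.binary_dilation(tumor, iterations=3)  # ~3-voxel rim width
penumbra = dilated & ~tumor

print("tumor SUVmax:", pet[tumor].max())
print("penumbra SUVmean:", pet[penumbra].mean())
```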
A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop
Langlotz, Curtis P
Allen, Bibb
Erickson, Bradley J
Kalpathy-Cramer, Jayashree
Bigelow, Keith
Cook, Tessa S
Flanders, Adam E
Lungren, Matthew P
Mendelson, David S
Rudie, Jeffrey D
Wang, Ge
Kandarpa, Krishna
Radiology, 2019. Journal Article, cited 1 time
Website
Radiomics
National Lung Screening Trial (NLST)
Imaging research laboratories are rapidly creating machine learning systems that achieve expert human performance using open-source methods and tools. These artificial intelligence systems are being developed to improve medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection, computer-aided classification, and radiogenomics. In August 2018, a meeting was held in Bethesda, Maryland, at the National Institutes of Health to discuss the current state of the art and knowledge gaps and to develop a roadmap for future research initiatives. Key research priorities include: (1) new image reconstruction methods that efficiently produce images suitable for human interpretation from source data; (2) automated image labeling and annotation methods, including information extraction from the imaging report, electronic phenotyping, and prospective structured image reporting; (3) new machine learning methods for clinical imaging data, such as tailored, pretrained model architectures, and federated machine learning methods; (4) machine learning methods that can explain the advice they provide to human users (so-called explainable artificial intelligence); and (5) validated methods for image de-identification and data sharing to facilitate wide availability of clinical imaging data sets. This research roadmap is intended to identify and prioritize these needs for academic research laboratories, funding agencies, professional societies, and industry.
MRI and CT Identify Isocitrate Dehydrogenase (IDH)-Mutant Lower-Grade Gliomas Misclassified to 1p/19q Codeletion Status with Fluorescence in Situ Hybridization
Patel, Sohil H
Batchala, Prem P
Mrachek, E Kelly S
Lopes, Maria-Beatriz S
Schiff, David
Fadul, Camilo E
Patrie, James T
Jain, Rajan
Druzgal, T Jason
Williams, Eli S
Radiology, 2020. Journal Article, cited 0 times
TCGA-LGG
Radiogenomics
Radiomics
Background: Fluorescence in situ hybridization (FISH) is a standard method for 1p/19q codeletion testing in diffuse gliomas but occasionally renders erroneous results. Purpose: To determine whether MRI/CT analysis identifies isocitrate dehydrogenase (IDH)-mutant gliomas misassigned to 1p/19q codeletion status with FISH. Materials and Methods: Data in patients with IDH-mutant lower-grade gliomas (World Health Organization grade II/III) and 1p/19q codeletion status determined with FISH that were accrued from January 1, 2010 to October 1, 2017, were included in this retrospective study. Two neuroradiologist readers analyzed the pre-resection MRI findings (and CT findings, when available) to predict 1p/19q status (codeleted or noncodeleted) and provided a prediction confidence score (1 = low, 2 = moderate, 3 = high). Percentage concordance between the consensus neuroradiologist 1p/19q prediction and the FISH result was calculated. For gliomas where (a) the consensus neuroradiologist 1p/19q prediction differed from the FISH result and (b) the consensus neuroradiologist confidence score was 2 or greater, further 1p/19q testing was performed with chromosomal microarray analysis (CMA). Nine control specimens were randomly chosen from the remaining study sample for CMA. Percentage concordance between FISH and CMA among the CMA-tested cases was calculated. Results: A total of 112 patients (median age, 38 years [interquartile range, 31-51 years]; 57 men) were evaluated (112 gliomas). Percentage concordance between the consensus neuroradiologist 1p/19q prediction and the FISH result was 84.8% (95 of 112; 95% confidence interval: 76.8%, 90.9%). Among the 17 neuroradiologist-FISH discordances, there were nine gliomas associated with a consensus neuroradiologist confidence score of 2 or greater. In six (66.7%) of these nine gliomas, the 1p/19q codeletion status as determined with CMA disagreed with the FISH result and agreed with the consensus neuroradiologist prediction. For the nine control specimens, there was 100% agreement between CMA and FISH for 1p/19q determination. Conclusion: MRI and CT analysis can identify diffuse gliomas misassigned to 1p/19q codeletion status with fluorescence in situ hybridization (FISH). Further molecular testing should be considered for gliomas with discordant neuroimaging and FISH results.
The image biomarker standardization initiative: standardized quantitative radiomics for high-throughput image-based phenotyping
Zwanenburg, Alex
Vallieres, Martin
Abdalah, Mahmoud A
Aerts, Hugo J W L
Andrearczyk, Vincent
Apte, Aditya
Ashrafinia, Saeed
Bakas, Spyridon
Beukinga, Roelof J
Boellaard, Ronald
Bogowicz, Marta
Boldrini, Luca
Buvat, Irene
Cook, Gary J R
Davatzikos, Christos
Depeursinge, Adrien
Desseroit, Marie-Charlotte
Dinapoli, Nicola
Dinh, Cuong Viet
Echegaray, Sebastian
El Naqa, Issam
Fedorov, Andriy Y
Gatta, Roberto
Gillies, Robert J
Goh, Vicky
Gotz, Michael
Guckenberger, Matthias
Ha, Sung Min
Hatt, Mathieu
Isensee, Fabian
Lambin, Philippe
Leger, Stefan
Leijenaar, Ralph T H
Lenkowicz, Jacopo
Lippert, Fiona
Losnegard, Are
Maier-Hein, Klaus H
Morin, Olivier
Muller, Henning
Napel, Sandy
Nioche, Christophe
Orlhac, Fanny
Pati, Sarthak
Pfaehler, Elisabeth A G
Rahmim, Arman
Rao, Arvind U K
Scherer, Jonas
Siddique, Muhammad Musib
Sijtsema, Nanna M
Socarras Fernandez, Jairo
Spezi, Emiliano
Steenbakkers, Roel J H M
Tanadini-Lang, Stephanie
Thorwarth, Daniela
Troost, Esther G C
Upadhaya, Taman
Valentini, Vincenzo
van Dijk, Lisanne V
van Griethuysen, Joost
van Velden, Floris H P
Whybra, Philip
Richter, Christian
Lock, Steffen
Radiology, 2020. Journal Article, cited 247 times
Website
Soft-tissue-Sarcoma
Radiomics
Machine Learning-based Differentiation of Benign and Premalignant Colorectal Polyps Detected with CT Colonography in an Asymptomatic Screening Population: A Proof-of-Concept Study
Grosu, S.
Wesp, P.
Graser, A.
Maurus, S.
Schulz, C.
Knosel, T.
Cyran, C. C.
Ricke, J.
Ingrisch, M.
Kazmierczak, P. M.
Radiology, 2021. Journal Article, cited 0 times
Website
CT COLONOGRAPHY
Colon
Machine Learning
Background: CT colonography does not enable definite differentiation between benign and premalignant colorectal polyps. Purpose: To perform machine learning-based differentiation of benign and premalignant colorectal polyps detected with CT colonography in an average-risk asymptomatic colorectal cancer screening sample, with external validation, using radiomics. Materials and Methods: In this secondary analysis of a prospective trial, colorectal polyps of all size categories and morphologies were manually segmented on CT colonographic images and were classified as benign (hyperplastic polyp or regular mucosa) or premalignant (adenoma) according to the histopathologic reference standard. Quantitative image features characterizing shape (n = 14), gray level histogram statistics (n = 18), and image texture (n = 68) were extracted from segmentations after applying 22 image filters, resulting in 1906 feature-filter combinations. Based on these features, a random forest classification algorithm was trained to predict the individual polyp character. Diagnostic performance was validated in an external test set. Results: The random forest model was fitted using a training set consisting of 107 colorectal polyps in 63 patients (mean age, 63 years +/- 8 [standard deviation]; 40 men) comprising 169 segmentations on CT colonographic images. The external test set included 77 polyps in 59 patients comprising 118 segmentations. Random forest analysis yielded an area under the receiver operating characteristic curve of 0.91 (95% CI: 0.85, 0.96), a sensitivity of 82% (65 of 79) (95% CI: 74%, 91%), and a specificity of 85% (33 of 39) (95% CI: 72%, 95%) in the external test set. In two subgroup analyses of the external test set, the area under the receiver operating characteristic curve was 0.87 in the size category of 6-9 mm and 0.90 in the size category of 10 mm or larger. The most important image feature for decision making (relative importance of 3.7%) was one quantifying first-order gray level histogram statistics. Conclusion: In this proof-of-concept study, machine learning-based image analysis enabled noninvasive differentiation of benign and premalignant colorectal polyps with CT colonography. (c) RSNA, 2021. Online supplemental material is available for this article.
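A minimal sketch of this random-forest workflow, assuming precomputed radiomic feature arrays with hypothetical file names; hyperparameters are illustrative, not those of the study.

```python
# Train on one cohort, evaluate on an external test set, and rank features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

X_train = np.load("train_features.npy")    # (n_segmentations, n_features)
y_train = np.load("train_labels.npy")      # 1 = premalignant (adenoma), 0 = benign
X_test = np.load("external_features.npy")
y_test = np.load("external_labels.npy")

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]
print("external AUC:", roc_auc_score(y_test, probs))

# Relative importances, as used above to identify the top feature-filter combinations.
top = np.argsort(clf.feature_importances_)[::-1][:10]
print("top feature indices:", top)
```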
Deep Learning for Prediction of N2 Metastasis and Survival for Clinical Stage I Non-Small Cell Lung Cancer
Zhong, Y.
She, Y.
Deng, J.
Chen, S.
Wang, T.
Yang, M.
Ma, M.
Song, Y.
Qi, H.
Wang, Y.
Shi, J.
Wu, C.
Xie, D.
Chen, C.
Multi-omics Classifier for Pulmonary Nodules (MISSION) Collaborative Group
Radiology, 2022. Journal Article, cited 0 times
NSCLC Radiogenomics
Biomarkers, Tumor/analysis
Carcinoma, Non-Small-Cell Lung/genetics/mortality/*pathology
Cohort Studies
*Deep Learning
Female
Humans
Lung Neoplasms/genetics/mortality/*pathology
Male
Middle Aged
Neoplasm Staging
Neoplasms, Second Primary/*diagnosis
Predictive Value of Tests
Prognosis
Prospective Studies
Reproducibility of Results
Retrospective Studies
Risk Assessment/methods
Survival Analysis
Background Preoperative mediastinal staging is crucial for the optimal management of clinical stage I non-small cell lung cancer (NSCLC). Purpose To develop a deep learning signature for N2 metastasis prediction and prognosis stratification in clinical stage I NSCLC. Materials and Methods In this retrospective study conducted from May 2020 to October 2020 in a population with clinical stage I NSCLC, an internal cohort was adopted to establish a deep learning signature. Subsequently, the predictive efficacy and biologic basis of the proposed signature were investigated in an external cohort. A multicenter diagnostic trial (registration number: ChiCTR2000041310) was also performed to evaluate its clinical utility. Finally, on the basis of the N2 risk scores, the instructive significance of the signature in prognostic stratification was explored. The diagnostic efficiency was quantified with the area under the receiver operating characteristic curve (AUC), and the survival outcomes were assessed using the Cox proportional hazards model. Results A total of 3096 patients (mean age +/- standard deviation, 60 years +/- 9; 1703 men) were included in the study. The proposed signature achieved AUCs of 0.82, 0.81, and 0.81 in an internal test set (n = 266), external test cohort (n = 133), and prospective test cohort (n = 300), respectively. In addition, higher deep learning scores were associated with a lower frequency of EGFR mutation (P = .04), higher rate of ALK fusion (P = .02), and more activation of pathways of tumor proliferation (P < .001). Furthermore, in the internal test set and external cohort, higher deep learning scores were predictive of poorer overall survival (adjusted hazard ratio, 2.9; 95% CI: 1.2, 6.9; P = .02) and recurrence-free survival (adjusted hazard ratio, 3.2; 95% CI: 1.4, 7.4; P = .007). Conclusion The deep learning signature could accurately predict N2 disease and stratify prognosis in clinical stage I non-small cell lung cancer. (c) RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Park and Lee in this issue.
Informatics in Radiology: An Open-Source and Open-Access Cancer Biomedical Informatics Grid Annotation and Image Markup Template Builder
Mongkolwat, Pattanasak
Channin, David S
Kleper, Vladimir
Rubin, Daniel L
Radiographics, 2012. Journal Article, cited 15 times
Website
Interoperability
Annotation
metadata
In a routine clinical environment or clinical trial, a case report form or structured reporting template can be used to quickly generate uniform and consistent reports. Annotation and image markup (AIM), a project supported by the National Cancer Institute's cancer biomedical informatics grid, can be used to collect information for a case report form or structured reporting template. AIM is designed to store, in a single information source, (a) the description of pixel data with use of markups or graphical drawings placed on the image, (b) calculation results (which may or may not be directly related to the markups), and (c) supplemental information. To facilitate the creation of AIM annotations with data entry templates, an AIM template schema and an open-source template creation application were developed to assist clinicians, image researchers, and designers of clinical trials to quickly create a set of data collection items, thereby ultimately making image information more readily accessible.
Generalizability of Machine Learning Models: Quantitative Evaluation of Three Methodological Pitfalls
Maleki, Farhad
Ovens, Katie
Gupta, Rajiv
Reinhold, Caroline
Spatz, Alan
Forghani, Reza
2022. Journal Article, cited 0 times
LCTSC
Purpose: To investigate the impact of the following three methodological pitfalls on model generalizability: (a) violation of the independence assumption, (b) model evaluation with an inappropriate performance indicator or baseline for comparison, and (c) batch effect.
Materials and Methods: The authors used retrospective CT, histopathologic analysis, and radiography datasets to develop machine learning models with and without the three methodological pitfalls to quantitatively illustrate their effect on model performance and generalizability. F1 score was used to measure performance, and differences in performance between models developed with and without errors were assessed using the Wilcoxon rank sum test when applicable.
Results: Violation of the independence assumption by applying oversampling, feature selection, and data augmentation before splitting data into training, validation, and test sets seemingly improved model F1 scores by 71.2% for predicting local recurrence and 5.0% for predicting 3-year overall survival in head and neck cancer and by 46.0% for distinguishing histopathologic patterns in lung cancer. Randomly distributing data points for a patient across datasets superficially improved the F1 score by 21.8%. High model performance metrics did not indicate high-quality lung segmentation. In the presence of a batch effect, a model built for pneumonia detection had an F1 score of 98.7% but correctly classified only 3.86% of samples from a new dataset of healthy patients.
Conclusion: Machine learning models developed with these methodological pitfalls, which are undetectable during internal evaluation, produce inaccurate predictions; thus, understanding and avoiding these pitfalls is necessary for developing generalizable models. Keywords: Random Forest, Diagnosis, Prognosis, Convolutional Neural Network (CNN), Medical Image Analysis, Generalizability, Machine Learning, Deep Learning, Model Evaluation. Supplemental material is available for this article. Published under a CC BY 4.0 license.
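The first pitfall has a simple remedy: split the data (by patient) before any resampling or feature selection. A minimal sketch of the leakage-free ordering, with hypothetical arrays:

```python
# Correct ordering: group-aware split first, then fit selection on training only.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

X, y = np.load("features.npy"), np.load("labels.npy")
groups = np.load("patient_ids.npy")   # several samples may share one patient

# Split by patient so no patient appears in both sets (independence assumption).
train_idx, test_idx = next(GroupShuffleSplit(test_size=0.3, random_state=0)
                           .split(X, y, groups=groups))

# Feature selection fitted on the training split only, then applied to the test split.
selector = SelectKBest(f_classif, k=20).fit(X[train_idx], y[train_idx])
clf = LogisticRegression(max_iter=1000).fit(selector.transform(X[train_idx]),
                                            y[train_idx])
pred = clf.predict(selector.transform(X[test_idx]))
print("F1:", f1_score(y[test_idx], pred))
```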
The University of California San Francisco Preoperative Diffuse Glioma MRI Dataset
Purpose To evaluate the performance of an automated deep learning method in detecting ascites and subsequently quantifying its volume in patients with liver cirrhosis and ovarian cancer. Materials and Methods This retrospective study included contrast-enhanced and noncontrast abdominal-pelvic CT scans of patients with cirrhotic ascites and patients with ovarian cancer from two institutions, National Institutes of Health (NIH) and University of Wisconsin (UofW). The model, trained on The Cancer Genome Atlas Ovarian Cancer dataset (mean age, 60 years +/- 11 [SD]; 143 female), was tested on two internal (NIH-LC and NIH-OV) and one external dataset (UofW-LC). Its performance was measured by the Dice coefficient, standard deviations, and 95% confidence intervals, focusing on ascites volume in the peritoneal cavity. Results On NIH-LC (25 patients; mean age, 59 years +/- 14; 14 male) and NIH-OV (166 patients; mean age, 65 years +/- 9; all female), the model achieved Dice scores of 85.5% +/- 6.1% (CI: 83.1%-87.8%) and 82.6% +/- 15.3% (CI: 76.4%-88.7%), with median volume estimation errors of 19.6% (IQR: 13.2%-29.0%) and 5.3% (IQR: 2.4%- 9.7%), respectively. On UofW-LC (124 patients; mean age, 46 years +/- 12; 73 female), the model had a Dice score of 83.0% +/- 10.7% (CI: 79.8%-86.3%) and median volume estimation error of 9.7% (IQR: 4.5%-15.1%). The model showed strong agreement with expert assessments, with r(2) values of 0.79, 0.98, and 0.97 across the test sets. Conclusion The proposed deep learning method performed well in segmenting and quantifying the volume of ascites in concordance with expert radiologist assessments. (c)RSNA, 2024.
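The two headline metrics above, Dice overlap and percentage volume error, are easy to compute from masks. A minimal sketch with hypothetical file names and voxel size:

```python
# Dice coefficient and percentage volume error between predicted and reference masks.
import numpy as np

pred = np.load("pred_ascites_mask.npy").astype(bool)
ref = np.load("reference_ascites_mask.npy").astype(bool)
voxel_ml = 0.001   # hypothetical voxel volume in mL

dice = 2 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())
vol_pred = pred.sum() * voxel_ml
vol_ref = ref.sum() * voxel_ml
vol_error = abs(vol_pred - vol_ref) / vol_ref * 100

print(f"Dice: {dice:.3f}, volume error: {vol_error:.1f}%")
```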
Three-Plane–assembled Deep Learning Segmentation of Gliomas
Wu, Shaocheng
Li, Hongyang
Quang, Daniel
Guan, Yuanfang
Radiology: Artificial Intelligence, 2020. Journal Article, cited 0 times
Website
Algorithm Development
BraTS
BRAIN
U-Net
An accurate and fast deep learning approach developed for automatic segmentation of brain glioma on multimodal MRI scans achieved Sørensen–Dice scores of 0.80, 0.83, and 0.91 for enhancing tumor, tumor core, and whole tumor, respectively. Purpose: To design a computational method for automatic brain glioma segmentation of multimodal MRI scans with high efficiency and accuracy. Materials and Methods: The 2018 Multimodal Brain Tumor Segmentation Challenge (BraTS) dataset was used in this study, consisting of routine clinically acquired preoperative multimodal MRI scans. Three subregions of glioma—the necrotic and nonenhancing tumor core, the peritumoral edema, and the contrast-enhancing tumor—were manually labeled by experienced radiologists. Two-dimensional U-Net models were built using a three-plane–assembled approach to segment three subregions individually (three-region model) or to segment only the whole tumor (WT) region (WT-only model). The term three-plane–assembled means that coronal and sagittal images were generated by reformatting the original axial images. The model performance for each case was evaluated in three classes: enhancing tumor (ET), tumor core (TC), and WT. Results: On the internal unseen testing dataset split from the 2018 BraTS training dataset, the proposed models achieved mean Sørensen–Dice scores of 0.80, 0.84, and 0.91, respectively, for ET, TC, and WT. On the BraTS validation dataset, the proposed models achieved mean 95% Hausdorff distances of 3.1 mm, 7.0 mm, and 5.0 mm, respectively, for ET, TC, and WT and mean Sørensen–Dice scores of 0.80, 0.83, and 0.91, respectively, for ET, TC, and WT. On the BraTS testing dataset, the proposed models ranked fourth out of 61 teams. The source code is available at https://github.com/GuanLab/Brain_Glioma. Conclusion: This deep learning method consistently segmented subregions of brain glioma with high accuracy, efficiency, reliability, and generalization ability on screening images from a large population, and it can be efficiently implemented in clinical practice to assist neuro-oncologists or radiologists. Supplemental material is available for this article.
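The three-plane-assembled idea can be sketched directly with axis reordering: run a 2D model along each anatomic plane and average the reassembled probability volumes. `model` is a placeholder for a trained 2D U-Net, and mean averaging is an assumption about the assembly rule.

```python
# Reformat an axial volume into the three planes, predict per slice, and assemble.
import numpy as np

def segment_plane(volume, model, axis):
    """Run a 2D model slice-by-slice along `axis`; return a probability volume."""
    slices = np.moveaxis(volume, axis, 0)           # put the chosen axis first
    probs = np.stack([model(s) for s in slices])    # 2D prediction per slice
    return np.moveaxis(probs, 0, axis)              # restore original order

def three_plane_segment(volume, model):
    axial = segment_plane(volume, model, axis=0)
    coronal = segment_plane(volume, model, axis=1)
    sagittal = segment_plane(volume, model, axis=2)
    return (axial + coronal + sagittal) / 3.0       # assemble by averaging

# Usage: mask = three_plane_segment(mri_volume, trained_unet) > 0.5
```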
Tumor Habitat–derived Radiomic Features at Pretreatment MRI That Are Prognostic for Progression-free Survival in Glioblastoma Are Associated with Key Morphologic Attributes at Histopathologic Examination: A Feasibility Study
Breast Multiparametric MRI for Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer: The BMMR2 Challenge
Li, W.
Partridge, S. C.
Newitt, D. C.
Steingrimsson, J.
Marques, H. S.
Bolan, P. J.
Hirano, M.
Bearce, B. A.
Kalpathy-Cramer, J.
Boss, M. A.
Teng, X.
Zhang, J.
Cai, J.
Kontos, D.
Cohen, E. A.
Mankowski, W. C.
Liu, M.
Ha, R.
Pellicer-Valero, O. J.
Maier-Hein, K.
Rabinovici-Cohen, S.
Tlusty, T.
Ozery-Flato, M.
Parekh, V. S.
Jacobs, M. A.
Yan, R.
Sung, K.
Kazerouni, A. S.
DiCarlo, J. C.
Yankeelov, T. E.
Chenevert, T. L.
Hylton, N. M.
Radiol Imaging Cancer, 2024. Journal Article, cited 0 times
Website
Acute myeloid leukemia
ACRIN 6698
ACRIN 6698/I-SPY2 Breast DWI
BMMR2 Challenge
Female
Humans
Middle Aged
Artificial Intelligence
*Breast Neoplasms/diagnostic imaging/drug therapy
Magnetic Resonance Imaging (MRI)
Multiparametric Magnetic Resonance Imaging (mpMRI)
Neoadjuvant Therapy
Pathologic Complete Response
Adult
BREAST
Tumor Response
Purpose To describe the design, conduct, and results of the Breast Multiparametric MRI for prediction of neoadjuvant chemotherapy Response (BMMR2) challenge. Materials and Methods The BMMR2 computational challenge opened on May 28, 2021, and closed on December 21, 2021. The goal of the challenge was to identify image-based markers derived from multiparametric breast MRI, including diffusion-weighted imaging (DWI) and dynamic contrast-enhanced (DCE) MRI, along with clinical data for predicting pathologic complete response (pCR) following neoadjuvant treatment. Data included 573 breast MRI studies from 191 women (mean age [+/-SD], 48.9 years +/- 10.56) in the I-SPY 2/American College of Radiology Imaging Network (ACRIN) 6698 trial (ClinicalTrials.gov: NCT01042379). The challenge cohort was split into training (60%) and test (40%) sets, with teams blinded to test set pCR outcomes. Prediction performance was evaluated by area under the receiver operating characteristic curve (AUC) and compared with the benchmark established from the ACRIN 6698 primary analysis. Results Eight teams submitted final predictions. Entries from three teams had point estimators of AUC that were higher than the benchmark performance (AUC, 0.782 [95% CI: 0.670, 0.893], with AUCs of 0.803 [95% CI: 0.702, 0.904], 0.838 [95% CI: 0.748, 0.928], and 0.840 [95% CI: 0.748, 0.932]). A variety of approaches were used, ranging from extraction of individual features to deep learning and artificial intelligence methods, incorporating DCE and DWI alone or in combination. Conclusion The BMMR2 challenge identified several models with high predictive performance, which may further expand the value of multiparametric breast MRI as an early marker of treatment response. Clinical trial registration no. NCT01042379 Keywords: MRI, Breast, Tumor Response Supplemental material is available for this article. (c) RSNA, 2024.
Disparities in the Demographic Composition of The Cancer Imaging Archive
Dulaney, A.
Virostko, J.
Radiol Imaging Cancer, 2024. Journal Article, cited 1 time
Website
ACRIN-Contralateral-Breast-MR
ACRIN-DSC-MR-Brain
ACRIN-FLT-Breast
ACRIN-NSCLC-FDG-PET
ISPY1/ACRIN 6657
ACRIN-FMISO-Brain (ACRIN 6684)
ACRIN 6698
ACRIN 6657
Brain-TR-GammaKnife
Breast-Cancer-Screening-DBT
Breast-MRI-NACT-Pilot
Burdenko-GBM-Progression
CBIS-DDSM
CDD-CESM
CMMD
CPTAC-BRCA
CPTAC-GBM
CPTAC-LSCC
CPTAC-LUAD
Duke-Breast-Cancer-MRI
Lung-Fused-CT-Pathology
Lung-PET-CT-Dx
LungCT-Diagnosis
Meningioma-SEG-CLASS
NLST
NSCLC Radiogenomics
NSCLC-Radiomics
NSCLC-Radiomics-Genomics
NSCLC-Radiomics-Interobserver1
Post-NAT-BRCA
QIN-BREAST-02
REMBRANDT
TCGA-BRCA
TCGA-GBM
TCGA-LGG
TCGA-LUAD
TCGA-LUSC
UCSF-PDGM
UPENN-GBM
Female
Humans
Male
Artificial Intelligence
Ethnicity
*Neoplasms/diagnostic imaging/epidemiology
Retrospective Studies
Racial Groups
Datasets as Topic
Age
Bias
Cancer Health Disparities
Ethics
Health Disparities
Machine Learning
Meta-Analysis
Race
Sex
Purpose To characterize the demographic distribution of The Cancer Imaging Archive (TCIA) studies and compare them with those of the U.S. cancer population. Materials and Methods In this retrospective study, data from TCIA studies were examined for the inclusion of demographic information. Of 189 studies in TCIA up until April 2023, a total of 83 human cancer studies were found to contain supporting demographic data. The median patient age and the sex, race, and ethnicity proportions of each study were calculated and compared with those of the U.S. cancer population, provided by the Surveillance, Epidemiology, and End Results Program and the Centers for Disease Control and Prevention U.S. Cancer Statistics Data Visualizations Tool. Results The median age of TCIA patients was found to be 6.84 years lower than that of the U.S. cancer population (P = .047) and contained more female than male patients (53% vs 47%). American Indian and Alaska Native, Black or African American, and Hispanic patients were underrepresented in TCIA studies by 47.7%, 35.8%, and 14.7%, respectively, compared with the U.S. cancer population. Conclusion The results demonstrate that the patient demographics of TCIA data sets do not reflect those of the U.S. cancer population, which may decrease the generalizability of artificial intelligence radiology tools developed using these imaging data sets. Keywords: Ethics, Meta-Analysis, Health Disparities, Cancer Health Disparities, Machine Learning, Artificial Intelligence, Race, Ethnicity, Sex, Age, Bias Published under a CC BY 4.0 license.
CT-based Radiomic Signatures for Predicting Histopathologic Features in Head and Neck Squamous Cell Carcinoma
Mukherjee, Pritam
Cintra, Murilo
Huang, Chao
Zhou, Mu
Zhu, Shankuan
Colevas, A Dimitrios
Fischbein, Nancy
Gevaert, Olivier
Radiol Imaging Cancer, 2020. Journal Article, cited 0 times
Website
TCGA-HNSC
Radiomics
Head and neck squamous cell carcinoma (HNSCC)
Purpose: To determine the performance of CT-based radiomic features for noninvasive prediction of histopathologic features of tumor grade, extracapsular spread, perineural invasion, lymphovascular invasion, and human papillomavirus status in head and neck squamous cell carcinoma (HNSCC). Materials and Methods: In this retrospective study, which was approved by the local institutional ethics committee, CT images and clinical data from patients with pathologically proven HNSCC from The Cancer Genome Atlas (n = 113) and an institutional test cohort (n = 71) were analyzed. A machine learning model was trained with 2131 extracted radiomic features to predict tumor histopathologic characteristics. In the model, principal component analysis was used for dimensionality reduction, and regularized regression was used for classification. Results: The trained radiomic model demonstrated moderate capability of predicting HNSCC features. In the training cohort and the test cohort, the model achieved a mean area under the receiver operating characteristic curve (AUC) of 0.75 (95% confidence interval [CI]: 0.68, 0.81) and 0.66 (95% CI: 0.45, 0.84), respectively, for tumor grade; a mean AUC of 0.64 (95% CI: 0.55, 0.62) and 0.70 (95% CI: 0.47, 0.89), respectively, for perineural invasion; a mean AUC of 0.69 (95% CI: 0.56, 0.81) and 0.65 (95% CI: 0.38, 0.87), respectively, for lymphovascular invasion; a mean AUC of 0.77 (95% CI: 0.65, 0.88) and 0.67 (95% CI: 0.15, 0.80), respectively, for extracapsular spread; and a mean AUC of 0.71 (95% CI: 0.29, 1.0) and 0.80 (95% CI: 0.65, 0.92), respectively, for human papillomavirus status. Conclusion: Radiomic CT models have the potential to predict characteristics typically identified on pathologic assessment of HNSCC.Supplemental material is available for this article.(c) RSNA, 2020.
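A minimal sketch of the PCA-plus-regularized-regression pipeline named above, for a single binary endpoint; the variance threshold, regularization strength, and file names are assumptions.

```python
# PCA for dimensionality reduction feeding a regularized logistic regression.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X = np.load("ct_radiomic_features.npy")   # (n_patients, 2131) features, per the study
y = np.load("hpv_status.npy")             # one binary endpoint, e.g., HPV status

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),                           # keep 95% of the variance
    LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
)
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("mean cross-validated AUC:", aucs.mean())
```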
CT Evaluation of Lymph Nodes That Merge or Split during the Course of a Clinical Trial: Limitations of RECIST 1.1
Shafiei, A.
Bagheri, M.
Farhadi, F.
Apolo, A. B.
Biassou, N. M.
Folio, L. R.
Jones, E. C.
Summers, R. M.
Radiol Imaging Cancer, 2021. Journal Article, cited 4 times
Website
CT Lymph Nodes
Humans
*Lymph Nodes/diagnostic imaging
Male
Middle Aged
*Neoplasms/diagnostic imaging
Response Evaluation Criteria in Solid Tumors
Retrospective Studies
Tomography
X-Ray Computed
CT
Lymphatic
Tumor Response
Purpose To compare Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1 with volumetric measurement in the setting of target lymph nodes that split into two or more nodes or merge into one conglomerate node. Materials and Methods In this retrospective study, target lymph nodes were evaluated on CT scans from 166 patients with different types of cancer; 158 of the scans came from The Cancer Imaging Archive. Each target node was measured using RECIST 1.1 criteria before and after merging or splitting, followed by volumetric segmentation. To compare RECIST 1.1 with volume, a single-dimension hypothetical diameter (HD) was determined from the nodal volume. The nodes were divided into three groups: (a) one-target merged (one target node merged with other nodes); (b) two-target merged (two neighboring target nodes merged); and (c) split node (a conglomerate node cleaved into smaller fragments). Bland-Altman analysis and t test were applied to compare RECIST 1.1 with HD. On the basis of the RECIST 1.1 concept, we compared response category changes between RECIST 1.1 and HD. Results The data set consisted of 30 merged nodes (19 one-target merged and 11 two-target merged) and 20 split nodes (mean age for all 50 included patients, 50 years +/- 7 [standard deviation]; 38 men). RECIST 1.1, volumetric, and HD measurements indicated an increase in size in all one-target merged nodes. While volume and HD indicated an increase in size for nodes in the two-target merged group, RECIST 1.1 showed a decrease in size in all two-target merged nodes. Although volume and HD demonstrated a decrease in size of all split nodes, RECIST 1.1 indicated an increase in size in 60% (12 of 20) of the nodes. Discrepancy of the response categories between RECIST 1.1 and HD was observed in 5% (one of 19) in one-target merged, 82% (nine of 11) in two-target merged, and 55% (11 of 20) in split nodes. Conclusion RECIST 1.1 does not optimally reflect size changes when lymph nodes merge or split. Keywords: CT, Lymphatic, Tumor Response Supplemental material is available for this article. (c) RSNA, 2021.
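The hypothetical diameter above is the diameter of a sphere with the same volume as the segmented node, which maps a volumetric measurement back onto the single-dimension RECIST scale. A minimal sketch:

```python
# Hypothetical diameter (HD) of a node from its segmented volume.
import math

def hypothetical_diameter(volume_mm3: float) -> float:
    """Diameter (mm) of a sphere whose volume equals the node volume."""
    return 2.0 * (3.0 * volume_mm3 / (4.0 * math.pi)) ** (1.0 / 3.0)

# Example: a 4200 mm^3 conglomerate node has an HD of about 20 mm.
print(round(hypothetical_diameter(4200.0), 1))
```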
Radiomic Features at CT Can Distinguish Pancreatic Cancer from Noncancerous Pancreas
Chen, Po-Ting
Chang, Dawei
Yen, Huihsuan
Liu, Kao-Lang
Huang, Su-Yun
Roth, Holger
Wu, Ming-Shiang
Liao, Wei-Chih
Wang, Weichung
Radiol Imaging Cancer, 2021. Journal Article, cited 0 times
Website
Pancreas-CT
PANCREAS
Computer Aided Diagnosis (CADx)
Computed Tomography (CT)
Purpose To identify distinguishing CT radiomic features of pancreatic ductal adenocarcinoma (PDAC) and to investigate whether radiomic analysis with machine learning can distinguish between patients who have PDAC and those who do not. Materials and Methods This retrospective study included contrast material-enhanced CT images in 436 patients with PDAC and 479 healthy controls from 2012 to 2018 from Taiwan that were randomly divided for training and testing. Another 100 patients with PDAC (enriched for small PDACs) and 100 controls from Taiwan were identified for testing (from 2004 to 2011). An additional 182 patients with PDAC and 82 healthy controls from the United States were randomly divided for training and testing. Images were processed into patches. An XGBoost (https://xgboost.ai/) model was trained to classify patches as cancerous or noncancerous. Patients were classified as either having or not having PDAC on the basis of the proportion of patches classified as cancerous. For both patch-based and patient-based classification, the models were characterized as either a local model (trained on Taiwanese data only) or a generalized model (trained on both Taiwanese and U.S. data). Sensitivity, specificity, and accuracy were calculated for patch- and patient-based analysis for the models. Results The median tumor size was 2.8 cm (interquartile range, 2.0-4.0 cm) in the 536 Taiwanese patients with PDAC (mean age, 65 years +/- 12 [standard deviation]; 289 men). Compared with normal pancreas, PDACs had lower values for radiomic features reflecting intensity and higher values for radiomic features reflecting heterogeneity. The performance metrics for the developed generalized model when tested on the Taiwanese and U.S. test data sets, respectively, were as follows: sensitivity, 94.7% (177 of 187) and 80.6% (29 of 36); specificity, 95.4% (187 of 196) and 100% (16 of 16); accuracy, 95.0% (364 of 383) and 86.5% (45 of 52); and area under the curve, 0.98 and 0.91. Conclusion Radiomic analysis with machine learning enabled accurate detection of PDAC at CT and could identify patients with PDAC. Keywords: CT, Computer Aided Diagnosis (CAD), Pancreas, Computer Applications-Detection/Diagnosis Supplemental material is available for this article. (c) RSNA, 2021.
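A minimal sketch of the two-stage scheme above: an XGBoost classifier labels patches, and a patient is called positive from the proportion of cancerous patches. The feature arrays and the 0.5 proportion threshold are assumptions.

```python
# Patch-level XGBoost classification aggregated to a patient-level call.
import numpy as np
from xgboost import XGBClassifier

X_train = np.load("patch_features.npy")   # (n_patches, n_features)
y_train = np.load("patch_labels.npy")     # 1 = cancerous patch, 0 = noncancerous

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
clf.fit(X_train, y_train)

def classify_patient(patch_features, threshold=0.5):
    """Positive if the proportion of patches predicted cancerous exceeds threshold."""
    return clf.predict(patch_features).mean() >= threshold

print(classify_patient(np.load("new_patient_patches.npy")))
```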
High-resolution anatomic correlation of cyclic motor patterns in the human colon: Evidence of a rectosigmoid brake
Lin, Anthony Y
Du, Peng
Dinning, Philip G
Arkwright, John W
Kamp, Jozef P
Cheng, Leo K
Bissett, Ian P
O'Grady, Gregory
American Journal of Physiology-Gastrointestinal and Liver Physiology, 2017. Journal Article, cited 12 times
Website
CT COLONOGRAPHY
Colonic motility
High-resolution manometry
Rectosigmoid brake
Automatic Classification of Normal and Cancer Lung CT Images Using Multiscale AM-FM Features
Magdy, Eman
Zayed, Nourhan
Fakhr, Mahmoud
International Journal of Biomedical Imaging, 2015. Journal Article, cited 6 times
Website
Computer-aided diagnostic (CAD) systems provide fast and reliable diagnosis for medical images. In this paper, a CAD system is proposed to analyze and automatically segment the lungs and classify each lung as normal or cancerous. Using a lung CT dataset from 70 different patients, Wiener filtering is first applied to the original CT images as a preprocessing step. Second, histogram analysis is combined with thresholding and morphological operations to segment the lung regions and extract each lung separately. Third, the Amplitude-Modulation Frequency-Modulation (AM-FM) method is used to extract features from the ROIs. The significant AM-FM features are then selected using Partial Least Squares Regression (PLSR) for the classification step. Finally, K-nearest neighbour (KNN), support vector machine (SVM), naive Bayes, and linear classifiers are used with the selected AM-FM features. The performance of each classifier is evaluated in terms of accuracy, sensitivity, and specificity. The results indicate that the proposed CAD system succeeds in differentiating between normal and cancerous lungs, achieving 95% accuracy with the linear classifier.
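A minimal sketch of the selection-plus-classification stage: rank AM-FM features by the magnitude of their PLSR coefficients, keep the top k, and train a KNN classifier. The number of components, k, and file names are assumptions.

```python
# PLSR-based feature ranking followed by KNN classification.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neighbors import KNeighborsClassifier

X = np.load("amfm_features.npy")   # (n_lungs, n_features)
y = np.load("labels.npy")          # 1 = cancer, 0 = normal

pls = PLSRegression(n_components=5).fit(X, y)
ranking = np.argsort(np.abs(pls.coef_).ravel())[::-1]
selected = ranking[:30]            # keep the 30 highest-weighted features

knn = KNeighborsClassifier(n_neighbors=5).fit(X[:, selected], y)
print("training accuracy:", knn.score(X[:, selected], y))
```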
Statistical Analysis of Haralick Texture Features to Discriminate Lung Abnormalities
Zayed, Nourhan
Elnemr, Heba A
International Journal of Biomedical Imaging, 2015. Journal Article, cited 30 times
Website
SPIE-AAPM Lung CT Challenge
Segmentation
Classification
Haralick texture features are a well-known mathematical method for detecting lung abnormalities and give the physician the opportunity to localize the abnormal tissue type, either lung tumor or pulmonary edema. In this paper, statistical evaluation of the different features represents the reported performance of the proposed method. CT datasets from thirty-seven patients with either lung tumor or pulmonary edema were included in this study. The CT images are first preprocessed for noise reduction and image enhancement, followed by segmentation techniques to segment the lungs, and finally Haralick texture features to detect the type of abnormality within the lungs. In spite of the presence of low contrast and high noise in the images, the proposed algorithms produce promising results in detecting lung abnormality in most of the patients in comparison with normal cases and suggest that some of the features are significantly more discriminative than others.
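Haralick-style features are typically computed from a gray-level co-occurrence matrix; here is a minimal sketch using scikit-image as a stand-in for the authors' implementation, with a hypothetical quantized ROI.

```python
# GLCM texture descriptors from a 2D lung region of interest.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = np.load("lung_roi.npy").astype(np.uint8)   # quantized CT region

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```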
Automatic Segmentation of Colon in 3D CT Images and Removal of Opacified Fluid Using Cascade Feed Forward Neural Network
Gayathri Devi, K
Radhakrishnan, R
Computational and Mathematical Methods in Medicine, 2015. Journal Article, cited 5 times
Website
CT Colonography
High-Throughput Quantification of Phenotype Heterogeneity Using Statistical Features
Chaddad, Ahmad
Tanougast, Camel
Advances in Bioinformatics, 2015. Journal Article, cited 5 times
Website
Radiomics
Classification
Glioblastoma Multiforme (GBM)
Support Vector Machine (SVM)
Naïve Bayes (NB)
Machine Learning
Magnetic Resonance Imaging (MRI)
Radiomic features
Radiogenomics
TCGA-GBM
Statistical features are widely used in radiology for tumor heterogeneity assessment using the magnetic resonance (MR) imaging technique. In this paper, feature selection based on a decision tree is examined to determine the relevant subset of glioblastoma (GBM) phenotypes in the statistical domain. To discriminate between the active tumor (vAT) and edema/invasion (vE) phenotypes, we selected the significant features using analysis of variance (ANOVA) with p value < 0.01. Then, we implemented a decision tree to define the optimal feature subset for the phenotype classifier. Naive Bayes (NB), support vector machine (SVM), and decision tree (DT) classifiers were considered to evaluate the performance of the feature-based scheme in terms of its capability to discriminate vAT from vE. All nine features were statistically significant for classifying vAT from vE with p value < 0.01. Feature selection based on the decision tree showed the best performance in the comparative study using the full feature set. The selected features kurtosis and skewness achieved classifier accuracies in the range of 58.33%-75.00% and AUC values in the range of 73.88%-92.50%. This study demonstrated the ability of statistical features to provide a quantitative, individualized measurement for glioblastoma patients and to assess phenotype progression.
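A minimal sketch of the two-step selection described above: keep features whose ANOVA p value is below .01, then let a decision tree pick the final subset. The arrays and tree depth are assumptions.

```python
# ANOVA screening (p < 0.01) followed by decision-tree feature selection.
import numpy as np
from scipy.stats import f_oneway
from sklearn.tree import DecisionTreeClassifier

X = np.load("statistical_features.npy")   # (n_samples, 9), e.g., kurtosis, skewness...
y = np.load("phenotype.npy")              # 1 = active tumor (vAT), 0 = edema/invasion (vE)

pvals = np.array([f_oneway(X[y == 1, j], X[y == 0, j]).pvalue
                  for j in range(X.shape[1])])
keep = np.where(pvals < 0.01)[0]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:, keep], y)
used = keep[np.unique(tree.tree_.feature[tree.tree_.feature >= 0])]
print("features retained by the tree:", used)
```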
Automated feature extraction in brain tumor by magnetic resonance imaging using gaussian mixture models
Chaddad, Ahmad
Journal of Biomedical Imaging, 2015. Journal Article, cited 29 times
Website
Radiomics
Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network
Le, Trong-Ngoc
Bao, Pham The
Huynh, Hieu Trung
BioMed Research International, 2016. Journal Article, cited 5 times
Website
LIVER
Magnetic Resonance Imaging (MRI)
Computer Aided Detection (CADe)
Segmentation
Algorithm Development
Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the "ground truth." Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively.
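A minimal sketch of the labeling stage, with scikit-fmm standing in for the 3D fast marching step and sklearn's MLPClassifier standing in for the noniterative single-hidden-layer network; the seed, speed function, and time thresholds are all assumptions.

```python
# Fast-marching "teacher" regions followed by voxelwise classification.
import numpy as np
import skfmm
from sklearn.neural_network import MLPClassifier

roi = np.load("liver_roi.npy")                 # hypothetical 3D MR intensities
seed = (40, 60, 12)                            # hypothetical seed inside the tumor

phi = np.ones_like(roi)
phi[seed] = -1                                 # zero level set around the seed
speed = 1.0 / (1.0 + np.abs(roi - roi[seed]))  # front slows at intensity edges
t = np.asarray(skfmm.travel_time(phi, speed))

teachers = (t < 5.0) | (t > 50.0)              # early voxels: tumor; late: background
X = roi[teachers].reshape(-1, 1)
y = (t[teachers] < 5.0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500).fit(X, y)
mask = clf.predict(roi.reshape(-1, 1)).reshape(roi.shape)   # label remaining voxels
```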
Appropriate Contrast Enhancement Measures for Brain and Breast Cancer Images
Gupta, Suneet
Porwal, Rabins
International Journal of Biomedical Imaging, 2016. Journal Article, cited 10 times
Website
BRAIN
BREAST
Image Enhancement/methods
Medical imaging systems often produce images that are poor in contrast and therefore require enhancement. They must be enhanced before they are examined by medical professionals, which is necessary for proper diagnosis and subsequent treatment. Various enhancement algorithms enhance medical images to different extents, and various quantitative metrics or measures evaluate the quality of an image. This paper suggests the most appropriate measures for two types of medical images, namely, brain cancer images and breast cancer images.
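Two widely used quantitative measures of this kind are RMS contrast and Shannon entropy; below is a minimal sketch comparing them before and after an example enhancement step (CLAHE), with a hypothetical image file.

```python
# Contrast measures before and after enhancement.
import numpy as np
from skimage import exposure, io, measure

img = io.imread("brain_mri.png", as_gray=True)    # float image in [0, 1]
enhanced = exposure.equalize_adapthist(img)       # CLAHE as an example enhancer

for name, im in (("original", img), ("enhanced", enhanced)):
    print(name,
          "RMS contrast:", im.std(),               # spread of intensities
          "entropy:", measure.shannon_entropy(im)) # information content
```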
Using Deep Learning for Classification of Lung Nodules on Computed Tomography Images
Lung cancer is the most common cancer and causes death when care is delayed. Currently, CT can be used to help doctors detect lung cancer in its early stages. In many cases, the diagnosis of lung cancer depends on the experience of doctors, who may miss some patients and cause problems. Deep learning has proved to be a popular and powerful method in many medical imaging diagnosis areas. In this paper, three types of deep neural networks (CNN, DNN, and SAE) are designed for lung nodule classification. Those networks are applied to the CT image classification task, with some modification, for the benign and malignant lung nodules. The networks were evaluated on the LIDC-IDRI database. The experimental results show that the CNN network achieved the best performance, with an accuracy of 84.15%, sensitivity of 83.96%, and specificity of 84.32%, which is the best result among the three networks.
Solid Indeterminate Nodules with a Radiological Stability Suggesting Benignity: A Texture Analysis of Computed Tomography Images Based on the Kurtosis and Skewness of the Nodule Volume Density Histogram
Borguezan, Bruno Max
Lopes, Agnaldo José
Saito, Eduardo Haruo
Higa, Claudio
Silva, Aristófanes Corrêa
Nunes, Rodolfo Acatauassú
Pulmonary Medicine, 2019. Journal Article, cited 0 times
Website
Radiomics
Lung
BACKGROUND: The number of incidental findings of pulmonary nodules using imaging methods to diagnose other thoracic or extrathoracic conditions has increased, suggesting the need for in-depth radiological image analyses to identify nodule type and avoid unnecessary invasive procedures. OBJECTIVES: The present study evaluated solid indeterminate nodules with a radiological stability suggesting benignity (SINRSBs) through a texture analysis of computed tomography (CT) images. METHODS: A total of 100 chest CT scans were evaluated, including 50 cases of SINRSBs and 50 cases of malignant nodules. SINRSB CT scans were performed using the same noncontrast enhanced CT protocol and equipment; the malignant nodule data were acquired from several databases. The kurtosis (KUR) and skewness (SKW) values of these tests were determined for the whole volume of each nodule, and the histograms were classified into two basic patterns: peaks or plateaus. RESULTS: The mean (MEN) KUR values of the SINRSBs and malignant nodules were 3.37 ± 3.88 and 5.88 ± 5.11, respectively. The receiver operating characteristic (ROC) curve showed that the sensitivity and specificity for distinguishing SINRSBs from malignant nodules were 65% and 66% for KUR values > 6, respectively, with an area under the curve (AUC) of 0.709 (p < 0.0001). The MEN SKW values of the SINRSBs and malignant nodules were 1.73 ± 0.94 and 2.07 ± 1.01, respectively. The ROC curve showed that the sensitivity and specificity for distinguishing malignant nodules from SINRSBs were 65% and 66% for SKW values > 3.1, respectively, with an AUC of 0.709 (p < 0.0001). An analysis of the peak and plateau histograms revealed sensitivity, specificity, and accuracy values of 84%, 74%, and 79%, respectively. CONCLUSION: KUR, SKW, and histogram shape can help to noninvasively diagnose SINRSBs but should not be used alone or without considering clinical data.
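A minimal sketch of the two histogram descriptors, computed over the whole nodule volume; the KUR > 6 and SKW > 3.1 cutoffs are the ones reported above, the file names are hypothetical, and the Pearson convention (normal distribution = 3) is assumed for kurtosis.

```python
# Kurtosis and skewness of the nodule volume density histogram.
import numpy as np
from scipy.stats import kurtosis, skew

ct = np.load("ct_volume.npy")                    # HU values
nodule = np.load("nodule_mask.npy").astype(bool)

density = ct[nodule]
kur = kurtosis(density, fisher=False)            # Pearson kurtosis (normal = 3)
skw = skew(density)

print(f"KUR = {kur:.2f}, SKW = {skw:.2f}")
print("suggests malignancy" if kur > 6 or skw > 3.1
      else "pattern consistent with SINRSB")
```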
Lung Cancer Detection Using Image Segmentation by means of Various Evolutionary Algorithms
Kumar, K. Senthil
Venkatalakshmi, K.
Karthikeyan, K.
Computational and Mathematical Methods in Medicine, 2019. Journal Article, cited 0 times
LungCT-Diagnosis
CT
The objective of this paper is to explore an expedient image segmentation algorithm for medical images to curtail the physicians' interpretation of computer tomography (CT) scan images. Modern medical imaging modalities generate large images that are extremely difficult to analyze manually. The usefulness of a segmentation algorithm depends on its accuracy and convergence time, so there is a compelling need to explore and implement new evolutionary algorithms to solve the problems associated with medical image segmentation. Lung cancer is the most frequently diagnosed cancer among men across the world, and its early detection leads to appropriate treatment that can save human lives. CT is one of the most common medical imaging methods used to diagnose lung cancer. In the present study, the performance of five optimization algorithms, namely, k-means clustering, k-median clustering, particle swarm optimization, inertia-weighted particle swarm optimization, and guaranteed convergence particle swarm optimization (GCPSO), in extracting the tumor from the lung image has been implemented and analyzed. The performance of median, adaptive median, and average filters in the preprocessing stage was compared, and it was shown that the adaptive median filter is most suitable for medical CT images. Furthermore, the image contrast is enhanced by using adaptive histogram equalization. The preprocessed image with improved quality is subjected to four algorithms. The practical results are verified for 20 sample images of the lung using MATLAB, and it was observed that GCPSO has the highest accuracy, at 95.89%.
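As one simple reading of the PSO variants compared above, a swarm can search for the segmentation threshold that maximizes Otsu's between-class variance. A minimal sketch, with all constants being assumptions:

```python
# Particle swarm optimization of a single segmentation threshold.
import numpy as np

def between_class_variance(img, t):
    """Otsu's criterion: weighted separation of foreground and background means."""
    fg, bg = img[img >= t], img[img < t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w2 = fg.size / img.size, bg.size / img.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2

def pso_threshold(img, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    lo, hi = float(img.min()), float(img.max())
    pos = rng.uniform(lo, hi, n_particles)            # candidate thresholds
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_fit = np.array([between_class_variance(img, p) for p in pos])
    gbest = pbest[pbest_fit.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([between_class_variance(img, p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()]
    return gbest

# Usage: tumor_mask = ct_slice >= pso_threshold(ct_slice)
```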
An Ad Hoc Random Initialization Deep Neural Network Architecture for Discriminating Malignant Breast Cancer Lesions in Mammographic Images
Duggento, Andrea
Aiello, Marco
Cavaliere, Carlo
Cascella, Giuseppe L
Cascella, Davide
Conte, Giovanni
Guerrisi, Maria
Toschi, Nicola
Contrast Media Mol Imaging, 2019. Journal Article, cited 1 time
Website
CBIS-DDSM
Breast
Convolutional Neural Network (CNN)
Radiomics
Classification
Breast cancer is one of the most common cancers in women, with more than 1,300,000 cases and 450,000 deaths each year worldwide. In this context, recent studies showed that early breast cancer detection, along with suitable treatment, could significantly reduce breast cancer death rates in the long term. X-ray mammography is still the instrument of choice in breast cancer screening. In this context, the false-positive and false-negative rates commonly achieved by radiologists are extremely arduous to estimate and control, although some authors have estimated figures of up to 20% of total diagnoses or more. The introduction of novel artificial intelligence (AI) technologies applied to the diagnosis and, possibly, prognosis of breast cancer could revolutionize the current status of the management of the breast cancer patient by assisting the radiologist in clinical image interpretation. Lately, a breakthrough in the AI field has been brought about by the introduction of deep learning techniques in general and of convolutional neural networks in particular. Such techniques require no a priori feature space definition from the operator and are able to achieve classification performances which can even surpass human experts. In this paper, we design and validate an ad hoc CNN architecture specialized in breast lesion classification from imaging data only. We explore a total of 260 model architectures in a train-validation-test split in order to propose a model selection criterion which can pose the emphasis on reducing false negatives while still retaining acceptable accuracy. We achieve an area under the receiver operating characteristic curve of 0.785 (accuracy 71.19%) on the test set, demonstrating how an ad hoc random initialization architecture can and should be fine-tuned to a specific problem, especially in biomedical applications.
Medical Image Classification Algorithm Based on Weight Initialization-Sliding Window Fusion Convolutional Neural Network
An, Feng-Ping
Complexity, 2019. Journal Article, cited 0 times
Website
Radiomics
CT
Classification
ADNI
OASIS
Convolutional Neural Network (CNN)
Due to the complexity of medical images, traditional medical image classification methods have been unable to meet actual application needs. In recent years, the rapid development of deep learning theory has provided a technical approach for solving medical image classification tasks. However, deep learning has the following problems in medical image classification. First, it is impossible to construct a deep learning model hierarchy for medical image properties; second, the network initialization weights of deep learning models are not well optimized. Therefore, this paper starts from the perspective of network optimization and improves the nonlinear modeling ability of the network through optimization methods. A new network weight initialization method is proposed, which alleviates the problem that existing deep learning model initialization is limited by the type of the nonlinear unit adopted and increases the potential of the neural network to handle different visual tasks. Moreover, through an in-depth study of the multicolumn convolutional neural network framework, this paper finds that the number of features and the convolution kernel size at different levels of the convolutional neural network are different. In contrast, the proposed method can construct different convolutional neural network models that adapt better to the characteristics of the medical images of interest and thus can better train the resulting heterogeneous multicolumn convolutional neural networks. Finally, using the adaptive sliding window fusion mechanism proposed in this paper, both methods jointly complete the classification task of medical images. Based on the above ideas, this paper proposes a medical classification algorithm based on a weight initialization/sliding window fusion for multilevel convolutional neural networks. The methods proposed in this study were applied to breast mass, brain tumor tissue, and medical image database classification experiments. The results show that the proposed method not only achieves a higher average accuracy than that of traditional machine learning and other deep learning methods but also is more stable and more robust.
Research of Multimodal Medical Image Fusion Based on Parameter-Adaptive Pulse-Coupled Neural Network and Convolutional Sparse Representation
Xia, J.
Lu, Y.
Tan, L.
Comput Math Methods Med2020Journal Article, cited 0 times
Website
Image Fusion
Computer Aided Diagnosis (CADx)
BRAIN
The visual quality of medical images has a great impact on clinically assisted diagnosis. At present, medical image fusion has become a powerful tool in clinical applications. Traditional medical image fusion methods suffer from poor fusion results due to the loss of detailed feature information during fusion. To address this, this paper proposes a new multimodal medical image fusion method based on the imaging characteristics of medical images. In the proposed method, non-subsampled shearlet transform (NSST) decomposition is first performed on the source images to obtain high-frequency and low-frequency coefficients. The high-frequency coefficients are fused by a parameter-adaptive pulse-coupled neural network (PAPCNN) model, with an optimized connection strength beta adopted to improve performance. The low-frequency coefficients are merged by the convolutional sparse representation (CSR) model. The experimental results show that the proposed method solves the problems of difficult parameter setting and poor detail preservation of sparse representation during image fusion in traditional PCNN algorithms, and it has significant advantages in visual effect and objective indices compared with existing mainstream fusion algorithms.
Image Classification Algorithm Based on Deep Learning-Kernel Function
Liu, Jun-e
An, Feng-Ping
Scientific Programming2020Journal Article, cited 11 times
Website
COLON
CT
Classification
deep learning
Although existing traditional image classification methods have been widely applied in practical problems, they face issues such as unsatisfactory effects, low classification accuracy, and weak adaptive ability, in part because they separate image feature extraction and classification into two steps. The deep learning model has a powerful learning ability, which integrates feature extraction and classification into a whole to perform image classification, and this can effectively improve image classification accuracy. However, this approach has the following problems in application: first, it is difficult to effectively approximate the complex functions in the deep learning model; second, the classifier that comes with the deep learning model has low accuracy. So, this paper introduces the idea of sparse representation into the architecture of the deep learning network and comprehensively utilizes the ability of sparse representation to linearly decompose multidimensional data together with the deep structural advantages of multilayer nonlinear mapping to complete the complex function approximation in the deep learning model. A sparse representation classification method based on an optimized kernel function is proposed to replace the classifier in the deep learning model, thereby improving the image classification effect. Therefore, this paper proposes an image classification algorithm based on the stacked sparse coding deep learning model with an optimized kernel function nonnegative sparse representation. The experimental results show that the proposed method not only has a higher average accuracy than other mainstream methods but also can be well adapted to various image databases. Compared with other deep learning methods, it can better solve the problems of complex function approximation and poor classifier effect, thus further improving image classification accuracy.
Classification of Lung Nodules Based on Deep Residual Networks and Migration Learning
Wu, Panpan
Sun, Xuanchao
Zhao, Ziping
Wang, Haishuai
Pan, Shirui
Schuller, Bjorn
Comput Intell Neurosci2020Journal Article, cited 0 times
Website
LIDC-IDRI
Algorithm Development
Computer Assisted Detection (CAD)
The classification process of lung nodule detection in a traditional computer-aided detection (CAD) system is complex, and the classification result is heavily dependent on the performance of each step in lung nodule detection, causing low classification accuracy and high false positive rate. In order to alleviate these issues, a lung nodule classification method based on a deep residual network is proposed. Abandoning traditional image processing methods and taking the 50-layer ResNet network structure as the initial model, the deep residual network is constructed by combining residual learning and migration learning. The proposed approach is verified by conducting experiments on the lung computed tomography (CT) images from the publicly available LIDC-IDRI database. An average accuracy of 98.23% and a false positive rate of 1.65% are obtained based on the ten-fold cross-validation method. Compared with the conventional support vector machine (SVM)-based CAD system, the accuracy of our method improved by 9.96% and the false positive rate decreased by 6.95%, while the accuracy improved by 1.75% and 2.42%, respectively, and the false positive rate decreased by 2.07% and 2.22%, respectively, in contrast to the VGG19 model and InceptionV3 convolutional neural networks. The experimental results demonstrate the effectiveness of our proposed method in lung nodule classification for CT images.
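A minimal PyTorch sketch of the kind of transfer-learning setup described above, starting from an ImageNet-pretrained 50-layer ResNet and retraining a new classification head; the freezing policy, class count, and learning rate are illustrative assumptions, not the authors' exact configuration:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from ImageNet-pretrained ResNet50 and replace the final
    # fully connected layer with a 2-class head (nodule vs. non-nodule).
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for param in model.parameters():
        param.requires_grad = False          # freeze the pretrained backbone
    model.fc = nn.Linear(model.fc.in_features, 2)  # new head is trainable

    # Only the new head's parameters are optimized here; unfreezing deeper
    # residual blocks for full fine-tuning is a common variation.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)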
Low-Dose CT Image Denoising with Improving WGAN and Hybrid Loss Function
Li, Z.
Shi, W.
Xing, Q.
Miao, Y.
He, W.
Yang, H.
Jiang, Z.
Comput Math Methods Med2021Journal Article, cited 1 times
Website
Phantom FDA
LUNG
Low-dose CT
Image denoising
Generative Adversarial Network (GAN)
The X-ray radiation from computed tomography (CT) carries a potential risk to patients. Simply decreasing the dose makes the CT images noisy and compromises diagnostic performance. Here, we develop a novel method for denoising low-dose CT images. Our framework is based on an improved generative adversarial network coupled with a hybrid loss function, including the adversarial loss, perceptual loss, sharpness loss, and structural similarity loss. Among the loss function terms, the perceptual loss and structural similarity loss are used to preserve textural details, the sharpness loss makes reconstructed images clear, and the adversarial loss sharpens the boundary regions. The results of experiments show the proposed method can remove noise and artifacts more effectively than state-of-the-art methods in terms of visual effect, quantitative measurements, and texture details.
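A hedged sketch of how such a hybrid generator loss could be assembled in PyTorch; the loss weights are assumptions, and the perceptual and SSIM terms are passed in as callables (e.g., a VGG feature extractor and an SSIM implementation) rather than fixed here:

    import torch
    import torch.nn.functional as F

    def sharpness_loss(pred, target):
        # Penalize differences between image gradients, a simple sharpness proxy.
        dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
        dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
        return F.l1_loss(dx(pred), dx(target)) + F.l1_loss(dy(pred), dy(target))

    def hybrid_loss(pred, target, critic_score, perceptual_fn, ssim_fn,
                    w_adv=1e-3, w_perc=1.0, w_sharp=0.5, w_ssim=0.5):
        adv = -critic_score.mean()                     # WGAN generator term
        perc = F.mse_loss(perceptual_fn(pred), perceptual_fn(target))
        ssim = 1.0 - ssim_fn(pred, target)             # SSIM as a loss
        return (w_adv * adv + w_perc * perc
                + w_sharp * sharpness_loss(pred, target) + w_ssim * ssim)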
A Semiautomated Deep Learning Approach for Pancreas Segmentation
Huang, M.
Huang, C.
Yuan, J.
Kong, D.
J Healthc Eng2021Journal Article, cited 1 times
Website
Pancreas-CT
Algorithms
Deep Learning
Tomography, X-Ray Computed
Accurate pancreas segmentation from 3D CT volumes is important for the treatment of pancreatic diseases. It is challenging to accurately delineate the pancreas due to the poor intensity contrast and intrinsic large variations in volume, shape, and location. In this paper, we propose a semiautomated deformable U-Net, i.e., DUNet, for pancreas segmentation. The key innovation of our proposed method is a deformable convolution module, which adaptively adds learned offsets to each sampling position of the 2D convolutional kernel to enhance feature representation. Combining the deformable convolution module with U-Net enables our DUNet to flexibly capture pancreatic features and improve the geometric modeling capability of U-Net. Moreover, a nonlinear Dice-based loss function is designed to tackle the class-imbalance problem in pancreas segmentation. Experimental results show that our proposed method outperforms all comparison methods on the same NIH dataset.
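The class-imbalance handling above rests on a Dice-based loss. A plain soft Dice loss (not the paper's specific nonlinear variant) can be written in PyTorch as:

    import torch

    def dice_loss(logits, target, eps=1e-6):
        """Soft Dice loss for binary segmentation; `logits` and `target`
        share shape (N, C, ...). Reduces over all non-batch dimensions."""
        probs = torch.sigmoid(logits)
        dims = tuple(range(1, probs.ndim))
        intersection = (probs * target).sum(dim=dims)
        denominator = probs.sum(dim=dims) + target.sum(dim=dims)
        dice = (2 * intersection + eps) / (denominator + eps)
        return (1 - dice).mean()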
SGPNet: A Three-Dimensional Multitask Residual Framework for Segmentation and IDH Genotype Prediction of Gliomas
Wang, Yao
Wang, Yan
Guo, Chunjie
Zhang, Shuangquan
Yang, Lili
Rakhshan, Vahid
Computational Intelligence and Neuroscience2021Journal Article, cited 0 times
Website
BraTS
BRAIN
Deep Learning
Algorithm Development
Radiogenomics
MuSE
MuTect2
SomaticSniper
VarScan2
U-Net
IDH mutation
Glioma is the main type of malignant brain tumor in adults, and the status of isocitrate dehydrogenase (IDH) mutation highly affects the diagnosis, treatment, and prognosis of gliomas. Radiographic medical imaging provides a noninvasive platform for sampling both inter- and intralesion heterogeneity of gliomas, and previous research has shown that the IDH genotype can be predicted from the fusion of multimodality radiology images. The features of medical images and the IDH genotype are vital for medical treatment; however, the field still lacks a multitask framework for segmenting the lesion areas of gliomas and predicting the IDH genotype. In this paper, we propose a novel three-dimensional (3D) multitask deep learning model for segmentation and genotype prediction (SGPNet). Residual units are also introduced into SGPNet, allowing the output blocks to extract hierarchical features for different tasks and facilitating information propagation. Our model reduces classification error rates by 26.6% compared with previous models on the datasets of the Multimodal Brain Tumor Segmentation Challenge (BRATS) 2020 and The Cancer Genome Atlas (TCGA) gliomas' databases. Furthermore, we are the first to practically investigate the influence of lesion areas on the performance of IDH genotype prediction by setting different groups of learning targets. The experimental results indicate that the information of lesion areas is more important for IDH genotype prediction. Our framework is effective and generalizable and can serve as a highly automated tool to be applied in clinical decision making.
DL-MRI: A Unified Framework of Deep Learning-Based MRI Super Resolution
Liu, Huanyu
Liu, Jiaqi
Li, Junbao
Pan, Jeng-Shyang
Yu, Xiaqiong
Lu, Hao Chun
Journal of Healthcare Engineering2021Journal Article, cited 0 times
Website
Algorithm Development
BREAST
HEAD
BLADDER
Deep Learning
Magnetic resonance imaging (MRI) is widely used in the detection and diagnosis of diseases. High-resolution MR images help doctors to locate lesions and diagnose diseases. However, the acquisition of high-resolution MR images requires high magnetic field intensity and long scanning time, which brings discomfort to patients and easily introduces motion artifacts, resulting in image quality degradation. Therefore, the resolution of hardware imaging has reached its limit. Based on this situation, a unified framework based on deep learning super resolution is proposed to transfer state-of-the-art deep learning methods from natural images to MRI super resolution. Compared with traditional image super-resolution methods, deep learning super-resolution methods have stronger feature extraction and characterization ability, can learn prior knowledge from a large number of sample data, and have a more stable and excellent image reconstruction effect. We propose a unified framework of deep learning-based MRI super resolution, which incorporates five current deep learning methods with the best super-resolution performance. In addition, a high-low resolution MR image dataset with scales of ×2, ×3, and ×4 was constructed, covering four anatomic regions: skull, knee, breast, and head and neck. Experimental results show that the proposed unified framework of deep learning super resolution has a better reconstruction effect on the data than traditional methods and provides a standard dataset and experimental benchmark for the application of deep learning super resolution to MR images.
Lung Cancer Diagnosis Based on an ANN Optimized by Improved TEO Algorithm
Shan, Rong
Rezaei, Tahereh
Computational Intelligence and Neuroscience2021Journal Article, cited 0 times
LungCT-Diagnosis
A quarter of all cancer deaths are due to lung cancer. Studies show that early diagnosis and treatment of this disease are the most effective way to increase patient life expectancy. In this paper, an automatic and optimized computer-aided detection method is proposed for lung cancer. The method first applies a preprocessing step for normalizing and denoising the input images. Kapur entropy maximization is then performed along with mathematical morphology for lung area segmentation, after which 19 GLCM features are extracted from the segmented images for the final evaluations. The higher-priority features are then selected to decrease system complexity. The feature selection is based on a new optimization design, called Improved Thermal Exchange Optimization (ITEO), which is designed to improve accuracy and convergence. The images are finally classified into healthy or cancerous cases by an artificial neural network optimized by ITEO. Simulations were compared with some well-known approaches, and the results showed the superiority of the suggested method, which achieved the highest accuracy (92.27%) among the compared methods.
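GLCM texture features of the kind used above can be computed with scikit-image. A small sketch; the paper extracts 19 GLCM features, while this shows six standard properties, and the distances and angles are assumptions:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(roi_uint8):
        """Compute a handful of GLCM texture features for a segmented
        lung ROI given as an 8-bit grayscale array."""
        glcm = graycomatrix(roi_uint8, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ['contrast', 'dissimilarity', 'homogeneity',
                 'energy', 'correlation', 'ASM']
        # Average each property over the distance/angle combinations.
        return {p: graycoprops(glcm, p).mean() for p in props}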
Multiview Self-Supervised Segmentation for OARs Delineation in Radiotherapy
Liu, C.
Zhang, X.
Si, W.
Ni, X.
Evid Based Complement Alternat Med2021Journal Article, cited 0 times
Website
Head-Neck Cetuximab
Segmentation
Machine Learning
Radiotherapy has become a common treatment option for head and neck (H&N) cancer, and organs at risk (OARs) need to be delineated to implement a high conformal dose distribution. Manual drawing of OARs is time consuming and inaccurate, so automatic delineation based on deep learning models has been proposed. However, state-of-the-art performance usually requires a decent amount of delineation, and collecting pixel-level manual delineations is labor intensive and may not be necessary for representation learning. Encouraged by the recent progress in self-supervised learning, this study proposes and evaluates a novel multiview contrastive representation learning approach to boost models from unlabelled data. The proposed learning architecture leverages three views of CTs (coronal, sagittal, and transverse planes) to collect positive and negative training samples. Specifically, a 3D CT is first projected into three 2D views (coronal, sagittal, and transverse planes), then a convolutional neural network takes the 3 views as inputs and outputs three individual representations in latent space, and finally, a contrastive loss is used to pull representations of different views of the same image closer ("positive pairs") and push representations of views from different images ("negative pairs") apart. To evaluate performance, we collected 220 CT images from H&N cancer patients. The experiment demonstrates that our method significantly improves quantitative performance over the state-of-the-art (from 83% to 86% in absolute Dice scores). Thus, our method provides a powerful and principled means to deal with the label-scarce problem.
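The contrastive objective described above follows the standard pull-together/push-apart pattern. A two-view, NT-Xent-style loss in PyTorch illustrates the mechanism; the paper uses three views, and extending the sketch to all view pairs is straightforward:

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, tau=0.1):
        """Contrastive loss over a batch of paired view embeddings,
        where (z1[i], z2[i]) are 'positive pairs'; two-view form."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        z = torch.cat([z1, z2], dim=0)                  # (2N, d)
        sim = z @ z.t() / tau                           # scaled cosine similarities
        sim.fill_diagonal_(float('-inf'))               # exclude self-similarity
        n = z1.size(0)
        targets = torch.cat([torch.arange(n, 2 * n),    # positive of z1[i] is z2[i]
                             torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)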
Automatic Detection and Segmentation of Colorectal Cancer with Deep Residual Convolutional Neural Network
Akilandeswari, A.
Sungeetha, D.
Joseph, C.
Thaiyalnayaki, K.
Baskaran, K.
Jothi Ramalingam, R.
Al-Lohedan, H.
Al-Dhayan, D. M.
Karnan, M.
Meansbo Hadish, K.
Evid Based Complement Alternat Med2022Journal Article, cited 0 times
Website
CT COLONOGRAPHY
Deep convolutional neural network (DCNN)
Early and automatic detection of colorectal tumors is essential for cancer analysis, and the same is implemented using computer-aided diagnosis (CAD). Computerized tomography (CT) images of the colon are used to identify colorectal carcinoma. Digital imaging and communication in medicine (DICOM) is a standard medical imaging format used to process and analyze images digitally. Accurate detection of tumor cells in the complex digestive tract is necessary for optimal treatment. The proposed work is divided into two phases. The first phase involves the segmentation, and the second phase is the extraction of the colon lesions with the observed segmentation parameters. A deep convolutional neural network (DCNN)-based residual network approach is applied to the 2D CT images for colon and polyp segmentation. Residual stack blocks are added to the hidden layers with short skip connections, which helps to retain spatial information. A ResNet-enabled CNN is employed in the current work to achieve complete boundary segmentation of the colon cancer region. The results obtained through segmentation serve as features for further extraction and classification of benign as well as malignant colon cancer. Performance evaluation metrics indicate that the proposed network model has effectively segmented and classified colorectal tumors with a dice score of 91.57% (on average), sensitivity of 98.28%, specificity of 98.68%, and accuracy of 98.82%.
Kidney Tumor Detection and Classification Based on Deep Learning Approaches: A New Dataset in CT Scans
Alzu’bi, Dalia
Abdullah, Malak
Hmeidi, Ismail
AlAzab, Rami
Gharaibeh, Maha
El-Heis, Mwaffaq
Almotairi, Khaled H.
Forestiero, Agostino
Hussein, Ahmad MohdAziz
Abualigah, Laith
Kumar, Senthil
Journal of Healthcare Engineering2022Journal Article, cited 0 times
Website
TCGA-KIRC
TCGA-KICH
TCGA-KIRP
CPTAC-CCRCC
Algorithm Development
Classification
C4KC-KiTS
Retrospective Studies
Kidney tumor (KT) is one of the diseases that have affected our society and is the seventh most common tumor in both men and women worldwide. The early detection of KT has significant benefits in reducing death rates, producing preventive measures that reduce effects, and overcoming the tumor. Compared to the tedious and time-consuming traditional diagnosis, automatic detection algorithms based on deep learning (DL) can save diagnosis time, improve test accuracy, reduce costs, and reduce the radiologist's workload. In this paper, we present detection models for diagnosing the presence of KTs in computed tomography (CT) scans. Toward detecting and classifying KT, we proposed 2D-CNN models; three models concern KT detection: a 2D convolutional neural network with six layers (CNN-6), a ResNet50 with 50 layers, and a VGG16 with 16 layers. The last model is for KT classification: a 2D convolutional neural network with four layers (CNN-4). In addition, a novel dataset from the King Abdullah University Hospital (KAUH) has been collected, consisting of 8,400 images of 120 adult patients who underwent CT scans for suspected kidney masses. The dataset was divided into 80% for the training set and 20% for the testing set. The accuracy results for the detection models of 2D CNN-6, ResNet50, and VGG16 reached 97%, 96%, and 60%, respectively. At the same time, the accuracy results for the classification model of the 2D CNN-4 reached 92%. Our novel models achieved promising results; they enhance the diagnosis of patient conditions with high accuracy, reducing radiologists' workload and providing them with a tool that can automatically assess the condition of the kidneys, reducing the risk of misdiagnosis. Furthermore, increasing the quality of healthcare service and early detection can change the disease's track and preserve the patient's life.
Application of Deep Learning on the Prognosis of Cutaneous Melanoma Based on Full Scan Pathology Images
Li, Anhai
Li, Xiaoyuan
Li, Wenwen
Yu, Xiaoqian
Qi, Mengmeng
Li, Ding
Biomed Res Int2022Journal Article, cited 0 times
Website
CPTAC-CM
Pathomics
*Deep Learning
Humans
*Melanoma/diagnostic imaging/pathology
Prognosis
*Skin Neoplasms/diagnostic imaging/pathology
INTRODUCTION: The purpose of this study is to use deep learning and machine learning to learn and classify patients with cutaneous melanoma with different prognoses and to explore the application value of deep learning in the prognosis of cutaneous melanoma patients. METHODS: In deep learning, VGG-19 is selected as the network architecture and learning model for learning and classification. In machine learning, deep features are extracted through the VGG-19 network architecture, and the support vector machine (SVM) model is selected for learning and classification. We compare and explore the application value of deep learning and machine learning in predicting the prognosis of patients with cutaneous melanoma. RESULTS: According to receiver operating characteristic (ROC) curves and area under the curve (AUC), the average accuracy of deep learning is higher than that of machine learning, and even the lowest accuracy is better than that of machine learning. CONCLUSION: As the amount of learning increases, the accuracy of machine learning and deep learning will increase, but for the same number of cutaneous melanoma pathology images, the accuracy of deep learning is higher. This study provides new ideas and theories for computational pathology in predicting the prognosis of patients with cutaneous melanoma.
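The machine learning arm described above, a pretrained VGG-19 used as a fixed deep-feature extractor feeding an SVM, can be sketched as follows; input preprocessing and data loading are omitted, and the variable names are placeholders:

    import torch
    from torchvision import models
    from sklearn.svm import SVC

    # Pretrained VGG-19 as a fixed feature extractor; an SVM is then
    # trained on the pooled deep features, mirroring the two-stage setup.
    vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()

    @torch.no_grad()
    def deep_features(batch):               # batch: (N, 3, 224, 224), normalized
        x = vgg.features(batch)             # convolutional feature maps
        x = vgg.avgpool(x)                  # (N, 512, 7, 7)
        return torch.flatten(x, 1).numpy()  # (N, 25088) feature vectors

    # X_train: image tensor batch, y_train: prognosis labels (assumed inputs)
    # svm = SVC(kernel='rbf').fit(deep_features(X_train), y_train)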
Integrating Radiomics with Genomics for Non-Small Cell Lung Cancer Survival Analysis
Chen, W.
Qiao, X.
Yin, S.
Zhang, X.
Xu, X.
J Oncol2022Journal Article, cited 0 times
NSCLC Radiogenomics
Radiomics
Radiogenomics
Non-Small Cell Lung Cancer (NSCLC)
PURPOSE: The objectives of our study were to assess the association of radiological imaging and gene expression with patient outcomes in non-small cell lung cancer (NSCLC) and construct a nomogram by combining selected radiomic, genomic, and clinical risk factors to improve the performance of the risk model. METHODS: A total of 116 cases of NSCLC with CT images, gene expression, and clinical factors were studied, wherein 87 patients were used as the training cohort, and 29 patients were used as an independent testing cohort. Handcrafted radiomic features and deep-learning genomic features were extracted and selected from CT images and gene expression analysis, respectively. Two risk scores were calculated through Cox regression models for each patient based on radiomic features and genomic features to predict overall survival (OS). Finally, a fusion survival model was constructed by incorporating these two risk scores and clinical factors. RESULTS: The fusion model that combined CT images, gene expression data, and clinical factors effectively stratified patients into low- and high-risk groups. The C-indexes for OS prediction were 0.85 and 0.736 in the training and testing cohorts, respectively, which was better than that based on unimodal data. CONCLUSIONS: Combining radiomics and genomics can effectively improve OS prediction for NSCLC patients.
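The per-patient risk scores described above come from Cox regression over selected features. A minimal sketch with the lifelines library; the column names and toy data are illustrative assumptions:

    import pandas as pd
    from lifelines import CoxPHFitter

    # df columns: selected radiomic/genomic features plus follow-up time
    # (months) and event indicator (1 = death); all values illustrative.
    df = pd.DataFrame({
        'feature_1': [0.2, 1.4, 0.7, 2.1, 0.9, 1.8],
        'feature_2': [3.1, 0.9, 1.8, 0.4, 2.5, 1.1],
        'time':      [34.0, 12.5, 28.0, 6.0, 40.0, 9.5],
        'event':     [0, 1, 0, 1, 0, 1],
    })
    cph = CoxPHFitter(penalizer=0.1)
    cph.fit(df, duration_col='time', event_col='event')
    risk_score = cph.predict_partial_hazard(df)   # per-patient risk score
    print(cph.concordance_index_)                 # C-index on training data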
Detecting MRI-Invisible Prostate Cancers Using a Weakly Supervised Deep Learning Model
Zheng, Y.
Zhang, J.
Huang, D.
Hao, X.
Qin, W.
Liu, Y.
Int J Biomed Imaging2024Journal Article, cited 0 times
Website
Prostate-MRI-US-Biopsy
Magnetic Resonance Imaging (MRI)
Weakly supervised U-Net
Computer Aided Detection (CADe)
BACKGROUND: MRI is an important tool for accurate detection and targeted biopsy of prostate lesions. However, the imaging appearances of some prostate cancers are similar to those of the surrounding normal tissue on MRI, and these are referred to as MRI-invisible prostate cancers (MIPCas). The detection of MIPCas remains challenging and requires extensive systematic biopsy for identification. In this study, we developed a weakly supervised UNet (WSUNet) to detect MIPCas. METHODS: The study included 777 patients (training set: 600; testing set: 177), all of whom underwent comprehensive prostate biopsies using an MRI-ultrasound fusion system. MIPCas were identified in MRI based on the Gleason grade (≥7) from known systematic biopsy results. RESULTS: The WSUNet model underwent validation through systematic biopsy in the testing set with an AUC of 0.764 (95% CI: 0.728-0.798). Furthermore, WSUNet exhibited a statistically significant precision improvement of 91.3% (p < 0.01) over conventional systematic biopsy methods in the testing set. This improvement resulted in a substantial 47.6% (p < 0.01) decrease in unnecessary biopsy needles, while maintaining the same number of positively identified cores as in the original systematic biopsy. CONCLUSIONS: In conclusion, the proposed WSUNet could effectively detect MIPCas, thereby reducing unnecessary biopsies.
Analysis of Bladder Cancer Staging Prediction Using Deep Residual Neural Network, Radiomics, and RNA-Seq from High-Definition CT Images
Zhou, Yao
Zheng, Xingju
Sun, Zhucheng
Wang, Bo
Liu, Hongda
Genetics Research2024Journal Article, cited 0 times
Website
TCGA-BLCA
PyRadiomics
Cox regression
LASSO
Manual segmentation
Radiomic features
Radiogenomics
Computed Tomography (CT)
Nomograms
Bladder cancer has recently seen an alarming increase in global diagnoses, ascending as a predominant cause of cancer-related mortalities. Given this pressing scenario, there is a burgeoning need to identify effective biomarkers for both the diagnosis and therapeutic guidance of bladder cancer. This study focuses on evaluating the potential of high-definition computed tomography (CT) imagery coupled with RNA-sequencing analysis to accurately predict bladder tumor stages, utilizing deep residual networks. Data for this study, including CT images and RNA-Seq datasets for 82 high-grade bladder cancer patients, were sourced from the TCIA and TCGA databases. We employed Cox and lasso regression analyses to determine radiomics and gene signatures, leading to the identification of a three-factor radiomics signature and a four-gene signature in our bladder cancer cohort. ROC curve analyses underscored the strong predictive capacities of both these signatures. Furthermore, we formulated a nomogram integrating clinical features, radiomics, and gene signatures. This nomogram’s AUC scores stood at 0.870, 0.873, and 0.971 for 1-year, 3-year, and 5-year predictions, respectively. Our model, leveraging radiomics and gene signatures, presents significant promise for enhancing diagnostic precision in bladder cancer prognosis, advocating for its clinical adoption.
TCIApathfinder: an R client for The Cancer Imaging Archive REST API
Russell, Pamela
Fountain, Kelly
Wolverton, Dulcy
Ghosh, Debashis
Cancer research2018Journal Article, cited 1 times
Website
TCIA-API
R
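TCIApathfinder wraps the public TCIA REST API, so the same queries can be issued directly over HTTP. A hedged Python sketch; the base URL and endpoint names reflect the v4 API the package targets and should be treated as assumptions, to be verified against the current TCIA API documentation before use:

    import requests

    # Base URL and endpoints are assumptions based on the documented
    # TCIA/NBIA REST API that TCIApathfinder wraps.
    BASE = "https://services.cancerimagingarchive.net/services/v4/TCIA/query"

    resp = requests.get(f"{BASE}/getSeries",
                        params={"Collection": "TCGA-BLCA", "format": "json"},
                        timeout=60)
    resp.raise_for_status()
    for series in resp.json()[:3]:
        print(series.get("SeriesInstanceUID"), series.get("Modality"))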
Comprehensive Analysis of Radiomic Datasets by RadAR
Benelli, Matteo
Barucci, Andrea
Zoppetti, Nicola
Calusi, Silvia
Redapi, Laura
Della Gala, Giuseppe
Piffer, Stefano
Bernardi, Luca
Fusi, Franco
Pallotta, Stefania
Cancer research2020Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
NSCLC-Radiomics
OPC-Radiomics
Quantitative analysis of biomedical images, referred to as radiomics, is emerging as a promising approach to facilitate clinical decisions and improve patient stratification. The typical radiomic workflow includes image acquisition, segmentation, feature extraction, and analysis of high-dimensional datasets. While procedures for primary radiomic analyses have been established in recent years, processing the resulting radiomic datasets remains a challenge due to the lack of specific tools for doing so. Here we present RadAR (Radiomics Analysis with R), a new software to perform comprehensive analysis of radiomic features. RadAR allows users to process radiomic datasets in their entirety, from data import to feature processing and visualization, and implements multiple statistical methods for analysis of these data. We used RadAR to analyze the radiomic profiles of more than 850 patients with cancer from publicly available datasets and showed that it was able to recapitulate expected results. These results demonstrate RadAR as a reliable and valuable tool for the radiomics community. SIGNIFICANCE: A new computational tool performs comprehensive analysis of high-dimensional radiomic datasets, recapitulating expected results in the analysis of radiomic profiles of >850 patients with cancer from independent datasets.
Phase I trial of preoperative chemoradiation plus sorafenib for high-risk extremity soft tissue sarcomas with dynamic contrast-enhanced MRI correlates
Meyer, Janelle M
Perlewitz, Kelly S
Hayden, James B
Doung, Yee-Cheen
Hung, Arthur Y
Vetto, John T
Pommier, Rodney F
Mansoor, Atiya
Beckett, Brooke R
Tudorica, Alina
Clinical Cancer Research2013Journal Article, cited 41 times
Website
Soft tissue sarcoma
Unsupervised clustering of quantitative image phenotypes reveals breast cancer subtypes with distinct prognoses and molecular pathways
Wu, Jia
Cui, Yi
Sun, Xiaoli
Cao, Guohong
Li, Bailiang
Ikeda, Debra M
Kurian, Allison W
Li, Ruijiang
Clinical Cancer Research2017Journal Article, cited 14 times
Website
Radiogenomics
BREAST
DCE-MRI
TCGA-BRCA
T2-FLAIR Mismatch, an Imaging Biomarker for IDH and 1p/19q Status in Lower-grade Gliomas: A TCGA/TCIA Project
Patel, S. H.
Poisson, L. M.
Brat, D. J.
Zhou, Y.
Cooper, L.
Snuderl, M.
Thomas, C.
Franceschi, A. M.
Griffith, B.
Flanders, A. E.
Golfinos, J. G.
Chi, A. S.
Jain, R.
Clin Cancer Res2017Journal Article, cited 320 times
Website
TCGA-LGG
Radiogenomics
Radiomic features
1p/19q co-deletion
Isocitrate Dehydrogenase/genetics
Adult
Aged
Aged, 80 and over
Biomarkers
Brain Neoplasms/*diagnosis/*genetics/mortality
*Chromosome Aberrations
*Chromosomes, Human, Pair 1
*Chromosomes, Human, Pair 19
Female
Glioma/*diagnosis/*genetics/mortality
Humans
Image Processing, Computer-Assisted
*Magnetic Resonance Imaging
Male
Middle Aged
Neoplasm Grading
Neoplasm Staging
Prognosis
Young Adult
Purpose: Lower-grade gliomas (WHO grade II/III) have been classified into clinically relevant molecular subtypes based on IDH and 1p/19q mutation status. The purpose was to investigate whether T2/FLAIR MRI features could distinguish between lower-grade glioma molecular subtypes. Experimental Design: MRI scans from the TCGA/TCIA lower grade glioma database (n = 125) were evaluated by two independent neuroradiologists to assess (i) presence/absence of homogenous signal on T2WI; (ii) presence/absence of "T2-FLAIR mismatch" sign; (iii) sharp or indistinct lesion margins; and (iv) presence/absence of peritumoral edema. Metrics with moderate-substantial agreement underwent consensus review and were correlated with glioma molecular subtypes. Somatic mutation, DNA copy number, DNA methylation, gene expression, and protein array data from the TCGA lower-grade glioma database were analyzed for molecular-radiographic associations. A separate institutional cohort (n = 82) was analyzed to validate the T2-FLAIR mismatch sign. Results: Among TCGA/TCIA cases, interreader agreement was calculated for lesion homogeneity [kappa = 0.234 (0.111-0.358)], T2-FLAIR mismatch sign [kappa = 0.728 (0.538-0.918)], lesion margins [kappa = 0.292 (0.135-0.449)], and peritumoral edema [kappa = 0.173 (0.096-0.250)]. All 15 cases that were positive for the T2-FLAIR mismatch sign were IDH-mutant, 1p/19q non-codeleted tumors (P < 0.0001; PPV = 100%, NPV = 54%). Analysis of the validation cohort demonstrated substantial interreader agreement for the T2-FLAIR mismatch sign [kappa = 0.747 (0.536-0.958)]; all 10 cases positive for the T2-FLAIR mismatch sign were IDH-mutant, 1p/19q non-codeleted tumors (P < 0.00001; PPV = 100%, NPV = 76%). Conclusions: Among lower-grade gliomas, T2-FLAIR mismatch sign represents a highly specific imaging biomarker for the IDH-mutant, 1p/19q non-codeleted molecular subtype. Clin Cancer Res; 23(20); 6078-85. (c)2017 AACR.
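The reader-agreement and predictive-value statistics reported above (Cohen's kappa, PPV, NPV) are straightforward to compute. A small illustration with scikit-learn on made-up ratings:

    from sklearn.metrics import cohen_kappa_score, confusion_matrix

    # Reader ratings for the T2-FLAIR mismatch sign (1 = present) and the
    # molecular reference standard; these values are illustrative only.
    reader1 = [1, 0, 0, 1, 0, 1, 0, 0]
    reader2 = [1, 0, 1, 1, 0, 1, 0, 0]
    truth   = [1, 0, 0, 1, 0, 1, 0, 1]  # 1 = IDH-mutant, 1p/19q non-codeleted

    print("kappa:", cohen_kappa_score(reader1, reader2))  # interreader agreement

    tn, fp, fn, tp = confusion_matrix(truth, reader1).ravel()
    print("PPV:", tp / (tp + fp), "NPV:", tn / (tn + fn))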
Residual Convolutional Neural Network for the Determination of IDH Status in Low-and High-Grade Gliomas from MR Imaging
Chang, Ken
Bai, Harrison X
Zhou, Hao
Su, Chang
Bi, Wenya Linda
Agbodza, Ena
Kavouridis, Vasileios K
Senders, Joeky T
Boaro, Alessandro
Beers, Andrew
Clinical Cancer Research2018Journal Article, cited 26 times
Website
TCGA-LGG
Convolutional Neural Network (CNN)
A Genetic Polymorphism in CTLA-4 Is Associated with Overall Survival in Sunitinib-Treated Patients with Clear Cell Metastatic Renal Cell Carcinoma
Liu, X.
Swen, J. J.
Diekstra, M. H. M.
Boven, E.
Castellano, D.
Gelderblom, H.
Mathijssen, R. H. J.
Vermeulen, S. H.
Oosterwijk, E.
Junker, K.
Roessler, M.
Alexiusdottir, K.
Sverrisdottir, A.
Radu, M. T.
Ambert, V.
Eisen, T.
Warren, A.
Rodriguez-Antona, C.
Garcia-Donas, J.
Bohringer, S.
Koudijs, K. K. M.
Kiemeney, Lalm
Rini, B. I.
Guchelaar, H. J.
Clin Cancer Res2018Journal Article, cited 0 times
Website
TCGA-KIRC
Radiogenomics
tyrosine kinase inhibitors (TKI)
clear cell renal cell carcinoma (ccRCC)
Purpose: The survival of patients with clear cell metastatic renal cell carcinoma (cc-mRCC) has improved substantially since the introduction of tyrosine kinase inhibitors (TKI). Given that TKIs interact with immune responses, we investigated whether polymorphisms of genes involved in immune checkpoints are related to the clinical outcome of cc-mRCC patients treated with sunitinib as first TKI. Experimental Design: Twenty-seven single-nucleotide polymorphisms (SNP) in CD274 (PD-L1), PDCD1 (PD-1), and CTLA-4 were tested for a possible association with progression-free survival (PFS) and overall survival (OS) in a discovery cohort of 550 sunitinib-treated cc-mRCC patients. SNPs with a significant association (P < 0.05) were tested in an independent validation cohort of 138 sunitinib-treated cc-mRCC patients. Finally, data of the discovery and validation cohort were pooled for meta-analysis. Results: CTLA-4 rs231775 and CD274 rs7866740 showed significant associations with OS in the discovery cohort after correction for age, gender, and Heng prognostic risk group [HR, 0.84; 95% confidence interval (CI), 0.72-0.98; P = 0.028, and HR, 0.73; 95% CI, 0.54-0.99; P = 0.047, respectively]. In the validation cohort, the associations of both SNPs with OS did not meet the significance threshold of P < 0.05. After meta-analysis, CTLA-4 rs231775 showed a significant association with OS (HR, 0.83; 95% CI, 0.72-0.95; P = 0.008). Patients with the GG genotype had longer OS (35.1 months) compared with patients with an AG (30.3 months) or AA genotype (24.3 months). No significant associations with PFS were found. Conclusions: The G-allele of rs231775 in the CTLA-4 gene is associated with an improved OS in sunitinib-treated cc-mRCC patients and could potentially be used as a prognostic biomarker. Clin Cancer Res; 1-7. (c)2018 AACR.
Machine Learning-Based Radiomics for Molecular Subtyping of Gliomas
Lu, Chia-Feng
Hsu, Fei-Ting
Hsieh, Kevin Li-Chun
Kao, Yu-Chieh Jill
Cheng, Sho-Jen
Hsu, Justin Bo-Kai
Tsai, Ping-Huei
Chen, Ray-Jade
Huang, Chao-Ching
Yen, Yun
Clinical Cancer Research2018Journal Article, cited 1 times
Website
TCGA-GBM
TCGA-LGG
glioma
glioblastoma Multiforme (GBM)
IDH mutation
1p/19q codeletion
Imaging phenotypes of breast cancer heterogeneity in pre-operative breast Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) scans predict 10-year recurrence
Chitalia, Rhea
Rowland, Jennifer
McDonald, Elizabeth S
Pantalone, Lauren
Cohen, Eric A
Gastounioti, Aimilia
Feldman, Michael
Schnall, Mitchell
Conant, Emily
Kontos, Despina
Clinical Cancer Research2019Journal Article, cited 0 times
Website
DCE-MRI
Breast
Radiomic feature
Predicting the 1p/19q co-deletion status of presumed low grade glioma with an externally validated machine learning algorithm
van der Voort, Sebastian R
Incekara, Fatih
Wijnenga, Maarten MJ
Kapsas, Georgios
Gardeniers, Mayke
Schouten, Joost W
Starmans, Martijn PA
Tewarie, Rishie Nandoe
Lycklama, Geert J
French, Pim J
Clinical Cancer Research2019Journal Article, cited 0 times
LGG-1p19qDeletion
glioma
radiogenomics
Radiogenomic-Based Survival Risk Stratification of Tumor Habitat on Gd-T1w MRI Is Associated with Biological Processes in Glioblastoma
Beig, Niha
Bera, Kaustav
Prasanna, Prateek
Antunes, Jacob
Correa, Ramon
Singh, Salendra
Saeed Bamashmos, Anas
Ismail, Marwa
Braman, Nathaniel
Verma, Ruchika
Hill, Virginia B
Statsevych, Volodymyr
Ahluwalia, Manmeet S
Varadan, Vinay
Madabhushi, Anant
Tiwari, Pallavi
Clin Cancer Res2020Journal Article, cited 0 times
Website
TCGA-GBM
Ivy GAP
Magnetic Resonance Imaging (MRI)
BRAIN
Glioblastoma Multiforme (GBM)
Radiomics
Radiogenomics
PURPOSE: To (i) create a survival risk score using radiomic features from the tumor habitat on routine MRI to predict progression-free survival (PFS) in glioblastoma and (ii) obtain a biological basis for these prognostic radiomic features, by studying their radiogenomic associations with molecular signaling pathways. EXPERIMENTAL DESIGN: Two hundred three patients with pretreatment Gd-T1w, T2w, T2w-FLAIR MRI were obtained from 3 cohorts: The Cancer Imaging Archive (TCIA; n = 130), Ivy GAP (n = 32), and Cleveland Clinic (n = 41). Gene-expression profiles of corresponding patients were obtained for TCIA cohort. For every study, following expert segmentation of tumor subcompartments (necrotic core, enhancing tumor, peritumoral edema), 936 3D radiomic features were extracted from each subcompartment across all MRI protocols. Using Cox regression model, radiomic risk score (RRS) was developed for every protocol to predict PFS on the training cohort (n = 130) and evaluated on the holdout cohort (n = 73). Further, Gene Ontology and single-sample gene set enrichment analysis were used to identify specific molecular signaling pathway networks associated with RRS features. RESULTS: Twenty-five radiomic features from the tumor habitat yielded the RRS. A combination of RRS with clinical (age and gender) and molecular features (MGMT and IDH status) resulted in a concordance index of 0.81 (P < 0.0001) on training and 0.84 (P = 0.03) on the test set. Radiogenomic analysis revealed associations of RRS features with signaling pathways for cell differentiation, cell adhesion, and angiogenesis, which contribute to chemoresistance in GBM. CONCLUSIONS: Our findings suggest that prognostic radiomic features from routine Gd-T1w MRI may also be significantly associated with key biological processes that affect response to chemotherapy in GBM.
Short-and long-term lung cancer risk associated with noncalcified nodules observed on low-dose CT
Pinsky, Paul F
Nath, P Hrudaya
Gierada, David S
Sonavane, Sushil
Szabo, Eva
Cancer prevention research2014Journal Article, cited 10 times
Website
NLST
LUNG
Nodule classification
Multi-institutional Prognostic Modeling in Head and Neck Cancer: Evaluating Impact and Generalizability of Deep Learning and Radiomics
Kazmierski, Michal
Welch, Mattea
Kim, Sejin
McIntosh, Chris
Rey-McIntyre, Katrina
Huang, Shao Hui
Patel, Tirth
Tadic, Tony
Milosevic, Michael
Liu, Fei-Fei
Ryczkowski, Adam
Kazmierska, Joanna
Ye, Zezhong
Plana, Deborah
Aerts, Hugo J.W.L.
Kann, Benjamin H.
Bratman, Scott V.
Hope, Andrew J.
Haibe-Kains, Benjamin
Cancer Research Communications2023Journal Article, cited 0 times
Head-Neck-Radiomics-HN1
HNSCC
radiomics
Deep Learning
Artificial intelligence (AI) and machine learning (ML) are becoming critical in developing and deploying personalized medicine and targeted clinical trials. Recent advances in ML have enabled the integration of wider ranges of data including both medical records and imaging (radiomics). However, the development of prognostic models is complex as no modeling strategy is universally superior to others and validation of developed models requires large and diverse datasets to demonstrate that prognostic models developed (regardless of method) from one dataset are applicable to other datasets both internally and externally. Using a retrospective dataset of 2,552 patients from a single institution and a strict evaluation framework that included external validation on three external patient cohorts (873 patients), we crowdsourced the development of ML models to predict overall survival in head and neck cancer (HNC) using electronic medical records (EMR) and pretreatment radiological images. To assess the relative contributions of radiomics in predicting HNC prognosis, we compared 12 different models using imaging and/or EMR data. The model with the highest accuracy used multitask learning on clinical data and tumor volume, achieving high prognostic accuracy for 2-year and lifetime survival prediction, outperforming models relying on clinical data only, engineered radiomics, or complex deep neural network architecture. However, when we attempted to extend the best performing models from this large training dataset to other institutions, we observed significant reductions in the performance of the model in those datasets, highlighting the importance of detailed population-based reporting for AI/ML model utility and stronger validation frameworks. We have developed highly prognostic models for overall survival in HNC using EMRs and pretreatment radiological images based on a large, retrospective dataset of 2,552 patients from our institution. Diverse ML approaches were used by independent investigators. The model with the highest accuracy used multitask learning on clinical data and tumor volume. External validation of the top three performing models on three datasets (873 patients) with significant differences in the distributions of clinical and demographic variables demonstrated significant decreases in model performance. ML combined with simple prognostic factors outperformed multiple advanced CT radiomics and deep learning methods. ML models provided diverse solutions for prognosis of patients with HNC but their prognostic value is affected by differences in patient populations and require extensive validation.
Radiogenomic associations in clear cell renal cell carcinoma: an exploratory study
Liu, D.
Dani, K.
Reddy, S. S.
Lei, X.
Demirjian, N.
Hwang, D.
Varghese, B. A.
Rhie, S. K.
Yap, F. Y.
Quinn, D. I.
Siddiqi, I.
Aron, M.
Vaishampayan, U.
Zahoor, H.
Cen, S. Y.
Gill, I. S.
Duddalwar, V.
Oncology2023Journal Article, cited 0 times
Website
TCGA-KIRC
radiomics
radiogenomics
Machine learning
clear cell renal cell carcinoma
MATLAB
Random Forest
AdaBoost
Elastic Net
OBJECTIVES: This study investigates how quantitative texture analysis can be used to non-invasively identify novel radiogenomic correlations with Clear Cell Renal Cell Carcinoma (ccRCC) biomarkers. METHODS: The Cancer Genome Atlas-Kidney Renal Clear Cell Carcinoma (TCGA-KIRC) open-source database was used to identify 190 sets of patient genomic data that had corresponding multiphase contrast-enhanced CT images in The Cancer Imaging Archive (TCIA-KIRC). 2824 radiomic features spanning fifteen texture families were extracted from CT images using a custom-built MATLAB software package. Robust radiomic features with strong inter-scanner reproducibility were selected. Random Forest (RF), AdaBoost, and Elastic Net machine learning (ML) algorithms evaluated the ability of the selected radiomic features to predict the presence of 12 clinically relevant molecular biomarkers identified from literature. ML analysis was repeated with cases stratified by stage (I/II vs. III/IV) and grade (1/2 vs. 3/4). 10-fold cross validation was used to evaluate model performance. RESULTS: Before stratification by tumor grade and stage, radiomics predicted the presence of several biomarkers with weak discrimination (AUC 0.60-0.68). Once stratified, radiomics predicted KDM5C, SETD2, PBRM1, and mTOR mutation status with acceptable to excellent predictive discrimination (AUC ranges from 0.70 to 0.86). CONCLUSIONS: Radiomic texture analysis can potentially identify a variety of clinically relevant biomarkers in patients with ccRCC and may have a prognostic implication.
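A minimal sketch of the evaluation pattern described above, a Random Forest classifier scored with 10-fold cross-validation on AUC, using scikit-learn; the synthetic arrays stand in for the robust radiomic features and biomarker labels:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(190, 40))        # stand-in for robust radiomic features
    y = rng.integers(0, 2, size=190)      # stand-in for mutation-status labels

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"AUC: {aucs.mean():.3f} +/- {aucs.std():.3f}")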
Automatic Prostate Lesions Detection on MR Images Based on the Ising Model
Reis, Artur Bernardo Silva
Silva, Aristófanes Corrêa
de Paiva, Anselmo Cardoso
Gattass, Marcelo
2019Journal Article, cited 0 times
ISBI-MR-Prostate-2013
Prostate cancer is the second most prevalent type of cancer in the male population worldwide. Prostate imaging tests have been adopted for prevention, diagnosis, and treatment. It is known that early detection increases the chances of effective treatment, improving the prognosis of the disease. This paper proposes an automatic methodology for prostate lesion detection. It consists of the following steps: extracting lesion candidates with the Wolff algorithm; feature extraction using Ising model measures; and, finally, a support vector machine for classifying lesion versus healthy tissue. The methodology was validated using a set of 28 exams containing lesion markings and obtained a sensitivity of 95.92%, specificity of 93.89% and accuracy of 94.16%. These results are promising, as they surpass those of the compared methods.
Robust Computer-Aided Detection of Pulmonary Nodules from Chest Computed Tomography
Abduh, Zaid
Wahed, Manal Abdel
Kadah, Yasser M
Journal of Medical Imaging and Health Informatics2016Journal Article, cited 5 times
Website
LIDC-IDRI
Computer Assisted Detection (CAD)
Classification
LUNG
Detection of pulmonary nodules in chest computed tomography scans plays an important role in the early diagnosis of lung cancer. A simple yet effective computer-aided detection system is developed to distinguish pulmonary nodules in chest CT scans. The proposed system includes feature extraction, normalization, selection, and classification steps. One hundred forty-nine gray-level statistical features are extracted from selected regions of interest. Min-max normalization is applied, followed by a sequential forward feature selection technique with a logistic regression model as the criterion function, which selected an optimal set of five features for classification. The classification step was done using nearest neighbor and support vector machine (SVM) classifiers with separate training and testing sets. Several measures were used to evaluate system performance, including the area under the ROC curve (AUC), sensitivity, specificity, precision, accuracy, F1 score, and Cohen's kappa. Excellent performance with high sensitivity and specificity is reported using data from two reference datasets as compared to previous work.
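The normalization, forward selection, and classification pipeline maps naturally onto scikit-learn. A sketch; the selector's logistic-regression criterion and five-feature target follow the abstract, while the cross-validation setting is an assumption:

    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVC

    # Min-max normalization, forward selection of 5 features with a
    # logistic-regression criterion, then an SVM classifier.
    pipe = Pipeline([
        ("scale", MinMaxScaler()),
        ("select", SequentialFeatureSelector(
            LogisticRegression(max_iter=1000),
            n_features_to_select=5, direction="forward", cv=5)),
        ("clf", SVC(kernel="rbf")),
    ])
    # pipe.fit(X_train, y_train); pipe.score(X_test, y_test)  # assumed data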
An improved method of colon segmentation in computed tomography colonography images using domain knowledge
Manjunath, KN
Siddalingaswamy, PC
Gopalakrishna Prabhu, K
Journal of Medical Imaging and Health Informatics2016Journal Article, cited 0 times
CT Colonography
colon
Study on Prognosis Factors of Non-Small Cell Lung Cancer Based on CT Image Features
Lu, Xiaoteng
Gong, Jing
Nie, Shengdong
Journal of Medical Imaging and Health Informatics2019Journal Article, cited 0 times
NSCLC-Radiomics
LUNG
This study aims to investigate the prognostic factors of non-small cell lung cancer (NSCLC) based on CT image features and to develop a new quantitative image-feature prognosis approach using CT images. Firstly, lung tumors were segmented and image features were extracted. Secondly, the Kaplan-Meier method was used for univariate survival analysis, and multivariate survival analysis was carried out with a Cox regression model. Thirdly, the SMOTE algorithm was used to balance the feature data. Finally, classifiers based on WEKA were established to test the prognostic ability of the independent prognostic factors. Univariate analysis showed that six features had a significant influence on patients' prognosis. After multivariate analysis, angular second moment, srhge and volume were significantly related to the survival of NSCLC patients (P < 0.05). According to the classifier results, these three features could provide a good prognosis prediction for NSCLC, with a best classification accuracy of 78.4%. The results suggest that angular second moment, srhge and volume are high-potential independent prognostic factors of NSCLC.
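The survival-analysis steps above (Kaplan-Meier univariate screening with a log-rank comparison) can be reproduced with the lifelines library. A sketch on illustrative data, dichotomizing one of the reported features (srhge):

    import pandas as pd
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # Illustrative survival data split by a dichotomized image feature.
    df = pd.DataFrame({
        'months': [10, 22, 7, 30, 14, 25, 5, 18],
        'death':  [1, 0, 1, 0, 1, 0, 1, 1],
        'high_srhge': [1, 0, 1, 0, 1, 0, 1, 0],
    })
    hi, lo = df[df.high_srhge == 1], df[df.high_srhge == 0]

    kmf = KaplanMeierFitter()
    kmf.fit(hi['months'], event_observed=hi['death'], label='high srhge')

    result = logrank_test(hi['months'], lo['months'],
                          event_observed_A=hi['death'],
                          event_observed_B=lo['death'])
    print(result.p_value)  # univariate significance, as in the study's workflow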
A Novel Algorithm for Segmentation of Solitary Pulmonary Nodules in Chest Computed Tomography Based on Three-Dimensional Connected Voxels
Zhang, Chence
Shen, Yi
Kong, Qian
Wei, Yucheng
Zhang, Bingsen
Duan, Chongfeng
Li, Nan
Journal of Medical Imaging and Health Informatics2019Journal Article, cited 0 times
LIDC-IDRI
Background: With the increasing incidence of lung cancer, it is prudent to screen individuals at high risk. Solitary pulmonary nodules (SPNs) are an indication of small tumors or early stages of disease. Therefore, accurate detection of SPNs is important to both clinicians and radiologists. Since a large number of computed tomography (CT) scans are acquired during lung cancer screening, there is an urgent need for new automated techniques to detect SPNs. Methods: A novel algorithm for segmentation of SPNs in CT scans based on three-dimensional connected voxels (3DCVs) can be used to screen out potential patients with SPNs. 120 cases of CT scans from a public database (100 positive cases with nodules and 20 negative cases without nodules) and 30 negative cases from routine CT scans completed in a hospital were used to test the algorithm. The algorithm is based on the fact that most pulmonary nodules are solitary at their early stages. First, suitable thresholds for CT values are found to convert pulmonary nodules, normal tissues and air spaces in each chest CT slice into black-and-white images. The slices are then stacked in their original physical order. This produces a three-dimensional (3D) matrix in which pulmonary nodules and normal tissues each construct their own 3DCVs. Results: Of the 100 positive cases, 93 showed positive detection of SPNs and 7 did not. Of the 50 negative cases, 48 returned a negative result and 2 showed a positive result. In this study, the sensitivity is 93% and the specificity is 96%, with a 4% false-positive rate (FPR). Conclusions: This algorithm can be used to screen out positive chest CT scans efficiently, increasing efficiency by two to three times compared with manual inspection and detection.
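The core of the method, thresholding CT values and labeling three-dimensional connected voxels, can be prototyped with SciPy. A sketch; the HU window and component-size cutoffs are assumptions for illustration, not the paper's values:

    import numpy as np
    from scipy import ndimage

    def candidate_nodules(volume_hu, lo=-400, hi=200, min_vox=10, max_vox=5000):
        """Threshold a CT volume (Hounsfield units) and label 3D connected
        voxel components; size limits separate small solitary components
        from large connected normal tissue."""
        mask = (volume_hu > lo) & (volume_hu < hi)   # binarize each slice
        labels, n = ndimage.label(mask)              # 3D connected components
        sizes = np.bincount(labels.ravel())
        keep = [i for i in range(1, n + 1) if min_vox <= sizes[i] <= max_vox]
        return labels, keep                          # candidate component IDs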
A Separate 3D Convolutional Neural Network Architecture for 3D Medical Image Semantic Segmentation
Dong, Shidu
Liu, Zhi
Wang, Huaqiu
Zhang, Yihao
Cui, Shaoguo
Journal of Medical Imaging and Health Informatics2019Journal Article, cited 0 times
BraTS-TCGA-LGG
Machine Learning
To exploit three-dimensional (3D) context information and improve 3D medical image semantic segmentation, we propose a separate 3D (S3D) convolution neural network (CNN) architecture. First, a two-dimensional (2D) CNN is used to extract the 2D features of each slice in the xy-plane of 3D medical images. Second, one-dimensional (1D) features reassembled from the 2D features along the z-axis are input into a 1D-CNN and are then classified feature-wise. Analysis shows that S3D-CNN has lower time complexity, fewer parameters and less memory space requirements than other 3D-CNNs with a similar structure. As an example, we extend the deep convolutional encoder-decoder architecture (SegNet) to S3D-SegNet for brain tumor image segmentation. We also propose a method based on priority queues and the Dice loss function to address the class imbalance in medical image segmentation. The experimental results show the following: (1) S3D-SegNet extended from SegNet can improve brain tumor image segmentation. (2) The proposed imbalance accommodation method can increase the speed of training convergence and reduce the negative impact of the imbalance. (3) S3D-SegNet with the proposed imbalance accommodation method offers performance comparable to that of some state-of-the-art 3D-CNNs and experts in brain tumor image segmentation.
ECIDS-Enhanced Cancer Image Diagnosis and Segmentation Using Artificial Neural Networks and Active Contour Modelling
Kavitha, M. S.
Shanthini, J.
Bhavadharini, R. M.
Journal of Medical Imaging and Health Informatics2020Journal Article, cited 0 times
LIDC-IDRI
MATLAB
In the present decade, image processing techniques have been extensively utilized in various medical image diagnoses, specifically in dealing with cancer images for early detection and treatment. Image quality and accuracy are the significant factors to be considered while analyzing images for cancer diagnosis. With that in mind, in this paper an Enhanced Cancer Image Diagnosis and Segmentation (ECIDS) framework has been developed for effective detection and segmentation of lung cancer cells. Initially, the computed tomography (CT) lung image is denoised using a kernel-based global denoising function. The noise-free lung images are then passed to feature extraction. The images are further classified into normal and abnormal classes using feed-forward artificial neural network classification. The classified lung cancer images are then segmented using active contour modelling with reduced gradient, and the segmented cancer images are passed on for further medical processing. The framework was evaluated in MATLAB using the clinical LIDC-IDRI lung CT dataset. The results are analyzed and discussed based on performance evaluation metrics such as energy, entropy, correlation and homogeneity, which are involved in effective classification.
Segmentation of Gliomas Based on a Double-Pathway Residual Convolution Neural Network Using Multi-Modality Information
Pan, Mingyuan
Shi, Yonghong
Song, Zhijian
Journal of Medical Imaging and Health Informatics2020Journal Article, cited 0 times
BraTS-TCGA-GBM
The automatic segmentation of brain tumors in magnetic resonance (MR) images is very important in diagnosis, radiotherapy planning, surgical navigation and several other clinical processes. As the location, size, shape and boundary of gliomas are heterogeneous, segmenting gliomas and intratumoral structures is very difficult. Besides, the multi-center issue makes it more challenging, as multimodal brain glioma images (such as T1, T2, fluid-attenuated inversion recovery (FLAIR), and T1c images) come from different radiation centers. This paper presents a multimodal, multi-scale, double-pathway, 3D residual convolution neural network (CNN) for automatic glioma segmentation. In the pre-processing step, a robust gray-level normalization method is proposed to solve the multi-center problem that the intensity range from different centers varies a lot. Then, a double-pathway 3D architecture based on the DeepMedic toolkit is trained using multi-modality information to fuse local and context features. In the post-processing step, a fully connected conditional random field (CRF) is built to improve performance, filling and connecting isolated segmentations and holes. Experiments on the Multimodal Brain Tumor Segmentation (BRATS) 2017 and 2019 datasets showed that this method can delineate the whole tumor with a Dice coefficient, sensitivity and positive predictive value (PPV) of 0.88, 0.89 and 0.88, respectively. As for the segmentation of the tumor core and the enhancing area, the sensitivity reached 0.80. The results indicate that this method can segment gliomas and intratumoral structures from multimodal MR images accurately, and it possesses clinical practice value.
Breast cancer cell-derived microRNA-155 suppresses tumor progression via enhancing immune cell recruitment and anti-tumor function
Wang, Junfeng
Wang, Quanyi
Guan, Yinan
Sun, Yulu
Wang, Xiaozhi
Lively, Kaylie
Wang, Yuzhen
Luo, Ming
Kim, Julian A
Murphy, E Angela
The Journal of Clinical Investigation2022Journal Article, cited 0 times
Website
TIL-WSI-TCGA
TCGA-BRCA
Estimation for finite mixture of simplex models: applications to biomedical data
The simplex distribution has proved useful for directly modelling double-bounded variables. Yet, it is not sufficient for multimodal distributions. This article addresses the problem of estimating a density when data are restricted to the (0,1) interval and contain several modes. In particular, we propose a simplex mixture model approach to model this kind of data. In order to estimate the parameters of the model, an Expectation Maximization (EM) algorithm is developed. The parameter estimation performance is evaluated through simulation studies. Models are explored using two real datasets: (i) gene expression data of patients' survival times and the relation to adenocarcinoma and (ii) magnetic resonance images (MRI), with a view to segmentation. In the latter case, given that the data contain zeros, the main model is modified to consider the zero-inflated setting.
MR and mammographic imaging features of HER2-positive breast cancers according to hormone receptor status: a retrospective comparative study
Song, Sung Eun
Bae, Min Sun
Chang, Jung Min
Cho, Nariya
Ryu, Han Suk
Moon, Woo Kyung
Acta Radiologica2016Journal Article, cited 2 times
Website
TCGA-BRCA
Background Human epidermal growth factor receptor 2-positive (HER2+) breast cancer has two distinct subtypes according to hormone receptor (HR) status. Survival, pattern of recurrence, and treatment response differ between HR-/HER2+ and HR+/HER2+ cancers. Purpose To investigate imaging and clinicopathologic features of HER2+ cancers and their correlation with HR expression. Material and Methods Between 2011 and 2013, 252 consecutive patients with 252 surgically confirmed HER2+ cancers (125 HR- and 127 HR+) were included. Two experienced breast radiologists blinded to the clinicopathologic findings reviewed the mammograms and magnetic resonance (MR) images using the BI-RADS lexicon. Tumor kinetic features were acquired by computer-aided detection (CAD). The imaging and clinicopathologic features of 125 HR-/HER2+ cancers were compared with those of 127 HR+/HER2+ cancers. Association between the HR status and each feature was assessed. Results Multiple logistic regression analysis showed that circumscribed mass margin (odds ratio [OR], 4.73; P < 0.001), associated non-mass enhancement (NME) on MR images (OR, 3.29; P = 0.001), high histologic grade (OR, 3.89; P = 0.002), high Ki-67 index (OR, 3.06; P = 0.003), and older age (OR, 2.43; P = 0.006) remained independent indicators associated with HR-/HER2+ cancers. Between the two HER2+ subtypes, there were no differences in mammographic imaging presentations, calcification features, or MR kinetic features by CAD. Conclusion HER2+ breast cancers have different MR imaging (MRI) phenotypes and clinicopathologic features according to HR status. MRI features related to HR and HER2 status have the potential to be used for diagnosis and treatment decisions in HER2+ breast cancer patients.
Machine learning-based unenhanced CT texture analysis for predicting BAP1 mutation status of clear cell renal cell carcinomas
Kocak, Burak
Durmaz, Emine Sebnem
Kaya, Ozlem Korkmaz
Kilickesmez, Ozgur
Acta Radiol2019Journal Article, cited 0 times
Radiogenomics
TCGA-KIRC
Radiomic features
Machine Learning
Clear cell renal cell carcinoma (ccRCC)
BACKGROUND: BRCA1-associated protein 1 (BAP1) mutation is an unfavorable factor for overall survival in patients with clear cell renal cell carcinoma (ccRCC). Radiomics literature about BAP1 mutation lacks papers that consider the reliability of texture features in their workflow. PURPOSE: Using texture features with a high inter-observer agreement, we aimed to develop and internally validate a machine learning-based radiomic model for predicting the BAP1 mutation status of ccRCCs. MATERIALS AND METHODS: For this retrospective study, 65 ccRCCs were included from a public database. Texture features were extracted from unenhanced computed tomography (CT) images, using two-dimensional manual segmentation. Dimension reduction was done in three steps: (i) inter-observer agreement analysis; (ii) collinearity analysis; and (iii) feature selection. The machine learning classifier was random forest. The model was validated using 10-fold nested cross-validation. The reference standard was the BAP1 mutation status. RESULTS: Out of 744 features, 468 had an excellent inter-observer agreement. After the collinearity analysis, the number of features decreased to 17. Finally, the wrapper-based algorithm selected six features. Using selected features, the random forest correctly classified 84.6% of the labelled slices regarding BAP1 mutation status with an area under the receiver operating characteristic curve of 0.897. For predicting ccRCCs with BAP1 mutation, the sensitivity, specificity, and precision were 90.4%, 78.8%, and 81%, respectively. For predicting ccRCCs without BAP1 mutation, the sensitivity, specificity, and precision were 78.8%, 90.4%, and 89.1%, respectively. CONCLUSION: Machine learning-based unenhanced CT texture analysis might be a potential method for predicting the BAP1 mutation status of ccRCCs.
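As context for the validation scheme described above, here is a minimal sketch of 10-fold nested cross-validation with a random forest in scikit-learn. The feature matrix is a synthetic stand-in for the six selected texture features, and the tuning grid is illustrative, not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

# synthetic stand-in: 65 tumors x 6 selected texture features, binary BAP1 label
X, y = make_classification(n_samples=65, n_features=6, random_state=0)

# inner loop tunes hyperparameters, outer loop estimates generalization AUC
inner = KFold(n_splits=10, shuffle=True, random_state=0)
outer = KFold(n_splits=10, shuffle=True, random_state=1)
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={"n_estimators": [100, 500],
                                "max_features": ["sqrt", "log2"]},
                    scoring="roc_auc", cv=inner)
scores = cross_val_score(grid, X, y, scoring="roc_auc", cv=outer)
print(f"nested-CV AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```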
Three-dimensional lung nodule segmentation and shape variance analysis to detect lung cancer with reduced false positives
Krishnamurthy, Senthilkumar
Narasimhan, Ganesh
Rengasamy, Umamaheswari
Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine2016Journal Article, cited 17 times
Website
LIDC-IDRI
Algorithms
Analysis of Variance
Humans
Imaging, Three-Dimensional/*methods
LUNG
Radiographic Image Interpretation, Computer-Assisted/*methods
Tomography, X-Ray Computed/*methods
Computed Tomography (CT)
juxta-pleural nodule
morphology processing
shape feature extraction
three-dimensional segmentation
A three-dimensional analysis of lung computed tomography scans was carried out in this study to detect malignant lung nodules. An automatic three-dimensional segmentation algorithm proposed here efficiently segmented the tissue clusters (nodules) inside the lung. However, an automatic morphological region-grow segmentation algorithm that was implemented to segment the well-circumscribed nodules present inside the lung did not segment the juxta-pleural nodules present on the inner surface of the lung wall. A novel edge bridge and fill technique is proposed in this article to segment the juxta-pleural and pleural-tail nodules accurately. The centroid shift of each candidate nodule was computed. The nodules with more centroid shift in the consecutive slices were eliminated, since a malignant nodule's position does not usually deviate. Three-dimensional shape variation and edge sharpness analyses were performed to reduce the false positives and to classify the malignant nodules. The change in area and equivalent diameter was greater for malignant nodules in the consecutive slices, and the malignant nodules showed a sharp edge. Segmentation was followed by three-dimensional centroid, shape and edge analysis, carried out on a lung computed tomography database of 20 patients with 25 malignant nodules. The algorithms proposed in this article precisely detected 22 malignant nodules and failed to detect 3, giving a sensitivity of 88%. Furthermore, this algorithm correctly eliminated 216 tissue clusters that were initially segmented as nodules; however, 41 non-malignant tissue clusters were detected as malignant nodules. Therefore, the false-positive rate of this algorithm was 2.05 per patient.
A versatile method for bladder segmentation in computed tomography two-dimensional images under adverse conditions
Pinto, João Ribeiro
Tavares, João Manuel RS
Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine2017Journal Article, cited 1 times
Website
TCGA-BLCA
Image segmentation
ROC curves for low-dose CT in the National Lung Screening Trial
Pinsky, P. F.
Gierada, D. S.
Nath, H.
Kazerooni, E. A.
Amorosa, J.
J Med Screen2013Journal Article, cited 4 times
Website
NLST
lung
LDCT
Cancer Screening
The National Lung Screening Trial (NLST) reported a 20% reduction in lung cancer specific mortality using low-dose chest CT (LDCT) compared with chest radiograph (CXR) screening. The high number of false positive screens with LDCT (around 25%) raises concerns. NLST radiologists reported LDCT screens as either positive or not positive, based primarily on the presence of a 4+ mm non-calcified lung nodule (NCN). They did not explicitly record a propensity score for lung cancer. However, by using maximum NCN size, or alternatively, radiologists' recommendations for diagnostic follow-up categorized hierarchically, surrogate propensity scores (PSSZ and PSFR) were created. These scores were then used to compute ROC curves, which determine possible operating points of sensitivity versus false positive rate (1-Specificity). The area under the ROC curve (AUC) was 0.934 and 0.928 for PSFR and PSSZ, respectively; the former was significantly greater than the latter. With the NLST definition of a positive screen, sensitivity and specificity of LDCT was 93.1% and 76.5%, respectively. With cutoffs based on PSFR, a specificity of 92.4% could be achieved while only lowering sensitivity to 86.9%. Radiologists using LDCT have good predictive ability; the optimal operating point for sensitivity and specificity remains to be determined.
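The ROC analysis described above can be reproduced in outline with scikit-learn, given a vector of surrogate propensity scores and cancer outcome labels; the values below are synthetic stand-ins, not NLST data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# synthetic stand-ins: higher surrogate propensity scores for cancers
y = rng.integers(0, 2, 1000)                # 1 = lung cancer within follow-up
score = rng.normal(loc=y * 1.5, scale=1.0)  # e.g. PSSZ or PSFR

fpr, tpr, thresholds = roc_curve(y, score)
print("AUC:", roc_auc_score(y, score))
# pick an operating point with specificity >= 92.4%
ok = fpr <= 1 - 0.924
print("sensitivity at that cutoff:", tpr[ok].max())
```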
Combined Megavoltage and Contrast-Enhanced Radiotherapy as an Intrafraction Motion Management Strategy in Lung SBRT
Coronado-Delgado, Daniel A
Garnica-Garza, Hector M
Technol Cancer Res Treat2019Journal Article, cited 0 times
Website
4D-Lung
Using Monte Carlo simulation and a realistic patient model, it is shown that the volume of healthy tissue irradiated at therapeutic doses can be drastically reduced using a combination of standard megavoltage and kilovoltage X-ray beams with a contrast agent previously loaded into the tumor, without the need to reduce standard treatment margins. Four-dimensional computed tomography images of 2 patients with a centrally located and a peripherally located tumor were obtained from a public database and subsequently used to plan robotic stereotactic body radiotherapy treatments. Two modalities are assumed: conventional high-energy stereotactic body radiotherapy and a treatment with contrast agent loaded in the tumor and a kilovoltage X-ray beam replacing the megavoltage beam (contrast-enhanced radiotherapy). For each patient model, 2 planning target volumes were designed: one following the recommendations from either Radiation Therapy Oncology Group (RTOG) 0813 or RTOG 0915 task group depending on the patient model and another with a 2-mm uniform margin determined solely on beam penumbra considerations. The optimized treatments with RTOG margins were imparted to the moving phantom to model the dose distribution that would be obtained as a result of intrafraction motion. Treatment plans are then compared to the plan with the 2-mm uniform margin considered to be the ideal plan. It is shown that even for treatments in which only one-fifth of the total dose is imparted via the contrast-enhanced radiotherapy modality and with the use of standard treatment margins, the resultant absorbed dose distributions are such that the volume of healthy tissue irradiated to high doses is close to what is obtained under ideal conditions.
Mesoscopic imaging of glioblastomas: Are diffusion, perfusion and spectroscopic measures influenced by the radiogenetic phenotype?
Demerath, Theo
Simon-Gabriel, Carl Philipp
Kellner, Elias
Schwarzwald, Ralf
Lange, Thomas
Heiland, Dieter Henrik
Reinacher, Peter
Staszewski, Ori
Mast, Hansjorg
Kiselev, Valerij G
Egger, Karl
Urbach, Horst
Weyerbrock, Astrid
Mader, Irina
Neuroradiology Journal2017Journal Article, cited 5 times
Website
Radiogenomics
RIDER NEURO MRI
Magnetic resonance imaging (MRI)
Glioblastoma Multiforme (GBM)
The purpose of this study was to identify markers from perfusion, diffusion, and chemical shift imaging in glioblastomas (GBMs) and to correlate them with genetically determined and previously published patterns of structural magnetic resonance (MR) imaging. Twenty-six patients (mean age 60 years, 13 female) with GBM were investigated. Imaging consisted of native and contrast-enhanced 3D data, perfusion, diffusion, and spectroscopic imaging. In the presence of minor necrosis, cerebral blood volume (CBV) was higher (median +/- SD, 2.23% +/- 0.93) than in pronounced necrosis (1.02% +/- 0.71), pcorr = 0.0003. CBV adjacent to peritumoral fluid-attenuated inversion recovery (FLAIR) hyperintensity was lower in edema (1.72% +/- 0.31) than in infiltration (1.91% +/- 0.35), pcorr = 0.039. Axial diffusivity adjacent to peritumoral FLAIR hyperintensity was lower in severe mass effect (1.08*10(-3) mm(2)/s +/- 0.08) than in mild mass effect (1.14*10(-3) mm(2)/s +/- 0.06), pcorr = 0.048. Myo-inositol was positively correlated with a marker for mitosis (Ki-67) in contrast-enhancing tumor, r = 0.5, pcorr = 0.0002. Altered CBV and axial diffusivity in adjacent normal-appearing matter, even outside the FLAIR hyperintensity, may be related to angiogenesis pathways and activated proliferation genes. The correlation between myo-inositol and Ki-67 might be attributed to its binding to cell surface receptors that regulate tumorous proliferation of astrocytic cells.
Machine Learning Classification of Body Part, Imaging Axis, and Intravenous Contrast Enhancement on CT Imaging
Li, Wuqi
Lin, Hui Ming
Lin, Amy
Napoleone, Marc
Moreland, Robert
Murari, Alexis
Stepanov, Maxim
Ivanov, Eric
Prasad, Abhinav Sanjeeva
Shih, George
Hu, Zixuan
Zulbayar, Suvd
Sejdić, Ervin
Colak, Errol
2023Journal Article, cited 0 times
C4KC-KiTS
CPTAC-LSCC
SPIE-AAPM Lung CT Challenge
StageII-Colorectal-CT
Purpose: The development and evaluation of machine learning models that automatically identify the body part(s) imaged, axis of imaging, and the presence of intravenous contrast material of a CT series of images. Methods: This retrospective study included 6955 series from 1198 studies (501 females, 697 males; mean age 56.5 years) obtained between January 2010 and September 2021. Each series was annotated by a trained board-certified radiologist with labels consisting of 16 body parts, 3 imaging axes, and whether an intravenous contrast agent was used. The studies were randomly assigned to the training, validation and testing sets with a proportion of 70%, 20% and 10%, respectively, to develop a 3D deep neural network for each classification task. External validation was conducted with a total of 35,272 series from 7 publicly available datasets. The classification accuracy for each series was independently assessed for each task to evaluate model performance. Results: The accuracies for identifying the body parts, imaging axes, and the presence of intravenous contrast were 96.0% (95% CI: 94.6%, 97.2%), 99.2% (95% CI: 98.5%, 99.7%), and 97.5% (95% CI: 96.4%, 98.5%), respectively. The generalizability of the models was demonstrated through external validation with accuracies of 89.7-97.8%, 98.6-100%, and 87.8-98.6% for the same tasks. Conclusions: The developed models demonstrated high performance on both internal and external testing in identifying key aspects of a CT series.
Assigning readers to cases in imaging studies using balanced incomplete block designs
Huang, Erich P
Shih, Joanna H
Stat Methods Med Res2021Journal Article, cited 0 times
Website
TCGA-OV-Radiogenomics
Imaging studies
balanced incomplete block designs
kappa statistics
negative predictive value
positive predictive value
reader studies
sensitivity
specificity
In many imaging studies, each case is reviewed by human readers and characterized according to one or more features. Often, the inter-reader agreement of the feature indications is of interest in addition to their diagnostic accuracy or association with clinical outcomes. Complete designs in which all participating readers review all cases maximize efficiency and guarantee estimability of agreement metrics for all pairs of readers but often involve a heavy reading burden. Assigning readers to cases using balanced incomplete block designs substantially reduces reading burden by having each reader review only a subset of cases, while still maintaining estimability of inter-reader agreement for all pairs of readers. Methodology for data analysis and power and sample size calculations under balanced incomplete block designs is presented and applied to simulation studies and an actual example. Simulation studies results suggest that such designs may reduce reading burdens by >40% while in most scenarios incurring a <20% increase in the standard errors and a <8% and <20% reduction in power to detect between-modality differences in diagnostic accuracy and kappa statistics, respectively.
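To make the design concrete, here is a small sketch assuming the classical (7, 7, 3, 3, 1) balanced incomplete block design (the Fano plane): 7 readers, 7 cases, each case read by 3 readers, each reader reading 3 cases, and every pair of readers co-reading exactly one case. The block list is the standard cyclic construction; the code simply verifies the pairing property.

```python
from itertools import combinations
from collections import Counter

# blocks of the (7, 7, 3, 3, 1) design: case i is read by the readers in blocks[i]
blocks = [(0, 1, 3), (1, 2, 4), (2, 3, 5), (3, 4, 6),
          (4, 5, 0), (5, 6, 1), (6, 0, 2)]

# every pair of readers should co-read exactly lambda = 1 case
pair_counts = Counter(p for b in blocks for p in combinations(sorted(b), 2))
assert all(c == 1 for c in pair_counts.values())
assert len(pair_counts) == len(list(combinations(range(7), 2)))

for case, readers in enumerate(blocks):
    print(f"case {case}: readers {readers}")
```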
Quantitative DCE Dynamics on Transformed MR Imaging Discriminates Clinically Significant Prostate Cancer
Multiparametric Magnetic Resonance Imaging/methods
Neoplasm Grading
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Magnetic Resonance Imaging (MRI)
habitats
machine learning
prostate cancer
radiomics
Dynamic contrast enhancement (DCE) imaging is a valuable sequence of multiparametric magnetic resonance imaging (mpMRI). A DCE sequence enhances the vasculature and complements T2-weighted (T2W) and Diffusion-weighted imaging (DWI), allowing early detection of prostate cancer. However, DCE assessment has remained primarily qualitative. The study proposes quantifying DCE characteristics (T1W sequences) using six time-dependent metrics computed on feature transformations (306 radiomic features) of abnormal image regions observed over time. We applied our methodology to prostate cancer patients with the DCE MRI images (n = 25) who underwent prostatectomy with confirmed pathological assessment of the disease using Gleason Score. Regions of abnormality were assessed on the T2W MRI, guided using the whole mount pathology. Preliminary analysis finds over six temporal DCE imaging features obtained on different transformations on the imaging regions showed significant differences compared to the indolent counterpart (P </= 0.05, q </= 0.01). We find classifier models using logistic regression formed on DCE features after feature-based transformation (Centre of Mass) had an AUC of 0.89-0.94. While using mean feature-based transformation, the AUC was in the range of 0.71-0.76, estimated using the 0.632 bootstrap cross-validation method and after applying sample balancing using the synthetic minority oversampling technique (SMOTE). Our study finds, radiomic transformation of DCE images (T1 sequences) provides better signal standardization. Their temporal characteristics allow improved discrimination of aggressive disease.
The Effect of Heterogenous Subregions in Glioblastomas on Survival Stratification: A Radiomics Analysis Using the Multimodality MRI
Yin, L.
Liu, Y.
Zhang, X.
Lu, H.
Liu, Y.
Technol Cancer Res Treat2021Journal Article, cited 0 times
Website
TCGA-GBM
Radiomics
Glioblastoma Multiforme (GBM)
BRAIN
Wavelet
Intratumor heterogeneity is partly responsible for the poor prognosis of glioblastoma (GBM) patients. In this study, we aimed to assess the effect of different heterogeneous subregions of GBM on overall survival (OS) stratification. A total of 105 GBM patients were retrospectively enrolled and divided into long-term and short-term OS groups. Four MRI sequences, including contrast-enhanced T1-weighted imaging (T1C), T1, T2, and FLAIR, were collected for each patient. Then, 4 heterogeneous subregions, i.e., the region of entire abnormality (rEA), the regions of contrast-enhanced tumor (rCET), necrosis (rNec) and edema/non-contrast-enhanced tumor (rE/nCET), were manually drawn from the 4 MRI sequences. For each subregion, 50 radiomics features were extracted. The stratification performance of the 4 heterogeneous subregions, as well as the performance of the 4 MRI sequences, was evaluated both alone and in combination. Our results showed that rEA was superior in stratifying long- and short-term OS. For the 4 MRI sequences used in this study, the FLAIR sequence demonstrated the best performance of survival stratification based on the manual delineation of heterogeneous subregions. Our results suggest that heterogeneous subregions of GBMs contain different prognostic information, which should be considered when investigating survival stratification in patients with GBM.
Computed Tomography Radiomics Analysis on Whole Pancreas Between Healthy Individual and Pancreatic Ductal Adenocarcinoma Patients: Uncertainty Analysis and Predictive Modeling
Wang, Shuo
Lin, Chi
Kolomaya, Alexander
Ostdiek-Wille, Garett P
Wong, Jeffrey
Cheng, Xiaoyue
Lei, Yu
Liu, Chang
2022Journal Article, cited 0 times
Pancreas-CT
Radiomics is a rapidly growing field that quantitatively extracts image features in a high-throughput manner from medical imaging. In this study, we analyzed the radiomics features of the whole pancreas between healthy individuals and pancreatic cancer patients, and we established a predictive model that can distinguish cancer patients from healthy individuals based on these radiomics features. Methods: We retrospectively collected venous-phase scans of contrast-enhanced computed tomography (CT) images from 181 control subjects and 85 cancer case subjects for radiomics analysis and predictive modeling. An attending radiation oncologist delineated the pancreas for all the subjects in the Varian Eclipse system, and we extracted 924 radiomics features using PyRadiomics. We established a feature selection pipeline to exclude redundant or unstable features. We randomly selected 189 cases (60 cancer and 129 control) as the training set. The remaining 77 subjects (25 cancer and 52 control) as a test set. We trained a Random Forest model utilizing the stable features to distinguish the cancer patients from the healthy individuals on the training dataset. We analyzed the performance of our best model by running 5-fold cross-validations on the training dataset and applied our best model to the test set. Results: We identified that 91 radiomics features are stable against various uncertainty sources, including bin width, resampling, image transformation, image noise, and segmentation uncertainty. Eight of the 91 features are nonredundant. Our final predictive model, using these 8 features, has achieved a mean area under the receiver operating characteristic curve (AUC) of 0.99 ± 0.01 on the training dataset (189 subjects) by cross-validation. The model achieved an AUC of 0.910 on the independent test set (77 subjects) and an accuracy of 0.935. Conclusion: CT-based radiomics analysis based on the whole pancreas can distinguish cancer patients from healthy individuals, and it could potentially become an early detection tool for pancreatic cancer.
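A minimal sketch of the extraction-plus-classification pipeline described above, using the public PyRadiomics API and scikit-learn; the file paths and the label vector are placeholders, and the uncertainty-based feature filtering is not reproduced here.

```python
import pandas as pd
from radiomics import featureextractor
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

cases = []  # placeholder: list of (CT volume path, pancreas mask path) pairs
y = []      # placeholder: 1 = pancreatic cancer, 0 = healthy control

# extract radiomics features from each (CT volume, pancreas mask) pair
extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=25)
rows = []
for image_path, mask_path in cases:
    result = extractor.execute(image_path, mask_path)
    # keep the numeric features, dropping diagnostic metadata keys
    rows.append({k: v for k, v in result.items() if k.startswith("original_")})
X = pd.DataFrame(rows)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```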
Recasted nonlinear complex diffusion method for removal of Rician noise from breast MRI images
Kumar, Pradeep
Srivastava, Subodh
Padma Sai, Y.
The Journal of Defense Modeling and Simulation: Applications, Methodology, Technology2021Journal Article, cited 0 times
Website
RIDER Breast MRI
Magnetic Resonance Imaging (MRI)
BREAST
The evolution of magnetic resonance imaging (MRI) enables study of the internal anatomy of the breast, mapping physical features along with functional characteristics of selected regions. However, mapping accuracy is degraded by the presence of Rician noise, which limits qualitative and quantitative measures of the breast image. This paper proposes a recast nonlinear complex diffusion filter for sharpening details and removing Rician noise. It combines maximum likelihood estimation with optimal parameter selection for complex diffusion, where the overall behaviour is balanced by regularization parameters. To recast the nonlinear complex diffusion, the edge threshold constraint "k" of the diffusion coefficient is reformulated: it is replaced by the standard deviation of the image, which provides a range of thresholds matched to the variability present in the image at edges and selects "k" automatically instead of relying on a user-supplied value. A series of evaluations was conducted across different noise ratios to assess quality improvement of the MRI. Qualitative and quantitative assessments were performed on the Reference Image Database to Evaluate Therapy Response (RIDER) Breast database, and the proposed method was compared with existing methods. The quantitative assessment includes full-reference, human-visual-system, and no-reference image quality parameters. The proposed method is observed to preserve edges, sharpen details, and remove Rician noise.
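For orientation, here is a minimal sketch of the underlying idea in the real-valued case (classic Perona-Malik nonlinear diffusion, not the paper's complex-valued variant), with the edge threshold k tied to the image standard deviation as the abstract describes.

```python
import numpy as np

def nonlinear_diffusion(img, n_iter=30, dt=0.15):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        k = u.std()  # edge threshold tied to image variability
        # nearest-neighbour differences (np.roll wraps at borders; fine for a sketch)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # diffusion coefficient: small across strong edges, large in flat regions
        c = lambda g: 1.0 / (1.0 + (g / k) ** 2)
        u += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u
```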
Selective segmentation of a feature that has two distinct intensities
Burrows, Liam
Chen, Ke
Torella, Francesco
Journal of Algorithms & Computational Technology2021Journal Article, cited 0 times
Website
H&E-stained slides
MiMM_SBILab Dataset: Microscopic Images of Multiple Myeloma
Segmentation
Pathomics
It is common for a segmentation model to compute and locate edges, or regions separated by edges, according to a certain distribution of intensity. However, such edge information is not always useful for extracting an object or feature that has two distinct intensities, e.g. segmentation of a building with signage in front, or of an organ that has diseased regions, unless some kind of manual editing or a learning approach is applied. This paper proposes an automatic and selective segmentation model that can segment a feature with two distinct intensities from a single click. A patch-like idea is employed to design our two-stage model, given only one geometric marker to indicate the location of the inside region. The difficult case where the inside region leans towards the boundary of the feature of interest is investigated, with recommendations given and reliability tested. The model is presented mainly in 2D, but it can easily be generalised to 3D. We have implemented the model for segmenting both 2D and 3D images.
A review of artificial intelligence in prostate cancer detection on imaging
Bhattacharya, Indrani
Khandwala, Yash S.
Vesal, Sulaiman
Shao, Wei
Yang, Qianye
Soerensen, Simon J.C.
Fan, Richard E.
Ghanouni, Pejman
Kunder, Christian A.
Brooks, James D.
Hu, Yipeng
Rusu, Mirabela
Sonn, Geoffrey A.
2022Journal Article, cited 0 times
ISBI-MR-Prostate-2013
PROSTATEx
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Highly accurate differentiation of bone marrow cell morphologies using deep neural networks on a large image data set
Matek, Christian
Krappe, Sebastian
Münzenmayer, Christian
Haferlach, Torsten
Marr, Carsten
2021Journal Article, cited 0 times
Cytomorphology_MLL_Helmholtz_Fraunhofer
Biomedical applications of deep learning algorithms rely on large expert annotated data sets. The classification of bone marrow (BM) cell cytomorphology, an important cornerstone of hematological diagnosis, is still done manually thousands of times every day because of a lack of data sets and trained models. We applied convolutional neural networks (CNNs) to a large data set of 171 374 microscopic cytological images taken from BM smears from 945 patients diagnosed with a variety of hematological diseases. The data set is the largest expert-annotated pool of BM cytology images available in the literature. It allows us to train high-quality classifiers of leukocyte cytomorphology that identify a wide range of diagnostically relevant cell species with high precision and recall. Our CNNs outcompete previous feature-based approaches and provide a proof-of-concept for the classification problem of single BM cells. This study is a step toward automated evaluation of BM cell morphology using state-of-the-art image-classification algorithms. The underlying data set represents an educational resource, as well as a reference for future artificial intelligence-based approaches to BM cytomorphology.
Imaging genomic mapping of an invasive MRI phenotype predicts patient outcome and metabolic dysfunction: a TCGA glioma phenotype research group project
Colen, Rivka R
Vangel, Mark
Wang, Jixin
Gutman, David A
Hwang, Scott N
Wintermark, Max
Jain, Rajan
Jilwan-Nicolas, Manal
Chen, James Y
Raghavan, Prashant
Holder, C. A.
Rubin, D.
Huang, E.
Kirby, J.
Freymann, J.
Jaffe, C. C.
Flanders, A.
TCGA Glioma Phenotype Research Group
Zinn, P. O.
BMC Medical Genomics2014Journal Article, cited 47 times
Website
TCGA-GBM
Radiomics
Radiogenomics
Computer Aided Detection (CADe)
Magnetic Resonance Imaging (MRI)
BACKGROUND: Invasion of tumor cells into adjacent brain parenchyma is a major cause of treatment failure in glioblastoma. Furthermore, invasive tumors are shown to have a different genomic composition and metabolic abnormalities that allow for a more aggressive GBM phenotype and resistance to therapy. We thus seek to identify those genomic abnormalities associated with a highly aggressive and invasive GBM imaging-phenotype. METHODS: We retrospectively identified 104 treatment-naive glioblastoma patients from The Cancer Genome Atlas (TCGA) who had gene expression profiles and corresponding MR imaging available in The Cancer Imaging Archive (TCIA). The standardized VASARI feature-set criteria were used for the qualitative visual assessments of invasion. Patients were assigned to classes based on the presence (Class A) or absence (Class B) of statistically significant invasion parameters to create an invasive imaging signature; imaging genomic analysis was subsequently performed using the GenePattern Comparative Marker Selection module (Broad Institute). RESULTS: Our results show that patients with a combination of deep white matter tract and ependymal invasion (Class A) on imaging had a significant decrease in overall survival as compared to patients with absence of such invasive imaging features (Class B) (8.7 versus 18.6 months, p < 0.001). Mitochondrial dysfunction was the top canonical pathway associated with the Class A gene expression signature. The MYC oncogene was predicted to be the top activation regulator in Class A. CONCLUSION: We demonstrate that MRI biomarker signatures can identify distinct GBM phenotypes associated with highly significant survival differences and specific molecular pathways. This study identifies mitochondrial dysfunction as the top canonical pathway in a very aggressive GBM phenotype. Thus, imaging-genomic analyses may prove invaluable in detecting novel targetable genomic pathways.
G-DOC Plus: an integrative bioinformatics platform for precision medicine
Bhuvaneshwar, Krithika
Belouali, Anas
Singh, Varun
Johnson, Robert M
Song, Lei
Alaoui, Adil
Harris, Michael A
Clarke, Robert
Weiner, Louis M
Gusev, Yuriy
BMC Bioinformatics2016Journal Article, cited 14 times
Website
TCGA
REMBRANDT
Bioinformatics
Cloud computing
Precision medicine
Survival time prediction by integrating cox proportional hazards network and distribution function network
Baek, Eu-Tteum
Yang, Hyung Jeong
Kim, Soo Hyung
Lee, Guee Sang
Oh, In-Jae
Kang, Sae-Ryung
Min, Jung-Joon
BMC Bioinformatics2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
Cox proportional hazard model
Deep Learning
BACKGROUND: The Cox proportional hazards model is commonly used to predict the hazard ratio, which is the risk or probability of occurrence of an event of interest. However, the Cox proportional hazards model cannot directly generate an individual survival time. To do this, the survival analysis in the Cox model converts the hazard ratio to survival times through distributions such as the exponential, Weibull, Gompertz or log-normal distributions. In other words, to generate the survival time, the Cox model has to select a specific distribution over time. RESULTS: This study presents a method to predict the survival time by integrating a hazard network and a distribution function network. The Cox proportional hazards network is adapted from DeepSurv for the prediction of the hazard ratio, and a distribution function network is applied to generate the survival time. To evaluate the performance of the proposed method, a new evaluation metric that calculates the intersection over union between the predicted curve and the ground truth was proposed. To further understand significant prognostic factors, we use the 1D gradient-weighted class activation mapping method to highlight the network activations as a heat-map visualization over the input data. The performance of the proposed method was experimentally verified and the results compared with other existing methods. CONCLUSIONS: Our results confirmed that the combination of the two networks, a Cox proportional hazards network and a distribution function network, can effectively generate accurate survival times.
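The hazard branch of such a model is typically trained with the negative Cox partial log-likelihood, as in DeepSurv. Below is a minimal sketch of that loss (function name hypothetical), assuming samples are sorted by increasing survival time so that each sample's risk set is the suffix of the batch.

```python
import torch

def cox_partial_nll(risk, event):
    """risk: predicted log-hazard ratios, sorted by increasing survival time.
    event: 1.0 if the event was observed, 0.0 if censored."""
    # suffix log-sum-exp of risk = log of the risk-set denominator for each sample
    log_risk_set = torch.logcumsumexp(risk.flip(0), dim=0).flip(0)
    # average the partial log-likelihood over observed events only
    return -((risk - log_risk_set) * event).sum() / event.sum()
```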
Automatic glioma segmentation based on adaptive superpixel
Wu, Yaping
Zhao, Zhe
Wu, Weiguo
Lin, Yusong
Wang, Meiyun
BMC Med Imaging2019Journal Article, cited 0 times
Algorithm Development
BraTS
Brain
BACKGROUND: Automatic glioma segmentation is of great significance for clinical practice. This study proposes an automatic superpixel-based method for glioma segmentation from T2-weighted Magnetic Resonance Imaging. METHODS: The proposed method mainly includes three steps. First, we propose an adaptive superpixel generation algorithm based on the zero-parameter version of simple linear iterative clustering (ASLIC0). This algorithm can acquire a superpixel image with fewer superpixels and better fit the boundary of the region of interest (ROI) by automatically selecting the optimal number of superpixels. Second, we compose a training set by calculating the statistical, texture, curvature and fractal features for each superpixel. Third, a Support Vector Machine (SVM) is used to train a classification model based on the features from the second step. RESULTS: The experimental results on the Multimodal Brain Tumor Image Segmentation Benchmark 2017 (BraTS2017) show that the proposed method has good segmentation performance. The average Dice, Hausdorff distance, sensitivity, and specificity for the segmented tumor against the ground truth are 0.8492, 3.4697 pixels, 81.47%, and 99.64%, respectively. The proposed method shows good stability on high- and low-grade glioma samples. Comparative experimental results show that the proposed method has superior performance. CONCLUSIONS: This provides a close match to expert delineation across all grades of glioma, leading to a fast and reproducible method of glioma segmentation.
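A minimal sketch of the superpixel-plus-classifier idea using scikit-image's SLIC (the zero-parameter variant is selected with slic_zero=True) and a per-superpixel feature vector; the first-order statistics below are simplified stand-ins for the paper's statistical, texture, curvature and fractal feature set.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def superpixel_features(image, n_segments=400):
    # SLIC0 superpixels on a 2D grayscale slice
    labels = slic(image, n_segments=n_segments, slic_zero=True, channel_axis=None)
    feats = []
    for lab in np.unique(labels):
        region = image[labels == lab]
        # simple first-order statistics per superpixel (stand-in feature set)
        feats.append([region.mean(), region.std(), region.min(), region.max()])
    return labels, np.array(feats)

# training: stack features over annotated slices; y = 1 if superpixel overlaps tumor
# clf = SVC(kernel="rbf").fit(X, y)
# inference: predict per superpixel, then map predictions back onto `labels`
```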
Deep semi-supervised learning for brain tumor classification
Ge, Chenjie
Gu, Irene Yu-Hua
Jakola, Asgeir Store
Yang, Jie
2020Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Background: This paper addresses brain tumor (glioma) classification from four modalities of Magnetic Resonance Image (MRI) scans (i.e., T1-weighted MRI, contrast-enhanced T1-weighted MRI, T2-weighted MRI and FLAIR). Currently, many available glioma datasets contain some unlabeled brain scans, and many datasets are moderate in size. Methods: We propose to exploit deep semi-supervised learning to make full use of the unlabeled data. Deep CNN features were incorporated into a new graph-based semi-supervised learning framework for learning the labels of the unlabeled data, where a new 3D-2D consistency constraint is added to enforce consistent classifications for 2D slices from the same 3D brain scan. A deep-learning classifier is then trained to classify different glioma types using both labeled and unlabeled data with estimated labels. To alleviate the overfitting caused by moderate-size datasets, synthetic MRIs generated by Generative Adversarial Networks (GANs) are added to the training of the CNNs. Results: The proposed scheme has been tested on two glioma datasets: the TCGA dataset for IDH-mutation prediction (molecular-based glioma subtype classification) and the MICCAI dataset for glioma grading. Our results show good performance (test accuracies of 86.53% on the TCGA dataset and 90.70% on the MICCAI dataset). Conclusions: The proposed scheme is effective for glioma IDH-mutation prediction and glioma grading, and its performance is comparable to the state of the art.
Automated detection and segmentation of thoracic lymph nodes from CT using 3D foveal fully convolutional neural networks
Iuga, A. I.
Carolus, H.
Hoink, A. J.
Brosch, T.
Klinder, T.
Maintz, D.
Persigehl, T.
Baessler, B.
Pusken, M.
BMC Med Imaging2021Journal Article, cited 0 times
Website
CT Lymph Nodes
Computer Aided Detection (CADe)
Segmentation
Deep Learning
Computed Tomography (CT)
BACKGROUND: In oncology, the correct determination of nodal metastatic disease is essential for patient management, as patient treatment and prognosis are closely linked to the stage of the disease. The aim of the study was to develop a tool for automatic 3D detection and segmentation of lymph nodes (LNs) in computed tomography (CT) scans of the thorax using a fully convolutional neural network based on 3D foveal patches. METHODS: The training dataset was collected from the Computed Tomography Lymph Nodes Collection of the Cancer Imaging Archive, containing 89 contrast-enhanced CT scans of the thorax. A total number of 4275 LNs was segmented semi-automatically by a radiologist, assessing the entire 3D volume of the LNs. Using this data, a fully convolutional neural network based on 3D foveal patches was trained with fourfold cross-validation. Testing was performed on an unseen dataset containing 15 contrast-enhanced CT scans of patients who were referred upon suspicion or for staging of bronchial carcinoma. RESULTS: The algorithm achieved a good overall performance with a total detection rate of 76.9% for enlarged LNs during fourfold cross-validation in the training dataset with 10.3 false positives per volume, and of 69.9% in the unseen testing dataset. In the training dataset a better detection rate was observed for enlarged LNs compared to smaller LNs, the detection rates for LNs with a short-axis diameter (SAD) >/= 20 mm and SAD 5-10 mm being 91.6% and 62.2% (p < 0.001), respectively. The best detection rates were obtained for LNs located in Level 4R (83.6%) and Level 7 (80.4%). CONCLUSIONS: The proposed 3D deep learning approach achieves an overall good performance in the automatic detection and segmentation of thoracic LNs and shows reasonable generalizability, yielding the potential to facilitate detection during routine clinical work and to enable radiomics research without observer bias.
Combining weakly and strongly supervised learning improves strong supervision in Gleason pattern classification
Otalora, S.
Marini, N.
Muller, H.
Atzori, M.
BMC Med Imaging2021Journal Article, cited 0 times
Website
Algorithm Development
TCGA-PRAD
H&E-stained slides
Pathomics
Computational pathology
Deep learning
Prostate cancer
Transfer learning
BACKGROUND: One challenge in training deep convolutional neural network (CNN) models with whole slide images (WSIs) is providing the required large number of costly, manually annotated image regions. Strategies to alleviate the scarcity of annotated data include: using transfer learning, data augmentation and training the models with less expensive image-level annotations (weakly-supervised learning). However, it is not clear how to combine the use of transfer learning in a CNN model when different data sources are available for training, or how to leverage the combination of large amounts of weakly annotated images with a set of local region annotations. This paper aims to evaluate CNN training strategies based on transfer learning to leverage the combination of weak and strong annotations in heterogeneous data sources. The trade-off between classification performance and annotation effort is explored by evaluating a CNN that learns from strong labels (region annotations) and is later fine-tuned on a dataset with less expensive weak (image-level) labels. RESULTS: As expected, the model performance on strongly annotated data steadily increases as the percentage of strong annotations that are used increases, reaching a performance comparable to pathologists ([Formula: see text]). Nevertheless, the performance sharply decreases when applied to the WSI classification scenario with [Formula: see text], and remains lower regardless of the number of annotations used. The model performance increases when fine-tuning the model for the task of Gleason scoring with the weak WSI labels [Formula: see text]. CONCLUSION: Combining weak and strong supervision improves strong supervision in classification of Gleason patterns using tissue microarrays (TMA) and WSI regions. Our results point to effective strategies for training CNN models that combine few annotated data and heterogeneous data sources. The performance increases in the controlled TMA scenario with the number of annotations used to train the model. Nevertheless, the performance is hindered when the trained TMA model is applied directly to the more challenging WSI classification problem. This demonstrates that a good pre-trained model for prostate cancer TMA image classification may lead to the best downstream model if fine-tuned on the WSI target dataset. We have made available the source code repository for reproducing the experiments in the paper: https://github.com/ilmaro8/Digital_Pathology_Transfer_Learning.
Accurate pancreas segmentation using multi-level pyramidal pooling residual U-Net with adversarial mechanism
Li, M.
Lian, F.
Wang, C.
Guo, S.
BMC Med Imaging2021Journal Article, cited 0 times
Pancreas-CT
*Tomography, X-Ray Computed
*Adversarial mechanism
*Multi-level pyramidal pooling module
Segmentation
*Residual learning
BACKGROUND: A novel multi-level pyramidal pooling residual U-Net with adversarial mechanism was proposed for organ segmentation from medical imaging, and was conducted on the challenging NIH Pancreas-CT dataset. METHODS: The 82 pancreatic contrast-enhanced abdominal CT volumes were split via four-fold cross validation to test the model performance. In order to achieve accurate segmentation, we firstly involved residual learning into an adversarial U-Net to achieve a better gradient information flow for improving segmentation performance. Then, we introduced a multi-level pyramidal pooling module (MLPP), where a novel pyramidal pooling was involved to gather contextual information for segmentation, then four groups of structures consisted of a different number of pyramidal pooling blocks were proposed to search for the structure with the optimal performance, and two types of pooling blocks were applied in the experimental section to further assess the robustness of MLPP for pancreas segmentation. For evaluation, Dice similarity coefficient (DSC) and recall were used as the metrics in this work. RESULTS: The proposed method preceded the baseline network 5.30% and 6.16% on metrics DSC and recall, and achieved competitive results compared with the-state-of-art methods. CONCLUSIONS: Our algorithm showed great segmentation performance even on the particularly challenging pancreas dataset, this indicates that the proposed model is a satisfactory and promising segmentor.
A novel adaptive momentum method for medical image classification using convolutional neural network
Aytac, U. C.
Gunes, A.
Ajlouni, N.
BMC Med Imaging2022Journal Article, cited 0 times
Website
REMBRANDT
BRAIN
COVID-19
Diagnostic Imaging
LUNG
Computed Tomography (CT)
*Adaptive momentum methods
*Backpropagation algorithm
*Convolutional neural networks
*Medical image classification
*Nonconvex optimization
BACKGROUND: AI for medical diagnosis has made a tremendous impact by applying convolutional neural networks (CNNs) to medical image classification, and momentum plays an essential role in stochastic gradient optimization algorithms for accelerating or improving the training of convolutional neural networks. In traditional optimizers for CNNs, the momentum is usually weighted by a constant; however, tuning the momentum hyperparameter can be computationally complex. In this paper, we propose a novel adaptive momentum for fast and stable convergence. METHOD: The proposed adaptive momentum rate increases or decreases based on each epoch's error changes, which eliminates the need for momentum hyperparameter optimization. We tested the proposed method with 3 different datasets: REMBRANDT Brain Cancer, NIH Chest X-ray, and COVID-19 CT scan. We compared the performance of the novel adaptive momentum optimizer with Stochastic gradient descent (SGD) and other adaptive optimizers such as Adam and RMSprop. RESULTS: The proposed method improves SGD performance by reducing classification error from 6.12 to 5.44%, and it achieved the lowest error and highest accuracy compared with other optimizers. To strengthen the outcomes of this study, we investigated the performance of state-of-the-art CNN architectures with adaptive momentum. The results show that the proposed method achieved the highest accuracy, 95%, compared with state-of-the-art CNN architectures on the same dataset. The proposed method improves convergence performance by reducing classification error and achieves high accuracy compared with other optimizers.
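A minimal sketch of the idea, assuming a plain SGD-with-momentum update in which the momentum coefficient is raised when the epoch error decreases and lowered when it increases; the adjustment factors and bounds are illustrative, not the paper's exact rule.

```python
import numpy as np

def sgd_adaptive_momentum(grad_fn, loss_fn, w, lr=0.01, beta=0.9, n_epochs=100):
    v, prev_loss = np.zeros_like(w), np.inf
    for _ in range(n_epochs):
        v = beta * v - lr * grad_fn(w)   # classic momentum step
        w = w + v
        loss = loss_fn(w)
        # adapt momentum from the epoch-to-epoch error change
        beta = min(beta * 1.05, 0.99) if loss < prev_loss else max(beta * 0.7, 0.5)
        prev_loss = loss
    return w
```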
Tumor segmentation via enhanced area growth algorithm for lung CT images
Khorshidi, A.
BMC Med Imaging2023Journal Article, cited 0 times
NSCLC-Radiomics
LIDC-IDRI
Algorithm Development
Segmentation
Image denoising
Humans
*Tomography, X-Ray Computed/methods
Algorithms
*Lung Neoplasms/diagnostic imaging
Lung/diagnostic imaging
Acceptance rate
Accuracy
Automatic thresholding
Comparison quantity
Computed Tomography (CT)
Contrast augmentation
Edge improvement
Enhance area growth
MATLAB
Start point
Tumor borders
BACKGROUND: Since lung tumors are in dynamic conditions, the study of tumor growth and its changes is of great importance in primary diagnosis. METHODS: An enhanced area growth (EAG) algorithm is introduced to segment lung tumors in 2D and 3D modes on CT images of 60 patients from four different databases, implemented in MATLAB. The early steps of the proposed algorithm are contrast augmentation, determination of the color intensity and the maximum primary tumor radius, thresholding, designation of the start and neighbor points in an array, and averaging-based modification of those points. To determine the new tumor boundaries, the maximum distance from the color-intensity center point of the primary tumor to the modified points is set by considering a larger target region and a new threshold. The tumor center is divided into different subsections, and all previous stages are repeated from newly designated points to define diverse boundaries for the tumor; interpolation between these boundaries creates a new tumor boundary. For the edge-correction phase, lines are drawn from the tumor center at relevant angles and their intersections with the tumor boundaries are fixed. Each new region that meets certain conditions is annexed to the core region to obtain the segmented tumor surface. RESULTS: Growing from multiple grouped starting points produced precise delineation of the tumor. The proposed algorithm enhanced tumor identification by more than 16% with a reasonable accuracy acceptance rate, and it largely assures that the final outcome is independent of the starting point. At a significance level of p < 0.05, the Dice coefficients were 0.80 +/- 0.02 and 0.92 +/- 0.03 for the primary and enhanced algorithms, respectively. Lung area determination with automatic thresholding, growth from several starting points, and edge improvement may reduce human error in radiologists' interpretation of tumor areas and in the selection of the algorithm's starting point. CONCLUSIONS: The proposed algorithm enhanced tumor detection by more than 18% with a sufficient acceptance ratio of accuracy. Since the enhanced algorithm is independent of matrix size and image thickness, it is very likely that it can be easily applied to other contiguous tumor images. TRIAL REGISTRATION: PAZHOUHAN, PAZHOUHAN98000032. Registered 4 January 2021, http://pazhouhan.gerums.ac.ir/webreclist/view.action?webreclist_code=19300.
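For reference, here is a minimal sketch of plain intensity-based region growing from a seed point, the baseline that area-growth methods like EAG build on, written in Python rather than the paper's MATLAB; the tolerance and 4-connectivity choices are illustrative.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.1):
    """Grow a region from `seed` while neighbours stay within `tol`
    of the seed intensity (4-connectivity, breadth-first)."""
    mask = np.zeros(img.shape, dtype=bool)
    ref = img[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx] and abs(img[ny, nx] - ref) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```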
A hybrid deep CNN model for brain tumor image multi-classification
Srinivasan, S.
Francis, D.
Mathivanan, S. K.
Rajadurai, H.
Shivahare, B. D.
Shah, M. A.
BMC Med Imaging2024Journal Article, cited 0 times
REMBRANDT
RIDER NEURO MRI
TCGA-LGG
Humans
BRAIN
*Brain Neoplasms/diagnostic imaging
*Glioma
Neural Networks, Computer
*Meningeal Neoplasms
Brain tumor grading
Grid search
Convolutional Neural Network (CNN)
Hyperparameters
The current approach to diagnosing and classifying brain tumors relies on the histological evaluation of biopsy samples, which is invasive, time-consuming, and susceptible to manual errors. These limitations underscore the pressing need for a fully automated, deep-learning-based multi-classification system for brain malignancies. This article aims to leverage a deep convolutional neural network (CNN) to enhance early detection and presents three distinct CNN models designed for different types of classification tasks. The first CNN model achieves an impressive detection accuracy of 99.53% for brain tumors. The second CNN model, with an accuracy of 93.81%, proficiently categorizes brain tumors into five distinct types: normal, glioma, meningioma, pituitary, and metastatic. Furthermore, the third CNN model demonstrates an accuracy of 98.56% in accurately classifying brain tumors into their different grades. To ensure optimal performance, a grid search optimization approach is employed to automatically fine-tune all the relevant hyperparameters of the CNN models. The utilization of large, publicly accessible clinical datasets results in robust and reliable classification outcomes. This article conducts a comprehensive comparison of the proposed models against classical models, such as AlexNet, DenseNet121, ResNet-101, VGG-19, and GoogleNet, reaffirming the superiority of the deep CNN-based approach in advancing the field of brain tumor classification and early detection.
Unified deep learning models for enhanced lung cancer prediction with ResNet-50-101 and EfficientNet-B3 using DICOM images
Kumar, V.
Prabha, C.
Sharma, P.
Mittal, N.
Askar, S. S.
Abouhawwash, M.
BMC Med Imaging2024Journal Article, cited 0 times
LIDC-IDRI
Humans
*Lung Neoplasms/diagnostic imaging
*Deep Learning
Algorithms
Machine Learning
Research Design
Cancer Detection
Deep Learning
EfficientNet-B3
Fusion
Lung Cancer
ResNet-101
ResNet-50
Significant advancements in machine learning algorithms have the potential to aid in the early detection and prevention of cancer, a devastating disease. However, traditional research methods face obstacles, and the amount of cancer-related information is rapidly expanding. The authors have developed a support system using three distinct deep-learning models, ResNet-50, EfficientNet-B3, and ResNet-101, along with transfer learning, to predict lung cancer, with the aim of improving health outcomes and reducing the mortality rate associated with this condition. Using a dataset of 1,000 DICOM lung cancer images from the LIDC-IDRI repository, each image is classified into four different categories. Although deep learning is still making progress in its ability to analyze and understand cancer data, this research marks a significant step forward in the fight against cancer, promoting better health outcomes and potentially lowering the mortality rate. The Fusion Model, like all other models, achieved 100% precision in classifying squamous cells. The Fusion Model and ResNet-50 achieved a precision of 90%, closely followed by EfficientNet-B3 and ResNet-101 with slightly lower precision. To prevent overfitting and improve data collection and planning, the authors implemented a data augmentation strategy. Addressing the remaining sources of imprecision is expected to contribute to further advancements in health and a reduction in the mortality rate associated with lung cancer.
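A minimal transfer-learning sketch in the spirit of the ensemble described above, fine-tuning a pretrained ResNet-50 head for four lung-image categories with torchvision; the dataset loading and training loop are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# load an ImageNet-pretrained backbone and replace the classification head
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False           # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 4)  # four image categories

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    # x: batch of 3x224x224 CT-derived images, y: category labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```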
The impact of the ComBat method on radiomics feature compensation and analysis of scanners from different manufacturers
Oligodendroglial tumours: subventricular zone involvement and seizure history are associated with CIC mutation status
Liu, Zhenyin
Liu, Hongsheng
Liu, Zhenqing
Zhang, Jing
BMC Neurol2019Journal Article, cited 1 times
Website
TCGA-LGG
Radiogenomics
BACKGROUND: CIC-mutant oligodendroglial tumours are linked to better prognosis. We aim to investigate associations between CIC gene mutation status, MR characteristics and clinical features. METHODS: Imaging and genomic data from the Cancer Genome Atlas and the Cancer Imaging Archive (TCGA/TCIA) for 59 patients with oligodendroglial tumours were used. Differences between CIC mutation and CIC wild-type were tested using the Chi-square test and binary logistic regression analysis. RESULTS: In univariate analysis, the clinical variables and MR features, comprising 3 selected features (subventricular zone [SVZ] involvement, volume and seizure history), were associated with CIC mutation status (all p < 0.05). A multivariate logistic regression analysis identified that seizure history (no vs. yes odds ratio [OR]: 28.960, 95% confidence interval [CI]: 2.625-319.49, p = 0.006) and SVZ involvement (SVZ- vs. SVZ+ OR: 77.092, p = 0.003; 95% CI: 4.578-1298.334) were associated with a higher incidence of CIC mutation. The nomogram showed good discrimination, with a C-index of 0.906 (95% CI: 0.812-1.000), and was well calibrated. The SVZ- group had longer overall survival (SVZ- vs. SVZ+, hazard ratio [HR]: 4.500, p = 0.04; 95% CI: 1.069-18.945). CONCLUSIONS: Absence of seizure history and absence of SVZ involvement were associated with a higher incidence of CIC mutation.
Radiogenomics correlation between MR imaging features and mRNA-based subtypes in lower-grade glioma
Liu, Zhenyin
Zhang, Jing
BMC Neurology2020Journal Article, cited 0 times
Website
TCGA-LGG
Radiogenomics
glioma
To investigate associations between lower-grade glioma (LGG) mRNA-based subtypes (R1-R4) and MR features.
Imaging-genomics reveals driving pathways of MRI derived volumetric tumor phenotype features in Glioblastoma
Grossmann, Patrick
Gutman, David A
Dunn, William D
Holder, Chad A
Aerts, Hugo JWL
BMC Cancer2016Journal Article, cited 21 times
Website
TCGA-GBM
Radiomics
Magnetic Resonance Imaging (MRI)
Background: Glioblastoma (GBM) tumors exhibit strong phenotypic differences that can be quantified using magnetic resonance imaging (MRI), but the underlying biological drivers of these imaging phenotypes remain largely unknown. An imaging-genomics analysis was performed to reveal the mechanistic associations between MRI-derived quantitative volumetric tumor phenotype features and molecular pathways. Methods: One hundred forty-one patients with presurgery MRI and survival data were included in our analysis. Volumetric features were defined, including the necrotic core (NE), contrast-enhancement (CE), abnormal tumor volume assessed by post-contrast T1w (tumor bulk or TB), tumor-associated edema based on T2-FLAIR (ED), and total tumor volume (TV), as well as ratios of these tumor components. Based on gene expression where available (n = 91), pathway associations were assessed using a preranked gene set enrichment analysis. These results were put into the context of molecular subtypes in GBM and prognostication. Results: Volumetric features were significantly associated with diverse sets of biological processes (FDR < 0.05). While NE and TB were enriched for immune response pathways and apoptosis, CE was associated with signal transduction and protein folding processes. ED was mainly enriched for homeostasis and cell cycling pathways. ED was also the strongest predictor of molecular GBM subtypes (AUC = 0.61). CE was the strongest predictor of overall survival (C-index = 0.6; Noether test, p = 4x10^-4). Conclusion: GBM volumetric features extracted from MRI are significantly enriched for information about the biological state of a tumor that impacts patient outcomes. Clinical decision-support systems could exploit this information to develop personalized treatment strategies on the basis of noninvasive imaging.
Prediction of pathologic stage in non-small cell lung cancer using machine learning algorithm based on CT image feature analysis
Yu, L.
Tao, G.
Zhu, L.
Wang, G.
Li, Z.
Ye, J.
Chen, Q.
BMC Cancer2019Journal Article, cited 11 times
Website
NSCLC-Radiomics-Genomics
TCGA-LUAD
TCGA-LUSC
3D Slicer
NRRD
PyRadiomics
Gray-level co-occurrence matrix (GLCM)
Computed Tomography (CT)
PURPOSE: To explore imaging biomarkers that can be used for diagnosis and prediction of pathologic stage in non-small cell lung cancer (NSCLC) using multiple machine learning algorithms based on CT image feature analysis. METHODS: Patients with stage IA to IV NSCLC were included, and the whole dataset was divided into training and testing sets and an external validation set. To tackle imbalanced datasets in NSCLC, we generated a new dataset and achieved equilibrium of class distribution by using the SMOTE algorithm; the balanced dataset was then randomly split into training/testing sets. We calculated the importance of CT image features by means of the mean decrease in Gini impurity generated by the random forest algorithm and selected optimal features according to feature importance (mean decrease in Gini impurity > 0.005). The performance of the prediction model in the training and testing sets was evaluated from the perspectives of classification accuracy, average precision (AP) score and precision-recall curve. The predictive accuracy of the model was externally validated using lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) samples from the TCGA database. RESULTS: The prediction model that incorporated nine image features exhibited high classification accuracy, precision and recall scores in the training and testing sets. In the external validation, the predictive accuracy of the model in LUAD outperformed that in LUSC. CONCLUSIONS: The pathologic stage of patients with NSCLC can be accurately predicted based on CT image features, especially for LUAD. Our findings extend the application of machine learning algorithms in CT image feature prediction for pathologic staging and identify potential imaging biomarkers that can be used for diagnosis of pathologic stage in NSCLC patients.
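A minimal sketch of the balancing and feature-ranking steps described above, using imbalanced-learn's SMOTE and a random forest's Gini-based importances; the data are synthetic stand-ins and the 0.005 importance cutoff follows the abstract.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# synthetic stand-in for the CT image-feature matrix and stage labels
X, y = make_classification(n_samples=300, n_features=30, weights=[0.8],
                           random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)  # balance the classes
X_tr, X_te, y_tr, y_te = train_test_split(X_res, y_res, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
# keep features whose mean decrease in Gini impurity exceeds 0.005
keep = rf.feature_importances_ > 0.005
print("selected features:", keep.sum(), "test accuracy:", rf.score(X_te, y_te))
```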
Dimension reduction and outlier detection of 3-D shapes derived from multi-organ CT images
Selle, M.
Kircher, M.
Schwennen, C.
Visscher, C.
Jung, K.
BMC Med Inform Decis Mak2024Journal Article, cited 0 times
Website
CT-ORG
Humans
Cluster Analysis
*Tomography, X-Ray Computed
Principal component analysis (PCA)
*Algorithms
Bagplots
Dimension reduction
Multiple co-inertia analysis
Outlier detection
Segmentation
BACKGROUND: Unsupervised clustering and outlier detection are important in medical research to understand the distributional composition of a collective of patients. A number of clustering methods exist, also for high-dimensional data after dimension reduction. Clustering and outlier detection may, however, become less robust or contradictory if multiple high-dimensional data sets per patient exist. Such a scenario is given when the focus is on 3-D data of multiple organs per patient, and a high-dimensional feature matrix per organ is extracted. METHODS: We use principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE) and multiple co-inertia analysis (MCIA) combined with bagplots to study the distribution of multi-organ 3-D data taken by computed tomography scans. After point-set registration of multiple organs from two public data sets, several hundred shape features are extracted per organ. While PCA and t-SNE can only be applied to each organ individually, MCIA can project the data of all organs into the same low-dimensional space. RESULTS: MCIA is the only approach, here, with which data of all organs can be projected into the same low-dimensional space. We studied how frequently (i.e., by how many organs) a patient was classified to belong to the inner or outer 50% of the population, or as an outlier. Outliers could only be detected with MCIA and PCA. MCIA and t-SNE were more robust in judging the distributional location of a patient in contrast to PCA. CONCLUSIONS: MCIA is more appropriate and robust in judging the distributional location of a patient in the case of multiple high-dimensional data sets per patient. It is still advisable to apply PCA or t-SNE in parallel with MCIA to study the location of individual organs.
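For the per-organ embeddings, the comparison reduces to a loop over organs with standard scikit-learn estimators; a brief sketch follows, with organ_features a hypothetical dict of aligned patient-by-feature arrays. MCIA itself has no scikit-learn implementation (it is available in R, e.g. omicade4), so only the PCA and t-SNE steps are shown.

from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

# organ_features: {organ name: patients x shape-features array}, rows aligned
# across organs so the same index is the same patient (hypothetical input)
embeddings = {}
for organ, X in organ_features.items():
    Xs = StandardScaler().fit_transform(X)
    embeddings[(organ, "pca")] = PCA(n_components=2).fit_transform(Xs)
    # t-SNE perplexity must stay below the number of patients
    embeddings[(organ, "tsne")] = TSNE(n_components=2, perplexity=30,
                                       random_state=0).fit_transform(Xs)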
A biomarker basing on radiomics for the prediction of overall survival in non–small cell lung cancer patients
He, Bo
Zhao, Wei
Pi, Jiang-Yuan
Han, Dan
Jiang, Yuan-Ming
Zhang, Zhen-Guang
Respiratory research2018Journal Article, cited 0 times
Website
non-small cell lung cancer
Radiomics
Computer-aided detection (CADe) and diagnosis (CADx) system for lung cancer with likelihood of malignancy
Firmino, Macedo
Angelo, Giovani
Morais, Higor
Dantas, Marcel R
Valentim, Ricardo
BioMedical Engineering OnLine2016Journal Article, cited 63 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
LUNG
Computed Tomography (CT)
BACKGROUND: CADe and CADx systems for the detection and diagnosis of lung cancer have been important areas of research in recent decades. However, these areas are being worked on separately. CADe systems do not present the radiological characteristics of tumors, and CADx systems do not detect nodules and do not have good levels of automation. As a result, these systems are not yet widely used in clinical settings. METHODS: The purpose of this article is to develop a new system for detection and diagnosis of pulmonary nodules on CT images, grouping them into a single system for the identification and characterization of the nodules to improve the level of automation. The article also contributes the use of the Watershed technique for distinguishing possible nodules from other structures and Histogram of Oriented Gradients (HOG) features for characterizing pulmonary nodules. For diagnosis, the system reports the likelihood of malignancy, providing additional support for decision making by radiologists. A rule-based classifier and a Support Vector Machine (SVM) have been used to eliminate false positives. RESULTS: The database used in this research consisted of 420 cases obtained randomly from LIDC-IDRI. The segmentation method achieved an accuracy of 97 % and the detection system showed a sensitivity of 94.4 % with 7.04 false positives per case. Different types of nodules (isolated, juxtapleural, juxtavascular and ground-glass) with diameters between 3 mm and 30 mm have been detected. For the diagnosis of malignancy our system presented ROC curves with areas of: 0.91 for nodules highly unlikely of being malignant, 0.80 for nodules moderately unlikely of being malignant, 0.72 for nodules with indeterminate malignancy, 0.67 for nodules moderately suspicious of being malignant and 0.83 for nodules highly suspicious of being malignant. CONCLUSIONS: From our preliminary results, we believe that our system is promising for clinical applications assisting radiologists in the detection and diagnosis of lung cancer.
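The watershed-plus-HOG candidate pipeline can be approximated with scikit-image building blocks. The sketch below is an assumption-laden illustration, not the authors' implementation: ct_slice and lung_mask are hypothetical inputs, and the trained SVM (nodule_svm) is assumed to exist.

import numpy as np
from scipy import ndimage as ndi
from skimage.feature import hog, peak_local_max
from skimage.segmentation import watershed
from skimage.transform import resize

# ct_slice: 2D CT slice, lung_mask: boolean lung segmentation (hypothetical)
distance = ndi.distance_transform_edt(lung_mask)
coords = peak_local_max(distance, labels=lung_mask, min_distance=5)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = watershed(-distance, markers, mask=lung_mask)   # candidate regions

feats = []
for lab in range(1, labels.max() + 1):
    ys, xs = np.where(labels == lab)
    patch = ct_slice[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    patch = resize(patch, (64, 64))              # fixed size -> fixed-length HOG
    feats.append(hog(patch, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2)))
# nodule_svm: an sklearn.svm.SVC trained elsewhere on labeled candidates (assumed)
# predictions = nodule_svm.predict(np.asarray(feats))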
Simultaneous encryption and compression of medical images based on optimized tensor compressed sensing with 3D Lorenz
Wang, Qingzhu
Chen, Xiaoming
Wei, Mengying
Miao, Zhuang
BioMedical Engineering OnLine2016Journal Article, cited 1 times
Website
LIDC-IDRI
CNN models discriminating between pulmonary micro-nodules and non-nodules from CT images
Monkam, Patrice
Qi, Shouliang
Xu, Mingjie
Han, Fangfang
Zhao, Xinzhuo
Qian, Wei
BioMedical Engineering OnLine2018Journal Article, cited 1 times
Website
LIDC-IDRI
lung cancer
micro nodules
Spatial mapping of tumor heterogeneity in whole-body PET-CT: a feasibility study
Jonsson, H.
Ahlstrom, H.
Kullberg, J.
Biomed Eng Online2023Journal Article, cited 0 times
BACKGROUND: Tumor heterogeneity is recognized as a predictor of treatment response and patient outcome. Quantification of tumor heterogeneity across all scales may therefore provide critical insight that ultimately improves cancer management. METHODS: An image registration-based framework for the study of tumor heterogeneity in whole-body images was evaluated on a dataset of 490 FDG-PET-CT images of lung cancer, lymphoma, and melanoma patients. Voxel-, lesion- and subject-level features were extracted from the subjects' segmented lesion masks and mapped to female and male template spaces for voxel-wise analysis. Resulting lesion feature maps of the three subsets of cancer patients were studied visually and quantitatively. Lesion volumes and lesion distances in subject spaces were compared with resulting properties in template space. The strength of the association between subject and template space for these properties was evaluated with Pearson's correlation coefficient. RESULTS: Spatial heterogeneity in terms of lesion frequency distribution in the body, metabolic activity, and lesion volume was seen between the three subsets of cancer patients. Lesion feature maps showed anatomical locations with low versus high mean feature value among lesions sampled in space and also highlighted sites with high variation between lesions in each cancer subset. Spatial properties of the lesion masks in subject space correlated strongly with the same properties measured in template space (lesion volume, R = 0.986, p < 0.001; total metabolic volume, R = 0.988, p < 0.001; maximum within-patient lesion distance, R = 0.997, p < 0.001). Lesion volume and total metabolic volume increased on average from subject to template space (lesion volume, 3.1 +/- 52 ml; total metabolic volume, 53.9 +/- 229 ml). Pair-wise lesion distance decreased on average by 0.1 +/- 1.6 cm and maximum within-patient lesion distance increased on average by 0.5 +/- 2.1 cm from subject to template space. CONCLUSIONS: Spatial tumor heterogeneity between subsets of interest in cancer cohorts can successfully be explored in whole-body PET-CT images within the proposed framework. Whole-body studies are, however, especially prone to suffer from regional variation in lesion frequency, and thus statistical power, due to the non-uniform distribution of lesions across a large field of view.
Radiogenomic analysis of cellular tumor-stroma heterogeneity as a prognostic predictor in breast cancer
Fan, M.
Wang, K.
Zhang, Y.
Ge, Y.
Lu, Z.
Li, L.
J Transl Med2023Journal Article, cited 0 times
Website
TCGA-BRCA
Breast-MRI-NACT-Pilot
ACRIN 6657
ISPY1
DCE-MRI
Radiomics
Radiogenomics
Humans
Female
Middle Aged
Prognosis
*Breast Neoplasms/diagnostic imaging/genetics
Retrospective Studies
Gene Expression Profiling/methods
Biomarkers, Tumor/genetics/analysis
Thyrotropin/genetics
Tumor Microenvironment/genetics
Breast cancer
Cell subpopulation
BACKGROUND: The tumor microenvironment and intercellular communication between solid tumors and the surrounding stroma play crucial roles in cancer initiation, progression, and prognosis. Radiomics provides clinically relevant information from radiological images; however, its biological implications in uncovering tumor pathophysiology driven by cellular heterogeneity between the tumor and stroma are largely unknown. We aimed to identify radiogenomic signatures of cellular tumor-stroma heterogeneity (TSH) to improve breast cancer management and prognosis analysis. METHODS: This retrospective multicohort study included five datasets. Cell subpopulations were estimated using bulk gene expression data, and the relative difference in cell subpopulations between the tumor and stroma was used as a biomarker to categorize patients into good- and poor-survival groups. A radiogenomic signature-based model utilizing dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) was developed to target TSH, and its clinical significance in relation to survival outcomes was independently validated. RESULTS: The final cohorts of 1330 women were included for cellular TSH biomarker identification (n = 112, mean age, 57.3 years +/- 14.6) and validation (n = 886, mean age, 58.9 years +/- 13.1), radiogenomic signature of TSH identification (n = 91, mean age, 55.5 years +/- 11.4), and prognostic (n = 241) assessments. The cytotoxic lymphocyte biomarker differentiated patients into good- and poor-survival groups (p < 0.0001) and was independently validated (p = 0.014). The good survival group exhibited denser cell interconnections. The radiogenomic signature of TSH was identified and showed a positive association with overall survival (p = 0.038) and recurrence-free survival (p = 3 x 10(-4)). CONCLUSION: Radiogenomic signatures provide insights into prognostic factors that reflect the imbalanced tumor-stroma environment, thereby presenting breast cancer-specific biological implications and prognostic significance.
Conditional generative adversarial network driven radiomic prediction of mutation status based on magnetic resonance imaging of breast cancer
Huang, Z. H.
Chen, L.
Sun, Y.
Liu, Q.
Hu, P.
J Transl Med2024Journal Article, cited 0 times
TCGA-BRCA
Radiomics
Female
Generative Adversarial Network (GAN)
*Breast Neoplasms/diagnostic imaging/genetics
DNA Copy Number Variations
Bayes Theorem
Magnetic Resonance Imaging/methods
Mutation/genetics
TP53
PIK3CA
CDH1
Breast cancer
Machine learning
Magnetic Resonance Imaging (MRI)
Synthetic data generation
Radiogenomics
cGANs
BACKGROUND: Breast Cancer (BC) is a highly heterogeneous and complex disease. Personalized treatment options require the integration of multi-omic data and consideration of phenotypic variability. Radiogenomics aims to merge medical images with genomic measurements but encounters challenges due to unpaired data consisting of imaging, genomic, or clinical outcome data. In this study, we propose the utilization of a well-trained conditional generative adversarial network (cGAN) to address the unpaired data issue in radiogenomic analysis of BC. The generated images will then be used to predict the mutation status of key driver genes and BC subtypes. METHODS: We integrated the paired MRI and multi-omic (mRNA gene expression, DNA methylation, and copy number variation) profiles of 61 BC patients from The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA). To facilitate this integration, we employed a Bayesian Tensor Factorization approach to factorize the multi-omic data into 17 latent features. Subsequently, a cGAN model was trained based on the matched side-view patient MRIs and their corresponding latent features to predict MRIs for BC patients who lack MRIs. Model performance was evaluated by calculating the distance between real and generated images using the Frechet Inception Distance (FID) metric. BC subtype and mutation status of driver genes were obtained from the cBioPortal platform, where 3 genes were selected based on the number of mutated patients. A convolutional neural network (CNN) was constructed and trained using the generated MRIs for mutation status prediction. Receiver operating characteristic area under curve (ROC-AUC) and precision-recall area under curve (PR-AUC) were used to evaluate the performance of the CNN models for mutation status prediction. Precision, recall and F1 score were used to evaluate the performance of the CNN model in subtype classification. RESULTS: The FID of the images from the well-trained cGAN model based on the test set is 1.31. The CNN for TP53, PIK3CA, and CDH1 mutation prediction yielded ROC-AUC values of 0.9508, 0.7515, and 0.8136, and PR-AUC values of 0.9009, 0.7184, and 0.5007, respectively, for the three genes. Multi-class subtype prediction achieved precision, recall and F1 scores of 0.8444, 0.8435 and 0.8336 respectively. The source code and related data implementing the algorithms can be found in the project GitHub repository at https://github.com/mattthuang/BC_RadiogenomicGAN . CONCLUSION: Our study establishes cGAN as a viable tool for generating synthetic BC MRIs for mutation status prediction and subtype classification to better characterize the heterogeneity of BC in patients. The synthetic images also have the potential to significantly augment existing MRI data and circumvent issues surrounding data sharing and patient privacy for future BC machine learning studies.
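A heavily simplified PyTorch sketch of the conditional-GAN idea (generator and discriminator conditioned on the 17-dimensional multi-omic latent vector) is given below; the fully connected architecture, toy image size, and hyperparameters are illustrative assumptions and much smaller than anything used in the paper.

import torch
import torch.nn as nn

LATENT_OMICS, NOISE, IMG = 17, 64, 64 * 64   # IMG: flattened toy image size

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(NOISE + LATENT_OMICS, 256), nn.ReLU(),
                                 nn.Linear(256, IMG), nn.Tanh())
    def forward(self, z, c):                  # condition by concatenation
        return self.net(torch.cat([z, c], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG + LATENT_OMICS, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1))
    def forward(self, x, c):
        return self.net(torch.cat([x, c], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_imgs, omics):             # one alternating cGAN update
    b = real_imgs.size(0)
    fake = G(torch.randn(b, NOISE), omics)
    d_loss = bce(D(real_imgs, omics), torch.ones(b, 1)) + \
             bce(D(fake.detach(), omics), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    g_loss = bce(D(fake, omics), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()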
Radiogenomic analysis for predicting lymph node metastasis and molecular annotation of radiomic features in pancreatic cancer
Tang, Y.
Su, Y. X.
Zheng, J. M.
Zhuo, M. L.
Qian, Q. F.
Shen, Q. L.
Lin, P.
Chen, Z. K.
J Transl Med2024Journal Article, cited 0 times
Website
BACKGROUND: To provide a preoperative prediction model for lymph node metastasis in pancreatic cancer patients and provide molecular information of key radiomic features. METHODS: Two cohorts comprising 151 and 54 pancreatic cancer patients were included in the analysis. Radiomic features from the tumor region of interests were extracted by using PyRadiomics software. We used a framework that incorporated 10 machine learning algorithms and generated 77 combinations to construct radiomics-based models for lymph node metastasis prediction. Weighted gene coexpression network analysis (WGCNA) was subsequently performed to determine the relationships between gene expression levels and radiomic features. Molecular pathways enrichment analysis was performed to uncover the underlying molecular features. RESULTS: Patients in the in-house cohort (mean age, 61.3 years +/- 9.6 [SD]; 91 men [60%]) were separated into training (n = 105, 70%) and validation (n = 46, 30%) cohorts. A total of 1,239 features were extracted and subjected to machine learning algorithms. The 77 radiomic models showed moderate performance for predicting lymph node metastasis, and the combination of the StepGBM and Enet algorithms had the best performance in the training (AUC = 0.84, 95% CI = 0.77-0.91) and validation (AUC = 0.85, 95% CI = 0.73-0.98) cohorts. We determined that 15 features were core variables for lymph node metastasis. Proliferation-related processes may respond to the main molecular alterations underlying these features. CONCLUSIONS: Machine learning-based radiomics could predict the status of lymph node metastasis in pancreatic cancer, which is associated with proliferation-related alterations.
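The "77 combinations" framework amounts to crossing feature-selection methods with classifiers and scoring each pipeline. A scaled-down sketch with scikit-learn follows; the selector/classifier roster (including an elastic-net logistic regression standing in for Enet) and the data variables are assumptions for illustration.

from itertools import product
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# X_train/y_train, X_val/y_val: radiomic features and lymph-node labels (hypothetical)
selectors = {"kbest": SelectKBest(f_classif, k=15),
             "rfe": RFE(LogisticRegression(max_iter=1000), n_features_to_select=15)}
classifiers = {"enet": LogisticRegression(penalty="elasticnet", solver="saga",
                                          l1_ratio=0.5, max_iter=5000),
               "gbm": GradientBoostingClassifier(random_state=0),
               "rf": RandomForestClassifier(n_estimators=300, random_state=0)}

results = {}
for (s_name, sel), (c_name, clf) in product(selectors.items(), classifiers.items()):
    pipe = Pipeline([("scale", StandardScaler()), ("select", sel), ("clf", clf)])
    pipe.fit(X_train, y_train)
    results[f"{s_name}+{c_name}"] = roc_auc_score(y_val, pipe.predict_proba(X_val)[:, 1])
best = max(results, key=results.get)
print(best, results[best])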
Radiomic analysis reveals diverse prognostic and molecular insights into the response of breast cancer to neoadjuvant chemotherapy: a multicohort study
Breast cancer patients exhibit various response patterns to neoadjuvant chemotherapy (NAC). However, it is uncertain whether diverse tumor response patterns to NAC in breast cancer patients can predict survival outcomes. We aimed to develop and validate radiomic signatures indicative of tumor shrinkage and therapeutic response for improved survival analysis.
Effect of machine learning methods on predicting NSCLC overall survival time based on Radiomics analysis
Sun, Wenzheng
Jiang, Mingyan
Dang, Jun
Chang, Panchun
Yin, Fang-Fang
Radiation Oncology2018Journal Article, cited 0 times
Website
NSCLC
Radiomics
machine learning
Convolutional neural networks for head and neck tumor segmentation on 7-channel multiparametric MRI: a leave-one-out analysis
Bielak, Lars
Wiedenmann, Nicole
Berlin, Arnie
Nicolay, Nils Henrik
Gunashekar, Deepa Darshini
Hagele, Leonard
Lottner, Thomas
Grosu, Anca-Ligia
Bock, Michael
Radiat Oncol2020Journal Article, cited 1 times
Website
Head-Neck-Radiomics-HN1
Radiation Therapy
Magnetic Resonance Imaging (MRI)
Convolutional neural networks (CNN)
Segmentation
BACKGROUND: Automatic tumor segmentation based on Convolutional Neural Networks (CNNs) has shown to be a valuable tool in treatment planning and clinical decision making. We investigate the influence of 7 MRI input channels of a CNN with respect to the segmentation performance of head&neck cancer. METHODS: Head&neck cancer patients underwent multi-parametric MRI including T2w, pre- and post-contrast T1w, T2*, perfusion (ktrans, ve) and diffusion (ADC) measurements at 3 time points before and during radiochemotherapy. The 7 different MRI contrasts (input channels) and manually defined gross tumor volumes (primary tumor and lymph node metastases) were used to train CNNs for lesion segmentation. A reference CNN with all input channels was compared to individually trained CNNs where one of the input channels was left out to identify which MRI contrast contributes the most to the tumor segmentation task. A statistical analysis was employed to account for random fluctuations in the segmentation performance. RESULTS: The CNN segmentation performance scored up to a Dice similarity coefficient (DSC) of 0.65. The network trained without T2* data generally yielded the worst results, with DeltaDSCGTV-T = 5.7% for primary tumor and DeltaDSCGTV-Ln = 5.8% for lymph node metastases compared to the network containing all input channels. Overall, the ADC input channel showed the least impact on segmentation performance, with DeltaDSCGTV-T = 2.4% for primary tumor and DeltaDSCGTV-Ln = 2.2% respectively. CONCLUSIONS: We developed a method to reduce overall scan times in MRI protocols by prioritizing those sequences that add most unique information for the task of automatic tumor segmentation. The optimized CNNs could be used to aid in the definition of the GTVs in radiotherapy planning, and the faster imaging protocols will reduce patient scan times which can increase patient compliance. TRIAL REGISTRATION: The trial was registered retrospectively at the German Register for Clinical Studies (DRKS) under register number DRKS00003830 on August 20th, 2015.
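The leave-one-channel-out design can be expressed as a short evaluation loop once a training routine exists. In the sketch below, train_and_predict is a hypothetical helper that trains a CNN on the listed MRI channels and returns predicted masks for the test patients; only the Dice bookkeeping is concrete.

import numpy as np

def dice(pred, truth):
    # Dice similarity coefficient between two binary masks
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

CHANNELS = ["T2w", "T1w_pre", "T1w_post", "T2*", "ktrans", "ve", "ADC"]

# truth_masks: ground-truth GTV masks for the test patients (hypothetical)
reference = np.mean([dice(p, t) for p, t in zip(train_and_predict(CHANNELS), truth_masks)])
for left_out in CHANNELS:
    subset = [c for c in CHANNELS if c != left_out]
    score = np.mean([dice(p, t) for p, t in zip(train_and_predict(subset), truth_masks)])
    print(f"without {left_out}: mean DSC drop = {reference - score:.3f}")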
HLA-DQA1 expression is associated with prognosis and predictable with radiomics in breast cancer
Zhou, J.
Xie, T.
Shan, H.
Cheng, G.
Radiat Oncol2023Journal Article, cited 0 times
TCGA-BRCA
Radiogenomics
Female
*Breast Neoplasms/diagnostic imaging/genetics
Retrospective Studies
HLA-DQ alpha-Chains/genetics
Prognosis
Biomarker
Breast cancer
Human leukocyte antigen
Radiomics
BACKGROUND: High HLA-DQA1 expression is associated with a better prognosis in many cancers. However, the association between HLA-DQA1 expression and prognosis of breast cancer and the noninvasive assessment of HLA-DQA1 expression are still unclear. This study aimed to reveal the association and investigate the potential of radiomics to predict HLA-DQA1 expression in breast cancer. METHODS: In this retrospective study, transcriptome sequencing data, medical imaging data, clinical and follow-up data were downloaded from the TCIA ( https://www.cancerimagingarchive.net/ ) and TCGA ( https://portal.gdc.cancer.gov/ ) databases. The clinical characteristic differences between the high HLA-DQA1 expression group (HHD group) and the low HLA-DQA1 expression group were explored. Gene set enrichment analysis, Kaplan-Meier survival analysis and Cox regression were performed. Then, 107 dynamic contrast-enhanced magnetic resonance imaging features were extracted, including size, shape and texture. Using recursive feature elimination and gradient boosting machine, a radiomics model was established to predict HLA-DQA1 expression. Receiver operating characteristic (ROC) curves, precision-recall curves, calibration curves, and decision curves were used for model evaluation. RESULTS: The HHD group had better survival outcomes. The differentially expressed genes in the HHD group were significantly enriched in oxidative phosphorylation (OXPHOS) and estrogen response early and late signalling pathways. The radiomic score (RS) output from the model was associated with HLA-DQA1 expression. The area under the ROC curves (95% CI), accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the radiomic model were 0.866 (0.775-0.956), 0.825, 0.939, 0.7, 0.775, and 0.913 in the training set and 0.780 (0.629-0.931), 0.659, 0.81, 0.5, 0.63, and 0.714 in the validation set, respectively, showing a good prediction effect. CONCLUSIONS: High HLA-DQA1 expression is associated with a better prognosis in breast cancer. Quantitative radiomics as a noninvasive imaging biomarker has potential value for predicting HLA-DQA1 expression.
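The recursive-feature-elimination plus gradient-boosting pipeline has a direct scikit-learn analogue; the sketch below is an assumed reconstruction, with X_train/X_val standing in for the 107 DCE-MRI features and y for binarized HLA-DQA1 expression (variable names and the selected feature count are illustrative).

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import Pipeline

model = Pipeline([
    ("select", RFE(GradientBoostingClassifier(random_state=0), n_features_to_select=10)),
    ("gbm", GradientBoostingClassifier(random_state=0)),
]).fit(X_train, y_train)

rs = model.predict_proba(X_val)[:, 1]         # radiomic score (RS) per patient
print("validation AUC:", roc_auc_score(y_val, rs))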
Comparison of deep learning networks for fully automated head and neck tumor delineation on multi-centric PET/CT images
Wang, Yiling
Lombardo, Elia
Huang, Lili
Avanzo, Michele
Fanetti, Giuseppe
Franchin, Giovanni
Zschaeck, Sebastian
Weingärtner, Julian
Belka, Claus
Riboldi, Marco
Kurz, Christopher
Landry, Guillaume
Radiation Oncology2024Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
PET-CT
Deep Learning
Head and Neck
OBJECTIVES: Deep learning-based auto-segmentation of head and neck cancer (HNC) tumors is expected to have better reproducibility than manual delineation. Positron emission tomography (PET) and computed tomography (CT) are commonly used in tumor segmentation. However, current methods still face challenges in handling whole-body scans where a manual selection of a bounding box may be required. Moreover, different institutions might still apply different guidelines for tumor delineation. This study aimed at exploring the auto-localization and segmentation of HNC tumors from entire PET/CT scans and investigating the transferability of trained baseline models to external real world cohorts. METHODS: We employed 2D Retina Unet to find HNC tumors from whole-body PET/CT and utilized a regular Unet to segment the union of the tumor and involved lymph nodes. In comparison, 2D/3D Retina Unets were also implemented to localize and segment the same target in an end-to-end manner. The segmentation performance was evaluated via Dice similarity coefficient (DSC) and Hausdorff distance 95th percentile (HD95). Delineated PET/CT scans from the HECKTOR challenge were used to train the baseline models by 5-fold cross-validation. Another 271 delineated PET/CTs from three different institutions (MAASTRO, CRO, BERLIN) were used for external testing. Finally, facility-specific transfer learning was applied to investigate the improvement of segmentation performance against baseline models. RESULTS: Encouraging localization results were observed, achieving a maximum omnidirectional tumor center difference lower than 6.8 cm for external testing. The three baseline models yielded similar averaged cross-validation (CV) results with a DSC in a range of 0.71-0.75, while the averaged CV HD95 was 8.6, 10.7 and 9.8 mm for the regular Unet, 2D and 3D Retina Unets, respectively. More than a 10% drop in DSC and a 40% increase in HD95 were observed if the baseline models were tested on the three external cohorts directly. After the facility-specific training, an improvement in external testing was observed for all models. The regular Unet had the best DSC (0.70) for the MAASTRO cohort, and the best HD95 (7.8 and 7.9 mm) in the MAASTRO and CRO cohorts. The 2D Retina Unet had the best DSC (0.76 and 0.67) for the CRO and BERLIN cohorts, and the best HD95 (12.4 mm) for the BERLIN cohort. CONCLUSION: The regular Unet outperformed the other two baseline models in CV and most external testing cohorts. Facility-specific transfer learning can potentially improve HNC segmentation performance for individual institutions, where the 2D Retina Unets could achieve comparable or even better results than the regular Unet.
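Both evaluation metrics used here, DSC and HD95, can be computed from binary masks with NumPy and SciPy alone; a small self-contained sketch follows (the isotropic 1 mm default voxel spacing is an assumption).

import numpy as np
from scipy import ndimage as ndi

def surface(mask):
    # Boolean surface voxels: the mask minus its erosion
    return mask & ~ndi.binary_erosion(mask)

def dsc(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    # Symmetric 95th-percentile Hausdorff distance between binary masks
    dist_to_b = ndi.distance_transform_edt(~surface(b), sampling=spacing)
    dist_to_a = ndi.distance_transform_edt(~surface(a), sampling=spacing)
    d_ab = dist_to_b[surface(a)]   # surface of a -> nearest surface voxel of b
    d_ba = dist_to_a[surface(b)]
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))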
A 4D-CBCT correction network based on contrastive learning for dose calculation in lung cancer
Cao, N.
Wang, Z.
Ding, J.
Zhang, H.
Zhang, S.
Gao, L.
Sun, J.
Xie, K.
Ni, X.
Radiat Oncol2024Journal Article, cited 0 times
Website
4D-Lung
*Lung Neoplasms/diagnostic imaging/radiotherapy
*Carcinoma, Non-Small-Cell Lung
*Spiral Cone-Beam Computed Tomography
Cone-Beam Computed Tomography/methods
Image Processing, Computer-Assisted/methods
Four-Dimensional Computed Tomography
Radiotherapy Planning, Computer-Assisted/methods
4d-cbct
Deep learning
Image quality correction
Lung cancer
OBJECTIVE: This study aimed to present a deep-learning network called contrastive learning-based cycle generative adversarial networks (CLCGAN) to mitigate streak artifacts and correct the CT value in four-dimensional cone beam computed tomography (4D-CBCT) for dose calculation in lung cancer patients. METHODS: 4D-CBCT and 4D computed tomography (CT) scans of 20 patients with locally advanced non-small cell lung cancer were used to train the deep-learning model in a paired fashion. The lung tumors were located in the right upper lobe, right lower lobe, left upper lobe, and left lower lobe, or in the mediastinum. Additionally, data from five patients were used to create 4D synthetic computed tomography (sCT) images for testing. Using the 4D-CT as the ground truth, the quality of the 4D-sCT images was evaluated by quantitative and qualitative assessment methods. The correction of CT values was evaluated holistically and locally. To further validate the accuracy of the dose calculations, we compared the dose distributions and calculations of 4D-CBCT and 4D-sCT with those of 4D-CT. RESULTS: The structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) of the 4D-sCT increased from 87% and 22.31 dB to 98% and 29.15 dB, respectively. Compared with cycle consistent generative adversarial networks, CLCGAN enhanced SSIM and PSNR by 1.1% (p < 0.01) and 0.42% (p < 0.01). Furthermore, CLCGAN significantly decreased the absolute mean differences of CT value in lungs, bones, and soft tissues. The dose calculation results revealed a significant improvement in 4D-sCT compared to 4D-CBCT. CLCGAN was the most accurate in dose calculations for left lung (V5Gy), right lung (V5Gy), right lung (V20Gy), PTV (D98%), and spinal cord (D2%), with relative dose differences reduced by 6.84%, 3.84%, 1.46%, 0.86%, and 3.32%, respectively, compared to 4D-CBCT. CONCLUSIONS: Based on the satisfactory results obtained in terms of image quality and CT value measurement, it can be concluded that CLCGAN-based corrected 4D-CBCT can be utilized for dose calculation in lung cancer.
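The SSIM and PSNR figures quoted above correspond to standard scikit-image metrics; a minimal sketch, assuming ct and sct are same-shaped NumPy volumes in Hounsfield units, is:

from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# ct: 4D-CT phase volume (ground truth), sct: CLCGAN-corrected volume (hypothetical)
data_range = float(ct.max() - ct.min())
ssim = structural_similarity(ct, sct, data_range=data_range)
psnr = peak_signal_noise_ratio(ct, sct, data_range=data_range)
print(f"SSIM={ssim:.3f}, PSNR={psnr:.2f} dB")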
BACKGROUND: Matrix factorization is a well-established pattern discovery tool that has seen numerous applications in biomedical data analytics, such as gene expression co-clustering, patient stratification, and gene-disease association mining. Matrix factorization learns a latent data model that takes a data matrix and transforms it into a latent feature space enabling generalization, noise removal and feature discovery. However, factorization algorithms are numerically intensive, and hence there is a pressing challenge to scale current algorithms to work with large datasets. Our focus in this paper is matrix tri-factorization, a popular method that is not limited by the assumption of standard matrix factorization about data residing in one latent space. Matrix tri-factorization solves this by inferring a separate latent space for each dimension in a data matrix, and a latent mapping of interactions between the inferred spaces, making the approach particularly suitable for biomedical data mining. RESULTS: We developed a block-wise approach for latent factor learning in matrix tri-factorization. The approach partitions a data matrix into disjoint submatrices that are treated independently and fed into a parallel factorization system. An appealing property of the proposed approach is its mathematical equivalence with serial matrix tri-factorization. In a study on large biomedical datasets we show that our approach scales well on multi-processor and multi-GPU architectures. On a four-GPU system we demonstrate that our approach can be more than 100-times faster than its single-processor counterpart. CONCLUSIONS: A general approach for scaling non-negative matrix tri-factorization is proposed. The approach is especially useful for parallel matrix factorization implemented in a multi-GPU environment. We expect the new approach will be useful in emerging procedures for latent factor analysis, notably for data integration, where many large data matrices need to be collectively factorized.
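For reference, the serial tri-factorization being parallelized minimizes ||X - U S V^T|| with non-negative factors; the standard multiplicative updates can be sketched in a few lines of NumPy (ranks, iteration count, and the block-wise parallelization itself are omitted or assumed).

import numpy as np

def nmtf(X, k1, k2, iters=200, eps=1e-9):
    # Non-negative matrix tri-factorization X ~ U S V^T via multiplicative updates
    rng = np.random.default_rng(0)
    n, m = X.shape
    U, S, V = rng.random((n, k1)), rng.random((k1, k2)), rng.random((m, k2))
    for _ in range(iters):
        U *= (X @ V @ S.T) / (U @ (S @ (V.T @ V) @ S.T) + eps)
        V *= (X.T @ U @ S) / (V @ (S.T @ (U.T @ U) @ S) + eps)
        S *= (U.T @ X @ V) / ((U.T @ U) @ S @ (V.T @ V) + eps)
    return U, S, V

# The paper's block-wise scheme partitions X into disjoint submatrices and
# updates the matching row blocks of U and column blocks of V in parallel,
# which it shows to be mathematically equivalent to the serial updates above.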
Magnetic resonance imaging and molecular features associated with tumor-infiltrating lymphocytes in breast cancer
Wu, Jia
Li, Xuejie
Teng, Xiaodong
Rubin, Daniel L
Napel, Sandy
Daniel, Bruce L
Li, Ruijiang
Breast Cancer Research2018Journal Article, cited 0 times
Website
ispy-1 DCE-MRI
breast cancer
tcga
Tumour heterogeneity revealed by unsupervised decomposition of dynamic contrast-enhanced magnetic resonance imaging is associated with underlying gene expression patterns and poor survival in breast cancer patients
Fan, M.
Xia, P.
Liu, B.
Zhang, L.
Wang, Y.
Gao, X.
Li, L.
Breast Cancer Res2019Journal Article, cited 3 times
Website
ISPY1
TCGA-BRCA
BREAST
Radiogenomics
BACKGROUND: Heterogeneity is a common finding within tumours. We evaluated the imaging features of tumours based on the decomposition of tumoural dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data to identify their prognostic value for breast cancer survival and to explore their biological importance. METHODS: Imaging features (n = 14), such as texture, histogram distribution and morphological features, were extracted to determine their associations with recurrence-free survival (RFS) in patients in the training cohort (n = 61) from The Cancer Imaging Archive (TCIA). The prognostic value of the features was evaluated in an independent dataset of 173 patients (i.e. the reproducibility cohort) from the TCIA I-SPY 1 TRIAL dataset. Radiogenomic analysis was performed in an additional cohort, the radiogenomic cohort (n = 87), using DCE-MRI from TCGA-BRCA and corresponding gene expression data from The Cancer Genome Atlas (TCGA). The MRI tumour area was decomposed by convex analysis of mixtures (CAM), resulting in 3 components that represent plasma input, fast-flow kinetics and slow-flow kinetics. The prognostic MRI features were associated with the gene expression module in which the pathway was analysed. Furthermore, a multigene signature for each prognostic imaging feature was built, and the prognostic value for RFS and overall survival (OS) was confirmed in an additional cohort from TCGA. RESULTS: Three image features (i.e. the maximum probability from the precontrast MR series, the median value from the second postcontrast series and the overall tumour volume) were independently correlated with RFS (p values of 0.0018, 0.0036 and 0.0032, respectively). The maximum probability feature from the fast-flow kinetics subregion was also significantly associated with RFS and OS in the reproducibility cohort. Additionally, this feature had a high correlation with the gene expression module (r = 0.59), and the pathway analysis showed that Ras signalling, a breast cancer-related pathway, was significantly enriched (corrected p value = 0.0044). Gene signatures (n = 43) associated with the maximum probability feature were assessed for associations with RFS (p = 0.035) and OS (p = 0.027) in an independent dataset containing 1010 gene expression samples. Among the 43 gene signatures, Ras signalling was also significantly enriched. CONCLUSIONS: Dynamic pattern deconvolution revealed that tumour heterogeneity was associated with poor survival and cancer-related pathways in breast cancer.
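The survival associations reported above are the kind of analysis the lifelines package expresses compactly; a minimal sketch, with hypothetical column names for the most prognostic imaging feature, follows.

import pandas as pd
from lifelines import CoxPHFitter

# One row per patient: imaging feature, follow-up time, recurrence indicator
# (feature_values, followup_months, event_indicator are hypothetical arrays)
df = pd.DataFrame({"max_prob_fastflow": feature_values,
                   "rfs_months": followup_months,
                   "recurred": event_indicator})
cph = CoxPHFitter().fit(df, duration_col="rfs_months", event_col="recurred")
cph.print_summary()                       # hazard ratio and p-value for the feature
print("C-index:", cph.concordance_index_)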
Noninvasive imaging signatures of HER2 and HR using ADC in invasive breast cancer: repeatability, reproducibility, and association with pathological complete response to neoadjuvant chemotherapy
Teng, X.
Zhang, J.
Zhang, X.
Fan, X.
Zhou, T.
Huang, Y. H.
Wang, L.
Lee, E. Y. P.
Yang, R.
Cai, J.
Breast Cancer Res2023Journal Article, cited 0 times
Website
BACKGROUND: The immunohistochemical test (IHC) of HER2 and HR can provide prognostic information and treatment guidance for invasive breast cancer patients. We aimed to develop noninvasive image signatures IS(HER2) and IS(HR) of HER2 and HR, respectively, and to independently evaluate their repeatability, reproducibility, and association with pathological complete response (pCR) to neoadjuvant chemotherapy. METHODS: Pre-treatment DWI, IHC receptor status HER2/HR, and pCR to neoadjuvant chemotherapy of 222 patients from the multi-institutional ACRIN 6698 trial were retrospectively collected. They were pre-separated for development, independent validation, and test-retest. 1316 image features were extracted from DWI-derived ADC maps within manual tumor segmentations. IS(HER2) and IS(HR) were developed by RIDGE logistic regression using non-redundant and test-retest reproducible features relevant to IHC receptor status. We evaluated their association with pCR using the area under receiver operating curve (AUC) and odds ratio (OR) after binarization. Their reproducibility was further evaluated using the test-retest set with the intra-class coefficient of correlation (ICC). RESULTS: A 5-feature IS(HER2) targeting HER2 was developed (AUC = 0.70, 95% CI 0.59 to 0.82) and validated (AUC = 0.72, 95% CI 0.58 to 0.86) with high perturbation repeatability (ICC = 0.92) and test-retest reproducibility (ICC = 0.83). IS(HR) was developed using 5 features with higher association with HR during development (AUC = 0.75, 95% CI 0.66 to 0.84) and validation (AUC = 0.74, 95% CI 0.61 to 0.86) and similar repeatability (ICC = 0.91) and reproducibility (ICC = 0.82). Both image signatures showed significant associations with pCR, with AUC of 0.65 (95% CI 0.50 to 0.80) for IS(HER2) and 0.64 (95% CI 0.50 to 0.78) for IS(HR) in the validation cohort. Patients with high IS(HER2) were more likely to achieve pCR to neoadjuvant chemotherapy with validation OR of 4.73 (95% CI 1.64 to 13.65, P value = 0.006). Low IS(HR) patients had higher pCR with OR = 0.29 (95% CI 0.10 to 0.81, P value = 0.021). Molecular subtypes derived from the image signatures showed comparable pCR prediction values to IHC-based molecular subtypes (P value > 0.05). CONCLUSION: Robust ADC-based image signatures were developed and validated for noninvasive evaluation of IHC receptors HER2 and HR. We also confirmed their value in predicting treatment response to neoadjuvant chemotherapy. Further evaluations in treatment guidance are warranted to fully validate their potential as IHC surrogates.
A deep learning pipeline to simulate fluorodeoxyglucose (FDG) uptake in head and neck cancers using non-contrast CT images without the administration of radioactive tracer
Chandrashekar, A.
Handa, A.
Ward, J.
Grau, V.
Lee, R.
Insights Imaging2022Journal Article, cited 0 times
Website
Head-Neck-PET-CT
Deep learning
Generative adversarial network
Head and neck cancer
Positron Emission Tomography (PET)
Tomography (X-ray computed)
Computed Tomography (CT)
OBJECTIVES: Positron emission tomography (PET) imaging is a costly tracer-based imaging modality used to visualise abnormal metabolic activity for the management of malignancies. The objective of this study is to demonstrate that non-contrast CTs alone can be used to differentiate regions with different Fluorodeoxyglucose (FDG) uptake and simulate PET images to guide clinical management. METHODS: Paired FDG-PET and CT images (n = 298 patients) with diagnosed head and neck squamous cell carcinoma (HNSCC) were obtained from The Cancer Imaging Archive. Random forest (RF) classification of CT-derived radiomic features was used to differentiate metabolically active (tumour) and inactive tissues (e.g., thyroid tissue). Subsequently, a deep learning generative adversarial network (GAN) was trained for this CT to PET transformation task without tracer injection. The simulated PET images were evaluated for technical accuracy (PERCIST v.1 criteria) and their ability to predict clinical outcome [(1) locoregional recurrence, (2) distant metastasis and (3) patient survival]. RESULTS: From 298 patients, 683 hot spots of elevated FDG uptake (elevated SUV, 6.03 +/- 1.71) were identified. RF models of intensity-based CT-derived radiomic features were able to differentiate regions of negligible, low and elevated FDG uptake within and surrounding the tumour. Using the GAN-simulated PET image alone, we were able to predict clinical outcome to the same accuracy as that achieved using FDG-PET images. CONCLUSION: This pipeline demonstrates a deep learning methodology to simulate PET images from CT images in HNSCC without the use of radioactive tracer. The same pipeline can be applied to other pathologies that require PET imaging.
Quantifying lung cancer heterogeneity using novel CT features: a cross-institute study
Wang, Z.
Yang, C.
Han, W.
Sui, X.
Zheng, F.
Xue, F.
Xu, X.
Wu, P.
Chen, Y.
Gu, W.
Song, W.
Jiang, J.
Insights Imaging2022Journal Article, cited 0 times
Website
NSCLC Radiogenomics
LungCT-Diagnosis
RIDER Lung CT
Lung neoplasms
Precision medicine
Prognosis
Tomography (X-ray computed)
Computed Tomography (CT)
BACKGROUND: Radiomics-based image metrics are not used in the clinic despite the rapidly growing literature. We selected eight promising radiomic features and validated their value in decoding lung cancer heterogeneity. METHODS: CT images of 236 lung cancer patients were obtained from three different institutes, whereupon radiomic features were extracted according to a standardized procedure. The predictive value for patient long-term prognosis and association with routinely used semantic, genetic (e.g., epidermal growth factor receptor (EGFR)), and histopathological cancer profiles were validated. Feature measurement reproducibility was assessed. RESULTS: All eight selected features were robust across repeat scans (intraclass coefficient range: 0.81-0.99), and were associated with at least one of the cancer profiles: prognostic, semantic, genetic, and histopathological. For instance, "kurtosis" had a high predictive value of early death (AUC at first year: 0.70-0.75 in two independent cohorts), negative association with histopathological grade (Spearman's r: - 0.30), and altered expression levels regarding EGFR mutation and semantic characteristics (solid intensity, spiculated shape, juxtapleural location, and pleura tag; all p < 0.05). Combined as a radiomic score, the features had a higher area under curve for predicting 5-year survival (train: 0.855, test: 0.780, external validation: 0.760) than routine characteristics (0.733, 0.622, 0.613, respectively), and a better capability in patient death risk stratification (hazard ratio: 5.828, 95% confidence interval: 2.915-11.561) than histopathological staging and grading. CONCLUSIONS: We highlighted the clinical value of radiomic features. Following confirmation, these features may change the way in which we approach CT imaging and improve the individualized care of lung cancer patients.
Artificial CT images can enhance variation of case images in diagnostic radiology skills training
Hofmeijer, E. I. S.
Wu, S. C.
Vliegenthart, R.
Slump, C. H.
van der Heijden, F.
Tan, C. O.
Insights Imaging2023Journal Article, cited 0 times
LIDC-IDRI
Synthetic images
Artificial image
Artificial intelligence
Medical image education
Personalized education
Radiology
Classification
OBJECTIVES: We sought to investigate if artificial medical images can blend with original ones and whether they adhere to the variable anatomical constraints provided. METHODS: Artificial images were generated with a generative model trained on publicly available standard and low-dose chest CT images (805 scans; 39,803 2D images), of which 17% contained evidence of pathological formations (lung nodules). The test set (90 scans; 5121 2D images) was used to assess if artificial images (512 x 512 primary and control image sets) blended in with original images, using both quantitative metrics and expert opinion. We further assessed if pathology characteristics in the artificial images can be manipulated. RESULTS: Primary and control artificial images attained an average objective similarity of 0.78 +/- 0.04 (ranging from 0 [entirely dissimilar] to 1 [identical]) and 0.76 +/- 0.06, respectively. Five radiologists with experience in chest and thoracic imaging provided a subjective measure of image quality; they rated artificial images as 3.13 +/- 0.46 (range of 1 [unrealistic] to 4 [almost indistinguishable from the original image]), close to their rating of the original images (3.73 +/- 0.31). Radiologists clearly distinguished images in the control sets (2.32 +/- 0.48 and 1.07 +/- 0.19). In almost a quarter of the scenarios, they were not able to distinguish primary artificial images from the original ones. CONCLUSION: Artificial images can be generated in a way such that they blend in with original images and adhere to anatomical constraints, which can be manipulated to augment the variability of cases. CRITICAL RELEVANCE STATEMENT: Artificial medical images can be used to enhance the availability and variety of medical training images by creating new but comparable images that can blend in with original images. KEY POINTS: * Artificial images, similar to original ones, can be created using generative networks. * Pathological features of artificial images can be adjusted through guiding the network. * Artificial images proved viable to augment the depth and broadening of diagnostic training.
A multi-model based on radiogenomics and deep learning techniques associated with histological grade and survival in clear cell renal cell carcinoma
Wang, S.
Zhu, C.
Jin, Y.
Yu, H.
Wu, L.
Zhang, A.
Wang, B.
Zhai, J.
Insights Imaging2023Journal Article, cited 0 times
TCGA-KIRC
Computed Tomography (CT)
Deep learning
Radiomics
Renal cell carcinoma
Clear cell renal cell carcinoma (ccRCC)
OBJECTIVES: This study aims to evaluate the efficacy of multi-model incorporated by radiomics, deep learning, and transcriptomics features for predicting pathological grade and survival in patients with clear cell renal cell carcinoma (ccRCC). METHODS: In this study, data were collected from 177 ccRCC patients, including radiomics features, deep learning (DL) features, and RNA sequencing data. Diagnostic models were then created using these data through least absolute shrinkage and selection operator (LASSO) analysis. Additionally, a multi-model was developed by combining radiomics, DL, and transcriptomics features. The prognostic performance of the multi-model was evaluated based on progression-free survival (PFS) and overall survival (OS) outcomes, assessed using Harrell's concordance index (C-index). Furthermore, we conducted an analysis to investigate the relationship between the multi-model and immune cell infiltration. RESULTS: The multi-model demonstrated favorable performance in discriminating pathological grade, with area under the ROC curve (AUC) values of 0.946 (95% CI: 0.912-0.980) and 0.864 (95% CI: 0.734-0.994) in the training and testing cohorts, respectively. Additionally, it exhibited statistically significant prognostic performance for predicting PFS and OS. Furthermore, the high-grade group displayed a higher abundance of immune cells compared to the low-grade group. CONCLUSIONS: The multi-model incorporated radiomics, DL, and transcriptomics features demonstrated promising performance in predicting pathological grade and prognosis in patients with ccRCC. CRITICAL RELEVANCE STATEMENT: We developed a multi-model to predict the grade and survival in clear cell renal cell carcinoma and explored the molecular biological significance of the multi-model of different histological grades. KEY POINTS: 1. The multi-model achieved an AUC of 0.864 for assessing pathological grade. 2. The multi-model exhibited an association with survival in ccRCC patients. 3. The high-grade group demonstrated a greater abundance of immune cells.
Two-phase multi-model automatic brain tumour diagnosis system from magnetic resonance images using convolutional neural networks
Abd-Ellah, Mahmoud Khaled
Awad, Ali Ismail
Khalaf, Ashraf AM
Hamed, Hesham FA
EURASIP Journal on Image and Video Processing2018Journal Article, cited 0 times
Website
RIDER Neuro MRI
Convolutional Neural Network (CNN)
Deep Learning
Radiomics
AI-based MRI auto-segmentation of brain tumor in rodents, a multicenter study
Wang, Shuncong
Pang, Xin
de Keyzer, Frederik
Feng, Yuanbo
Swinnen, Johan V.
Yu, Jie
Ni, Yicheng
Acta Neuropathol Commun2023Journal Article, cited 0 times
Mouse-Astrocytoma
Rats
Mice
Animals
*Artificial Intelligence
Image Processing, Computer-Assisted/methods
Rodentia
*Brain Neoplasms/diagnostic imaging
Artificial intelligence
Brain malignancy
Magnetic Resonance Imaging (MRI)
Glioblastoma Multiforme (GBM)
BRAIN
Rodent
Segmentation
Automatic segmentation of rodent brain tumors on magnetic resonance imaging (MRI) may facilitate biomedical research. The current study aims to demonstrate the feasibility of automatic segmentation by artificial intelligence (AI) and the practicality of AI-assisted segmentation. MRI images, including T2WI, T1WI and CE-T1WI, of brain tumors from 57 WAG/Rij rats in KU Leuven and 46 mice from The Cancer Imaging Archive (TCIA) were collected. A 3D U-Net architecture was adopted for segmentation of the tumor-bearing brain and the brain tumor. After training, these models were tested with both datasets after Gaussian noise addition. Reduction of inter-observer disparity by AI-assisted segmentation was also evaluated. The AI model segmented the tumor-bearing brain well for both the Leuven and TCIA datasets, with Dice similarity coefficients (DSCs) of 0.87 and 0.85, respectively. After noise addition, the performance remained unchanged when the signal-to-noise ratio (SNR) was higher than two or eight, respectively. For the segmentation of tumor lesions, the AI-based model yielded DSCs of 0.70 and 0.61 for the Leuven and TCIA datasets, respectively. Similarly, the performance remained uncompromised when the SNR was over two and eight, respectively. AI-assisted segmentation could significantly reduce the inter-observer disparities and segmentation time in both rats and mice. Both AI models for segmenting the brain or tumor lesions could improve inter-observer agreement and therefore contributed to the standardization of the following biomedical studies.
Analyzing MRI scans to detect glioblastoma tumor using hybrid deep belief networks
Reddy, Annapareddy V. N.
Krishna, Ch Phani
Mallick, Pradeep Kumar
Satapathy, Sandeep Kumar
Tiwari, Prayag
Zymbler, Mikhail
Kumar, Sachin
Journal of Big Data2020Journal Article, cited 0 times
Website
RIDER NEURO MRI
Glioblastoma Multiforme (GBM)
Deep learning
Glioblastoma (GBM) is a stage 4 malignant tumor in which a large portion of tumor cells are reproducing and dividing at any moment. These tumors are life-threatening and may result in partial or complete mental and physical disability. In this study, we have proposed a classification model using hybrid deep belief networks (DBN) to classify magnetic resonance imaging (MRI) scans for GBM tumors. DBN is composed of stacked restricted Boltzmann machines (RBM). DBN often requires a large number of hidden layers that consist of a large number of neurons to learn the best features from the raw image data. Hence, computational and space complexity is high, and a lot of training time is required. The proposed approach combines DTW with DBN to improve the efficiency of the existing DBN model. The results are validated using several statistical parameters. Statistical validation verifies that the combination of DTW and DBN outperformed the other classifiers in terms of training time, space complexity and classification accuracy.
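A DBN can be approximated in scikit-learn by stacking BernoulliRBM transformers with a supervised top layer; the sketch below shows that skeleton only (the paper's DTW component and layer sizes are not reproduced, and all variable names are assumptions).

from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# X: flattened MRI slices, y: tumor/no-tumor labels (hypothetical inputs)
dbn = Pipeline([
    ("scale", MinMaxScaler()),            # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
]).fit(X, y)
print("accuracy:", dbn.score(X_test, y_test))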
Most-enhancing tumor volume by MRI radiomics predicts recurrence-free survival “early on” in neoadjuvant treatment of breast cancer
Drukker, Karen
Li, Hui
Antropova, Natalia
Edwards, Alexandra
Papaioannou, John
Giger, Maryellen L
Cancer Imaging2018Journal Article, cited 0 times
ACRIN-FLT-Breast
Radiomics
BREAST
BACKGROUND: The hypothesis of this study was that MRI-based radiomics has the ability to predict recurrence-free survival "early on" in breast cancer neoadjuvant chemotherapy. METHODS: A subset, based on availability, of the ACRIN 6657 dynamic contrast-enhanced MR images was used in which we analyzed images of all women imaged at pre-treatment baseline (141 women: 40 with a recurrence, 101 without) and all those imaged after completion of the first cycle of chemotherapy, i.e., at early treatment (143 women: 37 with a recurrence vs. 105 without). Our method was completely automated apart from manual localization of the approximate tumor center. The most enhancing tumor volume (METV) was automatically calculated for the pre-treatment and early treatment exams. Performance of METV in the task of predicting a recurrence was evaluated using ROC analysis. The association of recurrence-free survival with METV was assessed using a Cox regression model controlling for patient age, race, and hormone receptor status and evaluated by C-statistics. Kaplan-Meier analysis was used to estimate survival functions. RESULTS: The C-statistics for the association of METV with recurrence-free survival were 0.69 with 95% confidence interval of [0.58; 0.80] at pre-treatment and 0.72 [0.60; 0.84] at early treatment. The hazard ratios calculated from Kaplan-Meier curves were 2.28 [1.08; 4.61], 3.43 [1.83; 6.75], and 4.81 [2.16; 10.72] for the lowest quartile, median quartile, and upper quartile cut-points for METV at early treatment, respectively. CONCLUSION: The performance of the automatically-calculated METV rivaled that of a semi-manual model described for the ACRIN 6657 study (published C-statistic 0.72 [0.60; 0.84]), which involved the same dataset but required semi-manual delineation of the functional tumor volume (FTV) and knowledge of the pre-surgical residual cancer burden.
Radiomics for glioblastoma survival analysis in pre-operative MRI: exploring feature robustness, class boundaries, and machine learning techniques
Suter, Y.
Knecht, U.
Alao, M.
Valenzuela, W.
Hewer, E.
Schucht, P.
Wiest, R.
Reyes, M.
Cancer Imaging2020Journal Article, cited 0 times
Website
TCGA-GBM
BRATS datasets
PyRadiomics
Radiomic feature
Glioblastoma Multiforme (GBM)
BACKGROUND: This study aims to identify robust radiomic features for Magnetic Resonance Imaging (MRI), assess feature selection and machine learning methods for overall survival classification of Glioblastoma multiforme patients, and to robustify models trained on single-center data when applied to multi-center data. METHODS: Tumor regions were automatically segmented on MRI data, and 8327 radiomic features extracted from these regions. Single-center data was perturbed to assess radiomic feature robustness, with over 16 million tests of typical perturbations. Robust features were selected based on the Intraclass Correlation Coefficient to measure agreement across perturbations. Feature selectors and machine learning methods were compared to classify overall survival. Models trained on single-center data (63 patients) were tested on multi-center data (76 patients). Priors using feature robustness and clinical knowledge were evaluated. RESULTS: We observed a very large performance drop when applying models trained on single-center on unseen multi-center data, e.g. a decrease of the area under the receiver operating curve (AUC) of 0.56 for the overall survival classification boundary at 1 year. By using robust features alongside priors for two overall survival classes, the AUC drop could be reduced by 21.2%. In contrast, sensitivity was 12.19% lower when applying a prior. CONCLUSIONS: Our experiments show that it is possible to attain improved levels of robustness and accuracy when models need to be applied to unseen multi-center data. The performance on multi-center data of models trained on single-center data can be increased by using robust features and introducing prior knowledge. For successful model robustification, tailoring perturbations for robustness testing to the target dataset is key.
Are radiomics features universally applicable to different organs?
Lee, S. H.
Cho, H. H.
Kwon, J.
Lee, H. Y.
Park, H.
Cancer Imaging2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
LungCT-Diagnosis
TCGA-KIRC
CPTAC-GBM
Computed Tomography (CT)
Magnetic Resonance Imaging (MRI)
BACKGROUND: Many studies have successfully identified radiomics features reflecting macroscale tumor features and tumor microenvironment for various organs. There is an increased interest in applying these radiomics features found in a given organ to other organs. Here, we explored whether common radiomics features could be identified over target organs in vastly different environments. METHODS: Four datasets of three organs were analyzed. One radiomics model was constructed from the training set (lungs, n = 401), and was further evaluated in three independent test sets spanning three organs (lungs, n = 59; kidneys, n = 48; and brains, n = 43). Intensity histograms derived from the whole organ were compared to establish organ-level differences. We constructed a radiomics score based on selected features using training lung data over the tumor region. A total of 143 features were computed for each tumor. We adopted a feature selection approach that favored stable features, which can also capture survival. The radiomics score was applied to three independent test data from lung, kidney, and brain tumors, and whether the score could be used to separate high- and low-risk groups, was evaluated. RESULTS: Each organ showed a distinct pattern in the histogram and the derived parameters (mean and median) at the organ-level. The radiomics score trained from the lung data of the tumor region included seven features, and the score was only effective in stratifying survival for other lung data, not in other organs such as the kidney and brain. Eliminating the lung-specific feature (2.5 percentile) from the radiomics score led to similar results. There were no common features between training and test sets, but a common category of features (texture category) was identified. CONCLUSION: Although the possibility of a generally applicable model cannot be excluded, we suggest that radiomics score models for survival were mostly specific for a given organ; applying them to other organs would require careful consideration of organ-specific properties.
Deep learning for semi-automated unidirectional measurement of lung tumor size in CT
Woo, M.
Devane, A. M.
Lowe, S. C.
Lowther, E. L.
Gimbel, R. W.
Cancer Imaging2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
CPTAC-LSCC
QIN LUNG CT
TCGA-LUAD
LungCT-Diagnosis
RIDER Lung CT
LCTSC
Lung CT Segmentation Challenge 2017
Deep Learning
Annotation
LUNG
RECIST
BACKGROUND: Performing Response Evaluation Criteria in Solid Tumors (RECIST) measurement is a non-trivial task requiring much expertise and time. A deep learning-based algorithm has the potential to assist with rapid and consistent lesion measurement. PURPOSE: The aim of this study is to develop and evaluate a deep learning (DL) algorithm for semi-automated unidirectional CT measurement of lung lesions. METHODS: This retrospective study included 1617 lung CT images from 8 publicly open datasets. A convolutional neural network was trained using 1373 training and validation images annotated by two radiologists. Performance of the DL algorithm was evaluated on 244 test images annotated by one radiologist. The DL algorithm's measurement consistency with the human radiologist was evaluated using the Intraclass Correlation Coefficient (ICC) and Bland-Altman plotting. Bonferroni's method was used to analyze differences in their diagnostic behavior, attributed to tumor characteristics. Statistical significance was set at p < 0.05. RESULTS: The DL algorithm yielded an ICC score of 0.959 with the human radiologist. Bland-Altman plotting indicated that 240 (98.4%) measurements fell within the upper and lower limits of agreement (LOA). Some measurements outside the LOA revealed differences in clinical reasoning between the DL algorithm and the human radiologist. Overall, the algorithm marginally overestimated the size of lesions by 2.97% compared to human radiologists. Further investigation indicated that tumor characteristics may be associated with the DL algorithm's diagnostic behavior of over- or underestimating the lesion size compared to the human radiologist. CONCLUSIONS: The DL algorithm for unidirectional measurement of lung tumor size demonstrated excellent agreement with the human radiologist.
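The agreement statistics used here (Bland-Altman limits and a two-way random, single-measure ICC) are straightforward to reproduce with NumPy; the sketch below assumes a and b are paired 1-D arrays of DL and radiologist measurements in millimetres.

import numpy as np
import matplotlib.pyplot as plt

mean, diff = (a + b) / 2.0, a - b
bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)
plt.scatter(mean, diff, s=10)
for y in (bias, bias - loa, bias + loa):
    plt.axhline(y, linestyle="--")           # bias and 95% limits of agreement
plt.xlabel("mean of methods (mm)"); plt.ylabel("difference (mm)")
plt.savefig("bland_altman.png")

ratings = np.stack([a, b], axis=1)            # subjects x raters
n, k = ratings.shape
ms_r = k * ratings.mean(axis=1).var(ddof=1)   # between-subject mean square
ms_c = n * ratings.mean(axis=0).var(ddof=1)   # between-rater mean square
resid = ratings - ratings.mean(axis=1, keepdims=True) \
        - ratings.mean(axis=0, keepdims=True) + ratings.mean()
ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
icc = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)   # ICC(2,1)
print(f"bias={bias:.2f} mm, ICC(2,1)={icc:.3f}")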
Comparison of novel multi-level Otsu (MO-PET) and conventional PET segmentation methods for measuring FDG metabolic tumor volume in patients with soft tissue sarcoma
Lee, Inki
Im, Hyung-Jun
Solaiyappan, Meiyappan
Cho, Steve Y
EJNMMI physics2017Journal Article, cited 0 times
Website
Soft-tissue Sarcoma
Algorithm Development
Segmentation
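Although no abstract accompanies this entry, the multi-level Otsu thresholding named in the title is available off the shelf in scikit-image; a minimal sketch of isolating the highest-uptake class of a PET SUV volume under that assumption (this is an illustration, not the MO-PET implementation):

    import numpy as np
    from skimage.filters import threshold_multiotsu

    def high_uptake_mask(suv: np.ndarray, classes: int = 3) -> np.ndarray:
        """Segment an SUV array into classes; return the top-intensity class."""
        thresholds = threshold_multiotsu(suv, classes=classes)
        regions = np.digitize(suv, bins=thresholds)
        return regions == classes - 1

    # Metabolic tumor volume proxy: mask.sum() * voxel_volume_ml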
Simultaneous emission and attenuation reconstruction in time-of-flight PET using a reference object
Garcia-Perez, P.
Espana, S.
EJNMMI Phys2020Journal Article, cited 0 times
Website
Head-Neck Cetuximab
Registration
Positron emission tomography (PET)
Phantom
BACKGROUND: Simultaneous reconstruction of emission and attenuation images in time-of-flight (TOF) positron emission tomography (PET) does not provide a unique solution. In this study, we propose to solve this limitation by including additional information given by a reference object with known attenuation placed outside the patient. Different configurations of the reference object were studied, including geometry, material composition, and activity, and an optimal configuration was defined. In addition, this configuration was tested for different timing resolutions and noise levels. RESULTS: The proposed strategy was tested in 2D simulations obtained by forward projection of available PET/CT data, and noise was included using Monte Carlo techniques. The obtained results suggest that the optimal configuration corresponds to a water cylinder inserted in the patient table and filled with activity. In that case, mean differences between reconstructed and true images were below 10%. However, better results can be obtained by increasing the activity of the reference object. CONCLUSION: This study shows promising results that might make it possible to obtain an accurate attenuation map from pure TOF-PET data without prior knowledge obtained from CT, MRI, or transmission scans.
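The simulation pipeline described (forward projection of CT-derived 2D maps with Poisson counting noise) can be approximated with scikit-image's Radon transform; a rough sketch under those assumptions, not the authors' reconstruction code:

    import numpy as np
    from skimage.transform import radon

    def noisy_sinogram(activity_2d: np.ndarray, counts: float = 1e5) -> np.ndarray:
        """Forward-project a 2D activity map and apply Poisson noise."""
        # radon() assumes the object fits the inscribed circle of the image
        theta = np.linspace(0.0, 180.0, 180, endpoint=False)
        sino = radon(activity_2d, theta=theta)
        sino = sino / sino.sum() * counts   # scale to a total photon budget
        return np.random.poisson(sino).astype(float)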
Publishing descriptions of non-public clinical datasets: proposed guidance for researchers, repositories, editors and funding organisations
Hrynaszkiewicz, Iain
Khodiyar, Varsha
Hufton, Andrew L
Sansone, Susanna-Assunta
Research Integrity and Peer Review2016Journal Article, cited 8 times
Website
Open science
Sharing of experimental clinical research data usually happens between individuals or research groups rather than via public repositories, in part due to the need to protect research participant privacy. This approach to data sharing makes it difficult to connect journal articles with their underlying datasets and is often insufficient for ensuring access to data in the long term. Voluntary data sharing services such as the Yale Open Data Access (YODA) and Clinical Study Data Request (CSDR) projects have increased accessibility to clinical datasets for secondary uses while protecting patient privacy and the legitimacy of secondary analyses, but these resources are generally disconnected from journal articles, where researchers typically search for reliable information to inform future research. New scholarly journal and article types dedicated to increasing accessibility of research data have emerged in recent years and, in general, journals are developing stronger links with data repositories. There is a need for increased collaboration between journals, data repositories, researchers, funders, and voluntary data sharing services to increase the visibility and reliability of clinical research. Using the journal Scientific Data as a case study, we propose and show examples of changes to the format and peer-review process for journal articles to more robustly link them to data that are only available on request. We also propose additional features for data repositories to better accommodate non-public clinical datasets, including Data Use Agreements (DUAs).
Advanced 3D printed model of middle cerebral artery aneurysms for neurosurgery simulation
Nagassa, Ruth G
McMenamin, Paul G
Adams, Justin W
Quayle, Michelle R
Rosenfeld, Jeffrey V
3D Print Med2019Journal Article, cited 0 times
BRAIN
3D printing
Anatomical models
Aneurysm
Neurosurgical training
Simulation
Angiography
BACKGROUND: Neurosurgical residents are finding it more difficult to obtain experience as the primary operator in aneurysm surgery. The present study aimed to replicate patient-derived cranial anatomy, pathology and human tissue properties relevant to cerebral aneurysm intervention through 3D printing and 3D print-driven casting techniques. The final simulator was designed to provide accurate simulation of a human head with a middle cerebral artery (MCA) aneurysm. METHODS: This study utilized living human and cadaver-derived medical imaging data including CT angiography and MRI scans. Computer-aided design (CAD) models and pre-existing computational 3D models were also incorporated in the development of the simulator. The design was based on including anatomical components vital to the surgery of MCA aneurysms while focusing on reproducibility, adaptability and functionality of the simulator. Various methods of 3D printing were utilized for the direct development of anatomical replicas and moulds for casting components that optimized the bio-mimicry and mechanical properties of human tissues. Synthetic materials including various types of silicone and ballistics gelatin were cast in these moulds. A novel technique utilizing water-soluble wax and silicone was used to establish hollow patient-derived cerebrovascular models. RESULTS: A patient-derived 3D aneurysm model was constructed for an MCA aneurysm. Multiple cerebral aneurysm models, patient-derived and CAD, were replicated as hollow high-fidelity models. The final assembled simulator integrated six anatomical components relevant to the treatment of cerebral aneurysms of the Circle of Willis in the left cerebral hemisphere. These included models of the cerebral vasculature, cranial nerves, brain, meninges, skull and skin. The cerebral circulation was modeled through the patient-derived vasculature within the brain model. Linear and volumetric measurements of specific physical modular components were repeated, averaged and compared to the original 3D meshes generated from the medical imaging data. Calculation of the concordance correlation coefficient (ρc: 90.2%-99.0%) and percentage difference (≤0.4%) confirmed the accuracy of the models. CONCLUSIONS: A multi-disciplinary approach involving 3D printing and casting techniques was used to successfully construct a multi-component cerebral aneurysm surgery simulator. Further study is planned to demonstrate the educational value of the proposed simulator for neurosurgery residents.
What do we know about volumetric medical image interpretation?: a review of the basic science and medical image perception literatures
Williams, Lauren H.
Drew, Trafton
2019Journal Article, cited 0 times
CPTAC-SAR
Interpretation of volumetric medical images represents a rapidly growing proportion of the workload in radiology. However, relatively little is known about the strategies that best guide search behavior when looking for abnormalities in volumetric images. Although there is extensive literature on two-dimensional medical image perception, it is an open question whether the conclusions drawn from these images can be generalized to volumetric images. Importantly, volumetric images have distinct characteristics (e.g., scrolling through depth, smooth-pursuit eye-movements, motion onset cues, etc.) that should be considered in future research. In this manuscript, we will review the literature on medical image perception and discuss relevant findings from basic science that can be used to generate predictions about expertise in volumetric image interpretation. By better understanding search through volumetric images, we may be able to identify common sources of error, characterize the optimal strategies for searching through depth, or develop new training and assessment techniques for radiology residents.
Breast MRI radiomics: comparison of computer- and human-extracted imaging phenotypes
Sutton, Elizabeth J
Huang, Erich P
Drukker, Karen
Burnside, Elizabeth S
Li, Hui
Net, Jose M
Rao, Arvind
Whitman, Gary J
Zuley, Margarita
Ganott, Marie
Bonaccio, Ermelinda
Giger, Maryellen L
Morris, Elizabeth A
European Radiology Experimental2017Journal Article, cited 17 times
Website
TCGA-BRCA
BREAST
Magnetic Resonance Imaging (MRI)
inter-observer variability
Machine learning
Radiomics
Background: In this study, we sought to investigate if computer-extracted magnetic resonance imaging (MRI) phenotypes of breast cancer could replicate human-extracted size and Breast Imaging-Reporting and Data System (BI-RADS) imaging phenotypes using MRI data from The Cancer Genome Atlas (TCGA) project of the National Cancer Institute. Methods: Our retrospective interpretation study involved analysis of Health Insurance Portability and Accountability Act-compliant breast MRI data from The Cancer Imaging Archive, an open-source database from the TCGA project. This study was exempt from institutional review board approval at Memorial Sloan Kettering Cancer Center and the need for informed consent was waived. Ninety-one pre-operative breast MRIs with verified invasive breast cancers were analysed. Three fellowship-trained breast radiologists evaluated the index cancer in each case according to size and the BI-RADS lexicon for shape, margin, and enhancement (human-extracted image phenotypes [HEIP]). Human inter-observer agreement was analysed by the intra-class correlation coefficient (ICC) for size and Krippendorff's alpha for other measurements. Quantitative MRI radiomics of computerised three-dimensional segmentations of each cancer generated computer-extracted image phenotypes (CEIP). Spearman's rank correlation coefficients were used to compare HEIP and CEIP. Results: Inter-observer agreement for HEIP varied, with the highest agreement seen for size (ICC 0.679) and shape (ICC 0.527). The computer-extracted maximum linear size replicated the human measurement with p < 10^-12. CEIP of shape, specifically sphericity and irregularity, replicated HEIP with both p values < 0.001. CEIP did not demonstrate agreement with HEIP of tumour margin or internal enhancement. Conclusions: Quantitative radiomics of breast cancer may replicate human-extracted tumour size and BI-RADS imaging phenotypes, thus enabling precision medicine.
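The HEIP-vs-CEIP comparison reduces to rank correlation between human and computer phenotypes; a minimal sketch with scipy, where the paired arrays are hypothetical stand-ins for the paper's measurements:

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical paired phenotypes: radiologist size vs. computer-extracted size (mm)
    human_size = np.array([12.0, 25.5, 18.2, 30.1, 22.4])
    computer_size = np.array([11.4, 26.3, 17.8, 31.0, 21.9])

    rho, p_value = spearmanr(human_size, computer_size)
    print(f"Spearman rho={rho:.3f}, p={p_value:.2e}")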
Textural radiomic features and time-intensity curve data analysis by dynamic contrast-enhanced MRI for early prediction of breast cancer therapy response: preliminary data
Fusco, Roberta
Granata, Vincenza
Maio, Francesca
Sansone, Mario
Petrillo, Antonella
Eur Radiol Exp2020Journal Article, cited 1 times
Website
BREAST
QIN Breast DCE-MRI
QIN Breast
BACKGROUND: To investigate the potential of semiquantitative time-intensity curve parameters compared to textural radiomic features on arterial phase images by dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) for early prediction of breast cancer neoadjuvant therapy response. METHODS: A retrospective study was performed of 45 patients who underwent DCE-MRI, drawn from public datasets containing examinations performed prior to the start of treatment and after the first treatment cycle ('QIN Breast DCE-MRI' and 'QIN-Breast'). In total, 11 semiquantitative parameters and 50 texture features were extracted. Non-parametric tests, receiver operating characteristic analysis with area under the curve (ROC-AUC), Spearman correlation coefficients, and the Kruskal-Wallis test with Bonferroni correction were applied. RESULTS: Fifteen patients with pathological complete response (pCR) and 30 patients without pCR were analysed. Significant differences in median values between pCR and non-pCR patients were found for entropy, long-run emphasis, and busyness among the textural features, and for maximum signal difference, washout slope, washin slope, and standardised index of shape among the dynamic semiquantitative parameters. The standardised index of shape performed best, with a ROC-AUC of 0.93 for differentiating pCR from non-pCR patients. CONCLUSIONS: The standardised index of shape could become a clinical tool to differentiate, in the early stages of treatment, responding from non-responding patients.
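Semiquantitative time-intensity curve (TIC) parameters like those used here can be computed directly from the sampled signal; a simplified numpy sketch, noting that parameter definitions vary across papers, so these formulas are illustrative assumptions rather than the authors' exact definitions:

    import numpy as np

    def tic_parameters(t: np.ndarray, s: np.ndarray) -> dict:
        """Simple semiquantitative DCE-MRI curve descriptors."""
        s0 = s[0]                       # baseline signal
        i_peak = int(np.argmax(s))
        msd = (s[i_peak] - s0) / s0     # maximum (relative) signal difference
        washin = (s[i_peak] - s0) / (t[i_peak] - t[0] + 1e-9)
        if i_peak < len(s) - 1:
            washout = (s[-1] - s[i_peak]) / (t[-1] - t[i_peak])
        else:
            washout = 0.0               # peak at final time point: no washout seen
        return {"max_signal_difference": msd,
                "washin_slope": washin,
                "washout_slope": washout}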
Public data homogenization for AI model development in breast cancer
Kilintzis, V.
Kalokyri, V.
Kondylakis, H.
Joshi, S.
Nikiforaki, K.
Diaz, O.
Lekadir, K.
Tsiknakis, M.
Marias, K.
Eur Radiol Exp2024Journal Article, cited 0 times
Website
I-SPY 2
Duke-Breast-Cancer-MRI
ISPY1
TCGA-BRCA
Breast-MRI-NACT-Pilot
Humans
Female
*Breast Neoplasms/diagnostic imaging
Artificial Intelligence
Magnetic Resonance Imaging (MRI)
ISPY2
ACRIN 6657
Public data
Software
BACKGROUND: Developing trustworthy artificial intelligence (AI) models for clinical applications requires access to clinical and imaging data cohorts. Reusing publicly available datasets has the potential to fill this gap. Specifically in the domain of breast cancer, a large archive of publicly accessible medical images along with the corresponding clinical data is available at The Cancer Imaging Archive (TCIA). However, existing datasets cannot be used directly as they are heterogeneous and cannot be effectively filtered for selecting the specific image types required to develop AI models. This work focuses on the development of a homogenized dataset in the domain of breast cancer including clinical and imaging data. METHODS: Five datasets were acquired from the TCIA and were harmonized. For the clinical data harmonization, a common data model was developed and a repeatable, documented "extract-transform-load" process was defined and executed for their homogenization. Further, Digital Imaging and Communications in Medicine (DICOM) information was extracted from magnetic resonance imaging (MRI) data and made accessible and searchable. RESULTS: The resulting harmonized dataset includes information about 2,035 subjects with breast cancer. Further, a platform named RV-Cherry-Picker enables search over both the clinical and diagnostic imaging datasets, providing unified access, facilitating the downloading of all study images that correspond to specific series characteristics (e.g., dynamic contrast-enhanced series), and reducing the burden of acquiring the appropriate set of images for the respective AI model scenario. CONCLUSIONS: RV-Cherry-Picker provides access to the largest publicly available, homogenized imaging/clinical dataset for breast cancer on which to develop AI models. RELEVANCE STATEMENT: We present a solution for creating merged public datasets supporting AI model development, using the breast cancer domain and magnetic resonance imaging as an example. KEY POINTS: * The proposed platform allows unified access to the largest, homogenized public imaging dataset for breast cancer. * A methodology for the semantically enriched homogenization of public clinical data is presented. * The platform is able to make a detailed selection of breast MRI data for the development of AI models.
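The "extract" step of such an extract-transform-load pipeline, pulling DICOM attributes into a searchable record, can be sketched with pydicom; the field mapping below is a hypothetical miniature of a common data model, not the paper's schema:

    from pathlib import Path
    import pydicom

    def extract_series_record(dcm_path: Path) -> dict:
        """Extract a few harmonized attributes from one DICOM file."""
        ds = pydicom.dcmread(dcm_path, stop_before_pixels=True)
        return {
            "patient_id": ds.get("PatientID", ""),
            "modality": ds.get("Modality", ""),
            "series_description": ds.get("SeriesDescription", ""),
            "series_uid": ds.get("SeriesInstanceUID", ""),
        }

    # Build a flat table, then filter, e.g., for dynamic series by description
    records = [extract_series_record(p) for p in Path("tcia_download").rglob("*.dcm")]
    dce_series = [r for r in records if "dyn" in r["series_description"].lower()]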
Prediction of glioma-subtypes: comparison of performance on a DL classifier using bounding box areas versus annotated tumors
Ali, M. B.
Gu, I. Y.
Lidemar, A.
Berger, M. S.
Widhalm, G.
Jakola, A. S.
BMC Biomed Eng2022Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Radiogenomics
Magnetic Resonance Imaging (MRI)
1p/19q codeletion
Brain tumor
Deep learning
Ellipse bounding box
IDH genotype
Convolutional Neural Networks (CNN)
BACKGROUND: For brain tumors, identifying the molecular subtypes from magnetic resonance imaging (MRI) is desirable, but remains a challenging task. Recent machine learning and deep learning (DL) approaches may help the classification/prediction of tumor subtypes through MRIs. However, most of these methods require annotated data with ground truth (GT) tumor areas manually drawn by medical experts. Manual annotation is a time-consuming process with high demand on medical personnel. As an alternative, automatic segmentation is often used. However, it does not guarantee quality and can lead to improper or failed segmentation boundaries due to differences in MRI acquisition parameters across imaging centers, as segmentation is an ill-defined problem. Analogous to visual object tracking and classification, this paper shifts the paradigm by training a classifier using tumor bounding box areas in MR images. The aim of our study is to see whether it is possible to replace GT tumor areas with tumor bounding box areas (e.g., ellipse-shaped boxes) for classification without a significant drop in performance. METHOD: In patients with diffuse gliomas, we trained a deep learning classifier for subtype prediction using tumor regions of interest (ROIs) defined by ellipse bounding boxes versus manually annotated data. Experiments were conducted on two datasets (US and TCGA) consisting of multi-modality MRI scans, where the US dataset contained patients with diffuse low-grade gliomas (dLGG) exclusively. RESULTS: Prediction rates were obtained on two test datasets: 69.86% for 1p/19q codeletion status on the US dataset and 79.50% for IDH mutation/wild-type on the TCGA dataset. Comparison with training on annotated GT tumor data showed an average degradation of 3.0% (2.92% for 1p/19q codeletion status and 3.23% for IDH genotype). CONCLUSION: Using tumor ROIs, i.e., ellipse bounding box tumor areas, to replace annotated GT tumor areas for training a deep learning scheme causes only a modest decline in subtype prediction performance. This may be a reasonable trade-off, as the decline in performance may be counteracted by additional data as it becomes available.
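Deriving an ellipse bounding box from a ground-truth mask, the substitution studied here, can be done by fitting an ellipse to the mask's region properties; a minimal 2D sketch with scikit-image (the paper's 3D multi-modal pipeline is more involved):

    import numpy as np
    from skimage.measure import regionprops, label
    from skimage.draw import ellipse

    def ellipse_roi_from_mask(mask: np.ndarray) -> np.ndarray:
        """Fit an ellipse to the largest connected component of a binary mask."""
        props = max(regionprops(label(mask)), key=lambda p: p.area)
        r0, c0 = props.centroid
        # Note: rotation conventions differ between regionprops and draw.ellipse
        # across skimage versions; verify visually before relying on the angle.
        rr, cc = ellipse(r0, c0,
                         props.major_axis_length / 2,
                         props.minor_axis_length / 2,
                         shape=mask.shape,
                         rotation=props.orientation)
        roi = np.zeros_like(mask, dtype=bool)
        roi[rr, cc] = True
        return roi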
Open-source algorithm and software for computed tomography-based virtual pancreatoscopy and other applications
Huang, H.
Yu, X.
Tian, M.
He, W.
Li, S. X.
Liang, Z.
Gao, Y.
Vis Comput Ind Biomed Art2022Journal Article, cited 0 times
Website
Pancreas-CT
3D Slicer
Pancreatic cancer
Pancreatic duct segmentation
Virtual pancreatoscopy
Pancreatoscopy plays a significant role in the diagnosis and treatment of pancreatic diseases. However, the risk of pancreatoscopy is remarkably greater than that of other endoscopic procedures, such as gastroscopy and bronchoscopy, owing to its severe invasiveness. In comparison, virtual pancreatoscopy (VP) has shown notable advantages. However, because of the low resolution of current computed tomography (CT) technology and the small diameter of the pancreatic duct, VP has limited clinical use. In this study, an optimal path algorithm and super-resolution technique are investigated for the development of an open-source software platform for VP based on 3D Slicer. The proposed segmentation of the pancreatic duct from the abdominal CT images reached an average Dice coefficient of 0.85 with a standard deviation of 0.04. Owing to the excellent segmentation performance, a fly-through visualization of both the inside and outside of the duct was successfully reconstructed, thereby demonstrating the feasibility of VP. In addition, a quantitative analysis of the wall thickness and topology of the duct provides more insight into pancreatic diseases than a fly-through visualization. The entire VP system developed in this study is available at https://github.com/gaoyi/VirtualEndoscopy.git .
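The Dice coefficient reported above is the standard overlap metric for evaluating segmentations like this duct model; a minimal numpy implementation for binary masks:

    import numpy as np

    def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
        """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        denom = pred.sum() + truth.sum()
        return 2.0 * intersection / denom if denom else 1.0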
Imaging-Genomic Study of Head and Neck Squamous Cell Carcinoma: Associations Between Radiomic Phenotypes and Genomic Mechanisms via Integration of The Cancer Genome Atlas and The Cancer Imaging Archive
Zhu, Y.
Mohamed, A. S. R.
Lai, S. Y.
Yang, S.
Kanwar, A.
Wei, L.
Kamal, M.
Sengupta, S.
Elhalawani, H.
Skinner, H.
Mackin, D. S.
Shiao, J.
Messer, J.
Wong, A.
Ding, Y.
Zhang, L.
Court, L.
Ji, Y.
Fuller, C. D.
JCO Clin Cancer Inform2019Journal Article, cited 0 times
Website
TCGA-HNSC
Aged
*Biomarkers, Tumor
Computational Biology/methods
DNA Copy Number Variations
*Diagnostic Imaging
Female
Gene Expression Profiling
*Genetic Predisposition to Disease
*Genomics/methods
Humans
Image Interpretation, Computer-Assisted
*Image Processing, Computer-Assisted
Male
Middle Aged
Mutation
Neoplasm Staging
Reproducibility of Results
Retrospective Studies
Squamous Cell Carcinoma of Head and Neck/*diagnostic imaging/*genetics/pathology
Tomography, X-Ray Computed
Workflow
PURPOSE: Recent data suggest that imaging radiomic features of a tumor could be indicative of important genomic biomarkers. Understanding the relationship between radiomic and genomic features is important for basic cancer research and future patient care. We performed a comprehensive study to discover the imaging-genomic associations in head and neck squamous cell carcinoma (HNSCC) and explore the potential of predicting tumor genomic alterations using radiomic features. METHODS: Our retrospective study integrated whole-genome multiomics data from The Cancer Genome Atlas with matched computed tomography imaging data from The Cancer Imaging Archive for the same set of 126 patients with HNSCC. Linear regression and gene set enrichment analysis were used to identify statistically significant associations between radiomic imaging and genomic features. A random forest classifier was used to predict the status of two key HNSCC molecular biomarkers, human papillomavirus and disruptive TP53 mutation, on the basis of radiomic features. RESULTS: Widespread and statistically significant associations were discovered between genomic features (including microRNA expression, somatic mutations, transcriptional activity, copy number variations, and promoter-region DNA methylation changes of pathways) and radiomic features characterizing the size, shape, and texture of the tumor. Prediction of human papillomavirus and TP53 mutation status using radiomic features achieved areas under the receiver operating characteristic curve of 0.71 and 0.641, respectively. CONCLUSION: Our exploratory study suggests that radiomic features are associated with genomic characteristics at multiple molecular layers in HNSCC and provides justification for continued development of radiomics as biomarkers for relevant genomic alterations in HNSCC.
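The prediction step described (a random forest over radiomic features, evaluated by ROC AUC) takes a few lines with scikit-learn; the feature matrix and labels below are hypothetical placeholders, not the study's data:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(126, 50))       # radiomic features (placeholder)
    y = rng.integers(0, 2, size=126)     # e.g., HPV status (placeholder)

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"cross-validated AUC: {auc:.3f}")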
Imaging-AMARETTO: An Imaging Genomics Software Tool to Interrogate Multiomics Networks for Relevance to Radiography and Histopathology Imaging Biomarkers of Clinical Outcomes
Gevaert, O.
Nabian, M.
Bakr, S.
Everaert, C.
Shinde, J.
Manukyan, A.
Liefeld, T.
Tabor, T.
Xu, J.
Lupberger, J.
Haas, B. J.
Baumert, T. F.
Hernaez, M.
Reich, M.
Quintana, F. J.
Uhlmann, E. J.
Krichevsky, A. M.
Mesirov, J. P.
Carey, V.
Pochet, N.
JCO Clin Cancer Inform2020Journal Article, cited 1 times
Website
TCGA-GBM
TCGA-LGG
VASARI
Ivy GAP
Radiomics
Radiogenomics
PURPOSE: The availability of increasing volumes of multiomics, imaging, and clinical data in complex diseases such as cancer opens opportunities for the formulation and development of computational imaging genomics methods that can link multiomics, imaging, and clinical data. METHODS: Here, we present the Imaging-AMARETTO algorithms and software tools to systematically interrogate regulatory networks derived from multiomics data within and across related patient studies for their relevance to radiography and histopathology imaging features predicting clinical outcomes. RESULTS: To demonstrate its utility, we applied Imaging-AMARETTO to integrate three patient studies of brain tumors, specifically, multiomics with radiography imaging data from The Cancer Genome Atlas (TCGA) glioblastoma multiforme (GBM) and low-grade glioma (LGG) cohorts and transcriptomics with histopathology imaging data from the Ivy Glioblastoma Atlas Project (IvyGAP) GBM cohort. Our results show that Imaging-AMARETTO recapitulates known key drivers of tumor-associated microglia and macrophage mechanisms, mediated by STAT3, AHR, and CCR2, and neurodevelopmental and stemness mechanisms, mediated by OLIG2. Imaging-AMARETTO provides interpretation of their underlying molecular mechanisms in light of imaging biomarkers of clinical outcomes and uncovers novel master drivers, THBS1 and MAP2, that establish relationships across these distinct mechanisms. CONCLUSION: Our network-based imaging genomics tools serve as hypothesis generators that facilitate the interrogation of known hypotheses and the uncovering of novel ones for follow-up with experimental validation studies. We anticipate that our Imaging-AMARETTO imaging genomics tools will be useful to the community of biomedical researchers for applications to similar studies of cancer and other complex diseases with available multiomics, imaging, and clinical data.
Open Health Imaging Foundation Viewer: An Extensible Open-Source Framework for Building Web-Based Imaging Applications to Support Cancer Research
Ziegler, Erik
Urban, Trinity
Brown, Danny
Petts, James
Pieper, Steve D.
Lewis, Rob
Hafey, Chris
Harris, Gordon J.
2020Journal Article, cited 0 times
QIN-HEADNECK
Crowds-Cure-2017
Crowds-Cure-2018
PURPOSE: Zero-footprint Web architecture enables imaging applications to be deployed on premise or in the cloud without requiring installation of custom software on the user's computer. Benefits include decreased costs and information technology support requirements, as well as improved accessibility across sites. The Open Health Imaging Foundation (OHIF) Viewer is an extensible platform developed to leverage these benefits and address the demand for open-source Web-based imaging applications. The platform can be modified to support site-specific workflows and accommodate evolving research requirements.
MATERIALS AND METHODS: The OHIF Viewer provides basic image review functionality (eg, image manipulation and measurement) as well as advanced visualization (eg, multiplanar reformatting). It is written as a client-only, single-page Web application that can easily be embedded into third-party applications or hosted as a standalone Web site. The platform provides extension points for software developers to include custom tools and adapt the system for their workflows. It is standards compliant and relies on DICOMweb for data exchange and OpenID Connect for authentication, but it can be configured to use any data source or authentication flow. Additionally, the user interface components are provided in a standalone component library so that developers can create custom extensions.
RESULTS: The OHIF Viewer and its underlying components have been widely adopted and integrated into multiple clinical research platforms (eg, Precision Imaging Metrics, XNAT, LabCAS, ISB-CGC) and commercial applications (eg, Osirix). It has also been used to build custom imaging applications (eg, ProstateCancer.ai, Crowds Cure Cancer [presented as a case study]).
CONCLUSION: The OHIF Viewer provides a flexible framework for building applications to support imaging research. Its adoption could reduce redundancies in software development for National Cancer Institute-funded projects, including Informatics Technology for Cancer Research and the Quantitative Imaging Network.
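Because the viewer relies on DICOMweb for data exchange, any compliant archive can serve it; querying studies via QIDO-RS can be illustrated in plain Python as below. The server URL is a placeholder, and authentication and header details vary by deployment:

    import requests

    QIDO_ROOT = "https://example.org/dicom-web"  # placeholder DICOMweb root

    def find_ct_studies(patient_id: str) -> list[dict]:
        """QIDO-RS study search filtered by PatientID and modality."""
        resp = requests.get(
            f"{QIDO_ROOT}/studies",
            params={"PatientID": patient_id, "ModalitiesInStudy": "CT"},
            headers={"Accept": "application/dicom+json"},
        )
        resp.raise_for_status()
        return resp.json()  # list of DICOM JSON datasets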
Quantitative Imaging Informatics for Cancer Research
Fedorov, Andrey
Beichel, Reinhard
Kalpathy-Cramer, Jayashree
Clunie, David
Onken, Michael
Riesmeier, Jorg
Herz, Christian
Bauer, Christian
Beers, Andrew
Fillion-Robin, Jean-Christophe
Lasso, Andras
Pinter, Csaba
Pieper, Steve
Nolden, Marco
Maier-Hein, Klaus
Herrmann, Markus D
Saltz, Joel
Prior, Fred
Fennessy, Fiona
Buatti, John
Kikinis, Ron
JCO Clin Cancer Inform2020Journal Article, cited 0 times
Website
QIICR
QIN-HEADNECK
QIN-PROSTATE-Repeatability
TCGA-GBM
TCGA-LGG
LIDC-IDRI
NSCLC-Radiomics
NSCLC-Radiomics-Interobserver1
Head-Neck-Radiomics-HN1
PURPOSE: We summarize Quantitative Imaging Informatics for Cancer Research (QIICR; U24 CA180918), one of the first projects funded by the National Cancer Institute (NCI) Informatics Technology for Cancer Research program. METHODS: QIICR was motivated by the 3 use cases from the NCI Quantitative Imaging Network. 3D Slicer was selected as the platform for implementation of open-source quantitative imaging (QI) tools. Digital Imaging and Communications in Medicine (DICOM) was chosen for standardization of QI analysis outputs. Support of improved integration with community repositories focused on The Cancer Imaging Archive (TCIA). Priorities included improved capabilities of the standard, toolkits and tools, reference datasets, collaborations, and training and outreach. RESULTS: Fourteen new tools to support head and neck cancer, glioblastoma, and prostate cancer QI research were introduced and downloaded over 100,000 times. DICOM was amended, with over 40 correction proposals addressing QI needs. Reference implementations of the standard in a popular toolkit and standalone tools were introduced. Eight datasets exemplifying the application of the standard and tools were contributed. An open demonstration/connectathon was organized, attracting the participation of academic groups and commercial vendors. Integration of tools with TCIA was improved by implementing programmatic communication interface and by refining best practices for QI analysis results curation. CONCLUSION: Tools, capabilities of the DICOM standard, and datasets we introduced found adoption and utility within the cancer imaging community. A collaborative approach is critical to addressing challenges in imaging informatics at the national and international levels. Numerous challenges remain in establishing and maintaining the infrastructure of analysis tools and standardized datasets for the imaging community. Ideas and technology developed by the QIICR project are contributing to the NCI Imaging Data Commons currently being developed.
End-to-End Non-Small-Cell Lung Cancer Prognostication Using Deep Learning Applied to Pretreatment Computed Tomography
Torres, Felipe Soares
Akbar, Shazia
Raman, Srinivas
Yasufuku, Kazuhiro
Schmidt, Carola
Hosny, Ahmed
Baldauf-Lenschen, Felix
Leighl, Natasha B
JCO Clin Cancer Inform2021Journal Article, cited 0 times
Website
NLST
NSCLC-Radiomics
NSCLC Radiogenomics
LungCT-Diagnosis
TCGA-LUSC
TCGA-LUAD
RIDER LUNG CT
Automated computer aided diagnosis
Computer Aided Diagnosis (CADx)
Computed Tomography (CT)
PURPOSE: Clinical TNM staging is a key prognostic factor for patients with lung cancer and is used to inform treatment and monitoring. Computed tomography (CT) plays a central role in defining the stage of disease. Deep learning applied to pretreatment CTs may offer additional, individualized prognostic information to facilitate more precise mortality risk prediction and stratification. METHODS: We developed a fully automated imaging-based prognostication technique (IPRO) using deep learning to predict 1-year, 2-year, and 5-year mortality from pretreatment CTs of patients with stage I-IV lung cancer. Using six publicly available data sets from The Cancer Imaging Archive, we performed a retrospective five-fold cross-validation using pretreatment CTs of 1,689 patients, of whom 1,110 were diagnosed with non-small-cell lung cancer and had available TNM staging information. We compared the association of IPRO and TNM staging with patients' survival status and assessed an Ensemble risk score that combines IPRO and TNM staging. Finally, we evaluated IPRO's ability to stratify patients within TNM stages using hazard ratios (HRs) and Kaplan-Meier curves. RESULTS: IPRO showed similar prognostic power (concordance index [C-index] 1-year: 0.72, 2-year: 0.70, 5-year: 0.68) compared with that of TNM staging (C-index 1-year: 0.71, 2-year: 0.71, 5-year: 0.70) in predicting 1-year, 2-year, and 5-year mortality. The Ensemble risk score yielded superior performance across all time points (C-index 1-year: 0.77, 2-year: 0.77, 5-year: 0.76). IPRO stratified patients within TNM stages, discriminating between highest- and lowest-risk quintiles in stages I (HR: 8.60), II (HR: 5.03), III (HR: 3.18), and IV (HR: 1.91). CONCLUSION: Deep learning applied to pretreatment CT combined with TNM staging enhances prognostication and risk stratification in patients with lung cancer.
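The concordance index (C-index) used here to compare IPRO with TNM staging can be computed with the lifelines package; a minimal sketch with hypothetical survival data and a placeholder risk score:

    import numpy as np
    from lifelines.utils import concordance_index

    rng = np.random.default_rng(1)
    survival_months = rng.exponential(24, size=200)
    event_observed = rng.integers(0, 2, size=200)
    risk_score = -survival_months + rng.normal(0, 10, size=200)  # placeholder

    # concordance_index expects higher predictions to mean longer survival,
    # so pass the negated risk score.
    cindex = concordance_index(survival_months, -risk_score, event_observed)
    print(f"C-index: {cindex:.2f}")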
Multi-Institutional Validation of Deep Learning for Pretreatment Identification of Extranodal Extension in Head and Neck Squamous Cell Carcinoma
Kann, B. H.
Hicks, D. F.
Payabvash, S.
Mahajan, A.
Du, J.
Gupta, V.
Park, H. S.
Yu, J. B.
Yarbrough, W. G.
Burtness, B. A.
Husain, Z. A.
Aneja, S.
J Clin Oncol2020Journal Article, cited 5 times
Website
TCGA-HNSC
head and neck squamous cell carcinoma (HNSCC)
Deep Learning
Classification
PURPOSE: Extranodal extension (ENE) is a well-established poor prognosticator and an indication for adjuvant treatment escalation in patients with head and neck squamous cell carcinoma (HNSCC). Identification of ENE on pretreatment imaging represents a diagnostic challenge that limits its clinical utility. We previously developed a deep learning algorithm that identifies ENE on pretreatment computed tomography (CT) imaging in patients with HNSCC. We sought to validate our algorithm's performance for patients from a diverse set of institutions and compare its diagnostic ability to that of expert diagnosticians. METHODS: We obtained preoperative, contrast-enhanced CT scans and corresponding pathology results from two external data sets of patients with HNSCC: an external institution and The Cancer Genome Atlas (TCGA) HNSCC imaging data. Lymph nodes were segmented and annotated as ENE-positive or ENE-negative on the basis of pathologic confirmation. Deep learning algorithm performance was evaluated and compared directly to that of two board-certified neuroradiologists. RESULTS: A total of 200 lymph nodes were examined in the external validation data sets. For lymph nodes from the external institution, the algorithm achieved an area under the receiver operating characteristic curve (AUC) of 0.84 (83.1% accuracy), outperforming radiologists' AUCs of 0.70 and 0.71 (P = .02 and P = .01). Similarly, for lymph nodes from the TCGA, the algorithm achieved an AUC of 0.90 (88.6% accuracy), outperforming radiologist AUCs of 0.60 and 0.82 (P < .0001 and P = .16). Radiologist diagnostic accuracy improved when receiving deep learning assistance. CONCLUSION: Deep learning successfully identified ENE on pretreatment imaging across multiple institutions, exceeding the diagnostic ability of radiologists with specialized head and neck experience. Our findings suggest that deep learning has utility in the identification of ENE in patients with HNSCC and has the potential to be integrated into clinical decision making.
Sybil: A Validated Deep Learning Model to Predict Future Lung Cancer Risk From a Single Low-Dose Chest Computed Tomography
Mikhael, P. G.
Wohlwend, J.
Yala, A.
Karstens, L.
Xiang, J.
Takigami, A. K.
Bourgouin, P. P.
Chan, P.
Mrah, S.
Amayri, W.
Juan, Y. H.
Yang, C. T.
Wan, Y. L.
Lin, G.
Sequist, L. V.
Fintelmann, F. J.
Barzilay, R.
J Clin Oncol2023Journal Article, cited 5 times
Website
NLST
National Lung Screening Trial (NLST)
Low-dose CT
Radiomics
Model
PURPOSE: Low-dose computed tomography (LDCT) for lung cancer screening is effective, although most eligible people are not being screened. Tools that provide personalized future cancer risk assessment could focus approaches toward those most likely to benefit. We hypothesized that a deep learning model assessing the entire volumetric LDCT data could be built to predict individual risk without requiring additional demographic or clinical data. METHODS: We developed a model called Sybil using LDCTs from the National Lung Screening Trial (NLST). Sybil requires only one LDCT and does not require clinical data or radiologist annotations; it can run in real time in the background on a radiology reading station. Sybil was validated on three independent data sets: a held-out set of 6,282 LDCTs from NLST participants, 8,821 LDCTs from Massachusetts General Hospital (MGH), and 12,280 LDCTs from Chang Gung Memorial Hospital (CGMH, which included people with a range of smoking history including nonsmokers). RESULTS: Sybil achieved areas under the receiver operating characteristic curve for lung cancer prediction at 1 year of 0.92 (95% CI, 0.88 to 0.95) on NLST, 0.86 (95% CI, 0.82 to 0.90) on MGH, and 0.94 (95% CI, 0.91 to 1.00) on CGMH external validation sets. Concordance indices over 6 years were 0.75 (95% CI, 0.72 to 0.78), 0.81 (95% CI, 0.77 to 0.85), and 0.80 (95% CI, 0.75 to 0.86) for NLST, MGH, and CGMH, respectively. CONCLUSION: Sybil can accurately predict an individual's future lung cancer risk from a single LDCT scan to further enable personalized screening. Future study is required to understand Sybil's clinical applications. Our model and annotations are publicly available.
Waiting for Big Changes in Limited-Stage Small-Cell Lung Cancer: For Now, More of the Same
Deek, Matthew P.
Haigentz, Missak
Jabbour, Salma K.
Journal of Clinical Oncology2023Journal Article, cited 0 times
ACRIN-NSCLC-FDG-PET
The Oncology Grand Rounds series is designed to place original reports published in the Journal into clinical context. A case presentation is followed by a description of diagnostic and management challenges, a review of the relevant literature, and a summary of the authors' suggested management approaches. The goal of this series is to help readers better understand how to apply the results of key studies, including those published in Journal of Clinical Oncology, to patients seen in their own clinical practice.Concurrent chemoradiotherapy remains central to the treatment of limited-stage small-cell lung cancer (SCLC). SCLC is one of the few tumors treated with twice-daily radiotherapy (RT) in the primary definitive setting, a regimen that was established when Intergroup 0096 demonstrated its superiority over once-daily RT. However, questions remained about the optimal chemoradiotherapy regimen given the low RT dose used in the once-daily RT arm of Intergroup 0096. CALGB 30610/RTOG 0538 and CONVERT attempted to establish whether dose-escalated once-daily RT was superior to twice-daily RT in limited-stage SCLC. Although both studies showed similar survival between treatment regimens, once-daily RT was not found to be superior to twice-daily RT, and trial design limited the ability to conclude dose-escalated once-daily RT as noninferior to twice-daily RT. Thus, twice-daily RT with concurrent chemotherapy remains a standard of care in limited-stage SCLC.
CT-based radiomic analysis of hepatocellular carcinoma patients to predict key genomic information
West, Derek L
Kotrotsou, Aikaterini
Niekamp, Andrew Scott
Idris, Tagwa
Giniebra Camejo, Dunia
Mazal, Nicolas James
Cardenas, Nicolas James
Goldberg, Jackson L
Colen, Rivka R
Journal of Clinical Oncology2017Journal Article, cited 1 times
Website
TCGA-LIHC
Radiomics
Lung Tumor Segmentation Using a 3D Densely Connected Convolutional Neural Network
Lung cancer, being one of the most fatal diseases across the globe today, poses a great threat to human beings. An early diagnosis is important for better treatment outcomes, yet it is very challenging, and treatment in later stages becomes even more difficult. Due to the increasing number of cancer cases, radiologists are overburdened. Lung cancer diagnosis depends largely on the accurate detection and segmentation of the lung tumor regions. To assist the medical experts with a second opinion and to perform the lung tumor segmentation task in lung computed tomography (CT) scan images, the authors have proposed an approach based on a densely connected convolutional neural network. In this approach, a 3D densely connected convolutional neural network is used in which dense connections are provided between convolutional layers, which helps to reuse features across the layers and is also beneficial for mitigating the vanishing gradient problem. The proposed network consists of an encoder to capture the features in the CT image and a decoder that reconstructs the desired segmentation masks. This approach is evaluated on a publicly available lung tumor dataset, the non-small-cell lung cancer (NSCLC)-Radiomics dataset, and a Dice similarity coefficient of 67.34% is achieved. The proposed approach will assist radiologists in marking lung cancer regions more efficiently, and it can be utilized in an automatic computer-aided diagnosis system for lung cancer detection.
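The core idea, dense connections that concatenate each 3D convolution's output with its inputs so features are reused and gradients flow, can be sketched in PyTorch; layer sizes below are illustrative, not the authors' architecture:

    import torch
    import torch.nn as nn

    class DenseBlock3D(nn.Module):
        """Each layer sees the concatenation of all previous feature maps."""
        def __init__(self, in_channels: int, growth_rate: int = 16, n_layers: int = 3):
            super().__init__()
            self.layers = nn.ModuleList()
            ch = in_channels
            for _ in range(n_layers):
                self.layers.append(nn.Sequential(
                    nn.BatchNorm3d(ch),
                    nn.ReLU(inplace=True),
                    nn.Conv3d(ch, growth_rate, kernel_size=3, padding=1),
                ))
                ch += growth_rate  # concatenation grows the channel count

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            features = [x]
            for layer in self.layers:
                out = layer(torch.cat(features, dim=1))
                features.append(out)
            return torch.cat(features, dim=1)

    x = torch.randn(1, 8, 32, 64, 64)   # (batch, channels, D, H, W)
    y = DenseBlock3D(8)(x)              # -> (1, 8 + 3*16, 32, 64, 64)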
An Optimized Deep Learning Technique for Detecting Lung Cancer from CT Images
Vanitha, M.
Mangayarkarasi, R.
Angulakshmi, M.
Deepa, M.
2023Book Section, cited 0 times
LIDC-IDRI
LUNA16 Challenge
Algorithm Development
Convolutional Neural Network (CNN)
Among all cancers, lung cancer is a major contributor to human deaths. Today, the number of people affected is increasing rapidly; India reports 70,000 cases per year. Currently, technological improvements in the medical domain help physicians detect the symptoms associated with the disease precisely and cost-effectively. The often asymptomatic nature of the disease makes it difficult to detect at an early stage, yet for any chronic disease, early prediction is essential for saving lives. In this chapter, a novel optimized CNN-based classifier is presented to alleviate practical hindrances of existing techniques such as overfitting. Pre-processing, data augmentation, and detection of lung cancer from CT images using the CNN are performed on the LIDC-IDRI dataset. Test results show that the presented CNN-based classifier compares favorably with machine learning techniques in terms of quantitative metrics, achieving an accuracy of 98% for lung cancer detection.
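Data augmentation of the kind mentioned (random flips and rotations of CT slices) can be written in a few lines of PyTorch; a minimal sketch, not the chapter's exact pipeline:

    import torch

    def augment_slice(ct: torch.Tensor) -> torch.Tensor:
        """Random horizontal flip and 90-degree rotation of a (C, H, W) CT slice."""
        if torch.rand(1).item() < 0.5:
            ct = torch.flip(ct, dims=[-1])           # horizontal flip
        k = int(torch.randint(0, 4, (1,)).item())
        return torch.rot90(ct, k, dims=[-2, -1])     # rotate by k * 90 degrees

    augmented = [augment_slice(torch.randn(1, 512, 512)) for _ in range(8)]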
Deep Learning Methods for Brain Tumor Segmentation
Sakli, Marwen
Essid, Chaker
Ben Salah, Bassem
Sakli, Hedi
2023Book Section, cited 0 times
BraTS 2020
BraTS-TCGA-LGG
Deep Learning
BRAIN
Magnetic Resonance Imaging (MRI)
Automatic Segmentation
MRI, or magnetic resonance imaging, is one of the most recent medical imaging techniques. It allows one to visualize organs and soft tissues in different planes of space with great precision. An individual brain is scanned by MRI as a stack of slices that together provide a 3D anatomical view. However, it is difficult and time-consuming to manually segment brain tumors from MRI images. Furthermore, automatic segmentation of brain tumors using these images is noninvasive, avoiding biopsy and improving the safety of the diagnosis procedure. This chapter enriches the body of knowledge in the field of neuroscience. It describes a highly automated technique for segmenting brain tumors in multimodal MRI based on deep neural networks. An experimental study was carried out using the Brain Tumor Segmentation (BraTS 2020) dataset as a proof of concept. The accuracy, precision, sensitivity, and specificity exceed 99.3%. In addition, the achieved intersection over union and loss are 85.69% and 0.0177. The results obtained with the proposed method are validated by comparison with values reported in the state of the art.
Diffusion Weighted Magnetic Resonance Imaging Radiophenotypes and Associated Molecular Pathways in Glioblastoma
Zinn, Pascal O
Hatami, Masumeh
Youssef, Eslam
Thomas, Ginu A
Luedi, Markus M
Singh, Sanjay K
Colen, Rivka R
Neurosurgery2016Journal Article, cited 2 times
Website
TCGA-GBM
Radiogenomics
Glioblastoma Multiforme (GBM)
3D Slicer
Magnetic resonance imaging (MRI)
Imaging genomics in cancer research: limitations and promises
Bai, Harrison X
Lee, Ashley M
Yang, Li
Zhang, Paul
Davatzikos, Christos
Maris, John M
Diskin, Sharon J
The British journal of radiology2016Journal Article, cited 28 times
Website
Radiogenomics
Is an analytical dose engine sufficient for intensity modulated proton therapy in lung cancer?
Teoh, Suliana
Fiorini, Francesca
George, Ben
Vallis, Katherine A
Van den Heuvel, Frank
Br J Radiol2020Journal Article, cited 0 times
4D-Lung
Intensity-modulated proton therapy (IMPT)
Algorithm Development
OBJECTIVE: To identify a subgroup of lung cancer plans where the analytical dose calculation (ADC) algorithm may be clinically acceptable compared to Monte Carlo (MC) dose calculation in intensity modulated proton therapy (IMPT). METHODS: Robust-optimised IMPT plans were generated for 20 patients to a dose of 70 Gy (relative biological effectiveness) in 35 fractions in RayStation. For each case, four plans were generated: three with ADC optimisation using the pencil beam (PB) algorithm followed by a final dose calculation with the following algorithms: PB (PB-PB), MC (PB-MC) and MC normalised to prescription dose (PB-MC scaled). A fourth plan was generated with MC optimisation and final dose calculation (MC-MC). Dose comparison and gamma analysis (PB-PB vs PB-MC) were performed at two dose thresholds, 20% (D20) and 99% (D99), with PB-PB plans as reference. RESULTS: Overestimation of the dose to 99% of the clinical target volume (CTV) and of the CTV mean dose was observed in all PB-MC compared to PB-PB plans (median: 3.7 Gy(RBE) (5%) (range: 2.3 to 6.9 Gy(RBE)) and 1.8 Gy(RBE) (3%) (0.5 to 4.6 Gy(RBE))). PB-MC scaled plans resulted in significantly higher CTV D2 compared to PB-PB (median difference: -4 Gy(RBE) (-6%) (-5.3 to -2.4 Gy(RBE)), p ≤ .001). The overall median gamma pass rates (3%-3 mm) at D20 and D99 were 93.2% (range: 62.2-97.5%) and 71.3% (15.4-92.0%). On multivariate analysis, presence of mediastinal disease and absence of range shifters were significantly associated with high gamma pass rates. Median D20 and D99 pass rates with these predictors were 96.0% (95.3-97.5%) and 85.4% (75.1-92.0%). MC-MC achieved similar target coverage and doses to OARs compared to PB-PB plans. CONCLUSION: In the presence of mediastinal involvement and absence of range shifters, RayStation ADC may be clinically acceptable in lung IMPT. Otherwise, the MC algorithm would be recommended to ensure accuracy of treatment plans. ADVANCES IN KNOWLEDGE: Although the MC algorithm is more accurate than ADC in lung IMPT, ADC may be clinically acceptable where there is mediastinal involvement and an absence of range shifters.
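The gamma analysis used for plan comparison combines a dose-difference criterion and a distance-to-agreement criterion (here 3%/3 mm); a simplified 1D numpy sketch of the gamma index for illustration only, since clinical tools operate in 3D with interpolation:

    import numpy as np

    def gamma_pass_rate_1d(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=3.0):
        """Fraction of reference points with gamma <= 1 (global 3%/3 mm default)."""
        x = np.arange(len(dose_ref)) * spacing_mm
        norm = dd * dose_ref.max()              # global dose-difference criterion
        gammas = []
        for xi, di in zip(x, dose_ref):
            term = ((dose_eval - di) / norm) ** 2 + ((x - xi) / dta_mm) ** 2
            gammas.append(np.sqrt(term.min()))  # best agreement over all positions
        return np.mean(np.array(gammas) <= 1.0)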
Determining patient abdomen thickness from a single digital radiograph with a computational model: clinical results from a proof of concept study
Worrall, M.
Vinnicombe, S.
Sutton, D.
Br J Radiol2020Journal Article, cited 0 times
Website
TCGA-SARC
PHANTOM
OBJECTIVE: A computational model has been created to estimate the abdominal thickness of a patient following an X-ray examination; its intended application is assisting with patient dose audit of paediatric X-ray examinations. This work evaluates the accuracy of the computational model in a clinical setting for adult patients undergoing anteroposterior (AP) abdomen X-ray examinations. METHODS: The model estimates patient thickness using the radiographic image, the exposure factors with which the image was acquired, a priori knowledge of the characteristics of the X-ray unit and detector and the results of extensive Monte Carlo simulation of patient examinations. For 20 patients undergoing AP abdominal X-ray examinations, the model was used to estimate the patient thickness; these estimates were compared against a direct measurement made at the time of the examination. RESULTS: Estimates of patient thickness made using the model were on average within ±5.8% of the measured thickness. CONCLUSION: The model can be used to accurately estimate the thickness of a patient undergoing an AP abdominal X-ray examination where the patient's size falls within the range of the size of patients used to create the computational model. ADVANCES IN KNOWLEDGE: This work demonstrates that it is possible to accurately estimate the AP abdominal thickness of an adult patient using the digital X-ray image and a computational model.
Real-time interactive holographic 3D display with a 360 degrees horizontal viewing zone
Sando, Yusuke
Satoh, Kazuo
Barada, Daisuke
Yatagai, Toyohiko
Appl Opt2019Journal Article, cited 0 times
Head-Neck Cetuximab
To realize a real-time interactive holographic three-dimensional (3D) display system, we synthesize a set of 24 full high-definition (HD) binary computer-generated holograms (CGHs) based on a 3D fast-Fourier-transform-based approach. These 24 CGHs are streamed into a digital micromirror device (DMD) as a single 24-bit image at 60 Hz: 1440 CGHs are synthesized in less than a second. Continual updates of the CGHs displayed on the DMD and synchronization with a rotating mirror enlarges the horizontal viewing zone to 360 degrees using a time-division approach. We successfully demonstrate interactive manipulation, such as object rotation, rendering mode switching, and threshold value alteration, for a medical dataset of a human head obtained by X-ray computed tomography.
Low-dose CT via convolutional neural network
Chen, Hu
Zhang, Yi
Zhang, Weihua
Liao, Peixi
Li, Ke
Zhou, Jiliu
Wang, Ge
Biomedical Optics Express2017Journal Article, cited 342 times
Website
Algorithm Development
low-dose CT
Convolutional Neural Network (CNN)
Image denoising
MATLAB
In order to reduce the potential radiation risk, low-dose CT has attracted increasing attention. However, simply lowering the radiation dose will significantly degrade the image quality. In this paper, we propose a new noise reduction method for low-dose CT via deep learning without accessing the original projection data. A deep convolutional neural network is used to map low-dose CT images toward their corresponding normal-dose counterparts in a patch-by-patch fashion. Qualitative results demonstrate a great potential of the proposed method for artifact reduction and structure preservation. In terms of the quantitative metrics, the proposed method has shown substantial improvement in PSNR, RMSE and SSIM over the competing state-of-the-art methods. Furthermore, the speed of our method is one order of magnitude faster than the iterative reconstruction and patch-based image denoising methods.
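The quantitative metrics reported (PSNR, RMSE, SSIM) are available in scikit-image; a minimal sketch comparing a denoised image against its normal-dose reference, with placeholder arrays:

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    reference = np.random.rand(256, 256).astype(np.float32)  # normal-dose placeholder
    denoised = reference + 0.01 * np.random.randn(256, 256).astype(np.float32)

    psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
    ssim = structural_similarity(reference, denoised, data_range=1.0)
    rmse = np.sqrt(np.mean((reference - denoised) ** 2))
    print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.4f}, RMSE={rmse:.4f}")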
Automatic interstitial photodynamic therapy planning via convex optimization
Yassine, Abdul-Amir
Kingsford, William
Xu, Yiwen
Cassidy, Jeffrey
Lilge, Lothar
Betz, Vaughn
Biomedical Optics Express2018Journal Article, cited 3 times
Website
photodynamic therapy
Glioblastoma Multiforme (GBM)
Machine learning for real-time optical property recovery in interstitial photodynamic therapy: a stimulation-based study
Yassine, Abdul-Amir
Lilge, Lothar
Betz, Vaughn
Biomedical Optics Express2021Journal Article, cited 1 times
Website
TCGA-GBM
photodynamic therapy
simulated data
Integrating clinical access limitations into iPDT treatment planning with PDT-SPACE
Wang, Shuran
Saeidi, Tina
Lilge, Lothar
Betz, Vaughn
Biomedical Optics Express2023Journal Article, cited 0 times
TCGA-GBM
Algorithm Development
PDT-SPACE is an open-source software tool that automates interstitial photodynamic therapy treatment planning by providing patient-specific placement of light sources to destroy a tumor while minimizing healthy tissue damage. This work extends PDT-SPACE in two ways. The first enhancement allows specification of clinical access constraints on light source insertion to avoid penetrating critical structures and to minimize surgical complexity. Constraining fiber access to a single burr hole of adequate size increases healthy tissue damage by 10%. The second enhancement generates an initial placement of light sources as a starting point for refinement, rather than requiring entry of a starting solution by the clinician. This feature improves productivity and also leads to solutions with 4.5% less healthy tissue damage. The two features are used in concert to perform simulations of various surgery options for virtual glioblastoma multiforme brain tumors.
A resource for the assessment of lung nodule size estimation methods: database of thoracic CT scans of an anthropomorphic phantom
Gavrielides, Marios A
Kinnard, Lisa M
Myers, Kyle J
Peregoy, Jennifer
Pritchard, William F
Zeng, Rongping
Esparza, Juan
Karanian, John
Petrick, Nicholas
Optics express2010Journal Article, cited 50 times
Website
FDA-Phantom
LUNG
A number of interrelated factors can affect the precision and accuracy of lung nodule size estimation. To quantify the effect of these factors, we have been conducting phantom CT studies using an anthropomorphic thoracic phantom containing a vasculature insert into which synthetic nodules were inserted or to which they were attached. Ten repeat scans were acquired on different multi-detector scanners, using several sets of acquisition and reconstruction protocols and various nodule characteristics (size, shape, density, location). This study design enables both bias and variance analysis for the nodule size estimation task. The resulting database is in the process of becoming publicly available as a resource to facilitate the assessment of lung nodule size estimation methodologies and to enable comparisons between different methods regarding measurement error. This resource complements public databases of clinical data and will contribute towards the development of procedures that will maximize the utility of CT imaging for lung cancer screening and tumor therapy evaluation.
StaticCodeCT: single coded aperture tensorial X-ray CT
Cuadros, A. P.
Ma, X.
Restrepo, C. M.
Arce, G. R.
Opt Express2021Journal Article, cited 0 times
LDCT-and-Projection-data
Image resampling
Algorithm Development
Coded aperture X-ray CT (CAXCT) is a new low-dose imaging technology that promises far-reaching benefits in industrial and clinical applications. It places coded apertures (CAs) in front of the X-ray source to partially block the radiation. The ill-posed inverse reconstruction problem is then solved using l1-norm-based iterative reconstruction methods. Unfortunately, to attain high-quality reconstructions, the CA patterns must change in concert with the view-angles, making the implementation impractical. This paper proposes a simple yet radically different approach to CAXCT, coined StaticCodeCT, that uses a single static CA in the CT gantry, thus making the imaging system amenable to practical implementation. Rather than using conventional compressed sensing algorithms for recovery, we introduce a new reconstruction framework for StaticCodeCT. Namely, we synthesize the missing measurements using low-rank tensor completion principles that exploit the multi-dimensional data correlation and low-rank nature of a 3-way tensor formed by stacking the 2D coded CT projections. Then, we use the FDK algorithm to recover the 3D object. Computational experiments using experimental projection measurements exhibit up to 10% gains in the normalized root mean square distance of the reconstruction using the proposed method compared with those attained by alternative low-dose systems.
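The recovery idea, treating the stack of coded projections as a low-rank object and completing the entries blocked by the aperture, can be illustrated with a basic singular-value-thresholding loop on a matricized stack; a toy numpy sketch, not the paper's tensor algorithm:

    import numpy as np

    def svt_complete(y, mask, tau=5.0, n_iter=200):
        """Fill missing entries (mask == False) of a low-rank matrix by SVT."""
        x = np.where(mask, y, 0.0)
        for _ in range(n_iter):
            u, s, vt = np.linalg.svd(x, full_matrices=False)
            x = (u * np.maximum(s - tau, 0.0)) @ vt  # shrink singular values
            x = np.where(mask, y, x)                 # re-impose known measurements
        return x

    # Toy example: rank-2 "projection stack" with ~40% of entries blocked
    rng = np.random.default_rng(0)
    truth = rng.normal(size=(64, 2)) @ rng.normal(size=(2, 64))
    mask = rng.random(truth.shape) > 0.4
    recovered = svt_complete(truth * mask, mask)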
miRNA normalization enables joint analysis of several datasets to increase sensitivity and to reveal novel miRNAs differentially expressed in breast cancer
Ben-Elazar, Shay
Aure, Miriam Ragle
Jonsdottir, Kristin
Leivonen, Suvi-Katri
Kristensen, Vessela N.
Janssen, Emiel A. M.
Sahlberg, Kristine Kleivi
Lingjærde, Ole Christian
Yakhini, Zohar
2021Journal Article, cited 0 times
TCGA-BRCA
Different miRNA profiling protocols and technologies introduce differences in the resulting quantitative expression profiles. These include differences in the presence (and measurability) of certain miRNAs. We present and examine a method based on quantile normalization, Adjusted Quantile Normalization (AQuN), to combine miRNA expression data from multiple studies in breast cancer into a single joint dataset for integrative analysis. By pooling multiple datasets, we obtain increased statistical power, surfacing patterns that do not emerge as statistically significant when separately analyzing these datasets. To merge several datasets, as we do here, one needs to overcome both technical and batch differences between these datasets. We compare several approaches for merging and jointly analyzing miRNA datasets. We investigate the statistical confidence for known results and highlight potential new findings that resulted from the joint analysis using AQuN. In particular, we detect several miRNAs to be differentially expressed in estrogen receptor (ER) positive versus ER negative samples. In addition, we identify new potential biomarkers and therapeutic targets for both clinical groups. As a specific example, using the AQuN-derived dataset we detect hsa-miR-193b-5p to have a statistically significant over-expression in the ER positive group, a phenomenon that was not previously reported. Furthermore, as demonstrated by functional assays in breast cancer cell lines, overexpression of hsa-miR-193b-5p in breast cancer cell lines resulted in decreased cell viability in addition to inducing apoptosis. Together, these observations suggest a novel functional role for this miRNA in breast cancer. Packages implementing AQuN are provided for Python and Matlab: https://github.com/YakhiniGroup/PyAQN.
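Standard quantile normalization, which AQuN adjusts, maps each sample's ranked values onto the mean quantile profile; a compact sketch of the classic procedure (AQuN itself adds the adjustments described in the paper and its repository):

    import numpy as np
    import pandas as pd

    def quantile_normalize(df: pd.DataFrame) -> pd.DataFrame:
        """Columns = samples, rows = features; classic quantile normalization."""
        sorted_vals = np.sort(df.values, axis=0)          # sort within each sample
        mean_quantiles = sorted_vals.mean(axis=1)         # reference distribution
        ranks = df.rank(method="first").astype(int) - 1   # 0-based ranks per column
        out = df.copy()
        for col in df.columns:
            out[col] = mean_quantiles[ranks[col].values]
        return out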
Explainable AI identifies diagnostic cells of genetic AML subtypes
Hehr, Matthias
Sadafi, Ario
Matek, Christian
Lienemann, Peter
Pohlkamp, Christian
Haferlach, Torsten
Spiekermann, Karsten
Marr, Carsten
2023Journal Article, cited 0 times
AML-Cytomorphology_MLL_Helmholtz
Explainable AI is deemed essential for clinical applications as it allows rationalizing model predictions, helping to build trust between clinicians and automated decision support tools. We developed an inherently explainable AI model for the classification of acute myeloid leukemia subtypes from blood smears and found that high-attention cells identified by the model coincide with those labeled as diagnostically relevant by human experts. Based on over 80,000 single white blood cell images from digitized blood smears of 129 patients diagnosed with one of four WHO-defined genetic AML subtypes and 60 healthy controls, we trained SCEMILA, a single-cell-based explainable multiple instance learning algorithm. SCEMILA could perfectly discriminate between AML patients and healthy controls and detected the APL subtype with an F1 score of 0.86±0.05 (mean±s.d., 5-fold cross-validation). Analyzing a novel multi-attention module, we confirmed that our algorithm focused with high concordance on the same AML-specific cells as human experts do. Applied to classify single cells, it is able to highlight subtype-specific cells and deconvolve the composition of a patient's blood smear without the need for single-cell annotation of the training data. Our large AML genetic subtype dataset is publicly available, and an interactive online tool facilitates the exploration of data and predictions. SCEMILA enables a comparison of algorithmic and expert decision criteria and can present a detailed analysis of individual patient data, paving the way to deploying AI in routine diagnostics for identifying hematopoietic neoplasms.
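SCEMILA is described as an attention-based multiple instance learning model over single-cell images; the sketch below shows the generic attention-pooling step such models share (after Ilse et al.), not the authors' exact architecture, and the feature dimensions are illustrative.

import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    # Pools per-cell embeddings with learned attention weights, so the
    # high-attention cells behind a prediction can be inspected afterwards.
    def __init__(self, feat_dim=512, attn_dim=128, n_classes=5):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, attn_dim), nn.Tanh(),
                                  nn.Linear(attn_dim, 1))
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, cell_feats):                       # (n_cells, feat_dim)
        a = torch.softmax(self.attn(cell_feats), dim=0)  # one weight per cell
        bag = (a * cell_feats).sum(dim=0)                # weighted bag embedding
        return self.head(bag), a.squeeze(-1)             # logits + attention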
Data sharing in clinical trials: An experience with two large cancer screening trials
Zhu, Claire S
Pinsky, Paul F
Moler, James E
Kukwa, Andrew
Mabie, Jerome
Rathmell, Joshua M
Riley, Tom
Prorok, Philip C
Berg, Christine D
PLoS medicine2017Journal Article, cited 1 times
Website
NLST
TCIA General
metadata
Radiogenomic mapping of edema/cellular invasion MRI-phenotypes in glioblastoma multiforme
Zinn, Pascal O
Majadan, Bhanu
Sathyan, Pratheesh
Singh, Sanjay K
Majumder, Sadhan
Jolesz, Ferenc A
Colen, Rivka R
PLoS One2011Journal Article, cited 192 times
Website
Radiogenomics
Glioblastoma Multiforme (GBM)
Magnetic Resonance Imaging (MRI)
Computer Aided Detection (CADe)
BACKGROUND: Despite recent discoveries of new molecular targets and pathways, the search for an effective therapy for Glioblastoma Multiforme (GBM) continues. A newly emerged field, radiogenomics, links gene expression profiles with MRI phenotypes. MRI-FLAIR is a noninvasive diagnostic modality and was previously found to correlate with cellular invasion in GBM. Thus, our radiogenomic screen has the potential to reveal novel molecular determinants of invasion. Here, we present the first comprehensive radiogenomic analysis using quantitative MRI volumetrics and large-scale gene- and microRNA expression profiling in GBM. METHODS: Based on The Cancer Genome Atlas (TCGA), discovery and validation sets with gene, microRNA, and quantitative MR-imaging data were created. Top concordant genes and microRNAs correlated with high FLAIR volumes from both sets were further characterized by Kaplan Meier survival statistics, microRNA-gene correlation analyses, and GBM molecular subtype-specific distribution. RESULTS: The top upregulated gene in both the discovery (4 fold) and validation (11 fold) sets was PERIOSTIN (POSTN). The top downregulated microRNA in both sets was miR-219, which is predicted to bind to POSTN. Kaplan Meier analysis demonstrated that above median expression of POSTN resulted in significantly decreased survival and shorter time to disease progression (P<0.001). High POSTN and low miR-219 expression were significantly associated with the mesenchymal GBM subtype (P<0.0001). CONCLUSION: Here, we propose a novel diagnostic method to screen for molecular cancer subtypes and genomic correlates of cellular invasion. Our findings also have potential therapeutic significance since successful molecular inhibition of invasion will improve therapy and patient survival in GBM.
A novel volume-age-KPS (VAK) glioblastoma classification identifies a prognostic cognate microRNA-gene signature
Zinn, Pascal O
Sathyan, Pratheesh
Mahajan, Bhanu
Bruyere, John
Hegi, Monika
Majumder, Sadhan
Colen, Rivka R
PLoS One2012Journal Article, cited 63 times
Website
Radiomics
Radiogenomics
Computer Aided Diagnosis (CADx)
Classification
TCGA-GBM
REMBRANDT
BACKGROUND: Several studies have established Glioblastoma Multiforme (GBM) prognostic and predictive models based on age and Karnofsky Performance Status (KPS), while very few studies evaluated the prognostic and predictive significance of preoperative MR-imaging. However, to date, there is no simple preoperative GBM classification that also correlates with a highly prognostic genomic signature. Thus, we present for the first time a biologically relevant, and clinically applicable tumor Volume, patient Age, and KPS (VAK) GBM classification that can easily and non-invasively be determined upon patient admission. METHODS: We quantitatively analyzed the volumes of 78 GBM patient MRIs present in The Cancer Imaging Archive (TCIA) corresponding to patients in The Cancer Genome Atlas (TCGA) with VAK annotation. The variables were then combined using a simple 3-point scoring system to form the VAK classification. A validation set (N = 64) from both the TCGA and Rembrandt databases was used to confirm the classification. Transcription factor and genomic correlations were performed using the gene pattern suite and Ingenuity Pathway Analysis. RESULTS: VAK-A and VAK-B classes showed significant median survival differences in discovery (P = 0.007) and validation sets (P = 0.008). VAK-A is significantly associated with P53 activation, while VAK-B shows significant P53 inhibition. Furthermore, a molecular gene signature comprised of a total of 25 genes and microRNAs was significantly associated with the classes and predicted survival in an independent validation set (P = 0.001). A favorable MGMT promoter methylation status resulted in a 10.5 months additional survival benefit for VAK-A compared to VAK-B patients. CONCLUSIONS: The non-invasively determined VAK classification with its implication of VAK-specific molecular regulatory networks, can serve as a very robust initial prognostic tool, clinical trial selection criteria, and important step toward the refinement of genomics-based personalized therapy for GBM patients.
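The abstract states that the three variables are combined with a simple 3-point scoring system; a hypothetical rendering of such a rule is sketched below, with placeholder cutoffs, since the published classification defines its own thresholds and point assignment.

def vak_class(volume_cc, age, kps, volume_cutoff=30.0, age_cutoff=60, kps_cutoff=100):
    # One point per unfavorable factor; all cutoffs here are illustrative only.
    points = (int(volume_cc > volume_cutoff)
              + int(age >= age_cutoff)
              + int(kps < kps_cutoff))
    return "VAK-A" if points <= 1 else "VAK-B"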
Quantitative Computed Tomographic Descriptors Associate Tumor Shape Complexity and Intratumor Heterogeneity with Prognosis in Lung Adenocarcinoma
Grove, Olya
Berglund, Anders E
Schabath, Matthew B
Aerts, Hugo JWL
Dekker, Andre
Wang, Hua
Velazquez, Emmanuel Rios
Lambin, Philippe
Gu, Yuhua
Balagurunathan, Yoganand
Eikman, E.
Gatenby, Robert A
Eschrich, S
Gillies, Robert J
PLoS One2015Journal Article, cited 87 times
Website
Algorithm Development
LungCT-Diagnosis
LUNG
Segmentation
Classification
Two CT features were developed to quantitatively describe lung adenocarcinomas by scoring tumor shape complexity (feature 1: convexity) and intratumor density variation (feature 2: entropy ratio) in routinely obtained diagnostic CT scans. The developed quantitative features were analyzed in two independent cohorts (cohort 1: n = 61; cohort 2: n = 47) of patients diagnosed with primary lung adenocarcinoma, retrospectively curated to include imaging and clinical data. Preoperative chest CTs were segmented semi-automatically. Segmented tumor regions were further subdivided into core and boundary sub-regions, to quantify intensity variations across the tumor. Reproducibility of the features was evaluated in an independent test-retest dataset of 32 patients. The proposed metrics showed high degree of reproducibility in a repeated experiment (concordance, CCC ≥ 0.897; dynamic range, DR ≥ 0.92). Association with overall survival was evaluated by Cox proportional hazard regression, Kaplan-Meier survival curves, and the log-rank test. Both features were associated with overall survival (convexity: p = 0.008; entropy ratio: p = 0.04) in Cohort 1 but not in Cohort 2 (convexity: p = 0.7; entropy ratio: p = 0.8). In both cohorts, these features were found to be descriptive and demonstrated the link between imaging characteristics and patient survival in lung adenocarcinoma.
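For readers who want to reproduce descriptors of this kind, the sketch below implements the two features as the abstract defines them conceptually: convexity as tumor volume over convex-hull volume, and entropy ratio as boundary-to-core intensity entropy; the exact sub-region construction in the paper may differ, and the erosion depth and bin count are assumptions.

import numpy as np
from scipy.spatial import ConvexHull
from scipy.stats import entropy
from scipy.ndimage import binary_erosion

def convexity(mask, spacing=(1.0, 1.0, 1.0)):
    # Tumor volume divided by its convex-hull volume (<= 1); lower values
    # indicate a more irregular, spiculated shape.
    pts = np.argwhere(mask) * np.asarray(spacing)
    return mask.sum() * np.prod(spacing) / ConvexHull(pts).volume

def entropy_ratio(image, mask, erosions=3, bins=64):
    # Histogram entropy of the boundary shell over that of the tumor core.
    core = binary_erosion(mask, iterations=erosions)
    boundary = mask.astype(bool) & ~core
    h = lambda v: entropy(np.histogram(v, bins=bins)[0] + 1e-9)
    return h(image[boundary]) / h(image[core])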
Spatial Habitat Features Derived from Multiparametric Magnetic Resonance Imaging Data Are Associated with Molecular Subtype and 12-Month Survival Status in Glioblastoma Multiforme
Lee, Joonsang
Narang, Shivali
Martinez, Juan
Rao, Ganesh
Rao, Arvind
PLoS One2015Journal Article, cited 14 times
Website
TCGA-GBM
Glioblastoma
Radiomics
Magnetic Resonance Imaging (MRI)
One of the most common and aggressive malignant brain tumors is Glioblastoma multiforme. Despite multimodality treatment such as radiation therapy and chemotherapy (temozolomide: TMZ), the median survival of glioblastoma patients is less than 15 months. In this study, we investigated the association between measures of spatial diversity derived from spatial point pattern analysis of multiparametric magnetic resonance imaging (MRI) data with molecular status as well as 12-month survival in glioblastoma. We obtained 27 measures of spatial proximity (diversity) via spatial point pattern analysis of multiparametric T1 post-contrast and T2 fluid-attenuated inversion recovery MRI data. These measures were used to predict 12-month survival status (≤12 or >12 months) in 74 glioblastoma patients. Kaplan-Meier with receiver operating characteristic analyses was used to assess the relationship between derived spatial features and 12-month survival status as well as molecular subtype status in patients with glioblastoma. Kaplan-Meier survival analysis revealed that 14 spatial features were capable of stratifying overall survival in a statistically significant manner. For prediction of 12-month survival status based on these diversity indices, sensitivity and specificity were 0.86 and 0.64, respectively. The area under the receiver operating characteristic curve and the accuracy were 0.76 and 0.75, respectively. For prediction of molecular subtype status, the proneural subtype shows the highest accuracy of 0.93 among all molecular subtypes based on receiver operating characteristic analysis. We find that measures of spatial diversity from point pattern analysis of intensity habitats from T1 post-contrast and T2 fluid-attenuated inversion recovery images are associated with both tumor subtype status and 12-month survival status and may therefore be useful indicators of patient prognosis, in addition to providing potential guidance for molecularly-targeted therapies in Glioblastoma multiforme.
Effect of Imaging Parameter Thresholds on MRI Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer Subtypes
Lo, Wei-Ching
Li, Wen
Jones, Ella F
Newitt, David C
Kornak, John
Wilmes, Lisa J
Esserman, Laura J
Hylton, Nola M
PLoS One2016Journal Article, cited 7 times
Website
TCGA-BRCA
Magnetic Resonance Imaging (MRI)
Radiomics
Radiogenomics
Comparison of Safety Margin Generation Concepts in Image Guided Radiotherapy to Account for Daily Head and Neck Pose Variations
Stoll, Markus
Stoiber, Eva Maria
Grimm, Sarah
Debus, Jürgen
Bendl, Rolf
Giske, Kristina
PLoS One2016Journal Article, cited 2 times
Website
QIN-HEADNECK
Radiation Therapy
PURPOSE: Intensity modulated radiation therapy (IMRT) of head and neck tumors allows a precise conformation of the high-dose region to clinical target volumes (CTVs) while respecting dose limits to organs at risk (OARs). Accurate patient setup reduces translational and rotational deviations between therapy planning and therapy delivery days. However, uncertainties in the shape of the CTV and OARs due to, e.g., small pose variations in the highly deformable anatomy of the head and neck region can still compromise the dose conformation. Routinely applied safety margins around the CTV cause higher dose deposition in adjacent healthy tissue and should be kept as small as possible. MATERIALS AND METHODS: In this work we evaluate and compare three approaches for margin generation: 1) a clinically used approach with a constant isotropic 3 mm margin, 2) a previously proposed approach adopting a spatial model of the patient, and 3) a newly developed approach adopting a biomechanical model of the patient. All approaches are retrospectively evaluated using a large patient cohort of over 500 fraction control CT images with heterogeneous pose changes. Automatic methods for finding landmark positions in the control CT images are combined with a patient-specific biomechanical finite element model to evaluate the CTV deformation. RESULTS: The applied methods for deformation modeling show that the pose changes cause deformations in the target region with a mean motion magnitude of 1.80 mm. We found that the CTV size can be reduced by both variable margin approaches by 15.6% and 13.3%, respectively, while maintaining the CTV coverage. With approach 3, an increase in target coverage was obtained. CONCLUSION: Variable margins increase target coverage, reduce risk to OARs, and improve healthy tissue sparing at the same time.
Effect of a computer-aided diagnosis system on radiologists' performance in grading gliomas with MRI
Hsieh, Kevin Li-Chun
Tsai, Ruei-Je
Teng, Yu-Chuan
Lo, Chung-Ming
PLoS One2017Journal Article, cited 0 times
Algorithm Development
Computer Aided Diagnosis (CADx)
Classification
Lower-grade glioma (LGG)
Glioblastoma Multiforme (GBM)
The effects of a computer-aided diagnosis (CAD) system based on quantitative intensity features with magnetic resonance (MR) imaging (MRI) were evaluated by examining radiologists' performance in grading gliomas. The acquired MRI database included 71 lower-grade gliomas and 34 glioblastomas. Quantitative image features were extracted from the tumor area and combined in a CAD system to generate a prediction model. The effect of the CAD system was evaluated in a two-stage procedure. First, a radiologist performed a conventional reading. A sequential second reading was determined with a malignancy estimation by the CAD system. Each MR image was regularly read by one radiologist out of a group of three radiologists. The CAD system achieved an accuracy of 87% (91/105), a sensitivity of 79% (27/34), a specificity of 90% (64/71), and an area under the receiver operating characteristic curve (Az) of 0.89. In the evaluation, the radiologists' Az values significantly improved from 0.81, 0.87, and 0.84 to 0.90, 0.90, and 0.88 with p = 0.0011, 0.0076, and 0.0167, respectively. Based on the MR image features, the proposed CAD system not only performed well in distinguishing glioblastomas from lower-grade gliomas but also provided suggestions about glioma grading to reinforce radiologists' confidence rating.
Harmonizing the pixel size in retrospective computed tomography radiomics studies
Mackin, Dennis
Fave, Xenia
Zhang, Lifei
Yang, Jinzhong
Jones, A Kyle
Ng, Chaan S
PLoS One2017Journal Article, cited 19 times
Website
CC-Radiomics-Phantom
Algorithm Development
image resampling
Butterworth filtering
computed tomography (CT)
hierarchical clustering
Volumetric brain tumour detection from MRI using visual saliency
Mitra, Somosmita
Banerjee, Subhashis
Hayashi, Yoichi
PLoS One2017Journal Article, cited 2 times
Website
MICCAI BraTS challenge
BRAIN
Computer Aided Detection (CADe)
Magnetic Resonance Imaging (MRI)
Medical image processing has become a major player in the world of automatic tumour region detection and forms the incipient stages of computer-aided diagnosis. Saliency detection is a crucial application of medical image processing, and can aid medical practitioners by making the affected area stand out in the foreground from the rest of the background image. The algorithm developed here is a new approach to the detection of saliency in a three-dimensional multi-channel MR image sequence for the glioblastoma multiforme (a form of malignant brain tumour). First, we enhance the three channels, FLAIR (Fluid Attenuated Inversion Recovery), T2 and T1C (contrast enhanced with gadolinium), to generate a pseudo-coloured RGB image. This is then converted to the CIE L*a*b* color space. Processing on cubes of sizes k = 4, 8, 16, the L*a*b* 3D image is then compressed into volumetric units, each representing the neighbourhood information of the surrounding 64 voxels for k = 4, 512 voxels for k = 8 and 4096 voxels for k = 16, respectively. The spatial distances of these units are then compared along the three major axes to generate the novel 3D saliency map of a 3D image, which unambiguously highlights the tumour region. The algorithm operates along the three major axes to maximise the computation efficiency while minimising loss of valuable 3D information. Thus the 3D multichannel MR image saliency detection algorithm is useful in generating a uniform and logistically correct 3D saliency map with pragmatic applicability in Computer Aided Detection (CADe). Assignment of uniform importance to all three axes proves to be an important factor in volumetric processing, which helps in noise reduction and reduces the possibility of compromising essential information. The effectiveness of the algorithm was evaluated over the BRATS MICCAI 2015 dataset of 274 glioma cases, consisting of both high-grade and low-grade tumours. The results were compared with those of the 2D saliency detection algorithm applied over the entire sequence of brain data. For all comparisons, the Area Under the receiver operating characteristic (ROC) Curve (AUC) was found to be more than 0.99 ± 0.01 across various tumour types, structures and locations.
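The preprocessing chain described above can be sketched compactly; the block below fuses the three MR channels into a pseudo-colour volume, converts each axial slice to CIE L*a*b*, and pools k x k x k blocks into volumetric units, leaving out the saliency comparison itself; the mean-pooling choice is an assumption.

import numpy as np
from skimage import color
from skimage.measure import block_reduce

def lab_units(flair, t2, t1c, k=8):
    # Stack the channels as pseudo-RGB, rescale to [0, 1], convert slice-wise
    # to L*a*b*, then compress k x k x k neighbourhoods into single units.
    rgb = np.stack([flair, t2, t1c], axis=-1).astype(float)
    rgb = (rgb - rgb.min()) / (np.ptp(rgb) + 1e-9)
    lab = np.stack([color.rgb2lab(sl) for sl in rgb])
    return block_reduce(lab, block_size=(k, k, k, 1), func=np.mean)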
3D multi-view convolutional neural networks for lung nodule classification
Kang, Guixia
Liu, Kui
Hou, Beibei
Zhang, Ningbo
PLoS One2017Journal Article, cited 7 times
Website
LIDC-IDRI
lung cancer
3d convolutional neural network (CNN)
Methylation of L1RE1, RARB, and RASSF1 function as possible biomarkers for the differential diagnosis of lung cancer
Walter, RFH
Rozynek, P
Casjens, S
Werner, R
Mairinger, FD
Speel, EJM
Zur Hausen, A
Meier, S
Wohlschlaeger, J
Theegarten, D
PLoS One2018Journal Article, cited 1 times
Website
TCGA-LUSC
LUNG
methylation markers
classification and regression tree algorithm (CART)
Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization
Nishio, Mizuho
Nishizawa, Mitsuo
Sugiyama, Osamu
Kojima, Ryosuke
Yakami, Masahiro
Kuroda, Tomohiro
Togashi, Kaori
PLoS One2018Journal Article, cited 3 times
Website
Computer Aided Diagnosis (CADx)
LUNGx
NSCLC Radiogenomics
Algorithm Development
Performance of sparse-view CT reconstruction with multi-directional gradient operators
Hsieh, C. J.
Jin, S. C.
Chen, J. C.
Kuo, C. W.
Wang, R. T.
Chu, W. C.
PLoS One2019Journal Article, cited 0 times
Website
TCGA-STAD
To further reduce the noise and artifacts in the reconstructed image of sparse-view CT, we have modified the traditional total variation (TV) methods, which only calculate the gradient variations in the x and y directions, and have proposed 8- and 26-directional (multi-directional) gradient operators for TV calculation to improve the quality of reconstructed images. Different from traditional TV methods, the proposed 8- and 26-directional gradient operators additionally consider the diagonal directions in the TV calculation. The proposed method preserves more information from the original tomographic data in the gradient transform step to obtain better reconstructed image quality. Our algorithms were tested using the two-dimensional Shepp-Logan phantom and three-dimensional clinical CT images. Results were evaluated using the root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and universal quality index (UQI). All the experimental results show that the sparse-view CT images reconstructed using the proposed 8- and 26-directional gradient operators are superior to those reconstructed by traditional TV methods. Qualitative and quantitative analyses indicate that the more directions the gradient operator includes, the better the images that can be reconstructed. The proposed 8- and 26-directional gradient operators have a better capability to reduce noise and artifacts than traditional TV methods, and they can be combined with existing CT reconstruction algorithms derived from CS theory to produce better image quality in sparse-view reconstruction.
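As an illustration of the multi-directional idea, the sketch below evaluates an 8-directional TV term on a 2D image by summing absolute differences over the axial and diagonal neighbour shifts; how this term is weighted and embedded in the iterative reconstruction follows the paper.

import numpy as np

def tv_8dir(img):
    # Total variation accumulated over 8 neighbour directions: the 4 axial
    # shifts of conventional TV plus the 4 diagonals.
    shifts = [(0, 1), (0, -1), (1, 0), (-1, 0),
              (1, 1), (1, -1), (-1, 1), (-1, -1)]
    return sum(np.abs(img - np.roll(img, s, axis=(0, 1))).sum() for s in shifts)

In a CS-style reconstruction, this quantity would replace the 2-directional TV regularizer inside the usual minimization loop.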
Automatic lung nodule detection using multi-scale dot nodule-enhancement filter and weighted support vector machines in chest computed tomography
Gu, Y.
Lu, X.
Zhang, B.
Zhao, Y.
Yu, D.
Gao, L.
Cui, G.
Wu, L.
Zhou, T.
PLoS One2019Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Assisted Detection (CAD)
A novel CAD scheme for automated lung nodule detection is proposed to assist radiologists with the detection of lung cancer on CT scans. The proposed scheme is composed of four major steps: (1) lung volume segmentation, (2) nodule candidate extraction and grouping, (3) false positive reduction for the non-vessel tree group, and (4) classification for the vessel tree group. Lung segmentation is performed first. Then, 3D labeling technology is used to divide nodule candidates into two groups. For the non-vessel tree group, nodule candidates are classified as true nodules at the false positive reduction stage if the candidates survive the rule-based classifier and are not screened out by the dot filter. For the vessel tree group, nodule candidates are extracted using the dot filter. Next, RSFS feature selection is used to select the most discriminating features for classification. Finally, WSVM with an undersampling approach is adopted to discriminate true nodules from vessel bifurcations in the vessel tree group. The proposed method was evaluated on 154 thin-slice scans with 204 nodules in the LIDC database. The proposed CAD scheme yielded a high sensitivity (87.81%) while maintaining a low false positive rate (1.057 FPs/scan). The experimental results indicate that the performance of our method may be better than that of existing methods.
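The final classification stage pairs undersampling with a class-weighted SVM; a generic scikit-learn rendering is sketched below, with the undersampling ratio and class weights as illustrative values rather than the paper's settings.

import numpy as np
from sklearn.svm import SVC

def train_wsvm(X, y, neg_keep=0.2, nodule_weight=5.0, seed=0):
    # Undersample the majority (non-nodule) class, then fit an SVM whose
    # class_weight penalizes missed nodules more heavily.
    rng = np.random.default_rng(seed)
    neg = np.flatnonzero(y == 0)
    keep = np.concatenate([np.flatnonzero(y == 1),
                           rng.choice(neg, int(len(neg) * neg_keep), replace=False)])
    clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: nodule_weight})
    return clf.fit(X[keep], y[keep])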
Viable and necrotic tumor assessment from whole slide images of osteosarcoma using machine-learning and deep-learning models
Arunachalam, Harish Babu
Mishra, Rashika
Daescu, Ovidiu
Cederberg, Kevin
Rakheja, Dinesh
Sengupta, Anita
Leonard, David
Hallac, Rami
Leavey, Patrick
PLoS One2019Journal Article, cited 0 times
Osteosarcoma-Tumor-Assessment
Deep Learning
Support Vector Machine
Pathological estimation of tumor necrosis after chemotherapy is essential for patients with osteosarcoma. This study reports the first fully automated tool to assess viable and necrotic tumor in osteosarcoma, employing advances in histopathology digitization and automated learning. We selected 40 digitized whole slide images representing the heterogeneity of osteosarcoma and chemotherapy response. With the goal of labeling the diverse regions of the digitized tissue into viable tumor, necrotic tumor, and non-tumor, we trained 13 machine-learning models and selected the top performing one (a Support Vector Machine) based on reported accuracy. We also developed a deep-learning architecture and trained it on the same data set. We computed the receiver-operator characteristic for discrimination of non-tumor from tumor followed by conditional discrimination of necrotic from viable tumor and found our models performing exceptionally well. We then used the trained models to identify regions of interest on image-tiles generated from test whole slide images. The classification output is visualized as a tumor-prediction map, displaying the extent of viable and necrotic tumor in the slide image. Thus, we lay the foundation for a complete tumor assessment pipeline from original histology images to tumor-prediction map generation. The proposed pipeline can also be adopted for other types of tumor.
Exploit fully automatic low-level segmented PET data for training high-level deep learning algorithms for the corresponding CT data
Gsaxner, Christina
Roth, Peter M
Wallner, Jurgen
Egger, Jan
PLoS One2019Journal Article, cited 0 times
Website
RIDER Lung PET-CT
In this study, we present an approach for fully automatic urinary bladder segmentation in CT images using artificial neural networks. Automatic medical image analysis has become an invaluable tool in the different treatment stages of diseases. Especially medical image segmentation plays a vital role, since segmentation is often the initial step in an image analysis pipeline. Since deep neural networks have made a large impact on the field of image processing in the past years, we use two different deep learning architectures to segment the urinary bladder. Both of these architectures are based on pre-trained classification networks that are adapted to perform semantic segmentation. Since deep neural networks require a large amount of training data, specifically images and corresponding ground truth labels, we furthermore propose a method to generate such a suitable training data set from Positron Emission Tomography/Computed Tomography image data. This is done by applying thresholding to the Positron Emission Tomography data to obtain a ground truth and by utilizing data augmentation to enlarge the dataset. We discuss the influence of data augmentation on the segmentation results, and compare and evaluate the proposed architectures in terms of qualitative and quantitative segmentation performance. The results presented in this study allow concluding that deep neural networks can be considered a promising approach for segmenting the urinary bladder in CT images.
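The ground-truth generation step lends itself to a very small sketch: threshold the co-registered PET volume to label bladder voxels for the paired CT, assuming the two volumes are already aligned; the threshold fraction is an assumption.

import numpy as np

def pet_derived_labels(pet, ct, frac=0.4):
    # Voxels above a fraction of the PET maximum become foreground labels,
    # yielding an (image, mask) training pair without manual annotation.
    mask = (pet >= frac * pet.max()).astype(np.uint8)
    return ct, mask

Data augmentation (e.g., flips, rotations, elastic deformations) would then be applied to such pairs to enlarge the training set, as the study describes.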
Predicting all-cause and lung cancer mortality using emphysema score progression rate between baseline and follow-up chest CT images: A comparison of risk model performances
Schreuder, Anton
Jacobs, Colin
Gallardo-Estrella, Leticia
Prokop, Mathias
Schaefer-Prokop, Cornelia M
van Ginneken, Bram
PLoS One2019Journal Article, cited 0 times
Website
NLST
Cancer Screening
FDG PET based prediction of response in head and neck cancer treatment: Assessment of new quantitative imaging features
Beichel, Reinhard R.
Ulrich, Ethan J.
Smith, Brian J.
Bauer, Christian
Brown, Bartley
Casavant, Thomas
Sunderland, John J.
Graham, Michael M.
Buatti, John M.
PLoS One2019Journal Article, cited 0 times
QIN-HEADNECK
Head and Neck
PET
INTRODUCTION: 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) is now a standard diagnostic imaging test performed in patients with head and neck cancer for staging, re-staging, radiotherapy planning, and outcome assessment. Currently, quantitative analysis of FDG PET scans is limited to simple metrics like maximum standardized uptake value, metabolic tumor volume, or total lesion glycolysis, which have limited predictive value. The goal of this work was to assess the predictive potential of new (i.e., nonstandard) quantitative imaging features on head and neck cancer outcome.
METHODS: This retrospective study analyzed fifty-eight pre- and post-treatment FDG PET scans of patients with head and neck squamous cell cancer to calculate five standard and seventeen new features at baseline and post-treatment. Cox survival regression was used to assess the predictive potential of each quantitative imaging feature on disease-free survival.
RESULTS: Analysis showed that the post-treatment change of the average tracer uptake in the rim background region immediately adjacent to the tumor normalized by uptake in the liver represents a novel PET feature that is associated with disease-free survival (HR 1.95; 95% CI 1.27, 2.99) and has good discriminative performance (c index 0.791).
CONCLUSION: The reported findings define a promising new direction for quantitative imaging biomarker research in head and neck squamous cell cancer and highlight the potential role of new radiomics features in oncology decision making as part of precision medicine.
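The novel feature reported in the RESULTS can be written down directly; the sketch below computes the mean uptake in a rim just outside the tumor normalized by mean liver uptake, with the rim width as an assumption, and the prognostic quantity being the post-treatment change of this value.

import numpy as np
from scipy.ndimage import binary_dilation

def rim_to_liver(suv, tumor_mask, liver_mask, rim_voxels=3):
    # Mean SUV in the shell immediately outside the tumor, divided by the
    # mean SUV in the liver reference region.
    tumor = tumor_mask.astype(bool)
    rim = binary_dilation(tumor, iterations=rim_voxels) & ~tumor
    return suv[rim].mean() / suv[liver_mask.astype(bool)].mean()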
Radiomics features of the primary tumor fail to improve prediction of overall survival in large cohorts of CT- and PET-imaged head and neck cancer patients
Ger, Rachel B
Zhou, Shouhao
Elgohari, Baher
Elhalawani, Hesham
Mackin, Dennis M
Meier, Joseph G
Nguyen, Callistus M
Anderson, Brian M
Gay, Casey
Ning, Jing
Fuller, Clifton D
Li, Heng
Howell, Rebecca M
Layman, Rick R
Mawlawi, Osama
Stafford, R Jason
Aerts, Hugo JWL
Court, Laurence E.
PLoS One2019Journal Article, cited 0 times
Website
Head-Neck-PET-CT
Radiomics studies require many patients in order to be adequately powered; thus, patients are often combined from different institutions and imaged using different protocols. Various studies have shown that imaging protocols affect radiomics feature values. We examined whether using data from cohorts with controlled imaging protocols improved patient outcome models. We retrospectively reviewed 726 CT and 686 PET images from head and neck cancer patients, who were divided into training or independent testing cohorts. For each patient, radiomics features with different preprocessing were calculated, and two clinical variables (HPV status and tumor volume) were also included. A Cox proportional hazards model was built on the training data by using bootstrapped Lasso regression to predict overall survival. The effect of controlled imaging protocols on model performance was evaluated by subsetting the original training and independent testing cohorts to include only patients whose images were obtained using the same imaging protocol and vendor. Tumor volume, HPV status, and two radiomics covariates were selected for the CT model, resulting in an AUC of 0.72. However, volume alone produced a higher AUC, whereas adding radiomics features reduced the AUC. HPV status and one radiomics feature were selected as covariates for the PET model, resulting in an AUC of 0.59, but neither covariate was significantly associated with survival. Limiting the training and independent testing to patients with the same imaging protocol reduced the AUC for CT patients to 0.55, and no covariates were selected for PET patients. Radiomics features were not consistently associated with survival in CT or PET images of head and neck patients, even within patients with the same imaging protocol.
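For the modeling step, a minimal sketch of bootstrapped Lasso-Cox covariate selection with the lifelines package follows; the column names, penalty strength, and selection threshold are illustrative, not the study's settings.

import numpy as np
from lifelines import CoxPHFitter

def bootstrap_lasso_cox(df, n_boot=100, penalizer=0.1, seed=0):
    # Fit an L1-penalized Cox model on bootstrap resamples and count how
    # often each covariate keeps a non-zero coefficient.
    rng = np.random.default_rng(seed)
    counts = {c: 0 for c in df.columns if c not in ("time", "event")}
    for _ in range(n_boot):
        boot = df.sample(len(df), replace=True,
                         random_state=int(rng.integers(1 << 31)))
        cph = CoxPHFitter(penalizer=penalizer, l1_ratio=1.0)
        cph.fit(boot, duration_col="time", event_col="event")
        for c in counts:
            counts[c] += int(abs(cph.params_.get(c, 0.0)) > 1e-6)
    return counts  # covariates selected in most resamples enter the final model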
Adverse prognosis of glioblastoma contacting the subventricular zone: Biological correlates
Berendsen, S.
van Bodegraven, E.
Seute, T.
Spliet, W. G. M.
Geurts, M.
Hendrikse, J.
Schoysman, L.
Huiszoon, W. B.
Varkila, M.
Rouss, S.
Bell, E. H.
Kroonen, J.
Chakravarti, A.
Bours, V.
Snijders, T. J.
Robe, P. A.
PLoS One2019Journal Article, cited 2 times
Website
TCGA-GBM
Radiogenomics
Magnetic Resonance Imaging (MRI)
INTRODUCTION: The subventricular zone (SVZ) in the brain is associated with gliomagenesis and resistance to treatment in glioblastoma. In this study, we investigate the prognostic role and biological characteristics of subventricular zone (SVZ) involvement in glioblastoma. METHODS: We analyzed T1-weighted, gadolinium-enhanced MR images of a retrospective cohort of 647 primary glioblastoma patients diagnosed between 2005-2013, and performed a multivariable Cox regression analysis to adjust the prognostic effect of SVZ involvement for clinical patient- and tumor-related factors. Protein expression patterns of markers of, among others, neural stem cellness (CD133 and GFAP-delta) and (epithelial-) mesenchymal transition (NF-kappaB, C/EBP-beta and STAT3) were determined with immunohistochemistry on tissue microarrays containing 220 of the tumors. Molecular classification and mRNA expression-based gene set enrichment analyses, miRNA expression and SNP copy number analyses were performed on fresh frozen tissue obtained from 76 tumors. Confirmatory analyses were performed on glioblastoma TCGA/TCIA data. RESULTS: Involvement of the SVZ was a significant adverse prognostic factor in glioblastoma, independent of age, KPS, surgery type and postoperative treatment. Tumor volume and postoperative complications did not explain this prognostic effect. SVZ contact was associated with increased nuclear expression of the (epithelial-) mesenchymal transition markers C/EBP-beta and phospho-STAT3. SVZ contact was not associated with molecular subtype, distinct gene expression patterns, or markers of stem cellness. Our main findings were confirmed in a cohort of 229 TCGA/TCIA glioblastomas. CONCLUSION: In conclusion, involvement of the SVZ is an independent prognostic factor in glioblastoma, and associates with increased expression of key markers of (epithelial-) mesenchymal transformation, but does not correlate with stem cellness, molecular subtype, or specific (mi)RNA expression patterns.
Deep learning based image reconstruction algorithm for limited-angle translational computed tomography
Wang, Jiaxi
Liang, Jun
Cheng, Jingye
Guo, Yumeng
Zeng, Li
PLoS One2020Journal Article, cited 0 times
Website
deep learning
Convolutional Neural Network (CNN)
CT
image reconstruction
Quantifying the incremental value of deep learning: Application to lung nodule detection
Warsavage, Theodore Jr
Xing, Fuyong
Baron, Anna E
Feser, William J
Hirsch, Erin
Miller, York E
Malkoski, Stephen
Wolf, Holly J
Wilson, David O
Ghosh, Debashis
PLoS One2020Journal Article, cited 0 times
Website
LIDC-IDRI
Machine Learning
Computer Aided Detection (CADe)
We present a case study for implementing a machine learning algorithm with an incremental value framework in the domain of lung cancer research. Machine learning methods have often been shown to be competitive with prediction models in some domains; however, implementation of these methods is in early development. Often these methods are only directly compared to existing methods; here we present a framework for assessing the value of a machine learning model by assessing its incremental value. We developed a machine learning model to identify and classify lung nodules and assessed the incremental value added to existing risk prediction models. Multiple external datasets were used for validation. We found that our image model, trained on a dataset from The Cancer Imaging Archive (TCIA), improves upon existing models that are restricted to patient characteristics, but the results were inconclusive as to whether it improves on models that consider nodule features. Another interesting finding is the variable performance on different datasets, suggesting that population generalization with machine learning models may be more challenging than is often considered.
Robust radiogenomics approach to the identification of EGFR mutations among patients with NSCLC from three different countries using topologically invariant Betti numbers
OBJECTIVES: To propose a novel robust radiogenomics approach to the identification of epidermal growth factor receptor (EGFR) mutations among patients with non-small cell lung cancer (NSCLC) using Betti numbers (BNs). MATERIALS AND METHODS: Contrast enhanced computed tomography (CT) images of 194 multi-racial NSCLC patients (79 EGFR mutants and 115 wildtypes) were collected from three different countries using 5 manufacturers' scanners with a variety of scanning parameters. Ninety-nine cases obtained from the University of Malaya Medical Centre (UMMC) in Malaysia were used for training and validation procedures. Forty-one cases collected from the Kyushu University Hospital (KUH) in Japan and fifty-four cases obtained from The Cancer Imaging Archive (TCIA) in America were used for a test procedure. Radiomic features were obtained from BN maps, which represent topologically invariant heterogeneous characteristics of lung cancer on CT images, by applying histogram- and texture-based feature computations. A BN-based signature was determined using support vector machine (SVM) models with the best combination of features that maximized a robustness index (RI) which defined a higher total area under receiver operating characteristics curves (AUCs) and lower difference of AUCs between the training and the validation. The SVM model was built using the signature and optimized in a five-fold cross validation. The BN-based model was compared to conventional original image (OI)- and wavelet-decomposition (WD)-based models with respect to the RI between the validation and the test. RESULTS: The BN-based model showed a higher RI of 1.51 compared with the models based on the OI (RI: 1.33) and the WD (RI: 1.29). CONCLUSION: The proposed model showed higher robustness than the conventional models in the identification of EGFR mutations among NSCLC patients. The results suggested the robustness of the BN-based approach against variations in image scanner/scanning parameters.
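Betti numbers of a thresholded patch can be computed with elementary image operations; the sketch below does this in 2D (b0 as foreground components, b1 as enclosed holes), with simplified connectivity conventions, and a BN map would be built by sliding such a computation across the CT image over a range of thresholds.

import numpy as np
from scipy import ndimage

def betti_2d(patch, threshold):
    # b0: connected components of the thresholded foreground.
    # b1: background components that do not touch the border, i.e. holes.
    fg = patch >= threshold
    b0 = ndimage.label(fg)[1]
    bg_labels, n_bg = ndimage.label(~fg)
    border = np.unique(np.concatenate([bg_labels[0, :], bg_labels[-1, :],
                                       bg_labels[:, 0], bg_labels[:, -1]]))
    b1 = n_bg - np.count_nonzero(border)
    return b0, b1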
Semi-supervised learning for an improved diagnosis of COVID-19 in CT images
Han, C. H.
Kim, M.
Kwak, J. T.
PLoS One2021Journal Article, cited 0 times
Website
LCTSC
COVID-19
Lung CT Segmentation Challenge 2017
RIDER Collections
SPIE-AAPM Lung CT Challenge
Deep Learning
Computer Aided Diagnosis (CADx)
Coronavirus disease 2019 (COVID-19) has spread all over the world. Although the real-time reverse-transcription polymerase chain reaction (RT-PCR) test has been used as the primary diagnostic tool for COVID-19, CT-based diagnostic tools have been suggested to improve diagnostic accuracy and reliability. Herein we propose a semi-supervised deep neural network for improved detection of COVID-19. The proposed method utilizes CT images in a supervised and unsupervised manner to improve the accuracy and robustness of COVID-19 diagnosis. Both labeled and unlabeled CT images are employed. Labeled CT images are used for supervised learning. Unlabeled CT images are utilized for unsupervised learning in a way that the feature representations are invariant to perturbations in CT images. To systematically evaluate the proposed method, two COVID-19 CT datasets and three public CT datasets with no COVID-19 CT images are employed. In distinguishing COVID-19 from non-COVID-19 CT images, the proposed method achieves an overall accuracy of 99.83%, sensitivity of 0.9286, specificity of 0.9832, and positive predictive value (PPV) of 0.9192. The results are consistent between the COVID-19 challenge dataset and the public CT datasets. For discriminating between COVID-19 and common pneumonia CT images, the proposed method obtains 97.32% accuracy, 0.9971 sensitivity, 0.9598 specificity, and 0.9326 PPV. Moreover, the comparative experiments with respect to supervised learning and training strategies demonstrate that the proposed method is able to improve the diagnostic accuracy and robustness without exhaustive labeling. The proposed semi-supervised method, exploiting both supervised and unsupervised learning, facilitates an accurate and reliable diagnosis for COVID-19, leading to improved patient care and management.
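The consistency idea, predictions invariant to perturbations of unlabeled CTs, can be expressed as a two-term loss; the sketch below is a generic rendering in PyTorch, not the authors' exact objective, and the Gaussian-noise perturbation and equal weighting are assumptions.

import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_lab, y_lab, x_unlab, noise_std=0.05):
    # Supervised cross-entropy on labeled CTs plus a consistency term that
    # keeps predictions stable under small perturbations of unlabeled CTs.
    sup = F.cross_entropy(model(x_lab), y_lab)
    with torch.no_grad():
        target = F.softmax(model(x_unlab), dim=1)
    noisy = x_unlab + noise_std * torch.randn_like(x_unlab)
    cons = F.mse_loss(F.softmax(model(noisy), dim=1), target)
    return sup + cons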
The application of a workflow integrating the variable reproducibility and harmonizability of radiomic features on a phantom dataset
Ibrahim, Abdalla
Refaee, Turkey
Leijenaar, Ralph TH
Primakov, Sergey
Hustinx, Roland
Mottaghy, Felix M
Woodruff, Henry C
Maidment, Andrew DA
Lambin, Philippe
PLoS One2021Journal Article, cited 2 times
Website
Credence Cartridge Radiomics Phantom CT Scans
radiomic features
Harnessing clinical annotations to improve deep learning performance in prostate segmentation
Sarma, Karthik V.
Raman, Alex G.
Dhinagar, Nikhil J.
Priester, Alan M.
Harmon, Stephanie
Sanford, Thomas
Mehralivand, Sherif
Turkbey, Baris
Marks, Leonard S.
Raman, Steven S.
Speier, William
Arnold, Corey W.
PLoS One2021Journal Article, cited 0 times
ISBI-MR-Prostate-2013
PURPOSE: Developing large-scale datasets with research-quality annotations is challenging due to the high cost of refining clinically generated markup into high precision annotations. We evaluated the direct use of a large dataset with only clinically generated annotations in development of high-performance segmentation models for small research-quality challenge datasets.
MATERIALS AND METHODS: We used a large retrospective dataset from our institution comprised of 1,620 clinically generated segmentations, and two challenge datasets (PROMISE12: 50 patients, ProstateX-2: 99 patients). We trained a 3D U-Net convolutional neural network (CNN) segmentation model using our entire dataset, and used that model as a template to train models on the challenge datasets. We also trained versions of the template model using ablated proportions of our dataset, and evaluated the relative benefit of those templates for the final models. Finally, we trained a version of the template model using an out-of-domain brain cancer dataset, and evaluated the relevant benefit of that template for the final models. We used five-fold cross-validation (CV) for all training and evaluation across our entire dataset.
RESULTS: Our model achieves state-of-the-art performance on our large dataset (mean overall Dice 0.916, average Hausdorff distance 0.135 across CV folds). Using this model as a pre-trained template for refining on two external datasets significantly enhanced performance (30% and 49% enhancement in Dice scores respectively). Mean overall Dice and mean average Hausdorff distance were 0.912 and 0.15 for the ProstateX-2 dataset, and 0.852 and 0.581 for the PROMISE12 dataset. Using even small quantities of data to train the template enhanced performance, with significant improvements using 5% or more of the data.
CONCLUSION: We trained a state-of-the-art model using unrefined clinical prostate annotations and found that its use as a template model significantly improved performance in other prostate segmentation tasks, even when trained with only 5% of the original dataset.
Clinical application of mask region-based convolutional neural network for the automatic detection and segmentation of abnormal liver density based on hepatocellular carcinoma computed tomography datasets
Yang, C. J.
Wang, C. K.
Fang, Y. D.
Wang, J. Y.
Su, F. C.
Tsai, H. M.
Lin, Y. J.
Tsai, H. W.
Yeh, L. R.
PLoS One2021Journal Article, cited 0 times
Website
TCGA-LIHC
LIVER
TensorFlow
Segmentation
Computed Tomography (CT)
Computer Aided Detection (CADe)
Convolutional Neural Network (CNN)
The aim of the study was to use a previously proposed mask region-based convolutional neural network (Mask R-CNN) for automatic abnormal liver density detection and segmentation based on hepatocellular carcinoma (HCC) computed tomography (CT) datasets from a radiological perspective. Training and testing datasets were acquired retrospectively from two hospitals of Taiwan. The training dataset contained 10,130 images of liver tumor densities of 11,258 regions of interest (ROIs). The positive testing dataset contained 1,833 images of liver tumor densities with 1,874 ROIs, and negative testing data comprised 20,283 images without abnormal densities in liver parenchyma. The Mask R-CNN was used to generate a medical model, and areas under the curve, true positive rates, false positive rates, and Dice coefficients were evaluated. For abnormal liver CT density detection, in each image, we identified the mean area under the curve, true positive rate, and false positive rate, which were 0.9490, 91.99%, and 13.68%, respectively. For segmentation ability, the highest mean Dice coefficient obtained was 0.8041. This study trained a Mask R-CNN on various HCC images to construct a medical model that serves as an auxiliary tool for alerting radiologists to abnormal CT density in liver scans; this model can simultaneously detect liver lesions and perform automatic instance segmentation.
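For orientation, detection plus instance segmentation of the kind described is available off the shelf in torchvision; the sketch below runs the generic pretrained Mask R-CNN, which would have to be fine-tuned on the HCC annotations to reproduce anything like the reported model, and the score threshold is illustrative.

import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

ct_slice = torch.rand(3, 512, 512)       # placeholder: windowed CT slice in [0, 1]
with torch.no_grad():
    out = model([ct_slice])[0]           # boxes, labels, scores, per-instance masks
keep = out["scores"] > 0.5
masks = out["masks"][keep] > 0.5         # binary masks for the detected regions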
Multi-class classification of breast cancer abnormalities using Deep Convolutional Neural Network (CNN)
Heenaye-Mamode Khan, M.
Boodoo-Jahangeer, N.
Dullull, W.
Nathire, S.
Gao, X.
Sinha, G. R.
Nagwanshi, K. K.
PLoS One2021Journal Article, cited 0 times
Website
CBIS-DDSM
Deep convolutional neural network (DCNN)
BREAST
The real cause of breast cancer is very challenging to determine; therefore, early detection of the disease is necessary for reducing the death rate due to breast cancer. Early detection of cancer increases the chance of survival by up to 8%. Primarily, breast images from mammograms, X-rays or MRI are analyzed by radiologists to detect abnormalities. However, even experienced radiologists face problems in identifying features like micro-calcifications, lumps and masses, leading to high false positive and false negative rates. Recent advancements in image processing and deep learning create hope for devising more enhanced applications that can be used for the early detection of breast cancer. In this work, we have developed a Deep Convolutional Neural Network (CNN) to segment and classify the various types of breast abnormalities, such as calcifications, masses, asymmetry and carcinomas; unlike existing research work, which mainly classified the cancer into benign and malignant, this finer-grained classification supports improved disease management. Firstly, transfer learning was carried out on our dataset using the pre-trained model ResNet50. Along similar lines, we have developed an enhanced deep learning model, in which the learning rate is considered one of the most important attributes while training the neural network. The learning rate is set adaptively in our proposed model based on changes in the error curves during the learning process. The proposed deep learning model has achieved a performance of 88% in the classification of these four types of breast cancer abnormalities, namely masses, calcifications, carcinomas and asymmetry mammograms.
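The adaptive learning-rate behaviour described, adjusting the rate from changes in the error curve, is commonly expressed in Keras with the ReduceLROnPlateau callback; the sketch below shows that standard mechanism, though the paper's exact schedule may differ.

import tensorflow as tf

# Halve the learning rate whenever the validation loss stops improving.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                                 factor=0.5, patience=3,
                                                 min_lr=1e-6)
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[reduce_lr])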
AAWS-Net: Anatomy-aware weakly-supervised learning network for breast mass segmentation
Sun, Y.
Ji, Y.
PLoS One2021Journal Article, cited 0 times
Website
CBIS-DDSM
Computer Aided Diagnosis (CADx)
Supervised
Accurate segmentation of breast masses is an essential step in computer aided diagnosis of breast cancer. The scarcity of annotated training data greatly hinders a model's generalization ability, especially for deep learning based methods. However, high-quality image-level annotations are time-consuming and cumbersome to produce in medical image analysis scenarios. In addition, a large amount of weak annotations, which comprise common anatomical features, is under-utilized. To this end, inspired by teacher-student networks, we propose an Anatomy-Aware Weakly-Supervised learning Network (AAWS-Net) for extracting useful information from mammograms with weak annotations for efficient and accurate breast mass segmentation. Specifically, we adopt a weakly-supervised learning strategy in the Teacher to extract anatomy structure from mammograms with weak annotations by reconstructing the original image. Besides, knowledge distillation is used to suggest morphological differences between benign and malignant masses. Moreover, the prior knowledge learned from the Teacher is introduced to the Student in an end-to-end way, which improves the ability of the student network to locate and segment masses. Experiments on CBIS-DDSM have shown that our method yields promising performance compared with state-of-the-art alternative models for breast mass segmentation in terms of segmentation accuracy and IoU.
Development and validation of a deep learning model for detection of breast cancers in mammography from multi-institutional datasets
Ueda, D.
Yamamoto, A.
Onoda, N.
Takashima, T.
Noda, S.
Kashiwagi, S.
Morisaki, T.
Fukumoto, S.
Shiba, M.
Morimura, M.
Shimono, T.
Kageyama, K.
Tatekawa, H.
Murai, K.
Honjo, T.
Shimazaki, A.
Kabata, D.
Miki, Y.
PLoS One2022Journal Article, cited 0 times
CBIS-DDSM
*Breast Neoplasms/diagnostic imaging
*Deep Learning
Early Detection of Cancer
Female
Humans
Mammography/methods
Retrospective Studies
OBJECTIVES: The objective of this study was to develop and validate a state-of-the-art, deep learning (DL)-based model for detecting breast cancers on mammography. METHODS: Mammograms in a hospital development dataset, a hospital test dataset, and a clinic test dataset were retrospectively collected from January 2006 through December 2017 in Osaka City University Hospital and Medcity21 Clinic. The hospital development dataset and a publicly available digital database for screening mammography (DDSM) dataset were used to train and to validate the RetinaNet, one type of DL-based model, with five-fold cross-validation. The model's sensitivity, mean false positive indications per image (mFPI), and partial area under the curve (AUC) at 1.0 mFPI were assessed externally on both test datasets. RESULTS: The hospital development dataset, hospital test dataset, clinic test dataset, and DDSM development dataset included a total of 3179 images (1448 malignant images), 491 images (225 malignant images), 2821 images (37 malignant images), and 1457 malignant images, respectively. The proposed model detected all cancers with a 0.45-0.47 mFPI and had partial AUCs of 0.93 in both test datasets. CONCLUSIONS: The DL-based model developed for this study was able to detect all breast cancers with a very low mFPI. Our DL-based model achieved the highest performance to date, which might lead to improved diagnosis for breast cancer.
Teacher-student approach for lung tumor segmentation from mixed-supervised datasets
Fredriksen, Vemund
Sevle, Svein Ole M.
Pedersen, André
Langø, Thomas
Kiss, Gabriel
Lindseth, Frank
PLoS One2022Journal Article, cited 0 times
Lung-PET-CT-Dx
PURPOSE: Cancer is among the leading causes of death in the developed world, and lung cancer is the most lethal type. Early detection is crucial for better prognosis, but can be resource intensive to achieve. Automating tasks such as lung tumor localization and segmentation in radiological images can free valuable time for radiologists and other clinical personnel. Convolutional neural networks may be suited for such tasks, but require substantial amounts of labeled data to train. Obtaining labeled data is a challenge, especially in the medical domain.
METHODS: This paper investigates the use of a teacher-student design to utilize datasets with different types of supervision to train an automatic model performing pulmonary tumor segmentation on computed tomography images. The framework consists of two models: the student that performs end-to-end automatic tumor segmentation and the teacher that supplies the student additional pseudo-annotated data during training.
RESULTS: Using only a small proportion of semantically labeled data and a large number of bounding box annotated data, we achieved competitive performance using a teacher-student design. Models trained on larger amounts of semantic annotations did not perform better than those trained on teacher-annotated data. Our model trained on a small number of semantically labeled data achieved a mean dice similarity coefficient of 71.0 on the MSD Lung dataset.
CONCLUSIONS: Our results demonstrate the potential of utilizing teacher-student designs to reduce the annotation load, as less supervised annotation schemes may be performed, without any real degradation in segmentation accuracy.
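The teacher-student coupling can be sketched in a few lines: a frozen teacher converts weakly (bounding-box) annotated volumes into pseudo masks that supervise the student; this is a generic rendering with an assumed binary-segmentation head, not the framework's exact losses.

import torch
import torch.nn.functional as F

def distill_step(teacher, student, x_weak, optimizer):
    # The teacher's thresholded predictions act as pseudo ground truth
    # for the student on box-annotated (weakly supervised) scans.
    teacher.eval()
    with torch.no_grad():
        pseudo = (torch.sigmoid(teacher(x_weak)) > 0.5).float()
    loss = F.binary_cross_entropy_with_logits(student(x_weak), pseudo)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()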
A CNN-transformer fusion network for COVID-19 CXR image classification
Cao, K.
Deng, T.
Zhang, C.
Lu, L.
Li, L.
PLoS One2022Journal Article, cited 0 times
Website
MIDRC-RICORD-1C
COVID-19
Humans
*COVID-19/diagnostic imaging
*Deep Learning
Neural Networks, Computer
Algorithms
*Pneumonia
The global health crisis due to the fast spread of coronavirus disease (Covid-19) has caused great harm to healthcare, the economy, and many other aspects of daily life. The highly infectious and insidious nature of the new coronavirus greatly increases the difficulty of outbreak prevention and control. Early and rapid detection of Covid-19 is an effective way to reduce its spread. However, detecting Covid-19 accurately and quickly in large populations remains a major challenge worldwide. In this study, a CNN-transformer fusion framework is proposed for the automatic classification of pneumonia on chest X-ray. This framework includes two parts: data processing and image classification. The data processing stage eliminates the differences between data from different medical institutions so that they have the same storage format; in the image classification stage, we use a multi-branch network with a custom convolution module and a transformer module, including feature extraction, feature focus, and feature classification sub-networks. Feature extraction subnetworks extract the shallow features of the image and interact with the information through the convolution and transformer modules. Both the local and global features are extracted by the convolution module and transformer module of the feature-focus subnetworks, and are classified by the feature classification subnetworks. The proposed network can decide whether or not a patient has pneumonia, and differentiate between Covid-19 and bacterial pneumonia. This network was implemented on the collected benchmark datasets, and the results show that accuracy, precision, recall, and F1 score are 97.09%, 97.16%, 96.93%, and 97.04%, respectively. Our network was compared with methods proposed by other researchers and achieved better results in terms of accuracy, precision, and F1 score, proving that it is superior for Covid-19 detection. With further improvements to this network, we hope that it will provide doctors with an effective tool for diagnosing Covid-19.
Machine learning with textural analysis of longitudinal multiparametric MRI and molecular subtypes accurately predicts pathologic complete response in patients with invasive breast cancer
Syed, A.
Adam, R.
Ren, T.
Lu, J.
Maldjian, T.
Duong, T. Q.
PLoS One2023Journal Article, cited 9 times
Website
Multiparametric Magnetic Resonance Imaging (mpMRI)
Diffusion Magnetic Resonance Imaging/methods
Retrospective Studies
Treatment Outcome
Magnetic Resonance Imaging/methods
Neoadjuvant Therapy/methods
Machine Learning
PURPOSE: To predict pathological complete response (pCR) after neoadjuvant chemotherapy using extreme gradient boosting (XGBoost) with MRI and non-imaging data at multiple treatment timepoints. MATERIAL AND METHODS: This retrospective study included breast cancer patients (n = 117) who underwent neoadjuvant chemotherapy. Data types used included tumor ADC values, diffusion-weighted and dynamic-contrast-enhanced MRI at three treatment timepoints, and patient demographics and tumor data. GLCM textural analysis was performed on MRI data. An extreme gradient boosting machine learning algorithm was used to predict pCR. Prediction performance was evaluated using the area under the curve (AUC) of the receiver operating characteristic curve along with precision and recall. RESULTS: Prediction using texture features of DWI and DCE images at multiple treatment time points (AUC = 0.871; 95% CI: 0.768, 0.974; p<0.001 and AUC = 0.903; 95% CI: 0.854, 0.952; p<0.001, respectively) outperformed that using mean tumor ADC (AUC = 0.850; 95% CI: 0.764, 0.936; p<0.001). The AUC using all MRI data was 0.933 (95% CI: 0.836, 1.03; p<0.001). The AUC using non-MRI data was 0.919 (95% CI: 0.848, 0.99; p<0.001). The highest AUC of 0.951 (95% CI: 0.909, 0.993; p<0.001) was achieved with all MRI and all non-MRI data at all time points as inputs. CONCLUSION: Using XGBoost on extracted GLCM features and non-imaging data accurately predicts pCR. This early prediction of response can minimize exposure to toxic chemotherapy, allowing regimen modification mid-treatment and ultimately achieving better outcomes.
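The feature-extraction step can be illustrated with scikit-image's GLCM utilities feeding an XGBoost classifier; the quantization depth, offsets, and chosen texture properties below are assumptions standing in for the study's full feature set.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from xgboost import XGBClassifier

def glcm_features(roi, levels=32):
    # GLCM texture descriptors of a quantized tumor ROI from one timepoint.
    q = np.digitize(roi, np.linspace(roi.min(), roi.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 2], levels=levels,
                        symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

# X: per-patient rows of GLCM features across timepoints plus clinical data
# clf = XGBClassifier(n_estimators=200, max_depth=3).fit(X_train, y_pcr)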
A divide and conquer approach to maximise deep learning mammography classification accuracies
Jaamour, A.
Myles, C.
Patel, A.
Chen, S. J.
McMillan, L.
Harris-Birtill, D.
PLoS One2023Journal Article, cited 0 times
Website
CBIS-DDSM
Female
*Deep Learning
Mammography/methods
*Breast Neoplasms/diagnostic imaging
Convolutional Neural Network (CNN)
mini-MIAS
BREAST
Breast cancer claims 11,400 lives on average every year in the UK, making it one of the deadliest diseases. Mammography is the gold standard for detecting early signs of breast cancer, which can help cure the disease during its early stages. However, incorrect mammography diagnoses are common and may harm patients through unnecessary treatments and operations (or a lack of treatment). Therefore, systems that can learn to detect breast cancer on their own could help reduce the number of incorrect interpretations and missed cases. Various deep learning techniques, which can be used to implement a system that learns how to detect instances of breast cancer in mammograms, are explored throughout this paper. Convolutional Neural Networks (CNNs) are used as part of a pipeline based on deep learning techniques. A divide and conquer approach is followed to analyse the effects on performance and efficiency of utilising diverse deep learning techniques such as varying network architectures (VGG19, ResNet50, InceptionV3, DenseNet121, MobileNetV2), class weights, input sizes, image ratios, pre-processing techniques, transfer learning, dropout rates, and types of mammogram projections. This approach serves as a starting point for model development of mammography classification tasks. Practitioners can benefit from this work by using the divide and conquer results to select the most suitable deep learning techniques for their case out-of-the-box, thus reducing the need for extensive exploratory experimentation. Multiple techniques are found to provide accuracy gains relative to a general baseline (a VGG19 model using uncropped 512 x 512 pixel input images with a dropout rate of 0.2 and a learning rate of 1 x 10^-3) on the Curated Breast Imaging Subset of DDSM (CBIS-DDSM) dataset. These techniques involve transfer learning of pre-trained ImageNet weights to a MobileNetV2 architecture, with pre-trained weights from a binarised version of the mini Mammography Image Analysis Society (mini-MIAS) dataset applied to the fully connected layers of the model, coupled with using class weights to alleviate class imbalance, and splitting CBIS-DDSM samples between images of masses and calcifications. Using these techniques, a 5.6% gain in accuracy over the baseline model was accomplished. Other deep learning techniques from the divide and conquer approach, such as larger image sizes, do not yield increased accuracies without the use of image pre-processing techniques such as Gaussian filtering, histogram equalisation and input cropping.
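One of the best-performing combinations described, a MobileNetV2 backbone with transferred ImageNet weights and class weighting, can be sketched in Keras as follows; the head layout, optimizer settings, and weight values are illustrative, and the mini-MIAS pre-training of the dense layers is omitted.

import tensorflow as tf

def build_model(input_shape=(224, 224, 3), n_classes=2):
    # Frozen MobileNetV2 backbone with ImageNet weights; only the new
    # classification head is trained initially.
    base = tf.keras.applications.MobileNetV2(include_top=False,
                                             weights="imagenet",
                                             input_shape=input_shape)
    base.trainable = False
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model.fit(train_ds, validation_data=val_ds,
#           class_weight={0: 1.0, 1: 4.0})  # illustrative imbalance weights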
The impact of variance in carnitine palmitoyltransferase-1 expression on breast cancer prognosis is stratified by clinical and anthropometric factors
Liu, R.
Ospanova, S.
Perry, R. J.
PLoS One2023Journal Article, cited 0 times
Website
CPT1A is a rate-limiting enzyme in fatty acid oxidation and is upregulated in high-risk breast cancer. The relationship of obesity and menopausal status with breast cancer prognosis is well established, but their connection with fatty acid metabolism is not. We utilized RNA sequencing data in the Xena Functional Genomics Explorer to explore CPT1A's effect on breast cancer patients' survival probability. Using [18F]-fluorothymidine positron emission tomography-computed tomography images from The Cancer Imaging Archive, we segmented these analyses by obesity and menopausal status. In 1214 patients, higher CPT1A expression is associated with lower breast cancer survivability. We confirmed a previously observed protective relationship between obesity and breast cancer in pre-menopausal patients and supported these findings using two-sided Pearson correlations. Taken together, these analyses using open-access databases bolster the potential role of CPT1A-dependent fatty acid metabolism as a pathogenic factor in breast cancer.
SIFT-GVF-based lung edge correction method for correcting the lung region in CT images
Li, X.
Feng, B.
Qiao, S.
Wei, H.
Feng, C.
PLoS One2023Journal Article, cited 0 times
Website
LIDC-IDRI
Thorax
Computed Tomography (CT)
Lung/diagnostic imaging
Segmentation
Algorithm Development
Radiomic features
Juxtapleural nodules are excluded from the segmented lung region by the Hounsfield unit threshold-based segmentation method. To re-include those regions in the lung region, a new approach using scale-invariant feature transform and gradient vector flow models is presented in this study. First, the scale-invariant feature transform method was utilized to detect all scale-invariant points in the binary lung region, and the boundary points in the neighborhood of a scale-invariant point were collected to form the supportive boundary lines. Second, we utilized a Fourier descriptor to obtain a characteristic representation of each supportive boundary line, and spectrum energy was used to recognize the supportive boundaries that must be corrected. Third, the gradient vector flow-snake method was presented to correct the recognized supportive boundaries with a smooth profile curve, giving an ideal correction edge in those regions. Finally, the performance of the proposed method was evaluated through experiments on multiple authentic computed tomography images. The results and their robustness show that the proposed method can correct the juxtapleural region precisely.
Detection of malignancy in whole slide images of endometrial cancer biopsies using artificial intelligence
Fell, Christina
Mohammadi, Mahnaz
Morrison, David
Arandjelović, Ognjen
Syed, Sheeba
Konanahalli, Prakash
Bell, Sarah
Bryson, Gareth
Harrison, David J.
Harris-Birtill, David
PLoS One2023Journal Article, cited 0 times
CPTAC-UCEC
In this study we use artificial intelligence (AI) to categorise endometrial biopsy whole slide images (WSI) from digital pathology as either "malignant", "other or benign" or "insufficient". An endometrial biopsy is a key step in the diagnosis of endometrial cancer; biopsies are viewed and diagnosed by pathologists. Pathology is increasingly digitised, with slides viewed as images on screens rather than through the lens of a microscope. The availability of these images is driving automation via the application of AI. A model that classifies slides in the manner proposed would allow prioritisation of these slides for pathologist review and hence reduce time to diagnosis for patients with cancer. Previous studies using AI on endometrial biopsies have examined slightly different tasks, for example using images alongside genomic data to differentiate between cancer subtypes. We took 2909 slides with "malignant" and "other or benign" areas annotated by pathologists. A fully supervised convolutional neural network (CNN) model was trained to calculate the probability of a patch from the slide being "malignant" or "other or benign". Heatmaps of all the patches on each slide were then produced to show malignant areas. These heatmaps were used to train a slide classification model to give the final slide categorisation as either "malignant", "other or benign" or "insufficient". The final model was able to classify 90% of all slides correctly and 97% of slides in the malignant class; this accuracy is good enough to allow prioritisation of pathologists' workload.
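A minimal sketch of the patch-to-slide aggregation idea described above, assuming per-patch malignancy probabilities are already available from the CNN; the grid shape, threshold, and summary features are illustrative, not the study's actual slide classifier.

```python
import numpy as np

def slide_heatmap(patch_probs, grid_shape):
    """Arrange per-patch malignancy probabilities (row-major) into a 2-D map."""
    return np.asarray(patch_probs).reshape(grid_shape)

def slide_features(heatmap, threshold=0.5):
    """Summary statistics a slide-level classifier could consume."""
    suspicious = heatmap >= threshold
    return {"max_prob": float(heatmap.max()),
            "mean_prob": float(heatmap.mean()),
            "suspicious_fraction": float(suspicious.mean())}

hm = slide_heatmap(np.random.rand(48), (6, 8))   # toy 6 x 8 patch grid
print(slide_features(hm))
```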
Brain tumor detection and segmentation: Interactive framework with a visual interface and feedback facility for dynamically improved accuracy and trust
Sailunaz, K.
Bestepe, D.
Alhajj, S.
Ozyer, T.
Rokne, J.
Alhajj, R.
PLoS One2023Journal Article, cited 0 times
Website
BraTS 2018
Humans
Feedback
*Trust
Neural Networks
Computer
Radiopharmaceuticals
*Brain Neoplasms/diagnostic imaging
Brain
Magnetic Resonance Imaging/methods
Image Processing
Computer-Assisted/methods
Brain cancers caused by malignant brain tumors are one of the most fatal cancer types, with a low survival rate mostly due to the difficulties in early detection. Medical professionals therefore use various invasive and non-invasive methods for detecting and treating brain tumors at the earlier stages, thus enabling early treatment. The main non-invasive methods for brain tumor diagnosis and assessment are brain imaging like computed tomography (CT), positron emission tomography (PET) and magnetic resonance imaging (MRI) scans. In this paper, the focus is on detection and segmentation of brain tumors from 2D and 3D brain MRIs. For this purpose, a complete automated system with a web application user interface is described which detects and segments brain tumors with accuracy and Dice scores of more than 90%. The user can upload brain MRIs or access brain images from hospital databases to check the presence or absence of a brain tumor, to check the existence of a brain tumor from brain MRI features, and to extract the tumor region precisely from the brain MRI using deep neural networks like CNN, U-Net and U-Net++. The web application also provides an option for entering feedback on the results of the detection and segmentation, allowing healthcare professionals to add more precise information that can be used to train the model for better future predictions and segmentations.
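Since the system's quality is reported via Dice scores, a minimal sketch of the Dice similarity coefficient on binary masks may help make the metric concrete (synthetic masks; not the paper's code).

```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((128, 128), dtype=bool); a[30:70, 40:80] = True   # prediction
b = np.zeros((128, 128), dtype=bool); b[35:75, 45:85] = True   # ground truth
print(f"Dice: {dice_score(a, b):.3f}")
```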
A CT-based transfer learning approach to predict NSCLC recurrence: The added-value of peritumoral region
Bove, S.
Fanizzi, A.
Fadda, F.
Comes, M. C.
Catino, A.
Cirillo, A.
Cristofaro, C.
Montrone, M.
Nardone, A.
Pizzutilo, P.
Tufaro, A.
Galetta, D.
Massafra, R.
PLoS One2023Journal Article, cited 0 times
Website
NSCLC Radiogenomics
*Carcinoma
Non-Small-Cell Lung/diagnostic imaging/genetics
*Lung Neoplasms/diagnostic imaging/genetics
Tomography
X-Ray Computed/methods
Machine Learning
Non-small cell lung cancer (NSCLC) represents 85% of all new lung cancer diagnoses and presents a high recurrence rate after surgery. Thus, an accurate prediction of recurrence risk in NSCLC patients at diagnosis could be essential to designate risk patients to more aggressive medical treatments. In this manuscript, we apply a transfer learning approach to predict recurrence in NSCLC patients, exploiting only data acquired during the screening phase. In particular, we used a public radiogenomic dataset of NSCLC patients having a primary tumor CT image and clinical information. Starting from the CT slice containing the tumor with maximum area, we considered three different dilation sizes to identify three Regions of Interest (ROIs): CROP (without dilation), CROP 10 and CROP 20. Then, from each ROI, we extracted radiomic features by means of different pre-trained CNNs. The latter were combined with clinical information, and a Support Vector Machine classifier was trained to predict NSCLC recurrence. The classification performances of the devised models were finally evaluated on both the hold-out training and hold-out test sets, into which the original sample had previously been divided. The experimental results showed that the model obtained analyzing CROP 20 images, which are the ROIs containing more peritumoral area, achieved the best performances on both the hold-out training set, with an AUC of 0.73, an Accuracy of 0.61, a Sensitivity of 0.63, and a Specificity of 0.60, and on the hold-out test set, with an AUC value of 0.83, an Accuracy value of 0.79, a Sensitivity value of 0.80, and a Specificity value of 0.78. The proposed model represents a promising procedure for early prediction of recurrence risk in NSCLC patients.
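A hedged sketch of the general recipe described above: deep features from a pre-trained CNN over tumor ROIs, concatenated with clinical variables, feeding an SVM. The backbone (ResNet50), ROI size, and feature dimensions are assumptions; the paper compared several pre-trained CNNs.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Pre-trained backbone used purely as a fixed feature extractor
backbone = tf.keras.applications.ResNet50(include_top=False, pooling="avg",
                                          weights="imagenet")

def deep_features(roi_batch):
    """roi_batch: (n, 224, 224, 3) array of CT ROI crops scaled to [0, 255]."""
    x = tf.keras.applications.resnet50.preprocess_input(roi_batch)
    return backbone.predict(x, verbose=0)

rois = np.random.rand(20, 224, 224, 3) * 255   # placeholder CROP 20 ROIs
clinical = np.random.rand(20, 5)               # placeholder clinical variables
X = np.hstack([deep_features(rois), clinical])
y = np.random.randint(0, 2, 20)                # recurrence labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
```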
The association between the amino acid transporter LAT1, tumor immunometabolic and proliferative features and menopausal status in breast cancer
Ramshankar, Gautham
Liu, Ryan
Perry, Rachel J.
PLoS One2023Journal Article, cited 0 times
ACRIN-FLT-Breast
L-type Amino Acid Transporter 1 (LAT1) facilitates the uptake of specific essential amino acids, and due to this quality, it has been correlated with worse patient outcomes in various cancer types. However, the role of various clinical factors, including menopausal status, in mediating LAT1's prognostic effects remains incompletely understood. This is particularly true in the unique subset of tumors that are both obesity-associated and responsive to immunotherapy, including breast cancer. To close this gap, we employed 6 sets of transcriptomic data using the Kaplan-Meier model in the Xena Functional Genomics Explorer, demonstrating that higher LAT1 expression diminishes breast cancer patients' survival probability. Additionally, we analyzed 3'-Deoxy-3'-18F-Fluorothymidine positron emission tomography-computed tomography (18F-FLT PET-CT) images found on The Cancer Imaging Archive (TCIA). After separating all patients based on menopausal status, we correlated the measured 18F-FLT uptake with various clinical parameters quantifying body composition, tumor proliferation, and immune cell infiltration. By analyzing a wealth of deidentified, open-access data, the current study investigates the impact of LAT1 expression on breast cancer prognosis, along with the menopausal status-dependent associations between tumor proliferation, immunometabolism, and systemic metabolism.
Development of automatic generation system for lung nodule finding descriptions
Momoki, Y.
Ichinose, A.
Nakamura, K.
Iwano, S.
Kamiya, S.
Yamada, K.
Naganawa, S.
PLoS One2024Journal Article, cited 0 times
Website
RIDER Lung CT
Humans
Automatic Segmentation
Artificial Intelligence
*Lung Neoplasms/diagnostic imaging
Tomography
X-Ray Computed/methods
LUNG
Radiomics
*Solitary Pulmonary Nodule/diagnostic imaging
Radiographic Image Interpretation
Computer-Assisted/methods
Worldwide, lung cancer is the leading cause of cancer-related deaths. To manage lung nodules, radiologists observe computed tomography images, review various imaging findings, and record these in radiology reports. The report contents should be of high quality and uniform regardless of the radiologist. Here, we propose an artificial intelligence system that automatically generates descriptions related to lung nodules in computed tomography images. Our system consists of an image recognition method for extracting contents from images (namely, bronchopulmonary segments and nodule characteristics) and a natural language processing method to generate fluent descriptions. To verify our system's clinical usefulness, we conducted an experiment in which two radiologists created nodule descriptions of findings using our system. Through our system, the similarity of the described contents between the two radiologists (p = 0.001) and the comprehensiveness of the contents (p = 0.025) improved, while the accuracy did not significantly deteriorate (p = 0.484).
Non-small cell lung cancer detection through knowledge distillation approach with teaching assistant
Pavel, M. A.
Islam, R.
Babor, S. B.
Mehadi, R.
Khan, R.
PLoS One2024Journal Article, cited 0 times
Website
NSCLC-Radiomics
Humans
*Carcinoma
Non-Small-Cell Lung/diagnostic imaging
*Lung Neoplasms/diagnostic imaging
*Tomography
X-Ray Computed/methods
*Deep Learning
Non-small cell lung cancer (NSCLC) exhibits a comparatively slower rate of metastasis in contrast to small cell lung cancer, contributing to approximately 85% of the global patient population. In this work, leveraging CT scan images, we deploy a knowledge distillation technique within teaching assistant (TA) and student frameworks for NSCLC classification. We employed various deep learning models, CNN, VGG19, ResNet152v2, Swin, CCT, and ViT, and assigned roles as teacher, teaching assistant and student. Evaluation underscores strong model performance across metrics, achieved via cost-sensitive learning and precise fine-tuning of the hyperparameters (alpha and temperature), highlighting the model's efficiency in lung cancer tumor prediction and classification. The applied TA (ResNet152) and student (CNN) models achieved 90.99% and 94.53% test accuracies, respectively, with optimal hyperparameters (alpha = 0.7 and temperature = 7). The implementation of the TA framework improves the overall performance of the student model. After obtaining Shapley values, explainable AI is applied with a partition explainer to check each class's contribution, further enhancing the transparency of the implemented deep learning techniques. Finally, a web application was designed to be user-friendly and to classify lung types in newly captured images. The three-stage knowledge distillation technique proved efficient, with significantly reduced trainable parameters and training time, making it applicable to memory-constrained edge devices.
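A minimal sketch of the distillation objective implied above: a weighted blend of soft-target KL divergence at temperature T and hard-label cross-entropy, written in PyTorch with the reported alpha = 0.7 and T = 7. Which term alpha weights is an assumption about the paper's convention.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.7, T=7.0):
    """Blend of hard-label cross-entropy and temperature-softened KL term."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)   # rescale soft gradients
    return alpha * soft + (1 - alpha) * hard           # alpha convention assumed

s, t = torch.randn(8, 2), torch.randn(8, 2)   # student/TA logits, 2 classes
y = torch.randint(0, 2, (8,))
print(distillation_loss(s, t, y).item())
```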
DLLabelsCT: Annotation tool using deep transfer learning to assist in creating new datasets from abdominal computed tomography scans, case study: Pancreas
Mustonen, H.
Isosalo, A.
Nortunen, M.
Nevalainen, M.
Nieminen, M. T.
Huhta, H.
PLoS One2024Journal Article, cited 0 times
Website
Pancreas-CT
Humans
*Deep Learning
*Pancreas/diagnostic imaging
*Tomography
X-Ray Computed/methods
*Neural Networks
Computer
Image Processing
Computer-Assisted/methods
Algorithms
The utilization of artificial intelligence (AI) is expanding significantly within medical research and, to some extent, in clinical practice. Deep learning (DL) applications, which use large convolutional neural networks (CNN), hold considerable potential, especially in optimizing radiological evaluations. However, training DL algorithms to clinical standards requires extensive datasets, and their processing is labor-intensive. In this study, we developed an annotation tool named DLLabelsCT that utilizes CNN models to accelerate the image analysis process. To validate DLLabelsCT, we trained a CNN model with a ResNet34 encoder and a UNet decoder to segment the pancreas on an open-access dataset and used the DL model to assist in annotating a local dataset, which was further used to refine the model. DLLabelsCT was also tested on two external testing datasets. The tool accelerates annotation by 3.4 times compared to a completely manual annotation method. Out of 3,715 CT scan slices in the testing datasets, 50% did not require editing when reviewing the segmentations made by the ResNet34-UNet model, and the mean and standard deviation of the Dice similarity coefficient was 0.82+/-0.24. DLLabelsCT is highly accurate and significantly saves time and resources. Furthermore, it can be easily modified to support other deep learning models for other organs, making it an efficient tool for future research involving larger datasets.
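A hedged sketch of a ResNet34-encoder U-Net of the kind DLLabelsCT uses for pre-segmentation, built here with the segmentation_models_pytorch library as an assumption; the authors' implementation may differ.

```python
import torch
import segmentation_models_pytorch as smp

# U-Net decoder on a ResNet34 encoder: single-channel CT in, binary mask out
model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                 in_channels=1, classes=1)
model.eval()

ct_slice = torch.randn(1, 1, 256, 256)             # placeholder CT slice
with torch.no_grad():
    mask_logits = model(ct_slice)
proposed_mask = torch.sigmoid(mask_logits) > 0.5   # annotator then edits this
```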
Volume-based inter difference XOR pattern: a new pattern for pulmonary nodule detection in CT images
Chitradevi, A.
Singh, N. Nirmal
Jayapriya, K.
International Journal of Biomedical Engineering and Technology2021Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Aided Diagnosis (CADx)
Computer Aided Detection (CADe)
LUNG
Classification
Pulmonary nodule identification, which paves the path to cancer diagnosis, is a challenging task today. The proposed work, volume-based inter difference XOR pattern (VIDXP), provides an efficient lung nodule detection system using a 3D texture-based pattern, formed by XOR pattern calculation of inter-frame grey value differences between the centre frame and its neighbourhood frames in a rotationally clockwise direction, for every segmented nodule. Different classifiers such as random forest (RF), decision tree (DT) and AdaBoost are used with ten trials of five-fold cross-validation for classification. The experimental analysis on the public database, lung image database consortium-image database resource initiative (LIDC-IDRI), shows that the proposed scheme gives better accuracy compared with existing approaches. Further, the proposed scheme is enhanced by combining shape information using a histogram of oriented gradients (HOG), which improves the classification accuracy.
Quantitative evaluation of denoising techniques of lung computed tomography images: an experimental investigation
Singh, Bikesh Kumar
Nair, Neeti
Falgun, Patle Ashwini
Jain, Pankaj
International Journal of Biomedical Engineering and Technology2022Journal Article, cited 0 times
Website
SPIE-AAPM Lung CT Challenge
Radiomics
Image denoising
LUNG
Computed Tomography (CT)
Computer aided diagnosis (CADx)
edge preservation
quantitative evaluation
image contrast
time domain
frequency domain
Appropriate selection of a denoising method is a critical component of lung computed tomography (CT)-based computer aided diagnosis (CAD) systems, since noise and artefacts may deteriorate the image quality significantly, thereby leading to incorrect diagnosis. This study presents a comparative investigation of various techniques used for denoising lung CT images. Current practices, evaluation measures, research gaps and future challenges in this area are also discussed. Experiments on 20 real-time lung CT images indicate that a Gaussian filter with a 3 × 3 window size outperformed the others, achieving high peak signal-to-noise ratio (PSNR), Pratt's figure of merit (PFOM), signal-to-noise ratio (SNR) and root mean square error (RMSE) of 45.476, 97.964, 32.811, 0.948 and 0.008, respectively. Further, this approach also demonstrates good edge retrieval efficiency. Future work is needed to evaluate various filters in clinical practice along with segmentation, feature extraction, and classification of lung nodules in CT images.
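A minimal sketch of the kind of quantitative comparison reported: denoising with a 3 × 3 Gaussian kernel and scoring with PSNR (SciPy and scikit-image; the noise model and parameters are assumptions).

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import peak_signal_noise_ratio

clean = np.random.rand(512, 512)                         # placeholder CT slice
noisy = clean + np.random.normal(0, 0.05, clean.shape)   # assumed noise model

# sigma=1 with truncate=1.0 gives a kernel radius of 1, i.e. 3 x 3 support
denoised = gaussian_filter(noisy, sigma=1, truncate=1.0)
print("PSNR noisy:   ", peak_signal_noise_ratio(clean, noisy, data_range=1.0))
print("PSNR denoised:", peak_signal_noise_ratio(clean, denoised, data_range=1.0))
```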
LESH - feature extraction and cognitive machine learning techniques for recognition of lung cancer cells
Reddy, Ummadi Janardhan
Reddy, B. Venkata Ramana
Reddy, B. Eswara
2021Journal Article, cited 0 times
LIDC-IDRI
The novel local energy-based shape histogram (LESH) feature extraction strategy was previously proposed for different cancer predictions. This paper extends that work by applying the LESH technique to recognize lung cancer using machine learning approaches. As traditional neural network systems are complex and time consuming, machine learning approaches that automate the process of tumour identification are considered in this work. Before LESH feature extraction, we enhanced the radiograph images using a contrast-limited adaptive histogram equalisation approach. Cognitive machine learning classifiers are chosen, specifically the extreme learning machine (ELM) and the support vector machine (SVM), applied to the LESH features for effective analysis of the correct therapeutic state in the X-ray and MRI pictures. The framework comprises a feature extraction stage, a feature selection stage and a classification stage. For feature extraction/selection, different wavelet functions have been applied to find significant accuracy. The K-nearest neighbour algorithm has been developed/used for classification. The dataset used in the proposed work has 114 nodule regions and 73 non-nodule regions. Accuracy levels of more than 96% for classification have been accomplished, which demonstrates the benefits of the proposed approach.
Probing into the genetic factors responsible for bladder cancer prognosis
Karunakaran, Kavinkumar Nirmala
Manoharan, Jeevitha Priya
Vidyalakshmi, Subramanian
2021Journal Article, cited 0 times
TCGA-BLCA
MicroRNAs are small non-coding RNAs that can act as oncogenic suppressors and activators. Our in-silico study aims to identify the key miRNAs and their associated mRNA targets involved in bladder cancer progression. A total of seven differentially expressed miRNAs (DEMs) were found to be common between Gene Expression Omnibus (GEO) datasets and The Cancer Genome Atlas (TCGA). The most significant DEM and its targets were validated using the TCGA patient dataset. Pathway enrichment analysis and protein-protein network generation were done for the chosen mRNAs. Kaplan-Meier survival plots were drawn for the miRNA and mRNAs. A significant down-regulation of EIF3J and an up-regulation of LYPLA1 were associated with poor prognosis in BLCA patients; hence, EIF3J is suggested as a potential drug target. To conclude, hsa-miR-138-5p may act as a promising prognostic and diagnostic biomarker for bladder cancer. Further experimental studies are required to support our results.
Oropharyngeal cancer prognosis based on clinicopathologic and quantitative imaging biomarkers with multiparametric model and machine learning methods
Aim: There is an unmet need to integrate quantitative imaging biomarkers into current risk stratification tools and to investigate relationships between various clinical characteristics, radiomics features and other clinical prognosticators for oropharyngeal cancer (OPC). Multivariate analysis and ML algorithms can be used to predict recurrence-free survival in patients with OPC. Method: Open-access clinical metadata and matched baseline contrast-enhanced computed tomography (CECT) scans were accessed for a cohort of 495 OPC patients treated between 2005 and 2012, available at the Head and Neck Cancer CT Atlas (DOI: 10.7937/K9/TCIA.2017.umz8dv6s). Cox proportional hazards (CPH) models were used to evaluate a large number of prognostic variables with respect to survival. The Kaplan-Meier method was deployed to estimate mean and median survival with 95% CIs, compared using the log-rank test. ML algorithms using random forest (RF) classifiers were used for prediction. Variables used in the models were age, gender, smoking status, smoking, TNM characteristics, AJCC staging, acks, subsite of origin, therapeutic combination, radiation dose, radiation duration, relapse-free survival and vital status. Results: The performance of the CPH and RSF models was compared in terms of Harrell's c-index (95% confidence interval); the RSF model had an error rate of 38.94%, or a c-index of 0.61, compared with a CPH c-index of 0.62, which indicates a medium-level prediction. Conclusion: ML is a promising toolset for improving prediction of oral cancer outcomes. However, the prediction is medium-level, and additional work is needed to improve its accuracy and consistency. Additional refinements in the model may provide useful inputs for improved personalized care and outcomes in HNSCC patients.
AML leukocyte classification method for small samples based on ACGAN
Zhang, C.
Zhu, J.
Biomed Tech (Berl)2024Journal Article, cited 0 times
Website
AML-Cytomorphology_LMU
Generative Adversarial Network (GAN)
data augmentation
Classification
Acute myeloid leukemia
Pathomics
Leukemia is a class of hematologic malignancies, of which acute myeloid leukemia (AML) is the most common. Screening and diagnosis of AML are performed by microscopic examination or chemical testing of images of the patient's peripheral blood smear. In smear microscopy, the ability to quickly identify, count, and differentiate different types of blood cells is critical for disease diagnosis. With the development of deep learning (DL), classification techniques based on neural networks have been applied to the recognition of blood cells. However, DL methods have high requirements for the number of valid datasets. This study aims to assess the applicability of the auxiliary classifier generative adversarial network (ACGAN) in the classification task for small samples of white blood cells. The method is trained on the TCIA dataset, and the classification accuracy is compared with two classical classifiers and the current state-of-the-art methods. The results are evaluated using accuracy, precision, recall, and F1 score. The accuracy of the ACGAN on the validation set is 97.1%, and the precision, recall, and F1 scores on the validation set are 97.5%, 97.3%, and 97.4%, respectively. In addition, ACGAN received a higher score in comparison with other advanced methods, indicating that it is competitive in classification accuracy.
Handling images of patient postures in arms up and arms down position using a biomechanical skeleton model
Teske, Hendrik
Bartelheimer, Kathrin
Bendl, Rolf
Stoiber, Eva M
Giske, Kristina
Current Directions in Biomedical Engineering2017Journal Article, cited 0 times
Website
Head-Neck Cetuximab
Algorithm Development
biomechanical model
inverse kinematics
chainmail
posture modelling
Segmentation of elongated structures processed on breast MRI for the detection of vessels
Gierlinger, Marco
Brandner, Dinah M.
Zagar, Bernhard G.
2021Journal Article, cited 0 times
ISPY1
The multi-seed region growing (MSRG) algorithm from previous work is extended to extract elongated segments from breast Magnetic Resonance Imaging (MRI) stacks. A model is created to adjust the MSRG parameters such that the elongated segments may reveal vessels that can support clinicians in their diagnosis of diseases or provide them with useful information before surgery, e.g., during a neoadjuvant therapy. The model is a pipeline of tasks and contains user-defined parameters that influence the segmentation result. A crucial task of the model relies on a skeletonization-like algorithm that collects useful information about the segments' thickness, length, etc. Length, thickness, and gradient information of the pixel intensity along the segment helps to determine whether the extracted segments have a tubular structure, which is assumed to be the case for vessels. In this work, we show how the results are derived and that the MSRG algorithm is capable of extracting vessel-like segments even from noisy MR images.
Classification of 1p/19q Status in Low-Grade Gliomas: Experiments with Radiomic Features and Ensemble-Based Machine Learning Methods
Medeiros, Tony Alexandre
Saraiva Junior, Raimundo Guimarães
Cassia, Guilherme de Souza e
Nascimento, Francisco Assis de Oliveira
Carvalho, João Luiz Azevedo de
Brazilian Archives of Biology and Technology2023Journal Article, cited 0 times
LGG-1p19qDeletion
Radiomics
Machine Learning
Ensemble learning
1p/19q co-deletion
Low grade glioma
BRAIN
Algorithm Development
Classification
Gliomas comprise the vast majority of all malignant brain tumors. Low-grade glioma patients with combined whole-arm losses of 1p and 19q chromosomes were shown to have significantly better overall survival rates compared to non-deleted patients. This work evaluates several approaches for assessment of 1p/19q status from T2-weighted magnetic resonance images, using radiomics and machine learning. Experiments were performed using images from a public database (102 codeleted, 57 non-deleted). We experimented with sets of 68 and 100 radiomic features, and with several classifiers, including support vector machine, k-nearest neighbors, stochastic gradient descent, logistic regression, decision tree, Gaussian naive Bayes, and linear discriminant analysis. We also experimented with several ensemble-based methods, including four boosting-based classifiers, random forest, extra-trees, and bagging. The performance of these methods was compared using various metrics. Our best results were achieved using a bagging ensemble estimator based on the decision tree classifier, using only texture-based radiomics features. Compared to other works that used the same database, this approach provided higher sensitivity. It also achieved higher sensitivity than that provided by neurosurgeons and neuroradiologists analyzing the same images. We also show that including radiomic features associated with first order statistics and shape does not improve the performance of the classifiers, and in many cases worsens it. The molecular assessment of brain tumors through biopsies is an invasive procedure, and is subject to sampling errors. Therefore, the techniques presented in this work have strong potential for aiding in better clinical, surgical, and therapeutic decision-making.
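A minimal sketch of the best-performing setup described above: a bagging ensemble of decision trees over texture-only radiomic features, with cross-validated sensitivity as the score. The feature matrix is a stand-in; the study's exact validation protocol may differ.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(159, 68)          # stand-in texture radiomics (68 features)
y = np.array([1] * 102 + [0] * 57)   # 1 = 1p/19q codeleted, 0 = non-deleted

# `estimator=` requires scikit-learn >= 1.2 (earlier versions: base_estimator=)
clf = BaggingClassifier(estimator=DecisionTreeClassifier(),
                        n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="recall").mean())  # sensitivity
```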
Multisite Image Data Collection and Management Using the RSNA Image Sharing Network
Erickson, Bradley J
Fajnwaks, Patricio
Langer, Steve G
Perry, John
Translational Oncology2014Journal Article, cited 3 times
Website
Algorithm Development
Image de-identification
The execution of a multisite trial frequently includes image collection. The Clinical Trials Processor (CTP) makes removal of protected health information highly reliable. It also provides reliable transfer of images to a central review site. Trials using central review of imaging should consider using CTP for handling image data when a multisite trial is being designed.
The Quantitative Imaging Network: NCI's Historical Perspective and Planned Goals
Clarke, Laurence P.
Nordstrom, Robert J.
Zhang, Huiming
Tandon, Pushpa
Zhang, Yantian
Redmond, George
Farahani, Keyvan
Kelloff, Gary
Henderson, Lori
Shankar, Lalitha
Deye, James
Capala, Jacek
Jacobs, Paula
Translational Oncology2014Journal Article, cited 0 times
QIN
Three-Dimensional Planning Tool for Breast Conserving Surgery: A Technological Review
Oliveira, Sara P
Morgado, Pedro
Gouveia, Pedro F
Teixeira, João F
Bessa, Silvia
Monteiro, João P
Zolfagharnasab, Hooshiar
Reis, Marta
Silva, Nuno L
Veiga, Diana
Cardoso, Maria J
Oliveira, Helder P
Ferreira, Manuel João
2018Journal Article, cited 0 times
CBIS-DDSM
Breast cancer is one of the most common malignancies affecting women worldwide. However, although its incidence has increased, the mortality rate has significantly decreased. The primary concern in any cancer treatment is the oncological outcome but, in the case of breast cancer, the aesthetic result of surgery has become an important quality indicator for breast cancer patients. In this sense, an adequate surgical planning and prediction tool would empower the patient regarding the treatment decision process, enabling better communication between the surgeon and the patient and a better understanding of the impact of each surgical option. To develop such a tool, it is necessary to create a complete 3D model of the breast, integrating both inner and outer breast data. In this review, we thoroughly explore and review the major existing works that address, directly or not, the technical challenges involved in the development of a 3D software planning tool in the field of breast conserving surgery.
Learning-based parameter prediction for quality control in three-dimensional medical image compression
Hou, Y. X.
Ren, Z.
Tao, Y. B.
Chen, W.
Frontiers of Information Technology & Electronic Engineering2021Journal Article, cited 0 times
Website
LIDC-IDRI
RIDER Lung CT
LungCT-Diagnosis
REMBRANDT
TCGA-HNSC
Imaging Feature
medical image compression
high efficiency video coding (hevc)
quality control
learning-based
Quality control is of vital importance in compressing three-dimensional (3D) medical imaging data. Optimal compression parameters need to be determined based on the specific quality requirement. In high efficiency video coding (HEVC), regarded as the state-of-the-art compression tool, the quantization parameter (QP) plays a dominant role in controlling quality. The direct application of a video-based scheme in predicting the ideal parameters for 3D medical image compression cannot guarantee satisfactory results. In this paper we propose a learning-based parameter prediction scheme to achieve efficient quality control. Its kernel is a support vector regression (SVR) based learning model that is capable of predicting the optimal QP from both video-based and structural image features extracted directly from raw data, avoiding time-consuming processes such as pre-encoding and iteration, which are often needed in existing techniques. Experimental results on several datasets verify that our approach outperforms current video-based quality control methods.
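A minimal sketch of the core idea, assuming per-volume features and offline-determined optimal QPs are available: a support vector regression maps features directly to a predicted QP, with no pre-encoding or iteration at inference time. All names and values are placeholders.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.random.rand(200, 12)           # video-based + structural image features
qp = np.random.uniform(18, 40, 200)   # optimal QP determined offline per volume

reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
reg.fit(X, qp)
predicted_qp = reg.predict(X[:1])     # direct prediction, no pre-encoding
print(predicted_qp)
```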
Development and Validation of Pre- and Post-Operative Models to Predict Recurrence After Resection of Solitary Hepatocellular Carcinoma: A Multi-Institutional Study
Wu, Ming-Yu
Qiao, Qian
Wang, Ke
Ji, Gu-Wei
Cai, Bing
Li, Xiang-Cheng
Cancer Manag Res2020Journal Article, cited 1 times
Website
LIVER
TCGA-LIHC
Background: The ideal candidates for resection are patients with solitary hepatocellular carcinoma (HCC); however, the postoperative recurrence rate remains high. We aimed to establish prognostic models to predict HCC recurrence based on readily accessible clinical parameters and multi-institutional databases. Patients and Methods: A total of 485 patients undergoing curative resection for solitary HCC were recruited from two independent institutions and the Cancer Imaging Archive database. We randomly divided the patients into training (n=323) and validation cohorts (n=162). Two models were developed: one using pre-operative and one using pre- and post-operative parameters. Performance of the models was compared with staging systems. Results: Using multivariable analysis, albumin-bilirubin grade, serum alpha-fetoprotein and tumor size were selected into the pre-operative model; albumin-bilirubin grade, serum alpha-fetoprotein, tumor size, microvascular invasion and cirrhosis were selected into the postoperative model. The two models exhibited better discriminative ability (concordance index: 0.673-0.728) and lower prediction error (integrated Brier score: 0.169-0.188) than currently used staging systems for predicting recurrence in both cohorts. Both models stratified patients into low- and high-risk subgroups of recurrence with distinct recurrence patterns. Conclusion: The two models with corresponding user-friendly calculators are useful tools to predict recurrence before and after resection that may facilitate individualized management of solitary HCC.
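A hedged sketch of the modelling approach using the lifelines library: a Cox proportional hazards model of recurrence over the pre-operative variables named above, reporting a concordance index. The synthetic data frame is a stand-in for the study's cohort.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "albi_grade": rng.integers(1, 4, 323),     # albumin-bilirubin grade
    "afp": rng.lognormal(3, 1, 323),           # serum alpha-fetoprotein
    "tumor_size": rng.uniform(1, 10, 323),
    "months_to_event": rng.exponential(30, 323),
    "recurrence": rng.integers(0, 2, 323),
})

cph = CoxPHFitter().fit(df, duration_col="months_to_event", event_col="recurrence")
print("concordance index:", cph.concordance_index_)
```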
Cell Nuclear Segmentation of B-ALL Images Based on MSFF-SegNeXt
Purpose: The diagnosis and treatment of B-Lineage Acute Lymphoblastic Leukemia (B-ALL) typically rely on cytomorphologic analysis of bone marrow smears. However, traditional morphological analysis methods require manual operation, leading to challenges such as high subjectivity and low efficiency. Accurate segmentation of individual cell nuclei is crucial for obtaining detailed morphological characterization data, thereby improving the objectivity and consistency of diagnoses.
Patients and Methods: To enhance the accuracy of nucleus segmentation of lymphoblastoid cells in B-ALL bone marrow smear images, the Multi-scale Feature Fusion-SegNeXt (MSFF-SegNeXt) model is hereby proposed, building upon the SegNeXt framework. This model introduces a novel multi-scale feature fusion technique that effectively integrates edge feature maps with feature representations across different scales. Integrating the Edge-Guided Attention (EGA) module in the decoder further enhances the segmentation process by focusing on intricate edge details. Additionally, Hamburger structures are strategically incorporated at various stages of the network to enhance feature expression.
Results: These combined innovations enable MSFF-SegNeXt to achieve superior segmentation performance on the SN-AM dataset, as evidenced by an accuracy of 0.9659 and a Dice coefficient of 0.9422.
Conclusion: The results show that MSFF-SegNeXt outperforms existing models in managing the complexities of cell nucleus segmentation, particularly in capturing detailed edge structures. This advancement offers a robust and reliable solution for subsequent morphological analysis of B-ALL single cells.
Keywords: single nucleus segmentation, B-ALL image, bone marrow smear, Edge-Guided Attention, multi-scale feature fusion
Optimized Deformable Model-based Segmentation and Deep Learning for Lung Cancer Classification
Shetty, M. V.
D, J.
Tunga, S.
J Med Invest2022Journal Article, cited 0 times
Website
LIDC-IDRI
Deep Learning
Algorithm Development
LUNG
Convolutional Neural Network (CNN)
Lung cancer is one of the deadliest diseases and causes many deaths worldwide. Early detection and treatment are necessary to save lives. It is very difficult for doctors to interpret and identify diseases using imaging modalities alone; therefore, computer aided diagnosis can assist doctors in the early and accurate detection of cancer. In the proposed work, optimized deformable models and deep learning techniques are applied for the detection and classification of lung cancer. This method involves pre-processing, lung lobe segmentation, lung cancer segmentation, data augmentation and lung cancer classification. Median filtering is considered for pre-processing, and Bayesian fuzzy clustering is applied for segmenting the lung lobes. The lung cancer segmentation is carried out using a Water Cycle Sea Lion Optimization (WSLnO) based deformable model. The data augmentation process is used to augment the size of the segmented region in order to perform better classification. The lung cancer classification is done effectively using a Shepard Convolutional Neural Network (ShCNN), which is trained by the WSLnO algorithm. The proposed WSLnO algorithm is designed by incorporating the Water cycle algorithm (WCA) and the Sea Lion Optimization (SLnO) algorithm. The performance of the proposed technique is analyzed with various performance metrics, attaining accuracy, sensitivity, specificity and average segmentation accuracy of 0.9303, 0.9123, 0.9133 and 0.9091, respectively. J. Med. Invest. 69 : 244-255, August, 2022.
Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy: Deep Learning Algorithm Development and Validation Study
Nikolov, Stanislav
Blackwell, Sam
Zverovitch, Alexei
Mendes, Ruheena
Livne, Michelle
De Fauw, Jeffrey
Patel, Yojan
Meyer, Clemens
Askham, Harry
Romera-Paredes, Bernadino
Kelly, Christopher
Karthikesalingam, Alan
Chu, Carlton
Carnell, Dawn
Boon, Cheng
D'Souza, Derek
Moinuddin, Syed Ali
Garie, Bethany
McQuinlan, Yasmin
Ireland, Sarah
Hampton, Kiarna
Fuller, Krystle
Montgomery, Hugh
Rees, Geraint
Suleyman, Mustafa
Back, Trevor
Hughes, Cian Owen
Ledsam, Joseph R
Ronneberger, Olaf
J Med Internet Res2021Journal Article, cited 0 times
Website
Head-Neck Cetuximab
TCGA-HNSC
U-net
Convolutional neural networks (CNN)
Radiation Therapy
Segmentation
Algorithm Development
BACKGROUND: Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires manual time to delineate radiosensitive organs at risk. This planning process can delay treatment while also introducing interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying, and achieving expert performance remain. OBJECTIVE: Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice. METHODS: The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice and with both segmentations taken from clinical practice and segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ at risk definitions. RESULTS: We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced surface Dice similarity coefficient, a new metric for the comparison of organ delineation, to quantify the deviation between organ at risk surface contours rather than volumes, better reflecting the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets, reflecting different centers and countries to model training. CONCLUSIONS: Deep learning is an effective and clinically applicable technique for the segmentation of the head and neck anatomy for radiotherapy. With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
Accuracy of emphysema quantification performed with reduced numbers of CT sections
Pilgram, Thomas K
Quirk, James D
Bierhals, Andrew J
Yusen, Roger D
Lefrak, Stephen S
Cooper, Joel D
Gierada, David S
American Journal of Roentgenology2010Journal Article, cited 8 times
Website
NLST
lung
LDCT
Quantitative Multiparametric MRI Features and PTEN Expression of Peripheral Zone Prostate Cancer: A Pilot Study
McCann, Stephanie M
Jiang, Yulei
Fan, Xiaobing
Wang, Jianing
Antic, Tatjana
Prior, Fred
VanderWeele, David
Oto, Aytekin
American Journal of Roentgenology2016Journal Article, cited 11 times
Website
Radiogenomics
Lung Cancers Manifesting as Part-Solid Nodules in the National Lung Screening Trial
Yip, Rowena
Henschke, Claudia I
Xu, Dong Ming
Li, Kunwei
Jirapatnakul, Artit
Yankelevitz, David F
American Journal of Roentgenology2017Journal Article, cited 13 times
Website
National Lung Screening Trial (NLST)
LUNG
LDCT
Cancer Screening
JOURNAL CLUB: Computer-Aided Detection of Lung Nodules on CT With a Computerized Pulmonary Vessel Suppressed Function
Lo, ShihChung B
Freedman, Matthew T
Gillis, Laura B
White, Charles S
Mun, Seong K
American Journal of Roentgenology2018Journal Article, cited 4 times
Website
NLST
lung
LDCT
CAD
Unenhanced CT Texture Analysis of Clear Cell Renal Cell Carcinomas: A Machine Learning-Based Study for Predicting Histopathologic Nuclear Grade
Kocak, Burak
Durmaz, Emine Sebnem
Ates, Ece
Kaya, Ozlem Korkmaz
Kilickesmez, Ozgur
American Journal of Roentgenology2019Journal Article, cited 0 times
Website
TCGA-KIRC
Machine learning
Radiomics
OBJECTIVE: The purpose of this study is to investigate the predictive performance of machine learning (ML)-based unenhanced CT texture analysis in distinguishing low (grades I and II) and high (grades III and IV) nuclear grade clear cell renal cell carcinomas (RCCs). MATERIALS AND METHODS: For this retrospective study, 81 patients with clear cell RCC (56 high and 25 low nuclear grade) were included from a public database. Using 2D manual segmentation, 744 texture features were extracted from unenhanced CT images. Dimension reduction was done in three consecutive steps: reproducibility analysis by two radiologists, collinearity analysis, and feature selection. Models were created using artificial neural network (ANN) and binary logistic regression, with and without synthetic minority oversampling technique (SMOTE), and were validated using 10-fold cross-validation. The reference standard was histopathologic nuclear grade (low vs high). RESULTS: Dimension reduction steps yielded five texture features for the ANN and six for the logistic regression algorithm. None of clinical variables was selected. ANN alone and ANN with SMOTE correctly classified 81.5% and 70.5%, respectively, of clear cell RCCs, with AUC values of 0.714 and 0.702, respectively. The logistic regression algorithm alone and with SMOTE correctly classified 75.3% and 62.5%, respectively, of the tumors, with AUC values of 0.656 and 0.666, respectively. The ANN performed better than the logistic regression (p < 0.05). No statistically significant difference was present between the model performances created with and without SMOTE (p > 0.05). CONCLUSION: ML-based unenhanced CT texture analysis using ANN can be a promising noninvasive method in predicting the nuclear grade of clear cell RCCs.
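A minimal sketch of one compared pipeline: an artificial neural network on the selected texture features with SMOTE applied inside cross-validation (imbalanced-learn's pipeline ensures oversampling touches only the training folds). Features and settings are placeholders.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X = np.random.rand(81, 5)           # stand-in for the 5 selected texture features
y = np.array([1] * 56 + [0] * 25)   # high vs low nuclear grade

# imblearn's Pipeline applies SMOTE to training folds only during CV
ann = Pipeline([("scale", StandardScaler()),
                ("smote", SMOTE(random_state=0)),
                ("clf", MLPClassifier(max_iter=1000, random_state=0))])
print(cross_val_score(ann, X, y, cv=10, scoring="roc_auc").mean())
```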
Reliability of Single-Slice–Based 2D CT Texture Analysis of Renal Masses: Influence of Intra- and Interobserver Manual Segmentation Variability on Radiomic Feature Reproducibility
Kocak, Burak
Durmaz, Emine Sebnem
Kaya, Ozlem Korkmaz
Ates, Ece
Kilickesmez, Ozgur
AJR Am J Roentgenol2019Journal Article, cited 0 times
Website
TCGA-KIRC
Radiomics
Segmentation
KIDNEY
OBJECTIVE. The objective of our study was to investigate the potential influence of intra- and interobserver manual segmentation variability on the reliability of single-slice-based 2D CT texture analysis of renal masses. MATERIALS AND METHODS. For this retrospective study, 30 patients with clear cell renal cell carcinoma were included from a public database. For intra- and interobserver analyses, three radiologists with varying degrees of experience segmented the tumors from unenhanced CT and corticomedullary phase contrast-enhanced CT (CECT) in different sessions. Each radiologist was blind to the image slices selected by other radiologists and him- or herself in the previous session. A total of 744 texture features were extracted from original, filtered, and transformed images. The intraclass correlation coefficient was used for reliability analysis. RESULTS. In the intraobserver analysis, the rates of features with good to excellent reliability were 84.4-92.2% for unenhanced CT and 85.5-93.1% for CECT. Considering the mean rates of unenhanced CT and CECT, having high experience resulted in better reliability rates in terms of the intraobserver analysis. In the interobserver analysis, the rates were 76.7% for unenhanced CT and 84.9% for CECT. The gray-level cooccurrence matrix and first-order feature groups yielded higher good to excellent reliability rates on both unenhanced CT and CECT. Filtered and transformed images resulted in more features with good to excellent reliability than the original images did on both unenhanced CT and CECT. CONCLUSION. Single-slice-based 2D CT texture analysis of renal masses is sensitive to intra- and interobserver manual segmentation variability. Therefore, it may lead to nonreproducible results in radiomic analysis unless a reliability analysis is considered in the workflow.
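A minimal sketch of the reliability analysis step: per-feature intraclass correlation coefficients across readers, shown here with the pingouin library as an assumption; any ICC implementation would serve.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Long-format table: one value of a single texture feature per (tumor, reader)
df = pd.DataFrame({
    "tumor": np.repeat(np.arange(30), 3),
    "reader": np.tile(["R1", "R2", "R3"], 30),
    "feature_value": np.random.rand(90),
})

icc = pg.intraclass_corr(data=df, targets="tumor", raters="reader",
                         ratings="feature_value")
print(icc[["Type", "ICC"]])   # ICC variants; ICC2 is common for this design
```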
Assessment of Renal Cell Carcinoma by Texture Analysis in Clinical Practice: A Six-Site, Six-Platform Analysis of Reliability
Doshi, A. M.
Tong, A.
Davenport, M. S.
Khalaf, A.
Mresh, R.
Rusinek, H.
Schieda, N.
Shinagare, A.
Smith, A. D.
Thornhill, R.
Vikram, R.
Chandarana, H.
AJR Am J Roentgenol2021Journal Article, cited 0 times
Website
TCGA-KIRC
Background: Multiple commercial and open-source software applications are available for texture analysis. Nonstandard techniques can cause undesirable variability that impedes result reproducibility and limits clinical utility. Objective: The purpose of this study is to measure agreement of texture metrics extracted by 6 software packages. Methods: This retrospective study included 40 renal cell carcinomas with contrast-enhanced CT from The Cancer Genome Atlas and Imaging Archive. Images were analyzed by 7 readers at 6 sites. Each reader used 1 of 6 software packages to extract commonly studied texture features. Inter- and intra-reader agreement for segmentation was assessed with intra-class correlation coefficients. First-order (available in 6 packages) and second-order (available in 3 packages) texture features were compared between software pairs using Pearson correlation. Results: Inter- and intra-reader agreement was excellent (ICC 0.93-1). First-order feature correlations were strong (r > 0.8, p < 0.001) between 75% (21/28) of software pairs for mean and standard deviation, 48% (10/21) for entropy, 29% (8/28) for skewness, and 25% (7/28) for kurtosis. Of 15 second-order features, only co-occurrence matrix correlation, grey-level non-uniformity, and run-length non-uniformity showed strong correlation between software packages (0.90-1, p < 0.001). Conclusion: Variability in first- and second-order texture features was common across software configurations and produced inconsistent results. Standardized algorithms and reporting methods are needed before texture data can be reliably used for clinical applications. Clinical Impact: It is important to be aware of variability related to texture software processing and configuration when reporting and comparing outputs.
Deep Learning Models for Abdominal CT Organ Segmentation in Children: Development and Validation in Internal and Heterogeneous Public Datasets
Somasundaram, E.
Taylor, Z.
Alves, V. V.
Qiu, L.
Fortson, B. L.
Mahalingam, N.
Dudley, J. A.
Li, H.
Brady, S. L.
Trout, A. T.
Dillman, J. R.
AJR Am J Roentgenol2024Journal Article, cited 0 times
Website
Pediatric-CT-SEG
Humans
*Deep Learning
Child
Adolescent
Retrospective Studies
*Pancreas/diagnostic imaging
*Tomography
X-Ray Computed/methods
*Spleen/diagnostic imaging
Male
Child
Preschool
Female
Infant
*Liver/diagnostic imaging
Radiography
Abdominal/methods
Datasets as Topic
Infant
Newborn
children
deep learning
liver
segmentation
spleen
BACKGROUND. Deep learning abdominal organ segmentation algorithms have shown excellent results in adults; validation in children is sparse. OBJECTIVE. The purpose of this article is to develop and validate deep learning models for liver, spleen, and pancreas segmentation on pediatric CT examinations. METHODS. This retrospective study developed and validated deep learning models for liver, spleen, and pancreas segmentation using 1731 CT examinations (1504 training, 221 testing), derived from three internal institutional pediatric (age ≤ 18 years) datasets (n = 483) and three public datasets comprising pediatric and adult examinations with various pathologies (n = 1248). Three deep learning model architectures (SegResNet, DynUNet, and SwinUNETR) from the Medical Open Network for Artificial Intelligence (MONAI) framework underwent training using native training (NT), relying solely on institutional datasets, and transfer learning (TL), incorporating pretraining on public datasets. For comparison, TotalSegmentator, a publicly available segmentation model, was applied to test data without further training. Segmentation performance was evaluated using mean Dice similarity coefficient (DSC), with manual segmentations as reference. RESULTS. For internal pediatric data, the DSC for TotalSegmentator, NT models, and TL models for normal liver was 0.953, 0.964-0.965, and 0.965-0.966, respectively; for normal spleen, 0.914, 0.942-0.945, and 0.937-0.945; for normal pancreas, 0.733, 0.774-0.785, and 0.775-0.786; and for pancreas with pancreatitis, 0.703, 0.590-0.640, and 0.667-0.711. For public pediatric data, the DSC for TotalSegmentator, NT models, and TL models for liver was 0.952, 0.871-0.908, and 0.941-0.946, respectively; for spleen, 0.905, 0.771-0.827, and 0.897-0.926; and for pancreas, 0.700, 0.577-0.648, and 0.693-0.736. For public primarily adult data, the DSC for TotalSegmentator, NT models, and TL models for liver was 0.991, 0.633-0.750, and 0.926-0.952, respectively; for spleen, 0.983, 0.569-0.604, and 0.923-0.947; and for pancreas, 0.909, 0.148-0.241, and 0.699-0.775. The DynUNet TL model was selected as the best-performing NT or TL model considering DSC values across organs and test datasets and was made available as an open-source MONAI bundle (https://github.com/cchmc-dll/pediatric_abdominal_segmentation_bundle.git). CONCLUSION. TL models trained on heterogeneous public datasets and fine-tuned using institutional pediatric data outperformed internal NT models and TotalSegmentator across internal and external pediatric test data. Segmentation performance was better in liver and spleen than in pancreas. CLINICAL IMPACT. The selected model may be used for various volumetry applications in pediatric imaging.
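The DSC evaluation could look like the following MONAI-based sketch on synthetic one-hot masks; this is an illustration of the metric, not the authors' evaluation code.

```python
import torch
from monai.metrics import DiceMetric

dice = DiceMetric(include_background=False, reduction="mean")

# One-hot (batch, channel, H, W, D) masks: channel 0 background, channel 1 organ
pred = torch.zeros(1, 2, 64, 64, 32)
pred[:, 1, 10:40, 10:40, 5:20] = 1
pred[:, 0] = 1 - pred[:, 1]
truth = torch.zeros_like(pred)
truth[:, 1, 12:42, 12:42, 6:22] = 1
truth[:, 0] = 1 - truth[:, 1]

dice(y_pred=pred, y=truth)
print("mean DSC:", dice.aggregate().item())
```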
Radiomics features based on T2-weighted fluid-attenuated inversion recovery MRI predict the expression levels of CD44 and CD133 in lower-grade gliomas
Wang, Z.
Tang, X.
Wu, J.
Zhang, Z.
He, K.
Wu, D.
Chen, S.
Xiao, X.
Future Oncol2021Journal Article, cited 0 times
Website
TCGA-LGG
Radiogenomics
Magnetic Resonance Imaging (MRI)
CD133
CD44
T2-weighted
lower-grade gliomas
prediction model
Radiomics
Objective: To verify the association between CD44 and CD133 expression levels and the prognosis of patients with lower-grade gliomas (LGGs), and to construct radiomic models to predict those two genes' expression levels before surgery. Materials & methods: Genomic data of patients with LGG and the corresponding T2-weighted fluid-attenuated inversion recovery images were downloaded from The Cancer Genome Atlas and The Cancer Imaging Archive, which were utilized for prognosis analysis, radiomic feature extraction and model construction, respectively. Results & conclusion: CD44 and CD133 expression levels in LGG can significantly affect the prognosis of patients with LGG. Based on the T2-weighted fluid-attenuated inversion recovery images, the radiomic features can effectively predict the expression levels of CD44 and CD133 before surgery.
Intuitive Error Space Exploration of Medical Image Data in Clinical Daily Routine
Predicting, Analyzing and Communicating Outcomes of COVID-19 Hospitalizations with Medical Images and Clinical Data
Stritzel, Oliver
Raidou, Renata Georgia
Eurographics2022Journal Article, cited 0 times
Website
COVID-19-NY-SBU
Computer Aided Diagnosis (CADx)
We propose PACO, a visual analytics framework to support the prediction, analysis, and communication of COVID-19 hospitalization outcomes. Although several real-world data sets about COVID-19 are openly available, most of the current research focuses on the detection of the disease. Until now, no previous work exists on combining insights from medical image data with knowledge extracted from clinical data to predict the likelihood of an intensive care unit (ICU) visit, ventilation, or death. Moreover, the available literature has not yet focused on communicating such results to the broader society. To support the prediction, analysis and communication of the outcomes of COVID-19 hospitalizations on the basis of a publicly available data set comprising both electronic health data and medical image data [SSP∗21], we conduct the following three steps: (1) automated segmentation of the available X-ray images and processing of clinical data, (2) development of a model for the prediction of disease outcomes and a comparison to state-of-the-art prediction scores for both data sources, i.e., medical images and clinical data, and (3) communication of outcomes to two different groups (i.e., clinical experts and the general population) through interactive dashboards. Preliminary results indicate that the prediction, analysis and communication of hospitalization outcomes is a significant topic in the context of COVID-19 prevention.
Tumor Metabolic Features Identified by (18)F-FDG PET Correlate with Gene Networks of Immune Cell Microenvironment in Head and Neck Cancer
Na, Kwon Joong
Choi, Hongyoon
Journal of Nuclear Medicine2018Journal Article, cited 18 times
Website
TCGA-HNSC
Radiogenomics
Positron emission tomography (PET)
Head and neck squamous cell carcinoma (HNSCC)
HEAD AND NECK
The importance of (18)F-FDG PET in imaging head and neck squamous cell carcinoma (HNSCC) has grown in recent decades. Because PET has prognostic value, and provides functional and molecular information in HNSCC, the genetic and biologic backgrounds associated with PET parameters are of great interest. Here, as a systems biology approach, we aimed to investigate gene networks associated with tumor metabolism and their biologic function using RNA sequence and (18)F-FDG PET data. Methods: Using RNA sequence data of HNSCC downloaded from The Cancer Genome Atlas data portal, we constructed a gene coexpression network. PET parameters including lesion-to-blood-pool ratio, metabolic tumor volume, and total lesion glycolysis were calculated. The Pearson correlation test was performed between the module eigengene (the first principal component of a module's expression profile) and the PET parameters. The significantly correlated module was functionally annotated with gene ontology terms, and its hub genes were identified. Survival analysis of the significantly correlated module was performed. Results: We identified 9 coexpression network modules from the preprocessed RNA sequence data. A network module was significantly correlated with total lesion glycolysis as well as maximum and mean (18)F-FDG uptake. The expression profiles of hub genes of the network were inversely correlated with (18)F-FDG uptake. The significantly annotated gene ontology terms of the module were associated with immune cell activation and aggregation. The module demonstrated significant association with overall survival, and the group with higher module eigengene showed better survival than the other groups with statistical significance (P = 0.022). Conclusion: We showed that a gene network that accounts for the immune cell microenvironment was associated with (18)F-FDG uptake as well as prognosis in HNSCC. Our result supports the idea that competition for glucose between cancer cells and immune cells plays an important role in cancer progression associated with hypermetabolic features. In the future, PET parameters could be used as a surrogate marker of HNSCC for estimating the molecular status of the immune cell microenvironment.
PARaDIM - A PHITS-based Monte Carlo tool for internal dosimetry with tetrahedral mesh computational phantoms
Carter, L. M.
Crawford, T. M.
Sato, T.
Furuta, T.
Choi, C.
Kim, C. H.
Brown, J. L.
Bolch, W. E.
Zanzonico, P. B.
Lewis, J. S.
J Nucl Med2019Journal Article, cited 0 times
Website
Anti-PD-1_MELANOMA
Radiation Dosage
Radiation Therapy
Mesh-type and voxel-based computational phantoms comprise the current state of the art for internal dose assessment via Monte Carlo simulations, but they excel in different aspects, with mesh-type phantoms offering advantages over their voxel counterparts in terms of flexibility and realistic representation of detailed patient- or subject-specific anatomy. We have developed PARaDIM, a freeware application for implementing tetrahedral mesh-type phantoms in absorbed dose calculations via the Particle and Heavy Ion Transport code System (PHITS). It considers all medically relevant radionuclides, including alpha, beta, gamma, positron, and Auger/conversion electron emitters, and handles calculation of mean dose to individual regions, as well as 3D dose distributions for visualization and analysis in a variety of medical imaging software packages. This work describes the development of PARaDIM, documents the measures taken to test and validate its performance, and presents examples to illustrate its uses. Methods: Human, small-animal, and cell-level dose calculations were performed with PARaDIM, and the results were compared with those of widely accepted dosimetry programs and literature data. Several tetrahedral phantoms were developed or adapted using computer-aided modeling techniques for these comparisons. Results: For human dose calculations, agreement of PARaDIM with OLINDA 2.0 was good (within 10%-20% for most organs) despite geometric differences among the phantoms tested. Agreement with MIRDcell for cell-level S-value calculations was within 5% in most cases. Conclusion: PARaDIM extends the use of Monte Carlo dose calculations to the broader community in nuclear medicine by providing a user-friendly graphical user interface for calculation setup and execution. PARaDIM leverages the enhanced anatomical realism provided by advanced computational reference phantoms or bespoke image-derived phantoms to enable improved assessments of radiation doses in a variety of radiopharmaceutical use cases, research, and preclinical development.
Need for Objective Task-Based Evaluation of Image Segmentation Algorithms for Quantitative PET: A Study with ACRIN 6668/RTOG 0235 Multicenter Clinical Trial Data
Liu, Z.
Mhlanga, J. C.
Xia, H.
Siegel, B. A.
Jha, A. K.
J Nucl Med2024Journal Article, cited 0 times
Website
ACRIN-NSCLC-FDG-PET
ACRIN 6668
artificial intelligence
deep learning
multicenter clinical trial
quantitative imaging
segmentation
task-based evaluation
Reliable performance of PET segmentation algorithms on clinically relevant tasks is required for their clinical translation. However, these algorithms are typically evaluated using figures of merit (FoMs) that are not explicitly designed to correlate with clinical task performance. Such FoMs include the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the Hausdorff distance (HD). The objective of this study was to investigate whether evaluating PET segmentation algorithms using these task-agnostic FoMs yields interpretations consistent with evaluation on clinically relevant quantitative tasks. Methods: We conducted a retrospective study to assess the concordance in the evaluation of segmentation algorithms using the DSC, JSC, and HD and on the tasks of estimating the metabolic tumor volume (MTV) and total lesion glycolysis (TLG) of primary tumors from PET images of patients with non-small cell lung cancer. The PET images were collected from the American College of Radiology Imaging Network 6668/Radiation Therapy Oncology Group 0235 multicenter clinical trial data. The study was conducted in 2 contexts: (1) evaluating conventional segmentation algorithms, namely those based on thresholding (SUV(max)40% and SUV(max)50%), boundary detection (Snakes), and stochastic modeling (Markov random field-Gaussian mixture model); (2) evaluating the impact of network depth and loss function on the performance of a state-of-the-art U-net-based segmentation algorithm. Results: Evaluation of conventional segmentation algorithms based on the DSC, JSC, and HD showed that SUV(max)40% significantly outperformed SUV(max)50%. However, SUV(max)40% yielded lower accuracy on the tasks of estimating MTV and TLG, with a 51% and 54% increase, respectively, in the ensemble normalized bias. Similarly, the Markov random field-Gaussian mixture model significantly outperformed Snakes on the basis of the task-agnostic FoMs but yielded a 24% increased bias in estimated MTV. For the U-net-based algorithm, our evaluation showed that although the network depth did not significantly alter the DSC, JSC, and HD values, a deeper network yielded substantially higher accuracy in the estimated MTV and TLG, with a decreased bias of 91% and 87%, respectively. Additionally, whereas there was no significant difference in the DSC, JSC, and HD values for different loss functions, up to a 73% and 58% difference in the bias of the estimated MTV and TLG, respectively, existed. Conclusion: Evaluation of PET segmentation algorithms using task-agnostic FoMs could yield findings discordant with evaluation on clinically relevant quantitative tasks. This study emphasizes the need for objective task-based evaluation of image segmentation algorithms for quantitative PET.
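The contrast this abstract draws between task-agnostic and task-based figures of merit is easy to reproduce in a few lines. The sketch below uses synthetic masks and a synthetic SUV map, not the study's data, and assumes a 4 x 4 x 4 mm voxel grid (0.064 mL per voxel); it computes DSC and JSC alongside the bias in estimated MTV and TLG, showing how a segmentation can score well on overlap yet still bias the clinical quantities.

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two boolean masks
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    # Jaccard similarity coefficient between two boolean masks
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def mtv_ml(mask, voxel_ml):
    # metabolic tumor volume: number of segmented voxels times voxel volume
    return mask.sum() * voxel_ml

def tlg(mask, suv, voxel_ml):
    # total lesion glycolysis: mean SUV inside the mask times MTV
    return suv[mask].mean() * mtv_ml(mask, voxel_ml)

rng = np.random.default_rng(0)
suv = rng.gamma(2.0, 2.0, size=(32, 32, 32))   # synthetic SUV map
truth = np.zeros((32, 32, 32), dtype=bool)
truth[10:20, 10:20, 10:20] = True
pred = np.zeros_like(truth)
pred[10:20, 10:20, 9:21] = True                # slightly over-segmented

voxel_ml = 0.064                               # assumed 4x4x4 mm voxels
print("DSC:", dice(truth, pred), " JSC:", jaccard(truth, pred))
print("MTV bias (mL):", mtv_ml(pred, voxel_ml) - mtv_ml(truth, voxel_ml))
print("TLG bias:", tlg(pred, suv, voxel_ml) - tlg(truth, suv, voxel_ml))
```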
Impact of (18)F-FDG PET Intensity Normalization on Radiomic Features of Oropharyngeal Squamous Cell Carcinomas and Machine Learning-Generated Biomarkers
Haider, S. P.
Zeevi, T.
Sharaf, K.
Gross, M.
Mahajan, A.
Kann, B. H.
Judson, B. L.
Prasad, M. L.
Burtness, B.
Aboian, M.
Canis, M.
Reichel, C. A.
Baumeister, P.
Payabvash, S.
J Nucl Med2024Journal Article, cited 0 times
Website
TCGA-HNSC
HNSCC
Head-Neck-PET-CT
Radiomics-Tumor-Phenotypes
Humans
*Machine Learning
*Oropharyngeal Neoplasms/diagnostic imaging
*Fluorodeoxyglucose F18
Male
Female
Middle Aged
Positron-Emission Tomography/methods
Image Processing
Computer-Assisted/methods
Aged
Carcinoma
Squamous Cell/diagnostic imaging
Biomarkers
Tumor/metabolism
Reproducibility of Results
Radiomics
Pet
Suv
machine learning
normalization
We aimed to investigate the effects of (18)F-FDG PET voxel intensity normalization on radiomic features of oropharyngeal squamous cell carcinoma (OPSCC) and machine learning-generated radiomic biomarkers. Methods: We extracted 1,037 (18)F-FDG PET radiomic features quantifying the shape, intensity, and texture of 430 OPSCC primary tumors. The reproducibility of individual features across 3 intensity-normalized images (body-weight SUV, reference tissue activity ratio to lentiform nucleus of brain and cerebellum) and the raw PET data was assessed using an intraclass correlation coefficient (ICC). We investigated the effects of intensity normalization on the features' utility in predicting the human papillomavirus (HPV) status of OPSCCs in univariate logistic regression, receiver-operating-characteristic analysis, and extreme-gradient-boosting (XGBoost) machine-learning classifiers. Results: Of 1,037 features, a high (ICC ≥ 0.90), medium (0.90 > ICC ≥ 0.75), and low (ICC < 0.75) degree of reproducibility across normalization methods was attained in 356 (34.3%), 608 (58.6%), and 73 (7%) features, respectively. In univariate analysis, features from the PET normalized to the lentiform nucleus had the strongest association with HPV status, with 865 of 1,037 (83.4%) significant features after multiple testing corrections and a median area under the receiver-operating-characteristic curve (AUC) of 0.65 (interquartile range, 0.62-0.68). Similar tendencies were observed in XGBoost models, with the lentiform nucleus-normalized model achieving the numerically highest average AUC of 0.72 (SD, 0.07) in the cross validation within the training cohort. The model generalized well to the validation cohorts, attaining an AUC of 0.73 (95% CI, 0.60-0.85) in independent validation and 0.76 (95% CI, 0.58-0.95) in external validation. The AUCs of the XGBoost models were not significantly different. Conclusion: Only one third of the features demonstrated a high degree of reproducibility across intensity-normalization techniques, making uniform normalization a prerequisite for interindividual comparability of radiomic markers. The choice of normalization technique may affect the radiomic features' predictive value with respect to HPV. Our results show trends that normalization to the lentiform nucleus may improve model performance, although more evidence is needed to draw a firm conclusion.
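A hedged sketch of the reproducibility analysis described above: compute one stand-in "feature" (the VOI mean) under three normalization schemes and score agreement with a two-way ICC. The data are synthetic, and pingouin's intraclass_corr is used here as one readily available ICC implementation, not necessarily the authors'.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n_tumors = 30
raw = rng.lognormal(1.0, 0.4, n_tumors)            # raw VOI mean uptake
records = []
for method, scale in [("suv_bw", 1.0), ("ratio_lentiform", 0.8),
                      ("ratio_cerebellum", 1.2)]:
    noise = rng.normal(1.0, 0.05, n_tumors)        # normalization-specific jitter
    for i, value in enumerate(raw * scale * noise):
        records.append({"tumor": i, "method": method, "feature": value})
df = pd.DataFrame(records)

# two-way random-effects ICC with absolute agreement (ICC2), one value per feature
icc = pg.intraclass_corr(data=df, targets="tumor", raters="method",
                         ratings="feature")
print(icc.set_index("Type").loc["ICC2", "ICC"])
```

In the study's framing, features scoring ICC ≥ 0.90 under this kind of analysis would count as highly reproducible across normalizations.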
PET/CT-Based Radiogenomics Supports KEAP1/NFE2L2 Pathway Targeting for Non–Small Cell Lung Cancer Treated with Curative Radiotherapy
Bourbonne, Vincent
Morjani, Moncef
Pradier, Olivier
Hatt, Mathieu
Jaouen, Vincent
Querellou, Solène
Visvikis, Dimitris
Lucia, François
Schick, Ulrike
Journal of Nuclear Medicine2024Journal Article, cited 0 times
CPTAC-LSCC
CPTAC-LUAD
TCGA-LUAD
TCGA-LUSC
In lung cancer patients, radiotherapy is associated with an increased risk of local relapse (LR) when compared with surgery but with a preferable toxicity profile. The KEAP1/NFE2L2 mutational status (MutKEAP1/NFE2L2) is significantly correlated with LR in patients treated with radiotherapy but is rarely available. Prediction of MutKEAP1/NFE2L2 with noninvasive modalities could help to further personalize each therapeutic strategy. Methods: Based on a public cohort of 770 patients, model RNA (M-RNA) was first developed using continuous gene expression levels to predict MutKEAP1/NFE2L2, resulting in a binary output. The model PET/CT (M-PET/CT) was then built to predict the M-RNA binary output using PET/CT-extracted radiomics features. M-PET/CT was validated on an external cohort of 151 patients treated with curative volumetric modulated arc radiotherapy. Each model was built, internally validated, and evaluated on a separate cohort using a multilayer perceptron network approach. Results: The M-RNA resulted in a C statistic of 0.82 in the testing cohort. With a training cohort of 101 patients, the retained M-PET/CT resulted in an area under the curve of 0.90 (P < 0.001). With a probability threshold of 20% applied to the testing cohort, M-PET/CT achieved a C statistic of 0.7. The same radiomics model was validated on the volumetric modulated arc radiotherapy cohort as patients were significantly stratified on the basis of their risk of LR with a hazard ratio of 2.61 (P = 0.02). Conclusion: Our approach enables the prediction of MutKEAP1/NFE2L2 using PET/CT-extracted radiomics features and efficiently classifies patients at risk of LR in an external cohort treated with radiotherapy.
Impact of Various Data Splitting Ratios on the Performance of Machine Learning Models in the Classification of Lung Cancer
Nazarkar, Archana
Kuchulakanti, Harish
Paidimarry, Chandra Sekhar
Kulkarni, Sravya
2023Book Section, cited 0 times
LIDC-IDRI
Owing to revolutionary technological advancements and exceptional experimental data, particularly in the area of image analysis and processing, artificial intelligence (AI) and machine learning (ML) have lately become widely popular. Medical specialties where imaging is essential, such as radiology, pathology, and oncology, have seized this opportunity, and significant research and development efforts have been made to translate the promise of AI and ML into clinical applications. These tools are increasingly being used for common medical imaging analysis tasks including diagnosis, segmentation, and classification. In this study, four classifiers, Artificial Neural Network (ANN), Support Vector Machine (SVM), Naïve Bayes (NB), and K Nearest Neighbour (KNN), are used to classify lung cancer based on features extracted by a lung segmentation algorithm. The feature data are estimated from 90 image sets, normalized, and divided into training, validation, and testing sets with a ratio of 80:10:10. Different ratios (i.e., 80/20, 70/30, 60/40, 50/50) were then used to divide the datasets into training and testing sets to assess model performance. ANN and KNN were the most precise, achieving an accuracy of 99.8% with moderate and high amounts of training data.
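The split-ratio experiment in this abstract can be mimicked directly with scikit-learn; the sketch below uses synthetic features in place of the study's 90 image sets and a KNN classifier as one of the four models named.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# synthetic stand-in for the 90 feature vectors extracted from the image sets
X, y = make_classification(n_samples=90, n_features=12, random_state=0)

for test_frac in (0.2, 0.3, 0.4, 0.5):          # 80/20 ... 50/50 splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_frac, stratify=y, random_state=0)
    acc = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{round((1 - test_frac) * 100)}/{round(test_frac * 100)} "
          f"split: accuracy = {acc:.3f}")
```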
A combinatorial radiographic phenotype may stratify patient survival and be associated with invasion and proliferation characteristics in glioblastoma
Rao, Arvind
Rao, Ganesh
Gutman, David A
Flanders, Adam E
Hwang, Scott N
Rubin, Daniel L
Colen, Rivka R
Zinn, Pascal O
Jain, Rajan
Wintermark, Max
Journal of neurosurgery2016Journal Article, cited 19 times
Website
TCGA-GBM
Radiogenomics
Radiomic features
OBJECTIVE Individual MRI characteristics (e.g., volume) are routinely used to identify survival-associated phenotypes for glioblastoma (GBM). This study investigated whether combinations of MRI features can also stratify survival. Furthermore, the molecular differences between phenotype-induced groups were investigated. METHODS Ninety-two patients with imaging, molecular, and survival data from the TCGA (The Cancer Genome Atlas) GBM collection were included in this study. For combinatorial phenotype analysis, hierarchical clustering was used. Groups were defined based on a cutpoint obtained via tree-based partitioning. Furthermore, differential expression analysis of microRNA (miRNA) and mRNA expression data was performed using GenePattern Suite. Functional analysis of the resulting genes and miRNAs was performed using Ingenuity Pathway Analysis. Pathway analysis was performed using Gene Set Enrichment Analysis. RESULTS Clustering analysis reveals that image-based grouping of the patients is driven by 3 features: volume-class, hemorrhage, and T1/FLAIR-envelope ratio. A combination of these features stratifies survival in a statistically significant manner. A cutpoint analysis yields a significant survival difference in the training set (median survival difference: 12 months, p = 0.004) as well as a validation set (p = 0.0001). Specifically, a low value for any of these 3 features indicates favorable survival characteristics. Differential expression analysis between cutpoint-induced groups suggests that several immune-associated (natural killer cell activity, T-cell lymphocyte differentiation) and metabolism-associated (mitochondrial activity, oxidative phosphorylation) pathways underlie the transition of this phenotype. Integrating data for mRNA and miRNA suggests the roles of several genes regulating proliferation and invasion. CONCLUSIONS A 3-way combination of MRI phenotypes may be capable of stratifying survival in GBM. Examination of molecular processes associated with groups created by this combinatorial phenotype suggests the role of biological processes associated with growth and invasion characteristics.
Correlation of perfusion parameters with genes related to angiogenesis regulation in glioblastoma: a feasibility study
Jain, R
Poisson, L
Narang, J
Scarpace, L
Rosenblum, ML
Rempel, S
Mikkelsen, T
American Journal of Neuroradiology2012Journal Article, cited 39 times
Website
Glioblastoma Multiforme (GBM)
BRAIN
TCGA
Radiomics
Radiogenomics
PET/CT
BACKGROUND AND PURPOSE: Integration of imaging and genomic data is critical for a better understanding of gliomas, particularly considering the increasing focus on the use of imaging biomarkers for patient survival and treatment response. The purpose of this study was to correlate CBV and PS measured by using PCT with the genes regulating angiogenesis in GBM. MATERIALS AND METHODS: Eighteen patients with WHO grade IV gliomas underwent pretreatment PCT and measurement of CBV and PS values from enhancing tumor. Tumor specimens were analyzed by TCGA by using Human Gene Expression Microarrays and were interrogated for correlation between CBV and PS estimates across the genome. We used the GO biologic process pathways for angiogenesis regulation to select genes of interest. RESULTS: We observed expression levels for 92 angiogenesis-associated genes (332 probes), 19 of which had significant correlation with PS and 9 of which had significant correlation with CBV (P < .05). Proangiogenic genes such as TNFRSF1A (PS = 0.53, P = .024), HIF1A (PS = 0.62, P = .0065), KDR (CBV = 0.60, P = .0084; PS = 0.59, P = .0097), TIE1 (CBV = 0.54, P = .022; PS = 0.49, P = .039), and TIE2/TEK (CBV = 0.58, P = .012) showed a significant positive correlation; whereas antiangiogenic genes such as VASH2 (PS = -0.72, P = .00011) showed a significant inverse correlation. CONCLUSIONS: Our findings are provocative, with some of the proangiogenic genes showing a positive correlation and some of the antiangiogenic genes showing an inverse correlation with tumor perfusion parameters, suggesting a molecular basis for these imaging biomarkers; however, this should be confirmed in a larger patient population.
Iterative Probabilistic Voxel Labeling: Automated Segmentation for Analysis of The Cancer Imaging Archive Glioblastoma Images
Steed, TC
Treiber, JM
Patel, KS
Taich, Z
White, NS
Treiber, ML
Farid, N
Carter, BS
Dale, AM
Chen, CC
American Journal of Neuroradiology2015Journal Article, cited 12 times
Website
Algorithm Development
Glioblastoma Multiforme (GBM)
BACKGROUND AND PURPOSE: Robust, automated segmentation algorithms are required for quantitative analysis of large imaging datasets. We developed an automated method that identifies and labels brain tumor-associated pathology by using iterative probabilistic voxel labeling with k-nearest neighbor and Gaussian mixture model classification. Our purpose was to develop a segmentation method that could be applied to a variety of imaging from The Cancer Imaging Archive. MATERIALS AND METHODS: Images from 2 sets of 15 randomly selected subjects with glioblastoma from The Cancer Imaging Archive were processed by using the automated algorithm. The algorithm-defined tumor volumes were compared with those segmented by trained operators by using the Dice similarity coefficient. RESULTS: Compared with operator volumes, algorithm-generated segmentations yielded mean Dice similarities of 0.92 ± 0.03 for contrast-enhancing volumes and 0.84 ± 0.09 for FLAIR hyperintensity volumes. These values compared favorably with the means of Dice similarity coefficients between the operator-defined segmentations: 0.92 ± 0.03 for contrast-enhancing volumes and 0.92 ± 0.05 for FLAIR hyperintensity volumes. Robust segmentations can be achieved when only postcontrast T1WI and FLAIR images are available. CONCLUSIONS: Iterative probabilistic voxel labeling defined tumor volumes that were highly consistent with operator-defined volumes. Application of this algorithm could facilitate quantitative assessment of neuroimaging from patients with glioblastoma for both research and clinical indications.
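One iteration of the k-nearest-neighbor plus Gaussian-mixture labeling idea can be sketched as follows; this is an illustration of the general technique on synthetic two-channel intensities, not the published implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
# synthetic 2-channel voxel intensities (e.g., post-contrast T1 and FLAIR)
voxels = np.vstack([rng.normal([1, 1], 0.3, (500, 2)),
                    rng.normal([3, 2], 0.3, (500, 2))])

# probabilistic labeling: fit a Gaussian mixture to the intensity distribution
gmm = GaussianMixture(n_components=2, random_state=0).fit(voxels)
post = gmm.predict_proba(voxels)

# keep only high-confidence voxels as seeds, then refine all labels with KNN
confident = post.max(axis=1) > 0.95
knn = KNeighborsClassifier(n_neighbors=15).fit(
    voxels[confident], post.argmax(axis=1)[confident])
labels = knn.predict(voxels)                  # refined voxel labels
print(np.bincount(labels))
```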
Texture feature ratios from relative CBV maps of perfusion MRI are associated with patient survival in glioblastoma
Lee, J
Jain, R
Khalil, K
Griffith, B
Bosca, R
Rao, G
Rao, A
American Journal of Neuroradiology2016Journal Article, cited 27 times
Website
TCGA-GBM
Texture analysis
BACKGROUND AND PURPOSE: Texture analysis has been applied to medical images to assist in tumor tissue classification and characterization. In this study, we obtained textural features from parametric (relative CBV) maps of dynamic susceptibility contrast-enhanced MR images in glioblastoma and assessed their relationship with patient survival. MATERIALS AND METHODS: MR perfusion data of 24 patients with glioblastoma from The Cancer Genome Atlas were analyzed in this study. One- and 2D texture feature ratios and kinetic textural features based on relative CBV values in the contrast-enhancing and nonenhancing lesions of the tumor were obtained. Receiver operating characteristic, Kaplan-Meier, and multivariate Cox proportional hazards regression analyses were used to assess the relationship between texture feature ratios and overall survival. RESULTS: Several feature ratios are capable of stratifying survival in a statistically significant manner. These feature ratios correspond to homogeneity (P = .008, based on the log-rank test), angular second moment (P = .003), inverse difference moment (P = .013), and entropy (P = .008). Multivariate Cox proportional hazards regression analysis showed that homogeneity, angular second moment, inverse difference moment, and entropy from the contrast-enhancing lesion were significantly associated with overall survival. For the nonenhancing lesion, skewness and variance ratios of relative CBV texture were associated with overall survival in a statistically significant manner. For the kinetic texture analysis, the Haralick correlation feature showed a P value close to .05. CONCLUSIONS: Our study revealed that texture feature ratios from contrast-enhancing and nonenhancing lesions and kinetic texture analysis obtained from perfusion parametric maps provide useful information for predicting survival in patients with glioblastoma.
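The texture features named in this abstract (homogeneity, angular second moment, entropy) are standard gray-level co-occurrence matrix quantities. A minimal scikit-image sketch on a synthetic quantized rCBV-like map, not the authors' pipeline:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
rcbv = rng.integers(0, 32, size=(64, 64)).astype(np.uint8)  # quantized rCBV map

# symmetric, normalized GLCM at distance 1, angle 0
glcm = graycomatrix(rcbv, distances=[1], angles=[0], levels=32,
                    symmetric=True, normed=True)
homogeneity = graycoprops(glcm, 'homogeneity')[0, 0]
asm = graycoprops(glcm, 'ASM')[0, 0]          # angular second moment

# entropy is not exposed by graycoprops, so compute it from the GLCM directly
p = glcm[:, :, 0, 0]
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print(homogeneity, asm, entropy)
```

The feature *ratios* in the study would then be formed by computing such features in two regions (e.g., contrast-enhancing vs. nonenhancing lesion) and dividing.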
Computational Identification of Tumor Anatomic Location Associated with Survival in 2 Large Cohorts of Human Primary Glioblastomas
Liu, T T
Achrol, A S
Mitchell, L A
Du, W A
Loya, J J
Rodriguez, S A
Feroze, A
Westbroek, E M
Yeom, K W
Stuart, J M
Chang, S D
Harsh, G R 4th
Rubin, D L
American Journal of Neuroradiology2016Journal Article, cited 6 times
Website
TCGA-GBM
Radiomics
Radiogenomics
Classification
BACKGROUND AND PURPOSE: Tumor location has been shown to be a significant prognostic factor in patients with glioblastoma. The purpose of this study was to characterize glioblastoma lesions by identifying MR imaging voxel-based tumor location features that are associated with tumor molecular profiles, patient characteristics, and clinical outcomes. MATERIALS AND METHODS: Preoperative T1 anatomic MR images of 384 patients with glioblastomas were obtained from 2 independent cohorts (n = 253 from the Stanford University Medical Center for training and n = 131 from The Cancer Genome Atlas for validation). An automated computational image-analysis pipeline was developed to determine the anatomic locations of tumor in each patient. Voxel-based differences in tumor location between good (overall survival of >17 months) and poor (overall survival of <11 months) survival groups identified in the training cohort were used to classify patients in The Cancer Genome Atlas cohort into 2 brain-location groups, for which clinical features, messenger RNA expression, and copy number changes were compared to elucidate the biologic basis of tumors located in different brain regions. RESULTS: Tumors in the right occipitotemporal periventricular white matter were significantly associated with poor survival in both training and test cohorts (both, log-rank P < .05) and had larger tumor volume compared with tumors in other locations. Tumors in the right periatrial location were associated with hypoxia pathway enrichment and PDGFRA amplification, making them potential targets for subgroup-specific therapies. CONCLUSIONS: Voxel-based location in glioblastoma is associated with patient outcome and may have a potential role for guiding personalized treatment.
Relationship between Glioblastoma Heterogeneity and Survival Time: An MR Imaging Texture Analysis
Liu, Y
Xu, X
Yin, L
Zhang, X
Li, L
Lu, H
American Journal of Neuroradiology2017Journal Article, cited 8 times
Website
TCGA-GBM
Radiomics
postcontrast TI-weighted imaging
co-occurrence matrix
run-length matrix
histogram
global spatial variations
cancer genome atlas
recursive feature-elimination–based support vector machine classifier (SVM)
Radiomics in Brain Tumor: Image Assessment, Quantitative Feature Descriptors, and Machine-Learning Approaches
Zhou, M
Scott, J
Chaudhury, B
Hall, L
Goldgof, D
Yeom, KW
Iv, M
Ou, Y
Kalpathy-Cramer, J
Napel, S
American Journal of Neuroradiology2017Journal Article, cited 20 times
Website
QIN
Radiomics
Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas
Chang, P
Grinband, J
Weinberg, BD
Bardis, M
Khy, M
Cadena, G
Su, M-Y
Cha, S
Filippi, CG
Bota, D
American Journal of Neuroradiology2018Journal Article, cited 5 times
Website
TCGA-GBM
TCGA-LGG
Multisite Concordance of DSC-MRI Analysis for Brain Tumors: Results of a National Cancer Institute Quantitative Imaging Network Collaborative Project
Schmainda, KM
Prah, MA
Rand, SD
Liu, Y
Logan, B
Muzi, M
Rane, SD
Da, X
Yen, Y-F
Kalpathy-Cramer, J
American Journal of Neuroradiology2018Journal Article, cited 0 times
Website
DSC-MRI
QIN
The ASNR-ACR-RSNA Common Data Elements Project: What Will It Do for the House of Neuroradiology?
Flanders, AE
Jordan, JE
American Journal of Neuroradiology2018Journal Article, cited 0 times
Website
REMBRANDT
VASARI
BRAIN
Predicting Genotype and Survival in Glioma Using Standard Clinical MR Imaging Apparent Diffusion Coefficient Images: A Pilot Study from The Cancer Genome Atlas
Wu, C-C
Jain, R
Radmanesh, A
Poisson, LM
Guo, W-Y
Zagzag, D
Snuderl, M
Placantonakis, DG
Golfinos, J
Chi, AS
American Journal of Neuroradiology2018Journal Article, cited 1 times
Website
TCGA-GBM
glioma
isocitrate dehydrogenase (IDH)
tcga
relative ADC (rADC)
Neuroimaging-Based Classification Algorithm for Predicting 1p/19q-Codeletion Status in IDH-Mutant Lower Grade Gliomas
Batchala, P.P.
Muttikkal, T.J.E.
Donahue, J.H.
Patrie, J.T.
Schiff, D.
Fadul, C.E.
Mrachek, E.K.
Lopes, M.-B.
Jain, R.
Patel, S.H.
American Journal of Neuroradiology2019Journal Article, cited 0 times
TCGA-LGG
MRI
Oligodendroglioma
BACKGROUND AND PURPOSE: Isocitrate dehydrogenase (IDH)-mutant lower grade gliomas are classified as oligodendrogliomas or diffuse astrocytomas based on 1p/19q-codeletion status. We aimed to test and validate neuroradiologists' performances in predicting the codeletion status of IDH-mutant lower grade gliomas based on simple neuroimaging metrics.
MATERIALS AND METHODS: One hundred two IDH-mutant lower grade gliomas with preoperative MR imaging and known 1p/19q status from The Cancer Genome Atlas composed a training dataset. Two neuroradiologists in consensus analyzed the training dataset for various imaging features: tumor texture, margins, cortical infiltration, T2-FLAIR mismatch, tumor cyst, T2* susceptibility, hydrocephalus, midline shift, maximum dimension, primary lobe, necrosis, enhancement, edema, and gliomatosis. Statistical analysis of the training data produced a multivariate classification model for codeletion prediction based on a subset of MR imaging features and patient age. To validate the classification model, 2 different independent neuroradiologists analyzed a separate cohort of 106 institutional IDH-mutant lower grade gliomas.
RESULTS: Training dataset analysis produced a 2-step classification algorithm with 86.3% codeletion prediction accuracy, based on the following: 1) the presence of the T2-FLAIR mismatch sign, which was 100% predictive of noncodeleted lower grade gliomas (n = 21); and 2) a logistic regression model based on texture, patient age, T2* susceptibility, primary lobe, and hydrocephalus. Independent validation of the classification algorithm rendered codeletion prediction accuracies of 81.1% and 79.2% in 2 independent readers. The metrics used in the algorithm were associated with moderate to substantial interreader agreement (κ = 0.56-0.79).
CONCLUSIONS: We have validated a classification algorithm based on simple, reproducible neuroimaging metrics and patient age that demonstrates a moderate prediction accuracy of 1p/19q-codeletion status among IDH-mutant lower grade gliomas.
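The 2-step rule described above maps naturally onto a short decision function. The sketch below is illustrative only: the training data are synthetic and the feature encoding is hypothetical, not the validated model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def predict_codeletion(mismatch_sign, features, step2_model):
    # step 1: the T2-FLAIR mismatch sign was 100% predictive of noncodeletion
    if mismatch_sign:
        return 0                               # noncodeleted (astrocytoma)
    # step 2: logistic model on texture, age, T2* susceptibility,
    # primary lobe, and hydrocephalus (encoding here is hypothetical)
    return int(step2_model.predict(features.reshape(1, -1))[0])

rng = np.random.default_rng(8)
X = rng.normal(size=(81, 5))                   # synthetic training cases
y = rng.integers(0, 2, 81)                     # 1 = 1p/19q codeleted
step2 = LogisticRegression(max_iter=1000).fit(X, y)
print(predict_codeletion(False, rng.normal(size=5), step2))
```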
Disorder in Pixel-Level Edge Directions on T1WI Is Associated with the Degree of Radiation Necrosis in Primary and Metastatic Brain Tumors: Preliminary Findings
Prasanna, P
Rogers, L
Lam, TC
Cohen, M
Siddalingappa, A
Wolansky, L
Pinho, M
Gupta, A
Hatanpaa, KJ
Madabhushi, A
American Journal of Neuroradiology2019Journal Article, cited 0 times
Website
GBM
MRI
Radiomics
Deep Transfer Learning and Radiomics Feature Prediction of Survival of Patients with High-Grade Gliomas
Han, W.
Qin, L.
Bay, C.
Chen, X.
Yu, K. H.
Miskin, N.
Li, A.
Xu, X.
Young, G.
AJNR Am J Neuroradiol2020Journal Article, cited 16 times
Website
GBM
BRAIN
TCGA
Radiomics
BACKGROUND AND PURPOSE: Patient survival in high-grade glioma remains poor, despite the recent developments in cancer treatment. As new chemo-, targeted molecular, and immune therapies emerge and show promising results in clinical trials, image-based methods for early prediction of treatment response are needed. Deep learning models that incorporate radiomics features promise to extract information from brain MR imaging that correlates with response and prognosis. We report initial production of a combined deep learning and radiomics model to predict overall survival in a clinically heterogeneous cohort of patients with high-grade gliomas. MATERIALS AND METHODS: Fifty patients with high-grade gliomas from our hospital and 128 patients with high-grade glioma from The Cancer Genome Atlas were included. For each patient, we calculated 348 hand-crafted radiomics features and 8192 deep features generated by a pretrained convolutional neural network. We then applied feature selection and Elastic Net-Cox modeling to differentiate patients into long- and short-term survivors. RESULTS: In the 50 patients with high-grade gliomas from our institution, the combined feature analysis framework classified the patients into long- and short-term survivor groups with a log-rank test P value < .001. In the 128 patients from The Cancer Genome Atlas, the framework classified patients into long- and short-term survivors with a log-rank test P value of .014. For the mixed cohort of 50 patients from our institution and 58 patients from The Cancer Genome Atlas, it yielded a log-rank test P value of .035. CONCLUSIONS: A deep learning model combining deep and radiomics features can dichotomize patients with high-grade gliomas into long- and short-term survivors.
Prediction of Human Papillomavirus Status and Overall Survival in Patients with Untreated Oropharyngeal Squamous Cell Carcinoma: Development and Validation of CT-Based Radiomics
Choi, Y.
Nam, Y.
Jang, J.
Shin, N. Y.
Ahn, K. J.
Kim, B. S.
Lee, Y. S.
Kim, M. S.
AJNR Am J Neuroradiol2020Journal Article, cited 0 times
Website
Head-Neck-Radiomics-HN1
Algorithm Development
Radiomics
Computed Tomography (CT)
Retrospective Studies
BACKGROUND AND PURPOSE: Human papillomavirus is a prognostic marker for oropharyngeal squamous cell carcinoma. We aimed to determine the value of CT-based radiomics for predicting the human papillomavirus status and overall survival in patients with oropharyngeal squamous cell carcinoma. MATERIALS AND METHODS: Eighty-six patients with oropharyngeal squamous cell carcinoma were retrospectively collected and grouped into training (n = 61) and test (n = 25) sets. For human papillomavirus status and overall survival prediction, radiomics features were selected via a random forest-based algorithm and Cox regression analysis, respectively. Relevant features were used to build multivariate Cox regression models and calculate the radiomics score. Human papillomavirus status and overall survival prediction were assessed via the area under the curve and concordance index, respectively. The models were validated in the test and The Cancer Imaging Archive cohorts (n = 78). RESULTS: For prediction of human papillomavirus status, radiomics features yielded areas under the curve of 0.865, 0.747, and 0.834 in the training, test, and validation sets, respectively. In the univariate Cox regression, the human papillomavirus status (positive: hazard ratio, 0.257; 95% CI, 0.09-0.7; P = .008), T-stage (≥III: hazard ratio, 3.66; 95% CI, 1.34-9.99; P = .011), and radiomics score (high-risk: hazard ratio, 3.72; 95% CI, 1.21-11.46; P = .022) were associated with overall survival. The addition of the radiomics score to the clinical Cox model increased the concordance index from 0.702 to 0.733 (P = .01). Validation yielded concordance indices of 0.866 and 0.720. CONCLUSIONS: CT-based radiomics may be useful in predicting human papillomavirus status and overall survival in patients with oropharyngeal squamous cell carcinoma.
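A minimal sketch of the survival modeling reported here, assuming the lifelines library and a synthetic cohort with illustrative column names: fit a Cox model on HPV status, T-stage, and a radiomics score, then read off coefficients and the concordance index.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 86                                          # cohort size as in the abstract
df = pd.DataFrame({
    "time": rng.exponential(36, n),             # months of follow-up (synthetic)
    "event": rng.integers(0, 2, n),             # 1 = death observed
    "hpv_positive": rng.integers(0, 2, n),
    "t_stage_ge3": rng.integers(0, 2, n),
    "radiomics_score": rng.normal(0, 1, n),
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])  # hazard ratios per covariate
print("C-index:", cph.concordance_index_)
```

Comparing the concordance index of this model with and without the radiomics_score column mirrors the 0.702-to-0.733 comparison made in the study.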
MRI-Based Deep-Learning Method for Determining Glioma MGMT Promoter Methylation Status
Yogananda, C.G.B.
Shah, B.R.
Nalawade, S.S.
Murugesan, G.K.
Yu, F.F.
Pinho, M.C.
Wagner, B.C.
Mickey, B.
Patel, T.R.
Fei, B.
Madhuranthakam, A.J.
Maldjian, J.A.
American Journal of Neuroradiology2021Journal Article, cited 0 times
LGG-1p19qDeletion
deep learning
glioma
BACKGROUND AND PURPOSE: O6-Methylguanine-DNA methyltransferase (MGMT) promoter methylation confers an improved prognosis and treatment response in gliomas. We developed a deep learning network for determining MGMT promoter methylation status using T2 weighted Images (T2WI) only. MATERIALS AND METHODS: Brain MR imaging and corresponding genomic information were obtained for 247 subjects from The Cancer Imaging Archive and The Cancer Genome Atlas. One hundred sixty-three subjects had a methylated MGMT promoter. A T2WI-only network (MGMT-net) was developed to determine MGMT promoter methylation status and simultaneous single-label tumor segmentation. The network was trained using 3D-dense-UNets. Three-fold cross-validation was performed to generalize the performance of the networks. Dice scores were computed to determine tumor-segmentation accuracy. RESULTS: The MGMT-net demonstrated a mean cross-validation accuracy of 94.73% across the 3 folds (95.12%, 93.98%, and 95.12%, [SD, 0.66%]) in predicting MGMT methylation status with a sensitivity and specificity of 96.31% [SD, 0.04%] and 91.66% [SD, 2.06%], respectively, and a mean area under the curve of 0.93 [SD, 0.01]. The whole tumor-segmentation mean Dice score was 0.82 [SD, 0.008]. CONCLUSIONS: We demonstrate high classification accuracy in predicting MGMT promoter methylation status using only T2WI. Our network surpasses the sensitivity, specificity, and accuracy of histologic and molecular methods. This result represents an important milestone toward using MR imaging to predict prognosis and treatment response. Abbreviations: IDH = isocitrate dehydrogenase; MGMT = O6-methylguanine-DNA methyltransferase; PCR = polymerase chain reaction; T2WI = T2 weighted images; TCGA = The Cancer Genome Atlas; TCIA = The Cancer Imaging Archive.
Repeatability of Automated Image Segmentation with BraTumIA in Patients with Recurrent Glioblastoma
Abu Khalaf, N.
Desjardins, A.
Vredenburgh, J. J.
Barboriak, D. P.
AJNR Am J Neuroradiol2021Journal Article, cited 0 times
Website
RIDER Neuro MRI
Machine Learning
Segmentation
BRAIN
Algorithm Development
GBM
BACKGROUND AND PURPOSE: Despite high interest in machine-learning algorithms for automated segmentation of MRIs of patients with brain tumors, there are few reports on the variability of segmentation results. The purpose of this study was to obtain benchmark measures of repeatability for a widely accessible software program, BraTumIA (Versions 1.2 and 2.0), which uses a machine-learning algorithm to segment tumor features on contrast-enhanced brain MR imaging. MATERIALS AND METHODS: Automatic segmentation of enhancing tumor, tumor edema, nonenhancing tumor, and necrosis was performed on repeat MR imaging scans obtained approximately 2 days apart in 20 patients with recurrent glioblastoma. Measures of repeatability and spatial overlap, including repeatability and Dice coefficients, are reported. RESULTS: Larger volumes of enhancing tumor were obtained on later compared with earlier scans (mean, 26.3 versus 24.2 mL for BraTumIA 1.2; P < .05; and 24.9 versus 22.9 mL for BraTumIA 2.0, P < .01). In terms of percentage change, repeatability coefficients ranged from 31% to 46% for enhancing tumor and edema components and from 87% to 116% for nonenhancing tumor and necrosis. Dice coefficients were highest (>0.7) for enhancing tumor and edema components, intermediate for necrosis, and lowest for nonenhancing tumor and did not differ between software versions. Enhancing tumor and tumor edema were smaller, and necrotic tumor larger using BraTumIA 2.0 rather than 1.2. CONCLUSIONS: Repeatability and overlap metrics varied by segmentation type, with better performance for segmentations of enhancing tumor and tumor edema compared with other components. Incomplete washout of gadolinium contrast agents could account for increasing enhancing tumor volumes on later scans.
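The repeatability coefficient quoted in this abstract follows the usual Bland-Altman definition, RC = 1.96 · √2 · (within-subject SD), expressed here as a percentage of the mean volume. A minimal sketch on synthetic paired volumes, not the study's data:

```python
import numpy as np

v1 = np.array([24.2, 10.1, 31.5, 18.7])   # scan-1 volumes, mL (synthetic)
v2 = np.array([26.3, 11.0, 30.8, 20.2])   # scan-2 volumes, mL (synthetic)

diff = v2 - v1
wsd = np.sqrt(np.mean(diff ** 2) / 2.0)    # within-subject SD from paired scans
rc = 1.96 * np.sqrt(2) * wsd               # repeatability coefficient, mL
rc_percent = 100 * rc / np.mean(np.concatenate([v1, v2]))
print(f"RC = {rc:.2f} mL ({rc_percent:.0f}% of mean volume)")
```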
Quantifying T2-FLAIR Mismatch Using Geographically Weighted Regression and Predicting Molecular Status in Lower-Grade Gliomas
Mohammed, S.
Ravikumar, V.
Warner, E.
Patel, S. H.
Bakas, S.
Rao, A.
Jain, R.
AJNR Am J Neuroradiol2022Journal Article, cited 0 times
Website
BraTS-TCGA-LGG
BRAIN
multi-parametric MRI
Radiogenomics
BACKGROUND AND PURPOSE: The T2-FLAIR mismatch sign is a validated imaging sign of isocitrate dehydrogenase-mutant 1p/19q noncodeleted gliomas. It is identified by radiologists through visual inspection of preoperative MR imaging scans and has been shown to identify isocitrate dehydrogenase-mutant 1p/19q noncodeleted gliomas with a high positive predictive value. We have developed an approach to quantify the T2-FLAIR mismatch signature and use it to predict the molecular status of lower-grade gliomas. MATERIALS AND METHODS: We used multiparametric MR imaging scans and segmentation labels of 108 preoperative lower-grade glioma tumors from The Cancer Imaging Archive. Clinical information and T2-FLAIR mismatch sign labels were obtained from supplementary material of relevant publications. We adopted an objective analytic approach to estimate this sign through a geographically weighted regression and used the residuals for each case to construct a probability density function (serving as a residual signature). These functions were then analyzed using an appropriate statistical framework. RESULTS: We observed statistically significant (P value = .05) differences between the averages of residual signatures for an isocitrate dehydrogenase-mutant 1p/19q noncodeleted class of tumors versus other categories. Our classifier predicts these cases with an area under the curve of 0.98 and high specificity and sensitivity. It also predicts the T2-FLAIR mismatch sign within these cases with an area under the curve of 0.93. CONCLUSIONS: On the basis of this retrospective study, we show that geographically weighted regression-based residual signatures are highly informative of the T2-FLAIR mismatch sign and can identify isocitrate dehydrogenase-mutation and 1p/19q codeletion status with high predictive power. The utility of the proposed quantification of the T2-FLAIR mismatch sign can be potentially validated through a prospective multi-institutional study.
Stable and Discriminatory Radiomic Features from the Tumor and Its Habitat Associated with Progression-Free Survival in Glioblastoma: A Multi-Institutional Study
Verma, R.
Hill, V. B.
Statsevych, V.
Bera, K.
Correa, R.
Leo, P.
Ahluwalia, M.
Madabhushi, A.
Tiwari, P.
American Journal of Neuroradiology2022Journal Article, cited 0 times
Website
TCGA-GBM
Ivy GAP
multiparametric Magnetic Resonance Imaging
Classification
Radiomics
BACKGROUND AND PURPOSE: Glioblastoma is an aggressive brain tumor, with no validated prognostic biomarkers for survival before surgical resection. Although recent approaches have demonstrated the prognostic ability of tumor habitat (constituting necrotic core, enhancing lesion, T2/FLAIR hyperintensity subcompartments) derived radiomic features for glioblastoma survival on treatment-naive MR imaging scans, radiomic features are known to be sensitive to MR imaging acquisitions across sites and scanners. In this study, we sought to identify the radiomic features that are both stable across sites and discriminatory of poor and improved progression-free survival in glioblastoma tumors. MATERIALS AND METHODS: We used 150 treatment-naive glioblastoma MR imaging scans (Gadolinium-T1w, T2w, FLAIR) obtained from 5 sites. For every tumor subcompartment (enhancing tumor, peritumoral FLAIR-hyperintensities, necrosis), a total of 316 three-dimensional radiomic features were extracted. The training cohort constituted studies from 4 sites (n = 93) to select the most stable and discriminatory radiomic features for every tumor subcompartment. These features were used on a hold-out cohort (n = 57) to evaluate their ability to discriminate patients with poor survival from those with improved survival. RESULTS: Incorporating the most stable and discriminatory features within a linear discriminant analysis classifier yielded areas under the curve of 0.71, 0.73, and 0.76 on the test set for distinguishing poor and improved survival compared with discriminatory features alone (areas under the curve of 0.65, 0.54, 0.62) from the necrotic core, enhancing tumor, and peritumoral T2/FLAIR hyperintensity, respectively. CONCLUSIONS: Incorporating stable and discriminatory radiomic features extracted from tumors and associated habitats across multisite MR imaging sequences may yield robust prognostic classifiers of patient survival in glioblastoma tumors.
Quantification of T2-FLAIR Mismatch in Nonenhancing Diffuse Gliomas Using Digital Subtraction
Cho, N. S.
Sanvito, F.
Le, V. L.
Oshima, S.
Teraishi, A.
Yao, J.
Telesca, D.
Raymond, C.
Pope, W. B.
Nghiemphu, P. L.
Lai, A.
Cloughesy, T. F.
Salamon, N.
Ellingson, B. M.
AJNR Am J Neuroradiol2024Journal Article, cited 0 times
Website
UCSF-PDGM
Radiomics
Radiogenomics
Isocitrate dehydrogenase (IDH) mutation
T2-weighted
FLAIR
Magnetic Resonance Imaging (MRI)
Astrocytoma
BRAIN
Image subtraction
BACKGROUND AND PURPOSE: The T2-FLAIR mismatch sign on MR imaging is a highly specific imaging biomarker of isocitrate dehydrogenase (IDH)-mutant astrocytomas, which lack 1p/19q codeletion. However, most studies using the T2-FLAIR mismatch sign have used visual assessment. This study quantified the degree of T2-FLAIR mismatch using digital subtraction of fluid-nulled T2-weighted FLAIR images from non-fluid-nulled T2-weighted images in human nonenhancing diffuse gliomas and then used this information to assess improvements in diagnostic performance and investigate subregion characteristics within these lesions. MATERIALS AND METHODS: Two cohorts of treatment-naive, nonenhancing gliomas with known IDH and 1p/19q status were studied (n = 71 from The Cancer Imaging Archive (TCIA) and n = 34 in the institutional cohort). 3D volumes of interest corresponding to the tumor were segmented, and digital subtraction maps of T2-weighted MR imaging minus T2-weighted FLAIR MR imaging were used to partition each volume of interest into a T2-FLAIR mismatched subregion (T2-FLAIR mismatch, corresponding to voxels with positive values on the subtraction maps) and nonmismatched subregion (T2-FLAIR nonmismatch corresponding to voxels with negative values on the subtraction maps). Tumor subregion volumes, percentage of T2-FLAIR mismatch volume, and T2-FLAIR nonmismatch subregion thickness were calculated, and 2 radiologists assessed the T2-FLAIR mismatch sign with and without the aid of T2-FLAIR subtraction maps. RESULTS: Thresholds of ≥42% T2-FLAIR mismatch volume classified IDH-mutant astrocytoma with a specificity/sensitivity of 100%/19.6% (TCIA) and 100%/31.6% (institutional); ≥25% T2-FLAIR mismatch volume showed 92.0%/32.6% and 100%/63.2% specificity/sensitivity, and ≥15% T2-FLAIR mismatch volume showed 88.0%/39.1% and 93.3%/79.0% specificity/sensitivity. In IDH-mutant astrocytomas with ≥15% T2-FLAIR mismatch volume, T2-FLAIR nonmismatch subregion thickness was negatively correlated with the percentage T2-FLAIR mismatch volume (P < .0001) across both cohorts. The percentage T2-FLAIR mismatch volume was higher in grades 3-4 compared with grade 2 IDH-mutant astrocytomas (P < .05), and ≥15% T2-FLAIR mismatch volume IDH-mutant astrocytomas were significantly larger than <15% T2-FLAIR mismatch volume IDH-mutant astrocytomas (P < .05) across both cohorts. When evaluated by 2 radiologists, the additional use of T2-FLAIR subtraction maps did not show a significant difference in interreader agreement, sensitivity, or specificity compared with a separate evaluation of T2-FLAIR and T2-weighted MR imaging alone. CONCLUSIONS: T2-FLAIR digital subtraction maps may be a useful, automated tool to obtain objective segmentations of tumor subregions based on quantitative thresholds for classifying IDH-mutant astrocytomas using the percentage T2-FLAIR mismatch volume with 100% specificity and exploring T2-FLAIR mismatch/T2-FLAIR nonmismatch subregion characteristics. Conversely, the addition of T2-FLAIR subtraction maps did not enhance the sensitivity or specificity of the visual T2-FLAIR mismatch sign assessment by experienced radiologists.
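The digital-subtraction quantification is straightforward to prototype. The sketch below uses synthetic co-registered volumes and a synthetic VOI; it partitions the tumor VOI by the sign of T2 minus FLAIR and applies the abstract's ≥42% threshold.

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (48, 48, 48)
t2 = rng.normal(100, 10, shape)                # synthetic T2-weighted volume
flair = t2 - rng.normal(5, 10, shape)          # synthetic co-registered FLAIR
voi = np.zeros(shape, dtype=bool)
voi[16:32, 16:32, 16:32] = True                # tumor volume of interest

subtraction = t2 - flair                       # digital subtraction map
mismatch = (subtraction > 0) & voi             # T2-FLAIR mismatch subregion
pct = 100.0 * mismatch.sum() / voi.sum()       # percentage mismatch volume
print(f"percent T2-FLAIR mismatch volume: {pct:.1f}%")
print("IDH-mutant astrocytoma call at the >=42% threshold:", pct >= 42)
```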
Computer-aided detection of lung nodules using outer surface features
Demir, Önder
Yılmaz Çamurcu, Ali
Bio-Medical Materials and Engineering2015Journal Article, cited 28 times
Website
LIDC-IDRI
Computed Tomography (CT)
Computer Aided Detection (CADe)
LUNG
Classification
In this study, a computer-aided detection (CAD) system was developed for the detection of lung nodules in computed tomography images. The CAD system consists of four phases, including two-dimensional and three-dimensional preprocessing phases. In the feature extraction phase, four different groups of features are extracted from volumes of interest: morphological features, statistical and histogram features, statistical and histogram features of the outer surface, and texture features of the outer surface. The support vector machine algorithm is optimized using particle swarm optimization for classification. The CAD system provides 97.37% sensitivity, 86.38% selectivity, 88.97% accuracy, and 2.7 false positives per scan using three groups of classification features. After the inclusion of outer surface texture features, the classification results of the CAD system reach 98.03% sensitivity, 87.71% selectivity, 90.12% accuracy, and 2.45 false positives per scan. Experimental results demonstrate that outer surface texture features of nodule candidates are useful to increase sensitivity and decrease the number of false positives in the detection of lung nodules in computed tomography images.
Prostate Cancer Delineation in MRI Images Based on Deep Learning: Quantitative Comparison and Promising Perspective
Prostate cancer is the most common malignant male tumor. Magnetic Resonance Imaging (MRI) plays a crucial role in the detection, diagnosis, and treatment of prostate cancer. Computer-aided diagnosis systems can help doctors analyze MRI images and detect prostate cancer earlier. One of the key stages of prostate cancer CAD systems is the automatic delineation of the prostate. Deep learning has recently demonstrated promising segmentation results with medical images. The purpose of this paper is to compare state-of-the-art deep learning-based approaches for prostate delineation in MRI images and to discuss their limitations and strengths. In addition, we introduce a promising perspective for prostate tumor classification in MRI images. This perspective includes the use of the best segmentation model to detect prostate tumors in MRI images. We will then employ the segmented images to extract the radiomics features that will be used to discriminate between benign and malignant prostate tumors.
Analyzing the Reliability of Different Machine Radiomics Features Considering Various Segmentation Approaches in Lung Cancer CT Images
Tahmooresi, Maryam
Abdel-Nasser, Mohamed
Puig, Domenec
2022Book Section, cited 0 times
NSCLC-Radiomics
Radiomic features
Cancer is generally defined as the uncontrolled proliferation of cells in the body. These cells may form anywhere and spread to other parts of the body. Although the mortality rate of cancer is high, cancer cases could be reduced by 30% to 50% through a healthy lifestyle and the avoidance of unhealthy habits. Imaging is one of the powerful technologies used for detecting and treating cancer at its early stages. Nowadays, scientists recognize that medical images hold more information than is used for their diagnosis, which is the premise of the radiomics approach. Radiomics demonstrates that images comprise numerous quantitative features that are useful in predicting, detecting, and treating cancers in a personalized manner. While radiomics can extract numerous features, not all of them are useful, and the outcome of data analysis is highly dependent on the selected features. There are different ways of finding the most reliable features. One possible way is to take all extracted features, analyze them, and find the most reproducible and reliable ones; the features can be analyzed with different statistical metrics. To discover and introduce the most accurate metrics, this paper investigates different statistical metrics used for measuring the stability and reproducibility of the features.
Computer-aided grading of prostate cancer from MRI images using Convolutional Neural Networks
Abraham, Bejoy
Nair, Madhu S.
2019Journal Article, cited 0 times
PROSTATEx
Grading of prostate cancer is usually done using Transrectal Ultrasound (TRUS) biopsy followed by microscopic examination of histological images by the pathologist. TRUS is a painful procedure that can lead to severe infections. In the recent past, Magnetic Resonance Imaging (MRI) has emerged as a modality which can be used for the diagnosis of prostate cancer without subjecting patients to biopsies. A novel method for grading of prostate cancer based on MRI utilizing Convolutional Neural Networks (CNN) and a LADTree classifier is explored in this paper. T2 weighted (T2W), high B-value Diffusion Weighted (BVALDW) and Apparent Diffusion Coefficient (ADC) MRI images obtained from the training dataset of the PROSTATEx-2 2017 challenge are used for this study. A quadratic weighted Cohen's kappa score of 0.3772 is attained in predicting different grade groups of cancer and a positive predictive value of 81.58% in predicting high-grade cancer. The method also attained an unweighted kappa score of 0.3993, and a weighted Area Under Receiver Operating Characteristic Curve (AUC), accuracy and F-score of 0.74, 58.04 and 0.56, respectively. These results are better than those obtained by the winning method of the PROSTATEx-2 2017 challenge.
Correlation between CT based radiomics features and gene expression data in non-small cell lung cancer
Wang, Ting
Gong, Jing
Duan, Hui-Hong
Wang, Li-Jia
Ye, Xiao-Dan
Nie, Sheng-Dong
Journal of X-ray science and technology2019Journal Article, cited 0 times
radiogenomics
NSCLC
CT imaging-based radiomics signatures improve prognosis prediction in postoperative colorectal cancer
OBJECTIVE: To investigate the use of non-contrast-enhanced (NCE) and contrast-enhanced (CE) CT radiomics signatures (Rad-scores) as prognostic factors to help improve the prediction of the overall survival (OS) of postoperative colorectal cancer (CRC) patients. METHODS: A retrospective analysis was performed on 65 CRC patients who underwent surgical resection in our hospital as the training set, and 19 patient images retrieved from The Cancer Imaging Archive (TCIA) as the external validation set. In training, radiomics features were extracted from the preoperative NCE/CE-CT, then selected through a 5-fold cross-validation LASSO Cox method and used to construct Rad-scores. Models derived from Rad-scores and clinical factors were constructed and compared. Kaplan-Meier analyses were also used to compare the survival probability between the high- and low-risk Rad-score groups. Finally, a nomogram was developed to predict the OS. RESULTS: In training, a clinical model achieved a C-index of 0.796 (95% CI: 0.722-0.870), while the clinical and two Rad-scores combined model performed the best, achieving a C-index of 0.821 (95% CI: 0.743-0.899). Furthermore, the models with the CE-CT Rad-score yielded slightly better performance than those with NCE-CT in training. For the combined model with CE-CT Rad-scores, a C-index of 0.818 (95% CI: 0.742-0.894) and 0.774 (95% CI: 0.556-0.992) were achieved in the training and validation sets, respectively. Kaplan-Meier analysis demonstrated a significant difference in survival probability between the high- and low-risk groups. Finally, the areas under the receiver operating characteristic (ROC) curves for the model were 0.904, 0.777, and 0.843 for 1-, 3-, and 5-year survival, respectively. CONCLUSION: NCE-CT or CE-CT radiomics and clinical combined models can predict the OS for CRC patients, and both Rad-scores are recommended to be included when available.
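A hedged sketch of building a Rad-score with an L1-penalized Cox model, assuming the lifelines library and synthetic data; lifelines' smooth L1 penalty stands in for whatever LASSO Cox implementation the authors used.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n, p = 65, 20                                  # cohort size as in the training set
df = pd.DataFrame(rng.normal(size=(n, p)),
                  columns=[f"radiomic_{i}" for i in range(p)])
df["time"] = rng.exponential(40, n)            # follow-up, months (synthetic)
df["event"] = rng.integers(0, 2, n)            # 1 = event observed

cph = CoxPHFitter(penalizer=0.5, l1_ratio=1.0) # pure L1 (LASSO-style) penalty
cph.fit(df, duration_col="time", event_col="event")
# lifelines uses a smooth L1 approximation, so coefficients shrink toward
# (not exactly to) zero; a small cutoff acts as the selection step here
coefs = cph.summary["coef"]
selected = coefs.index[coefs.abs() > 0.01]
rad_score = df[selected] @ coefs.loc[selected] # linear predictor = Rad-score
print("retained features:", list(selected))
```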
Automatic vertebrae localization and segmentation in computed tomography (CT) are fundamental for computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems. However, they remain challenging due to the high variation in spinal anatomy among patients. In this paper, we propose a simple, model-free approach for automatic CT vertebrae localization and segmentation. The segmentation pipeline consists of 3 stages. In the first stage, the center line of the spinal cord is estimated using convolution. In the second stage, a baseline segmentation of the spine is created using morphological reconstruction and other classical image processing algorithms. Finally, the baseline spine segmentation is refined by limiting its boundaries using simple heuristics based on expert knowledge. We evaluated our method on the COVID-19 subdataset of the CTSpine1K dataset. Our solution achieved a Dice coefficient of 0.8160±0.0432 (mean±std) and an intersection over union of 0.6914±0.0618 for spine segmentation. The experimental results have demonstrated the feasibility of the proposed method in a real environment.
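One plausible reading of the pipeline's second stage, as a sketch under stated assumptions rather than the authors' code: seed a morphological reconstruction by dilation from bright bone-intensity voxels so that connected high-intensity spine structure is recovered. The intensity threshold and synthetic image are hypothetical.

```python
import numpy as np
from skimage.morphology import reconstruction

rng = np.random.default_rng(4)
ct = rng.normal(0, 50, (64, 64)).astype(np.float32)
ct[20:44, 28:36] += 400                 # synthetic bright "spine" column

# seed image: keep bone-intensity voxels, flatten everything else to the minimum
# (assumed threshold of 250; reconstruction by dilation requires seed <= mask)
seed = np.where(ct > 250, ct, ct.min())
baseline = reconstruction(seed, ct, method='dilation')
spine_mask = baseline > 250             # baseline spine segmentation
print("segmented voxels:", int(spine_mask.sum()))
```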
Correlation between MR Image-Based Radiomics Features and Risk Scores Associated with Gene Expression Profiles in Breast Cancer
Kim, Ga Ram
Ku, You Jin
Kim, Jun Ho
Kim, Eun-Kyung
Journal of the Korean Society of Radiology2020Journal Article, cited 0 times
Website
TCGA-BRCA
Radiogenomics
Radiomic features
Magnetic Resonance Imaging (MRI)
Quality of Radiomic Features in Glioblastoma Multiforme: Impact of Semi-Automated Tumor Segmentation Software
Lee, Myungeun
Woo, Boyeong
Kuo, Michael D
Jamshidi, Neema
Kim, Jong Hyo
Korean journal of radiology2017Journal Article, cited 7 times
Website
TCGA-GBM
Radiomics
BRAIN
Magnetic Resonance Imaging (MRI)
Cancer as a Model System for Testing Metabolic Scaling Theory
Brummer, Alexander B.
Savage, Van M.
Frontiers in Ecology and Evolution2021Journal Article, cited 0 times
Website
NSCLC Radiogenomics
LUNG
Biological allometries, such as the scaling of metabolism to mass, are hypothesized to result from natural selection to maximize how vascular networks fill space yet minimize internal transport distances and resistance to blood flow. Metabolic scaling theory argues two guiding principles—conservation of fluid flow and space-filling fractal distributions—describe a diversity of biological networks and predict how the geometry of these networks influences organismal metabolism. Yet, mostly absent from past efforts are studies that directly, and independently, measure metabolic rate from respiration and vascular architecture for the same organ, organism, or tissue. Lack of these measures may lead to inconsistent results and conclusions about metabolism, growth, and allometric scaling. We present simultaneous and consistent measurements of metabolic scaling exponents from clinical images of lung cancer, serving as a first-of-its-kind test of metabolic scaling theory, and identifying potential quantitative imaging biomarkers indicative of tumor growth. We analyze data for 535 clinical PET-CT scans of patients with non-small cell lung carcinoma to establish the presence of metabolic scaling between tumor metabolism and tumor volume. Furthermore, we use computer vision and mathematical modeling to examine predictions of metabolic scaling based on the branching geometry of the tumor-supplying blood vessel networks in a subset of 56 patients diagnosed with stage II-IV lung cancer. Examination of the scaling of maximum standard uptake value with metabolic tumor volume, and metabolic tumor volume with gross tumor volume, yields metabolic scaling exponents of 0.64 (0.20) and 0.70 (0.17), respectively. We compare these to the value of 0.85 (0.06) derived from the geometric scaling of the tumor-supplying vasculature. These results: (1) inform energetic models of growth and development for tumor forecasting; (2) identify imaging biomarkers in vascular geometry related to blood volume and flow; and (3) highlight unique opportunities to develop and test the metabolic scaling theory of ecology in tumors transitioning from avascular to vascular geometries.
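Scaling exponents like those reported here are typically estimated by ordinary least squares in log-log space, since a power law y = a·x^b becomes linear there. A minimal sketch on synthetic data generated with a known exponent:

```python
import numpy as np

rng = np.random.default_rng(5)
mtv = rng.lognormal(3.0, 1.0, 200)                       # tumor volumes (synthetic)
suv_max = 2.0 * mtv ** 0.64 * rng.lognormal(0, 0.2, 200) # true exponent 0.64

# fit log(suv_max) = b * log(mtv) + log(a); the slope b is the scaling exponent
b, log_a = np.polyfit(np.log(mtv), np.log(suv_max), 1)
print(f"estimated scaling exponent: {b:.2f} (data generated with 0.64)")
```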
Joint Modeling of RNAseq and Radiomics Data for Glioma Molecular Characterization and Prediction
Shboul, Z. A.
Diawara, N.
Vossough, A.
Chen, J. Y.
Iftekharuddin, K. M.
Front Med (Lausanne)2021Journal Article, cited 0 times
Website
BraTS-TCGA-LGG
RNA sequencing
Radiogenomics
Radiomics
RNA sequencing (RNAseq) is a recent technology that profiles gene expression by measuring the relative frequency of the RNAseq reads. RNAseq read count data are increasingly used in oncologic care, while radiology features (radiomics) have also been gaining utility in radiology practice for tasks such as disease diagnosis, monitoring, and treatment planning. However, contemporary literature lacks appropriate RNA-radiomics (henceforth, radiogenomics) joint modeling where the RNAseq distribution is adaptive and also preserves the nature of RNAseq read count data for glioma grading and prediction. The Negative Binomial (NB) distribution may be useful to model RNAseq read count data in a way that addresses these potential shortcomings. In this study, we propose a novel radiogenomics-NB model for glioma grading and prediction. Our radiogenomics-NB model is developed based on differentially expressed RNAseq and selected radiomics/volumetric features which characterize tumor volume and sub-regions. The NB distribution is fitted to RNAseq count data, and a log-linear regression model is assumed to link the estimated NB mean and the radiomics features. Three radiogenomics-NB molecular mutation models (e.g., IDH mutation, 1p/19q codeletion, and ATRX mutation) are investigated. Additionally, we explore gender-specific effects on the radiogenomics-NB models. Finally, we compare the performance of the proposed three mutation prediction radiogenomics-NB models with different well-known methods in the literature: Negative Binomial Linear Discriminant Analysis (NBLDA), differentially expressed RNAseq with Random Forest (RF-genomics), radiomics and differentially expressed RNAseq with Random Forest (RF-radiogenomics), and Voom-based count transformation combined with the nearest shrinkage classifier (VoomNSC). Our analysis shows that the proposed radiogenomics-NB model significantly outperforms the competing models (ANOVA test, p < 0.05) for prediction of IDH and ATRX mutations and offers similar performance for prediction of 1p/19q codeletion.
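The model's core link, an NB-distributed count whose mean is log-linear in radiomics covariates, can be sketched with a standard negative binomial GLM (a hedged illustration on synthetic data, not the authors' radiogenomics-NB code; the dispersion parameter alpha is an assumption):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
radiomics = rng.normal(size=(n, 3))                 # three synthetic radiomics features
X = sm.add_constant(radiomics)
mu = np.exp(1.0 + 0.8 * X[:, 1] - 0.5 * X[:, 2])    # log-linear NB mean
counts = rng.poisson(mu * rng.gamma(2.0, 0.5, size=n))  # overdispersed read counts

nb = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(nb.params)  # coefficients linking radiomics to the NB mean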
COVID-Net CT-2: Enhanced Deep Neural Networks for Detection of COVID-19 From Chest CT Images Through Bigger, More Diverse Learning
Gunraj, Hayden
Sabri, Ali
Koff, David
Wong, Alexander
2022Journal Article, cited 0 times
CT Images in COVID-19
LIDC-IDRI
The COVID-19 pandemic continues to rage on, with multiple waves causing substantial harm to health and economies around the world. Motivated by the use of computed tomography (CT) imaging at clinical institutes around the world as an effective complementary screening method to RT-PCR testing, we introduced COVID-Net CT, a deep neural network tailored for detection of COVID-19 cases from chest CT images, along with a large curated benchmark dataset comprising 1,489 patient cases as part of the open-source COVID-Net initiative. However, one potential limiting factor is restricted data quantity and diversity given the single nation patient cohort used in the study. To address this limitation, in this study we introduce enhanced deep neural networks for COVID-19 detection from chest CT images which are trained using a large, diverse, multinational patient cohort. We accomplish this through the introduction of two new CT benchmark datasets, the largest of which comprises a multinational cohort of 4,501 patients from at least 16 countries. To the best of our knowledge, this represents the largest, most diverse multinational cohort for COVID-19 CT images in open-access form. Additionally, we introduce a novel lightweight neural network architecture called COVID-Net CT S, which is significantly smaller and faster than the previously introduced COVID-Net CT architecture. We leverage explainability to investigate the decision-making behavior of the trained models and ensure that decisions are based on relevant indicators, with the results for select cases reviewed and reported on by two board-certified radiologists with over 10 and 30 years of experience, respectively. The best-performing deep neural network in this study achieved accuracy, COVID-19 sensitivity, positive predictive value, specificity, and negative predictive value of 99.0%/99.1%/98.0%/99.4%/99.7%, respectively. Moreover, explainability-driven performance validation shows consistency with radiologist interpretation by leveraging correct, clinically relevant critical factors. The results are promising and suggest the strong potential of deep neural networks as an effective tool for computer-aided COVID-19 assessment. While not a production-ready solution, we hope the open-source, open-access release of COVID-Net CT-2 and the associated benchmark datasets will continue to enable researchers, clinicians, and citizen data scientists alike to build upon them.
Radiomics analysis of contrast-enhanced CT scans can distinguish between clear cell and non-clear cell renal cell carcinoma in different imaging protocols
Budai, Bettina Katalin
Stollmayer, Róbert
Rónaszéki, Aladár Dávid
Körmendy, Borbála
Zsombor, Zita
Palotás, Lõrinc
Fejér, Bence
Szendrõi, Attila
Székely, Eszter
Maurovich-Horvat, Pál
Kaposi, Pál Novák
2022Journal Article, cited 0 times
C4KC-KiTS
TCGA-KIRC
TCGA-KIRP
Introduction: This study aimed to construct a radiomics-based machine learning (ML) model for differentiation between non-clear cell and clear cell renal cell carcinomas (ccRCC) that is robust to variation in institutional imaging protocols and scanners.
Materials and methods: Preoperative unenhanced (UN), corticomedullary (CM), and excretory (EX) phase CT scans from 209 patients diagnosed with RCCs were retrospectively collected. After the three-dimensional segmentation, 107 radiomics features (RFs) were extracted from the tumor volumes in each contrast phase. For the ML analysis, the cases were randomly split into training and test sets with a 3:1 ratio. Highly correlated RFs were filtered out based on Pearson's correlation coefficient (r > 0.95). Intraclass correlation coefficient analysis was used to select RFs with excellent reproducibility (ICC ≥ 0.90). The most predictive RFs were selected by the least absolute shrinkage and selection operator (LASSO). A support vector machine algorithm-based binary classifier (SVC) was constructed to predict tumor types, and its performance was evaluated based on receiver operating characteristic (ROC) curve analysis. The "Kidney Tumor Segmentation 2019" (KiTS19) publicly available dataset was used for external validation of the model. The performance of the SVC was also compared with that of an expert radiologist.
Results: The training set consisted of 121 ccRCCs and 38 non-ccRCCs, while the independent internal test set contained 40 ccRCCs and 13 non-ccRCCs. For external validation, 50 ccRCCs and 23 non-ccRCCs were identified from the KiTS19 dataset with the available UN, CM, and EX phase CTs. After filtering out the highly correlated and poorly reproducible features, the LASSO algorithm selected 10 CM phase RFs that were then used for model construction. During external validation, the SVC achieved an area under the ROC curve (AUC) value, accuracy, sensitivity, and specificity of 0.83, 0.78, 0.80, and 0.74, respectively. UN and/or EX phase RFs did not further increase the model's performance. Meanwhile, in the same comparison, the expert radiologist achieved similar performance with an AUC of 0.77, an accuracy of 0.79, a sensitivity of 0.84, and a specificity of 0.69.
Conclusion: Radiomics analysis of CM phase CT scans combined with ML can achieve comparable performance with an expert radiologist in differentiating ccRCCs from non-ccRCCs.
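The feature-selection chain described in the methods can be sketched on synthetic data as follows (an illustration under assumptions: the ICC reproducibility step is omitted, and an L1-penalized logistic regression stands in for LASSO because the endpoint is binary):

import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def drop_correlated(X, threshold=0.95):
    # Indices of features kept after removing pairs with |r| > threshold
    corr = np.corrcoef(X, rowvar=False)
    keep = []
    for j in range(X.shape[1]):
        if all(abs(corr[j, k]) <= threshold for k in keep):
            keep.append(j)
    return np.array(keep)

rng = np.random.default_rng(3)
X = rng.normal(size=(212, 107))                     # 107 radiomics features
y = (X[:, 0] + X[:, 5] + rng.normal(size=212)) > 0  # synthetic ccRCC labels

kept = drop_correlated(X)
X_tr, X_te, y_tr, y_te = train_test_split(X[:, kept], y, test_size=0.25, random_state=0)
model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    SVC(probability=True),
)
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))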
Medical decision support system using weakly-labeled lung CT scans
Murillo-González, Alejandro
González, David
Jaramillo, Laura
Galeano, Carlos
Tavera, Fabby
Mejía, Marcia
Hernández, Alejandro
Rivera, David Restrepo
Paniagua, J. G.
Ariza-Jiménez, Leandro
Echeverri, José Julián Garcés
León, Christian Andrés Diaz
Serna-Higuita, Diana Lucia
Barrios, Wayner
Arrázola, Wiston
Mejía, Miguel Ángel
Arango, Sebastián
Ramírez, Daniela Marín
Salinas-Miranda, Emmanuel
Quintero, O. L.
2022Journal Article, cited 0 times
4D-Lung
COVID-19-AR
LungCT-Diagnosis
PleThora
SPIE-AAPM Lung CT Challenge
Purpose: To determine and develop an effective set of models leveraging artificial intelligence techniques for a system able to support clinical practitioners working with COVID-19 patients. The system involves a pipeline including classification, lung and lesion segmentation, and lesion quantification of axial lung CT studies.
Approach: A deep neural network architecture based on DenseNet is introduced for the classification of weakly-labeled, variable-sized (and possibly sparse) axial lung CT scans. The models are trained and tested on aggregated, publicly available data sets with over 10 categories. To further assess the models, a data set was collected from multiple medical institutions in Colombia, which includes healthy patients, COVID-19 patients, and patients with other diseases. It is composed of 1,322 CT studies from a diverse set of CT machines and institutions that comprise over 550,000 slices. Each CT study was labeled based on a clinical test, and no per-slice annotation took place. This enabled a classification into Normal vs. Abnormal patients, and for those that were considered abnormal, an extra classification step into Abnormal (other diseases) vs. COVID-19. Additionally, the pipeline features a methodology to segment and quantify lesions of COVID-19 patients on the complete CT study, enabling easier localization and progress tracking. Moreover, multiple ablation studies were performed to appropriately assess the elements composing the classification pipeline.
Results: The best performing lung CT study classification models achieved 0.83 accuracy, 0.79 sensitivity, 0.87 specificity, 0.82 F1 score and 0.85 precision for the Normal vs. Abnormal task. For the Abnormal vs. COVID-19 task, the model obtained 0.86 accuracy, 0.81 sensitivity, 0.91 specificity, 0.84 F1 score and 0.88 precision. The ablation studies showed that using the complete CT study in the pipeline resulted in greater classification performance, restating that relevant COVID-19 patterns cannot be ignored towards the top and bottom of the lung volume.
Discussion: The lung CT classification architecture introduced has shown that it can handle weakly-labeled, variable-sized and possibly sparse axial lung studies, reducing the need for expert annotations at a per-slice level.
Conclusions: This work presents a working methodology that can guide the development of decision support systems for clinical reasoning in future interventional or prospective studies.
DeepNet model empowered cuckoo search algorithm for the effective identification of lung cancer nodules
John, Grace
Baskar, S
2023Journal Article, cited 0 times
Lung-PET-CT-Dx
Introduction: Globally, lung cancer is one of the most harmful types of cancer. An efficient diagnosis system can enable pathologists to recognize the type and nature of lung nodules and the mode of therapy to increase the patient's chance of survival. Hence, implementing an automatic and reliable system to segment lung nodules from a computed tomography (CT) image is useful in the medical industry.
Methods: This study develops a novel fully convolutional deep neural network (hereafter called DeepNet) model for segmenting lung nodules from CT scans. This model includes an encoder/decoder network that achieves pixel-wise image segmentation. The encoder network exploits a Visual Geometry Group (VGG-19) model as a base architecture, while the decoder network exploits 16 upsampling and deconvolution modules. The encoder used in this model has a very flexible structural design that can be modified and trained for any resolution based on the size of input scans. The decoder network upsamples and maps the low-resolution attributes of the encoder. Thus, there is a considerable drop in the number of variables used for the learning process as the network recycles the pooling indices of the encoder for segmentation. A thresholding method and the cuckoo search algorithm determine the most useful features when categorizing cancer nodules.
Results and discussion: The effectiveness of the intended DeepNet model is carefully assessed on the real-world database known as The Cancer Imaging Archive (TCIA) dataset, and its effectiveness is demonstrated by comparing its performance with other modern segmentation models in terms of selected performance measures. The empirical analysis reveals that DeepNet significantly outperforms other prevalent segmentation algorithms, with a volume error of 0.962 ± 0.023%, a Dice similarity coefficient of 0.968 ± 0.011, a Jaccard similarity index of 0.856 ± 0.011, and an average processing time of 0.045 ± 0.005 s.
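The "recycling of pooling indices" mentioned in the methods is the mechanism popularized by SegNet; a compact PyTorch sketch of the idea (an assumption-level illustration, not the published DeepNet):

import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)  # save indices
        self.unpool = nn.MaxUnpool2d(2, stride=2)                   # reuse them
        self.dec = nn.Conv2d(16, n_classes, 3, padding=1)

    def forward(self, x):
        f = self.enc(x)
        pooled, idx = self.pool(f)     # encoder stores the pooling indices
        up = self.unpool(pooled, idx)  # decoder recycles them to upsample
        return self.dec(up)

logits = TinyEncoderDecoder()(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64])

Because the upsampling locations come from stored indices rather than learned deconvolution weights, the number of trainable variables drops, as the abstract notes.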
Identification of a 6-RBP gene signature for a comprehensive analysis of glioma and ischemic stroke: Cognitive impairment and aging-related hypoxic stress
Lin, Weiwei
Wang, Qiangwei
Chen, Yisheng
Wang, Ning
Ni, Qingbin
Qi, Chunhua
Wang, Qian
Zhu, Yongjian
2022Journal Article, cited 0 times
DICOM-Glioma-SEG
There is mounting evidence that ischemic cerebral infarction contributes to vascular cognitive impairment and dementia in the elderly. Ischemic stroke and glioma are two of the most fatal diseases worldwide, and they promote each other's development through some common underlying mechanisms. As post-transcriptional regulatory proteins, RNA-binding proteins are important in the development of tumors and ischemic stroke (IS). The purpose of this study was to search for a group of RNA-binding protein (RBP) gene markers related to the prognosis of glioma and the occurrence of IS, and to elucidate their underlying mechanisms in glioma and IS. First, a 6-RBP (POLR2F, DYNC1H1, SMAD9, TRIM21, BRCA1, and ERI1) gene signature (RBPS) showing independent prognostic value for overall survival was identified using the transcriptome data from the TCGA-glioma cohort (n = 677); it was then independently verified in the CGGA-glioma cohort (n = 970). A nomogram, including RBPS, 1p19q codeletion, radiotherapy, chemotherapy, grade, and age, was established to predict the overall survival of patients with glioma, convenient for further clinical translation. In addition, an automatic machine learning classification model based on radiomics features from MRI was developed to stratify patients according to RBPS risk. The RBPS was associated with immunosuppression, energy metabolism, and tumor growth of gliomas. Subsequently, the six RBP genes from blood samples showed good classification performance for IS diagnosis (AUC = 0.95, 95% CI: 0.902-0.997). The RBPS was associated with hypoxic responses, angiogenesis, and increased coagulation in IS. Upregulation of SMAD9 was associated with dementia, while downregulation of POLR2F was associated with aging-related hypoxic stress. Irf5/Trim21 in microglia and Taf7/Trim21 in pericytes from the mouse cerebral cortex were identified as RBPS-related molecules in each cell type under hypoxic conditions. The RBPS is expected to serve as a novel biomarker for studying the common mechanisms underlying glioma and IS.
Prediction of 1p/19q Codeletion in Diffuse Glioma Patients Using Preoperative Multiparametric Magnetic Resonance Imaging
Kim, Donnie
Wang, Nicholas C
Ravikumar, Visweswaran
Raghuram, DR
Li, Jinju
Patel, Ankit
Wendt, Richard E
Rao, Ganesh
Rao, Arvind
Frontiers in Computational Neuroscience2019Journal Article, cited 0 times
glioma
BRATS
radiogenomics
CNN
Automatic Brain Tumor Segmentation Based on Cascaded Convolutional Neural Networks With Uncertainty Estimation
Wang, Guotai
Li, Wenqi
Ourselin, Sébastien
Vercauteren, Tom
Frontiers in Computational Neuroscience2019Journal Article, cited 0 times
BraTS-TCGA-GBM
Automatic segmentation of brain tumors from medical images is important for clinical assessment and treatment planning of brain tumors. Recent years have seen an increasing use of convolutional neural networks (CNNs) for this task, but most of them use either 2D networks with relatively low memory requirements that ignore 3D context, or 3D networks that exploit 3D features but have large memory consumption. In addition, existing methods rarely provide uncertainty information associated with the segmentation result. We propose a cascade of CNNs to segment brain tumors with hierarchical subregions from multi-modal Magnetic Resonance images (MRI), and introduce a 2.5D network that is a trade-off between memory consumption, model complexity and receptive field. In addition, we employ test-time augmentation to achieve improved segmentation accuracy, which also provides voxel-wise and structure-wise uncertainty information of the segmentation result. Experiments with the BraTS 2017 dataset showed that our cascaded framework with 2.5D CNNs was one of the top performing methods (second-rank) for the BraTS challenge. We also validated our method with the BraTS 2018 dataset and found that test-time augmentation improves brain tumor segmentation accuracy and that the resulting uncertainty information can indicate potential mis-segmentations and help to improve segmentation accuracy.
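The test-time augmentation step can be sketched as follows (a hedged illustration: flips only, sigmoid outputs, and a placeholder model; the paper's augmentation set and uncertainty measures are richer):

import torch

def tta_predict(model, volume, flip_dims=((), (2,), (3,), (4,))):
    # volume: (N, C, D, H, W); returns mean probability and voxel-wise variance
    preds = []
    with torch.no_grad():
        for dims in flip_dims:
            x = torch.flip(volume, dims) if dims else volume
            p = torch.sigmoid(model(x))
            preds.append(torch.flip(p, dims) if dims else p)  # undo the flip
    stack = torch.stack(preds)
    return stack.mean(dim=0), stack.var(dim=0)

mean_p, var_p = tta_predict(lambda x: x[:, :1], torch.randn(2, 4, 16, 16, 16))
print(mean_p.shape, var_p.shape)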
Multivariate Analysis of Preoperative Magnetic Resonance Imaging Reveals Transcriptomic Classification of de novo Glioblastoma Patients
Rathore, Saima
Akbari, Hamed
Bakas, Spyridon
Pisapia, Jared M
Shukla, Gaurav
Rudie, Jeffrey D
Da, Xiao
Davuluri, Ramana V
Dahmane, Nadia
O'Rourke, Donald M
Frontiers in Computational Neuroscience2019Journal Article, cited 0 times
TCGA-GBM
BRATS
radiogenomics
brain
Multi-Disease Segmentation of Gliomas and White Matter Hyperintensities in the BraTS Data Using a 3D Convolutional Neural Network
Rudie, Jeffrey D.
Weiss, David A.
Saluja, Rachit
Rauschecker, Andreas M.
Wang, Jiancong
Sugrue, Leo
Bakas, Spyridon
Colby, John B.
Frontiers in Computational Neuroscience2019Journal Article, cited 0 times
Radiomics
BraTS
Segmentation
An important challenge in segmenting real-world biomedical imaging data is the presence of multiple disease processes within individual subjects. Most adults above age 60 exhibit a variable degree of small vessel ischemic disease, as well as chronic infarcts, which will manifest as white matter hyperintensities (WMH) on brain MRIs. Subjects diagnosed with gliomas will also typically exhibit some degree of abnormal T2 signal due to WMH, rather than just due to tumor. We sought to develop a fully automated algorithm to distinguish and quantify these distinct disease processes within individual subjects' brain MRIs. To address this multi-disease problem, we trained a 3D U-Net to distinguish between abnormal signal arising from tumors vs. WMH in the 3D multi-parametric MRI (mpMRI, i.e., native T1-weighted, T1-post-contrast, T2, T2-FLAIR) scans of the International Brain Tumor Segmentation (BraTS) 2018 dataset (training n = 285, validation n = 66). Our trained neuroradiologist manually annotated WMH on the BraTS training subjects, finding that 69% of subjects had WMH. Our 3D U-Net model had a 4-channel 3D input patch (80 × 80 × 80) from mpMRI, four encoding and decoding layers, and an output of either four [background, active tumor (AT), necrotic core (NCR), peritumoral edematous/infiltrated tissue (ED)] or five classes (adding WMH as the fifth class). For both the four- and five-class output models, the median Dice for whole tumor (WT) extent (i.e., union of AT, ED, NCR) was 0.92 in both training and validation sets. Notably, the five-class model achieved significantly (p = 0.002) lower/better Hausdorff distances for WT extent in the training subjects. There was strong positive correlation between manually segmented and predicted volumes for WT (r = 0.96) and WMH (r = 0.89). Larger lesion volumes were positively correlated with higher/better Dice scores for WT (r = 0.33), WMH (r = 0.34), and across all lesions (r = 0.89) on a log10-transformed scale. While the median Dice for WMH was 0.42 across training subjects with WMH, the median Dice was 0.62 for those with at least 5 cm³ of WMH. We anticipate the development of computational algorithms that are able to model multiple diseases within a single subject will be a critical step toward translating and integrating artificial intelligence systems into the heterogeneous real-world clinical workflow.
Novel Volumetric Sub-region Segmentation in Brain Tumors
Banerjee, Subhashis
Mitra, Sushmita
Frontiers in Computational Neuroscience2020Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
A novel deep learning based model called Multi-Planar Spatial Convolutional Neural Network (MPS-CNN) is proposed for effective, automated segmentation of different sub-regions, viz. peritumoral edema (ED), necrotic core (NCR), and enhancing and non-enhancing tumor core (ET/NET), from multi-modal MR images of the brain. An encoder-decoder type CNN model is designed for pixel-wise segmentation of the tumor along three anatomical planes (axial, sagittal, and coronal) at the slice level. These are then combined, by incorporating a consensus fusion strategy with a fully connected Conditional Random Field (CRF) based post-refinement, to produce the final volumetric segmentation of the tumor and its constituent sub-regions. Concepts such as spatial pooling and unpooling are used to preserve the spatial locations of the edge pixels, for reducing segmentation error around the boundaries. A new aggregated loss function is also developed for effectively handling data imbalance. The MPS-CNN is trained and validated on the recent Multimodal Brain Tumor Segmentation Challenge (BraTS) 2018 dataset. The Dice scores obtained for the validation set for whole tumor (WT: NCR/NET + ET + ED), tumor core (TC: NCR/NET + ET), and enhancing tumor (ET) are 0.90216, 0.87247, and 0.82445, respectively. The proposed MPS-CNN is found to perform the best (based on leaderboard scores) for the ET and TC segmentation tasks, in terms of both quantitative measures (viz. Dice and Hausdorff). For WT segmentation it also achieved the second-highest accuracy, with a score only 1% less than that of the best performing method.
Segmenting Brain Tumor Using Cascaded V-Nets in Multimodal MR Images
Hua, Rui
Huo, Quan
Gao, Yaozong
Sui, He
Zhang, Bing
Sun, Yu
Mo, Zhanhao
Shi, Feng
Frontiers in Computational Neuroscience2020Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
In this work, we propose a novel cascaded V-Nets method to segment brain tumor substructures in multimodal brain magnetic resonance imaging. Although V-Net has been successfully used in many segmentation tasks, we demonstrate that its performance could be further enhanced by using a cascaded structure and ensemble strategy. Briefly, our baseline V-Net consists of four levels with encoding and decoding paths and intra- and inter-path skip connections. Focal loss is chosen to improve performance on hard samples as well as balance the positive and negative samples. We further propose three preprocessing pipelines for multimodal magnetic resonance images to train different models. By ensembling the segmentation probability maps obtained from these models, the segmentation result is further improved. On the other hand, we propose to segment the whole tumor first, and then divide it into tumor necrosis, edema, and enhancing tumor. Experimental results on the BraTS 2018 online validation set achieve average Dice scores of 0.9048, 0.8364, and 0.7748 for whole tumor, tumor core and enhancing tumor, respectively. The corresponding values for the BraTS 2018 online testing set are 0.8761, 0.7953, and 0.7364, respectively. We also evaluate the proposed method in two additional data sets from local hospitals comprising 28 and 28 subjects, and the best results are 0.8635, 0.8036, and 0.7217, respectively. We further predict patient overall survival by ensembling multiple classifiers for long, mid and short groups, and achieve an accuracy of 0.519, a mean square error of 367,240 and a Spearman correlation coefficient of 0.168 for the BraTS 2018 online testing set.
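The focal loss the pipeline chooses has a standard binary formulation, FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t); a minimal PyTorch sketch (alpha and gamma values are assumptions, not the authors' settings):

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # binary focal loss; targets are 0/1 floats with the same shape as logits
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                              # p_t = p if y=1 else 1-p
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

loss = focal_loss(torch.randn(2, 1, 16, 16, 16),
                  torch.randint(0, 2, (2, 1, 16, 16, 16)).float())
print(loss.item())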
A Novel Approach for Fully Automatic Intra-Tumor Segmentation With 3D U-Net Architecture for Gliomas
Baid, Ujjwal
Talbar, Sanjay
Rane, Swapnil
Gupta, Sudeep
Thakur, Meenakshi H.
Moiyadi, Aliasgar
Sable, Nilesh
Akolkar, Mayuresh
Mahajan, Abhishek
Frontiers in Computational Neuroscience2020Journal Article, cited 0 times
BraTS-TCGA-GBM
Purpose: Gliomas are the most common primary brain malignancies, with varying degrees of aggressiveness and prognosis. Understanding of tumor biology and intra-tumor heterogeneity is necessary for planning personalized therapy and predicting response to therapy. Accurate tumoral and intra-tumoral segmentation on MRI is the first step toward understanding the tumor biology through computational methods. The purpose of this study was to design a segmentation algorithm and evaluate its performance on pre-treatment brain MRIs obtained from patients with gliomas. Materials and Methods: In this study, we have designed a novel 3D U-Net architecture that segments various radiologically identifiable sub-regions like edema, enhancing tumor, and necrosis. A weighted patch extraction scheme from the tumor border regions is proposed to address the problem of class imbalance between tumor and non-tumorous patches. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. The Deep Convolutional Neural Network (DCNN) based architecture is trained on 285 patients, validated on 66 patients and tested on 191 patients with glioma from the Brain Tumor Segmentation (BraTS) 2018 challenge dataset. Three-dimensional patches are extracted from the multi-channel BraTS training dataset to train the 3D U-Net architecture. The efficacy of the proposed approach is also tested on an independent dataset of 40 patients with High Grade Glioma from our tertiary cancer center. Segmentation results are assessed in terms of Dice Score, Sensitivity, Specificity, and Hausdorff 95 distance. Results: Our proposed architecture achieved Dice scores of 0.88, 0.83, and 0.75 for the whole tumor, tumor core and enhancing tumor, respectively, on the BraTS validation dataset and 0.85, 0.77, 0.67 on the test dataset. The results were similar on the independent patients' dataset from our hospital, achieving Dice scores of 0.92, 0.90, and 0.81 for the whole tumor, tumor core and enhancing tumor, respectively. Conclusion: The results of this study show the potential of patch-based 3D U-Net for accurate intra-tumor segmentation. From experiments, it is observed that the weighted patch-based segmentation approach gives comparable performance to the pixel-based approach when there is a thin boundary between tumor subparts.
Deep Learning-Based Concurrent Brain Registration and Tumor Segmentation
Estienne, T.
Lerousseau, M.
Vakalopoulou, M.
Alvarez Andres, E.
Battistella, E.
Carre, A.
Chandra, S.
Christodoulidis, S.
Sahasrabudhe, M.
Sun, R.
Robert, C.
Talbot, H.
Paragios, N.
Deutsch, E.
Front Comput Neurosci2020Journal Article, cited 15 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
brain tumor segmentation
convolutional neural networks (CNN)
deep learning
deformable registration
multi-task networks
Algorithm Development
Image registration and segmentation are the two most studied problems in medical image analysis. Deep learning algorithms have recently gained a lot of attention due to their success and state-of-the-art results in a variety of problems and communities. In this paper, we propose a novel, efficient, and multi-task algorithm that addresses the problems of image registration and brain tumor segmentation jointly. Our method exploits the dependencies between these tasks through a natural coupling of their interdependencies during inference. In particular, the similarity constraints are relaxed within the tumor regions using an efficient and relatively simple formulation. We evaluated the performance of our formulation both quantitatively and qualitatively for registration and segmentation problems on two publicly available datasets (BraTS 2018 and OASIS 3), reporting competitive results with other recent state-of-the-art methods. Moreover, our proposed framework reports significant amelioration (p < 0.005) in registration performance inside the tumor locations, providing a generic method that does not need any predefined conditions (e.g., absence of abnormalities) about the volumes to be registered. Our implementation is publicly available online at https://github.com/TheoEst/joint_registration_tumor_segmentation.
Overall Survival Prediction in Glioblastoma With Radiomic Features Using Machine Learning
Baid, Ujjwal
Rane, Swapnil U.
Talbar, Sanjay
Gupta, Sudeep
Thakur, Meenakshi H.
Moiyadi, Aliasgar
Mahajan, Abhishek
Frontiers in Computational Neuroscience2020Journal Article, cited 0 times
BraTS-TCGA-GBM
Glioblastoma is a WHO grade IV brain tumor that leads to poor overall survival (OS) of patients. For precise surgical and treatment planning, OS prediction of glioblastoma (GBM) patients is highly desired by clinicians and oncologists. Radiomics research attempts to predict disease prognosis, thus providing beneficial information for personalized treatment from a variety of imaging features extracted from multiple MR images. In this study, first-order, intensity-based, volume- and shape-based, and textural radiomic features are extracted from fluid-attenuated inversion recovery (FLAIR) and T1ce MRI data. The region of interest is further decomposed with the stationary wavelet transform using low-pass and high-pass filtering. Radiomic features are then extracted from these decomposed images, which helps capture directional information. The efficiency of the proposed algorithm is evaluated on the Brain Tumor Segmentation (BraTS) challenge training, validation, and test datasets, where the proposed approach achieved scores of 0.695, 0.571, and 0.558, respectively. The proposed approach secured the third position in the BraTS 2018 challenge for the OS prediction task.
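The wavelet step can be sketched with a one-level stationary wavelet transform, computing simple first-order features per directional sub-band (an assumed workflow on a toy ROI, not the authors' code):

import numpy as np
import pywt

roi = np.random.rand(32, 32, 32)                    # stand-in tumor ROI
(coeffs,) = pywt.swtn(roi, wavelet="db1", level=1)  # 8 sub-bands: 'aaa' ... 'ddd'

features = {}
for band, arr in coeffs.items():                    # e.g., 'aad' = low-low-high filtering
    features[f"{band}_mean"] = float(arr.mean())
    features[f"{band}_energy"] = float((arr ** 2).sum())
print(sorted(features)[:4])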
Survival prediction for patients with glioblastoma multiforme using a Cox proportional hazards denoising autoencoder network
Yan, Ting
Yan, Zhenpeng
Liu, Lili
Zhang, Xiaoyu
Chen, Guohui
Xu, Feng
Li, Ying
Zhang, Lijuan
Peng, Meilan
Wang, Lu
Li, Dandan
Zhao, Dong
Frontiers in Computational Neuroscience2023Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Objectives: This study aimed to establish and validate a prognostic model based on magnetic resonance imaging and clinical features to predict the survival time of patients with glioblastoma multiforme (GBM).
Methods: In this study, a convolutional denoising autoencoder (DAE) network combined with the loss function of the Cox proportional hazard regression model was used to extract features for survival prediction. In addition, the Kaplan-Meier curve, the Schoenfeld residual analysis, the time-dependent receiver operating characteristic curve, the nomogram, and the calibration curve were used to assess the survival prediction ability.
Results: The concordance index (C-index) of the survival prediction model, which combines the DAE and the Cox proportional hazard regression model, reached 0.78 in the training set, 0.75 in the validation set, and 0.74 in the test set. Patients were divided into high- and low-risk groups based on the median prognostic index (PI). The Kaplan-Meier curve was used for survival analysis (p < 2e-16 in the training set, p = 3e-04 in the validation set, and p = 0.007 in the test set), which showed that the survival probability of different groups was significantly different, and the PI of the network played an influential role in the prediction of survival probability. In the residual verification of the PI, the fitting curve of the scatter plot was roughly parallel to the x-axis, and the p-value of the test was 0.11, indicating that the PI and survival time were independent of each other and the survival prediction ability of the PI was less affected than survival time. The areas under the curve of the training set were 0.843, 0.871, 0.903, and 0.941; those of the validation set were 0.687, 0.895, 1.000, and 0.967; and those of the test set were 0.757, 0.852, 0.683, and 0.898.
Conclusion: The survival prediction model, which combines the DAE and the Cox proportional hazard regression model, can effectively predict the prognosis of patients with GBM.
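The coupling of a network-derived prognostic index (PI) with the Cox model can be written as a negative Cox partial log-likelihood training loss; a minimal PyTorch sketch (an assumed formulation ignoring tied event times, not the authors' DAE code):

import torch

def cox_ph_loss(risk, time, event):
    # risk, time, event: 1-D tensors; event = 1 if death observed
    order = torch.argsort(time, descending=True)   # build risk sets by sorting
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)   # log-sum over each risk set
    return -((risk - log_cumsum) * event).sum() / event.sum()

loss = cox_ph_loss(torch.randn(8), torch.rand(8),
                   torch.tensor([1.0, 0, 1, 1, 0, 1, 0, 1]))
print(loss.item())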
Asymmetric Ensemble of Asymmetric U-Net Models for Brain Tumor Segmentation With Uncertainty Estimation
Rosas-Gonzalez, Sarahi
Birgui-Sekou, Taibou
Hidane, Moncef
Zemmoura, Ilyess
Tauber, Clovis
Frontiers in Neurology2021Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Accurate brain tumor segmentation is crucial for clinical assessment, follow-up, and subsequent treatment of gliomas. While convolutional neural networks (CNN) have become state of the art in this task, most proposed models either use 2D architectures ignoring 3D contextual information or 3D models requiring large memory capacity and extensive learning databases. In this study, an ensemble of two kinds of U-Net-like models based on both 3D and 2.5D convolutions is proposed to segment multimodal magnetic resonance images (MRI). The 3D model uses concatenated data in a modified U-Net architecture. In contrast, the 2.5D model is based on a multi-input strategy to extract low-level features from each modality independently and on a new 2.5D Multi-View Inception block that aims to merge features from different views of a 3D image aggregating multi-scale features. The Asymmetric Ensemble of Asymmetric U-Net (AE AU-Net) based on both is designed to find a balance between increasing multi-scale and 3D contextual information extraction and keeping memory consumption low. Experiments on the BraTS 2019 dataset show that our model improves enhancing tumor sub-region segmentation. Overall, performance is comparable with state-of-the-art results, although with less learning data or memory requirements. In addition, we provide voxel-wise and structure-wise uncertainties of the segmentation results, and we have established qualitative and quantitative relationships between uncertainty and prediction errors. Dice similarity coefficients for the whole tumor, tumor core, and tumor enhancing regions on the BraTS 2019 validation dataset were 0.902, 0.815, and 0.773, respectively. We also applied our method in BraTS 2018 with corresponding Dice score values of 0.908, 0.838, and 0.800.
The involvement of brain regions associated with lower KPS and shorter survival time predicts a poor prognosis in glioma
Bao, Hongbo
Wang, Huan
Sun, Qian
Wang, Yujie
Liu, Hui
Liang, Peng
Lv, Zhonghua
Frontiers in Neurology2023Journal Article, cited 0 times
Website
TCGA-LGG
LGG-1p19qDeletion
TCGA-GBM
Radiogenomics
Radiomics
Isocitrate dehydrogenase (IDH) mutation
Algorithm Development
Background: Isocitrate dehydrogenase-wildtype glioblastoma (IDH-wildtype GBM) and IDH-mutant astrocytoma have distinct biological behaviors and clinical outcomes. The location of brain tumors is closely associated not only with clinical symptoms and prognosis but also with key molecular alterations such as IDH. Therefore, we hypothesize that the key brain regions influencing the prognosis of glioblastoma and astrocytoma are likely to differ. This study aims to (1) identify specific regions that are associated with the Karnofsky Performance Scale (KPS) or overall survival (OS) in IDH-wildtype GBM and IDH-mutant astrocytoma and (2) test whether the involvement of these regions could act as a prognostic indicator. Methods: A total of 111 patients with IDH-wildtype GBM and 78 patients with IDH-mutant astrocytoma from the Cancer Imaging Archive database were included in the study. Voxel-based lesion-symptom mapping (VLSM) was used to identify key brain areas for lower KPS and shorter OS. Next, we analyzed the structural and cognitive dysfunction associated with these regions. The survival analysis was carried out using Kaplan–Meier survival curves. Another 72 GBM patients and 48 astrocytoma patients from Harbin Medical University Cancer Hospital were used as a validation cohort. Results: Tumors located in the insular cortex, parahippocampal gyrus, and middle and superior temporal gyrus of the left hemisphere tended to lead to lower KPS and shorter OS in IDH-wildtype GBM. The regions that were significantly correlated with lower KPS in IDH-mutant astrocytoma included the subcallosal cortex and cingulate gyrus. These regions were associated with diverse structural and cognitive impairments. The involvement of these regions was an independent predictor for shorter survival in both GBM and astrocytoma. Conclusion: This study identified the specific regions that were significantly associated with OS or KPS in glioma. The results may help neurosurgeons evaluate patient survival before surgery and understand the pathogenic mechanisms of glioma in depth.
Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer
Gutman, David A
Dunn Jr, William D
Cobb, Jake
Stoner, Richard M
Kalpathy-Cramer, Jayashree
Erickson, Bradley
Frontiers in Neuroinformatics2014Journal Article, cited 12 times
Website
Algorithm Development
XNAT
DICOM
Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a lightweight framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework that wraps the REST application programming interface (API) and queries the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser; navigate through projects, experiments, and subjects; and view DICOM images with accompanying metadata, all within a single viewing instance.
Recommendations for Processing Head CT Data
Muschelli, J.
Frontiers in Neuroinformatics2019Journal Article, cited 0 times
Website
Radiomics
CPTAC-GBM
CQ500 dataset
Computed Tomography (CT)
NIfTI
MATLAB
RESTful
image analysis
image de-identification
Many research applications of neuroimaging use magnetic resonance imaging (MRI). As such, recommendations for image analysis and standardized imaging pipelines exist. Clinical imaging, however, relies heavily on X-ray computed tomography (CT) scans for diagnosis and prognosis. Currently, there is only one image processing pipeline for head CT, which focuses mainly on head CT data with lesions. We present tools and a complete pipeline for processing CT data, focusing on open-source solutions that target head CT but are applicable to most CT analyses. We describe going from raw DICOM data to a spatially normalized brain within CT, presenting a full example with code. Overall, we recommend anonymizing data with Clinical Trials Processor, converting DICOM data to NIfTI using dcm2niix, using BET for brain extraction, and registration using a publicly available CT template for analysis.
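The recommended conversion and brain-extraction steps can be driven from Python; a hedged sketch (paths and output filenames are placeholders, dcm2niix and FSL's BET must be installed separately, and the exact options may differ from the paper's):

import subprocess
from pathlib import Path

dicom_dir = Path("patient001/dicom")   # hypothetical input folder
out_dir = Path("patient001/nifti")
out_dir.mkdir(parents=True, exist_ok=True)

# dcm2niix: -z y gzips the output NIfTI, -o sets the output directory
subprocess.run(["dcm2niix", "-z", "y", "-o", str(out_dir), str(dicom_dir)], check=True)

# FSL BET brain extraction; a low fractional intensity threshold is commonly
# used for CT (the output filename here is assumed)
subprocess.run(["bet", str(out_dir / "head_ct.nii.gz"),
                str(out_dir / "brain.nii.gz"), "-f", "0.01"], check=True)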
Analysis of Vestibular Labyrinthine Geometry and Variation in the Human Temporal Bone
Johnson Chacko, Lejo
Schmidbauer, Dominik T
Handschuh, Stephan
Reka, Alen
Fritscher, Karl D
Raudaschl, Patrik
Saba, Rami
Handler, Michael
Schier, Peter P
Baumgarten, Daniel
Fischer, Natalie
Pechriggl, Elisabeth J
Brenner, Erich
Hoermann, Romed
Glueckert, Rudolf
Schrott-Fischer, Anneliese
Frontiers in Neuroscience2018Journal Article, cited 4 times
Website
Vestibular Labyrinth
modeling
Computed Tomography (CT)
Stable posture and body movement in humans is dictated by the precise functioning of the ampulla organs in the semi-circular canals. Statistical analysis of the interrelationship between bony and membranous compartments within the semi-circular canals is dependent on the visualization of soft tissue structures. Thirty-one human inner ears were prepared, post-fixed with osmium tetroxide and decalcified for soft tissue contrast enhancement. High resolution X-ray microtomography images at 15 µm voxel size were manually segmented. This data served as templates for centerline generation and cross-sectional area extraction. Our estimates demonstrate the variability of individual specimens from averaged centerlines of both bony and membranous labyrinth. Centerline lengths and cross-sectional areas along these lines were identified from segmented data. Using centerlines weighted by the inverse squares of the cross-sectional areas, plane angles could be quantified. The fit planes indicate that the bony labyrinth resembles a Cartesian coordinate system more closely than the membranous labyrinth. A widening in the membranous labyrinth of the lateral semi-circular canal was observed in some of the specimens. Likewise, the cross-sectional areas in the perilymphatic spaces of the lateral canal differed from the other canals. For the first time we could precisely describe the geometry of the human membranous labyrinth based on a large sample size. Awareness of the variations in the canal geometry of the membranous and bony labyrinth would be a helpful reference in designing electrodes for future vestibular prosthesis and simulating fluid dynamics more precisely.
Brain Tumor Segmentation and Survival Prediction Using Multimodal MRI Scans With Deep Learning
Sun, Li
Zhang, Songtao
Chen, Hang
Luo, Lin
Frontiers in Neuroscience2019Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Gliomas are the most common primary brain malignancies. Accurate and robust tumor segmentation and prediction of patients' overall survival are important for diagnosis, treatment planning and risk factor identification. Here we present a deep learning-based framework for brain tumor segmentation and survival prediction in glioma, using multimodal MRI scans. For tumor segmentation, we use ensembles of three different 3D CNN architectures for robust performance through a majority rule. This approach can effectively reduce model bias and boost performance. For survival prediction, we extract 4,524 radiomic features from segmented tumor regions; a decision tree and cross-validation are then used to select potent features. Finally, a random forest model is trained to predict the overall survival of patients. In the 2018 MICCAI Multimodal Brain Tumor Segmentation Challenge (BraTS), our method ranked 2nd and 5th out of 60+ participating teams on the survival prediction and segmentation tasks, respectively, achieving a promising 61.0% accuracy in classifying short-survivors, mid-survivors and long-survivors.
Divide and Conquer: Stratifying Training Data by Tumor Grade Improves Deep Learning-Based Brain Tumor Segmentation
Rebsamen, Michael
Knecht, Urspeter
Reyes, Mauricio
Wiest, Roland
Meier, Raphael
McKinley, Richard
Frontiers in Neuroscience2019Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
It is a general assumption in deep learning that more training data leads to better performance, and that models will learn to generalize well across heterogeneous input data as long as that variety is represented in the training set. Segmentation of brain tumors is a well-investigated topic in medical image computing, owing primarily to the availability of a large publicly-available dataset arising from the long-running yearly Multimodal Brain Tumor Segmentation (BraTS) challenge. Research efforts and publications addressing this dataset focus predominantly on technical improvements of model architectures and less on properties of the underlying data. Using the dataset and the method ranked third in the BraTS 2018 challenge, we performed experiments to examine the impact of tumor type on segmentation performance. We propose to stratify the training dataset into high-grade glioma (HGG) and low-grade glioma (LGG) subjects and train two separate models. Although we observed only minor gains in overall mean dice scores by this stratification, examining case-wise rankings of individual subjects revealed statistically significant improvements. Compared to a baseline model trained on both HGG and LGG cases, two separately trained models led to better performance in 64.9% of cases (p < 0.0001) for the tumor core. An analysis of subjects which did not profit from stratified training revealed that the missegmented cases had poor image quality or were clinically particularly challenging (e.g., underrepresented subtypes such as IDH1-mutant tumors), underlining the importance of such latent variables in the context of tumor segmentation. In summary, we found that segmentation models trained on the BraTS 2018 dataset, stratified according to tumor type, lead to a significant increase in segmentation performance. Furthermore, we demonstrated that this gain in segmentation performance is evident in the case-wise ranking of individual subjects but not in summary statistics. We conclude that it may be useful to consider the segmentation of brain tumors of different types or grades as separate tasks, rather than developing one tool to segment them all. Consequently, making this information available for the test data should be considered, potentially leading to a more clinically relevant BraTS competition.
Systematic Evaluation of Image Tiling Adverse Effects on Deep Learning Semantic Segmentation
Reina, G. Anthony
Panchumarthy, Ravi
Thakur, Siddhesh Pravin
Bastidas, Alexei
Bakas, Spyridon
Frontiers in Neuroscience2020Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Convolutional neural network (CNN) models obtain state of the art performance on image classification, localization, and segmentation tasks. Limitations in computer hardware, most notably memory size in deep learning accelerator cards, prevent relatively large images, such as those from medical and satellite imaging, from being processed as a whole in their original resolution. A fully convolutional topology, such as U-Net, is typically trained on down-sampled images and inferred on images of their original size and resolution, by simply dividing the larger image into smaller (typically overlapping) tiles, making predictions on these tiles, and stitching them back together as the prediction for the whole image. In this study, we show that this tiling technique, combined with the translationally invariant nature of CNNs, causes small but relevant differences during inference that can be detrimental to the performance of the model. Here we quantify these variations in both medical (i.e., BraTS) and non-medical (i.e., satellite) images and show that training a 2D U-Net model on the whole image substantially improves the overall model performance. Finally, we compare 2D and 3D semantic segmentation models to show that providing CNN models with a wider context of the image in all three dimensions leads to more accurate and consistent predictions. Our results suggest that tiling the input to CNN models, while perhaps necessary to overcome the memory limitations in computer hardware, may lead to undesirable and unpredictable errors in the model's output that can only be adequately mitigated by increasing the input of the model to the largest possible tile size.
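The tiling-and-stitching inference under study can be sketched in 2D as follows (assumptions: 50% overlap, image dimensions divisible by the stride, simple averaging over overlaps):

import numpy as np

def tiled_predict(image, predict, tile=128):
    # image: (H, W); predict maps a tile to per-pixel scores of the same shape
    H, W = image.shape
    stride = tile // 2                                # 50% overlap between tiles
    out = np.zeros((H, W))
    counts = np.zeros((H, W))
    for y in range(0, max(H - tile, 0) + 1, stride):
        for x in range(0, max(W - tile, 0) + 1, stride):
            patch = image[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] += predict(patch)
            counts[y:y + tile, x:x + tile] += 1
    return out / np.maximum(counts, 1)                # average the overlaps

pred = tiled_predict(np.random.rand(256, 256), predict=lambda p: (p > 0.5).astype(float))
print(pred.shape)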
Analyzing the Quality and Challenges of Uncertainty Estimations for Brain Tumor Segmentation
Jungo, Alain
Balsiger, Fabian
Reyes, Mauricio
Frontiers in Neuroscience2020Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Automatic segmentation of brain tumors has the potential to enable volumetric measures and high-throughput analysis in the clinical setting. Reaching this potential seems almost achieved, considering the steady increase in segmentation accuracy. However, despite segmentation accuracy, the current methods still do not meet the robustness levels required for patient-centered clinical use. In this regard, uncertainty estimates are a promising direction to improve the robustness of automated segmentation systems. Different uncertainty estimation methods have been proposed, but little is known about their usefulness and limitations for brain tumor segmentation. In this study, we present an analysis of the most commonly used uncertainty estimation methods with regard to their benefits and challenges for brain tumor segmentation. We evaluated their quality in terms of calibration, segmentation error localization, and segmentation failure detection. Our results show that the uncertainty methods are typically well-calibrated when evaluated at the dataset level. Evaluated at the subject level, we found notable miscalibrations and limited segmentation error localization (e.g., for correcting segmentations), which hinder the direct use of the voxel-wise uncertainties. Nevertheless, voxel-wise uncertainty showed value to detect failed segmentations when uncertainty estimates are aggregated at the subject level. Therefore, we suggest a careful usage of voxel-wise uncertainty measures and highlight the importance of developing solutions that address the subject-level requirements on calibration and segmentation error localization.
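One standard summary of the calibration quality examined here is the expected calibration error (ECE) over equal-width confidence bins; a minimal sketch on synthetic voxel-wise probabilities (an illustration, not the authors' exact protocol):

import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    probs, labels = probs.ravel(), labels.ravel()
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.any():
            gap = abs(probs[mask].mean() - labels[mask].mean())
            ece += mask.mean() * gap                  # weight by bin occupancy
    return ece

rng = np.random.default_rng(4)
p = rng.random(10000)
y = rng.random(10000) < p                # perfectly calibrated by construction
print(expected_calibration_error(p, y))  # close to 0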
3D-BoxSup: Positive-Unlabeled Learning of Brain Tumor Segmentation Networks From 3D Bounding Boxes
Xu, Yanwu
Gong, Mingming
Chen, Junxiang
Chen, Ziye
Batmanghelich, Kayhan
Frontiers in Neuroscience2020Journal Article, cited 0 times
BraTS-TCGA-LGG
Accurate segmentation is an essential task when working with medical images. Recently, deep convolutional neural networks achieved state-of-the-art performance for many segmentation benchmarks. Regardless of the network architecture, the deep learning-based segmentation methods view the segmentation problem as a supervised task that requires a relatively large number of annotated images. Acquiring a large number of annotated medical images is time-consuming, and high-quality segmented images (i.e., strong labels) crafted by human experts are expensive. In this paper, we have proposed a method that achieves competitive accuracy from a "weakly annotated" image where the weak annotation is obtained via a 3D bounding box denoting an object of interest. Our method, called "3D-BoxSup," employs a positive-unlabeled learning framework to learn segmentation masks from 3D bounding boxes. Specifically, we consider the pixels outside of the bounding box as positively labeled data and the pixels inside the bounding box as unlabeled data. Our method can suppress the negative effects of pixels residing between the true segmentation mask and the 3D bounding box and produce accurate segmentation masks. We applied our method to segment a brain tumor. The experimental results on the BraTS 2017 dataset (Menze et al., 2015; Bakas et al., 2017a,b,c) have demonstrated the effectiveness of our method.
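The labeling scheme is simple to state in code; a minimal sketch (illustrative coordinates, not the authors' implementation) of turning a 3D bounding box into positively labeled and unlabeled voxel masks:

import numpy as np

def pu_masks(shape, bbox):
    # bbox = (z0, z1, y0, y1, x0, x1); inside the box is unlabeled,
    # outside the box is positively labeled, mirroring the abstract
    unlabeled = np.zeros(shape, dtype=bool)
    z0, z1, y0, y1, x0, x1 = bbox
    unlabeled[z0:z1, y0:y1, x0:x1] = True
    positive = ~unlabeled
    return positive, unlabeled

positive, unlabeled = pu_masks((64, 64, 64), (10, 40, 12, 50, 8, 45))
print(positive.sum(), unlabeled.sum())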
An Improvement of Survival Stratification in Glioblastoma Patients via Combining Subregional Radiomics Signatures
Yang, Y.
Han, Y.
Hu, X.
Wang, W.
Cui, G.
Guo, L.
Zhang, X.
Front Neurosci2021Journal Article, cited 0 times
Website
TCGA-GBM
BraTS-TCGA-GBM
Magnetic Resonance Imaging (MRI)
Radiomics
Purpose: To investigate whether combining multiple radiomics signatures derived from the subregions of glioblastoma (GBM) can improve survival prediction of patients with GBM. Methods: In total, 129 patients were included in this study and split into training (n = 99) and test (n = 30) cohorts. Radiomics features were extracted from each tumor region, then radiomics scores were obtained separately using least absolute shrinkage and selection operator (LASSO) Cox regression. A clinical nomogram was also constructed using various clinical risk factors. Radiomics nomograms were constructed by combining a single radiomics signature from the whole tumor region with clinical risk factors or by combining three radiomics signatures from three tumor subregions with clinical risk factors. The performance of these models was assessed in terms of discrimination, calibration, and clinical usefulness, and was compared with that of the clinical nomogram. Results: Incorporating the three radiomics signatures, i.e., Radscores for ET, NET, and ED, into the radiomics-based nomogram improved the performance in estimating survival (C-index: training/test cohort: 0.717/0.655) compared with that of the clinical nomogram (C-index: training/test cohort: 0.633/0.560) and that of the radiomics nomogram based on single-region radiomics signatures (C-index: training/test cohort: 0.656/0.535). Conclusion: The multiregional radiomics nomogram exhibited a favorable survival stratification accuracy.
Stratification by Tumor Grade Groups in a Holistic Evaluation of Machine Learning for Brain Tumor Segmentation
Prabhudesai, S.
Wang, N. C.
Ahluwalia, V.
Huan, X.
Bapuraj, J. R.
Banovic, N.
Rao, A.
Front Neurosci2021Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Segmentation
Machine Learning
Classification
BRAIN
Glioblastoma Multiforme (GBM)
Accurate and consistent segmentation plays an important role in the diagnosis, treatment planning, and monitoring of both High Grade Glioma (HGG), including Glioblastoma Multiforme (GBM), and Low Grade Glioma (LGG). Accuracy of segmentation can be affected by the imaging presentation of glioma, which greatly varies between the two tumor grade groups. In recent years, researchers have used Machine Learning (ML) to segment tumors rapidly and consistently, as compared to manual segmentation. However, existing ML validation relies heavily on computing summary statistics and rarely tests the generalizability of an algorithm on clinically heterogeneous data. In this work, our goal is to investigate how to holistically evaluate the performance of ML algorithms on a brain tumor segmentation task. We address the need for rigorous evaluation of ML algorithms and present four axes of model evaluation: diagnostic performance, model confidence, robustness, and data quality. We perform a comprehensive evaluation of a glioma segmentation ML algorithm by stratifying data by specific tumor grade groups (GBM and LGG) and evaluate these algorithms on each of the four axes. The main takeaways of our work are: (1) ML algorithms need to be evaluated on out-of-distribution data to assess generalizability, reflective of tumor heterogeneity. (2) Segmentation metrics alone are limited in evaluating the errors made by ML algorithms and in describing their consequences. (3) The adoption of tools from other domains, such as robustness (adversarial attacks) and model uncertainty (prediction intervals), leads to a more comprehensive performance evaluation. Such a holistic evaluation framework could shed light on an algorithm's clinical utility and help it evolve into a more clinically valuable tool.
Deep Convolutional Neural Network With a Multi-Scale Attention Feature Fusion Module for Segmentation of Multimodal Brain Tumor
He, Xueqin
Xu, Wenjie
Yang, Jane
Mao, Jianyao
Chen, Sifang
Wang, Zhanxiang
Frontiers in Neuroscience2021Journal Article, cited 0 times
BraTS-TCGA-GBM
As a non-invasive, low-cost medical imaging technology, magnetic resonance imaging (MRI) has become an important tool for brain tumor diagnosis. Much related research on MRI brain tumor segmentation based on deep convolutional neural networks has been carried out and has achieved good performance. However, due to the large spatial and structural variability of brain tumors and low image contrast, the segmentation of MRI brain tumors is challenging. Deep convolutional neural networks often lose low-level details as the network structure deepens and cannot effectively utilize multi-scale feature information. Therefore, a deep convolutional neural network with a multi-scale attention feature fusion module (MAFF-ResUNet) is proposed to address these issues. The MAFF-ResUNet consists of a U-Net with residual connections and a MAFF module. The combination of residual connections and skip connections fully retains low-level detailed information and improves the global feature extraction capability of the encoding block. Besides, the MAFF module selectively extracts useful information from the multi-scale hybrid feature map based on the attention mechanism to optimize the features of each layer and makes full use of the complementary feature information of different scales. The experimental results on the BraTS 2019 MRI dataset show that the MAFF-ResUNet can learn the edge structure of brain tumors better and achieve high accuracy.
Radiomics Analysis Based on Magnetic Resonance Imaging for Preoperative Overall Survival Prediction in Isocitrate Dehydrogenase Wild-Type Glioblastoma
Wang, S.
Xiao, F.
Sun, W.
Yang, C.
Ma, C.
Huang, Y.
Xu, D.
Li, L.
Chen, J.
Li, H.
Xu, H.
Front Neurosci2021Journal Article, cited 1 times
Website
TCGA-GBM
Magnetic Resonance Imaging (MRI)
BRAIN
isocitrate dehydrogenase wildtype
Radiomics
Purpose: This study aimed to develop a radiomics signature for the preoperative prognosis prediction of isocitrate dehydrogenase (IDH)-wild-type glioblastoma (GBM) patients and to provide personalized assistance in the clinical decision-making for different patients. Materials and Methods: A total of 142 IDH-wild-type GBM patients classified using the new classification criteria of WHO 2021 from two centers were included in the study and randomly divided into a training set and a test set. Firstly, their clinical characteristics were screened using univariate Cox regression. Then, the radiomics features were extracted from the tumor and peritumoral edema areas on their contrast-enhanced T1-weighted image (CE-T1WI), T2-weighted image (T2WI), and T2-weighted fluid-attenuated inversion recovery (T2-FLAIR) magnetic resonance imaging (MRI) images. Subsequently, inter- and intra-class correlation coefficient (ICC) analysis, Spearman's correlation analysis, univariate Cox, and the least absolute shrinkage and selection operator (LASSO) Cox regression were used step by step for feature selection and the construction of a radiomics signature. The combined model was established by integrating the selected clinical factors. Kaplan-Meier analysis was performed for the validation of the discrimination ability of the model, and the C-index was used to evaluate consistency in the prediction. Finally, a Radiomics + Clinical nomogram was generated for personalized prognosis analysis and then validated using the calibration curve. Results: Analysis of the clinical characteristics resulted in the screening of four risk factors. The combination of ICC, Spearman's correlation, and univariate and LASSO Cox resulted in the selection of eight radiomics features, which made up the radiomics signature. Both the radiomics and combined models can significantly stratify high- and low-risk patients (p < 0.001 and p < 0.05 for the training and test sets, respectively) and obtained good prediction consistency (C-index = 0.74-0.86). The calibration plots exhibited good agreement in both 1- and 2-year survival between the prediction of the model and the actual observation. Conclusion: Radiomics is an independent preoperative non-invasive prognostic tool for patients who were newly classified as having IDH-wild-type GBM. The constructed nomogram, which combined radiomics features with clinical factors, can predict the overall survival (OS) of IDH-wild-type GBM patients and could be a new supplement to treatment guidelines.
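The feature-selection cascade described above ends in LASSO Cox regression, a common radiomics pattern. A hedged sketch of that final stage in Python using lifelines, with synthetic data and made-up feature names standing in for the study's radiomic features:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, p = 142, 20  # hypothetical: 142 patients, 20 candidate radiomic features
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"feat_{i}" for i in range(p)])
risk = 0.8 * X["feat_0"] - 0.6 * X["feat_1"]       # only two features carry signal
df = X.assign(time=rng.exponential(scale=np.exp(-risk)),
              event=(rng.random(n) < 0.7).astype(int))

# L1-penalized (LASSO) Cox regression shrinks uninformative coefficients toward zero
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="event")
print("Features retained in the signature:")
print(cph.params_[cph.params_.abs() > 1e-3])

The penalizer strength here is arbitrary; in practice it is tuned, e.g. by cross-validation, before the surviving features are combined into the radiomics signature.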
Clinical implementation of artificial intelligence in neuroradiology with development of a novel workflow-efficient picture archiving and communication system-based automated brain tumor segmentation and radiomic feature extraction
Aboian, Mariam
Bousabarah, Khaled
Kazarian, Eve
Zeevi, Tal
Holler, Wolfgang
Merkaj, Sara
Petersen, Gabriel Cassinelli
Bahar, Ryan
Subramanian, Harry
Sunku, Pranay
Schrickel, Elizabeth
Bhawnani, Jitendra
Zawalich, Mathew
Mahajan, Amit
Malhotra, Ajay
Payabvash, Sam
Tocino, Irena
Lin, MingDe
Westerhoff, Malte
Frontiers in Neuroscience2022Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
Purpose: Personalized interpretation of medical images is critical for optimum patient care, but the tools currently available to physicians for quantitative, real-time analysis of a patient's medical images are significantly limited. In this work, we describe a novel platform within PACS for volumetric analysis of images, enabling the development of large expert-annotated datasets in parallel with the radiologist's reading; such datasets are critically needed for the development of clinically meaningful AI algorithms. Specifically, we implemented a deep learning-based algorithm for automated brain tumor segmentation and radiomics extraction, and embedded it into PACS to accelerate a supervised, end-to-end workflow for image annotation and radiomic feature extraction.
Materials and methods: An algorithm was trained to segment whole primary brain tumors on FLAIR images from the multi-institutional glioma BraTS 2021 dataset. The algorithm was validated using an internal dataset from Yale New Haven Health (YNHH) and compared (by Dice similarity coefficient [DSC]) to radiologist manual segmentation. A UNETR deep-learning model was embedded into the Visage 7 (Visage Imaging, Inc., San Diego, CA, United States) diagnostic workstation. The automatically segmented brain tumor could be manually modified. PyRadiomics (Harvard Medical School, Boston, MA) was natively embedded into Visage 7 for feature extraction from the brain tumor segmentations.
Results: UNETR brain tumor segmentation took 4 s on average, and the median DSC was 86%, which is similar to the published literature but lower than the results of the RSNA-ASNR-MICCAI BraTS 2021 challenge. Extraction of 106 radiomic features within PACS took on average 5.8 ± 0.01 s. The extracted radiomic features did not vary with the time of extraction or with whether they were extracted within PACS or outside of PACS. The workflow makes segmentation and feature extraction available before the radiologist opens the study; opening the study in PACS then allows the radiologist to verify the segmentation and thus annotate the study.
Conclusion: Integration of image processing algorithms for tumor auto-segmentation and feature extraction into PACS allows curation of large datasets of annotated medical images and can accelerate translation of research into development of personalized medicine applications in the clinic. The ability to use familiar clinical tools to revise the AI segmentations and natively embedding the segmentation and radiomic feature extraction tools on the diagnostic workstation accelerates the process to generate ground-truth data.
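The embedded feature extraction described above uses PyRadiomics. For orientation, a minimal stand-alone PyRadiomics call in Python; the file names are placeholders, and the exact extractor settings used inside Visage 7 are not specified in the abstract:

# pip install pyradiomics SimpleITK
import SimpleITK as sitk
from radiomics import featureextractor

# Placeholder file names: any co-registered image + binary label map will do
image = sitk.ReadImage("flair.nii.gz")
mask = sitk.ReadImage("tumor_segmentation.nii.gz")

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")   # intensity statistics
extractor.enableFeatureClassByName("glcm")         # texture features

features = extractor.execute(image, mask)
for name, value in features.items():
    if not name.startswith("diagnostics"):         # skip extractor metadata
        print(name, value)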
A novel federated deep learning scheme for glioma and its subtype classification
Ali, Muhaddisa Barat
Gu, Irene Yu-Hua
Berger, Mitchel S.
Jakola, Asgeir Store
Frontiers in Neuroscience2023Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Deep Learning
MRI
glioma
Background: Deep learning (DL) has shown promising results in molecular-based classification of glioma subtypes from MR images. DL requires a large number of training data for achieving good generalization performance. Since brain tumor datasets are usually small in size, combination of such datasets from different hospitals are needed. Data privacy issue from hospitals often poses a constraint on such a practice. Federated learning (FL) has gained much attention lately as it trains a central DL model without requiring data sharing from different hospitals.
Method: We propose a novel 3D FL scheme for glioma and its molecular subtype classification. The scheme exploits a slice-based DL classifier, EtFedDyn, an extension of FedDyn with two key differences: a focal loss cost function to tackle severe class imbalance in the datasets, and a multi-stream network to exploit MRIs in different modalities. By combining EtFedDyn with domain mapping as pre-processing and 3D scan-based post-processing, the proposed scheme performs 3D brain scan-based classification on datasets from different dataset owners. To examine whether the FL scheme could replace the central learning (CL) one, we compare the classification performance of the proposed FL scheme with that of the corresponding CL scheme. Furthermore, detailed empirical analyses were conducted to examine the effects of using domain mapping, 3D scan-based post-processing, different cost functions, and different FL schemes.
Results: Experiments were done on two case studies: classification of glioma subtypes (IDH mutation and wild-type on TCGA and US datasets in case A) and glioma grades (high/low grade glioma, HGG and LGG, on the MICCAI dataset in case B). The proposed FL scheme obtained good performance on the test sets (85.46%, 75.56%) for IDH subtypes and (89.28%, 90.72%) for glioma LGG/HGG, all averaged over five runs. Compared with the corresponding CL scheme, the drop in test accuracy from the proposed FL scheme is small (-1.17%, -0.83%), indicating its good potential to replace the CL scheme. Furthermore, the empirical tests showed increased classification test accuracy from applying: domain mapping (0.4%, 1.85%) in case A; the focal loss function (1.66%, 3.25%) in case A and (1.19%, 1.85%) in case B; 3D post-processing (2.11%, 2.23%) in case A and (1.81%, 2.39%) in case B; and EtFedDyn over the FedAvg classifier (1.05%, 1.55%) in case A and (1.23%, 1.81%) in case B with fast convergence, all of which contributed to the improvement of overall performance in the proposed FL scheme.
Conclusion: The proposed FL scheme is shown to be effective in predicting glioma and its subtypes from MR images in the test sets, with great potential to replace conventional CL approaches for training deep networks. This could help hospitals maintain their data privacy while using a federated trained classifier with performance nearly identical to that of a centrally trained one. Further detailed experiments showed that different parts of the proposed 3D FL scheme, such as domain mapping (making datasets more uniform) and post-processing (scan-based classification), are essential.
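The focal loss that EtFedDyn uses to counter class imbalance down-weights well-classified examples. A minimal PyTorch sketch of a generic multiclass focal loss (the paper's exact weighting may differ, and a single alpha is used here for simplicity):

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Multiclass focal loss: (1 - p_t)^gamma down-weights easy examples so
    training focuses on hard, minority-class samples. A single alpha is used
    here for simplicity; class-dependent weights are also common."""
    log_probs = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_probs, targets, reduction="none")            # per-sample cross-entropy
    pt = log_probs.exp().gather(1, targets.unsqueeze(1)).squeeze(1)  # prob. of the true class
    return (alpha * (1.0 - pt) ** gamma * ce).mean()

logits = torch.randn(8, 2)             # batch of 8, two classes (e.g. IDH mutant / wild-type)
targets = torch.randint(0, 2, (8,))
print(focal_loss(logits, targets))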
Predicting Gleason Score of Prostate Cancer Patients using Radiomic Analysis
Chaddad, Ahmad
Niazi, Tamim
Probst, Stephan
Bladou, Franck
Anidjar, Moris
Bahoric, Boris
Frontiers in Oncology2018Journal Article, cited 0 times
Website
PROSTATEx
Radiomics
MRI
PROSTATE
Volumetric and Voxel-Wise Analysis of Dominant Intraprostatic Lesions on Multiparametric MRI
Lee, Joon
Carver, Eric
Feldman, Aharon
Pantelic, Milan V
Elshaikh, Mohamed
Wen, Ning
Front Oncol2019Journal Article, cited 0 times
SPIE-AAPM PROSTATEx Challenge
Radiomics
Classification
Introduction: Multiparametric MR imaging (mpMRI) has shown promising results in the diagnosis and localization of prostate cancer. Furthermore, mpMRI may play an important role in identifying the dominant intraprostatic lesion (DIL) for radiotherapy boost. We sought to investigate the level of correlation between dominant tumor foci contoured on various mpMRI sequences. Methods: mpMRI data from 90 patients with MR-guided biopsy-proven prostate cancer were obtained from the SPIE-AAPM-NCI Prostate MR Classification Challenge. Each case consisted of T2-weighted (T2W), apparent diffusion coefficient (ADC), and K(trans) images computed from dynamic contrast-enhanced sequences. All image sets were rigidly co-registered, and the dominant tumor foci were identified and contoured for each MRI sequence. Hausdorff distance (HD), mean distance to agreement (MDA), and Dice and Jaccard coefficients were calculated between the contours for each pair of MRI sequences (i.e., T2 vs. ADC, T2 vs. K(trans), and ADC vs. K(trans)). The voxel-wise Spearman correlation was also obtained between these image pairs. Results: The DILs were located in the anterior fibromuscular stroma, central zone, peripheral zone, and transition zone in 35.2, 5.6, 32.4, and 25.4% of patients, respectively. Gleason grade groups 1-5 represented 29.6, 40.8, 15.5, and 14.1% of the study population, respectively (with grade groups 4 and 5 analyzed together). The mean contour volumes for the T2W images, and the ADC and K(trans) maps were 2.14 +/- 2.1, 2.22 +/- 2.2, and 1.84 +/- 1.5 mL, respectively. K(trans) values were indistinguishable between cancerous regions and the rest of the prostate for 19 patients. The Dice coefficient and Jaccard index were 0.74 +/- 0.13, 0.60 +/- 0.15 for T2W-ADC and 0.61 +/- 0.16, 0.46 +/- 0.16 for T2W-K(trans). The voxel-based Spearman correlations were 0.20 +/- 0.20 for T2W-ADC and 0.13 +/- 0.25 for T2W-K(trans). Conclusions: The DILs contoured on T2W images had a high level of agreement with those contoured on ADC maps, but there was little to no quantitative correlation of these results with tumor location and Gleason grade group. Technical hurdles are yet to be solved for precision radiotherapy targeting the DILs based on physiological imaging. A Boolean sum volume (BSV) incorporating all available MR sequences may be reasonable for delineating the DIL boost volume.
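For readers unfamiliar with the agreement metrics used here, a short Python sketch computing a symmetric Hausdorff distance between two contour point sets and a voxel-wise Spearman correlation, both on synthetic data (scipy only; MDA and Dice are omitted for brevity):

import numpy as np
from scipy.spatial.distance import directed_hausdorff
from scipy.stats import spearmanr

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two (N x 2) contour point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Synthetic contours standing in for DILs drawn on two sequences (e.g. T2W and ADC)
theta = np.linspace(0, 2 * np.pi, 100)
t2w = np.c_[10 * np.cos(theta), 10 * np.sin(theta)]
adc = np.c_[9 * np.cos(theta) + 1, 9 * np.sin(theta)]
print(f"HD = {hausdorff(t2w, adc):.2f} (same units as the coordinates)")

# Voxel-wise rank correlation between two intensity maps inside a shared ROI
rng = np.random.default_rng(0)
t2w_vals = rng.normal(size=500)
adc_vals = 0.2 * t2w_vals + rng.normal(size=500)   # weakly correlated, as in the study
rho, p = spearmanr(t2w_vals, adc_vals)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")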
Preliminary Clinical Study of the Differences Between Interobserver Evaluation and Deep Convolutional Neural Network-Based Segmentation of Multiple Organs at Risk in CT Images of Lung Cancer
Zhu, Jinhan
Liu, Yimei
Zhang, Jun
Wang, Yixuan
Chen, Lixin
Frontiers in Oncology2019Journal Article, cited 0 times
SPIE-AAPM Lung CT Challenge
Machine Learning
Background: In this study, publicly available datasets with organs at risk (OAR) structures were used as reference data to compare the differences between several observers. Convolutional neural network (CNN)-based auto-contouring was also used in the analysis. We evaluated the variation among observers and the effect of CNN-based auto-contouring in clinical applications.; ; Materials and methods: A total of 60 publicly available lung cancer CTs with structures were used; 48 cases were used for training, and the other 12 cases were used for testing. The structures of the datasets were used as reference data. Three observers and a CNN-based program performed contouring for the 12 testing cases, and the 3D Dice similarity coefficient (DSC) and mean surface distance (MSD) were used to evaluate differences from the reference data. The three observers edited the CNN-based contours, and the results were compared to those of manual contouring. A value of P<0.05 was considered statistically significant.; ; Results: Compared to the reference data, no statistically significant differences were observed in the DSCs and MSDs among the manual contours produced by the three observers at the same institution for the heart, esophagus, spinal cord, and left and right lungs. The 95% confidence intervals (CI) and P-values of the CNN-based auto-contouring results compared to the manual results for the heart, esophagus, spinal cord, and left and right lungs were as follows: the DSCs were CNN vs. A: 0.914~0.939 (P = 0.004), 0.746~0.808 (P = 0.002), 0.866~0.887 (P = 0.136), 0.952~0.966 (P = 0.158), and 0.960~0.972 (P = 0.136); CNN vs. B: 0.913~0.936 (P = 0.002), 0.745~0.807 (P = 0.005), 0.864~0.894 (P = 0.239), 0.952~0.964 (P = 0.308), and 0.959~0.971 (P = 0.272); and CNN vs. C: 0.912~0.933 (P = 0.004), 0.748~0.804 (P = 0.002), 0.867~0.890 (P = 0.530), 0.952~0.964 (P = 0.308), and 0.958~0.970 (P = 0.480), respectively. The P-values for the MSDs were similar to those for the DSCs; for the heart and esophagus they were smaller than 0.05. No significant differences were found between the edited CNN-based auto-contouring results and the manual results.; ; Conclusion: For the spinal cord and both lungs, no statistically significant differences were found between CNN-based auto-contouring and manual contouring. Further modification of the heart and esophagus contours is necessary. Overall, editing based on CNN-based auto-contouring can effectively shorten contouring time without affecting the results. CNNs have considerable potential for automatic contouring applications.
Identifying BAP1 Mutations in Clear-Cell Renal Cell Carcinoma by CT Radiomics: Preliminary Findings
Feng, Zhan
Zhang, Lixia
Qi, Zhong
Shen, Qijun
Hu, Zhengyu
Chen, Feng
Frontiers in Oncology2020Journal Article, cited 0 times
Website
TCGA-KIRC
Radiogenomics
KIDNEY
Renal cancer
Clear cell renal cell carcinoma (ccRCC)
To evaluate the potential application of computed tomography (CT) radiomics for predicting BRCA1-associated protein 1 (BAP1) mutation status in patients with clear-cell renal cell carcinoma (ccRCC). In this retrospective study, clinical and CT imaging data of 54 patients were retrieved from The Cancer Genome Atlas-Kidney Renal Clear Cell Carcinoma database. Among these, 45 patients had wild-type BAP1 and nine patients had a BAP1 mutation. The texture features of tumor images were extracted using the Matlab-based IBEX package. To produce class-balanced data and improve the stability of prediction, we performed data augmentation for the BAP1 mutation group during cross-validation. A model to predict BAP1 mutation status was constructed using the Random Forest classification algorithm and was evaluated using leave-one-out cross-validation. The Random Forest model predicted BAP1 mutation status with an accuracy of 0.83, sensitivity of 0.72, specificity of 0.87, precision of 0.65, AUC of 0.77, and F-score of 0.68. CT radiomics is a potential and feasible method for predicting BAP1 mutation status in patients with ccRCC.
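A hedged sketch of the modeling setup described above (Random Forest with leave-one-out cross-validation) using scikit-learn on synthetic data; class_weight='balanced' stands in for the paper's augmentation-based class balancing, which is not reproduced here:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n, p = 54, 30                          # 54 patients, hypothetical 30 texture features
y = np.zeros(n, dtype=int); y[:9] = 1  # 9 BAP1-mutant vs 45 wild-type, as in the abstract
X = rng.normal(size=(n, p)); X[y == 1] += 0.5   # synthetic signal in the mutant group

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
proba = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
print(f"LOOCV accuracy = {accuracy_score(y, proba >= 0.5):.2f}, "
      f"AUC = {roc_auc_score(y, proba):.2f}")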
Preoperative CT Radiomics Predicting the SSIGN Risk Groups in Patients With Clear Cell Renal Cell Carcinoma: Development and Multicenter Validation
Jiang, Yi
Li, Wuchao
Huang, Chencui
Tian, Chong
Chen, Qi
Zeng, Xianchun
Cao, Yin
Chen, Yi
Yang, Yintong
Liu, Heng
Bo, Yonghua
Luo, Chenggong
Li, Yiming
Zhang, Tijiang
Wang, Rongping
Frontiers in Oncology2020Journal Article, cited 0 times
TCGA-KIRC
Objective: The stage, size, grade, and necrosis (SSIGN) score can facilitate the assessment of tumor aggressiveness and personalized management for patients with clear cell renal cell carcinoma (ccRCC). However, this score is only available after postoperative pathological evaluation. The aim of this multicenter study was to develop and validate a CT radiomic signature for the preoperative prediction of SSIGN risk groups in patients with ccRCC. Methods: In total, 330 patients with ccRCC from three centers were assigned to the training, external validation 1, and external validation 2 cohorts. Through consistency analysis and the least absolute shrinkage and selection operator, a radiomic signature was developed to predict the SSIGN low-risk group (scores 0-3) and intermediate- to high-risk group (score ≥ 4). An image feature model was developed from the independent image features, and a fusion model was constructed integrating the radiomic signature and the independent image features. Furthermore, the predictive performance of the above models for the SSIGN risk groups was evaluated with regard to discrimination, calibration, and clinical usefulness. Results: A radiomic signature consisting of sixteen relevant features from the nephrographic-phase CT images achieved good calibration (all Hosmer-Lemeshow p > 0.05) and favorable prediction efficacy in the training cohort [area under the curve (AUC): 0.940, 95% confidence interval (CI): 0.884-0.973] and in the external validation cohorts (AUC: 0.876, 95% CI: 0.811-0.942; AUC: 0.928, 95% CI: 0.844-0.975, respectively). The radiomic signature performed better than the image feature model constructed from intra-tumoral vessels (all p < 0.05) and showed performance similar to that of the fusion model integrating the radiomic signature and intra-tumoral vessels (all p > 0.05) in terms of discrimination in all cohorts. Moreover, decision curve analysis verified the clinical utility of the radiomic signature in both external cohorts. Conclusion: The radiomic signature could be used as a promising non-invasive tool to predict SSIGN risk groups and to facilitate preoperative clinical decision-making for patients with ccRCC.
Automated Quality Assurance of OAR Contouring for Lung Cancer Based on Segmentation With Deep Active Learning
Men, Kuo
Geng, Huaizhi
Biswas, Tithi
Liao, Zhongxing
Xiao, Ying
Frontiers in Oncology2020Journal Article, cited 0 times
LCTSC
Purpose: Ensuring high-quality data for clinical trials in radiotherapy requires the generation of contours that comply with protocol definitions. The current workflow includes a manual review of the submitted contours, which is time-consuming and subjective. In this study, we developed an automated quality assurance (QA) system for lung cancer based on a segmentation model trained with deep active learning. Methods: The data included a gold atlas with 36 cases and 110 cases from the "NRG Oncology/RTOG 1308 Trial". The first 70 cases enrolled in RTOG 1308 formed the candidate set, and the remaining 40 cases were randomly assigned to validation and test sets (each with 20 cases). The organs at risk included the heart, esophagus, spinal cord, and lungs. A preliminary convolutional neural network segmentation model was trained with the gold standard atlas. To address the limited training data, we selected quality images from the candidate set to add to the training set for fine-tuning of the model with deep active learning. The trained robust segmentation models were used for QA purposes. The segmentation evaluation metrics derived from the validation set, including the Dice coefficient and Hausdorff distance, were used to develop the criteria for QA decision making. The performance of the strategy was assessed using the test set. Results: The QA method achieved promising contouring error detection, with the following metrics for the heart, esophagus, spinal cord, left lung, and right lung: balanced accuracy, 0.96, 0.95, 0.96, 0.97, and 0.97, respectively; sensitivity, 0.95, 0.98, 0.96, 1.0, and 1.0, respectively; specificity, 0.98, 0.92, 0.97, 0.94, and 0.94, respectively; and area under the receiver operating characteristic curve, 0.96, 0.95, 0.96, 0.97, and 0.94, respectively. Conclusions: The proposed system automatically detected contour errors for QA. It could provide consistent and objective evaluations with much reduced investigator intervention in multicenter clinical trials.
Machine Learning for Histologic Subtype Classification of Non-Small Cell Lung Cancer: A Retrospective Multicenter Radiomics Study
Yang, Fengchang
Chen, Wei
Wei, Haifeng
Zhang, Xianru
Yuan, Shuanghu
Qiao, Xu
Chen, Yen-Wei
Frontiers in Oncology2021Journal Article, cited 0 times
NSCLC Radiogenomics
NSCLC-Radiomics
BACKGROUND: Histologic phenotype identification of Non-Small Cell Lung Cancer (NSCLC) is essential for treatment planning and prognostic prediction. The prediction model based on radiomics analysis has the potential to quantify tumor phenotypic characteristics non-invasively. However, most existing studies focus on relatively small datasets, which limits the performance and potential clinical applicability of their constructed models.
METHODS: To fully explore the impact of different datasets on radiomics studies related to the classification of histological subtypes of NSCLC, we retrospectively collected three datasets from multiple centers and performed extensive analyses. Each of the three datasets was used separately as the training dataset to build a model and was validated on the remaining two datasets. A further model was then developed by merging all the datasets into one large dataset, which was randomly split into a training dataset and a testing dataset. For each model, a total of 788 radiomic features were extracted from the segmented tumor volumes. Three widely used feature selection methods, minimum Redundancy Maximum Relevance (mRMR), Sequential Forward Selection (SFS), and the Least Absolute Shrinkage and Selection Operator (LASSO), were then used to select the most important features. Finally, three classification methods, Logistic Regression (LR), Support Vector Machines (SVM), and Random Forest (RF), were independently evaluated on the selected features to investigate the predictive ability of the radiomics models.
RESULTS: When using a single dataset for modeling, the results on the testing set were poor, with AUC values ranging from 0.54 to 0.64. When the merged dataset was used for modeling, the average AUC value in the testing set was 0.78, showing relatively good predictive performance.
CONCLUSIONS: Models based on radiomics analysis have the potential to classify NSCLC subtypes, but their generalization capabilities should be carefully considered.
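A key methodological point above is that feature selection must be fitted on the training dataset only and then applied unchanged to the external dataset. A sketch of such a leakage-safe pipeline in scikit-learn; SelectKBest with mutual information is used as a simple stand-in for the paper's mRMR/SFS/LASSO selectors, and the data are synthetic:

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for two centers' radiomic matrices (788 features each)
X_train, y_train = rng.normal(size=(200, 788)), rng.integers(0, 2, 200)
X_test, y_test = rng.normal(size=(100, 788)), rng.integers(0, 2, 100)

# Scaling and feature selection are fitted on the training center only and then
# applied unchanged to the held-out center, avoiding cross-dataset leakage.
model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(mutual_info_classif, k=20)),
    ("clf", SVC(kernel="linear", probability=True)),
])
model.fit(X_train, y_train)
print(f"External-test AUC = {roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]):.2f}")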
The Prognostic Value of Radiomics Features Extracted From Computed Tomography in Patients With Localized Clear Cell Renal Cell Carcinoma After Nephrectomy
Tang, Xin
Pang, Tong
Yan, Wei-Feng
Qian, Wen-Lei
Gong, You-Ling
Yang, Zhi-Gang
Front Oncol2021Journal Article, cited 0 times
Website
TCGA-KIRC
Radiogenomics
clear cell renal cell carcinoma
Computed Tomography (CT)
predictive model
prognosis
Radiomics
IBSI
Background and purpose: Radiomics is an emerging field of quantitative imaging. The prognostic value of radiomics analysis in patients with localized clear cell renal cell carcinoma (ccRCC) after nephrectomy remains unknown. Methods: Computed tomography images of 167 eligible cases were obtained from The Cancer Imaging Archive database. Radiomics features were extracted from a region of interest contoured manually for each patient. Hierarchical clustering was performed to divide patients into distinct groups. Prognostic assessments were performed using Kaplan-Meier curves, Cox regression, and least absolute shrinkage and selection operator (LASSO) Cox regression. Transcriptome mRNA data were also included in the prognostic analyses. Endpoints were overall survival (OS) and disease-free survival (DFS). The concordance index (C-index), decision curve analysis, and calibration curves with 1,000 bootstrapping replications were used for model validation. Results: Hierarchical clustering groups derived from nephrographic-phase features and mRNA divided patients into distinct prognostic groups, whereas clustering groups from the corticomedullary or unenhanced phase could not distinguish patients' prognosis. In multivariate analyses, 11 OS-predicting and eight DFS-predicting features were identified in the nephrographic phase. Similarly, seven OS predictors and seven DFS predictors were confirmed in the mRNA data. In contrast, limited prognostic features were found in the corticomedullary (two OS predictors and two DFS predictors) and unenhanced phases (one OS predictor and two DFS predictors). Prognostic models combining both nephrographic features and mRNA showed an improved C-index over either model alone (C-index: 0.927 and 0.879 for OS and DFS prediction, respectively). In addition, decision curves and calibration curves also revealed the strong performance of the novel models. Conclusion: We present the first investigation of the prognostic significance of preoperative radiomics signatures in ccRCC patients. Radiomics features obtained from the nephrographic phase had stronger predictive ability than features from the corticomedullary or unenhanced phase. Multi-omics models combining radiomics and transcriptome data could further increase predictive accuracy.
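The C-index reported above measures how well predicted risk ranks observed survival times. A minimal Python sketch using lifelines on synthetic data (note that lifelines' concordance_index expects a score that increases with survival, hence the negated risk):

import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 167                                      # matches the cohort size above
risk = rng.normal(size=n)                    # hypothetical model risk scores
time = rng.exponential(scale=np.exp(-risk))  # higher risk -> shorter survival
event = rng.random(n) < 0.6                  # 1 = event observed, 0 = censored

# concordance_index expects a score that increases with survival time,
# so the risk score is negated.
print(f"C-index = {concordance_index(time, -risk, event):.3f}")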
A Voxel-Based Radiographic Analysis Reveals the Biological Character of Proneural-Mesenchymal Transition in Glioblastoma
Qi, T.
Meng, X.
Wang, Z.
Wang, X.
Sun, N.
Ming, J.
Ren, L.
Jiang, C.
Cai, J.
Front Oncol2021Journal Article, cited 0 times
Website
TCGA
BRAIN
Glioblastoma Multiforme (GBM)
Radiomics
VASARI
Radiogenomics
Computer Aided Diagnosis (CADx)
Classification
Introduction: The proneural and mesenchymal subtypes are the most clearly demarcated categories in the classification scheme, and there is often a shift from the proneural to the mesenchymal subtype during the progression of glioblastoma (GBM). The molecular characteristics are determined by specific genomic methods; however, the application of radiography in clinical practice remains to be further studied. Here, we studied the topographic features of GBM in the proneural subtype and further demonstrated the survival characteristics and proneural-mesenchymal transition (PMT) progression of samples by combining these with the imaging variables.; ; Methods: Data were acquired from The Cancer Imaging Archive (TCIA, http://cancerimagingarchive.net). Radiographic images, clinical variables, and transcriptome subtypes from 223 samples were used in this study. The distributions of the proneural and mesenchymal subtypes on GBM topography were revealed by overlay and voxel-based lesion-symptom mapping (VLSM) analyses. In addition, we compared survival and PMT progression between samples in and outside the VLSM-determined area.; ; Results: The overlay of total GBM and the separated images of the proneural and mesenchymal subtypes revealed a correlation between the two subtypes. By VLSM analysis, the proneural subtype was confirmed to be related to the left inferior temporal medulla, and no significant voxel was found for the mesenchymal subtype. The subsequent comparison between samples in and outside the VLSM-determined area showed differences in overall survival (OS) time, tumor purity, epithelial-mesenchymal transition (EMT) score, and clinical variables.; ; Conclusions: PMT progression was determined by a radiographic approach. GBM samples in the VLSM-determined area tended to harbor the signature of the proneural subtype. This study provides a valuable VLSM-determined area related to the predilection site, prognosis, and PMT progression through the association between GBM topography and molecular characteristics.
Additional Value of PET/CT-Based Radiomics to Metabolic Parameters in Diagnosing Lynch Syndrome and Predicting PD1 Expression in Endometrial Carcinoma
Wang, X.
Wu, K.
Li, X.
Jin, J.
Yu, Y.
Sun, H.
Front Oncol2021Journal Article, cited 0 times
Website
TCGA-UCEC
18F-FDG PET/CT
Lynch syndrome
PD1 expression
Radiomics
Purpose: We aim to compare the radiomic features and parameters on 2-deoxy-2-[fluorine-18]fluoro-D-glucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) between patients with endometrial cancer with and without Lynch syndrome. We also hope to explore the biologic significance of the selected radiomic features. Materials and Methods: We conducted a retrospective cohort study, first using the 18F-FDG PET/CT images and clinical data from 100 patients with endometrial cancer to construct a training group (70 patients) and a test group (30 patients). The metabolic parameters and radiomic features of each tumor were compared between patients with and without Lynch syndrome. An independent cohort of 23 patients with solid tumors was used to evaluate the value of the selected radiomic features in predicting the expression of programmed cell death 1 (PD1), using 18F-FDG PET/CT images and RNA-seq genomic data. Results: There was no statistically significant difference in the standardized uptake values on PET between patients with endometrial cancer with and without Lynch syndrome. However, there were significant differences between the two groups in metabolic tumor volume and total lesion glycolysis (p < 0.005). There was a difference in the radiomic feature of gray level co-occurrence matrix entropy (GLCMEntropy; p < 0.001) between the groups: the area under the curve was 0.94 in the training group (sensitivity, 82.86%; specificity, 97.14%) and 0.893 in the test group (sensitivity, 80%; specificity, 93.33%). In the independent cohort of 23 patients, differences in GLCMEntropy were related to the expression of PD1 (rs = 0.577; p < 0.001). Conclusions: In patients with endometrial cancer, higher metabolic tumor volume, total lesion glycolysis, and GLCMEntropy values on 18F-FDG PET/CT could suggest a higher risk of Lynch syndrome. The radiomic feature GLCMEntropy is a potential predictor of PD1 expression.
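GLCMEntropy, the discriminative feature above, is the Shannon entropy of the gray level co-occurrence matrix. A small Python sketch using scikit-image on a synthetic quantized ROI (the study's exact quantization and aggregation settings are not given in the abstract):

import numpy as np
from skimage.feature import graycomatrix

rng = np.random.default_rng(0)
roi = rng.integers(0, 32, size=(64, 64)).astype(np.uint8)  # quantized ROI, 32 gray levels

glcm = graycomatrix(roi, distances=[1], angles=[0],
                    levels=32, symmetric=True, normed=True)
p = glcm[:, :, 0, 0]
p = p[p > 0]                                # avoid log(0)
print(f"GLCM entropy = {-np.sum(p * np.log2(p)):.3f} bits")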
Training and Validation of Deep Learning-Based Auto-Segmentation Models for Lung Stereotactic Ablative Radiotherapy Using Retrospective Radiotherapy Planning Contours
Wong, J.
Huang, V.
Giambattista, J. A.
Teke, T.
Kolbeck, C.
Giambattista, J.
Atrchian, S.
Front Oncol2021Journal Article, cited 0 times
Website
LCTSC
Lung CT Segmentation Challenge 2017
CPTAC-LSCC
CPTAC-LUAD
4D-Lung
LUNG
Deep Learning
Machine Learning
Radiation Therapy
Semi-automatic segmentation
Computed Tomography (CT)
Purpose: Deep learning-based auto-segmented contour (DC) models require high-quality data for their development, and previous studies have typically used prospectively produced contours, which can be resource-intensive and time-consuming to obtain. The aim of this study was to investigate the feasibility of using retrospective peer-reviewed radiotherapy planning contours in the training and evaluation of DC models for lung stereotactic ablative radiotherapy (SABR). Methods: Using commercial deep learning-based auto-segmentation software, DC models for lung SABR organs at risk (OAR) and gross tumor volume (GTV) were trained using a deep convolutional neural network and a median of 105 contours per structure model, obtained from 160 publicly available CT scans and 50 peer-reviewed SABR planning 4D-CT scans from center A. DCs were generated for 50 additional planning CT scans from center A and 50 from center B, and compared with the clinical contours (CC) using the Dice Similarity Coefficient (DSC) and 95% Hausdorff distance (HD). Results: Comparing DCs to CCs, the mean DSC and 95% HD were 0.93 and 2.85 mm for the aorta, 0.81 and 3.32 mm for the esophagus, 0.95 and 5.09 mm for the heart, 0.98 and 2.99 mm for the bilateral lungs, 0.52 and 7.08 mm for the bilateral brachial plexus, 0.82 and 4.23 mm for the proximal bronchial tree, 0.90 and 1.62 mm for the spinal cord, 0.91 and 2.27 mm for the trachea, and 0.71 and 5.23 mm for the GTV. DC-to-CC comparisons for center A and center B were similar for all OAR structures. Conclusions: The DCs developed with retrospective peer-reviewed treatment contours approximated CCs for the majority of OARs, including on an external dataset. DCs for structures with more variability tended to be less accurate and likely require a larger number of training cases or novel training approaches to improve performance. Developing DC models from existing radiotherapy planning contours appears feasible and warrants further clinical workflow testing.
Development of a Prognostic AI-Monitor for Metastatic Urothelial Cancer Patients Receiving Immunotherapy
Trebeschi, S.
Bodalal, Z.
van Dijk, N.
Boellaard, T. N.
Apfaltrer, P.
Tareco Bucho, T. M.
Nguyen-Kim, T. D. L.
van der Heijden, M. S.
Aerts, Hjwl
Beets-Tan, R. G. H.
Front Oncol2021Journal Article, cited 0 times
Website
BREAST-DIAGNOSIS
Head-Neck Cetuximab
BLADDER
CT COLONOGRAPHY
C4KC-KiTS
QIN-HEADNECK
ACRIN-NSCLC-FDG-PET
ACRIN 6668
HNSCC
ACRIN-FLT-Breast
NSCLC Radiogenomics
CT Lymph Nodes
Anti-PD-1_Lung
Anti-PD-1_MELANOMA
NaF PROSTATE
RIDER Lung PET-CT
Soft Tissue Sarcoma
NSCLC-Radiomics-Genomics
NSCLC-Radiomics- Interobserver1
Pelvic-Reference-Data
Head-Neck-PET-CT
NSCLC-Radiomics
CHEST
ABDOMEN
Computed Tomography (CT)
Background: Immune checkpoint inhibitor efficacy in advanced cancer patients remains difficult to predict. Imaging is the only available technique that can non-invasively provide whole-body information on a patient's response to treatment. We hypothesize that quantitative whole-body prognostic information can be extracted by leveraging artificial intelligence (AI) for treatment monitoring, in a manner superior and complementary to current response evaluation methods. Methods: To test this, a cohort of 74 stage-IV urothelial cancer patients (37 in the discovery set, 37 in the independent test set, 1087 CTs) who received anti-PD1 or anti-PDL1 therapy was retrospectively collected. We designed an AI system [named prognostic AI-monitor (PAM)] able to identify morphological changes in chest and abdominal CT scans acquired during follow-up and link them to survival. Results: Our findings showed significant performance of PAM in the independent test set in predicting 1-year overall survival from the date of image acquisition, with an average area under the curve (AUC) of 0.73 (p < 0.001) for abdominal imaging and 0.67 (p < 0.001) for chest imaging. Subanalysis revealed higher accuracy of abdominal imaging in and around the first 6 months of treatment, reaching an AUC of 0.82 (p < 0.001). Similar accuracy was found for chest imaging 5-11 months after the start of treatment. Univariate comparison with current monitoring methods (laboratory results and radiological assessments) revealed higher or similar prognostic performance. In multivariate analysis, PAM remained significant against all other methods (p < 0.001), suggesting its complementary value in current clinical settings. Conclusions: Our study demonstrates that a comprehensive AI-based method such as PAM can provide prognostic information in advanced urothelial cancer patients receiving immunotherapy, leveraging morphological changes not only in tumor lesions but also in tumor spread and side effects. Further investigations should look beyond anatomical imaging. Prospective studies are warranted to test and validate our findings.
Effect of Applying Leakage Correction on rCBV Measurement Derived From DSC-MRI in Enhancing and Nonenhancing Glioma
Arzanforoosh, Fatemeh
Croal, Paula L.
van Garderen, Karin A.
Smits, Marion
Chappell, Michael A.
Warnert, Esther A. H.
Frontiers in Oncology2021Journal Article, cited 0 times
Website
QIN-BRAIN-DSC-MRI
Prognostic model
Purpose: Relative cerebral blood volume (rCBV) is the most widely used parameter derived from DSC perfusion MR imaging for predicting brain tumor aggressiveness. However, accurate rCBV estimation is challenging in enhancing glioma, because of contrast agent extravasation through a disrupted blood-brain barrier (BBB), and even in nonenhancing glioma with an intact BBB, due to an elevated steady-state contrast agent concentration in the vasculature after first passage. In this study, a thorough investigation of the effects of two different leakage correction algorithms on rCBV estimation for enhancing and nonenhancing tumors was conducted.; ; Methods: Two datasets were used retrospectively in this study: 1. a publicly available TCIA dataset (49 patients with 35 enhancing and 14 nonenhancing gliomas); 2. a dataset acquired clinically at Erasmus MC (EMC, Rotterdam, NL) (47 patients with 20 enhancing and 27 nonenhancing glial brain lesions). The leakage correction algorithms investigated in this study were: a unidirectional model-based algorithm with flux of contrast agent from the intravascular to the extravascular extracellular space (EES); and a bidirectional model-based algorithm additionally including flow from the EES back to the intravascular space.; ; Results: In enhancing glioma, the estimated average contrast-enhanced tumor rCBV significantly (Bonferroni-corrected Wilcoxon signed-rank test, p < 0.05) decreased across patients when applying unidirectional and bidirectional correction: 4.00 ± 2.11 (uncorrected), 3.19 ± 1.65 (unidirectional), and 2.91 ± 1.55 (bidirectional) in the TCIA dataset, and 2.51 ± 1.3 (uncorrected), 1.72 ± 0.84 (unidirectional), and 1.59 ± 0.9 (bidirectional) in the EMC dataset. In nonenhancing glioma, a significant but smaller difference in observed rCBV was found after application of both correction methods: 1.42 ± 0.60 (uncorrected), 1.28 ± 0.46 (unidirectional), and 1.24 ± 0.37 (bidirectional) in the TCIA dataset, and 0.91 ± 0.49 (uncorrected), 0.77 ± 0.37 (unidirectional), and 0.67 ± 0.34 (bidirectional) in the EMC dataset.; ; Conclusion: Both leakage correction algorithms were found to change rCBV estimation in the presence of BBB disruption in enhancing glioma, and to a lesser degree in nonenhancing glioma. Stronger effects were found for bidirectional than for unidirectional leakage correction.
Efficacy of Location-Based Features for Survival Prediction of Patients With Glioblastoma Depending on Resection Status
Soltani, Madjid
Bonakdar, Armin
Shakourifar, Nastaran
Babaei, Reza
Raahemifar, Kaamran
Front Oncol2021Journal Article, cited 0 times
Website
BraTS-TCGA-LGG
BraTS-TCGA-GBM
BraTS 2019
Artificial Neural Network (ANN)
BRAIN
Machine Learning
Radiomics
Cancer remains one of the most lethal diseases worldwide. Each year, countless people die because of late diagnosis of cancer or incorrect treatment. Glioma, one of the most common primary brain tumors, has varying aggressiveness and sub-regions, which can affect the risk of disease. Although prediction of overall survival based on multimodal magnetic resonance imaging (MRI) is challenging, in this study we assess whether and how location-based features of tumors can affect overall survival prediction. This approach is evaluated independently and in combination with radiomic features. The process is carried out on a dataset comprising MRI images of patients with glioblastoma. To assess the impact of resection status, the dataset is divided into two groups: patients reported as having gross total resection, and patients with unknown resection status. Different machine learning algorithms were then used to evaluate how location features are linked with overall survival. Results from regression models indicate that location-based features on their own have a considerable effect on patients' overall survival. Additionally, classifier models show an improvement in prediction accuracy when location-based features are added to radiomic features.
Total Lesion Glycolysis Estimated by a Radiomics Model From CT Image Alone
Si, H.
Hao, X.
Zhang, L.
Xu, X.
Cao, J.
Wu, P.
Li, L.
Wu, Z.
Zhang, S.
Li, S.
Front Oncol2021Journal Article, cited 0 times
Website
RIDER Lung PET-CT
Computed Tomography (CT)
Positron Emission Tomography (PET)
Radiomics
Purpose: In this study, total lesion glycolysis (TLG) on positron emission tomography images was estimated by a trained and validated CT radiomics model, and its prognostic ability was explored in lung cancer (LC) and esophageal cancer (EC) patients. Methods: Using features common to both combined and thin-section CT, the estimation model of SUVsum (summed standardized uptake value) was trained on the lymph nodes (LNs) of LC patients (n = 1239). Besides LNs of LC patients from other centers, the validation cohorts also included LNs and primary tumors of LC/EC patients from the same center. After calculating TLG (the accumulated SUVsum of each individual) based on the model, the prognostic abilities of the estimated and measured values were compared and analyzed. Results: In the training cohort, a model with 3 features was trained using deep learning and linear regression methods. It performed well in all validation cohorts (n = 5), and a linear regression could correct the bias from different scanners. Additionally, the absolute biases of the model were not significantly affected by the evaluated factors, whether or not they included LN metastasis. Between the estimated natural logarithm of TLG (elnTLG) and the measured value (mlnTLG), significant differences existed for both LC (n = 137, bias = 0.510 +/- 0.519, r = 0.956, P < 0.001) and EC patients (n = 56, bias = 0.251 +/- 0.463, r = 0.934, P < 0.001). However, for both cancers, the overall shapes of the curves of hazard ratio (HR) against elnTLG or mlnTLG were very similar. Conclusion: Total lesion glycolysis can be estimated from three CT features with scanner-specific coefficients, and it is similar to the measured value in predicting the outcome of cancer patients.
Preoperative Contrast-Enhanced MRI in Differentiating Glioblastoma From Low-Grade Gliomas in The Cancer Imaging Archive Database: A Proof-of-Concept Study
Zhang, Huangqi
Zhang, Binhao
Pan, Wenting
Dong, Xue
Li, Xin
Chen, Jinyao
Wang, Dongnv
Ji, Wenbin
Frontiers in Oncology2022Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
PURPOSE: This study aimed to develop a repeatable MRI-based machine learning model to differentiate between low-grade gliomas (LGGs) and glioblastoma (GBM) and provide more clinical information to improve treatment decision-making.
METHODS: Preoperative MRIs of gliomas from The Cancer Imaging Archive (TCIA)-GBM/LGG database were selected. The tumor on contrast-enhanced MRI was segmented. Quantitative image features were extracted from the segmentations. A random forest classification algorithm was used to establish a model in the training set. In the test phase, a random forest model was tested using an external test set. Three radiologists reviewed the images for the external test set. The area under the receiver operating characteristic curve (AUC) was calculated. The AUCs of the radiomics model and radiologists were compared.
RESULTS: The random forest model was fitted using a training set consisting of 142 patients [mean age, 52 years ± 16 (standard deviation); 78 men], comprising 88 cases of GBM. The external test set included 25 patients (14 with GBM). Random forest analysis yielded an AUC of 1.00 [95% confidence interval (CI): 0.86-1.00]. The AUCs for the three readers were 0.92 (95% CI 0.74-0.99), 0.70 (95% CI 0.49-0.87), and 0.59 (95% CI 0.38-0.78). No statistically significant difference was found between the model's AUC and that of Reader 1 (1.00 vs. 0.92; p = 0.16).
CONCLUSION: An MRI radiomics-based random forest model proved useful in differentiating GBM from LGG and showed better diagnostic performance than that of two inexperienced radiologists.
RDAU-Net: Based on a Residual Convolutional Neural Network With DFP and CBAM for Brain Tumor Segmentation
Wang, Jingjing
Yu, Zishu
Luan, Zhenye
Ren, Jinwen
Zhao, Yanhua
Yu, Gang
Frontiers in Oncology2022Journal Article, cited 0 times
BraTS-TCGA-GBM
Due to the high heterogeneity of brain tumors, automatic segmentation of brain tumors remains a challenging task. In this paper, we propose RDAU-Net, built by adding dilated feature pyramid (DFP) blocks with 3D Convolutional Block Attention Module (CBAM) blocks and by inserting 3D CBAM blocks after the skip-connection layers. The CBAM, with channel attention and spatial attention, facilitates the combination of more expressive feature information, thereby leading to more efficient extraction of contextual information from images at various scales. Performance was evaluated on the Multimodal Brain Tumor Segmentation (BraTS) challenge data. Experimental results show that RDAU-Net achieves state-of-the-art performance: the Dice coefficient for the whole tumor (WT) on the BraTS 2019 dataset exceeded the baseline value by 9.2%.
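For context, CBAM applies channel attention followed by spatial attention to a feature map. A compact 2D PyTorch sketch of a CBAM-style block (the paper uses a 3D variant inside RDAU-Net; this is an illustrative reimplementation, not the authors' code):

import torch
import torch.nn as nn

class CBAM(nn.Module):
    """CBAM-style block: channel attention, then spatial attention (2D sketch)."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Shared MLP for channel attention over avg- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Conv over channel-pooled maps for spatial attention
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                        # channel attention
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(pooled))         # spatial attention

x = torch.randn(1, 32, 64, 64)
print(CBAM(32)(x).shape)   # torch.Size([1, 32, 64, 64])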
Generating Full-Field Digital Mammogram From Digitized Screen-Film Mammogram for Breast Cancer Screening With High-Resolution Generative Adversarial Network
Zhou, Yuanpin
Wei, Jun
Wu, Dongmei
Zhang, Yaqin
Frontiers in Oncology2022Journal Article, cited 0 times
CBIS-DDSM
Purpose: Developing deep learning algorithms for breast cancer screening is limited by the lack of labeled full-field digital mammograms (FFDMs). Since FFDM is a newer technique that arose in recent decades and replaced digitized screen-film mammograms (DFM) as the main technique for breast cancer screening, most mammogram datasets are still stored in the form of DFM. A solution for developing deep learning algorithms based on FFDM while leveraging existing labeled DFM datasets is a generative algorithm that generates FFDM from DFM. Generating high-resolution FFDM from DFM remains a challenge due to the limitations of network capacity and GPU memory.
Method: In this study, we developed a deep-learning-based generative algorithm, HRGAN, to generate synthesized FFDM (SFFDM) from DFM. Importantly, our algorithm preserves image resolution and detail while using high-resolution DFM as input. Our model used FFDM and DFM for training. First, a sliding window was used to crop DFMs and FFDMs into 256 × 256 pixel patches. Second, the patches were divided into three categories (breast, background, and boundary) by breast masks. Patches from the DFM and FFDM datasets were paired as inputs for training, where paired patches were sampled from the same category of the two different image sets. U-Net-like generators and modified discriminators with two-channel outputs, one channel for distinguishing real FFDMs from SFFDMs and the other representing a probability map for the breast mask, were used in our algorithm. Last, a study comprising a mass segmentation task and a calcification detection task was designed to evaluate the usefulness of HRGAN.
Results: Two public mammography datasets, the CBIS-DDSM dataset and the INbreast dataset, were included in our experiments. The CBIS-DDSM dataset includes 753 calcification cases and 891 mass cases with verified pathology information, for a total of 3568 DFMs. The INbreast dataset contains a total of 410 FFDMs with annotations of masses, calcifications, asymmetries, and distortions. 1784 DFMs and 205 FFDMs were randomly selected as Dataset A. The remaining DFMs from the CBIS-DDSM dataset were selected as Dataset B, and the remaining FFDMs from the INbreast dataset as Dataset C. All DFMs and FFDMs were normalized to 100 μm × 100 μm in our experiments.
Conclusions: The proposed HRGAN can generate high-resolution SFFDMs from DFMs. Extensive experiments showed the SFFDMs were able to help improve the performance of deep-learning-based algorithms for breast cancer screening on DFM when the size of the training dataset is small.
18F-Fluorodeoxyglucose Positron Emission Tomography of Head and Neck Cancer: Location and HPV Specific Parameters for Potential Treatment Individualization
Zschaeck, Sebastian
Weingärtner, Julian
Lombardo, Elia
Marschner, Sebastian
Hajiyianni, Marina
Beck, Marcus
Zips, Daniel
Li, Yimin
Lin, Qin
Amthauer, Holger
Troost, Esther G. C.
van den Hoff, Jörg
Budach, Volker
Kotzerke, Jörg
Ferentinos, Konstantinos
Karagiannis, Efstratios
Kaul, David
Gregoire, Vincent
Holzgreve, Adrien
Albert, Nathalie L.
Nikulin, Pavel
Bachmann, Michael
Kopka, Klaus
Krause, Mechthild
Baumann, Michael
Kazmierska, Joanna
Cegla, Paulina
Cholewinski, Witold
Strouthos, Iosif
Zöphel, Klaus
Majchrzak, Ewa
Landry, Guillaume
Belka, Claus
Stromberger, Carmen
Hofheinz, Frank
Frontiers in Oncology2022Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
HNSCC
QIN-HEADNECK
TCGA-HNSC
Purpose: 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) is utilized for staging and treatment planning of head and neck squamous cell carcinomas (HNSCC). Older publications on its prognostic relevance showed inconclusive results, most probably due to small study sizes. This study evaluates the prognostic and potentially predictive value of FDG-PET in a large multi-center analysis.
Methods: Original analysis of individual FDG-PET and patient data from 16 international centers (8 institutional datasets, 8 public repositories) with 1104 patients. All patients received curative intent radiotherapy/chemoradiation (CRT) and pre-treatment FDG-PET imaging. Primary tumors were semi-automatically delineated for calculation of SUVmax, SUVmean, metabolic tumor volume (MTV) and total lesion glycolysis (TLG). Cox regression analyses were performed for event-free survival (EFS), overall survival (OS), loco-regional control (LRC) and freedom from distant metastases (FFDM).
Results: FDG-PET parameters were associated with patient outcome in the whole cohort for all clinical endpoints (EFS, OS, LRC, FFDM) in uni- and multivariate Cox regression analyses. Several previously published cut-off values were successfully validated. Subgroup analyses identified tumor- and human papillomavirus (HPV)-specific parameters. In HPV-positive oropharyngeal cancer (OPC), SUVmax was well suited to identify patients with excellent LRC for organ preservation: patients with an SUVmax of 14 or less were unlikely to develop loco-regional recurrence after definitive CRT. In contrast, FDG-PET parameters delivered only limited prognostic information in laryngeal cancer.
Conclusion: FDG-PET parameters bear considerable prognostic value in HNSCC and potential predictive value in subgroups of patients, especially regarding treatment de-intensification and organ preservation. The potential predictive value needs further validation in appropriate control groups. Further research on advanced imaging approaches, including radiomics and artificial intelligence methods, should implement the identified cut-off values as benchmark routine imaging parameters.
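The PET parameters analyzed above are defined simply: MTV is the volume of the delineated metabolically active lesion, and TLG = SUVmean × MTV. A toy Python sketch using a fixed SUV threshold of 2.5 for delineation, which is one common convention and not necessarily the semi-automatic method used in the study:

import numpy as np

def mtv_tlg(suv, voxel_volume_ml, threshold=2.5):
    """MTV (ml) and TLG from an SUV volume, using a fixed-SUV delineation."""
    lesion = suv >= threshold
    mtv = lesion.sum() * voxel_volume_ml
    suv_mean = suv[lesion].mean() if lesion.any() else 0.0
    return mtv, suv_mean * mtv                 # TLG = SUVmean x MTV

rng = np.random.default_rng(0)
suv = rng.gamma(shape=2.0, scale=1.5, size=(40, 40, 40))  # synthetic SUV volume
mtv, tlg = mtv_tlg(suv, voxel_volume_ml=0.064)            # e.g. 4 x 4 x 4 mm voxels
print(f"MTV = {mtv:.1f} ml, TLG = {tlg:.1f}")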
Spherical Convolutional Neural Networks for Survival Rate Prediction in Cancer Patients
Sinzinger, Fabian
Astaraki, Mehdi
Smedby, Örjan
Moreno, Rodrigo
Frontiers in Oncology2022Journal Article, cited 0 times
HEAD-NECK-RADIOMICS-HN1
NSCLC-Radiomics
NSCLC-Radiomics-Genomics
Objective: Survival Rate Prediction (SRP) is a valuable tool to assist in the clinical diagnosis and treatment planning of lung cancer patients. In recent years, deep learning (DL) based methods have shown great potential in medical image processing in general and SRP in particular. This study proposes a fully automated method for SRP from computed tomography (CT) images, which combines an automatic segmentation of the tumor with a DL-based method for extracting rotationally invariant features.
Methods: In the first stage, the tumor is segmented from the CT image of the lungs. Here, we use a deep-learning-based method that entails a variational autoencoder to provide more information to a U-Net segmentation model. Next, the 3D volumetric image of the tumor is projected onto 2D spherical maps. These spherical maps serve as inputs for a spherical convolutional neural network that approximates the log risk for a generalized Cox proportional hazard model.
Results: The proposed method is compared with 17 baseline methods that combine different feature sets and prediction models, using three publicly available datasets: Lung1 (n=422), Lung3 (n=89), and H&N1 (n=136). We observed C-index scores comparable to those of the best-performing baseline methods in a 5-fold cross-validation on Lung1 (0.59 ± 0.03 vs. 0.62 ± 0.04), while slightly outperforming all methods in inter-dataset evaluation (0.64 vs. 0.63). The best-performing method from the first experiment dropped to 0.61 and 0.62 on Lung3 and H&N1, respectively.
Discussion: The experiments suggest that the performance of spherical features is comparable with that of previous approaches, and that they generalize better when applied to unseen datasets. This might imply that orientation-independent shape features are relevant for SRP. The performance of the proposed method was very similar with manual and automatic segmentation, which makes the proposed model useful in cases where expert annotations are not available or are difficult to obtain.
Artificial intelligence in the radiomic analysis of glioblastomas: A review, taxonomy, and perspective
Zhu, Ming
Li, Sijia
Kuang, Yu
Hill, Virginia B.
Heimberger, Amy B.
Zhai, Lijie
Zhai, Shengjie
Frontiers in Oncology2022Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
CPTAC-GBM
TCGA-GBM
Radiological imaging techniques, including magnetic resonance imaging (MRI) and positron emission tomography (PET), are the standard-of-care non-invasive diagnostic approaches widely applied in neuro-oncology. Unfortunately, accurate interpretation of radiological imaging data is constantly challenged by the indistinguishable radiological image features shared by different pathological changes associated with tumor progression and/or various therapeutic interventions. In recent years, machine learning (ML)-based artificial intelligence (AI) technology has been widely applied in medical image processing and bioinformatics due to its advantages in implicit image feature extraction and integrative data analysis. Despite its recent rapid development, ML technology still faces many hurdles for its broader applications in neuro-oncological radiomic analysis, such as lack of large accessible standardized real patient radiomic brain tumor data of all kinds and reliable predictions on tumor response upon various treatments. Therefore, understanding ML-based AI technologies is critically important to help us address the skyrocketing demands of neuro-oncology clinical deployments. Here, we provide an overview on the latest advancements in ML techniques for brain tumor radiomic analysis, emphasizing proprietary and public dataset preparation and state-of-the-art ML models for brain tumor diagnosis, classifications (e.g., primary and secondary tumors), discriminations between treatment effects (pseudoprogression, radiation necrosis) and true progression, survival prediction, inflammation, and identification of brain tumor biomarkers. We also compare the key features of ML models in the realm of neuroradiology with ML models employed in other medical imaging fields and discuss open research challenges and directions for future work in this nascent precision medicine area.
Deep learning auto-segmentation of cervical skeletal muscle for sarcopenia analysis in patients with head and neck cancer
Naser, Mohamed A.
Wahid, Kareem A.
Grossberg, Aaron J.
Olson, Brennan
Jain, Rishab
El-Habashy, Dina
Dede, Cem
Salama, Vivian
Abobakr, Moamen
Mohamed, Abdallah S. R.
He, Renjie
Jaskari, Joel
Sahlsten, Jaakko
Kaski, Kimmo
Fuller, Clifton D.
Frontiers in Oncology2022Journal Article, cited 0 times
HNSCC
Background/Purpose: Sarcopenia is a prognostic factor in patients with head and neck cancer (HNC). Sarcopenia can be determined using the skeletal muscle index (SMI) calculated from cervical neck skeletal muscle (SM) segmentations. However, SM segmentation requires manual input, which is time-consuming and variable. Therefore, we developed a fully-automated approach to segment cervical vertebra SM.
Materials/Methods: 390 HNC patients with contrast-enhanced CT scans were utilized (300-training, 90-testing). Ground-truth single-slice SM segmentations at the C3 vertebra were manually generated. A multi-stage deep learning pipeline was developed, where a 3D ResUNet auto-segmented the C3 section (33 mm window), the middle slice of the section was auto-selected, and a 2D ResUNet auto-segmented the auto-selected slice. Both the 3D and 2D approaches trained five sub-models (5-fold cross-validation) and combined sub-model predictions on the test set using majority vote ensembling. Model performance was primarily determined using the Dice similarity coefficient (DSC). Predicted SMI was calculated using the auto-segmented SM cross-sectional area. Finally, using established SMI cutoffs, we performed a Kaplan-Meier analysis to determine associations with overall survival.
Results: Mean test set DSC of the 3D and 2D models were 0.96 and 0.95, respectively. Predicted SMI had a high correlation with the ground-truth SMI in males and females (r>0.96). Predicted SMI stratified patients for overall survival in males (log-rank p = 0.01) but not females (log-rank p = 0.07), consistent with ground-truth SMI.
Conclusion: We developed a high-performance, multi-stage, fully-automated approach to segment cervical vertebra SM. Our study is an essential step towards fully-automated sarcopenia-related decision-making in patients with HNC.
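For reference, the two evaluation ingredients named above, the Dice similarity coefficient and majority-vote ensembling of the sub-model masks, can be sketched in a few lines (array shapes and names are illustrative assumptions, not the authors' code):

    import numpy as np

    def dice(pred: np.ndarray, gt: np.ndarray) -> float:
        # DSC = 2|A ∩ B| / (|A| + |B|) for binary masks
        inter = np.logical_and(pred, gt).sum()
        return 2.0 * inter / (pred.sum() + gt.sum())

    def majority_vote(masks: np.ndarray) -> np.ndarray:
        # masks: (n_models, H, W) binary predictions from the five sub-models;
        # a pixel is foreground when more than half of the models vote for it
        return (2 * masks.sum(axis=0) > masks.shape[0]).astype(np.uint8)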
Improving radiomic model reliability using robust features from perturbations for head-and-neck carcinoma
Teng, Xinzhi
Zhang, Jiang
Ma, Zongrui
Zhang, Yuanpeng
Lam, Saikit
Li, Wen
Xiao, Haonan
Li, Tian
Li, Bing
Zhou, Ta
Ren, Ge
Lee, Francis Kar-ho
Au, Kwok-hung
Lee, Victor Ho-fun
Chang, Amy Tien Yee
Cai, Jing
Frontiers in Oncology2022Journal Article, cited 0 times
Head-Neck-PET-CT
OPC-Radiomics
Background: Using highly robust radiomic features in modeling is recommended, yet the impact of doing so on the resulting radiomic model is unclear. This study evaluated the robustness and generalizability of radiomic models after screening out low-robustness features before modeling. The results were validated with four datasets and two clinically relevant tasks.
Materials and methods: A total of 1,419 head-and-neck cancer patients' computed tomography images, gross tumor volume segmentations, and clinically relevant outcomes (distant metastasis and local-regional recurrence) were collected from four publicly available datasets. A perturbation method was implemented to simulate image variations, and radiomic feature robustness was quantified using the intra-class correlation coefficient (ICC). Three radiomic models were built using all features (ICC > 0), good-robust features (ICC > 0.75), and excellent-robust features (ICC > 0.95), respectively. A filter-based feature selection and Ridge classification method were used to construct the radiomic models. Model performance was assessed with both robustness and generalizability. The robustness of the model was evaluated by the ICC, and the generalizability of the model was quantified by the train-test difference of the Area Under the Receiver Operating Characteristic Curve (AUC).
Results: The average model robustness ICC improved significantly from 0.65 to 0.78 (P < 0.0001) with good-robust features and to 0.91 (P < 0.0001) with excellent-robust features. Model generalizability also increased substantially: the mean train-test AUC difference narrowed from 0.21 to 0.18 (P < 0.001) with good-robust features and to 0.12 (P < 0.0001) with excellent-robust features. Furthermore, good-robust features yielded the best average AUC on the unseen datasets, 0.58 (P < 0.001), across the four datasets and clinical outcomes.
Conclusions: Including only robust features in radiomic modeling significantly improves model robustness and generalizability on unseen datasets. Yet the robustness of a radiomic model still has to be verified even when it is built from robust features, and an overly strict robustness threshold may lower the discriminative power of the model and thus prevent optimal performance on unseen datasets.
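A minimal sketch of how such ICC-based feature screening could be implemented; ICC(2,1) (two-way random effects, absolute agreement, single rater) is one common variant, assumed here since the abstract does not specify the exact form:

    import numpy as np

    def icc_2_1(X: np.ndarray) -> float:
        # X: (n_patients, k_perturbations): one row per patient,
        # one column per feature value from a perturbed version of the image
        n, k = X.shape
        grand = X.mean()
        ms_r = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between rows
        ms_c = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between columns
        resid = X - X.mean(axis=1, keepdims=True) - X.mean(axis=0, keepdims=True) + grand
        ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    # keep only features whose ICC across perturbations clears the chosen cutoff:
    # robust = [name for name, vals in features.items() if icc_2_1(vals) > 0.95]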
The relationship between radiomics and pathomics in Glioblastoma patients: Preliminary results from a cross-scale association study
Brancato, Valentina
Cavaliere, Carlo
Garbino, Nunzia
Isgrò, Francesco
Salvatore, Marco
Aiello, Marco
Frontiers in Oncology2022Journal Article, cited 0 times
CPTAC-GBM
Glioblastoma multiforme (GBM) typically exhibits substantial intratumoral heterogeneity at both microscopic and radiological resolution scales. Diffusion Weighted Imaging (DWI) and dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) are two functional MRI techniques that are commonly employed in clinic for the assessment of GBM tumor characteristics. This work presents initial results aiming at determining if radiomics features extracted from preoperative ADC maps and post-contrast T1 (T1C) images are associated with pathomic features arising from H&E digitized pathology images. 48 patients from the publicly available CPTAC-GBM database, for which both radiology and pathology images were available, were involved in the study. 91 radiomics features were extracted from ADC maps and post-contrast T1 images using PyRadiomics. 65 pathomic features were extracted from cell detection measurements from H&E images. Moreover, 91 features were extracted from cell density maps of H&E images at four different resolutions. Radiopathomic associations were evaluated by means of Spearman's correlation (ρ) and factor analysis. p values were adjusted for multiple comparisons by using a false discovery rate adjustment. Significant cross-scale associations were identified between pathomics and ADC, both considering features (n = 186, 0.45 < ρ < 0.74 in absolute value) and factors (n = 5, 0.48 < ρ < 0.54 in absolute value). Significant but fewer ρ values were found concerning the association between pathomics and radiomics features (n = 53, 0.5 < ρ < 0.65 in absolute value) and factors (n = 2, ρ = 0.63 and ρ = 0.53 in absolute value). The results of this study suggest that cross-scale associations may exist between digital pathology and ADC and T1C imaging. This can be useful not only to improve the knowledge concerning GBM intratumoral heterogeneity, but also to strengthen the role of the radiomics approach and its validation in clinical practice as a "virtual biopsy", introducing new insights for omics integration toward a personalized medicine approach.
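The correlation-plus-FDR screening described above can be sketched as follows (function and variable names are assumptions, not the authors' code; the stand-in matrix sizes mirror the abstract's 48 patients, 91 radiomic and 65 pathomic features):

    import numpy as np
    from scipy.stats import spearmanr
    from statsmodels.stats.multitest import multipletests

    def radiopathomic_associations(radiomics, pathomics, alpha=0.05):
        # radiomics: (n_patients, n_r); pathomics: (n_patients, n_p)
        rhos, pvals, pairs = [], [], []
        for i in range(radiomics.shape[1]):
            for j in range(pathomics.shape[1]):
                rho, p = spearmanr(radiomics[:, i], pathomics[:, j])
                rhos.append(rho); pvals.append(p); pairs.append((i, j))
        # Benjamini-Hochberg false discovery rate correction
        reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
        return [(pairs[k], rhos[k], p_adj[k]) for k in range(len(pairs)) if reject[k]]

    rng = np.random.default_rng(0)
    hits = radiopathomic_associations(rng.random((48, 91)), rng.random((48, 65)))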
Assessing and testing anomaly detection for finding prostate cancer in spatially registered multi-parametric MRI
Mayer, Rulon
Turkbey, Baris
Choyke, Peter
Simone, Charles B.
Frontiers in Oncology2023Journal Article, cited 0 times
PROSTATE-MRI
Background: Evaluating and displaying prostate cancer through non-invasive imagery such as Multi-Parametric MRI (MP-MRI) bolsters management of patients. Recent research quantitatively applied supervised target algorithms using vectoral tumor signatures to spatially registered T1, T2, Diffusion, and Dynamic Contrast Enhancement images. This is the first study to apply the Reed-Xiaoli (RX) multi-spectral anomaly detector (unsupervised target detector) to prostate cancer, which searches for voxels that depart from the background normal tissue, and detects aberrant voxels, presumably tumors.
Methods: MP-MRI (T1, T2, diffusion, and dynamic contrast-enhanced images; seven components in total) were prospectively collected from 26 patients and then resized, translated, and stitched to form spatially registered multi-parametric cubes. The covariance matrix (CM) and mean μ were computed from background normal tissue. For RX, noise in the CM was reduced by filtering out principal components (PC), by regularization, and by elliptical envelope minimization. The RX images were compared to images derived from the threshold Adaptive Cosine Estimator (ACE) and quantitative color analysis. Receiver Operator Characteristic (ROC) curves were used for RX and reference images. To quantitatively assess algorithm performance, the Area Under the Curve (AUC) and the Youden Index (YI) points for the ROC curves were computed.
Results: The patient average for the AUC and [YI] from ROC curves for RX from filtering 3 and 4 PC was 0.734[0.706] and 0.727[0.703], respectively, relative to the ACE images. The AUC[YI] for RX from modified Regularization was 0.638[0.639], Regularization 0.716[0.690], elliptical envelope minimization 0.544[0.597], and unprocessed CM 0.581[0.608] using the ACE images as Reference Image. The AUC[YI] for RX from filtering 3 and 4 PC was 0.742[0.711] and 0.740[0.708], respectively, relative to the quantitative color images. The AUC[YI] for RX from modified Regularization was 0.643[0.648], Regularization 0.722[0.695], elliptical envelope minimization 0.508[0.605], and unprocessed CM 0.569[0.615] using the color images as Reference Image. All standard errors were less than 0.020.
Conclusions: This first study of spatially registered MP-MRI applied anomaly detection using RX, an unsupervised target detection algorithm for prostate cancer. For RX, filtering out PC and applying Regularization achieved higher AUC and YI using ACE and color images as references than unprocessed CM, modified Regularization, and elliptical envelope minimization.
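At its core, the RX detector scores each voxel by its squared Mahalanobis distance from the background statistics; a minimal sketch (variable names and the pseudo-inverse, used here as a guard against a singular CM, are assumptions):

    import numpy as np

    def rx_scores(voxels: np.ndarray, background: np.ndarray) -> np.ndarray:
        # voxels: (N, d) multi-parametric vectors; background: (M, d) normal tissue
        mu = background.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(background, rowvar=False))
        diff = voxels - mu
        # squared Mahalanobis distance of each voxel from the background;
        # large values flag aberrant voxels, presumably tumor
        return np.einsum("nd,de,ne->n", diff, cov_inv, diff)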
Pilot study for generating and assessing nomograms and decision curves analysis to predict clinically significant prostate cancer using only spatially registered multi-parametric MRI
Mayer, Rulon
Turkbey, Baris
Choyke, Peter
Simone, Charles B.
Frontiers in Oncology2023Journal Article, cited 0 times
PROSTATE-MRI
Background: Current prostate cancer evaluation can be inaccurate and burdensome. To help non-invasive prostate tumor assessment, recent algorithms applied to spatially registered multi-parametric (SRMP) MRI extracted novel clinically relevant metrics, namely the tumor's eccentricity (shape), signal-to-clutter ratio (SCR), and volume.
Purpose: Conduct a pilot study to predict the risk of developing clinically significant prostate cancer using nomograms and employing Decision Curves Analysis (DCA) from the SRMP MRI-based features to help clinicians non-invasively manage prostate cancer.
Methods: This study retrospectively analyzed 25 prostate cancer patients. MP-MRI (T1, T2, diffusion, dynamic contrast-enhanced) were resized, translated, and stitched to form SRMP MRI. A target detection algorithm [adaptive cosine estimator (ACE)] applied to SRMP MRI determines the tumor's eccentricity, noise-reduced SCR (by regularizing or eliminating principal components (PC) from the covariance matrix), and volume. Pathology assessed whole-mount prostatectomy for Gleason score (GS). Tumors with GS >=4+3 (<=3+4) were judged as "Clinically Significant" ("Insignificant"). Logistic regression combined eccentricity, SCR, and volume to generate a probability distribution. Nomograms and DCA used all patients as well as training (13 patients) and test (12 patients) sets. Area Under the Curve (AUC) for Receiver Operating Characteristic (ROC) curves and p-values evaluated the performance.
Results: Combining eccentricity (0.45 ACE threshold), SCR (3, 4 PCs), SCR (regularized, modified regularization) with tumor volume (0.65 ACE threshold) improved AUC (>0.70) for ROC curves and p-values (<0.05) for logistic fit. DCA showed greater net benefit from model fit than univariate analysis, treating "all," or "none." Training/test sets achieved comparable AUC but with higher p-values.
Conclusions: The performance of nomograms and DCA based on metrics derived from SRMP-MRI in this pilot study was comparable to those using prostate-specific antigen, age, and PI-RADS.
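Decision curve analysis rests on the net-benefit quantity, net benefit = TP/n - FP/n x pt/(1 - pt), evaluated at each threshold probability pt; a minimal sketch (not the authors' code):

    import numpy as np

    def net_benefit(y_true: np.ndarray, y_prob: np.ndarray, pt: float) -> float:
        # net benefit of acting on predictions y_prob >= pt
        n = len(y_true)
        pred = y_prob >= pt
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        return tp / n - fp / n * pt / (1.0 - pt)

    # the "treat all" reference curve predicts 1 for everyone at each threshold:
    # nb_all = net_benefit(y, np.ones_like(y, dtype=float), pt)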
Morphometry-based radiomics for predicting therapeutic response in patients with gliomas following radiotherapy
Sherminie, Lahanda Purage G.
Jayatilake, Mohan L.
Hewavithana, Badra
Weerakoon, Bimali S.
Vijithananda, Sahan M.
Frontiers in Oncology2023Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Introduction: Gliomas are still considered challenging in oncologic management despite developments in treatment approaches. Complete elimination of a glioma might not be possible even after treatment, and assessment of therapeutic response is important to determine the future course of action for patients with such cancers. In recent years, radiomics has emerged as a promising solution with potential applications including prediction of therapeutic response. Hence, this study investigated whether a morphometry-based radiomics signature could be used to predict therapeutic response in patients with gliomas following radiotherapy.
Methods: 105 magnetic resonance (MR) images, including segmented and non-segmented images, were used to extract morphometric features and develop a morphometry-based radiomics signature. After determining the appropriate machine learning algorithm, a prediction model was developed to predict therapeutic response, both with and without elimination of highly correlated features. The model performance was then evaluated.
Results: Tumor grade had the highest contribution to develop the morphometry-based signature. Random forest provided the highest accuracy to train the prediction model derived from the morphometry-based radiomics signature. An accuracy of 86% and area under the curve (AUC) value of 0.91 were achieved for the prediction model evaluated without eliminating the highly correlated features whereas accuracy and AUC value were 84% and 0.92 respectively for the prediction model evaluated after eliminating the highly correlated features.
Discussion: The developed morphometry-based radiomics signature could thus be utilized as a noninvasive biomarker of therapeutic response in patients with gliomas following radiotherapy.
Fully automated 3D body composition analysis and its association with overall survival in head and neck squamous cell carcinoma patients
Rozynek, Miłosz
Gut, Daniel
Kucybała, Iwona
Strzałkowska-Kominiak, Ewa
Tabor, Zbisław
Urbanik, Andrzej
Kłęk, Stanisław
Wojciechowski, Wadim
Frontiers in Oncology2023Journal Article, cited 0 times
Head-Neck-CT-Atlas
Automatic Segmentation
Computer Aided Detection (CADe)
Classification
Organ segmentation
Algorithm Development
Objectives: We developed a method for a fully automated deep-learning segmentation of tissues to investigate if 3D body composition measurements are significant for survival of Head and Neck Squamous Cell Carcinoma (HNSCC) patients. Methods: 3D segmentation of tissues including spine, spine muscles, abdominal muscles, subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and internal organs within the volumetric region limited by the L1 and L5 levels was accomplished using a deep convolutional segmentation architecture - U-net implemented in the nnU-Net framework. It was trained on a separate dataset of 560 single-channel CT slices and used for 3D segmentation of pre-radiotherapy (Pre-RT) and post-radiotherapy (Post-RT) whole body PET/CT or abdominal CT scans of 215 HNSCC patients. Percentages of tissues were used for overall survival analysis using the Cox proportional hazard (PH) model. Results: Our deep learning model successfully segmented all mentioned tissues with Dice's coefficient exceeding 0.95. The 3D measurements including the difference between Pre-RT and Post-RT abdomen and spine muscles percentage, the difference between Pre-RT and Post-RT VAT percentage, and the sum of Pre-RT abdomen and spine muscles percentage together with BMI and Cancer Site were selected and significant at the level of 5% for overall survival. Aside from Cancer Site, the lowest hazard ratio (HR) value (HR, 0.7527; 95% CI, 0.6487-0.8735; p = 0.000183) was observed for the difference between Pre-RT and Post-RT abdomen and spine muscles percentage. Conclusion: Fully automated 3D quantitative measurements of body composition are significant for overall survival in Head and Neck Squamous Cell Carcinoma patients.
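A minimal sketch of fitting such a Cox PH survival model with the lifelines library (the data values and column names below are stand-ins, not the study's data):

    import pandas as pd
    from lifelines import CoxPHFitter

    # stand-in data; real inputs would be the tissue-percentage differences, BMI, etc.
    df = pd.DataFrame({
        "time": [12.0, 30.5, 7.2, 24.1, 18.9, 40.0],      # follow-up in months
        "event": [1, 0, 1, 1, 0, 0],                      # 1 = death observed
        "delta_muscle_pct": [-2.1, -0.5, -3.4, -1.0, -0.2, -0.8],
        "bmi": [24.5, 27.1, 22.0, 30.2, 26.4, 23.3],
    })
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    print(cph.summary[["coef", "exp(coef)", "p"]])  # exp(coef) is the hazard ratio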
A classifier model for prostate cancer diagnosis using CNNs and transfer learning with multi-parametric MRI
Mehmood, Mubashar
Abbasi, Sadam Hussain
Aurangzeb, Khursheed
Majeed, Muhammad Faran
Anwar, Muhammad Shahid
Alhussein, Musaed
Frontiers in Oncology2023Journal Article, cited 0 times
PROSTATEx
Prostate cancer (PCa) is a major global concern, particularly for men, emphasizing the urgency of early detection to reduce mortality. As the second leading cause of cancer-related male deaths worldwide, precise and efficient diagnostic methods are crucial. Given the high-resolution, multi-parametric nature of MRI in PCa, computer-aided diagnostic (CAD) methods have emerged to assist radiologists in identifying anomalies. However, the rapid advancement of medical technology has led to the adoption of deep learning methods. These techniques enhance diagnostic efficiency, reduce observer variability, and consistently outperform traditional approaches. Distinguishing aggressive from non-aggressive cancers under resource constraints is a significant problem in PCa treatment. This study aims to identify PCa using MRI images by combining deep learning and transfer learning (TL). Researchers have explored numerous CNN-based deep learning methods for classifying MRI images related to PCa. In this study, we developed an approach for the classification of PCa using transfer learning on a limited number of images to achieve high performance and help radiologists instantly identify PCa. The proposed methodology adopts the EfficientNet architecture, pre-trained on the ImageNet dataset, and incorporates three branches for feature extraction from different MRI sequences. The extracted features are then combined, significantly enhancing the model's ability to distinguish MRI images accurately. Our model demonstrated remarkable results in classifying prostate cancer, achieving an accuracy rate of 88.89%. Furthermore, comparative results indicate that our approach achieves higher accuracy than both traditional hand-crafted feature techniques and existing deep learning techniques in PCa classification. The proposed methodology can learn more distinctive features in prostate images and correctly identify cancer.
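The three-branch feature-fusion idea can be sketched in PyTorch as follows; the exact backbone variant, sequence choices, and feature dimensions are assumptions, since the abstract does not specify them:

    import torch
    import torch.nn as nn
    from torchvision.models import efficientnet_b0

    class ThreeBranchPCa(nn.Module):
        # one ImageNet-pretrained EfficientNet-B0 per MRI sequence
        # (e.g. T2w, ADC, DWI; the sequence choice is an assumption)
        def __init__(self, n_classes: int = 2):
            super().__init__()
            def backbone():
                m = efficientnet_b0(weights="IMAGENET1K_V1")
                m.classifier = nn.Identity()   # keep the 1280-d pooled features
                return m
            self.branches = nn.ModuleList([backbone() for _ in range(3)])
            self.head = nn.Linear(3 * 1280, n_classes)

        def forward(self, t2w, adc, dwi):
            # concatenate the per-sequence features before classification
            feats = [b(x) for b, x in zip(self.branches, (t2w, adc, dwi))]
            return self.head(torch.cat(feats, dim=1))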
An integrated method for detecting lung cancer via CT scanning via optimization, deep learning, and IoT data transmission
Karimullah, Shaik
Khan, Mudassir
Shaik, Fahimuddin
Alabduallah, Bayan
Almjally, Abrar
Frontiers in Oncology2024Journal Article, cited 0 times
Website
LIDC-IDRI
Segmentation of glioblastomas via 3D FusionNet
Guo, X.
Zhang, B.
Peng, Y.
Chen, F.
Li, W.
Front Oncol2024Journal Article, cited 0 times
Website
UPENN-GBM
3D deep learning model
Magnetic Resonance Imaging (MRI)
SegNet
U-net
brain tumor segmentation
INTRODUCTION: This study presented an end-to-end 3D deep learning model for the automatic segmentation of brain tumors. METHODS: The MRI data used in this study were obtained from a cohort of 630 GBM patients from the University of Pennsylvania Health System (UPENN-GBM). Data augmentation techniques such as flips and rotations were employed to further increase the sample size of the training set. The segmentation performance of models was evaluated by recall, precision, Dice score, Lesion False Positive Rate (LFPR), Average Volume Difference (AVD) and Average Symmetric Surface Distance (ASSD). RESULTS: When applying FLAIR, T1, ceT1, and T2 MRI modalities, FusionNet-A and FusionNet-C were the best-performing models overall, with FusionNet-A particularly excelling in the enhancing tumor areas and FusionNet-C demonstrating strong performance in the necrotic core and peritumoral edema regions. FusionNet-A excels in the enhancing tumor areas across all metrics (0.75 recall, 0.83 precision and 0.74 Dice score) and also performs well in the peritumoral edema regions (0.77 recall, 0.77 precision and 0.75 Dice score). Combinations including FLAIR and ceT1 tend to have better segmentation performance, especially for necrotic core regions. Using only FLAIR achieves a recall of 0.73 for peritumoral edema regions. Visualization results also indicate that our model generally achieves segmentation results similar to the ground truth. DISCUSSION: FusionNet combines the benefits of U-Net and SegNet, outperforming the tumor segmentation performance of both. Although our model effectively segments brain tumors with competitive accuracy, we plan to extend the framework to achieve even better segmentation performance.
Morphological and Fractal Properties of Brain Tumors
Sánchez, J.
Martin-Landrove, M.
Front Physiol2022Journal Article, cited 0 times
Website
REMBRANDT
TCGA-LGG
TCGA-GBM
Radiomics
fractal dimension
local roughness exponent
morphological parameters
scaling analysis
Principal component analysis (PCA)
tumor growth dynamics
tumor interface
tumor surface regularity
visibility graphs
Tumor interface dynamics is a complex process determined by cell proliferation and invasion to neighboring tissues. Parameters extracted from the tumor interface fluctuations allow for the characterization of the particular growth model, which could be relevant for an appropriate diagnosis and the correspondent therapeutic strategy. Previous work, based on scaling analysis of the tumor interface, demonstrated that gliomas strictly behave as it is proposed by the Family-Vicsek ansatz, which corresponds to a proliferative-invasive growth model, while for meningiomas and acoustic schwannomas, a proliferative growth model is more suitable. In the present work, other morphological and dynamical descriptors are used as a complementary view, such as surface regularity, one-dimensional fluctuations represented as ordered series and bi-dimensional fluctuations of the tumor interface. These fluctuations were analyzed by Detrended Fluctuation Analysis to determine generalized fractal dimensions. Results indicate that tumor interface fractal dimension, local roughness exponent and surface regularity are parameters that discriminate between gliomas and meningiomas/schwannomas.
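The paper's scaling analysis is more elaborate, but the basic box-counting estimate of a fractal dimension for a binary interface mask can be sketched as follows (a simplified illustration, not the authors' method):

    import numpy as np

    def box_counting_dimension(mask: np.ndarray) -> float:
        # mask: 2D binary array containing the tumor interface (contour pixels)
        sizes = 2 ** np.arange(1, int(np.log2(min(mask.shape))))
        counts = []
        for s in sizes:
            # count boxes of side s that contain at least one interface pixel
            h = mask.shape[0] // s * s
            w = mask.shape[1] // s * s
            grid = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
            counts.append(grid.sum())
        # slope of log(count) vs. log(1/size) estimates the fractal dimension
        return np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]

    # sanity check: a square outline is a smooth curve, so D should be close to 1
    m = np.zeros((256, 256), dtype=bool)
    m[64, 64:192] = m[191, 64:192] = m[64:192, 64] = m[64:192, 191] = True
    print(box_counting_dimension(m))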
Automated Koos Classification of Vestibular Schwannoma
Kujawa, Aaron
Dorent, Reuben
Connor, Steve
Oviedova, Anna
Okasha, Mohamed
Grishchuk, Diana
Ourselin, Sebastien
Paddick, Ian
Kitchen, Neil
Vercauteren, Tom
Shapey, Jonathan
Frontiers in Radiology2022Journal Article, cited 0 times
Website
Vestibular-Schwannoma-SEG
Classification
Machine Learning
Objective: The Koos grading scale is a frequently used classification system for vestibular schwannoma (VS) that accounts for extrameatal tumor dimension and compression of the brain stem. We propose an artificial intelligence (AI) pipeline to fully automate the segmentation and Koos classification of VS from MRI to improve clinical workflow and facilitate patient management. Methods: We propose a method for Koos classification that does not only rely on available images but also on automatically generated segmentations. Artificial neural networks were trained and tested based on manual tumor segmentations and ground truth Koos grades of contrast-enhanced T1-weighted (ceT1) and high-resolution T2-weighted (hrT2) MR images from subjects with a single sporadic VS, acquired on a single scanner and with a standardized protocol. The first stage of the pipeline comprises a convolutional neural network (CNN) which can segment the VS and 7 adjacent structures. For the second stage, we propose two complementary approaches that are combined in an ensemble. The first approach applies a second CNN to the segmentation output to predict the Koos grade, the other approach extracts handcrafted features which are passed to a Random Forest classifier. The pipeline results were compared to those achieved by two neurosurgeons. Results: Eligible patients (n = 308) were pseudo-randomly split into 5 groups to evaluate the model performance with 5-fold cross-validation. The weighted macro-averaged mean absolute error (MA-MAE), weighted macro-averaged F1 score (F1), and accuracy score of the ensemble model were assessed on the testing sets as follows: MA-MAE = 0.11 ± 0.05, F1 = 89.3 ± 3.0%, accuracy = 89.3 ± 2.9%, which was comparable to the average performance of two neurosurgeons: MA-MAE = 0.11 ± 0.08, F1 = 89.1 ± 5.2, accuracy = 88.6 ± 5.8%. Inter-rater reliability was assessed by calculating Fleiss' generalized kappa (k = 0.68) based on all 308 cases, and intra-rater reliabilities of annotator 1 (k = 0.95) and annotator 2 (k = 0.82) were calculated according to the weighted kappa metric with quadratic (Fleiss-Cohen) weights based on 15 randomly selected cases. Conclusions: We developed the first AI framework to automatically classify VS according to the Koos scale. The excellent results show that the accuracy of the framework is comparable to that of neurosurgeons and may therefore facilitate management of patients with VS. The models, code, and ground truth Koos grades for a subset of publicly available images (n = 188) will be released upon publication.
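For reference, the quadratic (Fleiss-Cohen) weighted kappa reported above is available directly in scikit-learn; the grade vectors below are stand-ins, not the study's data:

    from sklearn.metrics import cohen_kappa_score

    grades_a = [1, 2, 2, 3, 4, 1]   # first reading (stand-in Koos grades)
    grades_b = [1, 2, 3, 3, 4, 1]   # second reading of the same cases
    kappa = cohen_kappa_score(grades_a, grades_b, weights="quadratic")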
Relevance maps: A weakly supervised segmentation method for 3D brain tumours in MRIs
Rajapaksa, Sajith
Khalvati, Farzad
Frontiers in Radiology2022Journal Article, cited 0 times
BraTS-TCGA-GBM
With the increased reliance on medical imaging, deep convolutional neural networks (CNNs) have become an essential tool in medical imaging-based computer-aided diagnostic pipelines. However, training accurate and reliable classification models often requires large fine-grained annotated datasets. To alleviate this, weakly-supervised methods can be used to obtain local information, such as a region of interest, from global labels. This work proposes a weakly-supervised pipeline to extract Relevance Maps of medical images from pre-trained 3D classification models using localized perturbations. The extracted Relevance Map describes a given region's importance to the classification model and produces the segmentation for the region. Furthermore, we propose a novel optimal perturbation generation method that exploits 3D superpixels to find the most relevant area for a given classification using a U-net architecture. This model is trained with a perturbation loss, which maximizes the difference between unperturbed and perturbed predictions. We validated the effectiveness of our methodology by applying it to the segmentation of glioma brain tumours in MRI scans using only classification labels for glioma type. The proposed method outperforms existing methods in both Dice Similarity Coefficient for segmentation and resolution of visualizations.
Artificial neural network-assisted prediction of radiobiological indices in head and neck cancer
Ahmed, S. B. S.
Naeem, S.
Khan, A. M. H.
Qureshi, B. M.
Hussain, A.
Aydogan, B.
Muhammad, W.
Front Artif Intell2024Journal Article, cited 0 times
Website
Head-Neck-CT-Atlas
HNSCC-3DCT-RT
Artificial Neural Network (ANN)
head and neck cancer
normal tissue complication probability
radiation therapy
tumor control probability
BACKGROUND AND PURPOSE: We proposed an artificial neural network model to predict radiobiological parameters for head and neck squamous cell carcinoma patients treated with radiation therapy. The model uses the tumor specification, demographics, and radiation dose distribution to predict the tumor control probability and the normal tissue complication probability. These indices are crucial for the assessment and clinical management of cancer patients during treatment planning. METHODS: Two publicly available datasets of 31 and 215 head and neck squamous cell carcinoma patients treated with conformal radiation therapy were selected. The demographics, tumor specifications, and radiation therapy treatment parameters were extracted from the datasets and used as inputs for the training of the perceptron. Radiobiological indices were calculated by open-source software using dose-volume histograms from radiation therapy treatment plans. Those indices were used as outputs in the training of a single-layer neural network. The distribution of data used for training, validation, and testing purposes was 70%, 15%, and 15%, respectively. RESULTS: The best performance of the neural network was noted at epoch number 32 with a mean squared error of 0.0465. The accuracy of the prediction of radiobiological indices by the artificial neural network in the training, validation, and test phases was determined to be 0.89, 0.87, and 0.82, respectively. We also found that the percentage volume of the parotid inside the planning target volume is a significant parameter for the prediction of normal tissue complication probability. CONCLUSION: We believe that the model has significant potential to predict radiobiological indices and help clinicians in treatment plan evaluation and treatment management of head and neck squamous cell carcinoma patients.
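The radiobiological indices mentioned are typically computed from dose-volume histograms; a minimal sketch of the generalized EUD and the Lyman-Kutcher-Burman NTCP model, one common formulation (the paper's open-source tool and the parameter values below are assumptions, given for illustration only):

    import numpy as np
    from scipy.stats import norm

    def eud(doses, volumes, n):
        # generalized EUD from a differential DVH; volumes must sum to 1
        return np.sum(volumes * doses ** (1.0 / n)) ** n

    def lkb_ntcp(doses, volumes, td50, m, n):
        # Lyman-Kutcher-Burman NTCP: probit of the normalized EUD excess over TD50
        t = (eud(doses, volumes, n) - td50) / (m * td50)
        return norm.cdf(t)

    doses = np.array([10.0, 30.0, 50.0, 66.0])   # Gy, one entry per DVH bin
    volumes = np.array([0.4, 0.3, 0.2, 0.1])     # fractional organ volumes
    print(lkb_ntcp(doses, volumes, td50=46.0, m=0.17, n=0.70))  # illustrative parotid-like values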
Survey of Image Processing Techniques for Brain Pathology Diagnosis: Challenges and Opportunities
In recent years, a number of new products introduced to the global market combine intelligent robotics, artificial intelligence and smart interfaces to provide powerful tools to support professional decision making. However, while brain disease diagnosis from brain scan images is supported by imaging robotics, the data analysis to form a medical diagnosis is performed solely by highly trained medical professionals. Recent advances in medical imaging techniques, artificial intelligence, machine learning and computer vision present new opportunities to build intelligent decision support tools to aid the diagnostic process, increase disease detection accuracy, reduce error, automate the monitoring of a patient's recovery, and discover new knowledge about the disease cause and its treatment. This article introduces the topic of medical diagnosis of brain diseases from MRI-based images. We describe existing multi-modal imaging techniques of the brain's soft tissue and describe in detail how the resulting images are analyzed by a radiologist to form a diagnosis. Several comparisons between the best results of classifying natural scenes and medical image analysis illustrate the challenges of applying existing image processing techniques to the medical image analysis domain. The survey of medical image processing methods also identified several knowledge gaps, the need for automation of image processing analysis, and the need to identify the brain structures in medical images that differentiate healthy tissue from a pathology. This survey is grounded in the cases of brain tumor analysis and traumatic brain injury diagnosis, as these two case studies illustrate the vastly different approaches needed to define, extract, and synthesize meaningful information from multiple MRI image sets for a diagnosis. Finally, the article summarizes artificial intelligence frameworks that are built as multi-stage, hybrid, hierarchical information processing workflows and the benefits of applying these models for medical diagnosis to build intelligent physician's aids with knowledge transparency, expert knowledge embedding, and increased analytical quality.
Clinical implementation of artificial intelligence in neuroradiology with development of a novel workflow-efficient picture archiving and communication system-based automated brain tumor segmentation and radiomic feature extraction
Aboian, M.
Bousabarah, K.
Kazarian, E.
Zeevi, T.
Holler, W.
Merkaj, S.
Cassinelli Petersen, G.
Bahar, R.
Subramanian, H.
Sunku, P.
Schrickel, E.
Bhawnani, J.
Zawalich, M.
Mahajan, A.
Malhotra, A.
Payabvash, S.
Tocino, I.
Lin, M.
Westerhoff, M.
Front Neurosci2022Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
IBSI
PACS (picture archiving and communication system)
artificial intelligence (AL)
brain tumor
feature extraction
glioma
machine learning (ML)
segmentation
Purpose: Personalized interpretation of medical images is critical for optimum patient care, but current tools available to physicians to perform quantitative analysis of patients' medical images in real time are significantly limited. In this work, we describe a novel platform within PACS for volumetric analysis of images and thus development of large expert-annotated datasets in parallel with the radiologist performing the reading, which are critically needed for development of clinically meaningful AI algorithms. Specifically, we implemented a deep learning-based algorithm for automated brain tumor segmentation and radiomics extraction, and embedded it into PACS to accelerate a supervised, end-to-end workflow for image annotation and radiomic feature extraction. Materials and methods: An algorithm was trained to segment whole primary brain tumors on FLAIR images from the multi-institutional glioma BraTS 2021 dataset. The algorithm was validated using an internal dataset from Yale New Haven Health (YNHH) and compared (by Dice similarity coefficient [DSC]) to radiologist manual segmentation. A UNETR deep-learning model was embedded into the Visage 7 (Visage Imaging, Inc., San Diego, CA, United States) diagnostic workstation. The automatically segmented brain tumor was pliable for manual modification. PyRadiomics (Harvard Medical School, Boston, MA) was natively embedded into Visage 7 for feature extraction from the brain tumor segmentations. Results: UNETR brain tumor segmentation took on average 4 s and the median DSC was 86%, which is similar to published literature but lower than the RSNA ASNR MICCAI BraTS challenge 2021. Finally, extraction of 106 radiomic features within PACS took on average 5.8 +/- 0.01 s. The extracted radiomic features did not vary with time of extraction or with whether they were extracted within PACS or outside of PACS. The ability to perform segmentation and feature extraction before the radiologist opens the study was made available in the workflow. Opening the study in PACS allows the radiologist to verify the segmentation and thus annotate the study. Conclusion: Integration of image processing algorithms for tumor auto-segmentation and feature extraction into PACS allows curation of large datasets of annotated medical images and can accelerate translation of research into development of personalized medicine applications in the clinic. The ability to use familiar clinical tools to revise the AI segmentations and natively embedding the segmentation and radiomic feature extraction tools on the diagnostic workstation accelerates the process to generate ground-truth data.
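Outside PACS, the same kind of PyRadiomics extraction can be reproduced in a few lines; the file paths and label value below are placeholders for an actual image/mask pair:

    from radiomics import featureextractor

    # placeholder paths; any co-registered image and segmentation mask would do
    extractor = featureextractor.RadiomicsFeatureExtractor()
    extractor.enableAllFeatures()
    features = extractor.execute("flair.nii.gz", "tumor_mask.nii.gz", label=1)
    for name, value in features.items():
        if not name.startswith("diagnostics"):  # skip provenance entries
            print(name, value)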
A Weighted Voting Ensemble Self-Labeled Algorithm for the Detection of Lung Abnormalities from X-Rays
Livieris, Ioannis
Kanavos, Andreas
Tampakas, Vassilis
Pintelas, Panagiotis
Algorithms2019Journal Article, cited 0 times
Website
TCGA-LUAD
Classification
Algorithm Development
During the last decades, intensive efforts have been devoted to the extraction of useful knowledge from large volumes of medical data employing advanced machine learning and data mining techniques. Advances in digital chest radiography have enabled research and medical centers to accumulate large repositories of classified (labeled) images and mostly of unclassified (unlabeled) images from human experts. Machine learning methods such as semi-supervised learning algorithms have been proposed as a new direction to address the problem of shortage of available labeled data, by exploiting the explicit classification information of labeled data with the information hidden in the unlabeled data. In the present work, we propose a new ensemble semi-supervised learning algorithm for the classification of lung abnormalities from chest X-rays based on a new weighted voting scheme. The proposed algorithm assigns a vector of weights on each component classifier of the ensemble based on its accuracy on each class. Our numerical experiments illustrate the efficiency of the proposed ensemble methodology against other state-of-the-art classification methods.
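A minimal sketch of per-class weighted voting in the spirit of the proposed scheme (the paper's exact weight definition may differ; here each classifier's per-class accuracy is assumed as the weight):

    import numpy as np

    def weighted_vote(preds: np.ndarray, weights: np.ndarray) -> np.ndarray:
        # preds: (n_classifiers, n_samples) predicted class labels
        # weights: (n_classifiers, n_classes), e.g. per-class accuracy of each classifier
        n_clf, n_samples = preds.shape
        scores = np.zeros((n_samples, weights.shape[1]))
        for i in range(n_clf):
            # each classifier adds its class-specific weight to the class it voted for
            scores[np.arange(n_samples), preds[i]] += weights[i, preds[i]]
        return scores.argmax(axis=1)

    preds = np.array([[0, 1, 1], [0, 0, 1], [1, 1, 1]])       # 3 classifiers, 3 samples
    weights = np.array([[0.9, 0.6], [0.8, 0.7], [0.5, 0.9]])  # per-class accuracies
    print(weighted_vote(preds, weights))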
Noninvasive Grading of Glioma Tumor Using Magnetic Resonance Imaging with Convolutional Neural Networks
Khawaldeh, Saed
Pervaiz, Usama
Rafiq, Azhar
Alkhawaldeh, Rami S.
Applied Sciences2017Journal Article, cited 187 times
Website
REMBRANDT
Machine Learning
In recent years, Convolutional Neural Networks (ConvNets) have rapidly emerged as a widespread machine learning technique in a number of applications especially in the area of medical image classification and segmentation. In this paper, we propose a novel approach that uses ConvNet for classifying brain medical images into healthy and unhealthy brain images. The unhealthy images of brain tumors are categorized also into low grades and high grades. In particular, we use the modified version of the Alex Krizhevsky network (AlexNet) deep learning architecture on magnetic resonance images as a potential tumor classification technique. The classification is performed on the whole image where the labels in the training set are at the image level rather than the pixel level. The results showed a reasonable performance in characterizing the brain medical images with an accuracy of 91.16%.
An Automated Segmentation Method for Lung Parenchyma Image Sequences Based on Fractal Geometry and Convex Hull Algorithm
Xiao, Xiaojiao
Zhao, Juanjuan
Qiang, Yan
Wang, Hua
Xiao, Yingze
Zhang, Xiaolong
Zhang, Yudong
Applied Sciences2018Journal Article, cited 1 times
Website
LIDC-IDRI
lung cancer
pulmonary nodules
juxtapleural nodules
A Hybrid End-to-End Approach Integrating Conditional Random Fields into CNNs for Prostate Cancer Detection on MRI
Lapa, Paulo
Castelli, Mauro
Gonçalves, Ivo
Sala, Evis
Rundo, Leonardo
Applied Sciences2020Journal Article, cited 0 times
PROSTATEx
Convolutional Neural Network (CNN)
Prostate
Automatic Pancreas Segmentation Using Coarse-Scaled 2D Model of Deep Learning: Usefulness of Data Augmentation and Deep U-Net
Nishio, Mizuho
Noguchi, Shunjiro
Fujimoto, Koji
Applied Sciences2020Journal Article, cited 0 times
Pancreas-CT
Combinations of data augmentation methods and deep learning architectures for automatic pancreas segmentation on CT images are proposed and evaluated. Images from a public CT dataset of pancreas segmentation were used to evaluate the models. Baseline U-net and deep U-net were chosen for the deep learning models of pancreas segmentation. Methods of data augmentation included conventional methods, mixup, and random image cropping and patching (RICAP). Ten combinations of the deep learning models and the data augmentation methods were evaluated. Four-fold cross validation was performed to train and evaluate these models with data augmentation methods. The dice similarity coefficient (DSC) was calculated between automatic segmentation results and manually annotated labels and these were visually assessed by two radiologists. The performance of the deep U-net was better than that of the baseline U-net with mean DSC of 0.703–0.789 and 0.686–0.748, respectively. In both baseline U-net and deep U-net, the methods with data augmentation performed better than methods with no data augmentation, and mixup and RICAP were more useful than the conventional method. The best mean DSC was obtained using a combination of deep U-net, mixup, and RICAP, and the two radiologists scored the results from this model as good or perfect in 76 and 74 of the 82 cases.
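Of the augmentation methods compared, mixup is the simplest to sketch: random pairs of images and (one-hot) labels are blended with a Beta-distributed coefficient (the alpha value below is an assumption, not the paper's setting):

    import numpy as np

    def mixup(x1, y1, x2, y2, alpha=0.2):
        # blend two image/label pairs; for segmentation, y are one-hot masks
        lam = np.random.beta(alpha, alpha)
        return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2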
Simulation Study of Low-Dose Sparse-Sampling CT with Deep Learning-Based Reconstruction: Usefulness for Evaluation of Ovarian Cancer Metastasis
Urase, Yasuyo
Nishio, Mizuho
Ueno, Yoshiko
Kono, Atsushi K.
Sofue, Keitaro
Kanda, Tomonori
Maeda, Takaki
Nogami, Munenobu
Hori, Masatoshi
Murakami, Takamichi
Applied Sciences2020Journal Article, cited 0 times
TCGA-OV
The usefulness of sparse-sampling CT with deep learning-based reconstruction for detection of metastasis of malignant ovarian tumors was evaluated. We obtained contrast-enhanced CT images (n = 141) of ovarian cancers from a public database, whose images were randomly divided into 71 training, 20 validation, and 50 test cases. Sparse-sampling CT images were calculated slice-by-slice by software simulation. Two deep-learning models for deep learning-based reconstruction were evaluated: Residual Encoder-Decoder Convolutional Neural Network (RED-CNN) and deeper U-net. For 50 test cases, we evaluated the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as quantitative measures. Two radiologists independently performed a qualitative evaluation for the following points: entire CT image quality; visibility of the iliac artery; and visibility of peritoneal dissemination, liver metastasis, and lymph node metastasis. Wilcoxon signed-rank test and McNemar test were used to compare image quality and metastasis detectability between the two models, respectively. The mean PSNR and SSIM performed better with deeper U-net over RED-CNN. For all items of the visual evaluation, deeper U-net scored significantly better than RED-CNN. The metastasis detectability with deeper U-net was more than 95%. Sparse-sampling CT with deep learning-based reconstruction proved useful in detecting metastasis of malignant ovarian tumors and might contribute to reducing overall CT-radiation exposure.
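The two quantitative measures used above are available in scikit-image; a minimal sketch with stand-in arrays (real inputs would be the reference and reconstructed CT slices):

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    rng = np.random.default_rng(0)
    reference = rng.random((256, 256))                              # stand-in CT slice
    reconstruction = reference + 0.01 * rng.standard_normal((256, 256))
    dr = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=dr)
    ssim = structural_similarity(reference, reconstruction, data_range=dr)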
Prediction of Glioma Grades Using Deep Learning with Wavelet Radiomic Features
Çinarer, Gökalp
Emiroğlu, Bülent Gürsel
Yurttakal, Ahmet Haşim
Applied Sciences2020Journal Article, cited 0 times
LGG-1p19qDeletion
Gliomas are the most common primary brain tumors. They are classified into 4 grades (Grade I-II-III-IV) according to the guidelines of the World Health Organization (WHO). The accurate grading of gliomas has clinical significance for planning prognostic treatments, pre-diagnosis, monitoring and administration of chemotherapy. The purpose of this study is to develop a deep learning-based classification method using radiomic features of brain tumor glioma grades with a deep neural network (DNN). The classifier was combined with the discrete wavelet transform (DWT), a powerful feature extraction tool. This study primarily focuses on the four main aspects of the radiomic workflow, namely tumor segmentation, feature extraction, analysis, and classification. We evaluated data from 121 patients with brain tumors (Grade II, n = 77; Grade III, n = 44) from The Cancer Imaging Archive, and 744 radiomic features were obtained by applying low sub-band and high sub-band 3D wavelet transform filters to the 3D tumor images. Quantitative values were statistically analyzed with Mann-Whitney U tests, and 126 radiomic features with significant statistical properties were selected in eight different wavelet filters. Classification performances of the 3D wavelet transform filter groups were measured using accuracy, sensitivity, F1 score, and specificity values using the deep learning classifier model. The proposed model was highly effective in grading gliomas with 96.15% accuracy, 94.12% precision, 100% recall, 96.97% F1 score, and 98.75% Area under the ROC curve. As a result, deep learning and feature selection techniques with wavelet transform filters can be accurately applied using the proposed method in glioma grade classification.
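A minimal sketch of the two building blocks named above, a 3D discrete wavelet transform and Mann-Whitney U feature screening (the wavelet choice and stand-in data are assumptions, not the paper's configuration):

    import numpy as np
    import pywt
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(0)
    volume = rng.random((64, 64, 64))               # stand-in for a 3D tumor image
    subbands = pywt.dwtn(volume, wavelet="coif1")   # 8 sub-bands: 'aaa' ... 'ddd'
    # radiomic features would be computed per sub-band; features separating
    # Grade II (n=77) from Grade III (n=44) are then kept via a Mann-Whitney U test:
    feat_g2 = rng.random(77)                        # stand-in feature values
    feat_g3 = rng.random(44) + 0.3
    stat, p = mannwhitneyu(feat_g2, feat_g3)
    keep = p < 0.05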
A Fully Automatic Procedure for Brain Tumor Segmentation from Multi-Spectral MRI Records Using Ensemble Learning and Atlas-Based Data Enhancement
Győrfi, Ágnes
Szilágyi, László
Kovács, Levente
Applied Sciences2021Journal Article, cited 0 times
BraTS 2015
BraTS 2019
Algorithm Development
Magnetic Resonance Imaging (MRI)
Supervised training
The accurate and reliable segmentation of gliomas from magnetic resonance image (MRI) data has an important role in diagnosis, intervention planning, and monitoring the tumor’s evolution during and after therapy. Segmentation has serious anatomical obstacles like the great variety of the tumor’s location, size, shape, and appearance and the modified position of normal tissues. Other phenomena like intensity inhomogeneity and the lack of standard intensity scale in MRI data represent further difficulties. This paper proposes a fully automatic brain tumor segmentation procedure that attempts to handle all the above problems. Having its foundations on the MRI data provided by the MICCAI Brain Tumor Segmentation (BraTS) Challenges, the procedure consists of three main phases. The first pre-processing phase prepares the MRI data to be suitable for supervised classification, by attempting to fix missing data, suppressing the intensity inhomogeneity, normalizing the histogram of observed data channels, generating additional morphological, gradient-based, and Gabor-wavelet features, and optionally applying atlas-based data enhancement. The second phase accomplishes the main classification process using ensembles of binary decision trees and provides an initial, intermediary labeling for each pixel of test records. The last phase reevaluates these intermediary labels using a random forest classifier, then deploys a spatial region growing-based structural validation of suspected tumors, thus achieving a high-quality final segmentation result. The accuracy of the procedure is evaluated using the multi-spectral MRI records of the BraTS 2015 and BraTS 2019 training data sets. The procedure achieves high-quality segmentation results, characterized by average Dice similarity scores of up to 86%.
Robust Resolution-Enhanced Prostate Segmentation in Magnetic Resonance and Ultrasound Images through Convolutional Neural Networks
Pellicer-Valero, Oscar J.
Gonzalez-Perez, Victor
Ramón-Borja, Juan Luis Casanova
García, Isabel Martín
Benito, María Barrios
Gómez, Paula Pelechano
Rubio-Briones, José
Rupérez, María José
Martín-Guerrero, José D.
Applied Sciences2021Journal Article, cited 1 times
Website
Prostate segmentations are required for an ever-increasing number of medical applications, such as image-based lesion detection, fusion-guided biopsy and focal therapies. However, obtaining accurate segmentations is laborious, requires expertise and, even then, the inter-observer variability remains high. In this paper, a robust, accurate and generalizable model for Magnetic Resonance (MR) and three-dimensional (3D) Ultrasound (US) prostate image segmentation is proposed. It uses a densenet-resnet-based Convolutional Neural Network (CNN) combined with techniques such as deep supervision, checkpoint ensembling and Neural Resolution Enhancement. The MR prostate segmentation model was trained with five challenging and heterogeneous MR prostate datasets (and two US datasets), with segmentations from many different experts with varying segmentation criteria. The model achieves a consistently strong performance in all datasets independently (mean Dice Similarity Coefficient [DSC] above 0.91 for all datasets except for one), outperforming the inter-expert variability significantly in MR (mean DSC of 0.9099 vs. 0.8794). When evaluated on the publicly available Promise12 challenge dataset, it attains a similar performance to the best entries. In summary, the model has the potential of having a significant impact on current prostate procedures, undercutting, and even eliminating, the need for manual segmentations through improvements in terms of robustness, generalizability and output resolution.
Machine Learning and Feature Selection Methods for EGFR Mutation Status Prediction in Lung Cancer
Morgado, Joana
Pereira, Tania
Silva, Francisco
Freitas, Cláudia
Negrão, Eduardo
de Lima, Beatriz Flor
da Silva, Miguel Correia
Madureira, António J.
Ramos, Isabel
Hespanhol, Venceslau
Costa, José Luis
Cunha, António
Oliveira, Hélder P.
Applied Sciences2021Journal Article, cited 0 times
Website
NSCLC Radiogenomics
Support Vector Machine (SVM)
Machine Learning
LUNG
The evolution of personalized medicine has changed the therapeutic strategy from classical chemotherapy and radiotherapy to a genetic modification targeted therapy, and although biopsy is the traditional method to genetically characterize lung cancer tumor, it is an invasive and painful procedure for the patient. Nodule image features extracted from computed tomography (CT) scans have been used to create machine learning models that predict gene mutation status in a noninvasive, fast, and easy-to-use manner. However, recent studies have shown that radiomic features extracted from an extended region of interest (ROI) beyond the tumor might be more relevant to predict the mutation status in lung cancer, and consequently may be used to significantly decrease the mortality rate of patients battling this condition. In this work, we investigated the relation between image phenotypes and the mutation status of Epidermal Growth Factor Receptor (EGFR), the most frequently mutated gene in lung cancer with several approved targeted therapies, using radiomic features extracted from the lung containing the nodule. A variety of linear, nonlinear, and ensemble predictive classification models, along with several feature selection methods, were used to classify the binary outcome of wild-type or mutant EGFR mutation status. The results show that a comprehensive approach using a ROI that included the lung with nodule can capture relevant information and successfully predict the EGFR mutation status with increased performance compared to local nodule analyses. Linear Support Vector Machine, Elastic Net, and Logistic Regression, combined with the Principal Component Analysis feature selection method implemented with 70% of variance in the feature set, were the best-performing classifiers, reaching Area Under the Curve (AUC) values ranging from 0.725 to 0.737. This approach that exploits a holistic analysis indicates that information from more extensive regions of the lung containing the nodule allows a more complete lung cancer characterization and should be considered in future radiogenomic studies.
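The best-performing pipeline described above (standardization, PCA retaining 70% of the variance, linear SVM) can be sketched with scikit-learn; the data below are stand-ins, not the study's features:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((117, 500))        # stand-in radiomic feature matrix
    y = rng.integers(0, 2, 117)       # stand-in EGFR mutation labels

    clf = make_pipeline(
        StandardScaler(),
        PCA(n_components=0.70),       # keep components explaining 70% of the variance
        SVC(kernel="linear"),
    )
    aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(aucs.mean())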
HGG and LGG Brain Tumor Segmentation in Multi-Modal MRI Using Pretrained Convolutional Neural Networks of Amazon Sagemaker
Lefkovits, S.
Lefkovits, L.
Szilagyi, L.
Applied Sciences-Basel2022Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BraTS 2020
BRAIN
Segmentation
Magnetic Resonance Imaging (MRI)
Deep learning
Convolutional Neural Network (CNN)
Cloud computing
Automatic brain tumor segmentation from multimodal MRI plays a significant role in assisting the diagnosis, treatment, and surgery of glioblastoma and lower grade glioma. In this article, we propose applying several deep learning techniques implemented in the AWS SageMaker Framework. The different CNN architectures are adapted and fine-tuned for our purpose of brain tumor segmentation. The experiments are evaluated and analyzed to obtain the best possible parameters for the models created. The selected architectures are trained on the publicly available BraTS 2017-2020 dataset. The segmentation distinguishes the background, healthy tissue, whole tumor, edema, enhanced tumor, and necrosis. Further, a random search for parameter optimization is presented to additionally improve the architectures obtained. Lastly, we also compute the detection results of the ensemble model created from the weighted average of the six models described. The goal of the ensemble is to improve the segmentation at the tumor tissue boundaries. Our results are compared to the BraTS 2020 competition and leaderboard and are among the first 25% considering the ranking of Dice scores.
Intelligent Computer-Aided Model for Efficient Diagnosis of Digital Breast Tomosynthesis 3D Imaging Using Deep Learning
El-Shazli, Alaa M. Adel
Youssef, Sherin M.
Soliman, Abdel Hamid
Applied Sciences2022Journal Article, cited 0 times
Breast-Cancer-Screening-DBT
Digital breast tomosynthesis (DBT) is a highly promising 3D imaging modality for breast diagnosis. Tissue overlapping is a challenge with traditional 2D mammograms; however, since digital breast tomosynthesis can obtain three-dimensional images, tissue overlapping is reduced, making it easier for radiologists to detect abnormalities and resulting in improved and more accurate diagnosis. In this study, a new computer-aided multi-class diagnosis system is proposed that integrates DBT augmentation and colour feature mapping with a modified deep learning architecture (Mod_AlexNet). The proposed architecture incorporates an optimization layer with multiple high-performing optimizers so that it can be evaluated and optimized using various optimization techniques. Two experimental scenarios are applied: the first scenario proposed a computer-aided diagnosis (CAD) model that integrated DBT augmentation, image enhancement techniques and colour feature mapping with six deep learning models for feature extraction, including ResNet-18, AlexNet, GoogleNet, MobileNetV2, VGG-16 and DenseNet-201, to efficiently classify DBT slices. The second scenario compared the performance of the newly proposed Mod_AlexNet architecture and traditional AlexNet using several optimization techniques, and different evaluation performance metrics were computed. The optimization techniques included adaptive moment estimation (Adam), root mean squared propagation (RMSProp), and stochastic gradient descent with momentum (SGDM), for different batch sizes, including 32, 64 and 512. Experiments have been conducted on a large benchmark dataset of breast tomography scans. The performance of the first scenario was compared in terms of accuracy, precision, sensitivity, specificity, runtime, and f1-score, while in the second scenario, performance was compared in terms of training accuracy, training loss, and test accuracy. In the first scenario, results demonstrated that AlexNet reported improvement rates of 1.69%, 5.13%, 6.13%, 4.79% and 1.6% compared to ResNet-18, MobileNetV2, GoogleNet, DenseNet-201 and VGG16, respectively. Experimental analysis with different optimization techniques and batch sizes demonstrated that the proposed Mod_AlexNet architecture outperformed AlexNet in terms of test accuracy, with improvement rates of 3.23%, 1.79% and 1.34% when compared using SGDM, Adam, and RMSProp optimizers, respectively.
DETECT-LC: A 3D Deep Learning and Textural Radiomics Computational Model for Lung Cancer Staging and Tumor Phenotyping Based on Computed Tomography Volumes
Fathalla, Karma M.
Youssef, Sherin M.
Mohammed, Nourhan
Applied Sciences2022Journal Article, cited 0 times
NSCLC Radiogenomics
NSCLC-Radiomics
NSCLC-Radiomics-Genomics
Lung cancer is one of the primary causes of cancer-related deaths worldwide. Timely diagnosis and precise staging are pivotal for treatment planning and thus can lead to increased survival rates. The application of advanced machine learning techniques helps in effective diagnosis and staging. In this study, a multistage neural-network-based computational model, DETECT-LC, is proposed. DETECT-LC handles the challenge of choosing discriminative CT slices for constructing 3D volumes, using Haralick features, histogram-based radiomics, and unsupervised clustering. The ALT-CNN-DENSE Net architecture is introduced as part of DETECT-LC for voxel-based classification. DETECT-LC offers an automatic threshold-based segmentation approach instead of the manual procedure, to help mitigate this burden for radiologists and clinicians. DETECT-LC also presents a slice selection approach and a newly proposed, relatively lightweight 3D CNN architecture to improve on the performance of existing studies. The proposed pipeline is employed for tumor phenotyping and staging. DETECT-LC performance is assessed through a range of experiments, in which DETECT-LC attains outstanding performance surpassing its counterparts in terms of accuracy, sensitivity, F1-score and Area under Curve (AuC). For histopathology classification, DETECT-LC's average performance achieved an improvement of 20% in overall accuracy, 0.19 in sensitivity, 0.16 in F1-score and 0.16 in AuC over the state of the art. A similar enhancement is reached for staging, where higher overall accuracy, sensitivity and F1-score are attained with differences of 8%, 0.08 and 0.14.
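Haralick-style texture descriptors of the kind used for slice selection can be computed from a gray-level co-occurrence matrix with scikit-image; a minimal sketch with a stand-in slice (distances, angles, and properties are illustrative choices):

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(0)
    slice_img = rng.integers(0, 256, (128, 128), dtype=np.uint8)  # stand-in CT slice
    glcm = graycomatrix(slice_img, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    homogeneity = graycoprops(glcm, "homogeneity")[0, 0]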
Machine Learning Algorithm Accuracy Using Single- versus Multi-Institutional Image Data in the Classification of Prostate MRI Lesions
Provenzano, Destie
Melnyk, Oleksiy
Imtiaz, Danish
McSweeney, Benjamin
Nemirovsky, Daniel
Wynne, Michael
Whalen, Michael
Rao, Yuan James
Loew, Murray
Haji-Momenian, Shawn
Applied Sciences2023Journal Article, cited 0 times
PROSTATEx
Classification
Algorithm Development
Featured Application: The purpose of this study was to determine the efficacy of highly accurate ML classification algorithms trained on prostate image data from one institution and tested on image data from another institution. Abstract: (1) Background: Recent studies report high accuracies when using machine learning (ML) algorithms to classify prostate cancer lesions on publicly available datasets. However, it is unknown if these trained models generalize well to data from different institutions. (2) Methods: This was a retrospective study using multi-parametric Magnetic Resonance Imaging (mpMRI) data from our institution (63 mpMRI lesions) and the ProstateX-2 challenge, a publicly available annotated image set (112 mpMRI lesions). Residual Neural Network (ResNet) algorithms were trained to classify lesions as high-risk (hrPCA) or low-risk/benign. Models were trained on (a) ProstateX-2 data, (b) local institutional data, and (c) combined ProstateX-2 and local data. The models were then tested on (a) ProstateX-2, (b) local and (c) combined ProstateX-2 and local data. (3) Results: Models trained on either local or ProstateX-2 image data had high Area Under the ROC Curve (AUC) values (0.82–0.98) in the classification of hrPCA when tested on their own respective populations. AUCs decreased significantly (0.23–0.50, p < 0.01) when models were tested on image data from the other institution. Models trained on image data from both institutions re-achieved high AUCs (0.83–0.99). (4) Conclusions: Accurate prostate cancer classification models trained on single-institutional image data performed poorly when tested on outside-institutional image data. Heterogeneous multi-institutional training image data will likely be required to achieve broadly applicable mpMRI models.
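The train-on-one-site, test-on-another protocol is easy to reproduce in outline. In this sketch a random forest over precomputed lesion features stands in for the ResNet image models used in the study; X_a/y_a and X_b/y_b are placeholders for the two institutions' feature matrices and labels:

```python
# Hedged sketch: cross-site AUC check, then pooled multi-site retraining.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def site_transfer(X_a, y_a, X_b, y_b):
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_a, y_a)
    cross_auc = roc_auc_score(y_b, clf.predict_proba(X_b)[:, 1])  # external-site test
    pooled = RandomForestClassifier(n_estimators=200, random_state=0).fit(
        np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))        # multi-site training
    return cross_auc, pooled
```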
Deep Learning-Based Radiomics for Prognostic Stratification of Low-Grade Gliomas Using a Multiple-Gene Signature
Karabacak, Mert
Ozkara, Burak B.
Senparlak, Kaan
Bisdas, Sotirios
Applied Sciences2023Journal Article, cited 0 times
Website
TCGA-LGG
Radiomics
Radiogenomics
Deep Learning
Glioma
Low-grade gliomas are a heterogeneous group of infiltrative neoplasms. Radiomics allows the characterization of phenotypes with high-throughput extraction of quantitative imaging features from radiologic images. Deep learning models, such as convolutional neural networks (CNNs), offer well-performing models and a simplified pipeline through automatic feature learning. In our study, MRI data were retrospectively obtained from The Cancer Imaging Archive (TCIA), which contains MR images for a subset of the LGG patients in The Cancer Genome Atlas (TCGA). Corresponding molecular genetics and clinical information were obtained from TCGA. The three genes included in the genetic signature were WEE1, CRTAC1, and SEMA4G. A CNN-based deep learning model was used to classify patients into low- and high-risk groups, with the median gene signature risk score as the cut-off value. The data were randomly split into training and test sets, with 61 patients in the training set and 20 in the test set. In the test set, the models using T1- and T2-weighted images had areas under the receiver operating characteristic curve of 73% and 79%, respectively. In conclusion, we developed a CNN-based model to predict non-invasively the risk stratification provided by the prognostic gene signature in LGGs. Numerous previously discovered gene signatures and novel genetic identifiers that will be developed in the future may be utilized with this method.
Reproducibility in Radiomics: A Comparison of Feature Extraction Methods and Two Independent Datasets
Thomas, Hannah Mary T.
Wang, Helen Y. C.
Varghese, Amal Joseph
Donovan, Ellen M.
South, Chris P.
Saxby, Helen
Nisbet, Andrew
Prakash, Vineet
Sasidharan, Balu Krishna
Pavamani, Simon Pradeep
Devadhas, Devakumar
Mathew, Manu
Isiah, Rajesh Gunasingam
Evans, Philip M.
Applied Sciences2023Journal Article, cited 0 times
Website
HEAD-NECK-RADIOMICS-HN1
RIDER Lung CT
Radiomics
Lung cancer
Head and neck cancer
Computed Tomography (CT)
Radiomics involves the extraction of information from medical images that is not visible to the human eye. There is evidence that these features can be used for treatment stratification and outcome prediction. However, there is much discussion about the reproducibility of results between different studies. This paper studies the reproducibility of CT texture features used in radiomics, comparing two feature extraction implementations, namely the MATLAB toolkit and Pyradiomics, when applied to independent datasets of CT scans of patients: (i) the open access RIDER dataset containing a set of repeat CT scans taken 15 min apart for 31 patients (RIDER Scan 1 and Scan 2, respectively) treated for lung cancer; and (ii) the open access HN1 dataset containing 137 patients treated for head and neck cancer. The gross tumor volume (GTV), manually outlined by an experienced observer and available for both datasets, was used. The 43 common radiomics features available in MATLAB and Pyradiomics were calculated using two intensity-level quantization methods with and without an intensity threshold. Cases were ranked for each feature for all combinations of quantization parameters, and Spearman's rank coefficient, rs, was calculated. A feature was considered reproducible when it correlated highly in both the RIDER and HN1 datasets. A total of 29 out of the 43 reported stable features were found to be highly reproducible between MATLAB and Pyradiomics implementations, having a consistently high correlation in rank ordering for RIDER Scan 1 and RIDER Scan 2 (rs > 0.8). 18/43 reported features were common in the RIDER and HN1 datasets, suggesting they may be agnostic to disease site. Useful radiomics features should be selected based on reproducibility. This study identified a set of features that meet this requirement and validated the methodology for evaluating reproducibility between datasets.
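The reproducibility criterion reduces to a rank-correlation filter. A minimal sketch, assuming each implementation's features are stored as dicts of per-case value arrays in the same case order; the 0.8 cutoff follows the abstract:

```python
# Hedged sketch: keep features whose Spearman's rs between implementations exceeds 0.8.
from scipy.stats import spearmanr

def reproducible_features(matlab_feats, pyrad_feats, threshold=0.8):
    keep = []
    for name, values in matlab_feats.items():
        rs, _ = spearmanr(values, pyrad_feats[name])  # rank correlation across cases
        if rs > threshold:
            keep.append(name)
    return keep
```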
Improving the Robustness and Quality of Biomedical CNN Models through Adaptive Hyperparameter Tuning
Iqbal, S.
Qureshi, A. N.
Ullah, A.
Li, J. Q.
Mahmood, T.
Applied Sciences-Basel2022Journal Article, cited 0 times
BraTS 2020
BraTS 2021
BreakHis
Convolutional Neural Network (CNN)
Algorithm Development
Deep learning is a natural method for the detection of disease and the analysis of medical images, and many researchers have investigated it. However, the performance of deep learning algorithms is frequently influenced by hyperparameter selection, so the question of which combination of hyperparameters is best emerges. To address this challenge, we proposed a novel algorithm for Adaptive Hyperparameter Tuning (AHT) that automates the selection of optimal hyperparameters for Convolutional Neural Network (CNN) training. All of the optimal hyperparameters for the CNN models were instantaneously selected and allocated using AHT. AHT enables CNN models to autonomously choose optimal hyperparameters for classifying medical images into various classes. The CNN model (Deep-Hist) categorizes medical images into basic classes, malignant and benign, with an accuracy of 95.71%. The most dominant CNN models, such as ResNet, DenseNet, and MobileNetV2, are all compared with the proposed CNN model (Deep-Hist). Plausible classification results were obtained using large, publicly available clinical datasets such as BreakHis, BraTS, NIH-Xray and COVID-19 X-ray. Medical practitioners and clinicians can utilize the CNN model to corroborate their first malignant and benign classification assessment. The high F1 score and precision of the recommended approach, as well as its excellent generalization and accuracy, imply that it might be used to build a pathologist's aid tool.
Optimization of Median Modified Wiener Filter for Improving Lung Segmentation Performance in Low-Dose Computed Tomography Images
Lim, Sewon
Park, Minji
Kim, Hajin
Kang, Seong-Hyeon
Kim, Kyuseok
Lee, Youngjin
Applied Sciences2023Journal Article, cited 0 times
NLST
In low-dose computed tomography (LDCT), lung segmentation effectively improves the accuracy of lung cancer diagnosis. However, excessive noise is inevitable in LDCT, which can decrease lung segmentation accuracy. To address this problem, it is necessary to derive an optimized kernel size when using the median modified Wiener filter (MMWF) for noise reduction. Incorrect choice of the kernel size can result in inadequate noise removal or blurring, degrading segmentation accuracy. Therefore, various kernel sizes of the MMWF were applied in this study, followed by region-growing-based segmentation and quantitative evaluation. In addition to evaluating the segmentation performance, we conducted a similarity assessment. Our results indicate that the greatest improvement in segmentation performance and similarity was obtained at a kernel size of 5 × 5. Compared with the noisy image, the accuracy, F1-score, intersection over union, root mean square error, and peak signal-to-noise ratio using the optimized MMWF were improved by factors of 1.38, 33.20, 64.86, 7.82, and 1.30, respectively. In conclusion, we have demonstrated that applying the MMWF with an appropriate kernel size to optimize noise and blur reduction can enhance segmentation performance.
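One common formulation of the median modified Wiener filter replaces the local mean of the classic adaptive Wiener filter with the local median. The sketch below follows that formulation under the assumption that the noise power can be estimated as the mean local variance, with the kernel size as the swept parameter (5 × 5 reported best above):

```python
# Hedged sketch of a median modified Wiener filter (MMWF).
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def mmwf(img, kernel=5):
    img = img.astype(float)
    med = median_filter(img, size=kernel)                 # local median
    mean = uniform_filter(img, size=kernel)               # local mean
    var = uniform_filter(img**2, size=kernel) - mean**2   # local variance
    noise = var.mean()                                    # assumed noise-power estimate
    gain = np.maximum(var - noise, 0.0) / np.maximum(var, 1e-12)
    return med + gain * (img - med)                       # Wiener update around the median
```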
Lung Cancer Detection Model Using Deep Learning Technique
Wahab Sait, Abdul Rahaman
Applied Sciences2023Journal Article, cited 0 times
Lung-PET-CT-Dx
Globally, lung cancer (LC) is the primary factor for the highest cancer-related mortality rate. Deep learning (DL)-based medical image analysis plays a crucial role in LC detection and diagnosis. It can identify early signs of LC using positron emission tomography (PET) and computed tomography (CT) images. However, the existing DL-based LC detection models demand substantial computational resources. Healthcare centers face challenges in handling the complexities of the model implementation. Therefore, the author aimed to build a DL-based LC detection model using PET/CT images. Effective image preprocessing and augmentation techniques were followed to overcome noise and artifacts. A convolutional neural network (CNN) model was constructed using the DenseNet-121 model for feature extraction. The author applied deep autoencoders to minimize the feature dimensionality. The MobileNet V3-Small model was used to identify the types of LC from the features. The author applied quantization-aware training and early stopping strategies to improve the proposed LC detection accuracy with less computational power. In addition, the Adam optimization (AO) algorithm was used to fine-tune the hyper-parameters in order to reduce the training time for detecting the LC type. The Lung-PET-CT-Dx dataset was used for performance evaluation. The experimental outcome highlighted that the proposed model obtained an accuracy of 98.6 and a Cohen's Kappa value of 95.8 with fewer parameters. The proposed model can be implemented in real time to support radiologists and physicians in detecting LC in the earlier stages. In the future, liquid neural networks and ensemble learning techniques will be used to enhance the performance of the proposed LC detection model.
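The backbone-plus-autoencoder compression step can be sketched briefly. A minimal PyTorch sketch, loosely mirroring the pipeline above; the latent size and layer widths are assumptions, not the paper's values:

```python
# Hedged sketch: DenseNet-121 features compressed by a small autoencoder.
import torch
import torch.nn as nn
from torchvision.models import densenet121

backbone = densenet121(weights=None)           # pretrained weights could be loaded here
feature_dim = backbone.classifier.in_features  # 1024 for DenseNet-121
backbone.classifier = nn.Identity()            # keep pooled features only

class AE(nn.Module):
    def __init__(self, dim=feature_dim, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, dim))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z                  # reconstruction + compact code

# features = backbone(images); recon, codes = AE()(features)
# `codes` would then feed the downstream classifier described above.
```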
Detection and Classification of Immature Leukocytes for Diagnosis of Acute Myeloid Leukemia Using Random Forest Algorithm
Dasariraju, Satvik
Huo, Marc
McCalla, Serena
Bioengineering2020Journal Article, cited 0 times
AML-Cytomorphology_LMU
Acute myeloid leukemia (AML) is a fatal blood cancer that progresses rapidly and hinders the function of blood cells and the immune system. The current AML diagnostic method, a manual examination of the peripheral blood smear, is time consuming, labor intensive, and suffers from considerable inter-observer variation. Herein, a machine learning model to detect and classify immature leukocytes for efficient diagnosis of AML is presented. Images of leukocytes in AML patients and healthy controls were obtained from a publicly available dataset in The Cancer Imaging Archive. Image format conversion, multi-Otsu thresholding, and morphological operations were used for segmentation of the nucleus and cytoplasm. From each image, 16 features were extracted, two of which are new nucleus color features proposed in this study. A random forest algorithm was trained for the detection and classification of immature leukocytes. The model achieved 92.99% accuracy for detection and 93.45% accuracy for classification of immature leukocytes into four types. Precision values for each class were above 65%, which is an improvement on the current state of the art. Based on Gini importance, the nucleus to cytoplasm area ratio was a discriminative feature for both detection and classification, while the two proposed features were shown to be significant for classification. The proposed model can be used as a support tool for the diagnosis of AML, and the features calculated to be most important serve as a baseline for future research.
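The segmentation and classification stages map onto standard library calls. A minimal sketch, assuming grayscale smear images in which the nucleus is the darkest multi-Otsu class; the two features shown (the nucleus-to-cytoplasm area ratio and a nucleus intensity statistic) echo the abstract, while the remaining features are omitted:

```python
# Hedged sketch: multi-Otsu segmentation + random forest classification.
import numpy as np
from skimage.filters import threshold_multiotsu
from sklearn.ensemble import RandomForestClassifier

def cell_features(gray):
    t = threshold_multiotsu(gray, classes=3)
    regions = np.digitize(gray, bins=t)              # 0 = darkest ... 2 = brightest class
    nucleus, cytoplasm = regions == 0, regions == 1  # assumption: nucleus stains darkest
    ratio = nucleus.sum() / max(cytoplasm.sum(), 1)  # nucleus-to-cytoplasm area ratio
    mean_int = gray[nucleus].mean() if nucleus.any() else 0.0
    return [ratio, mean_int]

# X = np.array([cell_features(img) for img in images]); y = immature-leukocyte labels
# clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
```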
Brain Tumor Detection and Classification Using Deep Learning and Sine-Cosine Fitness Grey Wolf Optimization
ZainEldin, Hanaa
Gamel, Samah A.
El-Kenawy, El-Sayed M.
Alharbi, Amal H.
Khafaga, Doaa Sami
Ibrahim, Abdelhameed
Talaat, Fatma M.
Bioengineering2023Journal Article, cited 0 times
BraTS 2021
BraTS 2015
BraTS 2017
BraTS 2018
Segmentation
Algorithm Development
Optimization
Deep Learning
Computer Aided Diagnosis (CADx)
Diagnosing a brain tumor takes a long time and relies heavily on the radiologist’s abilities and experience. The amount of data that must be handled has increased dramatically as the number of patients has increased, making old procedures both costly and ineffective. Many researchers investigated a variety of algorithms for detecting and classifying brain tumors that were both accurate and fast. Deep Learning (DL) approaches have recently been popular in developing automated systems capable of accurately diagnosing or segmenting brain tumors in less time. DL allows a pre-trained Convolutional Neural Network (CNN) model to be used for medical images, specifically for classifying brain cancers. The proposed Brain Tumor Classification Model based on CNN (BCM-CNN) optimizes CNN hyperparameters using an adaptive dynamic sine-cosine fitness grey wolf optimizer (ADSCFGWO) algorithm. Hyperparameter optimization is followed by training a model built with Inception-ResnetV2. The model employs the commonly used pre-trained Inception-ResnetV2 model to improve brain tumor diagnosis, and its output is a binary 0 or 1 (0: Normal, 1: Tumor). There are primarily two types of hyperparameters: (i) hyperparameters that determine the underlying network structure; (ii) hyperparameters that are responsible for training the network. The ADSCFGWO algorithm draws from both the sine-cosine and grey wolf algorithms in an adaptable framework that uses both algorithms' strengths. The experimental results show that the BCM-CNN classifier achieved the best results due to the enhancement of the CNN's performance by hyperparameter optimization. The BCM-CNN achieved 99.98% accuracy with the BRaTS 2021 Task 1 dataset.
AI-Driven Robust Kidney and Renal Mass Segmentation and Classification on 3D CT Images
Liu, Jingya
Yildirim, Onur
Akin, Oguz
Tian, Yingli
Bioengineering (Basel)2023Journal Article, cited 0 times
TCGA-KICH
TCGA-KIRP
TCGA-KIRC
Computed Tomography (CT)
KiTS19
KIDNEY
Segmentation
Classification
weakly supervised learning
Early intervention in kidney cancer helps to improve survival rates. Abdominal computed tomography (CT) is often used to diagnose renal masses. In clinical practice, the manual segmentation and quantification of organs and tumors are expensive and time-consuming. Artificial intelligence (AI) has shown a significant advantage in assisting cancer diagnosis. To reduce the workload of manual segmentation and avoid unnecessary biopsies or surgeries, in this paper, we propose a novel end-to-end AI-driven automatic kidney and renal mass diagnosis framework to identify the abnormal areas of the kidney and diagnose the histological subtypes of renal cell carcinoma (RCC). The proposed framework first segments the kidney and renal mass regions by a 3D deep learning architecture (Res-UNet), followed by a dual-path classification network utilizing local and global features for the subtype prediction of the most common RCCs: clear cell, chromophobe, oncocytoma, papillary, and other RCC subtypes. To improve the robustness of the proposed framework on data collected from various institutions, a weakly supervised learning schema is proposed to bridge the domain gap between various vendors using very few CT slice annotations. Our proposed diagnosis system can accurately segment the kidney and renal mass regions and predict tumor subtypes, outperforming existing methods on the KiTS19 dataset. Furthermore, cross-dataset validation results demonstrate the robustness of datasets collected from different institutions trained via the weakly supervised learning schema.
Automated Classification of Lung Cancer Subtypes Using Deep Learning and CT-Scan Based Radiomic Analysis
Dunn, Bryce
Pierobon, Mariaelena
Wei, Qi
Bioengineering2023Journal Article, cited 0 times
Lung-PET-CT-Dx
Artificial intelligence and emerging data science techniques are being leveraged to interpret medical image scans. Traditional image analysis relies on visual interpretation by a trained radiologist, which is time-consuming and can, to some degree, be subjective. The development of reliable, automated diagnostic tools is a key goal of radiomics, a fast-growing research field which combines medical imaging with personalized medicine. Radiomic studies have demonstrated potential for accurate lung cancer diagnoses and prognostications. The practice of delineating the tumor region of interest, known as segmentation, is a key bottleneck in the development of generalized classification models. In this study, the incremental multiple resolution residual network (iMRRN), a publicly available and trained deep learning segmentation model, was applied to automatically segment CT images collected from 355 lung cancer patients included in the dataset "Lung-PET-CT-Dx", obtained from The Cancer Imaging Archive (TCIA), an open-access source for radiological images. We report a failure rate of 4.35% when using the iMRRN to segment tumor lesions within plain CT images in the lung cancer CT dataset. Seven classification algorithms were trained on the extracted radiomic features and tested for their ability to classify different lung cancer subtypes. Over-sampling was used to handle unbalanced data. Chi-square tests revealed the higher order texture features to be the most predictive when classifying lung cancers by subtype. The support vector machine showed the highest accuracy, 92.7% (0.97 AUC), when classifying three histological subtypes of lung cancer: adenocarcinoma, small cell carcinoma, and squamous cell carcinoma. The results demonstrate the potential of AI-based computer-aided diagnostic tools to automatically diagnose subtypes of lung cancer by coupling deep learning image segmentation with supervised classification. Our study demonstrated the integrated application of existing AI techniques in the non-invasive and effective diagnosis of lung cancer subtypes, and also shed light on several practical issues concerning the application of AI in biomedicine.
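The classification stage described above (over-sampling, chi-square feature ranking, SVM) can be sketched with scikit-learn and imbalanced-learn; k and the kernel choice are illustrative, not the study's tuned values:

```python
# Hedged sketch: SMOTE over-sampling + chi-square selection + SVM.
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def subtype_classifier(X, y, k=20):
    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)  # balance subtype counts
    model = make_pipeline(
        MinMaxScaler(),              # chi2 requires non-negative inputs
        SelectKBest(chi2, k=k),      # rank radiomic (texture) features
        SVC(kernel="rbf", probability=True))
    return model.fit(X_res, y_res)
```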
MRI-Based Deep Learning Method for Classification of IDH Mutation Status
Bangalore Yogananda, C. G.
Wagner, B. C.
Truong, N. C. D.
Holcomb, J. M.
Reddy, D. D.
Saadat, N.
Hatanpaa, K. J.
Patel, T. R.
Fei, B.
Lee, M. D.
Jain, R.
Bruce, R. J.
Pinho, M. C.
Madhuranthakam, A. J.
Maldjian, J. A.
Bioengineering (Basel)2023Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
Ivy GAP
UCSF-PDGM
Convolutional Neural Network (CNN)
Isocitrate dehydrogenase (IDH) mutation
Magnetic Resonance Imaging (MRI)
U-net
brain tumor
Deep learning
Glioma
Isocitrate dehydrogenase (IDH) mutation status has emerged as an important prognostic marker in gliomas. This study sought to develop deep learning networks for non-invasive IDH classification using T2w MR images while comparing their performance to a multi-contrast network. Methods: Multi-contrast brain tumor MRI and genomic data were obtained from The Cancer Imaging Archive (TCIA) and The Erasmus Glioma Database (EGD). Two separate 2D networks were developed using nnU-Net, a T2w-image-only network (T2-net) and a multi-contrast network (MC-net). Each network was separately trained using TCIA (227 subjects) or TCIA + EGD data (683 subjects combined). The networks were trained to classify IDH mutation status and implement single-label tumor segmentation simultaneously. The trained networks were tested on over 1100 held-out datasets including 360 cases from UT Southwestern Medical Center, 136 cases from New York University, 175 cases from the University of Wisconsin-Madison, 456 cases from EGD (for the TCIA-trained network), and 495 cases from the University of California, San Francisco public database. A receiver operating characteristic curve (ROC) was drawn to calculate the AUC value to determine classifier performance. Results: T2-net trained on TCIA and TCIA + EGD datasets achieved an overall accuracy of 85.4% and 87.6% with AUCs of 0.86 and 0.89, respectively. MC-net trained on TCIA and TCIA + EGD datasets achieved an overall accuracy of 91.0% and 92.8% with AUCs of 0.94 and 0.96, respectively. We developed reliable, high-performing deep learning algorithms for IDH classification using both a T2-image-only and a multi-contrast approach. The networks were tested on more than 1100 subjects from diverse databases, making this the largest study on image-based IDH classification to date.
Explainable Precision Medicine in Breast MRI: A Combined Radiomics and Deep Learning Approach for the Classification of Contrast Agent Uptake
Nowakowska, S.
Borkowski, K.
Ruppert, C.
Hejduk, P.
Ciritsis, A.
Landsmann, A.
Marcon, M.
Berger, N.
Boss, A.
Rossi, C.
Bioengineering (Basel)2024Journal Article, cited 0 times
Website
EA1141
BI-RADS-compliant BPE classification
Shapley values
background parenchymal enhancement
breast cancer risk
breast dynamic contrast-enhanced MRI
deep neural networks
explainable AI
radiomics
In DCE-MRI, the degree of contrast uptake in normal fibroglandular tissue, i.e., background parenchymal enhancement (BPE), is a crucial biomarker linked to breast cancer risk and treatment outcome. In accordance with the Breast Imaging Reporting & Data System (BI-RADS), it should be visually classified into four classes. The susceptibility of such an assessment to inter-reader variability highlights the urgent need for a standardized classification algorithm. In this retrospective study, the first post-contrast subtraction images for 27 healthy female subjects were included. The BPE was classified slice-wise by two expert radiologists. The extraction of radiomic features from segmented BPE was followed by dataset splitting and dimensionality reduction. The latent representations were then utilized as inputs to a deep neural network classifying BPE into BI-RADS classes. The network's predictions were elucidated at the radiomic feature level with Shapley values. The deep neural network achieved a BPE classification accuracy of 84 ± 2% (p-value < 0.00001). Most of the misclassifications involved adjacent classes. Different radiomic features were decisive for the prediction of each BPE class, underlining the complexity of the decision boundaries. A highly precise and explainable pipeline for BPE classification was achieved without user- or algorithm-dependent radiomic feature selection.
The Next Frontier in Health Disparities-A Closer Look at Exploring Sex Differences in Glioma Data and Omics Analysis, from Bench to Bedside and Back
Diaz Rosario, M.
Kaur, H.
Tasci, E.
Shankavaram, U.
Sproull, M.
Zhuge, Y.
Camphausen, K.
Krauze, A.
Biomolecules2022Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Animals
Female
*Glioma/genetics/therapy
Humans
Male
Prospective Studies
Publications
Retrospective Studies
*Sex Characteristics
genomics
glioma
health disparities
large-scale data
proteomics
sex differences
Sex differences are increasingly being explored and reported in oncology, and glioma is no exception. As potentially meaningful sex differences are uncovered, existing gender-derived disparities mirror data generated in retrospective and prospective trials, real-world large-scale data sets, and bench work involving animals and cell lines. The resulting disparities at the data level are wide-ranging, potentially resulting in both adverse outcomes and failure to identify and exploit therapeutic benefits. We set out to analyze the literature on women's data disparities in glioma by exploring the origins of data in this area to understand the representation of women in study samples and omics analyses. Given the current emphasis on inclusive study design and research, we wanted to explore if sex bias continues to exist in present-day data sets and how sex differences in data may impact conclusions derived from large-scale data sets, omics, biospecimen analysis, novel interventions, and standard of care management.
Interpretable Machine Learning with Brain Image and Survival Data
Eder, Matthias
Moser, Emanuel
Holzinger, Andreas
Jean-Quartier, Claire
Jeanquartier, Fleur
BioMedInformatics2022Journal Article, cited 1 times
Website
BraTS 2020
Radiomics
Glioma
Image Interpretation, Computer-Assisted/*methods
Deep learning
Convolutional Neural Network (CNN)
Recent developments in research on artificial intelligence (AI) in medicine deal with the analysis of image data such as Magnetic Resonance Imaging (MRI) scans to support the decision-making of medical personnel. For this purpose, machine learning (ML) algorithms are often used, which do not explain the internal decision-making process at all. Thus, it is often difficult to validate or interpret the results of the applied AI methods. This manuscript aims to overcome this problem by using methods of explainable AI (XAI) to interpret the decision-making of an ML algorithm in the use case of predicting the survival rate of patients with brain tumors based on MRI scans. Therefore, we explore the analysis of brain images together with survival data to predict survival in gliomas, with a focus on improving the interpretability of the results. Using the Brain Tumor Segmentation dataset BraTS 2020, we used a well-validated dataset for evaluation and relied on a convolutional neural network structure to improve the explainability of important features by adding Shapley overlays. The trained network models were used to evaluate SHapley Additive exPlanations (SHAP) directly and were not optimized for accuracy. The resulting overfitting of some network structures is therefore seen as a use case of the presented interpretation method. It is shown that the network structure can be validated by experts using visualizations, thus making the decision-making of the method interpretable. Our study highlights the feasibility of combining explainers with 3D voxels and also the fact that the interpretation of prediction results significantly supports the evaluation of results. The implementation in python is available on gitlab as “XAIforBrainImgSurv”.
State-of-the-Art CNN Optimizer for Brain Tumor Segmentation in Magnetic Resonance Images
Yaqub, M.
Jinchao, F.
Zia, M. S.
Arshid, K.
Jia, K.
Rehman, Z. U.
Mehmood, A.
Brain Sci2020Journal Article, cited 0 times
BraTS 2015
Convolutional Neural Network (CNN)
Deep learning
Segmentation
Brain tumors have become a leading cause of death around the globe. The main reason for this epidemic is the difficulty of conducting a timely diagnosis of the tumor. Fortunately, magnetic resonance images (MRI) are utilized to diagnose tumors in most cases. The performance of a Convolutional Neural Network (CNN) depends on many factors (i.e., weight initialization, optimization, batches and epochs, learning rate, activation function, loss function, and network topology), data quality, and specific combinations of these model attributes. When we deal with a segmentation or classification problem, relying on a single optimizer is considered weak validation unless the choice of optimizer is backed up by a strong argument. Therefore, an optimizer selection process is important to justify the use of a single optimizer for these decision problems. In this paper, we provide a comprehensive comparative analysis of popular CNN optimizers to benchmark segmentation for improvement. In detail, we perform a comparative analysis of state-of-the-art gradient-descent-based optimizers, namely Adaptive Gradient (Adagrad), Adaptive Delta (AdaDelta), Stochastic Gradient Descent (SGD), Adaptive Momentum (Adam), Cyclic Learning Rate (CLR), Adamax, Root Mean Square Propagation (RMSProp), Nesterov Adaptive Momentum (Nadam), and Nesterov accelerated gradient (NAG) for CNNs. The experiments were performed on the BraTS2015 dataset. The Adam optimizer had the best accuracy, 99.2%, in enhancing the CNN's ability in classification and segmentation.
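A comparison of this kind amounts to training the same network once per optimizer and recording validation accuracy. A minimal PyTorch sketch, assuming a make_model factory and train/val DataLoaders; only optimizers shipped in torch.optim are listed, and CLR-style schedules would wrap one of them rather than replace it:

```python
# Hedged sketch: benchmarking gradient-descent optimizers on one model.
import torch
import torch.nn as nn

def benchmark(make_model, train_loader, val_loader, lr=1e-3, epochs=5):
    optimizers = {
        "SGD+momentum": lambda p: torch.optim.SGD(p, lr=lr, momentum=0.9),
        "Adagrad": lambda p: torch.optim.Adagrad(p, lr=lr),
        "Adadelta": lambda p: torch.optim.Adadelta(p, lr=lr),
        "Adam": lambda p: torch.optim.Adam(p, lr=lr),
        "Adamax": lambda p: torch.optim.Adamax(p, lr=lr),
        "RMSprop": lambda p: torch.optim.RMSprop(p, lr=lr),
        "NAdam": lambda p: torch.optim.NAdam(p, lr=lr),
    }
    loss_fn, results = nn.CrossEntropyLoss(), {}
    for name, make_opt in optimizers.items():
        model = make_model()
        opt = make_opt(model.parameters())
        for _ in range(epochs):                      # identical budget per optimizer
            for x, y in train_loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        model.eval()
        with torch.no_grad():
            correct = sum((model(x).argmax(1) == y).sum().item() for x, y in val_loader)
            total = sum(y.numel() for _, y in val_loader)
        results[name] = correct / total
    return results
```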
SwinBTS: A Method for 3D Multimodal Brain Tumor Segmentation Using Swin Transformer
Jiang, Y.
Zhang, Y.
Lin, X.
Dong, J.
Cheng, T.
Liang, J.
Brain Sci2022Journal Article, cited 0 times
Website
BraTS 2019
BraTS 2020
BraTS 2021
BraTS-TCGA-GBM
BraTS-TCGA-LGG
3d convolutional neural network (CNN)
Swin Transformer
Segmentation
Brain tumor semantic segmentation is a critical medical image processing task, which aids clinicians in diagnosing patients and determining the extent of lesions. Convolutional neural networks (CNNs) have demonstrated exceptional performance in computer vision tasks in recent years. For 3D medical image tasks, deep convolutional neural networks based on an encoder-decoder structure and skip-connections have been frequently used. However, CNNs have the drawback of being unable to learn global and remote semantic information well. On the other hand, the transformer has recently found success in natural language processing and computer vision as a result of its use of a self-attention mechanism for global information modeling. For demanding prediction tasks, such as 3D medical image segmentation, local and global characteristics are critical. In this research, we propose SwinBTS, a new 3D medical image segmentation approach, which combines a transformer, a convolutional neural network, and an encoder-decoder structure to define the 3D brain tumor semantic segmentation task as a sequence-to-sequence prediction challenge. To extract contextual data, the 3D Swin Transformer is utilized as the network's encoder and decoder, and convolutional operations are employed for upsampling and downsampling. Finally, we achieve segmentation results using an improved transformer module that we built for enhancing detailed feature extraction. Extensive experimental results on the BraTS 2019, BraTS 2020, and BraTS 2021 datasets reveal that SwinBTS outperforms state-of-the-art 3D algorithms for brain tumor segmentation on 3D MRI scanned images.
RMTF-Net: Residual Mix Transformer Fusion Net for 2D Brain Tumor Segmentation
Gai, D.
Zhang, J.
Xiao, Y.
Min, W.
Zhong, Y.
Zhong, Y.
Brain Sci2022Journal Article, cited 0 times
Segmentation
Convolutional Neural Network (CNN)
BraTS 2019
BraTS 2020
Radiomics
mix transformer
overlapping patch embedding mechanism
Due to the complexity of medical imaging techniques and the high heterogeneity of glioma surfaces, image segmentation of human gliomas is one of the most challenging tasks in medical image analysis. Current methods based on convolutional neural networks concentrate on feature extraction while ignoring the correlation between local and global features. In this paper, we propose a residual mix transformer fusion net, namely RMTF-Net, for brain tumor segmentation. In the feature encoder, a residual mix transformer encoder including a mix transformer and a residual convolutional neural network (RCNN) is proposed. The mix transformer provides an overlapping patch embedding mechanism to cope with the loss of patch boundary information. Moreover, a parallel fusion strategy based on the RCNN is utilized to obtain locally and globally balanced information. In the feature decoder, a global feature integration (GFI) module is applied, which can enrich the context with the global attention feature. Extensive experiments on brain tumor segmentation from LGG, BraTS2019 and BraTS2020 demonstrated that our proposed RMTF-Net is superior to existing state-of-the-art methods in subjective visual performance and objective evaluation.
Axial Attention Convolutional Neural Network for Brain Tumor Segmentation with Multi-Modality MRI Scans
Tian, Weiwei
Li, Dengwang
Lv, Mengyu
Huang, Pu
Brain Sciences2023Journal Article, cited 0 times
BraTS 2019
BraTS 2020
Magnetic Resonance Imaging (MRI)
Segmentation
Deep learning
Accurately identifying tumors from MRI scans is of the utmost importance for clinical diagnostics and when making plans regarding brain tumor treatment. However, manual segmentation is a challenging and time-consuming process in practice and exhibits a high degree of variability between doctors. Therefore, an axial attention brain tumor segmentation network was established in this paper, automatically segmenting tumor subregions from multi-modality MRIs. The axial attention mechanism was employed to capture richer semantic information, which makes it easier for models to provide local-global contextual information by incorporating local and global feature representations while simplifying the computational complexity. The deep supervision mechanism is employed to avoid vanishing gradients and guide the AABTS-Net to generate better feature representations. The hybrid loss is employed in the model to handle the class imbalance of the dataset. Furthermore, we conduct comprehensive experiments on the BraTS 2019 and 2020 datasets. The proposed AABTS-Net shows greater robustness and accuracy, which signifies that the model can be employed in clinical practice and provides a new avenue for medical image segmentation systems.
A Prediction Model for Deciphering Intratumoral Heterogeneity Derived from the Microglia/Macrophages of Glioma Using Non-Invasive Radiogenomics
Zhu, Yunyang
Song, Zhaoming
Wang, Zhong
Brain Sciences2023Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Radiogenomics
Isocitrate dehydrogenase (IDH) mutation
Magnetic Resonance Imaging (MRI)
FLAIR
Algorithm Development
Microglia and macrophages play a major role in glioma immune responses within the glioma microenvironment. We aimed to construct a prognostic prediction model for glioma based on microglia/macrophage-correlated genes. Additionally, we sought to develop a non-invasive radiogenomics approach for risk stratification evaluation. Microglia/macrophage-correlated genes were identified from four single-cell datasets. Hub genes were selected via lasso–Cox regression, and risk scores were calculated. The immunological characteristics of different risk stratifications were assessed, and radiomics models were constructed using the corresponding MR images to predict risk stratification. We identified eight hub genes and developed a relevant risk score formula. The risk score emerged as a significant prognostic predictor correlated with immune checkpoints, and a relevant nomogram was drawn. High-risk groups displayed an active microenvironment associated with microglia/macrophages. Furthermore, differences in somatic mutation rates, such as IDH1 and TP53 missense variants, were observed between high- and low-risk groups. Lastly, a radiogenomics model utilizing five features from magnetic resonance imaging (MRI) T2 fluid-attenuated inversion recovery (FLAIR) images effectively predicted the risk groups under a random forest model. Our findings demonstrate that risk stratification based on microglia/macrophages can effectively predict prognosis and immune functions in glioma. Moreover, we have shown that risk stratification can be non-invasively predicted using an MRI-T2 FLAIR-based radiogenomics model.
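The risk-score construction follows a familiar recipe: an L1-penalized Cox fit over candidate genes, a linear-predictor risk score, and a median split. A minimal sketch with lifelines, where the column names and penalty strength are placeholders:

```python
# Hedged sketch: lasso-Cox risk score and median-split risk groups.
from lifelines import CoxPHFitter

def risk_groups(df, duration_col="os_months", event_col="event"):
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)        # L1 penalty = lasso-Cox
    cph.fit(df, duration_col=duration_col, event_col=event_col)
    score = cph.predict_log_partial_hazard(df)            # per-patient risk score
    groups = (score > score.median()).map({True: "high", False: "low"})
    return score, groups
```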
Multimodal Radiomic Features for the Predicting Gleason Score of Prostate Cancer
Acute Tumor Transition Angle on Computed Tomography Predicts Chromosomal Instability Status of Primary Gastric Cancer: Radiogenomics Analysis from TCGA and Independent Validation
Lai, Ying-Chieh
Yeh, Ta-Sen
Wu, Ren-Chin
Tsai, Cheng-Kun
Yang, Lan-Yan
Lin, Gigin
Kuo, Michael D
Cancers2019Journal Article, cited 0 times
TCGA-STAD
Radiogenomics
Chromosomal instability (CIN) of gastric cancer is correlated with distinct outcomes. This study aimed to investigate the role of computed tomography (CT) imaging traits in predicting the CIN status of gastric cancer. We screened 443 patients in the Cancer Genome Atlas gastric cancer cohort to filter 40 patients with complete CT imaging and genomic data as the training cohort. CT imaging traits were subjected to logistic regression to select independent predictors of CIN status. For the validation cohort, we prospectively enrolled 18 gastric cancer patients for CT and tumor genomic analysis. The imaging predictors were tested in the validation cohort using receiver operating characteristic curve (ROC) analysis. Thirty patients (75%) in the training cohort and 9 patients (50%) in the validation cohort had CIN subtype gastric cancers. Smaller tumor diameter (p = 0.017) and acute tumor transition angle (p = 0.045) independently predicted CIN status in the training cohort. In the validation cohort, acute tumor transition angle demonstrated the highest accuracy, sensitivity, and specificity of 88.9%, 88.9%, and 88.9%, respectively, and an area under the ROC curve of 0.89. In conclusion, this pilot study showed that acute tumor transition angle on CT images may predict the CIN status of gastric cancer.
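With only two independent predictors, the model reduces to a small logistic regression evaluated by ROC analysis. A minimal sketch, where the columns of X (tumor diameter and transition angle) are placeholders for the measured CT traits:

```python
# Hedged sketch: logistic regression on CT traits, validated by ROC AUC.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def cin_model(X_train, y_train, X_val, y_val):
    clf = LogisticRegression().fit(X_train, y_train)   # traits -> CIN vs. non-CIN
    p = clf.predict_proba(X_val)[:, 1]
    return clf, roc_auc_score(y_val, p)                # area under the ROC curve
```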
Tumor Transcriptome Reveals High Expression of IL-8 in Non-Small Cell Lung Cancer Patients with Low Pectoralis Muscle Area and Reduced Survival
Cury, Sarah Santiloni
de Moraes, Diogo
Freire, Paula Paccielli
de Oliveira, Grasieli
Marques, Douglas Venancio Pereira
Fernandez, Geysson Javier
Dal-Pai-Silva, Maeli
Hasimoto, Erica Nishida
Dos Reis, Patricia Pintor
Rogatto, Silvia Regina
Carvalho, Robson Francisco
Cancers (Basel)2019Journal Article, cited 1 times
Website
NSCLC-Radiomics-Genomics
Radiogenomics
Cachexia is a syndrome characterized by an ongoing loss of skeletal muscle mass associated with poor patient prognosis in non-small cell lung cancer (NSCLC). However, prognostic cachexia biomarkers in NSCLC are unknown. Here, we analyzed computed tomography (CT) images and tumor transcriptome data to identify potentially secreted cachexia biomarkers (PSCB) in NSCLC patients with low-muscularity. We integrated radiomics features (pectoralis muscle, sternum, and tenth thoracic (T10) vertebra) from CT of 89 NSCLC patients, which allowed us to identify an index for screening muscularity. Next, a tumor transcriptomic-based secretome analysis from these patients (discovery set) was evaluated to identify potential cachexia biomarkers in patients with low-muscularity. The prognostic value of these biomarkers for predicting recurrence and survival outcome was confirmed using expression data from eight lung cancer datasets (validation set). Finally, C2C12 myoblasts differentiated into myotubes were used to evaluate the ability of the selected biomarker, interleukin (IL)-8, in inducing muscle cell atrophy. We identified 75 over-expressed transcripts in patients with low-muscularity, which included IL-6, CSF3, and IL-8. Also, we identified NCAM1, CNTN1, SCG2, CADM1, IL-8, NPTX1, and APOD as PSCB in the tumor secretome. These PSCB were capable of distinguishing worse and better prognosis (recurrence and survival) in NSCLC patients. IL-8 was confirmed as a predictor of worse prognosis in all validation sets. In vitro assays revealed that IL-8 promoted C2C12 myotube atrophy. Tumors from low-muscularity patients presented a set of upregulated genes encoding for secreted proteins, including pro-inflammatory cytokines that predict worse overall survival in NSCLC. Among these upregulated genes, IL-8 expression in NSCLC tissues was associated with worse prognosis, and the recombinant IL-8 was capable of triggering atrophy in C2C12 myotubes.
A Radiogenomic Approach for Decoding Molecular Mechanisms Underlying Tumor Progression in Prostate Cancer
Fischer, Sarah
Tahoun, Mohamed
Klaan, Bastian
Thierfelder, Kolja M
Weber, Marc-Andre
Krause, Bernd J
Hakenberg, Oliver
Fuellen, Georg
Hamed, Mohamed
Cancers (Basel)2019Journal Article, cited 0 times
Website
TCGA-PRAD
Radiogenomics
Classification
PROSTATE
Prostate cancer (PCa) is a genetically heterogeneous cancer entity that causes challenges in pre-treatment clinical evaluation, such as the correct identification of the tumor stage. Conventional clinical tests based on digital rectal examination, Prostate-Specific Antigen (PSA) levels, and Gleason score still lack accuracy for stage prediction. We hypothesize that unraveling the molecular mechanisms underlying PCa staging via integrative analysis of multi-OMICs data could significantly improve the prediction accuracy for PCa pathological stages. We present a radiogenomic approach comprising clinical, imaging, and two genomic (gene and miRNA expression) datasets for 298 PCa patients. Comprehensive analysis of gene and miRNA expression profiles for two frequent PCa stages (T2c and T3b) unraveled the molecular characteristics for each stage and the corresponding gene regulatory interaction network that may drive tumor upstaging from T2c to T3b. Furthermore, four biomarkers (ANPEP, mir-217, mir-592, mir-6715b) were found to distinguish between the two PCa stages and were highly correlated (average r = ±0.75) with corresponding aggressiveness-related imaging features in both tumor stages. When combined with related clinical features, these biomarkers markedly improved the prediction accuracy for the pathological stage. Our prediction model exhibits high potential to yield clinically relevant results for characterizing PCa aggressiveness.
The Impact of Normalization Approaches to Automatically Detect Radiogenomic Phenotypes Characterizing Breast Cancer Receptors Status
Castaldo, Rossana
Pane, Katia
Nicolai, Emanuele
Salvatore, Marco
Franzese, Monica
Cancers (Basel)2020Journal Article, cited 0 times
Website
TCGA-BRCA
Radiomics
Radiogenomics
In breast cancer studies, combining quantitative radiomic with genomic signatures can help in identifying and characterizing radiogenomic phenotypes as a function of molecular receptor status. Biomedical imaging processing lacks standards in radiomic feature normalization methods, and neglecting feature normalization can highly bias the overall analysis. This study evaluates the effect of several normalization techniques on the prediction of four clinical phenotypes, namely estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2), and triple negative (TN) status, from quantitative features. The Cancer Imaging Archive (TCIA) radiomic features from 91 T1-weighted Dynamic Contrast Enhancement MRI scans of invasive breast cancers were investigated in association with breast invasive carcinoma miRNA expression profiling from the Cancer Genome Atlas (TCGA). Three advanced machine learning techniques (Support Vector Machine, Random Forest, and Naive Bayes) were investigated to distinguish between molecular prognostic indicators and achieved area under the ROC curve (AUC) values of 86%, 93%, 91%, and 91% for the prediction of ER+ versus ER-, PR+ versus PR-, HER2+ versus HER2-, and triple-negative, respectively. In conclusion, radiomic features enable discrimination of major breast cancer molecular subtypes and may yield a potential imaging biomarker for advancing precision medicine.
Integrative Radiogenomics Approach for Risk Assessment of Post-Operative Metastasis in Pathological T1 Renal Cell Carcinoma: A Pilot Retrospective Cohort Study
Lee, H. W.
Cho, H. H.
Joung, J. G.
Jeon, H. G.
Jeong, B. C.
Jeon, S. S.
Lee, H. M.
Nam, D. H.
Park, W. Y.
Kim, C. K.
Seo, S. I.
Park, H.
Cancers (Basel)2020Journal Article, cited 0 times
Website
TCGA-KIRC
Radiogenomics
KIDNEY
Despite the increasing incidence of pathological stage T1 renal cell carcinoma (pT1 RCC), postoperative distant metastases develop in many surgically treated patients, causing death in certain cases. Therefore, this study aimed to create a radiomics model using imaging features from multiphase computed tomography (CT) to more accurately predict the postoperative metastasis of pT1 RCC and further investigate the possible link between radiomics parameters and gene expression profiles generated by whole transcriptome sequencing (WTS). Four radiomic features, including the minimum value of a histogram feature from inner regions of interest (ROIs) (INNER_Min_hist), the histogram of the energy feature from outer ROIs (OUTER_Energy_Hist), the maximum probability of gray-level co-occurrence matrix (GLCM) feature from inner ROIs (INNER_MaxProb_GLCM), and the ratio of voxels under 80 Hounsfield units (HUs) in the nephrographic phase of postcontrast CT (Under80HURatio), were found to predict the postsurgical metastasis of patients with pathological stage T1 RCC, and the clinical outcomes of patients could be successfully stratified based on their radiomic risk scores. Furthermore, we identified heterogeneous-trait-associated gene signatures correlated with these four radiomic features, which captured clinically relevant molecular pathways, the tumor immune microenvironment, and potential treatment strategies. These accurate radiogenomic surrogates could help identify pT1 RCC patients who may derive additional benefit from adjuvant therapy or who are at risk of postsurgical metastases.
Immunotherapy in Metastatic Colorectal Cancer: Could the Latest Developments Hold the Key to Improving Patient Survival?
Damilakis, E.
Mavroudis, D.
Sfakianaki, M.
Souglakos, J.
Cancers (Basel)2020Journal Article, cited 0 times
Website
NSCLC-Radiomics
Radiomics
Radiogenomics
Immunotherapy has considerably increased the number of anticancer agents in many tumor types including metastatic colorectal cancer (mCRC). Anti-PD-1 (programmed death 1) and cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) immune checkpoint inhibitors (ICI) have been shown to benefit the mCRC patients with mismatch repair deficiency (dMMR) or high microsatellite instability (MSI-H). However, ICI is not effective in mismatch repair proficient (pMMR) colorectal tumors, which constitute a large population of patients. Several clinical trials evaluating the efficacy of immunotherapy combined with chemotherapy, radiation therapy, or other agents are currently ongoing to extend the benefit of immunotherapy to pMMR mCRC cases. In dMMR patients, MSI testing through immunohistochemistry and/or polymerase chain reaction can be used to identify patients that will benefit from immunotherapy. Next-generation sequencing has the ability to detect MSI-H using a low amount of nucleic acids and its application in clinical practice is currently being explored. Preliminary data suggest that radiomics is capable of discriminating MSI from microsatellite stable mCRC and may play a role as an imaging biomarker in the future. Tumor mutational burden, neoantigen burden, tumor-infiltrating lymphocytes, immunoscore, and gastrointestinal microbiome are promising biomarkers that require further investigation and validation.
The Combination of Low Skeletal Muscle Mass and High Tumor Interleukin-6 Associates with Decreased Survival in Clear Cell Renal Cell Carcinoma
Kays, J. K.
Koniaris, L. G.
Cooper, C. A.
Pili, R.
Jiang, G.
Liu, Y.
Zimmers, T. A.
Cancers (Basel)2020Journal Article, cited 0 times
Website
TCGA-KIRC
Radiogenomics
KIDNEY
Classification
Clear cell renal carcinoma (ccRCC) is frequently associated with cachexia which is itself associated with decreased survival and quality of life. We examined relationships among body phenotype, tumor gene expression, and survival. Demographic, clinical, computed tomography (CT) scans and tumor RNASeq for 217 ccRCC patients were acquired from the Cancer Imaging Archive and The Cancer Genome Atlas (TCGA). Skeletal muscle and fat masses measured from CT scans and tumor cytokine gene expression were compared with survival by univariate and multivariate analysis. Patients in the lowest skeletal muscle mass (SKM) quartile had significantly shorter overall survival versus the top three SKM quartiles. Patients who fell into the lowest quartiles for visceral adipose mass (VAT) and subcutaneous adipose mass (SCAT) also demonstrated significantly shorter overall survival. Multiple tumor cytokines correlated with mortality, most strongly interleukin-6 (IL-6); high IL-6 expression was associated with significantly decreased survival. The combination of low SKM/high IL-6 was associated with significantly lower overall survival compared to high SKM/low IL-6 expression (26.1 months vs. not reached; p < 0.001) and an increased risk of mortality (HR = 5.95; 95% CI = 2.86-12.38). In conclusion, tumor cytokine expression, body composition, and survival are closely related, with low SKM/high IL-6 expression portending worse prognosis in ccRCC.
Potential Added Value of PET/CT Radiomics for Survival Prognostication beyond AJCC 8th Edition Staging in Oropharyngeal Squamous Cell Carcinoma
Haider, S. P.
Zeevi, T.
Baumeister, P.
Reichel, C.
Sharaf, K.
Forghani, R.
Kann, B. H.
Judson, B. L.
Prasad, M. L.
Burtness, B.
Mahajan, A.
Payabvash, S.
Cancers (Basel)2020Journal Article, cited 2 times
Website
Head-Neck-PET-CT
HNSCC
Accurate risk-stratification can facilitate precision therapy in oropharyngeal squamous cell carcinoma (OPSCC). We explored the potential added value of baseline positron emission tomography (PET)/computed tomography (CT) radiomic features for prognostication and risk stratification of OPSCC beyond the American Joint Committee on Cancer (AJCC) 8th edition staging scheme. Using institutional and publicly available datasets, we included OPSCC patients with known human papillomavirus (HPV) status, without baseline distant metastasis, and treated with curative intent. We extracted 1037 PET and 1037 CT radiomic features quantifying lesion shape, imaging intensity, and texture patterns from primary tumors and metastatic cervical lymph nodes. Utilizing random forest algorithms, we devised novel machine-learning models for OPSCC progression-free survival (PFS) and overall survival (OS) using "radiomics" features, "AJCC" variables, and the "combined" set as input. We designed both single- (PET or CT) and combined-modality (PET/CT) models. Harrell's C-index quantified survival model performance; risk stratification was evaluated in Kaplan-Meier analysis. A total of 311 patients were included. In HPV-associated OPSCC, the best "radiomics" model achieved an average C-index ± standard deviation of 0.62 ± 0.05 (p = 0.02) for PFS prediction, compared to 0.54 ± 0.06 (p = 0.32) utilizing "AJCC" variables. Radiomics-based risk stratification of HPV-associated OPSCC was significant for PFS and OS. Similar trends were observed in HPV-negative OPSCC. In conclusion, radiomic features extracted from pre-treatment PET/CT may provide complementary information to the current AJCC staging scheme for survival prognostication and risk stratification of HPV-associated OPSCC.
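Scoring a radiomics survival model with Harrell's C-index can be sketched as follows; a random forest regressor predicting survival time stands in for the study's random-forest survival models, and lifelines supplies the concordance computation:

```python
# Hedged sketch: survival prediction from radiomic features, scored by C-index.
from lifelines.utils import concordance_index
from sklearn.ensemble import RandomForestRegressor

def evaluate_c_index(X_train, time_train, X_val, time_val, event_val):
    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(X_train, time_train)                # features -> predicted survival time
    predicted = rf.predict(X_val)              # larger = longer predicted survival
    return concordance_index(time_val, predicted, event_observed=event_val)
```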
Predictive Modeling for Voxel-Based Quantification of Imaging-Based Subtypes of Pancreatic Ductal Adenocarcinoma (PDAC): A Multi-Institutional Study
Identifying Cross-Scale Associations between Radiomic and Pathomic Signatures of Non-Small Cell Lung Cancer Subtypes: Preliminary Results
Alvarez-Jimenez, Charlems
Sandino, Alvaro A.
Prasanna, Prateek
Gupta, Amit
Viswanath, Satish E.
Romero, Eduardo
Cancers2020Journal Article, cited 0 times
NSCLC-Radiomics-Genomics
(1) Background: Despite the complementarity between radiology and histopathology, both from a diagnostic and a prognostic perspective, quantitative analyses of these modalities are usually performed in disconnected silos. This work presents initial results for differentiating two major non-small cell lung cancer (NSCLC) subtypes by exploring cross-scale associations between Computed Tomography (CT) images and corresponding digitized pathology images. (2) Methods: The analysis comprised three phases, (i) a multi-resolution cell density quantification to identify discriminant pathomic patterns for differentiating adenocarcinoma (ADC) and squamous cell carcinoma (SCC), (ii) radiomic characterization of CT images by using Haralick descriptors to quantify tumor textural heterogeneity as represented by gray-level co-occurrences to discriminate the two pathological subtypes, and (iii) quantitative correlation analysis between the multi-modal features to identify potential associations between them. This analysis was carried out using two publicly available digitized pathology databases (117 cases from TCGA and 54 cases from CPTAC) and a public radiological collection of CT images (101 cases from NSCLC-R). (3) Results: The top-ranked cell density pathomic features from the histopathology analysis were correlation, contrast, homogeneity, sum of entropy and difference of variance; which yielded a cross-validated AUC of 0.72 ± 0.02 on the training set (CPTAC) and hold-out validation AUC of 0.77 on the testing set (TCGA). Top-ranked co-occurrence radiomic features within NSCLC-R were contrast, correlation and sum of entropy which yielded a cross-validated AUC of 0.72 ± 0.01. Preliminary but significant cross-scale associations were identified between cell density statistics and CT intensity values using matched specimens available in the TCGA cohort, which were used to significantly improve the overall discriminatory performance of radiomic features in differentiating NSCLC subtypes (AUC = 0.78 ± 0.01). (4) Conclusions: Initial results suggest that cross-scale associations may exist between digital pathology and CT imaging which can be used to identify relevant radiomic and histopathology features to accurately distinguish lung adenocarcinomas from squamous cell carcinomas.
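The co-occurrence radiomics named above (contrast, correlation, homogeneity, and entropy-style statistics) are standard GLCM properties. A minimal sketch with scikit-image, where the 32-level quantization and the distance/angle choices are assumptions:

```python
# Hedged sketch: Haralick-style GLCM features from a CT tumor patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, levels=32):
    edges = np.linspace(patch.min(), patch.max(), levels - 1)
    q = np.digitize(patch, edges).astype(np.uint8)        # quantize to `levels` gray levels
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "correlation", "homogeneity", "energy")}
```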
Interpretable Machine Learning Model for Locoregional Relapse Prediction in Oropharyngeal Cancers
Identification of Novel Transcriptome Signature as a Potential Prognostic Biomarker for Anti-Angiogenic Therapy in Glioblastoma Multiforme
Zheng, S.
Tao, W.
Cancers (Basel)2021Journal Article, cited 3 times
Website
Ivy GAP
CPTAC-GBM
BRAIN
Glioblastoma multiforme (GBM) is the most common and devastating type of primary brain tumor, with a median survival time of only 15 months. Having a clinically applicable genetic biomarker would lead to a paradigm shift in precise diagnosis, personalized therapeutic decisions, and prognostic prediction for GBM. Radiogenomic profiling connecting radiological imaging features with molecular alterations will offer a noninvasive method for genomic studies of GBM. To this end, we analyzed over 3800 glioma and GBM cases across four independent datasets. The Chinese Glioma Genome Atlas (CGGA) and The Cancer Genome Atlas (TCGA) databases were employed for RNA-Seq analysis, whereas the Ivy Glioblastoma Atlas Project (Ivy-GAP) and The Cancer Imaging Archive (TCIA) provided clinicopathological data. The Clinical Proteomic Tumor Analysis Consortium Glioblastoma Multiforme (CPTAC-GBM) was used for proteomic analysis. We identified a simple three-gene transcriptome signature (SOCS3, VEGFA, and TEK) that can connect GBM's overall prognosis with genes' expression and simultaneously correlate radiographical features of perfusion imaging with SOCS3 expression levels. More importantly, the rampant development of neovascularization in GBM offers a promising target for therapeutic intervention. However, treatment with bevacizumab failed to improve overall survival. We identified SOCS3 expression levels as a potential selection marker for patients who may benefit from early initiation of angiogenesis inhibitors.
Fine-Tuning Approach for Segmentation of Gliomas in Brain Magnetic Resonance Images with a Machine Learning Method to Normalize Image Differences among Facilities
Takahashi, S.
Takahashi, M.
Kinoshita, M.
Miyake, M.
Kawaguchi, R.
Shinojima, N.
Mukasa, A.
Saito, K.
Nagane, M.
Otani, R.
Higuchi, F.
Tanaka, S.
Hata, N.
Tamura, K.
Tateishi, K.
Nishikawa, R.
Arita, H.
Nonaka, M.
Uda, T.
Fukai, J.
Okita, Y.
Tsuyuguchi, N.
Kanemura, Y.
Kobayashi, K.
Sese, J.
Ichimura, K.
Narita, Y.
Hamamoto, R.
Cancers (Basel)2021Journal Article, cited 0 times
BraTS 2018
Magnetic Resonance Imaging (MRI)
Deep learning
Glioma
Machine learning
Machine learning models for automated magnetic resonance image segmentation may be useful in aiding glioma detection. However, the image differences among facilities cause performance degradation and impede detection. This study proposes a method to solve this issue. We used the data from the Multimodal Brain Tumor Image Segmentation Benchmark (BraTS) and the Japanese cohort (JC) datasets. Three models for tumor segmentation are developed. In our methodology, the BraTS and JC models are trained on the BraTS and JC datasets, respectively, whereas the fine-tuning models are developed from the BraTS model and fine-tuned using the JC dataset. Our results show that the Dice coefficient score of the JC model for the test portion of the JC dataset was 0.779 +/- 0.137, whereas that of the BraTS model was lower (0.717 +/- 0.207). The mean Dice coefficient score of the fine-tuning model was 0.769 +/- 0.138. There was a significant difference between the BraTS and JC models (p < 0.0001) and the BraTS and fine-tuning models (p = 0.002); however, there was no significant difference between the JC and fine-tuning models (p = 0.673). As our fine-tuning method requires fewer than 20 cases, this method is useful even in a facility where the number of glioma cases is small.
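The fine-tuning strategy described above (start from BraTS-pretrained weights, then continue training on a small local cohort) can be sketched generically in PyTorch. Everything below is an illustrative stand-in: a tiny network replaces the real 3D segmentation model, synthetic tensors replace the local dataset, and the checkpoint path is hypothetical.

    import torch
    import torch.nn as nn

    # Tiny stand-in network; a real pipeline would use a 3D U-Net-style model.
    model = nn.Sequential(
        nn.Conv3d(4, 16, 3, padding=1), nn.ReLU(),
        nn.Conv3d(16, 4, 3, padding=1),
    )
    # model.load_state_dict(torch.load("brats_pretrained.pt"))  # hypothetical checkpoint

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # small LR preserves pretrained features
    loss_fn = nn.CrossEntropyLoss()

    # Synthetic batch standing in for a small local-cohort loader (<20 cases per the paper).
    volume = torch.randn(1, 4, 32, 32, 32)
    label = torch.randint(0, 4, (1, 32, 32, 32))

    model.train()
    for step in range(3):
        optimizer.zero_grad()
        loss = loss_fn(model(volume), label)
        loss.backward()
        optimizer.step()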
The Effects of In-Plane Spatial Resolution on CT-Based Radiomic Features’ Stability with and without ComBat Harmonization
Ibrahim, Abdalla
Refaee, Turkey
Primakov, Sergey
Barufaldi, Bruno
Acciavatti, Raymond J.
Granzier, Renée W. Y.
Hustinx, Roland
Mottaghy, Felix M.
Woodruff, Henry C.
Wildberger, Joachim E.
Lambin, Philippe
Maidment, Andrew D. A.
Cancers2021Journal Article, cited 0 times
CC-Radiomics-Phantom
While handcrafted radiomic features (HRFs) have shown promise in the field of personalized medicine, many hurdles hinder their incorporation into clinical practice, including but not limited to their sensitivity to differences in acquisition and reconstruction parameters. In this study, we evaluated the effects of differences in in-plane spatial resolution (IPR) on HRFs, using a phantom dataset (n = 14) acquired on two scanner models. Furthermore, we assessed the effects of interpolation methods (IMs), the choice of a new unified in-plane resolution (NUIR), and ComBat harmonization on the reproducibility of HRFs. The reproducibility of HRFs was significantly affected by variations in IPR, with pairwise concordant HRFs, as measured by the concordance correlation coefficient (CCC), ranging from 42% to 95%. The number of concordant HRFs (CCC > 0.9) after resampling varied depending on (i) the scanner model, (ii) the IM, and (iii) the NUIR. The number of concordant HRFs after ComBat harmonization depended on the variations between the batches harmonized. The majority of IMs resulted in a higher number of concordant HRFs compared to ComBat harmonization, and the combination of IMs and ComBat harmonization did not yield a significant benefit. Our developed framework can be used to assess the reproducibility and harmonizability of HRFs.
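Concordance above is measured with Lin's concordance correlation coefficient (CCC) using a 0.9 cutoff. The coefficient has a simple closed form, 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2); a minimal NumPy sketch with toy values follows.

    import numpy as np

    def concordance_ccc(x, y):
        # Lin's concordance correlation coefficient between two measurement vectors.
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        cov = ((x - mx) * (y - my)).mean()
        return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

    # The same HRF measured on one phantom at two in-plane resolutions (toy numbers).
    a = np.array([1.0, 2.1, 3.0, 4.2, 5.1])
    b = np.array([1.1, 2.0, 3.2, 4.0, 5.0])
    print("concordant:", concordance_ccc(a, b) > 0.9)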
Glioblastoma Surgery Imaging–Reporting and Data System: Validation and Performance of the Automated Segmentation Task
Simple Summary: Neurosurgical decisions for patients with glioblastoma depend on visual inspection of a preoperative MR scan to determine the tumor characteristics. To avoid subjective estimates and manual tumor delineation, automatic methods and standard reporting are necessary. We compared and extensively assessed the performances of two deep learning architectures on the task of automatic tumor segmentation. A total of 1887 patients from 14 institutions, manually delineated by a human rater, were compared to automated segmentations generated by neural networks. The automated segmentations were in excellent agreement with the manual segmentations, and external validity as well as generalizability were demonstrated. Together with automatic tumor feature computation and standardized reporting, our Glioblastoma Surgery Imaging Reporting And Data System (GSI-RADS) exhibited the potential for more accurate data-driven clinical decisions. The trained models and software are open-source and open-access, enabling comparisons among surgical cohorts, multicenter trials, and patient registries. Abstract: For patients with presumed glioblastoma, essential tumor characteristics are determined from preoperative MR images to optimize the treatment strategy. This procedure is time-consuming and subjective, if performed by crude eyeballing or manually. The standardized GSI-RADS aims to provide neurosurgeons with automatic tumor segmentations to extract tumor features rapidly and objectively. In this study, we improved automatic tumor segmentation and compared the agreement with manual raters, described the technical details of the different components of GSI-RADS, and determined their speed. Two recent neural network architectures were considered for the segmentation task: nnU-Net and AGU-Net. Two preprocessing schemes were introduced to investigate the tradeoff between performance and processing speed. A summarized description of the tumor feature extraction and standardized reporting process is included. The trained architectures for automatic segmentation and the code for computing the standardized report are distributed as open-source and as open-access software. Validation studies were performed on a dataset of 1594 gadolinium-enhanced T1-weighted MRI volumes from 13 hospitals and 293 T1-weighted MRI volumes from the BraTS challenge. The glioblastoma tumor core segmentation reached a Dice score slightly below 90%, a patientwise F1-score close to 99%, and a 95th percentile Hausdorff distance slightly below 4.0 mm on average with either architecture and the heavy preprocessing scheme. A patient MRI volume can be segmented in less than one minute, and a standardized report can be generated in up to five minutes. The proposed GSI-RADS software showed robust performance on a large collection of MRI volumes from various hospitals and generated results within a reasonable runtime.
Deep Learning Predicts EBV Status in Gastric Cancer Based on Spatial Patterns of Lymphocyte Infiltration
Zhang, Baoyi
Yao, Kevin
Xu, Min
Wu, Jia
Cheng, Chao
Cancers2021Journal Article, cited 0 times
TIL-WSI-TCGA
EBV infection occurs in around 10% of gastric cancer cases and represents a distinct subtype, characterized by a unique mutation profile, hypermethylation, and overexpression of PD-L1. Moreover, EBV positive gastric cancer tends to have higher immune infiltration and a better prognosis. EBV infection status in gastric cancer is most commonly determined using PCR and in situ hybridization, but such a method requires good nucleic acid preservation. Detection of EBV status with histopathology images may complement PCR and in situ hybridization as a first step of EBV infection assessment. Here, we developed a deep learning-based algorithm to directly predict EBV infection in gastric cancer from H&E stained histopathology slides. Our model can not only predict EBV infection in gastric cancers from tumor regions but also from normal regions with potential changes induced by adjacent EBV+ regions within each H&E slide. Furthermore, in cohorts with zero EBV abundances, a significant difference of immune infiltration between high and low EBV score samples was observed, consistent with the immune infiltration difference observed between EBV positive and negative samples. Therefore, we hypothesized that our model's prediction of EBV infection is partially driven by the spatial information of immune cell composition, which was supported by mostly positive local correlations between the EBV score and immune infiltration in both tumor and normal regions across all H&E slides. Finally, EBV scores calculated from our model were found to be significantly associated with prognosis. This framework can be readily applied to develop interpretable models for prediction of virus infection across cancers.
AutoProstate: Towards Automated Reporting of Prostate MRI for Prostate Cancer Assessment Using Deep Learning
Mehta, Pritesh
Antonelli, Michela
Singh, Saurabh
Grondecka, Natalia
Johnston, Edward W.
Ahmed, Hashim U.
Emberton, Mark
Punwani, Shonit
Ourselin, Sébastien
Cancers2021Journal Article, cited 0 times
PROSTATEx
Multiparametric magnetic resonance imaging (mpMRI) of the prostate is used by radiologists to identify, score, and stage abnormalities that may correspond to clinically significant prostate cancer (CSPCa). Automatic assessment of prostate mpMRI using artificial intelligence algorithms may facilitate a reduction in missed cancers and unnecessary biopsies, an increase in inter-observer agreement between radiologists, and an improvement in reporting quality. In this work, we introduce AutoProstate, a deep learning-powered framework for automatic MRI-based prostate cancer assessment. AutoProstate comprises three modules: Zone-Segmenter, CSPCa-Segmenter, and Report-Generator. Zone-Segmenter segments the prostatic zones on T2-weighted imaging, CSPCa-Segmenter detects and segments CSPCa lesions using biparametric MRI, and Report-Generator generates an automatic web-based report containing four sections: Patient Details, Prostate Size and PSA Density, Clinically Significant Lesion Candidates, and Findings Summary. In our experiment, AutoProstate was trained using the publicly available PROSTATEx dataset, and externally validated using the PICTURE dataset. Moreover, the performance of AutoProstate was compared to the performance of an experienced radiologist who prospectively read PICTURE dataset cases. In comparison to the radiologist, AutoProstate showed statistically significant improvements in prostate volume and prostate-specific antigen density estimation. Furthermore, AutoProstate matched the CSPCa lesion detection sensitivity of the radiologist, which is paramount, but produced more false positive detections.
Fully Automated MR Based Virtual Biopsy of Cerebral Gliomas
Haubold, Johannes
Hosch, René
Parmar, Vicky
Glas, Martin
Guberina, Nika
Catalano, Onofrio Antonio
Pierscianek, Daniela
Wrede, Karsten
Deuschl, Cornelius
Forsting, Michael
Nensa, Felix
Flaschel, Nils
Umutlu, Lale
Cancers2021Journal Article, cited 0 times
BraTS 2019
Automatic Segmentation
BRAIN
cerebral glioma
multi-parametric MRI
Radiogenomics
Radiomics
OBJECTIVE: The aim of this study was to investigate the diagnostic accuracy of a radiomics analysis based on a fully automated segmentation and a simplified and robust MR imaging protocol to provide a comprehensive analysis of the genetic profile and grading of cerebral gliomas for everyday clinical use. METHODS: MRI examinations of 217 therapy-naive patients with cerebral gliomas, each comprising a non-contrast T1-weighted, FLAIR and contrast-enhanced T1-weighted sequence, were included in the study. In addition, clinical and laboratory parameters were incorporated into the analysis. The BraTS 2019 pretrained DeepMedic network was used for automated segmentation. The segmentations generated by DeepMedic were evaluated against 200 manual segmentations, yielding a DICE score of 0.8082 +/- 0.1321. Subsequently, the radiomics signatures were utilized to predict the genetic profile of ATRX, IDH1/2, MGMT and 1p19q co-deletion, as well as to differentiate low-grade glioma from high-grade glioma. RESULTS: The network provided an AUC (validation/test) for the differentiation between low-grade gliomas vs. high-grade gliomas of 0.981 +/- 0.015/0.885 +/- 0.02. The best results were achieved for the prediction of the ATRX expression loss with AUCs of 0.979 +/- 0.028/0.923 +/- 0.045, followed by 0.929 +/- 0.042/0.861 +/- 0.023 for the prediction of IDH1/2. The prediction of 1p19q and MGMT achieved moderate results, with AUCs of 0.999 +/- 0.005/0.711 +/- 0.128 for 1p19q and 0.854 +/- 0.046/0.742 +/- 0.050 for MGMT. CONCLUSION: This fully automated approach utilizing simplified MR protocols to predict the genetic profile and grading of cerebral gliomas provides an easy and efficient method for non-invasive tumor decoding. SIMPLE SUMMARY: Over the past few years, radiomics-based tissue characterization has demonstrated its potential for non-invasive prediction of the genetic profile and grading in cerebral gliomas using multiparametric MRI. The aim of our study was to investigate the feasibility and diagnostic accuracy of a fully automated radiomics analysis based on a simplified MR protocol derived from various scanner systems to prospectively ease the transition of radiomics-based non-invasive tissue sampling into clinical practice. Using an MRI with non-contrast and post-contrast T1-weighted sequences and FLAIR, our workflow automatically predicts the IDH1/2 mutation, the ATRX expression loss, the 1p19q co-deletion and the MGMT methylation status. It also effectively differentiates low-grade from high-grade gliomas. In summary, the present study demonstrated that a fully automated prediction of grading and the genetic profile of cerebral gliomas could be performed with our proposed method using a simplified MRI protocol that is robust to variations in scanner systems, imaging parameters and field strength.
Classification of Clinically Significant Prostate Cancer on Multi-Parametric MRI: A Validation Study Comparing Deep Learning and Radiomics
Fully Automatic Deep Learning Framework for Pancreatic Ductal Adenocarcinoma Detection on Computed Tomography
Alves, N.
Schuurmans, M.
Litjens, G.
Bosma, J. S.
Hermans, J.
Huisman, H.
Cancers (Basel)2022Journal Article, cited 0 times
Website
Pancreas-CT
Deep Learning
U-Net
Pancreatic ductal adenocarcinoma
PANCREAS
Early detection improves prognosis in pancreatic ductal adenocarcinoma (PDAC), but is challenging as lesions are often small and poorly defined on contrast-enhanced computed tomography scans (CE-CT). Deep learning can facilitate PDAC diagnosis; however, current models still fail to identify small (<2 cm) lesions. In this study, state-of-the-art deep learning models were used to develop an automatic framework for PDAC detection, focusing on small lesions. Additionally, the impact of integrating the surrounding anatomy was investigated. CE-CT scans from a cohort of 119 pathology-proven PDAC patients and a cohort of 123 patients without PDAC were used to train a nnUnet for automatic lesion detection and segmentation (nnUnet_T). Two additional nnUnets were trained to investigate the impact of anatomy integration: (1) segmenting the pancreas and tumor (nnUnet_TP), and (2) segmenting the pancreas, tumor, and multiple surrounding anatomical structures (nnUnet_MS). An external, publicly available test set was used to compare the performance of the three networks. The nnUnet_MS achieved the best performance, with an area under the receiver operating characteristic curve of 0.91 for the whole test set and 0.88 for tumors <2 cm, showing that state-of-the-art deep learning can detect small PDAC and benefits from anatomy information.
Efficient Radiomics-Based Classification of Multi-Parametric MR Images to Identify Volumetric Habitats and Signatures in Glioblastoma: A Machine Learning Approach
Chiu, F. Y.
Yen, Y.
Cancers (Basel)2022Journal Article, cited 0 times
Website
TCGA-GBM
annotation
Glioblastoma
Machine learning
multi-parametric
non-invasive
precision medicine
quantitative imaging
Radiomics
Imaging feature
Glioblastoma (GBM) is a fast-growing and aggressive brain tumor of the central nervous system. It encroaches on brain tissue with heterogeneous regions of a necrotic core, solid part, peritumoral tissue, and edema. This study provided qualitative image interpretation of GBM subregions, quantitative radiomics features for image analysis, and the volume ratios of these tumor components. The aim of this study was to assess the potential of multi-parametric MR fingerprinting with volumetric tumor phenotypes and radiomic features to characterize the biological processes and prognostic status of patients with cerebral gliomas. Based on efficiently classified and retrieved cerebral multi-parametric MRI, all data were analyzed to derive volume-based data of the entire tumor from local cohorts and The Cancer Imaging Archive (TCIA) cohorts with GBM. Edema was mainly enriched for homeostasis, whereas necrosis was associated with texture features. The volume of the edema was about 1.5 times that of the solid tumor part, and the volume of the solid part was approximately 0.7 times that of the necrotic area. Therefore, the multi-parametric MRI-based radiomics model reveals efficiently classified tumor subregions of GBM and suggests that prognostic radiomic features from routine MRI examination may also be significantly associated with key biological processes as a practical imaging biomarker.
Tumor Connectomics: Mapping the Intra-Tumoral Complex Interaction Network Using Machine Learning
Parekh, V. S.
Pillai, J. J.
Macura, K. J.
LaViolette, P. S.
Jacobs, M. A.
Cancers (Basel)2022Journal Article, cited 0 times
LGG-1p19qDeletion
BraTS
BRAIN
BREAST
PROSTATE
cancer
complex networks
graph theory
multi-parametric magnetic resonance imaging (multi-parametric MRI)
tumor connectomics
The high-level relationships that form complex networks within tumors and between the tumor and surrounding tissue are challenging to characterize and not fully understood. To better understand these tumoral networks, we developed a tumor connectomics framework (TCF) based on graph theory with machine learning to model the complex interactions within and around the tumor microenvironment that are detectable on imaging. The TCF characterization model was tested with independent datasets of breast, brain, and prostate lesions with corresponding validation datasets in breast and brain cancer. The TCF network connections were modeled using graph metrics of centrality, average path length (APL), and clustering from multiparametric MRI with IsoSVM. The Matthews Correlation Coefficient (MCC), Area Under the Curve-ROC, and Precision-Recall (AUC-ROC and AUC-PR) were used for statistical analysis. The TCF classified the breast and brain tumor cohorts with an IsoSVM AUC-PR and MCC of 0.86, 0.63 and 0.85, 0.65, respectively. The TCF benign breast lesions had a significantly higher clustering coefficient and degree centrality than malignant TCFs. Grade 2 brain tumors demonstrated higher connectivity compared to Grade 4 tumors with increased degree centrality and clustering coefficients. Gleason 7 prostate lesions had increased betweenness centrality and APL compared to Gleason 6 lesions with AUC-PR and MCC ranging from 0.90 to 0.99 and 0.73 to 0.87, respectively. These TCF findings were similar in the validation breast and brain datasets. In conclusion, we present a new method for tumor characterization and visualization that results in a better understanding of the global and regional connections within the lesion and surrounding tissue.
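The graph metrics named above (degree and betweenness centrality, clustering coefficient, average path length) are standard graph-theory quantities. The toy sketch below computes them with networkx on a thresholded similarity graph; the threshold, graph construction, and the similarity matrix itself are illustrative assumptions, not the paper's IsoSVM pipeline.

    import networkx as nx
    import numpy as np

    # Toy "connectome": nodes are intra-tumoral regions; edges connect regions whose
    # multiparametric-MRI feature similarity exceeds an assumed threshold of 0.7.
    rng = np.random.default_rng(0)
    sim = rng.random((8, 8))
    sim = (sim + sim.T) / 2
    G = nx.Graph((i, j) for i in range(8) for j in range(i + 1, 8) if sim[i, j] > 0.7)
    G.add_nodes_from(range(8))

    print(nx.degree_centrality(G))
    print(nx.betweenness_centrality(G))
    print(nx.average_clustering(G))
    if nx.is_connected(G):
        print(nx.average_shortest_path_length(G))  # APL, defined on connected graphs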
MaasPenn radiomics reproducibility score: A novel quantitative measure for evaluating the reproducibility of CT-based handcrafted radiomic features
Context-Aware Saliency Guided Radiomics: Application to Prediction of Outcome and HPV-Status from Multi-Center PET/CT Images of Head and Neck Cancer
Lv, W.
Xu, H.
Han, X.
Zhang, H.
Ma, J.
Rahmim, A.
Lu, L.
Cancers (Basel)2022Journal Article, cited 6 times
Website
HNSCC
Head-Neck-PET-CT
Head-Neck-Radiomics-HN1
TCGA-HNSC
QIN-HEADNECK
Hpv
PET/CT
head and neck cancer
outcome
Radiomics
Algorithm Development
PURPOSE: This multi-center study aims to investigate the prognostic value of context-aware saliency-guided radiomics in (18)F-FDG PET/CT images of head and neck cancer (HNC). METHODS: 806 HNC patients (training vs. validation vs. external testing: 500 vs. 97 vs. 209) from 9 centers were collected from The Cancer Imaging Archive (TCIA). There were 100/384 and 60/123 oropharyngeal carcinoma (OPC) patients with human papillomavirus (HPV) status in training and testing cohorts, respectively. Six types of images were used for radiomics feature extraction and further model construction, namely (i) the original image (Origin), (ii) a context-aware saliency map (SalMap), (iii, iv) high- or low-saliency regions in the original image (highSal or lowSal), (v) a saliency-weighted image (SalxImg), and finally, (vi) a fused PET-CT image (FusedImg). Four outcomes were evaluated, i.e., recurrence-free survival (RFS), metastasis-free survival (MFS), overall survival (OS), and disease-free survival (DFS), respectively. Multivariate Cox analysis and logistic regression were adopted to construct radiomics scores for the prediction of outcome (Rad_Ocm) and HPV-status (Rad_HPV), respectively. Besides, the prognostic value of their integration (Rad_Ocm_HPV) was also investigated. RESULTS: In the external testing cohort, compared with the Origin model, SalMap and SalxImg achieved the highest C-indices for RFS (0.621 vs. 0.559) and MFS (0.785 vs. 0.739) predictions, respectively, while FusedImg performed the best for both OS (0.685 vs. 0.659) and DFS (0.641 vs. 0.582) predictions. In the OPC HPV testing cohort, FusedImg showed higher AUC for HPV-status prediction compared with the Origin model (0.653 vs. 0.484). In the OPC testing cohort, compared with Rad_Ocm or Rad_HPV alone, Rad_Ocm_HPV performed the best for OS and DFS predictions with C-indices of 0.702 (p = 0.002) and 0.684 (p = 0.006), respectively. CONCLUSION: Saliency-guided radiomics showed enhanced performance for both outcome and HPV-status predictions relative to conventional radiomics. The radiomics-predicted HPV status also showed complementary prognostic value.
Radiomics-Based Method for Predicting the Glioma Subtype as Defined by Tumor Grade, IDH Mutation, and 1p/19q Codeletion
Gliomas are among the most common types of central nervous system (CNS) tumors. A prompt diagnosis of the glioma subtype is crucial to estimate the prognosis and personalize the treatment strategy. The objective of this study was to develop a radiomics pipeline based on clinical Magnetic Resonance Imaging (MRI) scans to noninvasively predict the glioma subtype, as defined by the tumor grade, isocitrate dehydrogenase (IDH) mutation status, and 1p/19q codeletion status. A total of 212 patients from the public retrospective The Cancer Genome Atlas Low Grade Glioma (TCGA-LGG) and The Cancer Genome Atlas Glioblastoma Multiforme (TCGA-GBM) datasets were used for the experiments and analyses. Different settings in the radiomics pipeline were investigated to improve the classification, including the Z-score normalization, the feature extraction strategy, the image filter applied to the MRI images, the introduction of clinical information, ComBat harmonization, the classifier chain strategy, etc. Based on numerous experiments, we finally reached an optimal pipeline for classifying the glioma tumors. We then tested this final radiomics pipeline on the hold-out test data with 51 randomly sampled random seeds for reliable and robust conclusions. The results showed that, after tuning the radiomics pipeline, the mean AUC improved from 0.8935 (±0.0351) to 0.9319 (±0.0386), from 0.8676 (±0.0421) to 0.9283 (±0.0333), and from 0.6473 (±0.1074) to 0.8196 (±0.0702) in the test data for predicting the tumor grade, IDH mutation, and 1p/19q codeletion status, respectively. The mean accuracy for predicting the five glioma subtypes also improved from 0.5772 (±0.0816) to 0.6716 (±0.0655). Finally, we analyzed the characteristics of the radiomic features that best distinguished the glioma grade, the IDH mutation, and the 1p/19q codeletion status, respectively. Apart from the promising prediction of the glioma subtype, this study also provides a better understanding of radiomics model development and interpretability. The results in this paper are replicable with our Python code, which is publicly available on GitHub.
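One pipeline ingredient mentioned above, the classifier chain strategy, links the prediction targets (grade, IDH, 1p/19q) so that earlier predictions feed later ones. A minimal scikit-learn sketch follows; the features and labels are synthetic stand-ins, and binary toy targets replace the real label definitions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multioutput import ClassifierChain
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(212, 30))         # stand-in radiomic feature matrix
    Y = rng.integers(0, 2, size=(212, 3))  # toy binary targets: grade, IDH, 1p/19q

    X = StandardScaler().fit_transform(X)  # the Z-score normalization step
    chain = ClassifierChain(LogisticRegression(max_iter=1000), order=[0, 1, 2])
    chain.fit(X, Y)
    print(chain.predict_proba(X[:5]))      # per-target probabilities for 5 cases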
Validation of MRI-Based Models to Predict MGMT Promoter Methylation in Gliomas: BraTS 2021 Radiogenomics Challenge
Kim, B. H.
Lee, H.
Choi, K. S.
Nam, J. G.
Park, C. K.
Park, S. H.
Chung, J. W.
Choi, S. H.
Cancers (Basel)2022Journal Article, cited 1 times
Website
BraTS 2021
Radiogenomics
O6-methylguanine-DNA methyl transferase
Glioma
neural networks
O6-methylguanine-DNA methyl transferase (MGMT) methylation prediction models were developed using only small datasets without proper external validation and achieved good diagnostic performance, which seems to indicate a promising future for radiogenomics. However, the diagnostic performance was not reproducible for numerous research teams when using a larger dataset in the RSNA-MICCAI Brain Tumor Radiogenomic Classification 2021 challenge. To our knowledge, there has been no study regarding the external validation of MGMT prediction models using large-scale multicenter datasets. We tested recent CNN architectures via extensive experiments to investigate whether MGMT methylation in gliomas can be predicted using MR images. Specifically, prediction models were developed and validated with different training datasets: (1) the merged (SNUH + BraTS) (n = 985); (2) SNUH (n = 400); and (3) BraTS datasets (n = 585). A total of 420 training and validation experiments were performed on combinations of datasets, convolutional neural network (CNN) architectures, MRI sequences, and random seed numbers. The first-place solution of the RSNA-MICCAI radiogenomic challenge was also validated using the external test set (SNUH). For model evaluation, the area under the receiver operating characteristic curve (AUROC), accuracy, precision, and recall were obtained. With unexpected negative results, 80.2% (337/420) and 60.0% (252/420) of the 420 developed models showed no significant difference with a chance level of 50% in terms of test accuracy and test AUROC, respectively. The test AUROC and accuracy of the first-place solution of the BraTS 2021 challenge were 56.2% and 54.8%, respectively, as validated on the SNUH dataset. In conclusion, MGMT methylation status of gliomas may not be predictable with preoperative MR images even using deep learning.
A Deep Learning-Aided Automated Method for Calculating Metabolic Tumor Volume in Diffuse Large B-Cell Lymphoma
Metabolic tumor volume (MTV) is a robust prognostic biomarker in diffuse large B-cell lymphoma (DLBCL). The available semiautomatic software for calculating MTV requires manual input, limiting its routine application in clinical research. Our objective was to develop a fully automated method (AM) for calculating MTV and to validate the method by comparing its results with those from two nuclear medicine (NM) readers. The automated method designed for this study employed a deep convolutional neural network to segment normal physiologic structures from the computed tomography (CT) scans that demonstrate intense avidity on positron emission tomography (PET) scans. The study cohort consisted of 100 patients with newly diagnosed DLBCL who were randomly selected from the Alliance/CALGB 50303 (NCT00118209) trial. We observed high concordance in MTV calculations between the AM and readers, with Pearson's correlation coefficients and intraclass correlations comparing reader 1 to AM of 0.9814 (p < 0.0001) and 0.98 (p < 0.001; 95%CI = 0.96 to 0.99), respectively; and comparing reader 2 to AM of 0.9818 (p < 0.0001) and 0.98 (p < 0.0001; 95%CI = 0.96 to 0.99), respectively. The Bland–Altman plots showed only relatively small systematic errors between the proposed method and readers for both MTV and maximum standardized uptake value (SUVmax). This approach may possess the potential to integrate PET-based biomarkers in clinical trials.
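Agreement between an automated method and human readers, as reported above, is commonly summarized with a correlation coefficient plus Bland-Altman limits of agreement. A minimal NumPy/SciPy sketch on toy MTV values:

    import numpy as np
    from scipy import stats

    reader = np.array([120.0, 340.5, 98.2, 410.0, 230.1])  # toy reader MTVs (mL)
    auto = np.array([118.5, 352.0, 101.0, 402.3, 225.9])   # toy automated MTVs (mL)

    r, p = stats.pearsonr(reader, auto)
    print(f"Pearson r = {r:.4f}, p = {p:.4g}")

    # Bland-Altman summary: mean difference (bias) and 95% limits of agreement.
    diff = auto - reader
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    print(f"bias = {bias:.2f}, LoA = ({bias - half_width:.2f}, {bias + half_width:.2f})")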
Multi-View Radiomics Feature Fusion Reveals Distinct Immuno-Oncological Characteristics and Clinical Prognoses in Hepatocellular Carcinoma
Hepatocellular carcinoma (HCC) is one of the most prevalent malignancies worldwide, and the pronounced intra- and inter-tumor heterogeneity restricts clinical benefits. Molecular heterogeneity in HCC is commonly explored by endoscopic biopsy or surgical forceps, but invasive tissue sampling and possible complications limit their broader adoption. The radiomics framework is a promising non-invasive strategy for tumor heterogeneity decoding, and the linkage between radiomics and immuno-oncological characteristics is worth further in-depth study. In this study, we extracted multi-view imaging features from contrast-enhanced CT (CE-CT) scans of HCC patients, followed by developing a fused imaging feature subtyping (FIFS) model to identify two distinct radiomics subtypes. We observed two subtypes of patients with distinct texture-dominated radiomics profiles and prognostic outcomes, and the radiomics subtype identified by the FIFS model was an independent prognostic factor. The heterogeneity was mainly attributed to inflammatory pathway activity and the tumor immune microenvironment. The predominant radiogenomics association was identified between texture-related features and immune-related pathways by integrating network analysis, and was validated in two independent cohorts. Collectively, this work described the close connections between multi-view radiomics features and immuno-oncological characteristics in HCC, and our integrative radiogenomics analysis strategy may provide clues to non-invasive inflammation-based risk stratification.
Within-Modality Synthesis and Novel Radiomic Evaluation of Brain MRI Scans
Rezaeijo, S. M.
Chegeni, N.
Baghaei Naeini, F.
Makris, D.
Bakas, S.
Cancers (Basel)2023Journal Article, cited 0 times
ACRIN-DSC-MR-Brain
ACRIN 6677
CycleGAN
Radiomics
Magnetic Resonance Imaging (MRI)
generative model
One of the most common challenges in brain MRI scanning is the need to perform different MRI sequences depending on the type and properties of the tissues. In this paper, we propose a generative method to translate T2-Weighted (T2W) Magnetic Resonance Imaging (MRI) volumes from T2-weighted Fluid-Attenuated Inversion Recovery (FLAIR) volumes and vice versa using Generative Adversarial Networks (GANs). To evaluate the proposed method, we propose a novel evaluation schema for generative and synthetic approaches based on radiomic features. For evaluation purposes, we consider 510 paired slices from 102 patients to train two different GAN-based architectures, CycleGAN and the Dual Cycle-Consistent Adversarial network (DC(2)Anet). The results indicate that the generative methods can produce results similar to the original sequence without significant changes in the radiomic features. Therefore, such a method can assist clinicians in making decisions based on the generated images when different sequences are not available or there is not enough time to re-perform the MRI scans.
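The cycle-consistent training used above pairs an adversarial term with a cycle term that requires translating to the other sequence and back to reproduce the input. A minimal PyTorch sketch of just the cycle term, with single-convolution stand-ins for the real generators (all shapes and modules illustrative):

    import torch
    import torch.nn as nn

    # Stand-in generators for T2W <-> FLAIR translation; real ones are deep CNNs.
    g_t2_to_flair = nn.Conv2d(1, 1, 3, padding=1)
    g_flair_to_t2 = nn.Conv2d(1, 1, 3, padding=1)
    l1 = nn.L1Loss()

    t2 = torch.randn(4, 1, 64, 64)     # toy T2W batch
    flair = torch.randn(4, 1, 64, 64)  # toy FLAIR batch

    # Cycle consistency: translating there and back should recover the input.
    cycle_loss = (l1(g_flair_to_t2(g_t2_to_flair(t2)), t2)
                  + l1(g_t2_to_flair(g_flair_to_t2(flair)), flair))
    # A full objective would add adversarial (and, optionally, identity) terms.
    print(cycle_loss.item())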
Ability of (18)F-FDG Positron Emission Tomography Radiomics and Machine Learning in Predicting KRAS Mutation Status in Therapy-Naive Lung Adenocarcinoma
Zhang, R.
Shi, K.
Hohenforst-Schmidt, W.
Steppert, C.
Sziklavari, Z.
Schmidkonz, C.
Atzinger, A.
Hartmann, A.
Vieth, M.
Forster, S.
Cancers (Basel)2023Journal Article, cited 0 times
NSCLC Radiogenomics
KRAS
Positron Emission Tomography (PET)
Modeling
Classification
lung adenocarcinoma
Machine learning
Radiomic features
OBJECTIVE: Considering the essential role of KRAS mutation in NSCLC and the limited experience of PET radiomic features in KRAS mutation, a prediction model was built in our current analysis. Our model aims to evaluate the status of KRAS mutants in lung adenocarcinoma by combining PET radiomics and machine learning. METHOD: Patients were retrospectively selected from our database and screened from the NSCLC radiogenomic dataset from TCIA. The dataset was randomly divided into three subgroups. Two open-source software programs, 3D Slicer and Python, were used to segment lung tumours and extract radiomic features from (18)F-FDG-PET images. Feature selection was performed by the Mann-Whitney U test, Spearman's rank correlation coefficient, and RFE. Logistic regression was used to build the prediction models. AUCs from ROCs were used to compare the predictive abilities of the models. Calibration plots were obtained to examine the agreements of observed and predictive values in the validation and testing groups. DCA curves were performed to check the clinical impact of the best model. Finally, a nomogram was obtained to present the selected model. RESULTS: One hundred and nineteen patients with lung adenocarcinoma were included in our study. The whole group was divided into three datasets: a training set (n = 96), a validation set (n = 11), and a testing set (n = 12). In total, 1781 radiomic features were extracted from PET images. One hundred sixty-three predictive models were established according to each original feature group and their combinations. After model comparison and selection, one model, including wHLH_fo_IR, wHLH_glrlm_SRHGLE, wHLH_glszm_SAHGLE, and smoking habits, was validated with the highest predictive value. The model obtained AUCs of 0.731 (95% CI: 0.619~0.843), 0.750 (95% CI: 0.248~1.000), and 0.750 (95% CI: 0.448~1.000) in the training set, the validation set and the testing set, respectively. Results from calibration plots in validation and testing groups indicated that there was no departure between observed and predictive values in the two datasets (p = 0.377 and 0.861, respectively). CONCLUSIONS: Our model combining (18)F-FDG-PET radiomics and machine learning indicated a good predictive ability of KRAS status in lung adenocarcinoma. It may be a helpful non-invasive method to screen the KRAS mutation status of heterogenous lung adenocarcinoma before selected biopsy sampling.
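The model-building recipe above (a Mann-Whitney U filter, a Spearman-correlation redundancy check, then RFE around logistic regression) can be sketched with SciPy and scikit-learn. The data, thresholds, and feature counts below are toy assumptions, not the study's configuration.

    import numpy as np
    from scipy.stats import mannwhitneyu, spearmanr
    from sklearn.feature_selection import RFE
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(96, 50))    # stand-in PET radiomic features
    y = rng.integers(0, 2, size=96)  # toy KRAS mutant vs. wild-type labels

    # 1) Univariate filter: keep features that differ between classes (p < 0.05).
    keep = [j for j in range(X.shape[1])
            if mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue < 0.05]

    # 2) Redundancy filter: drop features highly rank-correlated with a kept one.
    selected = []
    for j in keep:
        if all(abs(spearmanr(X[:, j], X[:, k])[0]) < 0.8 for k in selected):
            selected.append(j)

    # 3) Wrapper: recursive feature elimination around logistic regression.
    if selected:
        rfe = RFE(LogisticRegression(max_iter=1000),
                  n_features_to_select=min(4, len(selected)))
        rfe.fit(X[:, selected], y)
        print([selected[i] for i, kept in enumerate(rfe.support_) if kept])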
Segmentation of 71 Anatomical Structures Necessary for the Evaluation of Guideline-Conforming Clinical Target Volumes in Head and Neck Cancers
Walter, A.
Hoegen-Sassmannshausen, P.
Stanic, G.
Rodrigues, J. P.
Adeberg, S.
Jakel, O.
Frank, M.
Giske, K.
Cancers (Basel)2024Journal Article, cited 0 times
HNSCC-3DCT-RT
Algorithm Development
Automatic Segmentation
Head-and-neck cancer
anatomical structures
clinical target volume delineation
expert guidelines
Lymph Nodes
The delineation of the clinical target volumes (CTVs) for radiation therapy is time-consuming, requires intensive training and shows high inter-observer variability. Supervised deep-learning methods depend heavily on consistent training data; thus, State-of-the-Art research focuses on making CTV labels more homogeneous and strictly bounding them to current standards. International consensus expert guidelines standardize CTV delineation by conditioning the extension of the clinical target volume on the surrounding anatomical structures. Training strategies that directly follow the construction rules given in the expert guidelines or the possibility of quantifying the conformance of manually drawn contours to the guidelines are still missing. Seventy-one anatomical structures that are relevant to CTV delineation in head- and neck-cancer patients, according to the expert guidelines, were segmented on 104 computed tomography scans, to assess the possibility of automating their segmentation by State-of-the-Art deep learning methods. All 71 anatomical structures were subdivided into three subsets of non-overlapping structures, and a 3D nnU-Net model with five-fold cross-validation was trained for each subset, to automatically segment the structures on planning computed tomography scans. We report the DICE, Hausdorff distance and surface DICE for 71 + 5 anatomical structures, for most of which no previous segmentation accuracies have been reported. For those structures for which prediction values have been reported, our segmentation accuracy matched or exceeded the reported values. The predictions from our models were always better than those predicted by the TotalSegmentator. The sDICE with 2 mm margin was larger than 80% for almost all the structures. Individual structures with decreased segmentation accuracy are analyzed and discussed with respect to their impact on the CTV delineation following the expert guidelines. No deviation is expected to affect the rule-based automation of the CTV delineation.
An Accuracy vs. Complexity Comparison of Deep Learning Architectures for the Detection of COVID-19 Disease
Sarv Ahrabi, Sima
Scarpiniti, Michele
Baccarelli, Enzo
Momenzadeh, Alireza
Computation2021Journal Article, cited 0 times
Website
COVID-19-AR
LUNG
Deep Learning
Computer Aided Detection (CADe)
Brain Tumor Segmentation of MRI Images Using Processed Image Driven U-Net Architecture
Arora, Anuja
Jayal, Ambikesh
Gupta, Mayank
Mittal, Prakhar
Satapathy, Suresh Chandra
Computers2021Journal Article, cited 6 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2018
BRAIN
Segmentation
Algorithm Development
Brain tumor segmentation seeks to separate healthy tissue from tumorous regions. This is an essential step in diagnosis and treatment planning to maximize the likelihood of successful treatment. Magnetic resonance imaging (MRI) provides detailed information about brain tumor anatomy, making it an important tool for effective diagnosis and a prerequisite for replacing the existing manual detection workflow, in which outcomes depend on the skills and expertise of a human reader. To solve this problem, a brain tumor segmentation and detection system is proposed, with experiments tested on the collected BraTS 2018 dataset. This dataset contains four different MRI modalities for each patient (T1, T2, T1Gd, and FLAIR) and, as an outcome, provides a segmented image and the ground truth of tumor segmentation, i.e., the class label. A fully automatic methodology to handle the task of segmentation of gliomas in pre-operative MRI scans is developed using a U-Net-based deep learning model. The first step transforms the input image data, which are then processed through various techniques: subset division, narrow object region, category brain slicing, a watershed algorithm, and feature scaling. All these steps are applied before the data enter the U-Net deep learning model, which performs pixel-label segmentation of the tumor region. The algorithm reached high accuracy on the BraTS 2018 training, validation, and testing datasets. The proposed model achieved a Dice coefficient of 0.9815, 0.9844, 0.9804, and 0.9954 on the testing dataset for sets HGG-1, HGG-2, HGG-3, and LGG-1, respectively.
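The headline numbers above are Dice coefficients, the standard overlap measure for segmentation masks. A minimal NumPy sketch on toy binary masks:

    import numpy as np

    def dice(pred, truth, eps=1e-7):
        # Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks A, B.
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

    truth = np.zeros((64, 64), dtype=np.uint8)
    truth[20:40, 20:40] = 1
    pred = np.zeros((64, 64), dtype=np.uint8)
    pred[22:42, 22:42] = 1
    print(f"Dice = {dice(pred, truth):.4f}")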
Auto Diagnostics of Lung Nodules Using Minimal Characteristics Extraction Technique
Peña, Diego M
Luo, Shouhua
Abdelgader, Abdeldime
Diagnostics2016Journal Article, cited 6 times
Website
LungCT-Diagnosis
SPIE-AAPM Lung CT Challenge
Segmentation
Computer-aided detection (CAD) systems provide useful tools and an advantageous process for physicians aiming to detect lung nodules. This paper develops a method composed of four processes for lung nodule detection. The first step employs image acquisition and pre-processing techniques to isolate the lungs from the rest of the body. The second stage involves segmentation, using a 2D algorithm applied to every slice of a scan to eliminate non-informative structures inside the lungs, and a 3D blob algorithm coupled with a connectivity algorithm to select possible nodule-shaped candidates. The combination of these algorithms efficiently reduces the rate of false positives. The third process extracts eight minimal representative characteristics of the possible candidates. The final step utilizes a support vector machine to classify the possible candidates into nodules and non-nodules depending on their features. For the objective of finding nodules larger than 4 mm, the proposed approach demonstrated encouraging results: among 65 computed tomography (CT) scans, a sensitivity of 94.23% and a specificity of 84.75% were obtained. The corresponding accuracy was 89.19%, with 45 scans used for testing and 20 for training.
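Sensitivity, specificity, and accuracy, as reported above, derive directly from the confusion matrix. A minimal scikit-learn sketch on toy labels:

    import numpy as np
    from sklearn.metrics import confusion_matrix

    y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])  # 1 = nodule, 0 = non-nodule (toy)
    y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 0])

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print("sensitivity:", tp / (tp + fn))
    print("specificity:", tn / (tn + fp))
    print("accuracy:", (tp + tn) / (tp + tn + fp + fn))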
Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities
Kim, Incheol
Rajaraman, Sivaramakrishnan
Antani, Sameer
Diagnostics (Basel)2019Journal Article, cited 0 times
Website
Computer Aided Detection (CADe)
Computer Aided Diagnosis (CADx)
Convolutional Neural Network (CNN)
Deep learning
Deep learning (DL) methods are increasingly being applied for developing reliable computer-aided detection (CADe), diagnosis (CADx), and information retrieval algorithms. However, challenges in interpreting and explaining the learned behavior of the DL models hinder their adoption and use in real-world systems. In this study, we propose a novel method called “Class-selective Relevance Mapping” (CRM) for localizing and visualizing discriminative regions of interest (ROI) within a medical image. Such visualizations offer improved explanation of the convolutional neural network (CNN)-based DL model predictions. We demonstrate the effectiveness of CRM in classifying medical imaging modalities so that they can be automatically labeled for visual information retrieval applications. The CRM is based on a linear sum of incremental mean squared errors (MSE) calculated at the output layer of the CNN model. It measures both positive and negative contributions of each spatial element in the feature maps produced from the last convolution layer that lead to correct classification of an input image. A series of experiments on a “multi-modality” CNN model designed for classifying seven different types of image modalities shows that the proposed method is significantly better in detecting and localizing the discriminative ROIs than other state-of-the-art class-activation methods. Further, to visualize its effectiveness, we generate “class-specific” ROI maps by averaging the CRM scores of images in each modality class, and characterize the visual explanation through their different size, shape, and location for our multi-modality CNN model that achieved over 98% performance on a dataset constructed from publicly available images.
Multimodal Brain Tumor Classification Using Deep Learning and Robust Feature Selection: A Machine Learning Application for Radiologists
Khan, M. A.
Ashraf, I.
Alhaisoni, M.
Damasevicius, R.
Scherer, R.
Rehman, A.
Bukhari, S. A. C.
Diagnostics (Basel)2020Journal Article, cited 216 times
Website
BraTS 2015
BraTS 2017
BraTS 2018
Partial least squares
Deep learning
Radiomic features
Transfer learning
Algorithm Development
Manual identification of brain tumors is an error-prone and tedious process for radiologists; therefore, it is crucial to adopt an automated system. Binary classification, such as malignant versus benign, is relatively trivial, whereas multimodal brain tumor classification (T1, T2, T1CE, and FLAIR) is a challenging task for radiologists. Here, we present an automated multimodal classification method using deep learning for brain tumor type classification. The proposed method consists of five core steps. In the first step, linear contrast stretching is employed using edge-based histogram equalization and the discrete cosine transform (DCT). In the second step, deep learning feature extraction is performed: utilizing transfer learning, two pre-trained convolutional neural network (CNN) models, namely VGG16 and VGG19, were used for feature extraction. In the third step, a correntropy-based joint learning approach was implemented along with the extreme learning machine (ELM) for the selection of the best features. In the fourth step, the partial least squares (PLS)-based robust covariant features were fused into one matrix. The combined matrix was fed to the ELM for final classification. The proposed method was validated on the BraTS datasets, and accuracies of 97.8%, 96.9%, and 92.5% were achieved for BraTS2015, BraTS2017, and BraTS2018, respectively.
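The transfer-learning step above uses pretrained VGG networks purely as feature extractors. A sketch of that idea with torchvision (weights enum per torchvision 0.13+); the batch is a synthetic stand-in, and the correntropy/ELM/PLS stages are omitted.

    import torch
    from torchvision import models

    # Pretrained VGG16 as a frozen feature extractor.
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    vgg.eval()
    for p in vgg.parameters():
        p.requires_grad = False

    batch = torch.randn(2, 3, 224, 224)  # stand-in for preprocessed MR slices
    with torch.no_grad():
        feats = vgg.features(batch)                   # convolutional feature maps
        feats = torch.flatten(vgg.avgpool(feats), 1)  # (2, 25088) deep features
    print(feats.shape)                   # input to downstream selection/fusion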
BrainSeg-Net: Brain Tumor MR Image Segmentation via Enhanced Encoder-Decoder Network
Rehman, M. U.
Cho, S.
Kim, J.
Chong, K. T.
Diagnostics (Basel)2021Journal Article, cited 0 times
BraTS 2017
BraTS 2018
BraTS 2019
Feature Enhancer (FE)
Magnetic Resonance Imaging (MRI)
brain tumor
diagnostics
medical imaging
Segmentation
Radiomics
Efficient segmentation of Magnetic Resonance (MR) brain tumor images is of the utmost value for the diagnosis of the tumor region. In recent years, advances in neural networks have been used to refine the segmentation performance of brain tumor sub-regions. Brain tumor segmentation has proven to be a complicated task even for neural networks because of small-scale tumor regions, which are difficult to identify owing to their tiny size and the large imbalance in area occupancy among tumor classes. In previous state-of-the-art neural network models, the biggest problem was that location information, along with spatial details, gets lost in deeper layers. To address these problems, we propose an encoder-decoder based model named BrainSeg-Net. A Feature Enhancer (FE) block is incorporated into the BrainSeg-Net architecture; it extracts middle-level features from the low-level features of the shallow layers and shares them with the dense layers. This feature aggregation helps to achieve better tumor identification performance. To address the class-imbalance problem, we use a custom-designed loss function. For evaluation of the BrainSeg-Net architecture, three benchmark datasets are utilized: BraTS2017, BraTS 2018, and BraTS 2019. Segmentation of the Enhancing Core (EC), Whole Tumor (WT), and Tumor Core (TC) is carried out. The proposed architecture exhibits a good improvement compared with existing baseline and state-of-the-art techniques. BrainSeg-Net uses enhanced location and spatial features for MR brain tumor segmentation and performs better than the existing plethora of brain MR image segmentation approaches.
A Cascaded Neural Network for Staging in Non-Small Cell Lung Cancer Using Pre-Treatment CT
Choi, J.
Cho, H. H.
Kwon, J.
Lee, H. Y.
Park, H.
Diagnostics (Basel)2021Journal Article, cited 0 times
Website
NSCLC-Radiomics-Genomics
NSCLC Radiogenomics
CPTAC-LUAD
CPTAC-LSCC
TCGA-LUAD
TCGA-LUSC
Computed Tomography (CT)
Convolutional neural networks (CNN)
Deep Learning
LUNG
Classification
BACKGROUND AND AIM: Tumor staging in non-small cell lung cancer (NSCLC) is important for treatment and prognosis. Staging involves expert interpretation of imaging, which we aim to automate with deep learning (DL). We proposed a cascaded DL method comprising two steps to classify between early- and advanced-stage NSCLC using pretreatment computed tomography. METHODS: We developed and tested a DL model to classify between early and advanced stages using training (n = 90), validation (n = 8), and two test (n = 37, n = 26) cohorts obtained from the public domain. The first step adopted an autoencoder network to compress the imaging data into latent variables, and the second step used the latent variables to classify the stages using a convolutional neural network (CNN). Other DL and machine learning-based approaches were compared. RESULTS: Our model was tested in the two test cohorts, CPTAC and TCGA. In CPTAC, our model achieved an accuracy of 0.8649, a sensitivity of 0.8000, a specificity of 0.9412, and an area under the curve (AUC) of 0.8206, compared to other approaches (AUC 0.6824-0.7206), for classifying between early and advanced stages. In TCGA, our model achieved an accuracy of 0.8077, a sensitivity of 0.7692, a specificity of 0.8462, and an AUC of 0.8343. CONCLUSION: Our cascaded DL model for classifying NSCLC patients into early and advanced stages showed promising results and could help future NSCLC research.
Does Anatomical Contextual Information Improve 3D U-Net-Based Brain Tumor Segmentation?
Tampu, Iulian Emil
Haj-Hosseini, Neda
Eklund, Anders
Diagnostics2021Journal Article, cited 0 times
BraTS 2020
3D U-Net
Magnetic Resonance Imaging (MRI)
Automatic segmentation
Low grade glioma
High grade glioma
BRAIN
Effective, robust, and automatic tools for brain tumor segmentation are needed for the extraction of information useful in treatment planning. Recently, convolutional neural networks have shown remarkable performance in the identification of tumor regions in magnetic resonance (MR) images. Context-aware artificial intelligence is an emerging concept for the development of deep learning applications for computer-aided medical image analysis. A large portion of the current research is devoted to the development of new network architectures to improve segmentation accuracy by using context-aware mechanisms. In this work, it is investigated whether or not the addition of contextual information from the brain anatomy in the form of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) masks and probability maps improves U-Net-based brain tumor segmentation. The BraTS2020 dataset was used to train and test two standard 3D U-Net (nnU-Net) models that, in addition to the conventional MR image modalities, used the anatomical contextual information as extra channels in the form of binary masks (CIM) or probability maps (CIP). For comparison, a baseline model (BLM) that only used the conventional MR image modalities was also trained. The impact of adding contextual information was investigated in terms of overall segmentation accuracy, model training time, domain generalization, and compensation for fewer MR modalities available for each subject. Median (mean) Dice scores of 90.2 (81.9), 90.2 (81.9), and 90.0 (82.1) were obtained on the official BraTS2020 validation dataset (125 subjects) for BLM, CIM, and CIP, respectively. Results show that there is no statistically significant difference when comparing Dice scores between the baseline model and the contextual information models (p > 0.05), even when comparing performances for high and low grade tumors independently. In a few low grade cases where improvement was seen, the number of false positives was reduced. Moreover, no improvements were found when considering model training time or domain generalization. Only in the case of compensation for fewer MR modalities available for each subject did the addition of anatomical contextual information significantly improve (p < 0.05) the segmentation of the whole tumor. In conclusion, there is no overall significant improvement in segmentation performance when using anatomical contextual information in the form of either binary WM, GM, and CSF masks or probability maps as extra channels.
Impact of Lesion Delineation and Intensity Quantisation on the Stability of Texture Features from Lung Nodules on CT: A Reproducible Study
Bianconi, Francesco
Fravolini, Mario Luca
Palumbo, Isabella
Pascoletti, Giulia
Nuvoli, Susanna
Rondini, Maria
Spanu, Angela
Palumbo, Barbara
Diagnostics2021Journal Article, cited 0 times
LIDC-IDRI
Computer-assisted analysis of three-dimensional imaging data (radiomics) has received a lot of research attention as a possible means to improve the management of patients with lung cancer. Building robust predictive models for clinical decision making requires the imaging features to be stable enough to changes in the acquisition and extraction settings. Experimenting on 517 lung lesions from a cohort of 207 patients, we assessed the stability of 88 texture features from the following classes: first-order (13 features), Grey-level Co-Occurrence Matrix (24), Grey-level Difference Matrix (14), Grey-level Run-length Matrix (16), Grey-level Size Zone Matrix (16) and Neighbouring Grey-tone Difference Matrix (5). The analysis was based on a public dataset of lung nodules and open-access routines for feature extraction, which makes the study fully reproducible. Our results identified 30 features that had good or excellent stability relative to lesion delineation, 28 to intensity quantisation and 18 to both. We conclude that selecting the right set of imaging features is critical for building clinical predictive models, particularly when changes in lesion delineation and/or intensity quantisation are involved.
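The study above extracts its texture features with open-access routines; in Python, PyRadiomics is the usual tool for this kind of extraction, and the sketch below shows the general shape of such a call. The file paths are placeholders, and the settings (bin width, enabled classes) are illustrative, not the study's exact configuration.

    # Sketch of handcrafted-feature extraction with PyRadiomics.
    from radiomics import featureextractor

    extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=25)  # quantisation setting
    extractor.disableAllFeatures()
    extractor.enableFeatureClassByName("firstorder")
    extractor.enableFeatureClassByName("glcm")  # Grey-level Co-Occurrence Matrix class

    # Placeholder paths to a CT volume and its lesion mask (e.g., NIfTI files).
    features = extractor.execute("lesion_ct.nii.gz", "lesion_mask.nii.gz")
    for name, value in features.items():
        if name.startswith("original_"):
            print(name, value)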
Narrow Band Active Contour Attention Model for Medical Segmentation
Le, N.
Bui, T.
Vo-Ho, V. K.
Yamazaki, K.
Luu, K.
Diagnostics (Basel)2021Journal Article, cited 6 times
Website
BraTS 2018
Deep learning
Segmentation
Weak boundary
Medical image segmentation is one of the most challenging tasks in medical image analysis and widely developed for many clinical applications. While deep learning-based approaches have achieved impressive performance in semantic segmentation, they are limited to pixel-wise settings with imbalanced-class data problems and weak boundary object segmentation in medical images. In this paper, we tackle those limitations by developing a new two-branch deep network architecture which takes both higher level features and lower level features into account. The first branch extracts higher level feature as region information by a common encoder-decoder network structure such as Unet and FCN, whereas the second branch focuses on lower level features as support information around the boundary and processes in parallel to the first branch. Our key contribution is the second branch named Narrow Band Active Contour (NB-AC) attention model which treats the object contour as a hyperplane and all data inside a narrow band as support information that influences the position and orientation of the hyperplane. Our proposed NB-AC attention model incorporates the contour length with the region energy involving a fixed-width band around the curve or surface. The proposed network loss contains two fitting terms: (i) a high level feature (i.e., region) fitting term from the first branch; (ii) a lower level feature (i.e., contour) fitting term from the second branch including the (ii1) length of the object contour and (ii2) regional energy functional formed by the homogeneity criterion of both the inner band and outer band neighboring the evolving curve or surface. The proposed NB-AC loss can be incorporated into both 2D and 3D deep network architectures. The proposed network has been evaluated on different challenging medical image datasets, including DRIVE, iSeg17, MRBrainS18 and Brats18. The experimental results have shown that the proposed NB-AC loss outperforms other mainstream loss functions: Cross Entropy, Dice, Focal on two common segmentation frameworks Unet and FCN. Our 3D network which is built upon the proposed NB-AC loss and 3DUnet framework achieved state-of-the-art results on multiple volumetric datasets.
Comparison of Supervised and Unsupervised Approaches for the Generation of Synthetic CT from Cone-Beam CT
Rossi, M.
Cerveri, P.
Diagnostics (Basel)2021Journal Article, cited 0 times
Website
Pelvic-Reference-Data
Computed Tomography (CT)
Machine Learning
U-Net
cycleGAN
Image Registration
Supervised training
synthetic images
unsupervised training
Due to major artifacts and uncalibrated Hounsfield units (HU), cone-beam computed tomography (CBCT) cannot be used readily for diagnostics and therapy planning purposes. This study addresses image-to-image translation by convolutional neural networks (CNNs) to convert CBCT to CT-like scans, comparing supervised to unsupervised training techniques on a publicly available pelvic CT/CBCT dataset. Interestingly, quantitative results favored the supervised over the unsupervised approach, showing improvements in HU accuracy (62% vs. 50%), structural similarity index (2.5% vs. 1.1%), and peak signal-to-noise ratio (15% vs. 8%). Qualitative results, conversely, showed more anatomical artifacts in the synthetic CT generated by the supervised techniques. This was attributed to the higher sensitivity of the supervised training technique to the pixel-wise correspondence contained in the loss function. The unsupervised technique does not require correspondence and mitigates this drawback, as it combines adversarial, cycle-consistency, and identity loss functions. Overall, the paper has two main impacts: (a) demonstrating the feasibility of CNNs to generate accurate synthetic CT from CBCT images, which is fast and easy to use compared to traditional techniques applied in clinics; (b) proposing guidelines to drive the selection of the better training technique, which can be transferred to more general image-to-image translation tasks.
Stability and Reproducibility of Radiomic Features Based Various Segmentation Technique on MR Images of Hepatocellular Carcinoma (HCC)
Haniff, N. S. M.
Abdul Karim, M. K.
Osman, N. H.
Saripan, M. I.
Che Isa, I. N.
Ibahim, M. J.
Diagnostics (Basel)2021Journal Article, cited 1 times
Website
TCGA-LIHC
LIVER
Magnetic Resonance Imaging (MRI)
Manual segmentation
Radiomics
Semi-automatic segmentation
Hepatocellular carcinoma (HCC) is considered a complex liver disease and is ranked as having the eighth-highest mortality rate, with a prevalence of 2.4% in Malaysia. Magnetic resonance imaging (MRI) has been acknowledged for its advantages as a gold-standard technique for diagnosing HCC, and yet false-negative diagnoses from the examinations are inevitable. In this study, 30 MR images from patients diagnosed with HCC were used to evaluate the robustness of semi-automatic segmentation using the flood-fill algorithm for quantitative feature extraction. The relevant features were extracted from the segmented MR images of HCC. Four types of features were extracted for this study: tumour intensity, shape features, textural features and wavelet features. A total of 662 radiomic features were extracted from manual and semi-automatic segmentation and compared using the intra-class correlation coefficient (ICC). Radiomic features extracted using semi-automatic segmentation, utilizing the flood-fill algorithm from 3D Slicer, had significantly higher reproducibility (average ICC = 0.952 +/- 0.009, p < 0.05) compared with features extracted from manual segmentation (average ICC = 0.897 +/- 0.011, p > 0.05). Moreover, features extracted from semi-automatic segmentation were more robust than those from manual segmentation. This study shows that semi-automatic segmentation in 3D Slicer is a better alternative to manual segmentation, as it can produce more robust and reproducible radiomic features.
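For reference, a self-contained sketch of the two-way random, single-measure ICC(2,1) that such reproducibility comparisons typically use is given below; treating the two segmentations as "raters" is an assumption about the paper's setup.

```python
import numpy as np

def icc_2_1(ratings):
    # ratings: (n_subjects, k_raters) values of one radiomic feature,
    # e.g. column 0 = manual mask, column 1 = semi-automatic flood-fill mask.
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)                     # between-subjects mean square
    ms_c = ss_cols / (k - 1)                     # between-raters mean square
    ms_e = ss_err / ((n - 1) * (k - 1))          # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```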
Brain Tumor Detection and Classification on MR Images by a Deep Wavelet Auto-Encoder Model
Abd El Kader, Isselmou
Xu, Guizhi
Shuai, Zhang
Saminu, Sani
Javaid, Imran
Ahmad, Isah Salim
Kamhi, Souha
Diagnostics2021Journal Article, cited 16 times
Website
TCGA-GBM
TCGA-LGG
BraTS 2015
Magnetic Resonance Imaging (MRI)
BRAIN
Computer Aided Detection (CADe)
Algorithm Development
Classification
Segmentation
Deep Learning
Wavelet autoencoder
The process of diagnosing brain tumors is very complicated for many reasons, including the brain's synaptic structure, size, and shape. Machine learning techniques are employed to help doctors detect brain tumors and support their decisions. In recent years, deep learning techniques have achieved great success in medical image analysis. This paper proposes a deep wavelet autoencoder model, named the "DWAE model", employed to classify each input slice as tumor (abnormal) or non-tumor (normal). A high-pass filter was used to reveal the heterogeneity of the MRI images, and its output was integrated with the input images. A median filter was utilized to merge slices. The quality of the output slices was improved by highlighting edges and smoothing the input MR brain images. Then, a seed-growing method based on 4-connectivity was applied, since thresholding clusters pixels of equal intensity in the input MR data. The segmented MR image slices were fed to the proposed two-layer deep wavelet auto-encoder model, with 200 hidden units in the first layer and 400 hidden units in the second layer. A softmax layer was then trained and tested to identify normal and abnormal MR images. The contribution of the deep wavelet auto-encoder model lies in the analysis of the pixel patterns of MR brain images and the ability to detect and classify tumors with high accuracy, short time, and low validation loss. To train and test the overall performance of the proposed model, we utilized 2500 MR brain images from the BRATS 2012, 2013, 2014, and 2015 challenges and ISLES, consisting of normal and abnormal images. The experimental results show that the proposed model achieved an accuracy of 99.3%, a validation loss of 0.1, and low FPR and FNR values. This result demonstrates that the proposed DWAE model can facilitate the automatic detection of brain tumors.
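A minimal PyTorch sketch of the reported layer sizes (200 and 400 hidden units followed by a softmax head) is shown below; the input size, activations, and the omitted wavelet decomposition stage are assumptions.

```python
import torch.nn as nn

class DWAEClassifier(nn.Module):
    """Two-layer stacked autoencoder classifier, sketched from the abstract."""
    def __init__(self, in_features=32 * 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_features, 200), nn.Sigmoid(),  # first AE hidden layer
            nn.Linear(200, 400), nn.Sigmoid(),          # second AE hidden layer
        )
        self.head = nn.Linear(400, 2)  # logits over {normal, abnormal}

    def forward(self, x):
        return self.head(self.encoder(x.flatten(1)))
```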
A Predictive Clinical-Radiomics Nomogram for Survival Prediction of Glioblastoma Using MRI
Ammari, Samy
Sallé de Chou, Raoul
Balleyguier, Corinne
Chouzenoux, Emilie
Touat, Mehdi
Quillent, Arnaud
Dumont, Sarah
Bockel, Sophie
Garcia, Gabriel C. T. E.
Elhaik, Mickael
Francois, Bidault
Borget, Valentin
Lassau, Nathalie
Khettab, Mohamed
Assi, Tarek
Diagnostics2021Journal Article, cited 8 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
BRAIN
Segmentation
Algorithm Development
Glioblastoma Multiforme (GBM)
Machine Learning
Radiomics
Glioblastoma (GBM) is the most common and aggressive primary brain tumor in adult patients, with a median survival of around one year. Prediction of survival outcomes in GBM patients could represent a huge step in treatment personalization. The objective of this study was to develop machine learning (ML) algorithms for survival prediction of GBM patients. We identified a radiomic signature on a training set composed of data from the 2019 BraTS challenge (210 patients) from MRI retrieved at diagnosis. Then, using this signature along with the age of the patients for training classification models, we obtained on the test sets AUCs of 0.85, 0.74 and 0.58 (0.92, 0.88 and 0.75 on the training sets) for survival at 9, 12 and 15 months, respectively. This signature was then validated on an independent cohort of 116 GBM patients with confirmed disease relapse for the prediction of patients surviving less or more than the median OS of 22 months. Our model achieved an AUC of 0.71 (0.65 on the training set). The Kaplan-Meier method showed a significant OS difference between groups (log-rank p = 0.05). These results suggest that radiomic signatures may improve survival outcome predictions in GBM, thus creating a solid clinical tool for tailoring therapy in this population.
Automated Screening for Abdominal Aortic Aneurysm in CT Scans under Clinical Conditions Using Deep Learning
Golla, A. K.
Tonnes, C.
Russ, T.
Bauer, D. F.
Froelich, M. F.
Diehl, S. J.
Schoenberg, S. O.
Keese, M.
Schad, L. R.
Zollner, F. G.
Rink, J. S.
Diagnostics (Basel)2021Journal Article, cited 0 times
Website
Pancreas-CT
Vasculature
abdominal aortic aneurysm
Computed Tomography (CT)
Deep Learning
Classification
deep convolutional neural network (DCNN)
Algorithm Development
Abdominal aortic aneurysms (AAA) may remain clinically silent until they enlarge and patients present with a potentially lethal rupture. This necessitates early detection and elective treatment. The goal of this study was to develop an easy-to-train algorithm which is capable of automated AAA screening in CT scans and can be applied to an intra-hospital environment. Three deep convolutional neural networks (ResNet, VGG-16 and AlexNet) were adapted for 3D classification and applied to a dataset consisting of 187 heterogeneous CT scans. The 3D ResNet outperformed both other networks. Across the five folds of the first training dataset it achieved an accuracy of 0.856 and an area under the curve (AUC) of 0.926. Subsequently, the algorithm's performance was verified on a second dataset containing 106 scans, where it ran fully automated and resulted in an accuracy of 0.953 and an AUC of 0.971. A layer-wise relevance propagation (LRP) made the decision process interpretable and showed that the network correctly focused on the aortic lumen. In conclusion, the deep learning-based screening proved to be robust and showed high performance even on a heterogeneous multi-center dataset. Integration into the hospital workflow and its effect on aneurysm management would be an exciting topic of future research.
VGG19 Network Assisted Joint Segmentation and Classification of Lung Nodules in CT Images
Khan, M. A.
Rajinikanth, V.
Satapathy, S. C.
Taniar, D.
Mohanty, J. R.
Tariq, U.
Damasevicius, R.
Diagnostics (Basel)2021Journal Article, cited 0 times
LIDC-IDRI
Lung-PET-CT-Dx
VGG-SegNet
deep learning
lung CT images
nodule detection
pre-trained VGG19
Pulmonary nodules are a sign of lung disease, and their early diagnosis and treatment are essential to cure the patient. This paper introduces a deep learning framework to support the automated detection of lung nodules in computed tomography (CT) images. The proposed framework employs VGG-SegNet-supported nodule mining and pre-trained DL-based classification to support automated lung nodule detection. The classification of lung CT images is implemented using the attained deep features, and these features are then serially concatenated with handcrafted features, such as the Grey Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP) and Pyramid Histogram of Oriented Gradients (PHOG), to enhance disease detection accuracy. The images used for the experiments were collected from the LIDC-IDRI and Lung-PET-CT-Dx datasets. The experimental results show that the VGG19 architecture with concatenated deep and handcrafted features can achieve an accuracy of 97.83% with the SVM-RBF classifier.
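The following sketch illustrates the concatenation of handcrafted texture descriptors with deep features before an SVM-RBF classifier, under stated assumptions: PHOG is omitted, the GLCM/LBP parameters are illustrative, and `deep_feats` is assumed to come from the VGG19 penultimate layer.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.svm import SVC

def handcrafted_features(img_u8):
    """GLCM and LBP descriptors for one 8-bit CT slice (PHOG omitted)."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([glcm_feats, lbp_hist])

# Serial concatenation with deep features, then SVM-RBF (names assumed):
# X = np.hstack([deep_feats, np.stack([handcrafted_features(im) for im in images])])
# clf = SVC(kernel="rbf").fit(X_train, y_train)
```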
Reliability as a Precondition for Trust-Segmentation Reliability Analysis of Radiomic Features Improves Survival Prediction
Muller-Franzes, G.
Nebelung, S.
Schock, J.
Haarburger, C.
Khader, F.
Pedersoli, F.
Schulze-Hagen, M.
Kuhl, C.
Truhn, D.
Diagnostics (Basel)2022Journal Article, cited 0 times
LIDC-IDRI
NSCLC-Radiomics
inter-rater reliability
Classification
neural network
overall survival
Radiomic features
robustness
BraTS
LiTS
KiTS
Machine learning results based on radiomic analysis are often not transferable. A potential reason for this is the variability of radiomic features due to varying human-made segmentations. Therefore, the aim of this study was to provide a comprehensive inter-reader reliability analysis of radiomic features in five clinical image datasets and to assess the association between inter-reader reliability and survival prediction. In this study, we analyzed 4598 tumor segmentations in both computed tomography and magnetic resonance imaging data. We used a neural network to generate 100 additional segmentation outlines for each tumor and performed a reliability analysis of the radiomic features. To prove clinical utility, we predicted patient survival based on all features and on the most reliable features. Survival prediction models for both computed tomography and magnetic resonance imaging datasets demonstrated less statistical spread and superior survival prediction when based on the most reliable features. Mean concordance indices were C(mean) = 0.58 [most reliable] vs. C(mean) = 0.56 [all] (p < 0.001, CT) and C(mean) = 0.58 vs. C(mean) = 0.57 (p = 0.23, MRI). Thus, preceding reliability analyses and selection of the most reliable radiomic features improve the underlying model's ability to predict patient survival across clinical imaging modalities and tumor entities.
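A simple way to operationalize "most reliable features" is sketched below; the coefficient-of-variation criterion and its threshold are assumptions standing in for the paper's reliability statistic, and the concordance index comes from the lifelines package.

```python
import numpy as np
from lifelines.utils import concordance_index

def most_reliable(features, threshold=0.1):
    # features: (n_outlines, n_features) radiomic values of one tumor across
    # the 100 network-generated segmentation outlines.
    # Keep features whose coefficient of variation across outlines is small.
    cv = features.std(axis=0) / (np.abs(features.mean(axis=0)) + 1e-9)
    return np.where(cv < threshold)[0]

# After fitting a survival model on the reliable subset (assumed done),
# the concordance index compares predicted risk to observed survival:
# c = concordance_index(event_times, -risk_scores, event_observed)
```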
Ensembles of Convolutional Neural Networks for Survival Time Estimation of High-Grade Glioma Patients from Multimodal MRI
Ben Ahmed, Kaoutar
Hall, Lawrence O.
Goldgof, Dmitry B.
Gatenby, Robert
Diagnostics2022Journal Article, cited 2 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BraTS 2019
Deep learning
Glioblastoma Multiforme (GBM)
Machine Learning
Magnetic Resonance Imaging (MRI)
Glioma is the most common type of primary malignant brain tumor. Accurate survival time prediction for glioma patients may positively impact treatment planning. In this paper, we develop an automatic survival time prediction tool for glioblastoma patients along with an effective solution to the limited availability of annotated medical imaging datasets. Ensembles of snapshots of three dimensional (3D) deep convolutional neural networks (CNN) are applied to Magnetic Resonance Image (MRI) data to predict survival time of high-grade glioma patients. Additionally, multi-sequence MRI images were used to enhance survival prediction performance. A novel way to leverage the potential of ensembles to overcome the limitation of labeled medical image availability is shown. This new classification method separates glioblastoma patients into long- and short-term survivors. The BraTS (Brain Tumor Image Segmentation) 2019 training dataset was used in this work. Each patient case consisted of three MRI sequences (T1CE, T2, and FLAIR). Our training set contained 163 cases while the test set included 46 cases. The best known prediction accuracy of 74% for this type of problem was achieved on the unseen test set.
Deep Learning and Domain-Specific Knowledge to Segment the Liver from Synthetic Dual Energy CT Iodine Scans
Mahmood, U.
Bates, D. D. B.
Erdi, Y. E.
Mannelli, L.
Corrias, G.
Kanan, C.
Diagnostics (Basel)2022Journal Article, cited 2 times
Website
CT-ORG
artificial intelligence
computed Tomography (CT)
deep learning
dual energy computed tomography
image-to-image translation
LIVER
Segmentation
We map single energy CT (SECT) scans to synthetic dual-energy CT (synth-DECT) material density iodine (MDI) scans using deep learning (DL) and demonstrate their value for liver segmentation. A 2D pix2pix (P2P) network was trained on 100 abdominal DECT scans to infer synth-DECT MDI scans from SECT scans. The source and target domain were paired with DECT monochromatic 70 keV and MDI scans. The trained P2P algorithm then transformed 140 public SECT scans to synth-DECT scans. We split 131 scans into 60% train, 20% tune, and 20% held-out test to train four existing liver segmentation frameworks. The remaining nine low-dose SECT scans tested system generalization. Segmentation accuracy was measured with the dice coefficient (DSC). The DSC per slice was computed to identify sources of error. With synth-DECT (and SECT) scans, an average DSC score of 0.93+/-0.06 (0.89+/-0.01) and 0.89+/-0.01 (0.81+/-0.02) was achieved on the held-out and generalization test sets. Synth-DECT-trained systems required less data to perform as well as SECT-trained systems. Low DSC scores were primarily observed around the scan margin or due to non-liver tissue or distortions within ground-truth annotations. In general, training with synth-DECT scans resulted in improved segmentation performance with less data.
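The per-slice DSC used above to localize error sources is straightforward to compute; a minimal NumPy sketch follows (array layout is an assumption).

```python
import numpy as np

def dice_per_slice(pred, gt, eps=1e-7):
    """Slice-wise Dice coefficient for binary masks of shape (n_slices, H, W)."""
    inter = (pred * gt).sum(axis=(1, 2))
    return (2 * inter + eps) / (pred.sum(axis=(1, 2)) + gt.sum(axis=(1, 2)) + eps)
```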
Glioma Tumors’ Classification Using Deep-Neural-Network-Based Features with SVM Classifier
Latif, Ghazanfar
Ben Brahim, Ghassen
Iskandar, D. N. F. Awang
Bashar, Abul
Alghazo, Jaafar
Diagnostics2022Journal Article, cited 0 times
BraTS 2018
Classification
Convolutional Neural Network (CNN)
Imaging features
BRAIN
Glioma
Support Vector Machine (SVM)
Algorithm Development
The complexity of brain tissue requires skillful technicians and expert medical doctors to manually analyze and diagnose Glioma brain tumors using multiple Magnetic Resonance (MR) images with multiple modalities. Unfortunately, manual diagnosis suffers from its lengthy process, as well as elevated cost. With this type of cancerous disease, early detection will increase the chances of suitable medical procedures leading to either a full recovery or the prolongation of the patient's life. This has increased the efforts to automate the detection and diagnosis process without human intervention, allowing the detection of multiple types of tumors from MR images. This research paper proposes a multi-class Glioma tumor classification technique using proposed deep-learning-based features with a Support Vector Machine (SVM) classifier. A deep convolutional neural network is used to extract features from the MR images, which are then fed to an SVM classifier. With the proposed technique, a 96.19% accuracy was achieved for the HGG Glioma type when considering the FLAIR modality, and 95.46% for the LGG Glioma tumor type when considering the T2 modality, for the classification of four Glioma classes (Edema, Necrosis, Enhancing, and Non-enhancing). The accuracies achieved using the proposed method were higher than those reported by similar methods in the extant literature using the same BraTS dataset. In addition, the accuracy results obtained in this work are better than those achieved by the GoogleNet and LeNet pre-trained models on the same dataset.
DeepTumor: Framework for Brain MR Image Classification, Segmentation and Tumor Detection
Latif, G.
Diagnostics (Basel)2022Journal Article, cited 1 times
Website
Convolutional Neural Network (CNN)
Deep learning
Glioma
Classification
Segmentation
Fuzzy C-means
The proper segmentation of the brain tumor from the image is important for both patients and medical personnel due to the sensitivity of the human brain. Surgical intervention requires doctors to be extremely cautious and precise when targeting the required portion of the brain. Furthermore, the segmentation process is also important for multi-class tumor classification. This work primarily concentrated on making contributions in three main areas of brain MR image processing for classification and segmentation: brain MR image classification, tumor region segmentation and tumor classification. A framework named DeepTumor is presented for multistage, multiclass Glioma tumor classification into four classes: Edema, Necrosis, Enhancing and Non-enhancing. For binary brain MR image classification (tumorous and non-tumorous), two deep convolutional neural network (CNN) models were proposed: a 9-layer model with a total of 217,954 trainable parameters and an improved 10-layer model with a total of 80,243 trainable parameters. In the second stage, an enhanced Fuzzy C-means (FCM)-based technique is proposed for tumor segmentation in brain MR images. In the final stage, a third enhanced CNN model with 11 hidden layers and a total of 241,624 trainable parameters was proposed for the classification of the segmented tumor region into four Glioma tumor classes. The experiments were performed using the BraTS MRI dataset. The experimental results of the proposed CNN models for binary classification and multiclass tumor classification were compared with those of existing CNN models such as LeNet, AlexNet and GoogleNet, as well as with the latest literature.
Phenotyping the Histopathological Subtypes of Non-Small-Cell Lung Carcinoma: How Beneficial Is Radiomics?
Pasini, G.
Stefano, A.
Russo, G.
Comelli, A.
Marinozzi, F.
Bini, F.
Diagnostics (Basel)2023Journal Article, cited 0 times
Website
NSCLC-Radiomics-Interobserver1
NSCLC-Radiomics
NSCLC Radiogenomics
TCGA-LUSC
TCGA-LUAD
NSCLC-Radiomics-Genomics
Computed Tomography (CT)
harmonization
machine learning
multicenter
non-small-cell lung carcinoma
phenotyping
Radiomics
The aim of this study was to investigate the usefulness of radiomics in the absence of well-defined standard guidelines. Specifically, we extracted radiomics features from multicenter computed tomography (CT) images to differentiate between the four histopathological subtypes of non-small-cell lung carcinoma (NSCLC). In addition, the results that varied with the radiomics model were compared. We investigated the presence of batch effects and the impact of feature harmonization on the models' performance. Moreover, the question of how the training dataset composition influenced the selected feature subsets and, consequently, the models' performance was also investigated. Through combining data from two publicly available datasets, this study involved a total of 152 squamous cell carcinoma (SCC), 106 large cell carcinoma (LCC), 150 adenocarcinoma (ADC), and 58 not otherwise specified (NOS) cases. Through the matRadiomics tool, an example of Image Biomarker Standardization Initiative (IBSI)-compliant software, 1781 radiomics features were extracted from each of the malignant lesions identified in the CT images. After batch analysis and feature harmonization, based on the ComBat tool integrated in matRadiomics, the datasets (harmonized and non-harmonized) were given as input to a machine learning modeling pipeline. The following steps were articulated: (i) training-set/test-set splitting (80/20); (ii) Kruskal-Wallis analysis and LASSO linear regression for feature selection; (iii) model training; (iv) model validation and hyperparameter optimization; and (v) model testing. Model optimization consisted of a 5-fold cross-validated Bayesian optimization, repeated ten times (inner loop). The whole pipeline was repeated 10 times (outer loop) with six different machine learning classification algorithms. Moreover, the stability of the feature selection was evaluated. Results showed that batch effects were present even when the voxels were resampled to an isotropic form, and that feature harmonization correctly removed them, even though the models' performances decreased. Moreover, the results showed that a low accuracy (61.41%) was reached when differentiating between the four subtypes, even though a high average area under the curve (AUC) was reached (0.831). Further, the NOS subtype was classified almost completely correctly (true positive rate ~90%). The accuracy increased (77.25%) when only the SCC and ADC subtypes were considered, with a high AUC (0.821) also obtained, although harmonization decreased the accuracy to 58%. Moreover, the features that contributed the most to the models' performance were those extracted from wavelet-decomposed and Laplacian of Gaussian (LoG)-filtered images, and they belonged to the texture feature class. In conclusion, we showed that our multicenter data were affected by batch effects, that these could significantly alter the models' performance, and that feature harmonization correctly removed them. Although wavelet features seemed to be the most informative, an absolute subset could not be identified since it changed depending on the training/testing splitting. Moreover, performance was influenced by the chosen dataset and by the machine learning methods, which could reach a high accuracy in binary classification tasks but could underperform in multiclass problems. It is, therefore, essential that the scientific community propose a more systematic radiomics approach, focusing on multicenter studies, with clear and solid guidelines to facilitate the translation of radiomics to clinical practice.
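The two-step feature selection named above (Kruskal-Wallis screening followed by LASSO) can be sketched as follows; the p-value threshold, the LASSO alpha, and fitting LASSO on integer-coded class labels are simplifying assumptions.

```python
import numpy as np
from scipy.stats import kruskal
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

def select_features(X, y, p_thresh=0.05, alpha=0.01):
    # X: (n_lesions, n_features) harmonized radiomic features
    # y: subtype labels encoded as integers (SCC/LCC/ADC/NOS)
    # Step 1: keep features that differ across subtypes (Kruskal-Wallis).
    keep = [j for j in range(X.shape[1])
            if kruskal(*[X[y == c, j] for c in np.unique(y)]).pvalue < p_thresh]
    # Step 2: sparsify the surviving set with LASSO linear regression.
    Xk = StandardScaler().fit_transform(X[:, keep])
    lasso = Lasso(alpha=alpha).fit(Xk, y)
    return [keep[j] for j in np.nonzero(lasso.coef_)[0]]
```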
A Bi-FPN-Based Encoder–Decoder Model for Lung Nodule Image Segmentation
Annavarapu, Chandra Sekhara Rao
Parisapogu, Samson Anosh Babu
Keetha, Nikhil Varma
Donta, Praveen Kumar
Rajita, Gurindapalli
Diagnostics2023Journal Article, cited 0 times
Website
QIN-LungCT-Seg
Segmentation
Algorithm Development
Computed Tomography (CT)
LUNA16 Challenge
Encoder-decoder
Early detection and analysis of lung cancer involve a precise and efficient lung nodule segmentation in computed tomography (CT) images. However, the anonymous shapes, visual features, and surroundings of the nodules as observed in CT images pose a challenging and critical problem for the robust segmentation of lung nodules. This article proposes a resource-efficient model architecture: an end-to-end deep learning approach for lung nodule segmentation. It incorporates a Bi-FPN (bidirectional feature network) between an encoder and a decoder architecture. Furthermore, it uses the Mish activation function and class weights of masks with the aim of enhancing the efficiency of the segmentation. The proposed model was extensively trained and evaluated on the publicly available LUNA-16 dataset, consisting of 1186 lung nodules. To increase the probability of assigning the correct class to each voxel in the mask, a weighted binary cross-entropy loss was used for each training sample. Moreover, to further evaluate its robustness, the proposed model was evaluated on the QIN Lung CT dataset. The results of the evaluation show that the proposed architecture outperforms existing deep learning models such as U-Net, with a Dice Similarity Coefficient of 82.82% and 81.66% on the two datasets.
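A class-weighted binary cross-entropy of the kind described can be written in one line of PyTorch; the weight value below is an assumption, not the paper's setting.

```python
import torch
import torch.nn.functional as F

def weighted_bce(logits, mask, pos_weight_value=10.0):
    # Nodule voxels are rare relative to background, so the positive class
    # is up-weighted to counter the class imbalance.
    pos_weight = torch.tensor([pos_weight_value], device=logits.device)
    return F.binary_cross_entropy_with_logits(logits, mask, pos_weight=pos_weight)
```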
Applying Deep Transfer Learning to Assess the Impact of Imaging Modalities on Colon Cancer Detection
Alhazmi, Wael
Turki, Turki
Diagnostics2023Journal Article, cited 1 times
Website
TCGA-COAD
ACRIN 6664
Deep Learning
colon cancer
The Impact of Edema on MRI Radiomics for the Prediction of Lung Metastasis in Soft Tissue Sarcoma
Casale, Roberto
De Angelis, Riccardo
Coquelet, Nicolas
Mokhtari, Ayoub
Bali, Maria Antonietta
Diagnostics2023Journal Article, cited 0 times
Soft-tissue-Sarcoma
INTRODUCTION: This study aimed to evaluate whether radiomic features extracted solely from the edema of soft tissue sarcomas (STS) could predict the occurrence of lung metastasis in comparison with features extracted solely from the tumoral mass.
MATERIALS AND METHODS: We retrospectively analyzed magnetic resonance imaging (MRI) scans of 32 STSs, including 14 with lung metastasis and 18 without. A segmentation of the tumor mass and edema was assessed for each MRI examination. A total of 107 radiomic features were extracted for each mass segmentation and 107 radiomic features for each edema segmentation. A two-step feature selection process was applied. Two predictive features for the development of lung metastasis were selected from the mass-related features, as well as two predictive features from the edema-related features. Two Random Forest models were created based on these selected features; 100 random subsampling runs were performed. Key performance metrics, including accuracy and area under the ROC curve (AUC), were calculated, and the resulting accuracies were compared.
RESULTS: The model based on mass-related features achieved a median accuracy of 0.83 and a median AUC of 0.88, while the model based on edema-related features achieved a median accuracy of 0.75 and a median AUC of 0.79. A statistical analysis comparing the accuracies of the two models revealed no significant difference.
CONCLUSION: Both models showed promise in predicting the occurrence of lung metastasis in soft tissue sarcomas. These findings suggest that radiomic analysis of edema features can provide valuable insights into the prediction of lung metastasis in soft tissue sarcomas.
Early Detection of Lung Nodules Using a Revolutionized Deep Learning Model
Srivastava, Durgesh
Srivastava, Santosh Kumar
Khan, Surbhi Bhatia
Singh, Hare Ram
Maakar, Sunil K.
Agarwal, Ambuj Kumar
Malibari, Areej A.
Albalawi, Eid
Diagnostics2023Journal Article, cited 0 times
LIDC-IDRI
According to the WHO (World Health Organization), lung cancer is the leading cause of cancer deaths globally. In 2020, more than 2.2 million people were diagnosed with lung cancer worldwide, making up 11.4% of all new cancer cases; lung cancer was also the biggest driver of cancer-related mortality worldwide in 2020, with an estimated 1.8 million fatalities. Statistics on lung cancer rates are not uniform among geographic areas, demographic subgroups, or age groups. The chance of an effective treatment outcome and the likelihood of patient survival can be greatly improved with the early identification of lung cancer. Lung cancer identification in medical images such as CT scans and MRIs is an area where deep learning (DL) algorithms have shown a lot of potential. This study uses a Hybridized Faster R-CNN (HFRCNN) to identify lung cancer at an early stage. Faster R-CNN has been put to good use in identifying critical entities in medical imagery, such as MRIs and CT scans. Many research investigations in recent years have examined the use of various techniques to detect lung nodules (possible indicators of lung cancer) in scanned images, which may help in the early identification of lung cancer. HFRCNN is a two-stage, region-based entity detector: it begins by generating a collection of proposed regions, which are subsequently classified and refined with the aid of a convolutional neural network (CNN). A distinct dataset was used in the model's training process, producing valuable outcomes. A detection accuracy of more than 97% was achieved with the suggested model, making it far more accurate than several previously announced methods.
Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs in CT Scans Using Q-Deformed Entropy and Deep Learning Features
Many health systems over the world have collapsed due to limited capacity and a dramatic increase of suspected COVID-19 cases. What has emerged is the need for finding an efficient, quick and accurate method to mitigate the overloading of radiologists’ efforts to diagnose the suspected cases. This study presents the combination of deep learning of extracted features with the Q-deformed entropy handcrafted features for discriminating between COVID-19 coronavirus, pneumonia and healthy computed tomography (CT) lung scans. In this study, pre-processing is used to reduce the effect of intensity variations between CT slices. Then histogram thresholding is used to isolate the background of the CT lung scan. Each CT lung scan undergoes a feature extraction which involves deep learning and a Q-deformed entropy algorithm. The obtained features are classified using a long short-term memory (LSTM) neural network classifier. Subsequently, combining all extracted features significantly improves the performance of the LSTM network to precisely discriminate between COVID-19, pneumonia and healthy cases. The maximum achieved accuracy for classifying the collected dataset comprising 321 patients is 99.68%.
Computational Complexity Reduction of Neural Networks of Brain Tumor Image Segmentation by Introducing Fermi–Dirac Correction Functions
Tai, Yen-Ling
Huang, Shin-Jhe
Chen, Chien-Chang
Lu, Henry Horng-Shing
Entropy2021Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BraTS 2019
dimensional fusion U-net
Image segmentation
Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively pursuing high-cost hardware would probably hinder the technical development of deep learning methods. In this article, we thus establish a new preprocessing method to reduce the computational complexity of neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically into a noninteracting physical system and then treat image voxels as particle-like clusters. Then, we reconstruct the Fermi-Dirac distribution to be a correction function for the normalization of the voxel intensity and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-net for algorithmic validation, and the proposed Fermi-Dirac correction function exhibited performance comparable to other employed preprocessing methods. Compared to the conventional z-score normalization function and the Gamma correction function, the proposed algorithm can save at least 38% of the computational time cost on a low-cost hardware architecture. Even though the correction function of global histogram equalization has the lowest computational time among the employed correction functions, the proposed Fermi-Dirac correction function exhibits better capabilities of image augmentation and segmentation.
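The Fermi-Dirac distribution f(I) = 1 / (exp((I - mu) / T) + 1) is easy to apply as an intensity correction; the sketch below is one plausible reading of the abstract, with the choice of mu (a chemical-potential-like threshold) and the temperature T as assumptions.

```python
import numpy as np

def fermi_dirac_correction(volume, mu=None, temperature=50.0):
    # Squash voxel intensities with the Fermi-Dirac distribution: values well
    # below mu map near 1, values well above mu are suppressed toward 0,
    # normalizing intensities while filtering insignificant bright components.
    v = volume.astype(np.float64)
    if mu is None:
        mu = np.percentile(v[v > 0], 95)  # bright-tissue reference level (assumed)
    return 1.0 / (np.exp((v - mu) / temperature) + 1.0)
```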
Instance Segmentation of Multiple Myeloma Cells Using Deep-Wise Data Augmentation and Mask R-CNN
Paing, May Phu
Sento, Adna
Bui, Toan Huy
Pintavirooj, Chuchart
Entropy2022Journal Article, cited 0 times
MiMM_SBILab
Multiple myeloma is a cancer of the bone marrow that can lead to dysfunction of the body and can be fatal for the patient. Manual microscopic analysis of abnormal plasma cells, also known as multiple myeloma cells, is one of the most commonly used diagnostic methods for multiple myeloma. However, as it is a manual process, it consumes too much effort and time. Besides, it has a higher chance of human error. This paper presents a computer-aided detection and segmentation of myeloma cells from microscopic images of bone marrow aspiration. Two major contributions are presented in this paper. First, different Mask R-CNN models using different images, including original microscopic images, contrast-enhanced images and stained cell images, are developed to perform instance segmentation of multiple myeloma cells. As a second contribution, deep-wise augmentation, a deep learning-based data augmentation method, is applied to increase the performance of the Mask R-CNN models. Based on the experimental findings, the Mask R-CNN model using contrast-enhanced images combined with the proposed deep-wise data augmentation provides superior performance compared to the other models. It achieves a mean precision of 0.9973, mean recall of 0.8631, and mean intersection over union (IOU) of 0.9062.
BU-Net: Brain Tumor Segmentation Using Modified U-Net Architecture
Rehman, Mobeen Ur
Cho, SeungBin
Kim, Jee Hong
Chong, Kil To
Electronics2020Journal Article, cited 0 times
BraTS 2017
BraTS 2018
Segmentation
Algorithm Development
BRAIN
Convolutional Neural Network (CNN)
The semantic segmentation of a brain tumor is of paramount importance for its treatment and prevention. Recently, researchers have proposed various neural network-based architectures to improve the performance of segmentation of brain tumor sub-regions. Brain tumor segmentation, being a challenging area of research, still requires performance improvements. This paper proposes a 2D image segmentation method, BU-Net, to contribute to brain tumor segmentation research. Residual extended skip (RES) and wide context (WC) blocks are used along with a customized loss function in the baseline U-Net architecture. The modifications contribute by finding more diverse features through an increased valid receptive field. The contextual information is extracted with the aggregated features to obtain better segmentation performance. The proposed BU-Net was evaluated on the high-grade glioma (HGG) datasets of the BraTS 2017 Challenge and the test datasets of the BraTS 2017 and 2018 Challenges. The three major labels to be segmented were tumor core (TC), whole tumor (WT), and enhancing core (EC). To compare the performance quantitatively, the Dice score was utilized. The proposed BU-Net outperformed the existing state-of-the-art techniques. The high-performing BU-Net can be a great contribution for researchers in the fields of bioinformatics and medicine.
RMU-Net: A Novel Residual Mobile U-Net Model for Brain Tumor Segmentation from MR Images
Saeed, M. U.
Ali, G.
Bin, W.
Almotiri, S. H.
AlGhamdi, M. A.
Nagra, A. A.
Masood, K.
ul Amin, R.
Electronics2021Journal Article, cited 0 times
Segmentation
BraTS 2020
BraTS 2019
BraTS 2018
Deep learning
U-Net
The most aggressive form of brain tumor is glioma, which, when high grade, leads to a short life expectancy. The early detection of glioma is important to save the lives of patients. MRI is a commonly used approach for brain tumor evaluation. However, the massive amount of data provided by MRI prevents manual segmentation in a reasonable time, restricting the use of accurate quantitative measurements in clinical practice. An automatic and reliable method is required that can segment tumors accurately. To achieve end-to-end brain tumor segmentation, a hybrid deep learning model, RMU-Net, is proposed. The architecture of MobileNetV2 is modified by adding residual blocks to learn in-depth features. This modified MobileNetV2 is used as the encoder in the proposed network, and the upsampling layers of U-Net are used as the decoder part. The proposed model has been validated on the BraTS 2020, BraTS 2019, and BraTS 2018 datasets. RMU-Net achieved Dice coefficient scores for WT, TC, and ET of 91.35%, 88.13%, and 83.26% on the BraTS 2020 dataset, 91.76%, 91.23%, and 83.19% on the BraTS 2019 dataset, and 90.80%, 86.75%, and 79.36% on the BraTS 2018 dataset, respectively. The performance of the proposed method surpasses previous methods with less computational cost and time.
The Efficacy of Shape Radiomics and Deep Features for Glioblastoma Survival Prediction by Deep Learning
Trinh, D. L.
Kim, S. H.
Yang, H. J.
Lee, G. S.
Electronics2022Journal Article, cited 0 times
BraTS 2018
Glioblastoma
Radiomic features
Glioblastoma (known as glioblastoma multiforme) is one of the most aggressive brain malignancies, accounting for 48% of all primary brain tumors. For that reason, overall survival prediction plays a vital role in diagnosis and treatment planning for glioblastoma patients. The main target of our research is to demonstrate the effectiveness of features extracted from the combination of the whole tumor and the enhancing tumor for overall survival prediction. In the proposed method, two kinds of features, shape radiomics and deep features, are utilized for this task. Firstly, optimal shape radiomics features, consisting of sphericity, maximum 3D diameter, and surface area, are selected using the Cox proportional hazards model. Secondly, deep features are extracted by ResNet18 directly from magnetic resonance images. Finally, the combination of selected shape features, deep features, and clinical information fits the regression model for overall survival prediction. The proposed method achieves promising results, obtaining 57.1% and 97,531.8 for the accuracy and mean squared error metrics, respectively. Furthermore, using the selected features, the result on the mean squared error metric is slightly better than that of the competing methods. The experiments were conducted on the Brain Tumor Segmentation Challenge (BraTS) 2018 validation dataset.
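Screening shape radiomics with a Cox proportional-hazards model, as in the paper's first step, can be sketched with the lifelines package; the toy data and column names below are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Toy stand-in data; in practice each row is one patient's shape radiomics
# plus survival time and event indicator.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sphericity": rng.uniform(0.3, 0.9, 100),
    "max_3d_diameter": rng.uniform(20, 80, 100),
    "surface_area": rng.uniform(1e3, 1e4, 100),
    "survival_days": rng.exponential(400, 100),
    "event": rng.integers(0, 2, 100),
})

# Fit the Cox model; features with significant coefficients are retained.
cph = CoxPHFitter()
cph.fit(df, duration_col="survival_days", event_col="event")
print(cph.summary[["coef", "p"]])
```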
LGMSU-Net: Local Features, Global Features, and Multi-Scale Features Fused the U-Shaped Network for Brain Tumor Segmentation
Pang, X. J.
Zhao, Z. J.
Wang, Y. L.
Li, F.
Chang, F. L.
Electronics2022Journal Article, cited 0 times
BraTS 2018
Segmentation
Deep learning
Radiomic features
Brain tumors are one of the deadliest cancers in the world. Due to the rapid development of deep learning for assisting doctors in diagnosis and treatment, researchers have conducted a great deal of work on brain tumor segmentation with good performance. However, most of these methods cannot fully combine multiple kinds of feature information, and their performance needs to be improved. This study developed a novel network fusing local features representing detailed information, global features representing global information, and multi-scale features enhancing the model's robustness, to fully extract the features of brain tumors and thereby assist clinicians in automatic tumor segmentation. We also propose a novel axial-deformable attention module for modeling global information to improve the performance of brain tumor segmentation. Moreover, positional embeddings were used to make the network training faster and improve the method's performance. Six metrics were used to evaluate the proposed method on the BraTS2018 dataset. Outstanding performance was obtained, with a Dice score, mean Intersection over Union, precision, recall, params, and inference time of 0.8735, 0.7756, 0.9477, 0.8769, 69.02 M, and 15.66 milliseconds, respectively, for the whole tumor. Extensive experiments demonstrated that the proposed network obtained excellent performance and was helpful in providing supplementary advice to clinicians.
LSW-Net: A Learning Scattering Wavelet Network for Brain Tumor and Retinal Image Segmentation
Liu, Ruihua
Nan, Haoyu
Zou, Yangyang
Xie, Ting
Ye, Zhiyong
Electronics2022Journal Article, cited 0 times
BraTS 2020
Algorithm Development
Segmentation
Wavelet
loss function
active contour
Convolutional network models have been widely used in image segmentation. However, there are many types of boundary contour features in medical images which seriously affect the stability and accuracy of image segmentation models, such as the ambiguity of tumors, the variability of lesions, and the weak boundaries of fine blood vessels. In this paper, in order to solve these problems, we first introduce the dual-tree complex wavelet scattering transform module, and then innovatively propose a learning scattering wavelet network model. In addition, a new improved active contour loss function is constructed to deal with complex segmentation. Finally, the equilibrium coefficient of our model is discussed. Experiments on the BraTS2020 dataset show that the LSW-Net model has improved the Dice coefficient, accuracy, and sensitivity of the classic FCN, SegNet, and At-Unet models by at least 3.51%, 2.11%, and 0.46%, respectively. In addition, the LSW-Net model still has an advantage in the average measure of Dice coefficients compared with some advanced segmentation models. Experiments on the DRIVE dataset prove that our model outperforms the other 14 algorithms in both the Dice coefficient and specificity measures. In particular, the sensitivity of our model provides a 3.39% improvement compared with the Unet model, and the model's advantage is evident.
Handcrafted Deep-Feature-Based Brain Tumor Detection and Classification Using MRI Images
Mohan, P.
Easwaramoorthy, S. V.
Subramani, N.
Subramanian, M.
Meckanzi, S.
Electronics2022Journal Article, cited 0 times
BraTS 2015
ResNet18
AlexNet
GoogLeNet
Radiomic features
Optimization
model
Adaptive fuzzy filter
Magnetic Resonance Imaging (MRI)
An abnormal growth of cells in the brain, often known as a brain tumor, has the potential to develop into cancer. Carcinogenesis of glial cells in the brain and spinal cord is the root cause of gliomas, which are the most prevalent type of primary brain tumor. After receiving a diagnosis of glioblastoma, the average patient is anticipated to have a survival time of less than 14 months. Magnetic resonance imaging (MRI) is a well-known non-invasive imaging technology that can detect brain tumors and gives a variety of tissue contrasts in each imaging modality. Until recently, only neuroradiologists, who have specialized training in this area, were capable of performing the tedious and time-consuming task of manually segmenting and analyzing structural MRI scans of brain tumors. The development of comprehensive and automatic segmentation methods for brain tumors will have a significant impact on both the diagnosis and treatment of brain tumors. It is now possible to recognize tumors in images because of developments in computer-aided diagnosis (CAD), machine learning (ML), and deep learning (DL) approaches. The purpose of this study is to develop, through the application of MRI data, an automated model for the detection and classification of brain tumors based on deep learning (DLBTDC-MRI). Using the DLBTDC-MRI method, brain tumors can be detected and characterized at various stages of their progression. Preprocessing, segmentation, feature extraction, and classification are all included in the provided DLBTDC-MRI methodology. The use of adaptive fuzzy filtering (AFF) as a preprocessing technique for images results in less noise and higher-quality MRI scans. A method referred to as "chicken swarm optimization" (CSO), which utilizes Tsallis-entropy-based image segmentation to locate parts of the brain that have been injured, was used to segment the MRI images. In addition to this, a Residual Network (ResNet) that combines handcrafted features with deep features was used to produce a meaningful collection of feature vectors. Finally, a classifier developed by combining DLBTDC-MRI and CSO can be used to diagnose brain tumors. To assess the enhanced performance of brain tumor categorization, a large number of simulations were run on the BRATS 2015 dataset. Based on the findings of these trials, the DLBTDC-MRI method appears superior to other contemporary procedures in many respects.
Customized Deep Learning Classifier for Detection of Acute Lymphoblastic Leukemia Using Blood Smear Images
Sampathila, Niranjana
Chadaga, Krishnaraj
Goswami, Neelankit
Chadaga, Rajagopala P
Pandya, Mayur
Prabhu, Srikanth
Bairy, Muralidhar G
Katta, Swathi S
Bhat, Devadas
Upadya, Sudhakara P
Healthcare2022Journal Article, cited 0 times
Website
C_NMC_2019
Deep Learning
Leukemia
In Silico Approach for the Definition of radiomiRNomic Signatures for Breast Cancer Differential Diagnosis
Gallivanone, F.
Cava, C.
Corsi, F.
Bertoli, G.
Castiglioni, I.
Int J Mol Sci2019Journal Article, cited 2 times
Website
TCGA-BRCA
Radiogenomics
Radiomics
Personalized medicine relies on the integration and consideration of specific characteristics of the patient, such as tumor phenotypic and genotypic profiling. BACKGROUND: Radiogenomics aims to integrate phenotypes from tumor imaging data with genomic data to discover the genetic mechanisms underlying tumor development and phenotype. METHODS: We describe a computational approach that correlates phenotypes from magnetic resonance imaging (MRI) of breast cancer (BC) lesions with microRNAs (miRNAs), mRNAs, and regulatory networks, developing a radiomiRNomic map. We validated our approach on the relationships between MRI and miRNA expression data derived from BC patients. We obtained 16 radiomic features quantifying the tumor phenotype. We integrated the features with miRNAs regulating a network of pathways specific to a distinct BC subtype. RESULTS: We found six miRNAs correlated with imaging features in the Luminal A subtype (miR-1537, -205, -335, -337, -452, and -99a), seven miRNAs (miR-142, -155, -190, -190b, -1910, -3617, and -429) in HER2+, and two miRNAs (miR-135b and -365-2) in the Basal subtype. We demonstrate that the combination of correlated miRNAs and imaging features has better classification power for Luminal A versus the different BC subtypes than using miRNAs or imaging alone. CONCLUSION: Our computational approach could be used to identify new radiomiRNomic profiles of multi-omics biomarkers for BC differential diagnosis and prognosis.
Estimation of an Image Biomarker for Distant Recurrence Prediction in NSCLC Using Proliferation-Related Genes
Ju, H. M.
Kim, B. C.
Lim, I.
Byun, B. H.
Woo, S. K.
Int J Mol Sci2023Journal Article, cited 0 times
Website
This study aimed to identify a distant-recurrence image biomarker in NSCLC by investigating correlations between heterogeneity functional gene expression and fluorine-18-2-fluoro-2-deoxy-D-glucose positron emission tomography ((18)F-FDG PET) image features of NSCLC patients. RNA-sequencing data and (18)F-FDG PET images of 53 patients with NSCLC (19 with distant recurrence and 34 without recurrence) from The Cancer Imaging Archive and The Cancer Genome Atlas Program databases were used in a combined analysis. Weighted correlation network analysis was performed to identify gene groups related to distant recurrence. Genes were selected for functions related to distant recurrence. In total, 47 image features were extracted from PET images as radiomics. The relationship between gene expression and image features was estimated using a hypergeometric distribution test with the Pearson correlation method. The distant recurrence prediction model was validated by a random forest (RF) algorithm using image texture features and related gene expression. In total, 37 gene modules were identified by gene-expression pattern with weighted gene co-expression network analysis. The gene modules with the highest significance were selected (p-value < 0.05). Nine genes with high protein-protein interaction and area under the curve (AUC) were identified as hub genes involved in the proliferation function, which plays an important role in distant recurrence of cancer. Four image features (GLRLM_SRHGE, GLRLM_HGRE, SUVmean, and GLZLM_GLNU) and six genes were identified to be correlated (p-value < 0.1). AUCs (accuracy: 0.59, AUC: 0.729) from the 47 image texture features and AUCs (accuracy: 0.767, AUC: 0.808) from hub genes were calculated using the RF algorithm. AUCs (accuracy: 0.783, AUC: 0.912) from the four image texture features and six correlated genes and AUCs (accuracy: 0.738, AUC: 0.779) from only the four image texture features were calculated using the RF algorithm. The four image texture features validated by heterogeneity group gene expression were found to be related to cancer heterogeneity. The identification of these image texture features demonstrated that advanced prediction of NSCLC distant recurrence is possible using the image biomarker.
Graph Neural Network Model for Prediction of Non-Small Cell Lung Cancer Lymph Node Metastasis Using Protein-Protein Interaction Network and (18)F-FDG PET/CT Radiomics
Ju, H.
Kim, K.
Kim, B. I.
Woo, S. K.
Int J Mol Sci2024Journal Article, cited 2 times
Website
NSCLC Radiogenomics
Humans
Carcinoma, Non-Small-Cell Lung
Protein Interaction Maps
Lymphatic Metastasis
Positron Emission Tomography Computed Tomography
Fluorodeoxyglucose F18
Radiomics
Lung Neoplasms
Neural Networks, Computer
18F-FDG PET
CT
GNN
NSCLC
protein-protein interaction
Radiogenomics
The image texture features obtained from (18)F-fluorodeoxyglucose positron emission tomography/computed tomography ((18)F-FDG PET/CT) images of non-small cell lung cancer (NSCLC) have revealed tumor heterogeneity. A combination of genomic data and radiomics may improve the prediction of tumor prognosis. This study aimed to predict NSCLC metastasis using a graph neural network (GNN) obtained by combining a protein-protein interaction (PPI) network based on gene expression data with image texture features. (18)F-FDG PET/CT images and RNA sequencing data of 93 patients with NSCLC were acquired from The Cancer Imaging Archive. Image texture features were extracted from (18)F-FDG PET/CT images, and the area under the receiver operating characteristic curve (AUC) of each image feature was calculated. Weighted gene co-expression network analysis (WGCNA) was used to construct gene modules, followed by functional enrichment analysis and identification of differentially expressed genes. The PPI of each gene module and of genes belonging to metastasis-related processes was converted via a graph attention network. Image and genomic features were concatenated. The GNN model using PPI modules from WGCNA and metastasis-related functions combined with image texture features was evaluated quantitatively. Fifty-five image texture features were extracted from (18)F-FDG PET/CT, and radiomic features were selected based on AUC (n = 10). Eighty-six gene modules were clustered by WGCNA. Genes (n = 19) enriched in the metastasis-related pathways were filtered using DEG analysis. The accuracy of the PPI network, derived from WGCNA modules and metastasis-related genes, improved from 0.4795 to 0.5830 (p < 2.75 x 10(-12)). Integrating the PPI of four metastasis-related genes with (18)F-FDG PET/CT image features in a GNN model elevated its accuracy to 0.8545 (95% CI = 0.8401-0.8689, p-value < 0.02), compared with the model without image features. This model demonstrated significant enhancement compared to the model using PPI and (18)F-FDG PET/CT derived from WGCNA (p-value < 0.02), underscoring the critical role of metastasis-related genes in the prediction model. The enhanced predictive capability of the lymph node metastasis prediction GNN model for NSCLC, achieved through the integration of comprehensive image features with genomic data, demonstrates promise for clinical implementation.
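The overall architecture (graph attention over the PPI network, pooled per patient, concatenated with radiomic features) can be sketched with PyTorch Geometric as follows; the layer dimensions and pooling choice are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv, global_mean_pool

class PPIGraphClassifier(nn.Module):
    """Graph attention over a PPI graph whose node features are gene
    expression values, fused with PET/CT radiomics for metastasis prediction."""
    def __init__(self, n_gene_feats=1, n_radiomic=10, hidden=32):
        super().__init__()
        self.gat1 = GATConv(n_gene_feats, hidden, heads=4, concat=False)
        self.gat2 = GATConv(hidden, hidden, heads=1)
        self.head = nn.Linear(hidden + n_radiomic, 2)

    def forward(self, x, edge_index, batch, radiomics):
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        g = global_mean_pool(h, batch)               # one embedding per patient graph
        return self.head(torch.cat([g, radiomics], dim=1))
```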
The Current Role of Image Compression Standards in Medical Imaging
Liu, Feng
Hernandez-Cabronero, Miguel
Sanchez, Victor
Marcellin, Michael W
Bilgin, Ali
Information2017Journal Article, cited 4 times
Website
LIDC-IDRI
TCGA-BRCA
TCGA-GBM
CT-COLONOGRAPHY
image compression
Image Super-Resolution Algorithm Based on an Improved Sparse Autoencoder
Huang, Detian
Huang, Weiqin
Yuan, Zhenguo
Lin, Yanming
Zhang, Jian
Zheng, Lixin
Information2018Journal Article, cited 0 times
Website
Lung Cancer
Algorithm Development
Image resampling
Due to the limitations of the resolution of the imaging system and the influence of scene changes and other factors, sometimes only low-resolution images can be acquired, which cannot satisfy the practical application’s requirements. To improve the quality of low-resolution images, a novel super-resolution algorithm based on an improved sparse autoencoder is proposed. Firstly, in the training set preprocessing stage, the high- and low-resolution image training sets are constructed, respectively, by using high-frequency information of the training samples as the characterization, and then the zero-phase component analysis whitening technique is utilized to decorrelate the formed joint training set to reduce its redundancy. Secondly, a constructed sparse regularization term is added to the cost function of the traditional sparse autoencoder to further strengthen the sparseness constraint on the hidden layer. Finally, in the dictionary learning stage, the improved sparse autoencoder is adopted to achieve unsupervised dictionary learning to improve the accuracy and stability of the dictionary. Experimental results validate that the proposed algorithm outperforms the existing algorithms both in terms of the subjective visual perception and the objective evaluation indices, including the peak signal-to-noise ratio and the structural similarity measure.
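The zero-phase component analysis (ZCA) whitening step described above decorrelates the joint training set before dictionary learning; a minimal NumPy sketch follows (the epsilon regularizer is a common assumption).

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    # X: (n_samples, n_dims) joint high/low-resolution patch matrix.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T   # ZCA whitening transform
    return Xc @ W, W
```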
Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence
Owais, Muhammad
Arsalan, Muhammad
Choi, Jiho
Park, Kang Ryoung
J Clin Med2019Journal Article, cited 0 times
Website
Computer Aided Diagnosis (CADx)
Content based image retrieval (CBIR)
Classification
PROSTATE
BLADDER
KIDNEY
COLON
BREAST
BRAIN
CHEST
ESOPHAGUS
OVARY
RECTUM
STOMACH
HEAD AND NECK
Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. However, this problem can be resolved by exploring similar cases in a previous medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. Recently, a medical doctor usually refers to various types of imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound of various organs, for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for the CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance on a massive collection of multimodal databases. Although there are a few previous studies on the use of deep features for classification, the number of classes is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various types of imaging modalities using an artificial intelligence technique, named the enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1-score of our method are 81.51% and 82.42%, respectively, which are higher than those of the previous CBMIR method (accuracy of 69.71% and F1-score of 69.63%).
Enhanced Region Growing for Brain Tumor MR Image Segmentation
Biratu, E. S.
Schwenker, F.
Debelee, T. G.
Kebede, S. R.
Negera, W. G.
Molla, H. T.
J Imaging2021Journal Article, cited 30 times
Website
BRATS 2015
U-Net
brain MRI image
region growing
skull stripping
tumor region
A brain tumor is one of the foremost reasons for the rise in mortality among children and adults. A brain tumor is a mass of tissue that propagates out of control of the normal forces that regulate growth inside the brain. A brain tumor appears when one type of cell changes from its normal characteristics and grows and multiplies abnormally. The unusual growth of cells within the brain or inside the skull, which can be cancerous or non-cancerous, has been the cause of death of adults in developed countries and children in developing countries like Ethiopia. Previous studies have shown that the region-growing algorithm initializes the seed point either manually or semi-manually, which consequently affects the segmentation result. In this paper, however, we propose an enhanced region-growing algorithm for automatic seed point initialization. The proposed approach's performance was compared with that of state-of-the-art deep learning algorithms using a common dataset, BRATS2015. In the proposed approach, we applied a thresholding technique to strip the skull from each input brain image. After the skull is stripped, the brain image is divided into 8 blocks. Then, for each block, we computed the mean intensities, from which the five blocks with maximum mean intensities were selected out of the eight blocks. Next, the five maximum mean intensities were used as seed points for the region-growing algorithm separately, obtaining five different regions of interest (ROIs) for each skull-stripped input brain image. The five ROIs generated using the proposed approach were evaluated using the dice similarity score (DSS), intersection over union (IoU), and accuracy (Acc) against the ground truth (GT), and the best region of interest was selected as the final ROI. Finally, the final ROI was compared with different state-of-the-art deep learning and region-based segmentation algorithms in terms of DSS. Our proposed approach was validated in three different experimental setups. In the first experimental setup, 15 randomly selected brain images were used for testing, and the approach achieved a DSS value of 0.89. In the second and third experimental setups, the proposed approach scored DSS values of 0.90 and 0.80 for 12 randomly selected and 800 brain images, respectively. The average DSS value for the three experimental setups was 0.86.
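The block-based seed initialization described above is simple to sketch; the 2x4 grid layout and returning block centers as seed points are assumptions about details the abstract leaves open.

```python
import numpy as np

def init_seeds(brain, n_seeds=5):
    # brain: 2D skull-stripped slice. Split into 8 blocks, rank blocks by
    # mean intensity, and return the centers of the top n_seeds blocks as
    # seed points for the region-growing algorithm.
    h, w = brain.shape
    rows, cols = 2, 4                      # 2x4 grid -> 8 blocks (assumed layout)
    bh, bw = h // rows, w // cols
    blocks = []
    for r in range(rows):
        for c in range(cols):
            patch = brain[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            center = (r * bh + bh // 2, c * bw + bw // 2)
            blocks.append((patch.mean(), center))
    blocks.sort(key=lambda t: t[0], reverse=True)
    return [center for _, center in blocks[:n_seeds]]
```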
Brain Tumor Segmentation Based on Deep Learning's Feature Representation
Aboussaleh, Ilyasse
Riffi, Jamal
Mahraz, Adnane Mohamed
Tairi, Hamid
Journal of Imaging2021Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Algorithm Development
BraTS 2017
Challenge
Classification
A brain tumor is considered one of the most serious causes of death in the world, so it is very important to detect it as early as possible. Many approaches have been proposed to predict and segment tumors; however, they suffer from different problems, such as the need for specialist intervention, long run-times, and the choice of an appropriate feature extractor. To address these issues, we proposed an approach based on a convolutional neural network architecture that predicts and segments a cerebral tumor simultaneously. The proposal is divided into two phases. First, to avoid the use of labeled images, which implies subjective intervention by a specialist, we used a simple binary annotation that reflects whether the tumor exists or not. Second, the prepared image data were fed into our deep learning model, from which the final classification was obtained; if the classification indicated the existence of a tumor, the brain tumor was segmented based on the feature representations generated by the convolutional neural network architecture. The proposed method was trained on the BraTS 2017 dataset with different types of gliomas. The achieved results show the performance of the proposed approach in terms of accuracy, precision, recall, and Dice similarity coefficient. Our model showed an accuracy of 91% in tumor classification and a Dice similarity coefficient of 82.35% in tumor segmentation.
Brain Tumor Segmentation Using Deep Capsule Network and Latent-Dynamic Conditional Random Fields
Elmezain, M.
Mahmoud, A.
Mosa, D. T.
Said, W.
J Imaging2022Journal Article, cited 4 times
Website
BraTS 2015
BraTS 2021
Algorithm Development
Segmentation
Because of the large variability in brain tumors, automating segmentation remains a difficult task. We propose an automated method to segment brain tumors by integrating a deep capsule network (CapsNet) and a latent-dynamic conditional random field (LDCRF). The method consists of three main processes to segment the brain tumor: pre-processing, segmentation, and post-processing. In pre-processing, the N4ITK process corrects each MR image's bias field before the intensity is normalized. After that, image patches are used to train CapsNet during the segmentation process. Then, with the CapsNet parameters determined, we employ image slices from an axial view to learn the LDCRF-CapsNet. Finally, we use a simple thresholding method to correct the labels of some pixels and remove small 3D-connected regions from the segmentation outcomes. On the BRATS 2015 and BRATS 2021 datasets, we trained and evaluated our method and found that it outperforms and can compete with state-of-the-art methods under comparable conditions.
Automatic Classification of Simulated Breast Tomosynthesis Whole Images for the Presence of Microcalcification Clusters Using Deep CNNs
Microcalcification clusters (MCs) are among the most important biomarkers for breast cancer, especially in cases of nonpalpable lesions. The vast majority of deep learning studies on digital breast tomosynthesis (DBT) are focused on detecting and classifying lesions, especially soft-tissue lesions, in small regions of interest previously selected. Only about 25% of the studies are specific to MCs, and all of them are based on the classification of small preselected regions. Classifying the whole image according to the presence or absence of MCs is a difficult task due to the size of MCs and all the information present in an entire image. A completely automatic and direct classification, which receives the entire image, without prior identification of any regions, is crucial for the usefulness of these techniques in a real clinical and screening environment. The main purpose of this work is to implement and evaluate the performance of convolutional neural networks (CNNs) regarding an automatic classification of a complete DBT image for the presence or absence of MCs (without any prior identification of regions). In this work, four popular deep CNNs are trained and compared with a new architecture proposed by us. The main task of these trainings was the classification of DBT cases by absence or presence of MCs. A public database of realistic simulated data was used, and the whole DBT image was taken into account as input. DBT data were considered without and with preprocessing (to study the impact of noise reduction and contrast enhancement methods on the evaluation of MCs with CNNs). The area under the receiver operating characteristic curve (AUC) was used to evaluate the performance. Very promising results were achieved with a maximum AUC of 94.19% for the GoogLeNet. The second-best AUC value was obtained with a new implemented network, CNN-a, with 91.17%. This CNN had the particularity of also being the fastest, thus becoming a very interesting model to be considered in other studies. With this work, encouraging outcomes were achieved in this regard, obtaining similar results to other studies for the detection of larger lesions such as masses. Moreover, given the difficulty of visualizing the MCs, which are often spread over several slices, this work may have an important impact on the clinical analysis of DBT images.
A Multimodal Ensemble Driven by Multiobjective Optimisation to Predict Overall Survival in Non-Small-Cell Lung Cancer
Caruso, C. M.
Guarrasi, V.
Cordelli, E.
Sicilia, R.
Gentile, S.
Messina, L.
Fiore, M.
Piccolo, C.
Beomonte Zobel, B.
Iannello, G.
Ramella, S.
Soda, P.
J Imaging2022Journal Article, cited 0 times
NSCLC Radiogenomics
NSCLC-Radiomics
Convolutional Neural Network (CNN)
medical imaging
multiexpert systems
multimodal deep learning
oncology
optimisation
precision medicine
tabular data
Training
Lung cancer accounts for more deaths worldwide than any other cancer disease. In order to provide patients with the most effective treatment for these aggressive tumours, multimodal learning is emerging as a new and promising field of research that aims to extract complementary information from the data of different modalities for prognostic and predictive purposes. This knowledge could be used to optimise current treatments and maximise their effectiveness. To predict overall survival, in this work, we investigate the use of multimodal learning on the CLARO dataset, which includes CT images and clinical data collected from a cohort of non-small-cell lung cancer patients. Our method allows the identification of the optimal set of classifiers to be included in the ensemble in a late fusion approach. Specifically, after training unimodal models on each modality, it selects the best ensemble by solving a multiobjective optimisation problem that maximises both the recognition performance and the diversity of the predictions. In the ensemble, the labels of each sample are assigned using the majority voting rule. As further validation, we show that the proposed ensemble outperforms the models learning a single modality, obtaining state-of-the-art results on the task at hand.
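The late-fusion selection step can be sketched as follows, under the assumption that unimodal classifiers have already produced label predictions on a validation set; the weighted-sum scalarisation of accuracy and pairwise diversity is a simplification of the paper's multiobjective optimisation, and all function names are illustrative.
```python
# Ensemble selection by exhaustive subset search over validation predictions.
# all_preds: np.ndarray of shape (n_models, n_val) with integer labels.
from itertools import combinations
import numpy as np

def majority_vote(preds):
    """Majority-vote label per sample over the model axis."""
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, preds)

def diversity(preds):
    """Mean pairwise disagreement rate among the models."""
    pairs = list(combinations(range(len(preds)), 2))
    return np.mean([(preds[i] != preds[j]).mean() for i, j in pairs]) if pairs else 0.0

def select_ensemble(all_preds, y_val, alpha=0.7):
    """Keep the subset maximising alpha*accuracy + (1-alpha)*diversity."""
    best, best_score = None, -np.inf
    for r in range(2, len(all_preds) + 1):
        for subset in combinations(range(len(all_preds)), r):
            p = all_preds[list(subset)]
            score = (alpha * (majority_vote(p) == y_val).mean()
                     + (1 - alpha) * diversity(p))
            if score > best_score:
                best, best_score = subset, score
    return best
```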
Fully Automated Segmentation Models of Supratentorial Meningiomas Assisted by Inclusion of Normal Brain Images
Hwang, Kihwan
Park, Juntae
Kwon, Young-Jae
Cho, Se Jin
Choi, Byung Se
Kim, Jiwon
Kim, Eunchong
Jang, Jongha
Ahn, Kwang-Sung
Kim, Sangsoo
Kim, Chae-Yong
Journal of Imaging2022Journal Article, cited 0 times
BraTS 2019
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Automatic segmentation
U-net
Deep learning
Magnetic Resonance Imaging (MRI)
meningioma
To train an automatic brain tumor segmentation model, a large amount of data is required. In this paper, we proposed a strategy to overcome the limited amount of clinically collected magnetic resonance image (MRI) data regarding meningiomas by pre-training a model using a larger public dataset of MRIs of gliomas and augmenting our meningioma training set with normal brain MRIs. Pre-operative MRIs of 91 meningioma patients (171 MRIs) and 10 non-meningioma patients (normal brains) were collected between 2016 and 2019. Three-dimensional (3D) U-Net was used as the base architecture. The model was pre-trained with BraTS 2019 data, then fine-tuned with our datasets consisting of 154 meningioma MRIs and 10 normal brain MRIs. To increase the utility of the normal brain MRIs, a novel balanced Dice loss (BDL) function was used instead of the conventional soft Dice loss function. The model performance was evaluated using the Dice scores across the remaining 17 meningioma MRIs. The segmentation performance of the model was sequentially improved via the pre-training and inclusion of normal brain images. The Dice scores improved from 0.72 to 0.76 when the model was pre-trained. The inclusion of normal brain MRIs to fine-tune the model improved the Dice score; it increased to 0.79. When employing BDL as the loss function, the Dice score reached 0.84. The proposed learning strategy for U-net showed potential for use in segmenting meningioma lesions.
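The abstract does not spell out the balanced Dice loss, so only the conventional soft Dice loss it modifies is sketched here (PyTorch, binary segmentation assumed):
```python
# Conventional soft Dice loss; the paper's balanced variant (BDL) builds on this.
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """pred: (N, 1, D, H, W) probabilities; target: same shape, binary mask."""
    dims = tuple(range(2, pred.ndim))                # sum over spatial dims
    intersection = (pred * target).sum(dims)
    denom = pred.sum(dims) + target.sum(dims)
    dice = (2 * intersection + eps) / (denom + eps)  # per-sample Dice
    return 1 - dice.mean()
```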
XGBoost Improves Classification of MGMT Promoter Methylation Status in IDH1 Wildtype Glioblastoma
Le, N. Q. K.
Do, D. T.
Chiu, F. Y.
Yapp, E. K. Y.
Yeh, H. Y.
Chen, C. Y.
J Pers Med2020Journal Article, cited 1 times
Website
TCGA-GBM
Radiogenomics
Classification
Approximately 96% of patients with glioblastomas (GBM) have IDH1 wildtype GBMs, characterized by extremely poor prognosis, partly due to resistance to standard temozolomide treatment. O6-Methylguanine-DNA methyltransferase (MGMT) promoter methylation status is a crucial prognostic biomarker for alkylating chemotherapy resistance in patients with GBM. However, MGMT methylation status identification methods, in which the tumor tissue is often undersampled, are time consuming and expensive. Currently, presurgical noninvasive imaging methods are used to identify biomarkers to predict MGMT methylation status. We evaluated a novel radiomics-based eXtreme Gradient Boosting (XGBoost) model to identify MGMT promoter methylation status in patients with IDH1 wildtype GBM. This retrospective study enrolled 53 patients with pathologically proven GBM and tested MGMT methylation and IDH1 status. Radiomics features were extracted from multimodality MRI and tested by F-score analysis to identify important features to improve our model. We identified nine radiomics features that reached an area under the curve of 0.896, which outperformed other classifiers reported previously. These features could be important biomarkers for identifying MGMT methylation status in IDH1 wildtype GBM. The combination of radiomics feature extraction and F-score feature selection significantly improved the performance of the XGBoost model, which may have implications for patient stratification and therapeutic strategy in GBM.
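A plausible skeleton of this pipeline, with scikit-learn's ANOVA F-test standing in for the paper's F-score analysis and placeholder XGBoost hyperparameters:
```python
# Univariate F-score feature ranking feeding an XGBoost classifier.
import xgboost as xgb
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def mgmt_classifier(n_features=9):
    """Top-k F-score radiomics features -> XGBoost; settings are placeholders."""
    return make_pipeline(
        SelectKBest(f_classif, k=n_features),
        xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss"),
    )

# Usage: X is (n_patients, n_radiomics_features), y is MGMT methylation status.
# auc = cross_val_score(mgmt_classifier(), X, y, scoring="roc_auc", cv=5).mean()
```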
Development of a Convolutional Neural Network Based Skull Segmentation in MRI Using Standard Tesselation Language Models
Dalvit Carvalho da Silva, R.
Jenkyn, T. R.
Carranza, V. A.
J Pers Med2021Journal Article, cited 0 times
Website
CPTAC-GBM
HNSCC
TCGA-HNSC
ACRIN-FMISO-Brain
ACRIN 6684
Computed Tomography (CT)
Magnetic Resonance Imaging (MRI)
Convolutional Neural Network (CNN)
Segmentation
Image Registration
Segmentation is crucial in medical imaging analysis to help extract regions of interest (ROI) from different imaging modalities. The aim of this study is to develop and train a 3D convolutional neural network (CNN) for skull segmentation in magnetic resonance imaging (MRI). Fifty-eight gold standard volumetric labels were created from computed tomography (CT) scans in standard tessellation language (STL) models. These STL models were converted into matrices and overlapped on the 58 corresponding MR images to create the MRI gold standard labels. The CNN was trained with these 58 MR images and achieved a mean ± standard deviation (SD) Dice similarity coefficient (DSC) of 0.7300 ± 0.04. A further investigation was carried out in which the brain region was removed from the image with the help of a 3D CNN and manual corrections, using only MR images. This new dataset, without the brain, was presented to the previous CNN, which reached a new mean ± SD DSC of 0.7826 ± 0.03. This paper provides a framework for segmenting the skull using a CNN and STL models, as the 3D CNN was able to segment the skull with a certain precision.
Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization
Esmaeili, Morteza
Vettukattil, Riyas
Banitalebi, Hasan
Krogh, Nina R
Geitung, Jonn Terje
J Pers Med2021Journal Article, cited 0 times
Website
TCGA-LGG
BraTS-TCGA-GBM
TCGA-GBM
black box CNN
Magnetic Resonance Imaging (MRI)
explainable AI
gliomas
machine learning
tumor localization
Primary malignancies in adult brains are globally fatal. Computer vision, especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have provided scores of unprecedented accuracy in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, perform as a black box, concealing the rational interpretations that are an essential step towards translating AI imaging tools into clinical routine. Explainable AI approaches aim to visualize the high-level features of trained models or to integrate explainability into the training process. This study aims to evaluate the performance of selected deep-learning algorithms on localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the well-known AI algorithms examined in this study classified some tumor brains based on other, non-relevant features. The results suggest that explainable AI approaches can develop an intuition for model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be an essential tool to improve human-machine interactions and assist in the selection of optimal training methods.
Locoregional Recurrence Prediction Using a Deep Neural Network of Radiological and Radiotherapy Images
Han, K.
Joung, J. F.
Han, M.
Sung, W.
Kang, Y. N.
J Pers Med2022Journal Article, cited 1 times
Website
Head-Neck-PET-CT
Deep learning
Radiation therapy (RT) is an important and potentially curative modality for head and neck squamous cell carcinoma (HNSCC). Locoregional recurrence (LR) of HNSCC after RT ranges from 15% to 50% depending on the primary site and stage. In addition, the 5-year survival rate of patients with LR is low. To classify high-risk patients who might develop LR, a deep learning model for predicting LR needs to be established. In this work, 157 patients with HNSCC who underwent RT were analyzed. Based on the National Cancer Institute's multi-institutional TCIA data set containing FDG-PET/CT/dose, a 3D deep learning model was proposed to predict LR without time-consuming segmentation or feature extraction. Our model achieved an average area under the curve (AUC) of 0.856. Adding clinical factors into the model improved the AUC to an average of 0.892, with the highest AUC of up to 0.974. The 3D deep learning model could perform individualized risk quantification of LR in patients with HNSCC without time-consuming tumor segmentation.
CT Reconstruction Kernels and the Effect of Pre- and Post-Processing on the Reproducibility of Handcrafted Radiomic Features
Refaee, T.
Salahuddin, Z.
Widaatalla, Y.
Primakov, S.
Woodruff, H. C.
Hustinx, R.
Mottaghy, F. M.
Ibrahim, A.
Lambin, P.
J Pers Med2022Journal Article, cited 0 times
Website
Credence Cartridge Radiomics Phantom CT Scans
ComBat harmonization
image harmonization
Radiomics
Reproducibility
Handcrafted radiomics features (HRFs) are quantitative features extracted from medical images to decode biological information to improve clinical decision making. Despite the potential of the field, limitations have been identified. The most important limitation identified to date is the sensitivity of HRFs to variations in image acquisition and reconstruction parameters. In this study, we investigated the use of Reconstruction Kernel Normalization (RKN) and ComBat harmonization to improve the reproducibility of HRFs across scans acquired with different reconstruction kernels. A set of phantom scans (n = 28) acquired on five different scanner models was analyzed. HRFs were extracted from the original scans, and scans were harmonized using the RKN method. ComBat harmonization was applied on both sets of HRFs. The reproducibility of HRFs was assessed using the concordance correlation coefficient. The difference in the number of reproducible HRFs in each scenario was assessed using McNemar's test. The majority of HRFs were found to be sensitive to variations in the reconstruction kernels, and only six HRFs were found to be robust with respect to variations in reconstruction kernels. The use of RKN resulted in a significant increment in the number of reproducible HRFs in 19 out of the 67 investigated scenarios (28.4%), while the ComBat technique resulted in a significant increment in 36 (53.7%) scenarios. The combination of methods resulted in a significant increment in 53 (79.1%) scenarios compared to the HRFs extracted from original images. Since the benefit of applying the harmonization methods depended on the data being harmonized, reproducibility analysis is recommended before performing radiomics analysis. For future radiomics studies incorporating images acquired with similar image acquisition and reconstruction parameters, except for the reconstruction kernels, we recommend the systematic use of the pre- and post-processing approaches (respectively, RKN and ComBat).
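The reproducibility criterion rests on Lin's concordance correlation coefficient, which is compact enough to state directly; the 0.9 cut-off below is a common convention, not a value taken from the abstract.
```python
# Lin's concordance correlation coefficient (CCC) between feature values
# measured on the same phantoms under two reconstruction kernels.
import numpy as np

def ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def is_reproducible(x, y, threshold=0.9):
    """Flag a feature as reproducible across kernels (conventional cut-off)."""
    return ccc(x, y) >= threshold
```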
Robustness Evaluation of a Deep Learning Model on Sagittal and Axial Breast DCE-MRIs to Predict Pathological Complete Response to Neoadjuvant Chemotherapy
Massafra, Raffaella
Comes, Maria Colomba
Bove, Samantha
Didonna, Vittorio
Gatta, Gianluca
Giotta, Francesco
Fanizzi, Annarita
La Forgia, Daniele
Latorre, Agnese
Pastena, Maria Irene
Pomarico, Domenico
Rinaldi, Lucia
Tamborra, Pasquale
Zito, Alfredo
Lorusso, Vito
Paradiso, Angelo Virgilio
Journal of Personalized Medicine2022Journal Article, cited 0 times
ISPY1/ACRIN 6657
Deep Learning
BREAST
Algorithm Development
Radiomics
To date, some artificial intelligence (AI) methods have exploited Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) to identify finer tumor properties as potential earlier indicators of pathological Complete Response (pCR) in breast cancer patients undergoing neoadjuvant chemotherapy (NAC). However, they work either for sagittal or for axial MRI protocols. More flexible AI tools that can be used easily in clinical practice across various institutions, each in accordance with its own imaging acquisition protocol, are required. Here, we addressed this topic by developing an AI method based on deep learning to give an early prediction of pCR for various DCE-MRI protocols (axial and sagittal). Sagittal DCE-MRIs refer to 151 patients (42 pCR; 109 non-pCR) from the public I-SPY1 TRIAL database (DB); axial DCE-MRIs are related to 74 patients (22 pCR; 52 non-pCR) from a private DB provided by Istituto Tumori “Giovanni Paolo II” in Bari (Italy). By merging the features extracted from baseline MRIs with some pre-treatment clinical variables, accuracies of 84.4% and 77.3% and AUC values of 80.3% and 78.0% were achieved on the independent tests related to the public DB and the private DB, respectively. Overall, the presented method has proven to be robust regardless of the specific MRI protocol.
Federated Learning Approach with Pre-Trained Deep Learning Models for COVID-19 Detection from Unsegmented CT images
Florescu, L. M.
Streba, C. T.
Serbanescu, M. S.
Mamuleanu, M.
Florescu, D. N.
Teica, R. V.
Nica, R. E.
Gheonea, I. A.
Life (Basel)2022Journal Article, cited 0 times
Website
COVID-19
Lung-PET-CT-Dx
Computed Tomography (CT)
Federated learning
(1) Background: Coronavirus disease 2019 (COVID-19) is an infectious disease caused by SARS-CoV-2. Reverse transcription polymerase chain reaction (RT-PCR) remains the current gold standard for detecting SARS-CoV-2 infections in nasopharyngeal swabs. In Romania, the first reported patient to have contracted COVID-19 was officially declared on 26 February 2020. (2) Methods: This study proposes a federated learning approach with pre-trained deep learning models for COVID-19 detection. Three clients were locally deployed with their own dataset. The goal of the clients was to collaborate in order to obtain a global model without sharing samples from the dataset. The algorithm we developed was connected to our internal picture archiving and communication system and, after running backwards, it encountered chest CT changes suggestive for COVID-19 in a patient investigated in our medical imaging department on the 28 January 2020. (4) Conclusions: Based on our results, we recommend using an automated AI-assisted software in order to detect COVID-19 based on the lung imaging changes as an adjuvant diagnostic method to the current gold standard (RT-PCR) in order to greatly enhance the management of these patients and also limit the spread of the disease, not only to the general population but also to healthcare professionals.
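As a rough illustration of the training scheme, a single federated-averaging (FedAvg) round over simulated clients is sketched below; equal client weighting and the SGD settings are assumptions, not the paper's configuration.
```python
# One FedAvg round: each client trains a copy of the global model on its
# own data, then the server averages the weights. No samples are shared.
import copy
import torch

def fedavg_round(global_model, client_loaders, local_epochs=1, lr=1e-3):
    client_states = []
    loss_fn = torch.nn.CrossEntropyLoss()
    for loader in client_loaders:           # local training on each client
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        local.train()
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(local(x), y).backward()
                opt.step()
        client_states.append(local.state_dict())
    # Server-side averaging (equal client weighting assumed here).
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```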
Radiomics from Various Tumour Volume Sizes for Prognosis Prediction of Head and Neck Squamous Cell Carcinoma: A Voted Ensemble Machine Learning Approach
Tang, F. H.
Cheung, E. Y.
Wong, H. L.
Yuen, C. M.
Yu, M. H.
Ho, P. C.
Life (Basel)2022Journal Article, cited 0 times
Website
HNSCC
Gross Tumor Volume (GTV)
Planning Target Volume (PTV)
head and neck cancer
Head and neck squamous cell carcinoma (HNSCC)
Machine learning
prognosis prediction
Radiomics
Radiotherapy
BACKGROUND: Traditionally, cancer prognosis was determined by tumour size, lymph node spread and presence of metastasis (TNM staging). Radiomics of tumour volume has recently been used for prognosis prediction. In the present study, we evaluated the effect of various sizes of tumour volume. A voted ensemble approach with a combination of multiple machine learning algorithms is proposed for prognosis prediction for head and neck squamous cell carcinoma (HNSCC). METHODS: A total of 215 HNSCC CT image sets with radiotherapy structure sets were acquired from The Cancer Imaging Archive (TCIA). Six tumour volumes, including gross tumour volume (GTV), diminished GTV, extended GTV, planning target volume (PTV), diminished PTV and extended PTV were delineated. The extracted radiomics features were analysed by decision tree, random forest, extreme boost, support vector machine and generalized linear algorithms. A voted ensemble machine learning (VEML) model that optimizes the above algorithms was used. The receiver operating characteristic area under the curve (ROC-AUC), together with accuracy, sensitivity and specificity, was used to compare the performance of the machine learning methods. RESULTS: The VEML model demonstrated good prognosis prediction ability for all sizes of tumour volumes with reference to GTV and PTV, with high accuracy of up to 88.3%, sensitivity of up to 79.9% and specificity of up to 96.6%. There was no significant difference between the various target volumes for the prognostic prediction of HNSCC patients (chi-square test, p > 0.05). CONCLUSIONS: Our study demonstrates that the proposed VEML model can accurately predict the prognosis of HNSCC patients using radiomics features from various tumour volumes.
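A minimal sketch of such a voted ensemble with scikit-learn's VotingClassifier, using the algorithm families named in the abstract (XGBoost stands in for "extreme boost"; all hyperparameters are placeholders):
```python
# Hard-voting ensemble over the five algorithm families named above.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

veml = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5)),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("xgb", XGBClassifier(eval_metric="logloss")),
        ("svm", SVC(probability=True)),
        ("glm", LogisticRegression(max_iter=1000)),
    ],
    voting="hard",  # majority vote over predicted labels
)
# Usage: veml.fit(X_train, y_train); y_pred = veml.predict(X_test)
```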
A Neck-Thyroid Phantom with Small Sizes of Thyroid Remnants for Postsurgical I-123 and I-131 SPECT/CT Imaging
Michael, K.
Hadjiconstanti, A.
Lontos, A.
Demosthenous, G.
Frangos, S.
Parpottas, Y.
Life (Basel)2023Journal Article, cited 0 times
Website
TCGA-THCA
3D Printed Phantom
Iodine
SPECT/CT
Computed Tomography (CT)
nuclear imaging
postsurgical diagnostic thyroid imaging
thyroid remnants
thyroid-neck phantom
Post-surgical I-123 and I-131 SPECT/CT imaging can provide information on the presence and sizes of thyroid remnants and/or metastasis for an accurate re-staging of disease to apply an individualized radioiodine therapy. The purpose of this study was to develop and validate a neck-thyroid phantom with small sizes of thyroid remnants to be utilized for the optimization of post-surgical SPECT/CT imaging. 3D printing and molding techniques were used to develop the hollow human-shaped and -sized phantom which enclosed the trachea, esophagus, cervical spine, clavicle, and multiple detachable sections with different sizes of thyroid remnant in clinically relevant positions. CT images were acquired to evaluate the morphology of the phantom and the sizes of remnants. Triple-energy window scattered and attenuation corrected SPECT images were acquired for this phantom and for a modified RS-542 commercial solid neck-thyroid phantom. The response and sensitivity of the SPECT modality for different administered I-123 and I-131 activities within the equal-size remnants of both phantoms were calculated. When we compared the phantoms, using the same radiopharmaceutical and similar activities, we found that the measured sensitivities were comparable. In all cases, the I-123 counting rate was higher than the I-131 one. This phantom with capabilities to insert different small sizes of remnants and simulate different background-to-remnants activity ratios can be utilized to evaluate postsurgical thyroid SPECT/CT imaging procedures.
Innovative Design Methodology for Patient-Specific Short Femoral Stems
Solorzano-Requejo, W.
Ojeda, C.
Diaz Lantada, A.
Materials (Basel)2022Journal Article, cited 0 times
Website
TCGA-PRAD
Pelvic Reference Data
biomechanics
custom-made medical devices
Finite element model
hip replacement
short stems
strain shielding
BONE
3D printing
The biomechanical performance of hip prostheses is often suboptimal, which leads to problems such as strain shielding, bone resorption and implant loosening, affecting the long-term viability of these implants for articular repair. Different studies have highlighted the interest of short stems for preserving bone stock and minimizing shielding, hence providing an alternative to conventional hip prostheses with long stems. Such short stems are especially valuable for younger patients, as they may require additional surgical interventions and replacements in the future, for which the preservation of bone stock is fundamental. Arguably, enhanced results may be achieved by combining the benefits of short stems with the possibilities of personalization, which are now empowered by a wise combination of medical images, computer-aided design and engineering resources and automated manufacturing tools. In this study, an innovative design methodology for custom-made short femoral stems is presented. The design process is enhanced through a novel app employing elliptical adjustment for the quasi-automated CAD modeling of personalized short femoral stems. The proposed methodology is validated by completely developing two personalized short femoral stems, which are evaluated by combining in silico studies (finite element method (FEM) simulations), for quantifying their biomechanical performance, and rapid prototyping, for evaluating implantability.
Enhancement of Deep Learning in Image Classification Performance Using Xception with the Swish Activation Function for Colorectal Polyp Preliminary Screening
Jinsakul, Natinai
Tsai, Cheng-Fa
Tsai, Chia-En
Wu, Pensee
Mathematics2019Journal Article, cited 0 times
TCGA-COAD
Deep Learning
One of the leading forms of cancer is colorectal cancer (CRC), which is responsible for increasing mortality in young people. The aim of this paper is to provide an experimental modification of the Xception deep learning model with the Swish activation function and to assess the possibility of developing a preliminary colorectal polyp screening system by training the proposed model with a colorectal topogram dataset in two and three classes. The results indicate that the proposed model can enhance the original convolutional neural network model, achieving classification accuracy of up to 98.99% for two classes and 91.48% for three classes. When testing the model on external images, the proposed method also improves prediction compared to the traditional method, with 99.63% accuracy for true prediction of two classes and 80.95% for three classes.
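The Swish activation at the heart of the modification is one line; the sketch below only swaps Swish into a classification head on a stock Keras Xception, whereas the paper modifies the network itself, so the layer sizes are illustrative.
```python
# Swish activation, f(x) = x * sigmoid(x), attached to an Xception head.
import tensorflow as tf

def swish(x):
    return x * tf.nn.sigmoid(x)

base = tf.keras.applications.Xception(include_top=False, pooling="avg",
                                      input_shape=(299, 299, 3))
out = tf.keras.layers.Dense(128, activation=swish)(base.output)
out = tf.keras.layers.Dense(2, activation="softmax")(out)  # two-class screening
model = tf.keras.Model(base.input, out)
```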
Towards Personalized Diagnosis of Glioblastoma in Fluid-Attenuated Inversion Recovery (FLAIR) by Topological Interpretable Machine Learning
Rucco, Matteo
Viticchi, Giovanna
Falsetti, Lorenzo
Mathematics2020Journal Article, cited 0 times
Brain-Tumor-Progression
Glioblastoma multiforme (GBM) is a fast-growing and highly invasive brain tumor, which tends to occur in adults between the ages of 45 and 70 and accounts for 52 percent of all primary brain tumors. Usually, GBMs are detected by magnetic resonance images (MRI). Among MRI sequences, a fluid-attenuated inversion recovery (FLAIR) sequence produces a high quality digital tumor representation. Fast computer-aided detection and segmentation techniques are needed to overcome subjective medical doctors' (MDs) judgment. This study has three main novelties, demonstrating the role of topological features as a new set of radiomics features that can serve as pillars of a personalized diagnostic system for GBM analysis from FLAIR. For the first time, topological data analysis is used for analyzing GBM from three complementary perspectives: tumor growth at the cell level, temporal evolution of GBM in the follow-up period, and eventually GBM detection. The second novelty is the definition of a new Shannon-like topological entropy, the so-called Generator Entropy. The third novelty is the combination of topological and textural features for training automatic interpretable machine learning. These novelties are demonstrated by three numerical experiments. Topological data analysis of a simplified 2D tumor growth mathematical model allowed us to understand the bio-chemical conditions that facilitate tumor growth: the higher the concentration of chemical nutrients, the more virulent the process. Topological data analysis was also used for evaluating GBM temporal progression on FLAIR recorded within 90 days following treatment completion and at progression. This experiment confirmed that persistent entropy is a viable statistic for monitoring GBM evolution during the follow-up period. In the third experiment we developed a novel methodology based on topological and textural features and automatic interpretable machine learning for automatic GBM classification on FLAIR. The algorithm reached a classification accuracy of up to 97%.
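Persistent entropy, the Shannon-like statistic underlying the paper's Generator Entropy, can be computed directly from a persistence diagram; the sketch below assumes finite (birth, death) pairs and does not reproduce the Generator Entropy itself.
```python
# Persistent entropy of a persistence diagram: Shannon entropy of the
# normalised bar lengths (death - birth).
import numpy as np

def persistent_entropy(diagram):
    """diagram: iterable of (birth, death) pairs with finite deaths."""
    lifetimes = np.array([d - b for b, d in diagram], float)
    lifetimes = lifetimes[lifetimes > 0]
    p = lifetimes / lifetimes.sum()          # normalised bar lengths
    return float(-(p * np.log(p)).sum())

# Example: persistent_entropy([(0.0, 1.0), (0.2, 0.9), (0.1, 0.3)])
```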
Reprojection-Based Numerical Measure of Robustness for CT Reconstruction Neural Network Algorithms
Smolin, Aleksandr
Yamaev, Andrei
Ingacheva, Anastasia
Shevtsova, Tatyana
Polevoy, Dmitriy
Chukalina, Marina
Nikolaev, Dmitry
Arlazarov, Vladimir
Mathematics2022Journal Article, cited 0 times
LDCT-and-Projection-data
In computed tomography, state-of-the-art reconstruction is based on neural network (NN) algorithms. However, NN reconstruction algorithms can be non-robust to small noise-like perturbations in the input signal. A non-robust NN algorithm can produce an inaccurate reconstruction with plausible artifacts that cannot be detected. Hence, the robustness of NN algorithms should be investigated and evaluated. There have been several attempts to construct numerical metrics of the NN reconstruction algorithms' robustness; however, these metrics estimate only the probability of easily distinguishable artifacts appearing in the reconstruction, which cannot lead to misdiagnosis in clinical applications. In this work, we propose a new method for numerically estimating the robustness of NN reconstruction algorithms. This method is based on evaluating the probability that the NN forms selected additional structures during reconstruction which may lead to an incorrect diagnosis. The method outputs a numerical score from 0 to 1 that can be used when benchmarking the robustness of different reconstruction algorithms. We employed the proposed method to perform a comparative study of seven reconstruction algorithms, five NN-based and two classical. The ResUNet network had the best robustness score (0.65) among the investigated NN algorithms, but its robustness score is still lower than that of the classical algorithm SIRT (0.989). The investigated NN models demonstrated a wide range of robustness scores (0.38–0.65). Thus, in this work, the robustness of seven reconstruction algorithms was measured using the newly proposed score, and it was shown that some of the neural algorithms are not robust.
Multi-Modal Brain Tumor Detection Using Deep Neural Network and Multiclass SVM
Maqsood, S.
Damasevicius, R.
Maskeliunas, R.
Medicina (Kaunas)2022Journal Article, cited 15 times
Website
BraTS 2018
Artificial Intelligence
*Brain Neoplasms/diagnostic imaging
Magnetic Resonance Imaging (MRI)
Support Vector Machine (SVM)
Deep learning
Segmentation
Background and Objectives: Clinical diagnosis has become very significant in today's health system. The most serious disease and the leading cause of mortality globally is brain cancer which is a key research topic in the field of medical imaging. The examination and prognosis of brain tumors can be improved by an early and precise diagnosis based on magnetic resonance imaging. For computer-aided diagnosis methods to assist radiologists in the proper detection of brain tumors, medical imagery must be detected, segmented, and classified. Manual brain tumor detection is a monotonous and error-prone procedure for radiologists; hence, it is very important to implement an automated method. As a result, the precise brain tumor detection and classification method is presented. Materials and Methods: The proposed method has five steps. In the first step, a linear contrast stretching is used to determine the edges in the source image. In the second step, a custom 17-layered deep neural network architecture is developed for the segmentation of brain tumors. In the third step, a modified MobileNetV2 architecture is used for feature extraction and is trained using transfer learning. In the fourth step, an entropy-based controlled method was used along with a multiclass support vector machine (M-SVM) for the best features selection. In the final step, M-SVM is used for brain tumor classification, which identifies the meningioma, glioma and pituitary images. Results: The proposed method was demonstrated on BraTS 2018 and Figshare datasets. Experimental study shows that the proposed brain tumor detection and classification method outperforms other methods both visually and quantitatively, obtaining an accuracy of 97.47% and 98.92%, respectively. Finally, we adopt the eXplainable Artificial Intelligence (XAI) method to explain the result. Conclusions: Our proposed approach for brain tumor detection and classification has outperformed prior methods. These findings demonstrate that the proposed approach obtained higher performance in terms of both visually and enhanced quantitative evaluation with improved accuracy.
Can Persistent Homology Features Capture More Intrinsic Information about Tumors from (18)F-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography Images of Head and Neck Cancer Patients?
Le, Q. C.
Arimura, H.
Ninomiya, K.
Kodama, T.
Moriyama, T.
Metabolites2022Journal Article, cited 0 times
Website
Head-Neck-PET-CT
Radiomics
Positron Emission Tomography (PET)
This study hypothesized that persistent homology (PH) features could capture more intrinsic information about the metabolism and morphology of tumors from (18)F-fluorodeoxyglucose positron emission tomography (PET)/computed tomography (CT) images of patients with head and neck (HN) cancer than other conventional features. PET/CT images and clinical variables of 207 patients were selected from the publicly available dataset of the Cancer Imaging Archive. PH images were generated from persistent diagrams obtained from PET/CT images. The PH features were derived from the PH PET/CT images. The signatures were constructed in a training cohort from features from CT, PET, PH-CT, and PH-PET images; clinical variables; and the combination of features and clinical variables. Signatures were evaluated using statistically significant differences (p-value, log-rank test) between survival curves for low- and high-risk groups and the C-index. In an independent test cohort, the signature consisting of PH-PET features and clinical variables exhibited the lowest log-rank p-value of 3.30 x 10(-5) and C-index of 0.80, compared with log-rank p-values from 3.52 x 10(-2) to 1.15 x 10(-4) and C-indices from 0.34 to 0.79 for other signatures. This result suggests that PH features can capture the intrinsic information of tumors and predict prognosis in patients with HN cancer.
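The two evaluation statistics named above are available in the lifelines package; the median split into low- and high-risk groups and the sign convention for risk scores below are assumptions.
```python
# Log-rank test between risk groups and the C-index of a prognostic signature.
import numpy as np
from lifelines.statistics import logrank_test
from lifelines.utils import concordance_index

def evaluate_signature(times, events, risk_scores):
    """times, events, risk_scores: numpy arrays of equal length."""
    high = risk_scores >= np.median(risk_scores)     # median split (assumed)
    lr = logrank_test(times[high], times[~high],
                      event_observed_A=events[high],
                      event_observed_B=events[~high])
    # C-index expects higher scores -> longer survival, hence the negation.
    cidx = concordance_index(times, -risk_scores, events)
    return lr.p_value, cidx
```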
Effects of Focused-Ultrasound-and-Microbubble-Induced Blood-Brain Barrier Disruption on Drug Transport under Liposome-Mediated Delivery in Brain Tumour: A Pilot Numerical Simulation Study
Zhan, Wenbo
Pharmaceutics2020Journal Article, cited 0 times
Website
RIDER NEURO MRI
radiomics
On the Evaluation of the Suitability of the Materials Used to 3D Print Holographic Acoustic Lenses to Correct Transcranial Focused Ultrasound Aberrations
Ferri, Marcelino
Bravo, Jose Maria
Redondo, Javier
Jimenez-Gambin, Sergio
Jimenez, Noe
Camarena, Francisco
Sanchez-Perez, Juan Vicente
Polymers (Basel)2019Journal Article, cited 2 times
Website
HEAD
Computed Tomography (CT)
Ultrasound
The correction of transcranial focused ultrasound aberrations is a relevant topic for enhancing various non-invasive medical treatments. Presently, the most widely accepted method to improve focusing is the emission through multi-element phased arrays; however, a new disruptive technology, based on 3D printed holographic acoustic lenses, has recently been proposed, overcoming the spatial limitations of phased arrays due to the submillimetric precision of the latest generation of 3D printers. This work aims to optimize this recent solution. Particularly, the preferred acoustic properties of the polymers used for printing the lenses are systematically analyzed, paying special attention to the effect of p-wave speed and its relationship to the achievable voxel size of 3D printers. Results from simulations and experiments clearly show that, given a particular voxel size, there are optimal ranges for lens thickness and p-wave speed, fairly independent of the emitted frequency, the transducer aperture, or the transducer-target distance.
An Appraisal of Lung Nodules Automatic Classification Algorithms for CT Images
Xinqi Wang
Keming Mao
Lizhe Wang
Peiyi Yang
Duo Lu
Ping He
Sensors (Basel)2019Journal Article, cited 0 times
Website
LIDC-IDRI
Lung
Computed Tomography(CT)
Classification
Lung cancer is one of the most deadly diseases around the world, representing about 26% of all cancers in 2017. The five-year cure rate is only 18% despite great progress in recent diagnosis and treatment. Lung nodule classification is a key step before diagnosis, especially since automatic classification can help clinicians by providing a valuable opinion. Modern computer vision and machine learning technologies allow very fast and reliable CT image classification, and this research area has attracted great interest for its efficiency and labor savings. This paper aims to present a systematic review of the state of the art in automatic classification of lung nodules, covering published works selected from the Web of Science, IEEEXplore, and DBLP databases up to June 2018. Each paper is critically reviewed based on objective, methodology, research dataset, and performance evaluation. Mainstream algorithms are surveyed and generic structures are summarized. Our work reveals that lung nodule classification based on deep learning has become dominant owing to its excellent performance. It is concluded that the consistency of research objectives and the integration of data deserve more attention. Moreover, collaborative work among developers, clinicians, and other parties should be strengthened.
Laplacian Eigenmaps Network-Based Nonlocal Means Method for MR Image Denoising
Yu, Houqiang
Ding, Mingyue
Zhang, Xuming
Sensors2019Journal Article, cited 0 times
PROSTATEx
Magnetic resonance (MR) images are often corrupted by Rician noise which degrades the accuracy of image-based diagnosis tasks. The nonlocal means (NLM) method is a representative filter in denoising MR images due to its competitive denoising performance. However, the existing NLM methods usually exploit the gray-level information or hand-crafted features to evaluate the similarity between image patches, which is disadvantageous for preserving the image details while smoothing out noise. In this paper, an improved nonlocal means method is proposed for removing Rician noise in MR images by using the refined similarity measures. The proposed method firstly extracts the intrinsic features from the pre-denoised image using a shallow convolutional neural network named Laplacian eigenmaps network (LEPNet). Then, the extracted features are used for computing the similarity in the NLM method to produce the denoised image. Finally, the method noise of the denoised image is utilized to further improve the denoising performance. Specifically, the LEPNet model is composed of two cascaded convolutional layers and a nonlinear output layer, in which the Laplacian eigenmaps are employed to learn the filter bank in the convolutional layers and the Leaky Rectified Linear Unit activation function is used in the final output layer to output the nonlinear features. Due to the advantage of LEPNet in recovering the geometric structure of the manifold in the low-dimension space, the features extracted by this network can facilitate characterizing the self-similarity better than the existing NLM methods. Experiments have been performed on the BrainWeb phantom and the real images. Experimental results demonstrate that among several compared denoising methods, the proposed method can provide more effective noise removal and better details preservation in terms of human vision and such objective indexes as peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM).
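For orientation, the baseline nonlocal-means step is available off the shelf in scikit-image; note that this stock call uses gray-level patch similarity, precisely what the paper replaces with LEPNet features, and the filter parameters are illustrative.
```python
# Baseline nonlocal-means denoising of a 2D MR slice with scikit-image.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def nlm_denoise(mr_slice):
    sigma = np.mean(estimate_sigma(mr_slice))        # rough noise estimate
    return denoise_nl_means(mr_slice, h=1.15 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)
```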
Computed Tomography (CT) Image Quality Enhancement via a Uniform Framework Integrating Noise Estimation and Super-Resolution Networks
Chi, Jianning
Zhang, Yifei
Yu, Xiaosheng
Wang, Ying
Wu, Chengdong
Sensors (Basel)2019Journal Article, cited 2 times
Website
APOLLO-1-VA
Deep convolutional neural network (DCNN)
Machine Learning
Computed tomography (CT) imaging technology has been widely used to assist medical diagnosis in recent years. However, noise introduced during imaging and data compression during storage and transmission degrade image quality, resulting in unreliable performance of the post-processing steps in computer-assisted diagnosis systems (CADs), such as medical image segmentation, feature extraction, and medical image classification. Since the degradation of medical images typically appears as noise and low-resolution blurring, in this paper we propose a uniform deep convolutional neural network (DCNN) framework to handle the de-noising and super-resolution of CT images at the same time. The framework consists of two steps: First, a dense-inception network integrating an inception structure and dense skip connections is proposed to estimate the noise level. The inception structure is used to extract the noise and blurring features with respect to multiple receptive fields, while the dense skip connections can reuse those extracted features and transfer them across the network. Second, a modified residual-dense network combined with a joint loss is proposed to reconstruct the high-resolution image with low noise. An inception block is applied on each skip connection of the dense-residual network so that the structural features of the image are transferred through the network more than the noise and blurring features. Moreover, both the perceptual loss and the mean square error (MSE) loss are used to restrain the network, leading to better performance in the reconstruction of image edges and details. Our proposed network integrates degradation estimation, noise removal, and image super-resolution in one uniform framework to enhance medical image quality. We apply our method to the Cancer Imaging Archive (TCIA) public dataset to evaluate its ability in medical image quality enhancement. The experimental results demonstrate that the proposed method outperforms the state-of-the-art methods on de-noising and super-resolution by providing higher peak signal to noise ratio (PSNR) and structure similarity index (SSIM) values.
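The two evaluation metrics used throughout can be computed with scikit-image, assuming same-shape images scaled to [0, 1]:
```python
# PSNR and SSIM between a reference image and a restored image.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality(reference, restored):
    return {
        "psnr": peak_signal_noise_ratio(reference, restored, data_range=1.0),
        "ssim": structural_similarity(reference, restored, data_range=1.0),
    }
```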
TumorGAN: A Multi-Modal Data Augmentation Framework for Brain Tumor Segmentation
Li, Q.
Yu, Z.
Wang, Y.
Zheng, H.
Sensors (Basel)2020Journal Article, cited 41 times
Website
BraTS 2017
Brain/diagnostic imaging
*Brain Neoplasms/diagnostic imaging
Humans
Image Processing
Computer-Assisted
Segmentation
Generative Adversarial Network (GAN)
The high human labor demand involved in collecting paired medical imaging data severely impedes the application of deep learning methods to medical image processing tasks such as tumor segmentation. The situation is further worsened when collecting multi-modal image pairs. However, this issue can be resolved through the help of generative adversarial networks, which can be used to generate realistic images. In this work, we propose a novel framework, named TumorGAN, to generate image segmentation pairs based on unpaired adversarial training. To improve the quality of the generated images, we introduce a regional perceptual loss to enhance the performance of the discriminator. We also develop a regional L1 loss to constrain the color of the imaged brain tissue. Finally, we verify the performance of TumorGAN on a public brain tumor data set, BraTS 2017. The experimental results demonstrate that the synthetic data pairs generated by our proposed method can practically improve tumor segmentation performance when applied to segmentation network training.
Batch Similarity Based Triplet Loss Assembled into Light-Weighted Convolutional Neural Networks for Medical Image Classification
Huang, Z.
Zhou, Q.
Zhu, X.
Zhang, X.
Sensors (Basel)2021Journal Article, cited 0 times
Website
H&E-stained slides
Classification
Convolutional Neural Network (CNN)
In many medical image classification tasks, there is insufficient image data for deep convolutional neural networks (CNNs) to overcome the over-fitting problem. Light-weighted CNNs are easy to train but usually have relatively poor classification performance. To improve the classification ability of light-weighted CNN models, we have proposed a novel batch similarity-based triplet loss to guide the CNNs in learning the weights. The proposed loss utilizes the similarity among multiple samples in the input batches to evaluate the distribution of training data. Reducing the proposed loss increases the similarity among images of the same category and reduces the similarity among images of different categories. Besides this, it can be easily assembled into regular CNNs. To evaluate the performance of the proposed loss, experiments were conducted on chest X-ray images and skin rash images to compare it with several losses based on such popular light-weighted CNN models as EfficientNet, MobileNet, ShuffleNet and PeleeNet. The results demonstrate the applicability and effectiveness of our method in terms of classification accuracy, sensitivity and specificity.
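For reference, the standard triplet loss the proposed batch-similarity variant builds on is shown below (PyTorch); the batch-similarity formulation itself is not reproduced.
```python
# Standard triplet loss over embedding batches of shape (N, D).
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull anchor-positive together; push anchor-negative apart by `margin`."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```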
Precise Segmentation of COVID-19 Infected Lung from CT Images Based on Adaptive First-Order Appearance Model with Morphological/Anatomical Constraints
Sharafeldeen, Ahmed
Elsharkawy, Mohamed
Alghamdi, Norah Saleh
Soliman, Ahmed
El-Baz, Ayman
Sensors2021Journal Article, cited 0 times
CT Images in COVID-19
A new segmentation technique is introduced for delineating the lung region in 3D computed tomography (CT) images. To accurately model the distribution of Hounsfield scale values within both chest and lung regions, a new probabilistic model is developed that depends on a linear combination of Gaussian (LCG). Moreover, we modified the conventional expectation-maximization (EM) algorithm to be run in a sequential way to estimate both the dominant Gaussian components (one for the lung region and one for the chest region) and the subdominant Gaussian components, which are used to refine the final estimated joint density. To estimate the marginal density from the mixed density, a modified k-means clustering approach is employed to classify the Gaussian subdominant components to determine which components belong properly to a lung and which components belong to a chest. The initial segmentation, based on the LCG-model, is then refined by the imposition of 3D morphological constraints based on a 3D Markov-Gibbs random field (MGRF) with analytically estimated potentials. The proposed approach was tested on CT data from 32 coronavirus disease 2019 (COVID-19) patients. Segmentation quality was quantitatively evaluated using four metrics: Dice similarity coefficient (DSC), overlap coefficient, 95th-percentile bidirectional Hausdorff distance (BHD), and absolute lung volume difference (ALVD), and it achieved 95.67±1.83%, 91.76±3.29%, 4.86±5.01, and 2.93±2.39, respectively. The reported results showed the capability of the proposed approach to accurately segment healthy lung tissues in addition to pathological lung tissues caused by COVID-19, outperforming four current, state-of-the-art deep learning-based lung segmentation approaches.
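The first-order appearance idea, a Gaussian mixture over Hounsfield units separating lung from chest voxels, can be sketched with scikit-learn's EM implementation; the sequential EM, LCG refinement, and 3D MGRF constraints of the paper are not reproduced, and the voxel subsampling is purely for speed.
```python
# Two-component Gaussian mixture over Hounsfield units as a first-pass
# lung/chest separation, fitted by EM.
import numpy as np
from sklearn.mixture import GaussianMixture

def initial_lung_mask(ct_volume):
    hu = ct_volume.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(0)
    sample = hu[rng.choice(len(hu), size=min(len(hu), 200_000), replace=False)]
    gmm = GaussianMixture(n_components=2, random_state=0).fit(sample)
    lung_component = np.argmin(gmm.means_.ravel())   # lung is the darker class
    labels = gmm.predict(hu).reshape(ct_volume.shape)
    return labels == lung_component
```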
Super-Resolution Network with Information Distillation and Multi-Scale Attention for Medical CT Image
Zhao, Tianliu
Hu, Lei
Zhang, Yongmei
Fang, Jianying
Sensors2021Journal Article, cited 0 times
Lung-PET-CT-Dx
NSCLC Radiogenomics
The CT image is an important reference for clinical diagnosis. However, due to the external influence and equipment limitation in the imaging, the CT image often has problems such as blurring, a lack of detail and unclear edges, which affect the subsequent diagnosis. In order to obtain high-quality medical CT images, we propose an information distillation and multi-scale attention network (IDMAN) for medical CT image super-resolution reconstruction. In a deep residual network, instead of only adding the convolution layer repeatedly, we introduce information distillation to make full use of the feature information. In addition, in order to better capture information and focus on more important features, we use a multi-scale attention block with multiple branches, which can automatically generate weights to adjust the network. Through these improvements, our model effectively solves the problems of insufficient feature utilization and single attention source, improves the learning ability and expression ability, and thus can reconstruct the higher quality medical CT image. We conduct a series of experiments; the results show that our method outperforms the previous algorithms and has a better performance of medical CT image reconstruction in the objective evaluation and visual effect.
Brain MR Image Enhancement for Tumor Segmentation Using 3D U-Net
Ullah, F.
Ansari, S. U.
Hanif, M.
Ayari, M. A.
Chowdhury, M. E. H.
Khandakar, A. A.
Khan, M. S.
Sensors (Basel)2021Journal Article, cited 0 times
BraTS 2018
3D U-Net
Brain/diagnostic imaging
*Brain Neoplasms/diagnostic imaging
Humans
Image Processing
Computer-Assisted
*Magnetic Resonance Imaging
Segmentation
Deep learning
MRI images are visually inspected by domain experts for the analysis and quantification of tumorous tissues. Due to the large volumetric data, manual reporting on the images is subjective, cumbersome, and error prone. To address these problems, automatic image analysis tools are employed for tumor segmentation and subsequent statistical analysis. However, prior to tumor analysis and quantification, an important challenge lies in the pre-processing. In the present study, permutations of different pre-processing methods were comprehensively investigated. In particular, the study focused on Gibbs ringing artifact removal, bias field correction, intensity normalization, and adaptive histogram equalization (AHE). The pre-processed MRI data are then passed to a 3D U-Net for automatic segmentation of brain tumors. The segmentation results demonstrated the best performance with the combination of two techniques, i.e., Gibbs ringing artifact removal and bias-field correction. The proposed technique achieved mean dice score metrics of 0.91, 0.86, and 0.70 for the whole tumor, tumor core, and enhancing tumor, respectively. The testing mean dice scores achieved by the system are 0.90, 0.83, and 0.71 for the whole tumor, core tumor, and enhancing tumor, respectively. The novelty of this work concerns a robust pre-processing sequence for improving the segmentation accuracy of MR images. The proposed method surpassed the testing dice scores of the state-of-the-art methods. The results are benchmarked against the existing techniques used in the Brain Tumor Segmentation Challenge (BraTS) 2018.
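The winning pre-processing pair can be approximated with standard tooling, assuming DIPY for Gibbs ringing removal and SimpleITK for N4 bias-field correction with library-default parameters (not the paper's settings):
```python
# Gibbs ringing removal followed by N4 bias-field correction on a 3D volume.
import SimpleITK as sitk
from dipy.denoise.gibbs import gibbs_removal

def preprocess(volume):
    """volume: 3D numpy array (an MR volume)."""
    unringed = gibbs_removal(volume)
    img = sitk.GetImageFromArray(unringed.astype("float32"))
    mask = sitk.OtsuThreshold(img, 0, 1, 200)   # rough head mask for N4
    corrected = sitk.N4BiasFieldCorrection(img, mask)
    return sitk.GetArrayFromImage(corrected)
```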
Learning a Metric for Multimodal Medical Image Registration without Supervision Based on Cycle Constraints
image registration
cycle constraint
multimodal features
self-supervision
rigid alignment
Deep learning based medical image registration remains very difficult and often fails to improve over its classical counterparts where comprehensive supervision is not available, in particular for large transformations, including rigid alignment. The use of unsupervised, metric-based registration networks has become popular, but so far no universally applicable similarity metric is available for multimodal medical registration, requiring a trade-off between local contrast-invariant edge features or more global statistical metrics. In this work, we aim to improve over the use of handcrafted metric-based losses. We propose to use synthetic three-way (triangular) cycles that for each pair of images comprise two multimodal transformations to be estimated and one known synthetic monomodal transform. Additionally, we present a robust method for estimating large rigid transformations that is differentiable in end-to-end learning. By minimising the cycle discrepancy and adapting the synthetic transformation to be close to the real geometric difference of the image pairs during training, we successfully tackle intra-patient abdominal CT-MRI registration and reach performance on par with state-of-the-art metric-supervision and classic methods. Cyclic constraints enable the learning of cross-modality features that excel at accurate anatomical alignment of abdominal CT and MRI scans.
A Feasibility Study on Deep Learning Based Brain Tumor Segmentation Using 2D Ellipse Box Areas
In most deep learning-based brain tumor segmentation methods, training the deep network requires annotated tumor areas. However, accurate tumor annotation puts high demands on medical personnel. The aim of this study is to train a deep network for segmentation by using ellipse box areas surrounding the tumors. In the proposed method, the deep network is trained by using a large number of unannotated tumor images with foreground (FG) and background (BG) ellipse box areas surrounding the tumor and background, and a small number of patients (<20) with annotated tumors. The training is conducted by initial training on two ellipse boxes on unannotated MRIs, followed by refined training on a small number of annotated MRIs. We use a multi-stream U-Net for conducting our experiments, which is an extension of the conventional U-Net. This enables the use of complementary information from multi-modality (e.g., T1, T1ce, T2, and FLAIR) MRIs. To test the feasibility of the proposed approach, experiments and evaluation were conducted on two datasets for glioma segmentation. Segmentation performance on the test sets is then compared with those used on the same network but trained entirely by annotated MRIs. Our experiments show that the proposed method has obtained good tumor segmentation results on the test sets, wherein the dice score on tumor areas is (0.8407, 0.9104), and segmentation accuracy on tumor areas is (83.88%, 88.47%) for the MICCAI BraTS’17 and US datasets, respectively. Comparing the segmented results by using the network trained by all annotated tumors, the drop in the segmentation performance from the proposed approach is (0.0594, 0.0159) in the dice score, and (8.78%, 2.61%) in segmented tumor accuracy for MICCAI and US test sets, which is relatively small. Our case studies have demonstrated that training the network for segmentation by using ellipse box areas in place of all annotated tumors is feasible, and can be considered as an alternative, which is a trade-off between saving medical experts’ time annotating tumors and a small drop in segmentation performance.
Image Recovery from Synthetic Noise Artifacts in CT Scans Using Modified U-Net
Gunawan, Rudy
Tran, Yvonne
Zheng, Jinchuan
Nguyen, Hung
Chai, Rifai
Sensors (Basel)2022Journal Article, cited 0 times
Website
NLST
LDCT-and-Projection-data
Algorithm Development
Computed Tomography (CT)
Image denoising
LUNG
*Artifacts
*Image Processing
Computer-Assisted/methods
Radiation Dosage
Signal-To-Noise Ratio
Tomography
X-Ray Computed/methods
Computed Tomography (CT) is commonly used for cancer screening as it utilizes low radiation for the scan. One problem with low-dose scans is the noise artifacts associated with low photon count, which can lead to a reduced success rate of cancer detection during radiologist assessment. The noise must be removed to restore detail clarity. We propose a noise removal method using a new Convolutional Neural Network (CNN) model. Even though the network training time is long, the result is better than that of other CNN models in quality score and visual observation. The proposed CNN model uses a stacked modified U-Net with a specific number of feature maps per layer to improve the image quality, as shown by the average PSNR improvement over 174 images. The next best model scores 0.54 points lower on average. The score difference is less than 1 point, but the image result is closer to the full-dose scan image. We used separate testing data to verify that the model can handle different noise densities. Besides comparing the CNN configurations, we discuss the denoising quality of the CNN compared to classical denoising, in which the noise characteristics affect quality.
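The PSNR comparison reported above can be reproduced in a few lines; this is a generic sketch using scikit-image, with `denoised_stack` and `fulldose_stack` as stand-in arrays rather than the paper's actual data.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio

def average_psnr(denoised_stack, fulldose_stack, data_range=None):
    """Average PSNR between denoised low-dose slices and full-dose references."""
    scores = [peak_signal_noise_ratio(ref, den, data_range=data_range)
              for ref, den in zip(fulldose_stack, denoised_stack)]
    return float(np.mean(scores))
```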
Breast Density Transformations Using CycleGANs for Revealing Undetected Findings in Mammograms
Breast cancer is the most common cancer in women, a leading cause of morbidity and mortality, and a significant health issue worldwide. According to the World Health Organization’s cancer awareness recommendations, mammographic screening should be regularly performed on middle-aged or older women to increase the chances of early cancer detection. Breast density is widely known to be related to the risk of cancer development. The American College of Radiology Breast Imaging Reporting and Data System categorizes mammography into four levels based on breast density, ranging from ACR-A (least dense) to ACR-D (most dense). Computer-aided diagnostic (CAD) systems can now detect suspicious regions in mammograms and identify abnormalities more quickly and accurately than human readers. However, their performance is still influenced by the tissue density level, which must be considered when designing such systems. In this paper, we propose a novel method that uses CycleGANs to transform suspicious regions of mammograms from ACR-B, -C, and -D levels to ACR-A level. This transformation aims to reduce the masking effect caused by thick tissue and separate cancerous regions from surrounding tissue. Our proposed system enhances the performance of conventional CNN-based classifiers significantly by focusing on regions of interest that would otherwise be misidentified due to fatty masking. Extensive testing on different types of mammograms (digital and scanned X-ray film) demonstrates the effectiveness of our system in identifying normal, benign, and malignant regions of interest.
AResU-Net: Attention Residual U-Net for Brain Tumor Segmentation
Zhang, J. X.
Lv, X. G.
Zhang, H. B.
Liu, B.
Symmetry-Basel2020Journal Article, cited 0 times
Segmentation
BraTS 2017
BraTS 2018
Magnetic Resonance Imaging (MRI)
Deep Learning
U-Net
Convolutional Neural Network (CNN)
Automatic segmentation of brain tumors from magnetic resonance imaging (MRI) is a challenging task due to the uneven, irregular and unstructured size and shape of tumors. Recently, brain tumor segmentation methods based on the symmetric U-Net architecture have achieved favorable performance. Meanwhile, the effectiveness of enhancing local responses for feature extraction and restoration has also been shown in recent works, which may encourage the better performance of the brain tumor segmentation problem. Inspired by this, we try to introduce the attention mechanism into the existing U-Net architecture to explore the effects of local important responses on this task. More specifically, we propose an end-to-end 2D brain tumor segmentation network, i.e., attention residual U-Net (AResU-Net), which simultaneously embeds attention mechanism and residual units into U-Net for the further performance improvement of brain tumor segmentation. AResU-Net adds a series of attention units among corresponding down-sampling and up-sampling processes, and it adaptively rescales features to effectively enhance local responses of down-sampling residual features utilized for the feature recovery of the following up-sampling process. We extensively evaluate AResU-Net on two MRI brain tumor segmentation benchmarks of BraTS 2017 and BraTS 2018 datasets. Experiment results illustrate that the proposed AResU-Net outperforms its baselines and achieves comparable performance with typical brain tumor segmentation methods.
3D-MRI Brain Tumor Detection Model Using Modified Version of Level Set Segmentation Based on Dragonfly Algorithm
Khalil, H. A.
Darwish, S.
Ibrahim, Y. M.
Hassan, O. F.
Symmetry-Basel2020Journal Article, cited 31 times
Website
BraTS 2017
Magnetic Resonance Imaging (MRI)
Computer Aided Detection (CADe)
Segmentation
Accurate brain tumor segmentation from 3D Magnetic Resonance Imaging (3D-MRI) is an important method for obtaining information required for diagnosis and disease therapy planning. Variation in the brain tumor's size, structure, and form is one of the main challenges in tumor segmentation, and selecting the initial contour plays a significant role in reducing the segmentation error and the number of iterations in the level set method. To overcome this issue, this paper suggests a two-step dragonfly algorithm (DA) clustering technique to extract initial contour points accurately. The brain is extracted from the head in the preprocessing step, then tumor edges are extracted using the two-step DA, and these extracted edges are used as an initial contour for the MRI sequence. Lastly, the tumor region is extracted from all volume slices using a level set segmentation method. The results of applying the proposed technique on 3D-MRI images from the multimodal brain tumor segmentation challenge (BRATS) 2017 dataset show that the proposed method for brain tumor segmentation is comparable to the state-of-the-art methods.
Recurrent Multi-Fiber Network for 3D MRI Brain Tumor Segmentation
Zhao, Yue
Ren, Xiaoqiang
Hou, Kun
Li, Wentao
Symmetry2021Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Challenge
BRAIN
Segmentation
Algorithm Development
3d recurrent multi-fiber network
3d recurrent unit
3d multi-fiber unit
3d mri
BraTS 2018
Automated brain tumor segmentation based on 3D magnetic resonance imaging (MRI) is critical to disease diagnosis. Moreover, achieving robust and accurate automatic extraction of brain tumors is a big challenge because of the inherent heterogeneity of the tumor structure. In this paper, we present an efficient semantic segmentation 3D recurrent multi-fiber network (RMFNet), which is based on an encoder-decoder architecture to segment the brain tumor accurately. The 3D RMFNet is applied to the problem of brain tumor segmentation and includes a 3D recurrent unit and a 3D multi-fiber unit. First, we build recurrent units that segment brain tumors by connecting recurrent units and convolutional layers, which enhances the model's ability to integrate contextual information. Then, a 3D multi-fiber unit is added to the overall network to reduce the high computational cost caused by using a 3D network architecture to capture local features. The 3D RMFNet thus combines the advantages of the 3D recurrent unit and the 3D multi-fiber unit. Extensive experiments on the Brain Tumor Segmentation (BraTS) 2018 challenge dataset show that our RMFNet remarkably outperforms state-of-the-art methods, achieving average Dice scores of 89.62%, 83.65% and 78.72% for the whole tumor, tumor core and enhancing tumor, respectively. The experimental results prove our architecture to be an efficient and accurate tool for brain tumor segmentation.
A Postoperative Displacement Measurement Method for Femoral Neck Fracture Internal Fixation Implants Based on Femoral Segmentation and Multi-Resolution Frame Registration
Liu, Kaifeng
Nagamune, Kouki
Oe, Keisuke
Kuroda, Ryosuke
Niikura, Takahiro
Symmetry2021Journal Article, cited 0 times
Website
Pelvic-Reference-Data
PELVIS
Machine Learning
Computed Tomography (CT)
Segmentation
Femoral neck fractures have a high incidence in the geriatric population and are associated with high mortality and disability rates. With its minimally invasive nature, internal fixation is widely used as a treatment option to stabilize femoral neck fractures. The fixation effectiveness and stability of the implant is an essential guide for the surgeon. However, there is no long-term reliable evaluation method to quantify the implant's fixation effect without affecting the patient's behavior and synthesizing long-term treatment data. Exploiting the femur's symmetrical structure, this study used 3D convolutional networks for biomedical image segmentation (3D-UNet) to segment the injured femur as a mask, aligned computerized tomography (CT) scans of the patient at different times after surgery, and quantified the displacement in the specified direction using the generated 3D point cloud. In the experimental part, we used 10 groups, each containing two CT images scanned at a one-year interval after surgery. Comparing manual segmentation of the femur with segmentation of the femur as a mask using the neural network, the mask obtained with the symmetric-structure 3D-UNet fully meets the requirements of image registration. The data obtained from the 3D point cloud calculation are within the error tolerance, and the calculated displacement of the implant can be visualized in 3D space.
GenU-Net++: An Automatic Intracranial Brain Tumors Segmentation Algorithm on 3D Image Series with High Performance
Zhang, Yan
Liu, Xi
Wa, Shiyun
Liu, Yutong
Kang, Jiali
Lv, Chunli
Symmetry2021Journal Article, cited 0 times
BraTS 2018
Algorithm Development
Segmentation
U-Net
Radiomics
Automatic segmentation of intracranial brain tumors in three-dimensional (3D) image series is critical in screening and diagnosing related diseases. However, there are various challenges in intracranial brain tumor images: (1) Multiple brain tumor categories hold particular pathological features. (2) It is a thorny issue to locate and discern brain tumors from other non-brain regions due to their complicated structure. (3) Traditional segmentation requires a noticeable difference in the brightness of the interest target relative to the background. (4) Brain tumor magnetic resonance images (MRI) have blurred boundaries, similar gray values, and low image contrast. (5) Image information details would be dropped while suppressing noise. Existing methods and algorithms do not perform satisfactorily in overcoming these obstacles; most of them share an inadequate accuracy in brain tumor segmentation. Considering that the image segmentation task is a symmetric process in which downsampling and upsampling are performed sequentially, this paper proposes a segmentation algorithm based on U-Net++, aiming to address the aforementioned problems. This paper uses the BraTS 2018 dataset, which contains MR images of 245 patients. We suggest the generative mask sub-network, which can generate feature maps. This paper also uses the BiCubic interpolation method for upsampling to obtain segmentation results different from U-Net++. Subsequently, pixel-weighted fusion is adopted to fuse the two segmentation results, thereby improving the robustness and segmentation performance of the model. At the same time, we propose an auto pruning mechanism based on the architectural features of U-Net++ itself. This mechanism deactivates a sub-network by zeroing its input, and automatically prunes GenU-Net++ during inference, increasing the inference speed and improving network performance by preventing overfitting. Our algorithm's PA, MIoU, P, and R were tested on the validation dataset, reaching 0.9737, 0.9745, 0.9646, and 0.9527, respectively. The experimental results demonstrate that the proposed model outperformed the comparison models. Additionally, we encapsulate the model and develop a corresponding application based on the macOS platform to make the model further applicable.
Design and Implementation of the Pre-Clinical DICOM Standard in Multi-Cohort Murine Studies
Kalen, Joseph D.
Clunie, David A.
Liu, Yanling
Tatum, James L.
Jacobs, Paula M.
Kirby, Justin
Freymann, John B.
Wagner, Ulrike
Smith, Kirk E.
Suloway, Christian
Doroshow, James H.
Tomography2021Journal Article, cited 0 times
PDMR-425362-245-T
The small animal imaging Digital Imaging and Communications in Medicine (DICOM) acquisition context structured report (SR) was developed to incorporate pre-clinical data in an established DICOM format for rapid queries and comparison of clinical and non-clinical datasets. Established terminologies (i.e., anesthesia, mouse model nomenclature, veterinary definitions, NCI Metathesaurus) were utilized to assist in defining terms implemented in pre-clinical imaging, and new codes were added to integrate the specific small animal procedures and handling processes, such as housing, biosafety level, and pre-imaging rodent preparation. In addition to the standard DICOM fields, the small animal SR includes fields specific to small animal imaging such as tumor graft (i.e., melanoma), tissue of origin, mouse strain, and exogenous material, including the date and site of injection. Additionally, the mapping and harmonization developed by the Mouse-Human Anatomy Project were implemented to assist co-clinical research by providing cross-reference human-to-mouse anatomies. Furthermore, since small animal imaging performs multi-mouse imaging for high throughput, and queries for co-clinical research require a one-to-one relation, an image splitting routine was developed, new Unique Identifiers (UIDs) were created, and the original patient name and ID were saved for reference to the original dataset. We report the implementation of the small animal SR using MRI datasets (as an example) of patient-derived xenograft mouse models, uploaded them to The Cancer Imaging Archive (TCIA) for public dissemination, and also implemented this on PET/CT datasets. The small animal SR enhancement provides researchers the ability to query any DICOM modality pre-clinical and clinical datasets using standard vocabularies and enhances co-clinical studies.
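As a hedged sketch (not TCIA's actual pipeline) of the splitting idea, the fragment below separates one animal out of a single-frame, side-by-side multi-mouse DICOM with pydicom, keeps the original identifiers visible in the renamed fields, and assigns fresh UIDs so the split instance does not collide with the source.

```python
import numpy as np
import pydicom
from pydicom.uid import generate_uid

def split_animal(path, animal_index, n_cols, out_path):
    """Split one animal out of a side-by-side multi-mouse acquisition
    (assumes a single-frame, uncompressed image with animals in columns)."""
    ds = pydicom.dcmread(path)
    px = ds.pixel_array
    width = px.shape[1] // n_cols
    sub = np.ascontiguousarray(px[:, animal_index * width:(animal_index + 1) * width])
    ds.PixelData = sub.tobytes()
    ds.Rows, ds.Columns = sub.shape
    # Retain the original identifiers for reference to the source dataset.
    ds.PatientName = f"{ds.PatientName}_M{animal_index}"
    ds.PatientID = f"{ds.PatientID}-M{animal_index}"
    # Fresh UIDs so split instances do not collide with the original.
    ds.StudyInstanceUID = generate_uid()
    ds.SeriesInstanceUID = generate_uid()
    ds.SOPInstanceUID = generate_uid()
    ds.save_as(out_path)
```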
Convolutional Neural Network Addresses the Confounding Impact of CT Reconstruction Kernels on Radiomics Studies
Yoon, Jin H.
Sun, Shawn H.
Xiao, Manjun
Yang, Hao
Lu, Lin
Li, Yajun
Schwartz, Lawrence H.
Zhao, Binsheng
Tomography2021Journal Article, cited 0 times
RIDER Lung CT
Achieving high feature reproducibility while preserving biological information is one of the main challenges for the generalizability of current radiomics studies. Non-clinical imaging variables, such as reconstruction kernels, have shown to significantly impact radiomics features. In this study, we retrain an open-source convolutional neural network (CNN) to harmonize computerized tomography (CT) images with various reconstruction kernels to improve feature reproducibility and radiomic model performance using epidermal growth factor receptor (EGFR) mutation prediction in lung cancer as a paradigm. In the training phase, the CNN was retrained and tested on 32 lung cancer patients' CT images between two different groups of reconstruction kernels (smooth and sharp). In the validation phase, the retrained CNN was validated on an external cohort of 223 lung cancer patients' CT images acquired using different CT scanners and kernels. The results showed that the retrained CNN could be successfully applied to external datasets with different CT scanner parameters, and harmonization of reconstruction kernels from sharp to smooth could significantly improve the performance of radiomics model in predicting EGFR mutation status in lung cancer. In conclusion, the CNN based method showed great potential in improving feature reproducibility and generalizability by harmonizing medical images with heterogeneous reconstruction kernels.
Mortality Prediction Analysis among COVID-19 Inpatients Using Clinical Variables and Deep Learning Chest Radiography Imaging Features
Nguyen, X. V.
Dikici, E.
Candemir, S.
Ball, R. L.
Prevedello, L. M.
Tomography2022Journal Article, cited 0 times
Website
COVID-19-NY-SBU
COVID-19
Deep Learning
Radiography
machine learning
multi-modal imaging
Transfer learning
The emergence of the COVID-19 pandemic over a relatively brief interval illustrates the need for rapid data-driven approaches to facilitate clinical decision making. We examined a machine learning process to predict inpatient mortality among COVID-19 patients using clinical and chest radiographic data. Modeling was performed with a de-identified dataset of encounters prior to widespread vaccine availability. Non-imaging predictors included demographics, pre-admission clinical history, and past medical history variables. Imaging features were extracted from chest radiographs by applying a deep convolutional neural network with transfer learning. A multi-layer perceptron combining 64 deep learning features from chest radiographs with 98 patient clinical features was trained to predict mortality. The Local Interpretable Model-Agnostic Explanations (LIME) method was used to explain model predictions. Non-imaging data alone predicted mortality with an ROC-AUC of 0.87 +/- 0.03 (mean +/- SD), while the addition of imaging data improved prediction slightly (ROC-AUC: 0.91 +/- 0.02). The application of LIME to the combined imaging and clinical model found HbA1c values to contribute the most to model prediction (17.1 +/- 1.7%), while imaging contributed 8.8 +/- 2.8%. Age, gender, and BMI contributed 8.7%, 8.2%, and 7.1%, respectively. Our findings demonstrate a viable explainable AI approach to quantify the contributions of imaging and clinical data to COVID mortality predictions.
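A minimal sketch of the LIME step on tabular features follows, using a synthetic feature matrix and a random-forest stand-in for the paper's multi-layer perceptron; everything here (data, model, class names) is illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))                 # stand-in feature matrix
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
feature_names = [f"feat_{i}" for i in range(10)]

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                 class_names=["survived", "died"],
                                 mode="classification")
exp = explainer.explain_instance(X_train[0], model.predict_proba, num_features=5)
for feature, weight in exp.as_list():                # top contributors
    print(f"{feature}: {weight:+.3f}")
```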
Dual-Domain Reconstruction Network Incorporating Multi-Level Wavelet Transform and Recurrent Convolution for Sparse View Computed Tomography Imaging
Lin, Juncheng
Li, Jialin
Dou, Jiazhen
Zhong, Liyun
Di, Jianglei
Qin, Yuwen
Tomography2024Journal Article, cited 0 times
LDCT-and-Projection-data
CT COLONOGRAPHY
Machine Learning
Sparse view computed tomography (SVCT) aims to reduce the number of X-ray projection views required for reconstructing the cross-sectional image of an object. While SVCT significantly reduces X-ray radiation dose and speeds up scanning, insufficient projection data give rise to issues such as severe streak artifacts and blurring in reconstructed images, thereby impacting the diagnostic accuracy of CT detection. To address this challenge, a dual-domain reconstruction network incorporating multi-level wavelet transform and recurrent convolution is proposed in this paper. The dual-domain network is composed of a sinogram domain network (SDN) and an image domain network (IDN). Multi-level wavelet transform is employed in both IDN and SDN to decompose sinograms and CT images into distinct frequency components, which are then processed through separate network branches to recover detailed information within their respective frequency bands. To capture global textures, artifacts, and shallow features in sinograms and CT images, a recurrent convolution unit (RCU) based on convolutional long and short-term memory (Conv-LSTM) is designed, which can model their long-range dependencies through recurrent calculation. Additionally, a self-attention-based multi-level frequency feature normalization fusion (MFNF) block is proposed to assist in recovering high-frequency components by aggregating low-frequency components. Finally, an edge loss function based on the Laplacian of Gaussian (LoG) is designed as the regularization term for enhancing the recovery of high-frequency edge structures. The experimental results demonstrate the effectiveness of our approach in reducing artifacts and enhancing the reconstruction of intricate structural details across various sparse views and noise levels. Our method excels in both performance and robustness, as evidenced by its superior outcomes in numerous qualitative and quantitative assessments, surpassing contemporary state-of-the-art CNNs or Transformer-based reconstruction methods.
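One way to realize the LoG-based edge regularization term mentioned above, sketched in NumPy/SciPy rather than a deep learning framework; the sigma value and the L1 distance are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_edge_loss(reconstruction, reference, sigma=1.5):
    """L1 distance between Laplacian-of-Gaussian edge maps of the
    reconstruction and the reference image."""
    edges_rec = gaussian_laplace(reconstruction.astype(np.float64), sigma)
    edges_ref = gaussian_laplace(reference.astype(np.float64), sigma)
    return float(np.mean(np.abs(edges_rec - edges_ref)))
```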
Tumor Morphology for Prediction of Poor Responses Early in Neoadjuvant Chemotherapy for Breast Cancer: A Multicenter Retrospective Study
Li, W.
Le, N. N.
Nadkarni, R.
Onishi, N.
Wilmes, L. J.
Gibbs, J. E.
Price, E. R.
Joe, B. N.
Mukhtar, R. A.
Gennatas, E. D.
Kornak, J.
Magbanua, M. J. M.
Van't Veer, L. J.
LeStage, B.
Esserman, L. J.
Hylton, N. M.
Tomography2024Journal Article, cited 0 times
Website
BACKGROUND: This multicenter and retrospective study investigated the additive value of tumor morphologic features derived from the functional tumor volume (FTV) tumor mask at pre-treatment (T0) and the early treatment time point (T1) in the prediction of pathologic outcomes for breast cancer patients undergoing neoadjuvant chemotherapy. METHODS: A total of 910 patients enrolled in the multicenter I-SPY 2 trial were included. FTV and tumor morphologic features were calculated from the dynamic contrast-enhanced (DCE) MRI. A poor response was defined as a residual cancer burden (RCB) class III (RCB-III) at surgical excision. The area under the receiver operating characteristic curve (AUC) was used to evaluate the predictive performance. The analysis was performed in the full cohort and in individual sub-cohorts stratified by hormone receptor (HR) and human epidermal growth factor receptor 2 (HER2) status. RESULTS: In the full cohort, the AUCs for the use of the FTV ratio and clinicopathologic data were 0.64 +/- 0.03 (mean +/- SD [standard deviation]). With morphologic features, the AUC increased significantly to 0.76 +/- 0.04 (p < 0.001). The ratio of the surface area to volume ratio between T0 and T1 was found to be the most contributing feature. All top contributing features were from T1. An improvement was also observed in the HR+/HER2- and triple-negative sub-cohorts. The AUC increased significantly from 0.56 +/- 0.05 to 0.70 +/- 0.06 (p < 0.001) and from 0.65 +/- 0.06 to 0.73 +/- 0.06 (p < 0.001), respectively, when adding morphologic features. CONCLUSION: Tumor morphologic features can improve the prediction of RCB-III compared to using FTV only at the early treatment time point.
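As a sketch of one of the morphologic features discussed above, the surface-area-to-volume ratio of a binary FTV mask can be computed with marching cubes; the voxel-counting volume estimate is a simplification.

```python
import numpy as np
from skimage import measure

def surface_to_volume_ratio(mask, spacing=(1.0, 1.0, 1.0)):
    """Surface area (marching-cubes mesh) over volume (voxel count) of a
    binary 3D tumor mask, with anisotropic voxel spacing."""
    verts, faces, _, _ = measure.marching_cubes(mask.astype(float), level=0.5,
                                                spacing=spacing)
    surface = measure.mesh_surface_area(verts, faces)
    volume = mask.sum() * np.prod(spacing)
    return surface / volume

# The most contributive feature in the study is then the ratio of this
# quantity between the pre-treatment (T0) and early-treatment (T1) masks.
```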
Impact of radiogenomics in esophageal cancer on clinical outcomes: A pilot study
Brancato, Valentina
Garbino, Nunzia
Mannelli, Lorenzo
Aiello, Marco
Salvatore, Marco
Franzese, Monica
Cavaliere, Carlo
2021Journal Article, cited 0 times
TCGA-ESCA
BACKGROUND: Esophageal cancer (ESCA) is the sixth most common malignancy in the world, and its incidence is rapidly increasing. Recently, several microRNAs (miRNAs) and messenger RNA (mRNA) targets were evaluated as potential biomarkers and regulators of epigenetic mechanisms involved in early diagnosis. In addition, computed tomography (CT) radiomic studies on ESCA improved the early stage identification and the prediction of response to treatment. Radiogenomics provides clinically useful prognostic predictions by linking molecular characteristics such as gene mutations and gene expression patterns of malignant tumors with medical images and could provide more opportunities in the management of patients with ESCA.
AIM: To explore the combination of CT radiomic features and molecular targets associated with clinical outcomes for characterization of ESCA patients.
METHODS: Fifteen patients with diagnosed ESCA were included in this study; their CT imaging data were extracted from The Cancer Imaging Archive and their transcriptomic (gene expression) data from The Cancer Genome Atlas, respectively. Cancer stage, history of significant alcohol consumption, and body mass index (BMI) were considered as clinical outcomes. Radiomic analysis was performed on CT images acquired after injection of contrast medium. In total, 1302 radiomic features were extracted from three-dimensional regions of interest by using PyRadiomics. Feature selection was performed using a correlation filter based on Spearman's correlation (ρ) and the Wilcoxon rank-sum test with respect to clinical outcomes. Radiogenomic analysis involved ρ analysis between radiomic features associated with clinical outcomes and transcriptomic signatures consisting of eight N6-methyladenosine RNA methylation regulators and five up-regulated miRNAs. The significance level was set at P < 0.05.
RESULTS: A total of 25, five, and 29 radiomic features survived feature selection when stage, alcohol history, and BMI, respectively, were considered as clinical outcomes. Radiogenomic analysis with stage as the clinical outcome revealed that six of the eight mRNA regulators and two of the five up-regulated miRNAs were significantly correlated with ten and three of the 25 selected radiomic features, respectively (-0.61 < ρ < -0.60 and 0.53 < ρ < 0.69, P < 0.05). Assuming alcohol history as the clinical outcome, no correlation was found between the five selected radiomic features and the mRNA regulators, while a significant correlation was found between one radiomic feature and three up-regulated miRNAs (ρ = -0.56, ρ = -0.64 and ρ = 0.61, P < 0.05). Radiogenomic analysis with BMI as the clinical outcome revealed that four mRNA regulators and one up-regulated miRNA were significantly correlated with ten and two radiomic features, respectively (-0.67 < ρ < -0.54 and 0.53 < ρ < 0.71, P < 0.05).
CONCLUSION: Our study revealed interesting relationships between the expression of eight N6-methyladenosine RNA regulators, as well as five up-regulated miRNAs, and CT radiomic features associated with clinical outcomes of ESCA patients.
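A hedged sketch of the two statistical steps described in the METHODS, Wilcoxon rank-sum screening of features against a binary outcome followed by Spearman correlation with molecular signatures, using SciPy; array layouts and thresholds are illustrative.

```python
import numpy as np
from scipy.stats import ranksums, spearmanr

def select_features(X, outcome, alpha=0.05):
    """Keep feature columns whose distributions differ between outcome groups."""
    keep = []
    for j in range(X.shape[1]):
        _, p = ranksums(X[outcome == 0, j], X[outcome == 1, j])
        if p < alpha:
            keep.append(j)
    return keep

def radiogenomic_pairs(X, signatures, selected, alpha=0.05):
    """Spearman correlations between selected radiomic features and signatures."""
    pairs = []
    for j in selected:
        for k in range(signatures.shape[1]):
            rho, p = spearmanr(X[:, j], signatures[:, k])
            if p < alpha:
                pairs.append((j, k, rho))
    return pairs
```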
Artificial intelligence radiogenomics for advancing precision and effectiveness in oncologic care (Review)
Trivizakis, Eleftherios
Papadakis, Georgios Z.
Souglakos, Ioannis
Papanikolaou, Nikolaos
Koumakis, Lefteris
Spandidos, Demetrios A.
Tsatsakis, Aristidis
Karantanas, Apostolos H.
Marias, Kostas
2020Journal Article, cited 0 times
TCGA-BRCA
TCGA-LGG
The new era of artificial intelligence (AI) has introduced revolutionary data‑driven analysis paradigms that have led to significant advancements in information processing techniques in the context of clinical decision‑support systems. These advances have created unprecedented momentum in computational medical imaging applications and have given rise to new precision medicine research areas. Radiogenomics is a novel research field focusing on establishing associations between radiological features and genomic or molecular expression in order to shed light on the underlying disease mechanisms and enhance diagnostic procedures towards personalized medicine. The aim of the current review was to elucidate recent advances in radiogenomics research, focusing on deep learning with emphasis on radiology and oncology applications. The main deep learning radiogenomics architectures, together with the clinical questions addressed, and the achieved genetic or molecular correlations are presented, while a performance comparison of the proposed methodologies is conducted. Finally, current limitations, potentially understudied topics and future research directions are discussed.
Brain Tumor Detection From MRI Images With Using Proposed Deep Learning Model : The Partial Correlation-Based Channel Selection
YILMAZ, Atınç
Turkish Journal of Electrical Engineering & Computer Sciences2021Journal Article, cited 0 times
BraTS-TCGA-GBM
MRI based genomic analysis of glioma using three pathway deep convolutional neural network for IDH classification
GORE, SONAL
JAGTAP, JAYANT
Turkish Journal of Electrical Engineering & Computer Sciences2021Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
Radiogenomics
BRAIN
Magnetic Resonance Imaging (MRI)
Quantitative integration of radiomic and genomic data improves survival prediction of low-grade glioma patients
Ma, Chen
Yao, Zhihao
Zhang, Qinran
Zou, Xiufen
Mathematical Biosciences and Engineering2021Journal Article, cited 0 times
TCGA-LGG
Segmentation Labels
LGG
Radiomics
radiogenomics
Intelligent immune clonal optimization algorithm for pulmonary nodule classification
Mao, Q.
Zhao, S.
Ren, L.
Li, Z.
Tong, D.
Yuan, X.
Li, H.
Math Biosci Eng2021Journal Article, cited 0 times
LIDC-IDRI
Classification
Computer Aided Diagnosis (CADx)
Computer-aided diagnosis (CAD) of pulmonary nodules is an effective approach for early detection of lung cancers, and pulmonary nodule classification is one of the key issues in a CAD system. However, CAD suffers from low accuracy and a high false-positive rate (FPR) in pulmonary nodule classification. To solve these problems, a novel method using an intelligent immune clonal selection and classification algorithm is proposed and developed in this work. First, based on the mechanism and characteristics of chaotic motion under a logistic mapping, the proposed method selects the control factor of the optimal chaotic state to generate an initial population with randomness and ergodicity, addressing the lack of diversity in the initial population of the immune algorithm. Second, considering the small scale of the Gaussian mutation operator (GMO) and the large scale of the Cauchy mutation operator (CMO), an intelligent mutation strategy with a novel mutation control factor is developed, yielding a Gauss-Cauchy hybrid mutation operator. Finally, the intelligent immune clonal optimization algorithm is proposed and developed for pulmonary nodule classification. To verify its accuracy, the proposed method was used to analyze 90 CT scans with 652 nodules. The experimental results revealed that the proposed method achieved an accuracy of 97.87% and produced 1.52 false positives per scan (FPs/scan), indicating high accuracy and a low FPR.
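Two ingredients of the method above translate into short NumPy fragments: a logistic-map chaotic initialization (ergodic in (0,1) at r = 4) and a Gauss-Cauchy hybrid mutation. The control-factor schedule is an assumption, not the paper's exact design.

```python
import numpy as np

def chaotic_population(n_individuals, n_genes, r=4.0, x0=0.37):
    """Fill a population with iterates of the logistic map x <- r*x*(1-x),
    which is ergodic over (0, 1) at r = 4."""
    pop = np.empty((n_individuals, n_genes))
    x = x0
    for i in range(n_individuals):
        for j in range(n_genes):
            x = r * x * (1.0 - x)
            pop[i, j] = x
    return pop

def gauss_cauchy_mutate(individual, factor, rng=None):
    """Blend small-scale Gaussian and large-scale Cauchy perturbations;
    `factor` in [0, 1] shifts the mix from exploration toward exploitation."""
    if rng is None:
        rng = np.random.default_rng()
    gauss = rng.normal(0.0, 1.0, individual.shape)
    cauchy = rng.standard_cauchy(individual.shape)
    return individual + factor * gauss + (1.0 - factor) * cauchy
```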
SGEResU-Net for brain tumor segmentation
Liu, D.
Sheng, N.
He, T.
Wang, W.
Zhang, J.
Zhang, J.
Math Biosci Eng2022Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
BRAIN
U-Net
Segmentation
Image denoising
The precise segmentation of tumor regions plays a pivotal role in the diagnosis and treatment of brain tumors. However, due to the variable location, size, and shape of brain tumors, automatic segmentation of brain tumors is a relatively challenging application. Recently, U-Net related methods, which largely improve the segmentation accuracy of brain tumors, have become the mainstream for this task. Following the merits of the 3D U-Net architecture, this work constructs a novel 3D U-Net model called SGEResU-Net to segment brain tumors. SGEResU-Net simultaneously embeds residual blocks and spatial group-wise enhance (SGE) attention blocks into a single 3D U-Net architecture, in which SGE attention blocks are employed to enhance the feature learning of semantic regions and reduce possible noise and interference with almost no extra parameters. In addition, the self-ensemble module is utilized to improve the segmentation accuracy of brain tumors. Evaluation experiments on the Brain Tumor Segmentation (BraTS) Challenge 2020 and 2021 benchmarks demonstrate the effectiveness of the proposed SGEResU-Net for this medical application. Moreover, it achieves DSC values of 83.31, 91.64 and 86.85%, as well as Hausdorff distances (95%) of 19.278, 5.945 and 7.567 for the enhancing tumor, whole tumor, and tumor core on the BraTS 2021 dataset, respectively.
A region-adaptive non-local denoising algorithm for low-dose computed tomography images
Zhang, Pengcheng
Liu, Yi
Gui, Zhiguo
Chen, Yang
Jia, Lina
Mathematical Biosciences and Engineering2022Journal Article, cited 0 times
LDCT-and-Projection-data
Algorithm Development
Segmentation
Classification
Low-dose computed tomography (LDCT) can effectively reduce radiation exposure in patients. However, with such dose reductions, large increases in speckled noise and streak artifacts occur, resulting in seriously degraded reconstructed images. The non-local means (NLM) method has shown potential for improving the quality of LDCT images. In the NLM method, similar blocks are obtained using fixed directions over a fixed range. However, the denoising performance of this method is limited. In this paper, a region-adaptive NLM method is proposed for LDCT image denoising. In the proposed method, pixels are classified into different regions according to the edge information of the image. Based on the classification results, the adaptive searching window, block size and filter smoothing parameter could be modified in different regions. Furthermore, the candidate pixels in the searching window could be filtered based on the classification results. In addition, the filter parameter could be adjusted adaptively based on intuitionistic fuzzy divergence (IFD). The experimental results showed that the proposed method performed better in LDCT image denoising than several of the related denoising methods in terms of numerical results and visual quality.
An ensemble-acute lymphoblastic leukemia model for acute lymphoblastic leukemia image classification
The timely diagnosis of acute lymphoblastic leukemia (ALL) is of paramount importance for enhancing the treatment efficacy and the survival rates of patients. In this study, we seek to introduce an ensemble-ALL model for the image classification of ALL, with the goal of enhancing early diagnostic capabilities and streamlining the diagnostic and treatment processes for medical practitioners. In this study, a publicly available dataset is partitioned into training, validation, and test sets. A diverse set of convolutional neural networks, including InceptionV3, EfficientNetB4, ResNet50, CONV_POOL-CNN, ALL-CNN, Network in Network, and AlexNet, are employed for training. The top-performing four individual models are meticulously chosen and integrated with the squeeze-and-excitation (SE) module. Furthermore, the two most effective SE-embedded models are harmoniously combined to create the proposed ensemble-ALL model. This model leverages the Bayesian optimization algorithm to enhance its performance. The proposed ensemble-ALL model attains remarkable accuracy, precision, recall, F1-score, and kappa scores, registering at 96.26, 96.26, 96.26, 96.25, and 91.36%, respectively. These results surpass the benchmarks set by state-of-the-art studies in the realm of ALL image classification. This model represents a valuable contribution to the field of medical image recognition, particularly in the diagnosis of acute lymphoblastic leukemia, and it offers the potential to enhance the efficiency and accuracy of medical professionals in the diagnostic and treatment processes.
Texture Classification Study of MR Images for Hepatocellular Carcinoma
QIU, Jia-jun
WU, Yue
HUI, Bei
LIU, Yan-bo
电子科技大学学报2019Journal Article, cited 0 times
TCGA-LIHC
LIVER
Classification
Combining wavelet multi-resolution analysis and statistical analysis methods, a composite texture classification model is proposed to evaluate its value in the computer-aided diagnosis of hepatocellular carcinoma (HCC) and normal liver tissue based on magnetic resonance (MR) images. First, training samples are divided into two groups by category, and statistics of the wavelet coefficients are calculated in each group. Second, two discretizations are performed on the wavelet coefficients of a new sample based on the two sets of statistical results, and two groups of features are extracted by histogram, co-occurrence matrix, run-length matrix, etc. Finally, classification is performed twice based on the two groups of features to calculate the category attribute probabilities, and then a decision is made. The experimental results demonstrate that the proposed model obtains better classification performance than routine methods, which is valuable for the computer-aided diagnosis of HCC and normal liver tissue based on MR images.
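A minimal sketch of the wavelet-statistics step, assuming 2D MR slices and a Daubechies wavelet with PyWavelets; the specific wavelet, level, and statistics are illustrative rather than the paper's full composite model.

```python
import numpy as np
import pywt

def subband_statistics(image, wavelet="db4", level=2):
    """Mean and standard deviation of each wavelet detail subband."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    stats = []
    for detail in coeffs[1:]:            # skip the approximation band
        for band in detail:              # horizontal, vertical, diagonal
            stats.extend([band.mean(), band.std()])
    return np.array(stats)

# Per-class statistics gathered from the training set would then drive the
# two discretizations applied to a new sample's coefficients.
```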
Lung Cancer Diagnosis and Treatment Using AI and Mobile Applications
Rajesh, P.
Murugan, A.
Muruganantham, B.
Ganesh Kumar, S.
International Journal of Interactive Mobile Technologies (iJIM)2020Journal Article, cited 0 times
Website
Computer Aided Diagnosis (CADx)
LIDC-IDRI
Image denoising
LUNG
Non-small cell lung cancer (NSCLC)
Cancer has become very common in this evolving world; technological advancement and increased radiation exposure have contributed to its prevalence. Various types of cancer exist, including skin, breast, prostate, blood, colorectal, kidney, and lung cancer. Among these, the mortality rate is highest for lung cancer, which is hard to diagnose and is often detected only in advanced stages. Small cell lung cancer and non-small cell lung cancer are the two types, of which non-small cell lung cancer (NSCLC) is the most common, making up 80 to 85 percent of all cases [1]. Advances in digital image processing and artificial intelligence have greatly helped medical image analysis and Computer-Aided Diagnosis (CAD). Numerous studies have been carried out in this field to improve the detection and prediction of cancerous tissues. In current methods, traditional image processing techniques are applied for image processing, noise removal, and feature extraction. A few good approaches apply artificial intelligence and produce better results. However, no research has achieved 100% accuracy in nodule detection or early detection of cancerous nodules, nor sufficiently fast processing. In this paper [Figure 1], we apply artificial intelligence techniques to process CT (Computed Tomography) scan images for data collection and data model training. The DICOM image data are saved as NumPy files, with all medical information extracted from the files for training. With the trained data, we apply deep learning for noise removal and feature extraction. We can process a large volume of medical images for data collection, image processing, and the detection and prediction of nodules. The patient is made well aware of the disease and enabled to track their health using various mobile applications available in the online stores for iOS and Android mobile devices.
GLCM and CNN Deep Learning Model for Improved MRI Breast Tumors Detection
Alsalihi, Aya A
Aljobouri, Hadeel K
ALTameemi, Enam Azez Khalel
International Journal of Online & Biomedical Engineering2022Journal Article, cited 0 times
Website
BREAST-DIAGNOSIS
Deep convolution neural network
Quantitative Analysis of Breast Cancer NACT Response on DCE-MRI Using Gabor Filter Derived Radiomic Features
Moyya, Priscilla Dinkar
Asaithambi, Mythili
International Journal of Online and Biomedical Engineering (iJOE)2022Journal Article, cited 0 times
Website
QIN Breast
Breast cancer
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Gabor filter bank
Neoadjuvant chemotherapy
Radiomic features
Treatment response
In this work, an attempt has been made to quantify the treatment response to Neoadjuvant Chemotherapy (NACT) on the publicly available QIN-Breast collection of the TCIA database (N = 25) using Gabor filter derived radiomic features. The Gabor filter bank is constructed using 5 different scales and 7 different orientations. Different radiomic features were extracted from Gabor filtered Dynamic Contrast Enhanced Magnetic Resonance images (DCE-MRI) of patients at 3 different visits (Visit 1: before, Visit 2: after the 1st cycle, and Visit 3: after the last cycle of NACT). The extracted radiomic features were analyzed statistically, and the Area Under the Receiver Operating Characteristic curve (AUROC) was calculated. Results show that the Gabor derived radiomic features could differentiate the pathological differences among all three visits. Energy showed a significant difference across all three orientations, particularly between Visits 2 & 3. Entropy from λ=2 and θ=30° between Visits 2 & 3, and Skewness from λ=2 and θ=120° between Visits 1 & 3, could differentiate the treatment response with high statistical significance of p=0.006 and 0.001, respectively. From the ROC analysis, the better predictors were Short Run Emphasis (SRE), Short Zone Emphasis (SZE), and Energy between Visits 1 & 3, achieving AUROCs of 76.38%, 75.16%, and 71.10%, respectively. Further, the results suggest that the radiomic features are capable of quantitatively comparing breast NACT prognosis across the multi-oriented Gabor filters.
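A 5-scale by 7-orientation Gabor filter bank like the one described can be assembled with scikit-image; the frequency spacing below is a hypothetical choice for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.filters import gabor_kernel

def gabor_responses(image, n_scales=5, n_orientations=7):
    """Filter an image with a bank of Gabor kernels (real part only)."""
    responses = []
    for s in range(n_scales):
        frequency = 0.05 * 2 ** s            # hypothetical scale spacing
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations
            kernel = np.real(gabor_kernel(frequency, theta=theta))
            responses.append(fftconvolve(image, kernel, mode="same"))
    return responses                         # 35 filtered images
```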
DIAGNOSIS OF LUNG CANCER USING MULTISCALE CONVOLUTIONAL NEURAL NETWORK
Homayoon Yektai
Mohammad Manthouri
Biomedical Engineering: Applications, Basis and Communications2020Journal Article, cited 0 times
Website
LIDC
convolutional neural network
Lung cancer is one of the dangerous diseases that causes a huge number of cancer deaths worldwide. Early detection of lung cancer is the only possible way to improve a patient's chance of survival. This study presents an innovative automated diagnostic classification method for Computed Tomography (CT) images of lungs. In this paper, the CT scans of lung images were analyzed with multiscale convolution. The entire lung is segmented from the CT images and the parameters are calculated from the segmented image. The use of image processing techniques and pattern identification in the detection of lung cancer from CT images reduces human error in detecting tumors and speeds up diagnosis. Artificial Neural Networks (ANN) have been widely used to detect lung cancer and have significantly reduced the percentage of errors. Therefore, in this paper, the Convolutional Neural Network (CNN), which is the most effective method, is used for the detection of various types of cancers. This study presents a Multiscale Convolutional Neural Network (MCNN) approach for the classification of tumors. Based on the structure of the MCNN, which presents the CT image to several deep convolutional neural networks at different sizes and resolutions, the classical handcrafted feature extraction step is avoided. The proposed approach gives better classification rates than the classical state-of-the-art methods, allowing a safer Computer-Aided Diagnosis of pleural cancer. This study reaches a diagnostic accuracy of 93.7±0.3 using the multiscale convolution technique, which reveals the efficiency of the proposed method.
A Hybrid Approach for 3D Lung Segmentation in CT Images Using Active Contour and Morphological Operation
Lung segmentation is the initial step for the detection and diagnosis of lung-related abnormalities and diseases. In a CAD system for lung cancer, this step traces the boundary of the pulmonary region from the thorax in CT images. It decreases the overhead of further steps in the CAD system by reducing the search space for the ROIs. The major issue and challenging task for segmentation is the inclusion of juxtapleural nodules in the segmented lungs. This chapter attempts 3D lung segmentation of CT images using active contours and morphological operations. The major steps in the proposed approach are: preprocessing through various techniques; Otsu's thresholding to binarize the image; morphological operations to eliminate undesired regions; and, finally, active contours to segment the lungs in 3D. For the experiment, 10 subjects were taken from the public LIDC-IDRI dataset. The proposed method achieved a Jaccard similarity index of 0.979, a Dice similarity coefficient of 0.989, and a volume overlap error of 0.073 when compared to ground truth.
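A simplified 2D sketch of the thresholding and morphology steps follows (the chapter works in 3D and adds active contours on top); the structuring-element size and small-object threshold are illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, disk, remove_small_objects
from skimage.segmentation import clear_border

def coarse_lung_mask(ct_slice):
    """Otsu threshold plus morphology: a coarse lung mask from one CT slice."""
    binary = ct_slice < threshold_otsu(ct_slice)       # air/lung is dark on CT
    binary = clear_border(binary)                      # drop border-touching background
    binary = remove_small_objects(binary, min_size=500)
    return binary_closing(binary, disk(5))             # re-include juxtapleural nodules
```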
Towards Better Segmentation of Abnormal Part in Multimodal Images Using Kernel Possibilistic C Means Particle Swarm Optimization With Morphological Reconstruction Filters: Combination of KFCM and PSO With Morphological Filters
R., Sumathi
Mandadi, Venkatesulu
2021Journal Article, cited 0 times
RIDER Breast MRI
The authors designed an automated framework to segment tumors from various image sequences, such as T1, T2, and post-processed MRI multimodal images. The contrast-limited adaptive histogram equalization method is used for preprocessing to enhance the intensity level and view the tumor part clearly. By combining kernel possibilistic c-means clustering with the particle swarm optimization technique, the tumor part is segmented, and morphological filters are applied to remove unrelated outlier pixels in the segmented image to detect the accurate tumor part. The authors collected various image sequences from online resources such as the Harvard brain dataset, BRATS, and RIDER, and a few from clinical datasets. Efficiency is ensured by computing various performance metrics, including the Jaccard index, MSE, PSNR, sensitivity, specificity, accuracy, and computational time. The proposed approach yields 97.06% segmentation accuracy and 98.08% classification accuracy for multimodal images, with an average computational time of 5 s across all multimodal images.
A Block-Based Arithmetic Entropy Encoding Scheme for Medical Images
Sharma, Urvashi
Sood, Meenakshi
Puthooran, Emjee
Kumar, Yugal
2020Journal Article, cited 0 times
RIDER Breast MRI
The digitization of the human body, especially for the treatment of diseases, can generate a large volume of data. This medical data has high resolution and bit depth. In the field of medical diagnosis, lossless compression techniques are widely adopted for the efficient archiving and transmission of medical images. This article presents an efficient coding solution based on a predictive coding technique. The proposed technique consists of the Resolution Independent Gradient Edge Predictor16 (RIGED16) and Block Based Arithmetic Encoding (BAAE). The objective of this technique is to find universal threshold values for prediction and provide an optimum block size for encoding. The validity of the proposed technique is tested on real images as well as standard images. The simulation results of the proposed technique are compared with some well-known existing compression techniques, revealing that the proposed technique gives higher coding efficiency than the other techniques.
An Innovative Model for Detecting Brain Tumors and Glioblastoma Multiforme Disease Patterns
In this article, an innovative model is proposed for detecting brain tumors and glioblastoma multiforme disease patterns (DBT-GBM) in medical imaging. The DBT-GBM model mainly includes five steps: image conversion to the L* component of the L*a*b* space; selection of image sample regions; calculation of the average color values; image pixel classification using the minimum distance classifier; and the segmentation operation. In this approach, the minimum distance classifier is used to classify each pixel by calculating the Euclidean distance between that pixel and each color marker of the pattern. In the experiments, the authors apply the DBT-GBM model to real-time data, the samples of three anatomic sections of a T1w 3D MRI (axial, sagittal, and coronal cross-sections) on the GBM-3D-Slicer datasets and the CBTC datasets. The implementation results show that the proposed DBT-GBM robustly detects GBM disease patterns and cancer nuclei (involving the omics indicative of brain tumors pathologically) in medical imaging, leading to improved segmentation performance compared with existing approaches.
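A minimal sketch of the minimum distance pixel classifier follows. A common variant, assumed here, classifies pixels by their a*/b* chromaticity after L*a*b* conversion, with color markers averaged from user-selected sample regions; this may differ from the authors' exact channel choice.

```python
import numpy as np
from skimage.color import rgb2lab

def classify_pixels(rgb_image, sample_masks):
    """Assign each pixel to the nearest color marker (Euclidean distance
    in a*/b*), with one marker averaged from each boolean sample mask."""
    lab = rgb2lab(rgb_image)
    ab = lab[:, :, 1:]                                  # a* and b* channels
    markers = np.stack([ab[m].mean(axis=0) for m in sample_masks])
    dists = np.linalg.norm(ab[:, :, None, :] - markers[None, None, :, :], axis=-1)
    return np.argmin(dists, axis=-1)                    # per-pixel class label
```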
Qualitative stomach cancer assessment by multi-slice computed tomography
Chacón, Gerardo
Rodríguez, Johel E.
Bermúdez, Valmore
Vera, Miguel
Hernandez, Juan Diego
Pardo, Aldo
Lameda, Carlos
Madriz, Delia
Bravo, Antonio José
Ingeniare. Revista chilena de ingeniería2020Journal Article, cited 0 times
Website
TCGA-STAD
Radiomics
Radiogenomics
STOMACH
Computed Tomography (CT)
A theoretical framework based on the Borrmann classification and the Japanese gastric cancer classification is proposed in order to qualitatively assess stomach cancer from three-dimensional (3-D) images obtained using multi-slice computerized tomography (MSCT). The main goal of this paper is to demonstrate, through visual inspection, the capacity of MSCT to effectively reflect the morphopathological characteristics of stomach adenocarcinoma types. The idea is to contrast the theoretical pathological characteristics with those that can be discerned from MSCT images available in clinical datasets. This research corresponds to a study with a mixed (qualitative and quantitative) approach, applied to a total of 46 images available for diagnosed patients, from the data collection included in the Cancer Genome Atlas Stomach Adenocarcinoma (TCGA-STAD). The conclusions are established from a comparative analysis based on document review and direct observation, the product being a matrix of compliance with the specific qualities of the theoretical standards in the visualization of images performed by the clinical specialist on the datasets. A total of 6210 slices from 46 MSCT explorations were visually inspected, and the visual characteristics were then contrasted with the theoretical characteristics obtained from the cancer classifications. These characteristics match in about 96% of the images inspected. The approach's effectiveness, measured using the positive predictive value, is about 96.50%. The results also show a sensitivity of 97.83% and a specificity of 98.27%. MSCT is a precise imaging modality for the qualitative assessment of the staging of stomach cancer. Keywords: Stomach cancer; adenocarcinoma; macroscopic assessment; Borrmann classification; Japanese classification; medical imaging; multi-slice computerized tomography
Boundary Aware Semantic Segmentation using Pyramid-dilated Dense U-Net for Lung Segmentation in Computed Tomography Images
Agnes, S. Akila
2023Journal Article, cited 0 times
LCTSC
Segmentation
lung
Aim: The main objective of this work is to propose an efficient segmentation model for accurate and robust lung segmentation from computed tomography (CT) images, even when the lung contains abnormalities such as juxtapleural nodules, cavities, and consolidation.
Methodology: A novel deep learning-based segmentation model, pyramid-dilated dense U-Net (PDD-U-Net), is proposed to directly segment lung regions from the whole CT image. The model is integrated with pyramid-dilated convolution blocks to capture and preserve multi-resolution spatial features effectively. In addition, shallow and deeper stream features are embedded in the nested U-Net structure at the decoder side to enhance the segmented output. The effect of three loss functions is investigated in this paper, as the medical image analysis method requires precise boundaries. The proposed PDD-U-Net model with shape-aware loss function is tested on the lung CT segmentation challenge (LCTSC) dataset with standard lung CT images and the lung image database consortium-image database resource initiative (LIDC-IDRI) dataset containing both typical and pathological lung CT images.
Results: The performance of the proposed method is evaluated using Intersection over Union, dice coefficient, precision, recall, and average Hausdorff distance metrics. Segmentation results showed that the proposed PDD-U-Net model outperformed other segmentation methods and achieved a 0.983 dice coefficient for the LIDC-IDRI dataset and a 0.994 dice coefficient for the LCTSC dataset.
Conclusions: The proposed PDD-U-Net model with shape-aware loss function is an effective and accurate method for lung segmentation from CT images, even in the presence of abnormalities such as cavities, consolidation, and nodules. The model's integration of pyramid-dilated convolution blocks and nested U-Net structure at the decoder side, along with shape-aware loss function, contributed to its high segmentation accuracy. This method could have significant implications for the computer-aided diagnosis system, allowing for quick and accurate analysis of lung regions.
A Novel Hybridized Feature Extraction Approach for Lung Nodule Classification Based on Transfer Learning Technique
Bruntha, P. M.
Pandian, S. I. A.
Anitha, J.
Abraham, S. S.
Kumar, S. N.
J Med Phys2022Journal Article, cited 0 times
Website
LIDC-IDRI
Convolutional Neural Network (CNN)
Support Vector Machine (SVM)
Classification
residual neural network
transfer learning
Purpose: In the field of medical diagnosis, deep learning-based computer-aided detection will reduce the burden on physicians, especially in the case of lung cancer nodule classification. Materials and Methods: A hybridized model that integrates deep features from a Residual Neural Network using transfer learning and handcrafted features from the histogram of oriented gradients feature descriptor is proposed to classify lung nodules as benign or malignant. The intrinsic convolutional neural network (CNN) features are incorporated, and they can resolve the drawbacks of handcrafted features, which do not completely reflect the specific characteristics of a nodule. At the same time, they also reduce the need for a large-scale annotated dataset for CNNs. For classifying malignant and benign nodules, a radial basis function support vector machine is used. The proposed hybridized model is evaluated on the LIDC-IDRI dataset. Results: It achieved an accuracy of 97.53%, sensitivity of 98.62%, specificity of 96.88%, precision of 95.04%, F1 score of 0.9679, false-positive rate of 3.117%, and false-negative rate of 1.38%, and was compared with other state-of-the-art techniques. Conclusions: The performance of the proposed hybridized feature-based classification technique is better than the deep features-based classification technique in lung nodule classification.
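A sketch of the handcrafted half of the hybrid pipeline, HOG descriptors from nodule patches feeding an RBF-kernel SVM; in the paper these are concatenated with ResNet transfer-learning features first, and the data below is a synthetic stand-in.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(patches):
    """Histogram-of-oriented-gradients descriptor per 2D patch."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

rng = np.random.default_rng(0)
patches = rng.random((20, 64, 64))          # stand-in nodule patches
labels = rng.integers(0, 2, size=20)        # 0 = benign, 1 = malignant
clf = SVC(kernel="rbf").fit(hog_features(patches), labels)
```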
Improving Generalization of Deep Learning Models for Diagnostic Pathology by Increasing Variability in Training Data: Experiments on Osteosarcoma Subtypes
Tang, Haiming
Sun, Nanfei
Shen, Steven
2021Journal Article, cited 0 times
Osteosarcoma-Tumor-Assessment
BACKGROUND: Artificial intelligence is making rapid progress in diagnostic pathology. A large number of studies applying deep learning models to histopathological images have been published in recent years. While many studies claim high accuracy, they may fall into the pitfalls of overfitting and lack of generalization due to the high variability of histopathological images.
AIMS AND OBJECTIVES: We use the training of an osteosarcoma classification model as an example to illustrate the pitfalls of overfitting and how adding variability to the training inputs can help improve model performance.
MATERIALS AND METHODS: We use the publicly available osteosarcoma dataset to retrain a previously published classification model for osteosarcoma. We partition the same set of images into training and testing datasets differently than the original study: the test dataset consists of images from one patient, while the training dataset consists of images from all other patients. We also show the influence of training data variability on model performance by collecting a minimal dataset covering 10 osteosarcoma subtypes as well as benign tissues and benign bone tumors for differentiation.
RESULTS: The performance of the re-trained model on the test set under the new partition schema declines dramatically, indicating a lack of model generalization and overfitting. We show that adding more and more subtypes into the training data step by step under the same model schema yields a series of coherent models with increasing performance.
CONCLUSIONS: In conclusion, we put forward data preprocessing and collection tactics for histopathological images of high variability to avoid the pitfalls of overfitting and to build deep learning models with higher generalization ability.
Integrative Analysis of mRNA, microRNA, and Protein Correlates of Relative Cerebral Blood Volume Values in GBM Reveals the Role for Modulators of Angiogenesis and Tumor Proliferation
Rao, Arvind
Manyam, Ganiraju
Rao, Ganesh
Jain, Rajan
Cancer Informatics2016Journal Article, cited 5 times
Website
TCGA-GBM
angiogenesis
data integration
imaging-genomics
pathway analysis
perfusion imaging
rCBV
Dynamic susceptibility contrast-enhanced magnetic resonance imaging is routinely used to provide hemodynamic assessment of brain tumors as a diagnostic as well as a prognostic tool. Recently, it was shown that the relative cerebral blood volume (rCBV), obtained from the contrast-enhancing as well as -nonenhancing portion of glioblastoma (GBM), is strongly associated with overall survival. In this study, we aim to characterize the genomic correlates (microRNA, messenger RNA, and protein) of this vascular parameter. This study aims to provide a comprehensive radiogenomic and radioproteomic characterization of the hemodynamic phenotype of GBM using publicly available imaging and genomic data from the Cancer Genome Atlas GBM cohort. Based on this analysis, we identified pathways associated with angiogenesis and tumor proliferation underlying this hemodynamic parameter in GBM.
Approaches to uncovering cancer diagnostic and prognostic molecular signatures
Hong, Shengjun
Huang, Yi
Cao, Yaqiang
Chen, Xingwei
Han, Jing-Dong J
Molecular & Cellular Oncology2014Journal Article, cited 2 times
Website
Algorithm Development
The recent rapid development of high-throughput technology enables the study of molecular signatures for cancer diagnosis and prognosis at multiple levels, from genomic and epigenomic to transcriptomic. These unbiased large-scale scans provide important insights into the detection of cancer-related signatures. In addition to single-layer signatures, such as gene expression and somatic mutations, integrating data from multiple heterogeneous platforms using a systematic approach has been proven to be particularly effective for the identification of classification markers. This approach not only helps to uncover essential driver genes and pathways in the cancer network that are responsible for the mechanisms of cancer development, but will also lead us closer to the ultimate goal of personalized cancer therapy.
Evaluating long-term outcomes via computed tomography in lung cancer screening
Wu, D
Liu, R
Levitt, B
Riley, T
Baumgartner, KB
J Biom Biostat2016Journal Article, cited 0 times
NLST
LDCT
lung
Cancer Screening
Associations between gene expression profiles of invasive breast cancer and Breast Imaging Reporting and Data System MRI lexicon
Kim, Ga Ram
Ku, You Jin
Cho, Soon Gu
Kim, Sei Joong
Min, Byung Soh
Annals of Surgical Treatment and Research2017Journal Article, cited 3 times
Website
TCGA-BRCA
Radiogenomics
BI-RADS
BREAST
Magnetic resonance imaging (MRI)
Gene expression profiling
Purpose: To evaluate whether the Breast Imaging Reporting and Data System (BI-RADS) MRI lexicon could reflect the genomic information of breast cancers and to suggest intuitive imaging features as biomarkers.; Methods: Matched breast MRI data from The Cancer Imaging Archive and gene expression profile from The Cancer Genome Atlas of 70 invasive breast cancers were analyzed. Magnetic resonance images were reviewed according to the BI-RADS MRI lexicon of mass morphology. The cancers were divided into 2 groups of gene clustering by gene set enrichment analysis. Clinicopathologic and imaging characteristics were compared between the 2 groups.; Results: The luminal subtype was predominant in the group 1 gene set and the triple-negative subtype was predominant in the group 2 gene set (55 of 56, 98.2% vs. 9 of 14, 64.3%). Internal enhancement descriptors were different between the 2 groups; heterogeneity was most frequent in group 1 (27 of 56, 48.2%) and rim enhancement was dominant in group 2 (10 of 14, 71.4%). In group 1, the gene sets related to mammary gland development were overexpressed whereas the gene sets related to mitotic cell division were overexpressed in group 2.; Conclusion: We identified intuitive imaging features of breast MRI associated with distinct gene expression profiles using the standard imaging variables of BI-RADS. The internal enhancement pattern on MRI might reflect specific gene expression profiles of breast cancers, which can be recognized by visual distinction.
Automated Classification of Lung Diseases in Computed Tomography Images Using a Wavelet Based Convolutional Neural Network
Matsuyama, Eri
Tsai, Du-Yih
Journal of Biomedical Science and Engineering2018Journal Article, cited 0 times
Website
TCGA-LUAD
TCGA-LUSC
lung cancer
wavelet transform
Application of Sparse-Coding Super-Resolution to 16-Bit DICOM Images for Improving the Image Resolution in MRI
Ota, Junko
Umehara, Kensuke
Ishimaru, Naoki
Ishida, Takayuki
Open Journal of Medical Imaging2017Journal Article, cited 1 times
Website
REMBRANDT
Algorithm Development
Magnetic Resonance Imaging (MRI)
super-resolution (SR) schemes
sparse-coding super resolution (ScSR)
Super-Resolution Imaging of Mammograms Based on the Super-Resolution Convolutional Neural Network
Umehara, Kensuke
Ota, Junko
Ishida, Takayuki
Open Journal of Medical Imaging2017Journal Article, cited 0 times
Website
CBIS-DDSM
breast cancer
Patient-Wise Versus Nodule-Wise Classification of Annotated Pulmonary Nodules using Pathologically Confirmed Cases
Aggarwal, Preeti
Vig, Renu
Sardana, HK
Journal of Computers2013Journal Article, cited 5 times
Website
LIDC-IDRI
Classification
Computer Aided Detection (CADe)
LUNG
This paper presents a novel framework for combining well-known shape, texture, size, and resolution informatics descriptors of solitary pulmonary nodules (SPNs) detected using CT scans. The proposed methodology evaluates the performance of a classifier in differentiating benign, malignant, and metastatic SPNs using 246 chest CT scans. Both patient-wise and nodule-wise diagnostic reports, available for 80 patients, were used in differentiating the SPNs, and the results were compared. For patient-wise data, a model with an efficiency of 62.55% was generated with labeled nodules; using a semi-supervised approach, the labels of the remaining unknown nodules were predicted, and finally a classification accuracy of 82.32% was achieved with all labeled nodules. For nodule-wise data, the ground-truth database of labeled nodules was expanded from a very small ground truth using a content-based image retrieval (CBIR) method, achieving a precision of 98%. The proposed methodology not only avoids unnecessary biopsies but also efficiently labels unknown nodules using pre-diagnosed cases, which can certainly help physicians in diagnosis.
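The semi-supervised label-expansion step described above can be illustrated with scikit-learn's LabelSpreading, standing in here as a generic substitute for the paper's CBIR-based propagation; the feature matrix and label counts below are placeholders:

```python
# Hedged sketch: propagate labels from a small set of diagnostically confirmed
# nodules to unlabeled ones through feature-space similarity.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Rows: nodules; columns: shape/texture/size descriptors (placeholder values).
X = np.random.rand(100, 12)
y = np.full(100, -1)                   # -1 marks unlabeled nodules
y[:20] = np.random.randint(0, 3, 20)   # 20 nodules with known labels:
                                       # 0 = benign, 1 = malignant, 2 = metastasis

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y)
predicted = model.transduction_        # labels inferred for all nodules
```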
3D PULMONARY NODULES DETECTION USING FAST MARCHING SEGMENTATION
Paing, MP
Choomchuay, S
Journal of Fundamental and Applied Sciences2017Journal Article, cited 1 times
Website
LungCT-Diagnosis
lung cancer
automated computer aided diagnosis
lung parenchyma segmentation
fast marching method
random forest classifier
DeepCADe: A Deep Learning Architecture for the Detection of Lung Nodules in CT Scans
Early detection of lung nodules in thoracic Computed Tomography (CT) scans is of great importance for the successful diagnosis and treatment of lung cancer. Due to improvements in screening technologies, and an increased demand for their use, radiologists are required to analyze an ever-increasing amount of image data, which can affect the quality of their diagnoses. Computer-Aided Detection (CADe) systems are designed to assist radiologists in this endeavor. In this thesis, we present DeepCADe, a novel CADe system for the detection of lung nodules in thoracic CT scans which produces improved results compared to the state-of-the-art in this field of research. CT scans are grayscale images, so the terms scans and images are used interchangeably in this work. DeepCADe was trained with the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database, which contains 1018 thoracic CT scans with nodules of different shape and size, and is built on a Deep Convolutional Neural Network (DCNN), which is trained using the backpropagation algorithm to extract volumetric features from the input data and detect lung nodules in sub-volumes of CT images. Considering only lung nodules that have been annotated by at least three radiologists, DeepCADe achieves a 2.1% improvement in sensitivity (true positive rate) over the best result in the current published scientific literature, assuming an equal number of false positives (FPs) per scan. More specifically, it achieves a sensitivity of 89.6% with 4 FPs per scan, or a sensitivity of 92.8% with 10 FPs per scan. Furthermore, DeepCADe is validated on a larger number of lung nodules compared to other studies (Table 5.2). This increases the variation in the appearance of nodules and therefore makes their detection by a CADe system more challenging. We study the application of Deep Convolutional Neural Networks (DCNNs) for the detection of lung nodules in thoracic CT scans. We explore some of the meta-parameters that affect the performance of such models, which include: 1. the network architecture, i.e. its structure in terms of convolution layers, fully-connected layers, pooling layers, and activation functions, 2. the receptive field of the network, which defines the dimensions of its input, i.e. how much of the CT scan is processed by the network in a single forward pass, 3. a threshold value, which affects the sliding window algorithm with which the network is used to detect nodules in complete CT scans, and 4. the agreement level, which is used to interpret the independent nodule annotations of four experienced radiologists. Finally, we visualize the shape and location of annotated lung nodules and compare them to the output of DeepCADe. This demonstrates the compactness and flexibility in shape of the nodule predictions made by our proposed CADe system. In addition to the 5-fold cross-validation results presented in this thesis, these visual results support the applicability of our proposed CADe system in real-world medical practice.
Deep learning in ovarian cancer diagnosis: a comprehensive review of various imaging modalities
Sadeghi, Mohammad Hossein
Sina, Sedigheh
Omidi, Hamid
Farshchitabrizi, Amir Hossein
Alavi, Mehrosadat
2024Journal Article, cited 0 times
Ovarian Bevacizumab Response
Deep Learning
CNN
Ovarian cancer poses a major worldwide health issue, marked by high death rates and a deficiency in reliable diagnostic methods. The precise and prompt detection of ovarian cancer holds great importance in advancing patient outcomes and determining suitable treatment plans. Medical imaging techniques are vital in diagnosing ovarian cancer, but achieving accurate diagnoses remains challenging. Deep learning (DL), particularly convolutional neural networks (CNNs), has emerged as a promising solution to improve the accuracy of ovarian cancer detection. This systematic review explores the role of DL in improving the diagnostic accuracy for ovarian cancer. The methodology involved the establishment of research questions, inclusion and exclusion criteria, and a comprehensive search strategy across relevant databases. The selected studies focused on DL techniques applied to ovarian cancer diagnosis using medical imaging modalities, as well as tumour differentiation and radiomics. Data extraction, analysis, and synthesis were performed to summarize the characteristics and findings of the selected studies. The review emphasizes the potential of DL in enhancing the diagnosis of ovarian cancer by accelerating the diagnostic process and offering more precise and efficient solutions. DL models have demonstrated their effectiveness in categorizing ovarian tissues and achieving comparable diagnostic performance to that of experienced radiologists. The integration of DL into ovarian cancer diagnosis holds the promise of improving patient outcomes, refining treatment approaches, and supporting well-informed decision-making. Nevertheless, additional research and validation are necessary to ensure the dependability and applicability of DL models in everyday clinical settings.
Novel Framework for Breast Cancer Classification for Retaining Computational Efficiency and Precise Diagnosis
Vidya, K
Kurian, MZ
Communications Applied Electronics2018Journal Article, cited 0 times
Website
Algorithm Development
breast cancer
Classification
K Nearest Neighbor (KNN)
MRI
Classification of breast cancer is still an open challenge in medical image processing. A review of the existing literature found that current solutions focus more on classification accuracy and less on the computational effectiveness of the classification process. Therefore, this paper presents a novel classification approach that bridges the trade-off between the computational performance of a classifier and its final response to disease criticality. An analytical framework is built that takes Magnetic Resonance Imaging (MRI) scans of breast cancer as input, which are subjected to a non-linear map-based filter to enhance the pre-processing operation. The algorithm also offers a novel integral transformation scheme that transforms the filtered image, followed by precise extraction of foreground and background to assist reliable classification. A statistical approach is used for feature extraction, followed by classification using an unsupervised learning algorithm. The study outcome shows superior performance compared to existing classification schemes.
The Utilization of Consignable Multi-Model in Detection and Classification of Pulmonary Nodules
Zia, Muhammad Bilal
Juan, Zhao Juan
Rehman, Zia Ur
Javed, Kamran
Rauf, Saad Abdul
Khan, Arooj
International Journal of Computer Applications2019Journal Article, cited 2 times
Website
LIDC-IDRI
LUNG
Classification
Computer Assisted Detection (CAD)
Early-stage detection and classification of pulmonary nodules from CT images is a complicated task. Risk assessment for malignancy is usually used to assist the physician in assessing the cancer stage and creating a follow-up prediction strategy. Due to differences in the size, structure, and location of nodules, nodule classification in computer-assisted diagnostic systems has been a great challenge. While deep learning is currently the most effective solution for image detection and classification, it requires large amounts of training data, which are typically not readily available in most routine medical imaging settings; moreover, the opacity of deep neural networks makes their decisions difficult for radiologists to interpret. In this paper, a Consignable Multi-Model (CMM) is proposed for the detection and classification of lung nodules, which first detects lung nodules from CT images using different detection algorithms and then classifies them using the Multi-Output DenseNet (MOD) technique. To enhance the interpretability of the proposed CMM, two inputs with multiple early outputs have been introduced in the dense blocks. MOD accepts the patches identified in the detection phase into its two inputs and then classifies them as benign or malignant, using early outputs to gain more knowledge of a tumor. Experimental results on the LIDC-IDRI dataset demonstrate a 92.10% accuracy of CMM for lung nodule classification. CMM thus makes substantial progress in the diagnosis of nodules in contrast to existing methods.
Machine Learning Models for Radiogenomics in Cancer
Shrey's work led to the development of 'ImaGene,' a robust software for conducting radiogenomic analysis of solid tumors, predicting cancer biomarkers using radiographic traits. He demonstrated its utility by testing it on clinical datasets from multiple hospitals, significantly advancing global radiogenomic and clinicopathologic modeling efforts. We envision that ImaGene will become the standard platform for tumor analysis in radiogenomics due to its ease of use, flexibility, and reproducibility. This innovative software promises to serve as a central hub for the emerging radiogenomic knowledge base, marking a leap forward in precision medicine for cancer.
Glioma Grade Classification via Omics Imaging
Guarracino, Mario
Manzo, Mario
Manipur, Ichcha
Granata, Ilaria
Maddalena, Lucia
2020Conference Paper, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Radiogenomics
Imaging features
Classification
BRAIN
Omics imaging is an emerging interdisciplinary field concerned with the integration of data collected from biomedical images and omics experiments. By bringing together information from different sources, it makes it possible to reveal hidden genotype-phenotype relationships, with the aim of better understanding the onset and progression of many diseases and identifying new diagnostic and prognostic biomarkers. In this work, we present an omics imaging approach to the classification of different grades of gliomas, which are primary brain tumors arising from glial cells, as this is of critical clinical importance for making decisions regarding initial and subsequent treatment strategies. Imaging data come from analyses available in The Cancer Imaging Archive, while omics attributes are extracted by integrating metabolic models with transcriptomic data available from the Genomic Data Commons portal. We investigate the results of feature selection for the two types of data separately, as well as for the integrated data, providing hints on the most distinctive features that can be exploited as biomarkers for glioma grading. Moreover, we show that the integrated data can provide additional clinical information compared to the two types of data separately, leading to higher performance. We believe our results can be valuable to clinical tests in practice.
Deep-Learning-based Segmentation of Organs-at-Risk in the Head for MR-assisted Radiation Therapy Planning
Segmentation of organs-at-risk (OAR) in MR images has several clinical applications, including radiation therapy (RT) planning. This paper presents a deep-learning-based method to segment 15 structures in the head region. The proposed method first applies 2D U-Net models to each of the three planes (axial, coronal, sagittal) to roughly segment the structure. Then, the results of the 2D models are combined into a fused prediction to localize the 3D bounding box of the structure. Finally, a 3D U-Net is applied to the volume of the bounding box to determine the precise contour of the structure. The model was trained on a public dataset and evaluated on both public and private datasets that contain T2-weighted MR scans of the head-and-neck region. For all cases the contour of each structure was defined by operators trained by expert clinical delineators. The evaluation demonstrated that various structures can be accurately and efficiently localized and segmented using the presented framework. The contours generated by the proposed method were also qualitatively evaluated. The majority (92%) of the segmented OARs were rated as clinically useful for radiation therapy.
Using Anatomical Priors for Deep 3D One-shot Segmentation
With the success of deep convolutional neural networks for semantic segmentation in the medical imaging domain, there is a high demand for labeled training data, which is often not available or expensive to acquire. Training with little data usually leads to overfitting, which prevents the model from generalizing to unseen problems. However, in the medical imaging setting, image perspectives and anatomical topology do not vary as much as in natural images, as the patient is often instructed to hold a specific posture to follow a standardized protocol. In this work we therefore investigate the one-shot segmentation capabilities of a standard 3D U-Net architecture in such a setting and propose incorporating anatomical priors to increase segmentation performance. We evaluate our proposed method on the example of liver segmentation in abdominal CT volumes.
Predicting the MGMT Promoter Methylation Status in T2-FLAIR Magnetic Resonance Imaging Scans Using Machine Learning
Kurbiel, Martyna
Wijata, Agata
Nalepa, Jakub
2024Conference Paper, cited 0 times
BraTS 2021
RSNA-ASNR-MICCAI BraTS 2021
Radiomics
MGMT methylation status
Glioblastoma
Classification
Glioblastoma is the most common form of brain cancer in adults and carries one of the worst prognoses, with median survival being less than one year. Magnetic resonance imaging (MRI) plays a key role in detecting and objectively tracking the disease by extracting quantifiable parameters of the tumor, such as its volume or bidimensional measurements. However, it has been shown that the presence of a specific genetic marker in a lesion, the DNA repair enzyme O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation, may be effectively used to predict the patient's responsiveness to chemotherapy. The invasive process of analyzing a tissue sample to verify the MGMT promoter methylation status is time-consuming and may require performing multiple surgical interventions in longitudinal studies. Thus, building non-invasive techniques for predicting the genetic subtype of glioblastoma is of utmost practical importance, not only to accelerate the overall process of determining the MGMT promoter methylation status in glioblastoma patients, but also to minimize the number of necessary surgeries. In this paper, we tackle this problem and propose an end-to-end machine learning classification pipeline benefiting from radiomic features extracted from brain MRI scans, and validate it over the well-established RSNA-MICCAI Brain Tumor Radiogenomic Classification benchmark dataset.
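A pipeline of this kind, radiomic feature extraction followed by a conventional classifier, might look like the following sketch using the PyRadiomics library; the file formats, enabled feature classes, and random forest classifier are illustrative assumptions, not the authors' exact setup:

```python
# Minimal sketch of a radiomics-based classification pipeline.
import numpy as np
from radiomics import featureextractor
from sklearn.ensemble import RandomForestClassifier

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")   # intensity statistics
extractor.enableFeatureClassByName("glcm")         # texture features

def case_features(image_path: str, mask_path: str) -> np.ndarray:
    """Extract a feature vector for one MRI scan + tumor mask pair."""
    result = extractor.execute(image_path, mask_path)  # ordered dict of features
    return np.array([v for k, v in result.items() if k.startswith("original_")],
                    dtype=float)

# X = np.stack([case_features(img, msk) for img, msk in cases])
# y = methylation_status   # 0 = unmethylated, 1 = methylated MGMT promoter
# clf = RandomForestClassifier(n_estimators=300).fit(X, y)
```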
3D-SCoBeP: 3D medical image registration using sparse coding and belief propagation
Roozgard, Aminmohammad
Barzigar, Nafise
Verma, Pramode
Cheng, Samuel
International Journal of Diagnostic Imaging2014Journal Article, cited 4 times
Website
LIDC-IDRI
Vessel extraction from breast MR
Gierlinger, Marco
Brandner, Dinah
Zagar, Bernhard G.
2021Conference Proceedings, cited 0 times
ISPY1/ACRIN 6657
BREAST
We present an extension of previous work, in which a multi-seed region growing (MSRG) algorithm was shown to extract segments from breast MRI. The algorithm in our extended work filters elongated segments from those derived by the MSRG algorithm to obtain vessel-like structures. This filter is a skeletonization-like algorithm that collects useful information about the segments' thickness, length, etc. A model is shown that scans through the solution space of the MSRG algorithm by adjusting its parameters and by providing shape information for the filter. We further elaborate on the usefulness of the algorithm in assisting medical experts in their diagnosis of diseases relevant to angiography.
Effectiveness of synthetic data generation for capsule endoscopy images
Turan, Mehmet
Medicine Science | International Medical Journal2021Journal Article, cited 0 times
Website
CT COLONOGRAPHY
TCGA-STAD
Computed Tomography (CT)
Vasculature
Synthetic images
With advances in digital healthcare technologies, optional therapeutic modules and tasks such as depth estimation, visual localization, active control, automatic navigation, and targeted drug delivery are desirable for the next generation of capsule endoscopy devices to diagnose and treat gastrointestinal diseases. Although deep learning applications promise many advanced functions for capsule endoscopes, some limitations and challenges are encountered during the implementation of data-driven algorithms, with the difficulty of obtaining real endoscopy images and the limited availability of annotated data being the most common problems. In addition, some artefacts in endoscopy images due to lighting conditions, reflections as well as camera view can significantly affect the performance of artificial intelligence methods, making it difficult to develop a robust model. Realistic simulations that generate synthetic data have emerged as a solution to develop data-driven algorithms by addressing these problems. In this study, synthetic data for different organs of the GI tract are generated using a simulation environment to investigate the utility and generalizability of the synthetic data for various medical image analysis tasks using the state-of-the-art Endo-SfMLearner model, and the performance of the models is evaluated with both real and synthetic images. The extensive qualitative and quantitative results demonstrate that the use of synthetic data in training improves the performance of pose and depth estimation and that the model can be accurately generalized to real medical data.; Keywords: Synthetic data generation, capsule endoscopy, depth and pose estimation
Automatic Segmentation of Pulmonary Nodule Candidates in Computed Tomography Images
This paper presents an algorithm for automatic segmentation of pulmonary nodule candidates in chest computed tomography images. The methodology includes image acquisition, noise elimination, segmentation of the pulmonary parenchyma, and segmentation of pulmonary nodule candidates. The use of the Wiener filter and the application of an ideal threshold give the algorithm a significant improvement in results, allowing it to detect a greater number of nodules in the images. The tests were conducted using a set of images from the LIDC-IDRI database containing 708 nodules. The test results showed that the algorithm located 93.08% of the nodules considered.
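The filter-then-threshold pipeline described above can be sketched as follows; Otsu's method stands in for the paper's "ideal threshold", and the size bounds are illustrative assumptions:

```python
# Illustrative sketch: Wiener filtering for noise reduction followed by
# thresholding and connected-component analysis to isolate nodule candidates.
import numpy as np
from scipy.signal import wiener
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def nodule_candidates(ct_slice: np.ndarray):
    """ct_slice: 2D array of CT intensities (e.g., Hounsfield units)."""
    denoised = wiener(ct_slice.astype(float), mysize=5)   # adaptive noise removal
    thresh = threshold_otsu(denoised)
    lung_mask = denoised < thresh          # air/parenchyma is darker than tissue
    labeled = label(lung_mask)
    # Keep blobs in a plausible nodule size range as candidates.
    return [r for r in regionprops(labeled) if 10 < r.area < 500]
```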
Automatic 3D Mesh-Based Centerline Extraction from a Tubular Geometry Form
Yahya-Zoubir, Bahia
Hamami, Latifa
Saadaoui, Llies
Ouared, Rafik
Information Technology And Control2016Journal Article, cited 0 times
Website
CT Colonography
Convolutional-Neural-Network Assisted Segmentation and SVM Classification of Brain Tumor in Clinical MRI Slices
Rajinikanth, Venkatesan
Kadry, Seifedine
Nam, Yunyoung
Information Technology And Control2021Journal Article, cited 1 times
Website
TCGA-GBM
TCGA-LGG
Computer Aided Diagnosis (CADx)
U-Net
Classification
T2-weighted
Magnetic Resonance Imaging (MRI)
Due to increased disease occurrence rates in humans, the need for Automated Disease Diagnosis (ADD) systems has also risen. Most ADD systems are proposed to support the doctor during the screening and decision-making process. This research aims at developing a Computer Aided Disease Diagnosis (CADD) scheme to categorize the brain tumour of 2D MRI slices into the Glioblastoma/Glioma class with better accuracy. The main contribution of this research work is to develop a CADD system with Convolutional-Neural-Network (CNN) supported segmentation and classification. The proposed CADD framework consists of the following phases: (i) image collection and resizing, (ii) automated tumour segmentation using VGG-UNet, (iii) deep-feature extraction using the VGG16 network, (iv) handcrafted feature extraction, (v) finest feature choice by the firefly algorithm, and (vi) serial feature concatenation and binary classification. The merit of the executed CADD is confirmed through an investigation using benchmark as well as clinically collected brain MRI slices. In this work, a binary classification with 10-fold cross-validation is implemented using well-known classifiers, and the results attained with SVM-Cubic (accuracy >98%) are superior. This result confirms that the combination of CNN-assisted segmentation and classification helps to achieve enhanced disease detection accuracy.
Analysis of CT DICOM Image Segmentation for Abnormality Detection
Kulkarni, Rashmi
Bhavani, K.
International Journal of Engineering and Manufacturing2019Journal Article, cited 0 times
Website
LIDC-IDRI
Computer Aided Detection (CADe)
Computed Tomography (CT)
Cancer is a menacing disease, and great care is required in its diagnosis. The CT modality is most often used in cancer therapy. Image processing techniques [1] can help doctors diagnose more easily and more accurately. Image pre-processing [2] and segmentation methods [3] are used to extract cancerous nodules from CT images. Much research has been done on the segmentation of CT images with different algorithms, but none has reached 100% accuracy. This research work proposes a model for the analysis of CT image segmentation with and without filtered images, and brings out the importance of pre-processing CT images.
Reproducible research is a growing movement among scientists, but the tools for creating sustainable software to support the computational side of research are still in their infancy and are typically only being used by scientists with expertise in computer programming and system administration. Docker is a new platform developed for the DevOps community that enables the easy creation and management of consistent computational environments. This article describes how we have applied it to computational science and suggests that it could be a powerful tool for reproducible research.
GFAP expression is influenced by astrocytoma grade and rs2070935 polymorphism
Glial fibrillary acidic protein (GFAP) is an intermediate filament that provides mechanical support to astrocytes. Rs2070935 is a single nucleotide polymorphism (SNP) located in the promoter region of the GFAP gene. The aim of this pilot study is to investigate GFAP expression at mRNA, protein levels and rs2070935 polymorphism in 50 different grade human astrocytoma samples. GFAP expression at mRNA level was measured using quantitative reverse transcription polymerase chain reaction (qRT-PCR) with SYBR Green dye, whereas the translational activity of the following gene was detected using western blot assay. Furthermore, genotypes of rs2070935 were identified using qPCR with TaqMan probes. As a result, GFAP mRNA and protein expression was found to be declining with increasing astrocytoma grade (p < 0.05). A tendency was observed between increased GFAP mRNA expression and shorter grade IV astrocytoma patient survival (p = 0.2117). The rs2070935 CC genotype was found to be associated with increased GFAP translational activity in grade II astrocytoma (p = 0.0238). Possible links between rs2070935 genotypes and alternative splicing of GFAP were also observed. The rs2070935 AA genotype was found to be associated with poor clinical outcome for grade IV astrocytoma patients (p = 0.0007), although the following data should be checked in a larger sample size of astrocytoma patients.
Integrative analysis of imaging and transcriptomic data of the immune landscape associated with tumor metabolism in lung adenocarcinoma: Clinical and prognostic implications
Choi, Hongyoon
Na, Kwon Joong
THERANOSTICS2018Journal Article, cited 0 times
Website
TCGA-LUAD
Comparative study of preclinical mouse models of high-grade glioma for nanomedicine research: the importance of reproducing blood-brain barrier heterogeneity
Brighi, C.
Reid, L.
Genovesi, L. A.
Kojic, M.
Millar, A.
Bruce, Z.
White, A. L.
Day, B. W.
Rose, S.
Whittaker, A. K.
Puttick, S.
THERANOSTICS2020Journal Article, cited 32 times
Website
The clinical translation of new nanoparticle-based therapies for high-grade glioma (HGG) remains extremely poor. This has partly been due to the lack of suitable preclinical mouse models capable of replicating the complex characteristics of recurrent HGG (rHGG), namely the heterogeneous structural and functional characteristics of the blood-brain barrier (BBB). The goal of this study is to compare the characteristics of the tumor BBB of rHGG with two different mouse models of HGG, the ubiquitously used U87 cell line xenograft model and a patient-derived cell line WK1 xenograft model, in order to assess their suitability for nanomedicine research. Method: Structural MRI was used to assess the extent of BBB opening in mouse models with a fully developed tumor, and dynamic contrast enhanced MRI was used to obtain values of BBB permeability in contrast enhancing tumor. H&E and immunofluorescence staining were used to validate results obtained from the in vivo imaging studies. Results: The extent of BBB disruption and permeability in the contrast enhancing tumor was significantly higher in the U87 model than in rHGG. These values in the WK1 model are similar to those of rHGG. The U87 model is not infiltrative, has an entirely abnormal and leaky vasculature and it is not of glial origin. The WK1 model infiltrates into the non-neoplastic brain parenchyma, it has both regions with intact BBB and regions with leaky BBB and remains of glial origin. Conclusion: The WK1 mouse model more accurately reproduces the extent of BBB disruption, the level of BBB permeability and the histopathological characteristics found in rHGG patients than the U87 mouse model, and is therefore a more clinically relevant model for preclinical evaluations of emerging nanoparticle-based therapies for HGG.
Reciprocal change in Glucose metabolism of Cancer and Immune Cells mediated by different Glucose Transporters predicts Immunotherapy response
Na, Kwon Joong
Choi, Hongyoon
Oh, Ho Rim
Kim, Yoon Ho
Lee, Sae Bom
Jung, Yoo Jin
Koh, Jaemoon
Park, Samina
Lee, Hyun Joo
Jeon, Yoon Kyung
Chung, Doo Hyun
Paeng, Jin Chul
Park, In Kyu
Kang, Chang Hyun
Cheon, Gi Jeong
Kang, Keon Wook
Lee, Dong Soo
Kim, Young Tae
THERANOSTICS2020Journal Article, cited 0 times
Website
TCGA-LUSC
LUNG
Radiogenomics
The metabolic properties of the tumor microenvironment (TME) are dynamically dysregulated to achieve immune escape and promote cancer cell survival. However, the in vivo properties of glucose metabolism in cancer and immune cells are poorly understood, and their clinical application to the development of a biomarker reflecting immune functionality is still lacking. Methods: We analyzed RNA-seq and fluorodeoxyglucose (FDG) positron emission tomography profiles of 63 lung squamous cell carcinoma (LUSC) specimens to correlate FDG uptake, glucose transporter (GLUT) expression by RNA-seq, and the immune cell enrichment score (ImmuneScore). Single-cell RNA-seq analysis of five lung cancer specimens was performed. We tested the GLUT3/GLUT1 ratio, the GLUT-ratio, as a surrogate representing immune metabolic functionality by investigating its association with immunotherapy response in two melanoma cohorts. Results: ImmuneScore showed a negative correlation with GLUT1 (r = -0.70, p < 0.01) and a positive correlation with GLUT3 (r = 0.39, p < 0.01) in LUSC. Single-cell RNA-seq showed GLUT1 and GLUT3 were mostly expressed in cancer and immune cells, respectively. In immune-poor LUSC, FDG uptake was positively correlated with GLUT1 (r = 0.27, p = 0.04) and negatively correlated with ImmuneScore (r = -0.28, p = 0.04). In immune-rich LUSC, FDG uptake was positively correlated with both GLUT3 (r = 0.78, p = 0.01) and ImmuneScore (r = 0.58, p = 0.10). The GLUT-ratio was higher in anti-PD1 responders than in nonresponders (p = 0.08 at baseline; p = 0.02 on-treatment) and was associated with progression-free survival in melanoma patients who were treated with anti-CTLA4 (p = 0.04). Conclusions: Competitive uptake of glucose by cancer and immune cells in the TME could be mediated by differential GLUT expression in these cells.
Automatic Electronic Cleansing in Computed Tomography Colonography Images using Domain Knowledge
Manjunath, KN
Siddalingaswamy, PC
Prabhu, GK
Asian Pacific Journal of Cancer Prevention2015Journal Article, cited 0 times
CT Colonography
A rotation and translation invariant method for 3D organ image classification using deep convolutional neural networks
Islam, Kh Tohidul
Wijewickrema, Sudanthi
O’Leary, Stephen
PeerJ Computer Science2019Journal Article, cited 0 times
Website
Radiomics
Deep Learning
Three-dimensional (3D) medical image classification is useful in applications such as disease diagnosis and content-based medical image retrieval. It is a challenging task due to several reasons. First, image intensity values are vastly different depending on the image modality. Second, intensity values within the same image modality may vary depending on the imaging machine and artifacts may also be introduced in the imaging process. Third, processing 3D data requires high computational power. In recent years, significant research has been conducted in the field of 3D medical image classification. However, most of these make assumptions about patient orientation and imaging direction to simplify the problem and/or work with the full 3D images. As such, they perform poorly when these assumptions are not met. In this paper, we propose a method of classification for 3D organ images that is rotation and translation invariant. To this end, we extract a representative two-dimensional (2D) slice along the plane of best symmetry from the 3D image. We then use this slice to represent the 3D image and use a 20-layer deep convolutional neural network (DCNN) to perform the classification task. We show experimentally, using multi-modal data, that our method is comparable to existing methods when the assumptions of patient orientation and viewing direction are met. Notably, it shows similarly high accuracy even when these assumptions are violated, where other methods fail. We also explore how this method can be used with other DCNN models as well as conventional classification approaches.
On the classification of simple and complex biological images using Krawtchouk moments and Generalized pseudo-Zernike moments: a case study with fly wing images and breast cancer mammograms
Goh, J. Y.
Khang, T. F.
PeerJ Comput Sci2021Journal Article, cited 0 times
Website
CBIS-DDSM
BREAST
mammography
Radiomic features
Radiomics
Algorithm Development
Random Forest
Image analysis
Machine Learning
In image analysis, orthogonal moments are useful mathematical transformations for creating new features from digital images. Moreover, orthogonal moment invariants produce image features that are resistant to translation, rotation, and scaling operations. Here, we show the result of a case study in biological image analysis to help researchers judge the potential efficacy of image features derived from orthogonal moments in a machine learning context. In taxonomic classification of forensically important flies from the Sarcophagidae and the Calliphoridae family (n = 74), we found the GUIDE random forests model was able to completely classify samples from 15 different species correctly based on Krawtchouk moment invariant features generated from fly wing images, with zero out-of-bag error probability. For the more challenging problem of classifying breast masses based solely on digital mammograms from the CBIS-DDSM database (n = 1,151), we found that image features generated from the Generalized pseudo-Zernike moments and the Krawtchouk moments only enabled the GUIDE kernel model to achieve modest classification performance. However, using the predicted probability of malignancy from GUIDE as a feature together with five expert features resulted in a reasonably good model that has mean sensitivity of 85%, mean specificity of 61%, and mean accuracy of 70%. We conclude that orthogonal moments have high potential as informative image features in taxonomic classification problems where the patterns of biological variations are not overly complex. For more complicated and heterogeneous patterns of biological variations such as those present in medical images, relying on orthogonal moments alone to reach strong classification performance is unrealistic, but integrating prediction result using them with carefully selected expert features may still produce reasonably good prediction models.
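Krawtchouk and generalized pseudo-Zernike moments lack mainstream Python implementations, but the overall workflow, computing orthogonal moment invariants per image and feeding them to a classifier, can be illustrated with Hu moment invariants from scikit-image as a stand-in:

```python
# Sketch of the moment-feature workflow using Hu invariants as a substitute
# for the paper's Krawtchouk / generalized pseudo-Zernike moments.
import numpy as np
from skimage.measure import moments_central, moments_normalized, moments_hu

def hu_invariants(image: np.ndarray) -> np.ndarray:
    """Seven Hu invariants: resistant to translation, rotation, and scaling."""
    mu = moments_central(image)
    nu = moments_normalized(mu)
    return moments_hu(nu)

# feats = np.stack([hu_invariants(img) for img in mammogram_rois])
# A random forest could then be trained on `feats`, analogous to the paper's
# use of GUIDE random forests on Krawtchouk moment invariants.
```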
Towards survival prediction of cancer patients using medical images
Ul Haq, Nazeef
Tahir, Bilal
Firdous, Samar
Amir Mehmood, Muhammad
PeerJ Computer Science2022Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
NSCLC-Radiomics
Radiomics
Classification
Survival prediction of a patient is a critical task in clinical medicine for physicians and patients to make an informed decision. Several survival and risk scoring methods have been developed to estimate the survival score of patients using clinical information. For instance, the Global Registry of Acute Coronary Events (GRACE) and Thrombolysis in Myocardial Infarction (TIMI) risk scores were developed for the survival prediction of heart patients. Recently, state-of-the-art medical imaging and analysis techniques have paved the way for survival prediction of cancer patients by understanding key features extracted from Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scanned images with the help of image processing and machine learning techniques. However, survival prediction is a challenging task due to the complexity of benchmarking image features, feature selection methods, and machine learning models. In this article, we evaluate 156 visual features from radiomic and hand-crafted feature classes, six feature selection methods, and 10 machine learning models to benchmark their performance. In addition, MRI-scanned Brain Tumor Segmentation (BraTS) and CT-scanned non-small cell lung cancer (NSCLC) datasets are used to train classification and regression models. Our results highlight that logistic regression outperforms the other models for classification, with 66% and 54% accuracy for the BraTS and NSCLC datasets, respectively. Moreover, our analysis of the best-performing features shows that age is a common and significant feature for survival prediction, and that gray-level and shape-based features play a vital role in regression. We believe the study can be helpful for oncologists, radiologists, and medical imaging researchers to understand and automate the procedure of decision-making and prognosis of cancer patients.
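One arm of such a benchmark, feature selection followed by logistic regression (the best performer above), could be assembled as in the sketch below; the scaler, the number of selected features, and the scoring setup are illustrative choices, not the paper's exact configuration:

```python
# Hedged sketch: select the k most informative radiomic features, then
# classify survival groups with logistic regression under cross-validation.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),   # keep the 20 strongest features
    ("clf", LogisticRegression(max_iter=1000)),
])

# X: (n_patients, n_features) radiomic/hand-crafted features; y: survival class
# scores = cross_val_score(pipe, X, y, cv=5)
# print(scores.mean())
```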
A methodological showcase: utilizing minimal clinical parameters for early-stage mortality risk assessment in COVID-19-positive patients
Yan, Jonathan K.
PeerJ Computer Science2024Journal Article, cited 0 times
Website
COVID-19-NY-SBU
The scarcity of data is likely to have a negative effect on machine learning (ML). Yet, in the health sciences, data is diverse and can be costly to acquire. Therefore, it is critical to develop methods that can reach similar accuracy with minimal clinical features. This study explores a methodology that aims to build a model using minimal clinical parameters to reach comparable performance to a model trained with a more extensive list of parameters. To develop this methodology, a dataset of over 1,000 COVID-19-positive patients was used. A machine learning model was built with over 90% accuracy when combining 24 clinical parameters using Random Forest (RF) and logistic regression. Furthermore, to obtain minimal clinical parameters to predict the mortality of COVID-19 patients, the features were weighted using both Shapley values and RF feature importance to get the most important factors. The six most highly weighted features that could produce the highest performance metrics were combined for the final model. The accuracy of the final model, which used a combination of six features, is 90% with the random forest classifier and 91% with the logistic regression model. This performance is close to that of a model using 24 combined features (92%), suggesting that highly weighted minimal clinical parameters can be used to reach similar performance. The six clinical parameters identified here are acute kidney injury, glucose level, age, troponin, oxygen level, and acute hepatic injury. Among those parameters, acute kidney injury was the highest-weighted feature. Together, a methodology was developed using significantly minimal clinical parameters to reach performance metrics similar to a model trained with a large dataset, highlighting a novel approach to address the problems of clinical data collection for machine learning.
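The feature-reduction strategy described above can be sketched with random forest importances (Shapley values via the `shap` package could be used analogously); column names and parameter choices below are placeholders for the study's clinical parameters:

```python
# Sketch: rank clinical parameters by random forest importance, keep the top
# six, and retrain a reduced model, mirroring the methodology above.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def top_k_features(X: pd.DataFrame, y: np.ndarray, k: int = 6) -> list[str]:
    """X: DataFrame of clinical parameters, y: mortality labels (0/1)."""
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
    ranked = pd.Series(rf.feature_importances_, index=X.columns)
    return list(ranked.sort_values(ascending=False).head(k).index)

# selected = top_k_features(X, y)   # e.g. acute kidney injury, glucose, age, ...
# reduced_model = RandomForestClassifier().fit(X[selected], y)
```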
DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.
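The SUV normalization step mentioned above can be sketched with pydicom; this is a simplified illustration of the core body-weight SUV formula only, and real pipelines must additionally handle decay-correction conventions, units, and vendor quirks:

```python
# Simplified sketch of PET body-weight SUV normalization from DICOM attributes.
import pydicom

def suv_factor(ds: pydicom.Dataset) -> float:
    """Multiply raw PET pixel values (in Bq/ml after rescale) by this factor
    to obtain body-weight SUV. Assumes dose in Bq and weight in kg."""
    info = ds.RadiopharmaceuticalInformationSequence[0]
    dose_bq = float(info.RadionuclideTotalDose)
    half_life_s = float(info.RadionuclideHalfLife)

    def to_seconds(t: str) -> float:    # DICOM TM format 'HHMMSS(.ffffff)'
        return int(t[0:2]) * 3600 + int(t[2:4]) * 60 + float(t[4:])

    # Decay-correct the injected dose to the series time (a common convention).
    elapsed = to_seconds(ds.SeriesTime) - to_seconds(info.RadiopharmaceuticalStartTime)
    decayed_dose = dose_bq * 2.0 ** (-elapsed / half_life_s)
    weight_g = float(ds.PatientWeight) * 1000.0
    return weight_g / decayed_dose

ds = pydicom.dcmread("pet_slice.dcm")   # path is a placeholder
suv = (ds.pixel_array * ds.RescaleSlope + ds.RescaleIntercept) * suv_factor(ds)
```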
Classification of the glioma grading using radiomics analysis
Effect of domain knowledge encoding in CNN model architecture—a prostate cancer study using mpMRI images
Sobecki, Piotr
Jóźwiak, Rafał
Sklinda, Katarzyna
Przelaskowski, Artur
PeerJ2021Journal Article, cited 0 times
PROSTATEx
BACKGROUND: Prostate cancer is one of the most common cancers worldwide. Currently, convolutional neural networks (CNNs) are achieving remarkable success in various computer vision tasks and in medical imaging research. Various CNN architectures and methodologies have been applied in the field of prostate cancer diagnosis. In this work, we evaluate the impact of adapting a state-of-the-art CNN architecture to domain knowledge related to prostate cancer diagnosis. The architecture of the final CNN model was optimised on the basis of the Prostate Imaging Reporting and Data System (PI-RADS) standard, which is currently the best available indicator in the acquisition, interpretation, and reporting of prostate multi-parametric magnetic resonance imaging (mpMRI) examinations.
METHODS: A dataset containing 330 suspicious findings identified using mpMRI was used. Two CNN models were subjected to comparative analysis. Both implement the concept of decision-level fusion for mpMRI data, providing a separate network for each multi-parametric series. The first model implements a simple fusion of multi-parametric features to formulate the final decision. The architecture of the second model reflects the diagnostic pathway of PI-RADS methodology, using information about a lesion's primary anatomic location within the prostate gland. Both networks were experimentally tuned to successfully classify prostate cancer changes.
RESULTS: The optimised knowledge-encoded model achieved slightly better classification results compared with the traditional model architecture (AUC = 0.84 vs. AUC = 0.82). We found the proposed model to achieve convergence significantly faster.
CONCLUSIONS: The final knowledge-encoded CNN model provided more stable learning performance and faster convergence to optimal diagnostic accuracy. The results fail to demonstrate that PI-RADS-based modelling of CNN architecture can significantly improve performance of prostate cancer recognition using mpMRI.
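The decision-level fusion idea used by both models above, one network branch per mpMRI series with outputs merged for the final decision, can be sketched in PyTorch; the tiny branch architecture here is a toy stand-in, not the paper's network:

```python
# Minimal PyTorch sketch of decision-level fusion for mpMRI (T2WI + ADC).
import torch
import torch.nn as nn

def branch() -> nn.Sequential:         # tiny per-series feature extractor
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )

class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.t2_branch = branch()
        self.adc_branch = branch()
        self.head = nn.Linear(32 + 32, 2)   # benign vs. clinically significant

    def forward(self, t2: torch.Tensor, adc: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.t2_branch(t2), self.adc_branch(adc)], dim=1)
        return self.head(fused)

model = FusionNet()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
```

A knowledge-encoded variant in the spirit of the second model might additionally feed the lesion's anatomic zone into the fusion head, which is where the PI-RADS-inspired structure would enter.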
Upright walking has driven unique vascular specialization of the hominin ilium
Image Quality Evaluation in Computed Tomography Using Super-resolution Convolutional Neural Network
Nm, Kibok
Cho, Jeonghyo
Lee, Seungwan
Kim, Burnyoung
Yim, Dobin
Lee, Dahye
Journal of the Korean Society of Radiology2020Journal Article, cited 0 times
SPIE-AAPM Lung CT Challenge
Convolutional Neural Network (CNN)
Computer Assisted Detection (CAD)
Computer Assisted Diagnosis (CAD)
High-quality computed tomography (CT) images enable precise lesion detection and accurate diagnosis. Many studies have been performed to improve CT image quality while reducing radiation dose. Recently, deep learning-based techniques for improving CT image quality have been developed and show superior performance compared to conventional techniques. In this study, a super-resolution convolutional neural network (SRCNN) model was used to improve the spatial resolution of CT images, and image quality was evaluated as a function of the hyperparameters that determine the performance of the SRCNN model. Profile, structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and full-width at half-maximum (FWHM) were measured to evaluate the performance of the SRCNN model. The results showed that the performance of the SRCNN model improved with an increase in the number of epochs and training sets, and that the learning rate needed to be optimized to obtain acceptable image quality. Therefore, an SRCNN model with optimal hyperparameters is able to improve CT image quality.
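Two of the metrics named above, PSNR and SSIM, are directly available in scikit-image; the following illustrative snippet shows how an SRCNN output could be scored against a high-resolution reference (profile and FWHM measurements are omitted):

```python
# Illustrative computation of PSNR and SSIM for restored CT images.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_report(reference: np.ndarray, restored: np.ndarray) -> dict:
    """Both images as float arrays scaled to [0, 1]."""
    return {
        "psnr_db": peak_signal_noise_ratio(reference, restored, data_range=1.0),
        "ssim": structural_similarity(reference, restored, data_range=1.0),
    }

# e.g. quality_report(high_res_ct, srcnn_output)
```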
Evaluation of the Usefulness of Detection of Abdominal CT Kidney and Vertebrae using Deep Learning
Journal of the Korean Society of Radiology2021Journal Article, cited 0 times
Pancreas-CT
Computer Aided Detection (CADe)
Deep Learning
CT plays an important role in the medical field, such as in disease diagnosis, but the number of examinations and CT images is increasing. Recently, deep learning has been actively used in the medical field, including auxiliary disease diagnosis through object detection on medical images. The purpose of this study was to evaluate the accuracy of detecting the kidneys and vertebrae in abdominal CT using the YOLOv3 object-detection deep learning model. As a result, the detection accuracies of the kidneys and vertebrae were 83.00% and 82.45%, respectively, which can serve as basic data for object detection in medical images using deep learning.
Classification of Prostate Transitional Zone Cancer and Hyperplasia Using Deep Transfer Learning From Disease-Related Images
Purpose: The diagnosis of prostate transition zone cancer (PTZC) remains a clinical challenge due to its similarity to benign prostatic hyperplasia (BPH) on MRI. Deep Convolutional Neural Networks (DCNNs) have shown high efficacy in diagnosing PTZC on medical imaging but are limited by small data sizes. A transfer learning (TL) method was combined with deep learning to overcome this challenge. Materials and methods: A retrospective investigation was conducted on 217 patients enrolled from our hospital database (208 patients) and The Cancer Imaging Archive (nine patients). Using T2-weighted images (T2WIs) and apparent diffusion coefficient (ADC) maps, DCNN models were trained and compared between different TL databases (ImageNet vs. disease-related images) and protocols (from scratch, fine-tuning, or transductive transferring). Results: PTZC and BPH can be classified through a traditional DCNN. The efficacy of TL from natural images was limited but improved by transferring knowledge from disease-related images. Furthermore, transductive TL from disease-related images had comparable efficacy to the fine-tuning method. Limitations include the retrospective design and a relatively small sample size. Conclusion: Deep TL from disease-related images is a powerful tool for an automated PTZC diagnostic system. In developing regions where only conventional MR scans are available, accurate diagnosis of PTZC can be achieved via transductive deep TL from disease-related images.
Classifying the Acquisition Sequence for Brain MRIs Using Neural Networks on Single Slices
Background Neural networks for analyzing MRIs are oftentimes trained on particular combinations of perspectives and acquisition sequences. Since real-world data are less structured and do not follow a standard denomination of acquisition sequences, this impedes the transition from deep learning research to clinical application. The purpose of this study is therefore to assess the feasibility of classifying the acquisition sequence from a single MRI slice using convolutional neural networks. Methods A total of 113 MRI slices from 52 patients were used in a transfer learning approach to train three convolutional neural networks of different complexities to predict the acquisition sequence, while 27 slices were used for internal validation. The model then underwent external validation on 600 slices from 273 patients belonging to one of four classes (T1-weighted without contrast enhancement, T1-weighted with contrast enhancement, T2-weighted, and diffusion-weighted). Categorical accuracy was noted, and the results of the predictions for the validation set are provided with confusion matrices. Results The neural networks achieved a categorical accuracy of 0.79, 0.81, and 0.84 on the external validation data. The implementation of Grad-CAM showed no clear pattern of focus except for T2-weighted slices, where the network focused on areas containing cerebrospinal fluid. Conclusion Automatically classifying the acquisition sequence using neural networks seems feasible and could be used to facilitate the automatic labelling of MRI data.
Serum Procalcitonin as a Predictive Biomarker in COVID-19: A Retrospective Cohort Analysis
Introduction: Since the onset of COVID-19, physicians and scientists have been working to better understand biomarkers associated with the infection, so that patients who have contracted the virus can be treated. Although COVID-19 is a complex virus that affects patients differently, current research suggests that COVID-19 infections are associated with increased procalcitonin, a biomarker traditionally indicative of bacterial infections. This paper investigates the relationship between COVID-19 infection severity and procalcitonin levels in the hope of aiding the management of patients with COVID-19 infections. Methods: Patient data were obtained from the Renaissance School of Medicine at Stony Brook University. The data of patients who had tested positive for COVID-19 and had an associated procalcitonin value (n=1046) were divided into age splits of 18-59, 59-74, and 74-90. Multiple factors were analyzed to determine the severity of each patient's infection, and patients were divided into low, medium, and high severity groups. A one-way analysis of variance (ANOVA) was done for each age split to compare procalcitonin values of the severity groups within that split. Post hoc analysis was then done for the severity groups in each age split to further compare the groups against each other. Results: One-way ANOVA testing yielded p<0.0001 in all three age splits, so the null hypothesis was rejected. In the post hoc analysis, however, the test failed to reject the null hypothesis when comparing the medium and high severity groups against each other in the 59-74 and 74-90 age splits. The null hypothesis was rejected in all pairwise comparisons in the 18-59 age split. We determined that a procalcitonin value greater than 0.24 ng/mL would characterize a more severe COVID-19 infection when considering patient factors and comorbidities. Conclusion: The analysis concluded that elevated procalcitonin levels correlate with the severity of COVID-19 infections. This finding can assist medical providers in the management of COVID-19 patients.
Classification and Segmentation of Brain Tumor Using EfficientNet-B7 and U-Net
Adinegoro, Antonius Fajar
Sutapa, Gusti Ngurah
Gunawan, Anak Agung Ngurah
Anggarani, Ni Kadek Nova
Suardana, Putu
Kasmawan, I. Gde Antha
Asian Journal of Research in Computer Science2023Journal Article, cited 0 times
Website
TCGA-LGG
Brain-Tumor-Progression
Transfer learning
Tumors are caused by uncontrolled growth of abnormal cells. Magnetic Resonance Imaging (MRI) is a modality widely used to produce highly detailed brain images. In addition, a surgical biopsy of the suspected tissue (tumor) is required to obtain more information about the type of tumor; laboratory testing of a biopsy takes 10 to 15 days. Based on a study conducted by Brady in 2016, errors in radiology practice are common, with an estimated daily error rate of 3-5%. Therefore, the application of artificial intelligence is expected to simplify the diagnostic process and improve the accuracy of doctors' diagnoses.
Performance Analysis of Denoising in MR Images with Double Density Dual Tree Complex Wavelets, Curvelets and NonSubsampled Contourlet Transforms
Krishnakumar, V
Parthiban, Latha
Annual Research & Review in Biology2014Journal Article, cited 0 times
RIDER Breast MRI
Digital images are extensively used by medical doctors during different stages of disease diagnosis and treatment. In the medical field, noise occurs in an image during two phases: acquisition and transmission. During the acquisition phase, noise is induced into an image by manufacturing defects, improper functioning of internal components, minute component failures, and manual handling errors of electronic scanning devices such as PECT/SPECT and MRI/CT scanners. Nowadays, healthcare organizations are beginning to consider cloud computing solutions for managing and sharing huge volumes of medical data. This leads to the possibility of transmitting different types of medical data, including CT and MR images, patient details, and much more, over the Internet. Due to the presence of noise in the transmission channel, some unwanted signals are added to the transmitted medical data. Image denoising algorithms are employed to reduce the unwanted modifications of the pixels in an image. In this paper, the performance of denoising methods with two-dimensional transformations of nonsubsampled contourlets (NSCT), curvelets, and double density dual tree complex wavelets (DD-DTCWT) is compared and analysed using image quality measures such as peak signal to noise ratio (PSNR), root mean square error, and structural similarity index. 200 MR images of brain (3T MRI scan), heart, and breast were selected for testing the noise reduction techniques with the above transformations. The results show that NSCT gives good PSNR values for random and impulse noises, while DD-DTCWT has good noise-suppressing capability for speckle and Rician noises. Both NSCT and DD-DTCWT cope well with images affected by Poisson noise. The best PSNR values obtained for salt-and-pepper and additive white Gaussian noises were 21.29 and 56.45, respectively. For speckle noise, DD-DTCWT gives 33.46, which is better than NSCT and curvelet. The values 33.50 and 33.56 are the top PSNRs of NSCT and DD-DTCWT for Poisson noise.
Detection of Lung Nodules on Medical Images by the Use of Fractal Segmentation
Rezaie, Afsaneh
Habiboghli, Ali
International Journal of Interactive Multimedia and Artificial Inteligence2017Journal Article, cited 0 times
Website
LIDC-IDRI
Radiomics
Segmentation
LUNG
Fractal
Pulmonary Nodule Classification in Lung Cancer from 3D Thoracic CT Scans Using fastai and MONAI
Kaliyugarasan, Satheshkumar
Lundervold, Arvid
Lundervold, Alexander Selvikvåg
International Journal of Interactive Multimedia and Artificial Intelligence2021Journal Article, cited 0 times
Website
LIDC-IDRI
Computed Tomography (CT)
Supervised
Classification
Convolutional Neural Network (CNN)
Jupyter notebook
We construct a convolutional neural network to classify pulmonary nodules as malignant or benign in the context of lung cancer. To construct and train our model, we use our novel extension of the fastai deep learning framework to 3D medical imaging tasks, combined with the MONAI deep learning library. We train and evaluate the model using a large, openly available data set of annotated thoracic CT scans. Our model achieves a nodule classification accuracy of 92.4% and a ROC AUC of 97% when compared to a “ground truth” based on multiple human raters' subjective assessments of malignancy. We further evaluate our approach by predicting patient-level diagnoses of cancer, achieving a test set accuracy of 75%. This is higher than the 70% obtained by aggregating the human raters' assessments. Class activation maps are applied to investigate the features used by our classifier, enabling a rudimentary level of explainability for what is otherwise close to “black box” predictions. As the classification of structures in chest CT scans is useful across a variety of diagnostic and prognostic tasks in radiology, our approach has broad applicability. As we aimed to construct a fully reproducible system that can be compared to new proposed methods and easily be adapted and extended, the full source code of our work is available at https://github.com/MMIV-ML/Lung-CT-fastai-2020.
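To illustrate the kind of pipeline this entry describes, the following minimal Python sketch builds a 3D classification network from MONAI components. It is not the authors' code (which is linked above); the file name, patch size, and network choice are assumptions for illustration only.

import torch
from monai.networks.nets import DenseNet121
from monai.transforms import Compose, LoadImage, EnsureChannelFirst, ScaleIntensity, Resize

# 3D DenseNet: one input channel (CT), two output classes (benign / malignant)
model = DenseNet121(spatial_dims=3, in_channels=1, out_channels=2)

preprocess = Compose([
    LoadImage(image_only=True),   # read a volume (hypothetical NIfTI file below)
    EnsureChannelFirst(),         # reorder to (C, D, H, W)
    ScaleIntensity(),             # rescale intensities to [0, 1]
    Resize((64, 64, 64)),         # assumed patch size, not the paper's setting
])

volume = preprocess("nodule_patch.nii.gz")          # hypothetical path
logits = model(volume.unsqueeze(0))                 # add a batch dimension
probability_malignant = torch.softmax(logits, dim=1)[0, 1]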
Analysis of Classification Methods for Diagnosis of Pulmonary Nodules in CT Images
Baboo, Capt Dr S Santhosh
Iyyapparaj, E
IOSR Journal of Electrical and Electronics Engineering2017Journal Article, cited 0 times
Website
LIDC-IDRI
Computed Tomography (CT)
LUNG
Classification
Random Forest
Computer Aided Detection (CADe)
The main aim of this work is to propose a novel computer-aided detection (CAD) system based on contextual clustering combined with region growing for assisting radiologists in early identification of lung cancer from computed tomography (CT) scans. Instead of a conventional thresholding approach, this work uses contextual clustering, which yields a more accurate segmentation of the lungs from the chest volume. Following segmentation, GLCM features are extracted and then classified using three different classifiers, namely Random forest, SVM, and k-NN.
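As a rough sketch of this GLCM-plus-classifiers pipeline, the Python snippet below extracts gray-level co-occurrence features with scikit-image and fits the three classifier types named above. The random patches and labels are synthetic stand-ins for segmented lung ROIs, and the distance/angle settings are assumptions rather than the paper's.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(patch_u8):
    # co-occurrence matrix over 4 directions at distance 1 (assumed settings)
    glcm = graycomatrix(patch_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]  # placeholder ROIs
y = rng.integers(0, 2, 20)                                                     # placeholder labels

X = np.vstack([glcm_features(p) for p in patches])
for clf in (RandomForestClassifier(), SVC(), KNeighborsClassifier()):
    clf.fit(X, y)   # in practice, evaluate with cross-validation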
A Heterogeneous and Multi-Range Soft-Tissue Deformation Model for Applications in Adaptive Radiotherapy
Bartelheimer, Kathrin
2020Thesis, cited 0 times
Dissertation
Thesis
Head-Neck Cetuximab
Segmentation
Model
Skeletonization
During fractionated radiotherapy, anatomical changes result in uncertainties in the applied dose distribution. With increasing steepness of applied dose gradients, the relevance of patient deformations increases. Especially in proton therapy, small anatomical changes on the order of millimeters can result in large range uncertainties and therefore in substantial deviations from the planned dose. To quantify the anatomical changes, deformation models are required. With upcoming MR-guidance, the soft-tissue deformations gain visibility, but so far only few soft-tissue models meeting the requirements of high-precision radiotherapy exist. Most state-of-the-art models either lack anatomical detail or exhibit long computation times. In this work, a fast soft-tissue deformation model is developed which is capable of considering tissue properties of heterogeneous tissue. The model is based on the chainmail (CM)-concept, which is improved by three basic features. For the first time, rotational degrees of freedom are introduced into the CM-concept to improve the characteristic deformation behavior. A novel concept for handling multiple deformation initiators is developed to cope with global deformation input. And finally, a concept for handling various shapes of deformation input is proposed to provide a high flexibility concerning the design of deformation input. To demonstrate the model flexibility, it was coupled to a kinematic skeleton model for the head and neck region, which provides anatomically correct deformation input for the bones. For exemplary patient CTs, the combined model was shown to be capable of generating artificially deformed CT images with realistic appearance. This was achieved for small-range deformations on the order of interfractional deformations, as well as for large-range deformations like an arms-up to arms-down deformation, as can occur between images of different modalities. The deformation results showed a strong improvement in biofidelity, compared to the original chainmail-concept, as well as compared to clinically used image-based deformation methods. The computation times for the model are on the order of 30 min for single-threaded calculations; by simple code parallelization, times on the order of 1 min can be achieved. Applications that require realistic forward deformations of CT images will benefit from the improved biofidelity of the developed model. Envisioned applications are the generation of plan libraries and virtual phantoms, as well as data augmentation for deep learning approaches. Due to the low computation times, the model is also well suited for image registration applications. In this context, it will contribute to an improved calculation of accumulated dose, as is required in high-precision adaptive radiotherapy.
Glioblastomas brain tumour segmentation based on convolutional neural networks
Al-Hadidi, Moh'd Rasoul
AlSaaidah, Bayan
Al-Gawagzeh, Mohammed
International Journal of Electrical and Computer Engineering (IJECE)2020Journal Article, cited 0 times
REMBRANDT
Machine Learning
Brain tumour segmentation can improve diagnostic efficiency, raise the prediction rate, and support treatment planning, which helps doctors and experts in their work. While many types of brain tumour can be classified easily, gliomas are challenging to segment because of the diffusion between the tumour and the surrounding edema. Another important challenge with this type of brain tumour is that it may grow anywhere in the brain with different shapes and sizes. Brain cancer is one of the most prominent diseases worldwide, which encourages researchers to find a high-throughput system for tumour detection and classification. Several approaches have been proposed to design automatic detection and classification systems. This paper presents an integrated framework to segment the glioma brain tumour automatically, using pixel clustering of the MRI image foreground and background, and to classify its type based on a deep learning mechanism, the convolutional neural network. In this work, a novel segmentation and classification system is proposed to detect the tumour cells and classify whether the brain image is healthy or not. After collecting data for healthy and non-healthy brain images, satisfactory results were found and registered using computer vision approaches. This approach can be used as part of a bigger diagnosis system for brain tumour detection and manipulation.
A novel CAD system to automatically detect cancerous lung nodules using wavelet transform and SVM
Abu Baker, Ayman A.
Ghadi, Yazeed
International Journal of Electrical and Computer Engineering (IJECE)2020Journal Article, cited 0 times
Website
LIDC-IDRI
Support Vector Machine (SVM)
A novel cancerous nodule detection algorithm for computed tomography (CT) images is presented in this paper. CT images are large, high-resolution images. In some cases, cancerous lung nodule lesions may be missed by the radiologist due to fatigue. The CAD system proposed in this paper can help the radiologist detect cancerous nodules in CT images. The proposed algorithm is divided into four stages. In the first stage, an enhancement algorithm is implemented to highlight suspicious regions. In the second stage, the region of interest is detected. Adaptive SVM and wavelet transform techniques are then used to reduce the detected false positive regions. The algorithm was evaluated using 60 cases (normal and cancerous) and shows high sensitivity in detecting cancerous lung nodules, with a TP ratio of 94.5% and an FP ratio of 7 clusters/image.
Computer-aided diagnostic system kinds and pulmonary nodule detection efficacy
Kadhim, Omar Raad
Motlak, Hassan Jassim
Abdalla, Kasim Karam
International Journal of Electrical and Computer Engineering (IJECE)2022Journal Article, cited 0 times
Website
LIDC-IDRI
Classification
Computer Aided Detection (CADe)
Feature Extraction
LUNG
This paper summarizes the literature on computer-aided detection (CAD) systems used to identify and diagnose lung nodules in images obtained with computed tomography (CT) scanners. The importance of developing such systems lies in the fact that manually detecting lung nodules is painstaking, sequential, and time-consuming work for radiologists. Moreover, pulmonary nodules have varied appearances and shapes, and the large number of slices generated by the scanner makes accurately locating lung nodules difficult. Manual nodule detection can miss some nodules, especially when their diameter is less than 10 mm. A CAD system is therefore an essential assistant to the radiologist in nodule detection; it reduces the time consumed by detection and improves its accuracy. The objective of this paper is to review current and previous work on lung cancer detection and lung nodule diagnosis. The literature surveyed covers a group of specialized systems in this field and the methods used in them, with an emphasis on systems based on deep learning involving convolutional neural networks.
A hybrid model based on convolutional neural networks and fuzzy kernel K-medoids for lung cancer detection
Saragih, Glori Stephani
Rustam, Zuherman
Aurelia, Jane Eva
2021Journal Article, cited 0 times
Anti-PD-1_Lung
Lung cancer is the deadliest cancer worldwide. Correct diagnosis of lung cancer is one of the main challenging tasks, so that the patient can be treated as soon as possible. In this research, we propose a hybrid model based on convolutional neural networks (CNN) and fuzzy kernel k-medoids (FKKM) for lung cancer detection, where the magnetic resonance imaging (MRI) images are passed to the CNN and its output is used as new input for the FKKM. The dataset used in this research consists of MRI images of a patient who had lung cancer and was treated with anti programmed cell death-1 (anti-PD1) immunotherapy in 2016, obtained from The Cancer Imaging Archive. The proposed method obtained 100% accuracy, sensitivity, precision, specificity, and F1-score using a radial basis function (RBF) kernel with sigma in {10^-8, 10^-4, 10^-3, 5x10^-2, 10^-1, 1, 10^4} under 20-fold cross-validation. The computation takes less than 10 seconds to pass the dataset through the CNN and 3.85 ± 0.6 seconds in the FKKM model. Thus, the proposed method is time-efficient and has high performance for detecting lung cancer from MRI images.
Adaptive multi-modality fusion network for glioma grading
Wang Li
Cao Ying
Tian Lili
Chen Qijian
Guo Shunchao
Zhang Jian
Wang Lihui
Journal of Image and Graphics2021Journal Article, cited 0 times
BraTS-TCGA-LGG
BraTS-TCGA-GBM
MICCAI
Classification
BRAIN
Objective Accurate grading of glioma is the main way to assist in formulating personalized treatment plans, but most existing studies focus on classification based on the tumor area, which must be delineated in advance and therefore cannot meet the real-time requirements of clinical intelligent auxiliary diagnosis. This paper proposes an adaptive multi-modal fusion network (AMMFNet) that achieves end-to-end accurate prediction from the originally acquired images to the glioma grade without delineating the tumor region. Methods The AMMFNet method uses four isomorphic network branches to extract multi-scale image features of different modalities; it performs feature fusion with an adaptive multi-modal feature fusion module and a dimensionality reduction module, and combines a cross-entropy classification loss with a feature embedding loss to improve glioma classification accuracy. To verify model performance, this paper uses the MICCAI (Medical Image Computing and Computer Assisted Intervention Society) 2018 public dataset for training and testing, compares against cutting-edge deep learning models and the latest glioma classification models, and uses accuracy, area under the ROC curve (AUC), and other indicators for quantitative analysis. Results Without delineating the tumor area, the AUC of this model for predicting glioma grade was 0.965; when the tumor area was used, its AUC was as high as 0.997 with an accuracy of 0.982, 1.2% higher than the current best glioma classification model, a multi-task convolutional neural network. Conclusion The proposed adaptive multimodal feature fusion network can accurately predict glioma grade without delineating tumor regions by combining multimodal and multi-semantic-level features. Keywords: glioma grading; deep learning; multimodal fusion; multiscale features; end-to-end classification
Prostate Cancer Diagnosis Based on Cascaded Convolutional Neural Networks
LIU Ke-wen
LIU Zi-long
WANG Xiang-yu
CHEN Li
LI Zhao
WU Guang-yao
LIU Chao-yang
Chinese Journal of Magnetic Resonance2020Journal Article, cited 1 times
Website
PROSTATEx
Magnetic Resonance Imaging (MRI)
Prostate cancer (PCa)
Computer Aided Detection (CADe)
Classification
Interpreting magnetic resonance imaging (MRI) data is time-consuming for radiologists and demands special expertise. Diagnosis of prostate cancer (PCa) with deep learning can also be time- and storage-intensive. This work presents an automated method for PCa detection based on a cascaded convolutional neural network (CNN) comprising a pre-network and a post-network. The pre-network is based on a Faster-RCNN and trained with prostate images in order to separate the prostate from nearby tissues; the ResNet-based post-network performs PCa diagnosis and is connected by bottlenecks and improved by applying batch normalization (BN) and global average pooling (GAP). The experimental results demonstrated that the proposed cascaded CNN achieved good classification results on the in-house datasets, with less training time and fewer computation resources.
Low-Dose CT streak artifacts removal using deep residual neural network
Computational assessment of stomach tumor volume from multi-slice computerized tomography images in presence of type 2 cancer
Chacón, Gerardo
Rodríguez, Johel E
Bermúdez, Valmore
Vera, Miguel
Hernández, Juan Diego
Vargas, Sandra
Pardo, Aldo
Lameda, Carlos
Madriz, Delia
Bravo, Antonio J
F1000Research2018Journal Article, cited 0 times
Website
TCGA-STAD
STOMACH
region growing method
Algorithm Development
Background: Multi-slice computerized tomography (MSCT) is a medical imaging modality that has been used to determine the size and location of stomach cancer, and it is considered the best modality for the staging of gastric cancer. One way to assess type 2 cancer of the stomach is by detecting the pathological structure with an image segmentation approach. Tumor segmentation of MSCT gastric cancer images enables the diagnosis of the disease condition for a given patient without using an invasive method such as surgical intervention. Methods: This approach consists of three stages. The initial stage, image enhancement, consists of a method for correcting non-homogeneities present in the background of MSCT images. Then, a segmentation stage using a clustering method allows the adenocarcinoma morphology to be obtained. In the third stage, the pathology region is reconstructed and then visualized with a three-dimensional (3-D) computer graphics procedure based on the marching cubes algorithm. In order to validate the segmentations, the Dice score is used as a metric for comparing the segmentations obtained using the proposed method with ground truth volumes traced by a clinician. Results: A total of 8 datasets of diagnosed patients, from the cancer data collection of the Cancer Genome Atlas Stomach Adenocarcinoma (TCGA-STAD) project, are considered in this research. The volume of the type 2 stomach tumor is estimated from the 3-D shape computationally segmented from each dataset. These 3-D shapes are computationally reconstructed and then used to assess the macroscopic morphopathology features of this cancer. Conclusions: The segmentations obtained are useful for assessing stomach type 2 cancer qualitatively and quantitatively. In addition, this type of segmentation allows the development of computational models for planning virtual surgical processes related to type 2 cancer.
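The validation metric and surface-reconstruction steps mentioned above are easy to sketch. The Python snippet below computes a Dice score between two binary volumes and runs marching cubes with scikit-image; both volumes are synthetic placeholders, not data from the study.

import numpy as np
from skimage import measure

def dice(seg, truth):
    # Dice = 2|A intersect B| / (|A| + |B|) on binary volumes
    inter = np.logical_and(seg, truth).sum()
    return 2.0 * inter / (seg.sum() + truth.sum())

seg = np.zeros((64, 64, 64), dtype=bool); seg[20:40, 20:40, 20:40] = True  # placeholder segmentation
gt = np.zeros_like(seg); gt[22:42, 20:40, 20:40] = True                    # placeholder ground truth
print(f"Dice = {dice(seg, gt):.3f}")

# marching cubes extracts the 3-D tumor surface for visualization
verts, faces, _, _ = measure.marching_cubes(seg.astype(np.uint8), level=0.5)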
Differentiation of invasive ductal and lobular carcinoma of the breast using MRI radiomic features: a pilot study
Maiti, S.
Nayak, S.
Hebbar, K. D.
Pendem, S.
F1000Res2024Journal Article, cited 0 times
Website
Duke-Breast-Cancer-MRI
*Carcinoma
Lobular/diagnostic imaging/pathology
Pilot Projects
Retrospective Studies
Radiomics
*Breast Neoplasms/diagnostic imaging/pathology
Magnetic Resonance Imaging/methods
Invasive carcinoma
MRI Sequences
Magnetic Resonance Imaging (MRI)
Noninvasive diagnosis
Radiomic features
BACKGROUND: Breast cancer (BC) is one of the main causes of cancer-related mortality among women. For clinical management to help patients survive longer and spend less time on treatment, early and precise cancer identification and differentiation of breast lesions are crucial. The aim was to investigate the accuracy of radiomic features (RF) extracted from dynamic contrast-enhanced Magnetic Resonance Imaging (DCE MRI) for differentiating invasive ductal carcinoma (IDC) from invasive lobular carcinoma (ILC). METHODS: This is a retrospective study. IDC cases from 30 patients and ILC cases from 28 patients in the Duke breast cancer MRI dataset of The Cancer Imaging Archive (TCIA) were included. RF categories such as shape based, Gray level dependence matrix (GLDM), Gray level co-occurrence matrix (GLCM), First order, Gray level run length matrix (GLRLM), Gray level size zone matrix (GLSZM), and Neighbouring gray tone difference matrix (NGTDM) were extracted from the DCE-MRI sequence using 3D Slicer. Maximum relevance and minimum redundancy (mRMR) was applied using Google Colab to identify the top fifteen relevant radiomic features. The Mann-Whitney U test was performed to identify significant RF for differentiating IDC and ILC. Receiver Operating Characteristic (ROC) curve analysis was performed to ascertain the accuracy of RF in distinguishing between IDC and ILC. RESULTS: Ten DCE MRI-based RFs used in our study showed a significant difference (p <0.001) between IDC and ILC. We noticed that DCE RFs such as GLRLM gray level variance (sensitivity (SN) 97.21%, specificity (SP) 96.2%, area under curve (AUC) 0.998), GLCM difference average (SN 95.72%, SP 96.34%, AUC 0.983), and GLCM interquartile range (SN 95.24%, SP 97.31%, AUC 0.968) had the strongest ability to differentiate IDC and ILC. CONCLUSIONS: MRI-based RF derived from DCE sequences can be used in clinical settings to differentiate malignant lesions of the breast, such as IDC and ILC, without requiring intrusive procedures.
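A per-feature analysis of the kind reported here takes only a few lines in Python with SciPy and scikit-learn. The sketch below applies the Mann-Whitney U test and ROC AUC to one hypothetical radiomic feature; the values are synthetic stand-ins, not the study's extractions.

import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
idc = rng.normal(1.0, 0.3, 30)   # synthetic feature values for the 30 IDC cases
ilc = rng.normal(1.6, 0.3, 28)   # synthetic feature values for the 28 ILC cases

u_stat, p_value = mannwhitneyu(idc, ilc)        # is the group difference significant?
labels = np.r_[np.zeros(30), np.ones(28)]       # 0 = IDC, 1 = ILC
auc = roc_auc_score(labels, np.r_[idc, ilc])    # discriminative ability of the feature
print(f"p = {p_value:.2e}, AUC = {auc:.3f}")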
A Transfer Representation Learning Approach for Breast Cancer Diagnosis from Mammograms using EfficientNet Models
Oza, Parita Rajiv
Sharma, Paawan
Patel, Samir
Scalable Computing: Practice and Experience2022Journal Article, cited 0 times
Website
CBIS-DDSM
BREAST
Computer Aided Diagnosis (CADx)
Transfer learning
Convolutional Neural Network (CNN)
Breast cancer is a deadly disease that affects the lives of millions of women throughout the world, and the number of cases has increased over time. Preventing this disease is difficult and its causes remain unidentified, but the survival percentage can be improved if it is diagnosed early. Computer-assisted diagnosis (CAD) of breast cancer has seen a lot of improvement thanks to advances in deep learning; with the notable advancement of deep neural networks, diagnostic capabilities are nearing a human expert's. In this paper, we used EfficientNet to classify mammograms. This model introduced a new concept of model scaling called compound scaling, a strategy that scales the model by adding more layers to extend the receptive field along with more channels to capture the detailed features of larger inputs. We also compare the performance of various variants of EfficientNet on the CBIS-DDSM mammogram dataset. We used an optimal fine-tuning procedure to demonstrate the importance of transfer learning (TL) during training.
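The transfer-learning recipe described here follows a standard pattern; a minimal Keras sketch is shown below. The variant (B0), input size, and classifier head are assumptions for illustration, not the paper's exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB0

base = EfficientNetB0(include_top=False, weights="imagenet",
                      input_shape=(224, 224, 3))
base.trainable = False                       # freeze ImageNet features first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # benign vs. malignant mammogram
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Fine-tuning step: unfreeze the top blocks and re-train with a low learning rate.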
Semantic Composition of Data Analytical Processes
Bednár, Peter
Ivančáková, Juliana
Sarnovský, Martin
Acta Polytechnica Hungarica2024Journal Article, cited 0 times
Website
C_NMC_2019 Dataset: ALL Challenge dataset of ISBI 2019
Algorithm Development
Semantic features
Ontology
This paper presents a semantic framework for the description and automatic composition of data analytical processes. The framework specifies how to describe goals, input data, outputs, and the various data operators for data pre-processing and modelling that can be applied to achieve the goals. The main contribution of this paper is the formal language for the specification of the preconditions, postconditions, inputs, and outputs of the data operators. The formal description of the operators with logical expressions allows automatic composition of operators into complex workflows achieving the specified goals of the data analysis. The evaluation of the semantic framework was performed on two real-world use cases from the medical domain, where the automatically generated workflow was compared with an implementation manually programmed by a data scientist.
Open-Source Tools for Dense Facial Tissue Depth Mapping of Computed Tomography Models
Simmons-Ehrhardt, Terrie
Falsetti, Catyana
Falsetti, Anthony B.
Ehrhardt, Christopher J.
2018Journal Article, cited 0 times
Head-Neck Cetuximab
QIN-HEADNECK
TCGA-HNSC
TCGA-THCA
Computed tomography (CT) scans provide anthropologists with a resource to generate three-dimensional (3D) digital skeletal material to expand quantification methods and build more standardized reference collections. The ability to visualize and manipulate the bone and skin of the face simultaneously in a 3D digital environment introduces a new way for forensic facial approximation practitioners to access and study the face. Craniofacial relationships can be quantified with landmarks or with surface-processing software that can quantify the geometric properties of the entire 3D facial surface. This article describes tools for the generation of dense facial tissue depth maps (FTDMs) using deidentified head CT scans of modern Americans from the Cancer Imaging Archive public repository and the open-source program Meshlab. CT scans of 43 females and 63 males from the archive were segmented and converted to 3D skull and face models using Mimics and exported as stereolithography files. All subsequent processing steps were performed in Meshlab. Heads were transformed to a common orientation and coordinate system using the coordinates of nasion, left orbitale, and left and right porion. Dense FTDMs were generated on hollowed, cropped face shells using the Hausdorff sampling filter. Two new point clouds consisting of the 3D coordinates for both skull and face were colorized on an RGB (red-green-blue) scale from 0.0 (red) to 40.0-mm (blue) depth values and exported as polygon (PLY) file format models with tissue depth values saved in the "vertex quality" field. FTDMs were also split into 1.0-mm increments to facilitate viewing of common depths across all faces. In total, 112 FTDMs were generated for 106 individuals. Minimum depth values ranged from 1.2 mm to 3.4 mm, indicating a common range of starting depths for most faces regardless of weight, as well as common locations for these values over the nasal bones, lateral orbital margins, and forehead superior to the supraorbital border. Maximum depths were found in the buccal region and neck, excluding the nose. Individuals with multiple scans at visibly different weights presented the greatest differences within larger depth areas such as the cheeks and neck, with little to no difference in the thinnest areas. A few individuals with minimum tissue depths at the lateral orbital margins and thicker tissues over the nasal bones (>3.0 mm) suggested the potential influence of nasal bone morphology on tissue depths. This study produced visual quantitative representations of the face and skull for forensic facial approximation research and practice that can be further analyzed or interacted with using free software. The presented tools can be applied to preexisting CT scans, traditional or cone beam, adult or subadult individuals, with or without landmarks, and regardless of head orientation, for forensic applications as well as for studies of facial variation and facial growth. In contrast with other facial mapping studies, this method produced both skull and face points based on replicable geometric relationships, producing multiple data outputs that are easily readable with software that is openly accessible.
Improved Machine Learning Algorithms with Application to Medical Diagnosis
Medical data generated in hospitals are an increasing source of information for automatic medical diagnosis. These data contain latent patterns and correlations that can result in better diagnosis when appropriately processed. Most applications of machine learning (ML) to these patient records have used the ML algorithms directly, which usually results in suboptimal performance because most medical datasets are quite imbalanced. Also, labelling the enormous medical data is a challenging and expensive task. In order to solve these problems, recent research has focused on the development of improved ML methods, mainly preprocessing pipelines and feature learning methods. This thesis presents four machine learning approaches aimed at improving medical diagnosis performance using publicly available datasets. Firstly, a method was proposed to predict heart disease risk using an unsupervised sparse autoencoder (SAE) and an artificial neural network. Secondly, a method was developed by stacking multiple SAEs to achieve improved representation learning, combined with a softmax classifier for the classification task. Thirdly, an approach was developed for the classification of pulmonary lesions indicating lung cancer, using an improved predictive sparse decomposition (PSD) method for unsupervised feature learning and a densely connected convolutional network (DenseNet) for classification. Lastly, an enhanced ensemble learning method was developed to predict heart disease effectively. The proposed methods obtained better performance compared to other ML algorithms and some techniques available in recent literature. This research has also shown that ML algorithms tend to achieve improved performance when trained with relevant data, and the study further demonstrates the effectiveness of an enhanced ensemble learning method in disease prediction. This thesis also provides directions for future research.
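To make the sparse-autoencoder-plus-softmax pattern concrete, here is a minimal Keras sketch of one stage. The feature dimension and layer sizes are assumptions, and an L1 activity penalty stands in for the classic KL-divergence sparsity term.

import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

inputs = layers.Input(shape=(30,))                 # hypothetical feature vector size
code = layers.Dense(16, activation="sigmoid",
                    activity_regularizer=regularizers.l1(1e-4))(inputs)  # sparse code
recon = layers.Dense(30, activation="linear")(code)

autoencoder = models.Model(inputs, recon)          # unsupervised pre-training stage
autoencoder.compile(optimizer="adam", loss="mse")

# after pre-training, reuse the encoder and attach a softmax classifier head
encoder = models.Model(inputs, code)
classifier = models.Sequential([encoder, layers.Dense(2, activation="softmax")])
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")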
A New Adaptive-Weighted Fusion Rule for Wavelet based PET/CT Fusion
Barani, R
Sumathi, M
International Journal of Signal Processing, Image Processing and Pattern Recognition2016Journal Article, cited 1 times
Website
RIDER Lung PET-CT
Image fusion
In recent years the Wavelet Transform (WT) has played an important role in various applications of signal and image processing. In image processing, WT is useful in many domains such as image denoising, feature segmentation, compression, restoration, and image fusion. In WT-based image fusion, the source images are first decomposed into approximation and detail coefficients, and the coefficients are then combined using suitable fusion rules. The fused image is reconstructed by applying the inverse WT to the combined coefficients. This paper proposes a new adaptive fusion rule for combining the approximation coefficients of CT and PET images. The excellence of the proposed fusion rule is demonstrated by measuring the image information metrics EOG, SD, and ENT on the decomposed approximation coefficients. The detail coefficients are combined using several existing fusion rules. The resultant fused images are quantitatively analyzed using non-reference image quality, image fusion, and error metrics. The analysis shows that the newly proposed fusion rule is more suitable for extracting the complementary information from CT and PET images, and produces a fused image that is rich in content with good contrast and sharpness.
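A single-level sketch of wavelet-domain CT/PET fusion with PyWavelets is shown below. The energy-based weight on the approximation band is only an illustration of an adaptive rule in the spirit of the paper, not its exact formula, and the inputs are random placeholders.

import numpy as np
import pywt

def fuse(ct, pet, wavelet="db2"):
    cA1, details1 = pywt.dwt2(ct, wavelet)
    cA2, details2 = pywt.dwt2(pet, wavelet)
    # approximation band: weight each source by its local energy (illustrative rule)
    w = cA1**2 / (cA1**2 + cA2**2 + 1e-12)
    cA = w * cA1 + (1 - w) * cA2
    # detail bands: keep the coefficient with the larger magnitude
    details = tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                    for a, b in zip(details1, details2))
    return pywt.idwt2((cA, details), wavelet)

rng = np.random.default_rng(2)
fused = fuse(rng.random((128, 128)), rng.random((128, 128)))  # placeholder slices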
An Level Set Evolution Morphology Based Segmentation of Lung Nodules and False Nodule Elimination by 3D Centroid Shift and Frequency Domain DC Constant Analysis
Krishnamurthy, Senthilkumar
Narasimhan, Ganesh
Rengasamy, Umamaheswari
International Journal of u- and e- Service, Science and Technology2016Journal Article, cited 0 times
Website
LIDC-IDRI
Segmentation
LUNG
Classification
A Level Set Evolution with Morphology (LSEM) based segmentation algorithm is proposed in this work to segment all possible lung nodules from a series of CT scan images. Not all segmented nodule candidates were cancerous in nature; initially the vessels and calcifications were also segmented as nodule candidates. A structural feature analysis was carried out to remove the vessels. Nodules with a large centroid shift across consecutive slices were eliminated, since a malignant nodule's position does not usually deviate. The calcifications were eliminated by frequency domain analysis. The DC constant of each nodule candidate was computed in the frequency domain; nodule candidates with a high DC constant value could be calcifications, as calcification patterns are homogeneous in nature. This algorithm was applied to a database of 40 patient cases with 58 malignant nodules. The algorithms proposed in this paper precisely detected 55 malignant nodules and failed to detect 3, a sensitivity of 95%. Further, the algorithm correctly eliminated 778 tissue clusters that were initially segmented as nodules; however, 79 non-malignant tissue clusters were detected as malignant nodules. The false positive rate of this algorithm was therefore 1.98 per patient.
The Clinical Applications of 3D Polymer Gel Dosimetry in Commissioning Stereotactic Radiosurgery (SRS) and Spatially Fractionated Radiotherapy (SFRT)
Radiation therapy is used to treat various types of cancers, and the technologies in radiation delivery continue to advance rapidly. Currently, we are able to accurately target a radiation beam to a tumour volume by conforming the shape of the beam to the complex tumour shape. With that, however, there is a need for radiation dose detection tools that accurately capture the complex dose distribution in 3D space in order to verify the accuracy and precision of a treatment delivery. The purpose of this work is to implement a promising solution to this clinical challenge that utilizes a 3D NIPAM polymer gel dosimetry system with CBCT readout to verify the dosimetric and spatial accuracy of stereotactic radiosurgery (SRS) and spatially fractionated radiotherapy (SFRT) techniques.
The three main objectives of this work are: 1) to evaluate the reproducibility of a NIPAM gel dosimetry workflow between two institutions by implementing three identical verification plans, in order to demonstrate its wide-scale applicability in commissioning advanced radiotherapy techniques (two separate gel analysis pipelines were utilized, based on each institution's preference); 2) to commission two SRS techniques, HyperArc® (Varian Medical Systems, Palo Alto, CA) to treat brain metastases and a virtual cone technique to treat trigeminal neuralgia (in the virtual cone study, an end-to-end spatial accuracy test of the treatment delivery was performed using a 3D-printed anthropomorphic phantom, and the dosimetric accuracy of the gel dosimetry system was benchmarked against the gold-standard film dosimeter); and 3) to address the verification of SFRT, where using a traditional dosimeter alone to verify treatment delivery accuracy is incredibly challenging and inefficient due to the heterogeneous dose distribution generated in three-dimensional space.
Therefore, the goal of the final study is to demonstrate the application of the gel dosimetry system to commission the SFRT technique. A semi-automated SFRT planning approach was utilized to generate a verification plan on a gel dosimeter for analysis.
This work presents novel applications of a gel dosimetry workflow in two advanced radiotherapy deliveries (SRS and SFRT). The dosimetric and spatial accuracy with this type of gel dosimetry analysis is invaluable for the clinical commissioning process.
Lung Cancer Nodule Detection by Using Selective Search Feature Extraction and Segmentation Approach of Deep Neural Network
Sahoo, Satyasangram
Borugadda, Prem Kumar
Lakhsmi, R.
International Transaction Journal of Engineering, Management, & Applied Sciences & Technologies2022Journal Article, cited 0 times
Website
NSCLC-Radiomics
Segmentation
LUNG
The study addresses the implementation of selective search for the classification of cancer nodules in the lungs. Selective search integrates the power of both segmentation and exhaustive search for detecting an object in an image. In addition, features from the cancer-stage classifier are used to organize clusters from the histogram and to separate inter-class variance; selective search makes use of class variance to trace out meta-similarities. A neural network is then applied for the cancer stage classification.
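Selective search itself is available off the shelf; the Python sketch below shows the candidate-region stage with OpenCV (requires the opencv-contrib-python package). The exported CT slice path is hypothetical, and the downstream stage classifier is out of scope here.

import cv2

img = cv2.imread("ct_slice.png")            # hypothetical exported CT slice
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(img)
ss.switchToSelectiveSearchFast()            # fast mode trades recall for speed
rects = ss.process()                        # (x, y, w, h) region proposals
print(f"{len(rects)} candidate regions")    # these would feed the nodule classifier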
Application of Homomorphic Encryption on Neural Network in Prediction of Acute Lymphoid Leukemia
Khilji, Ishfaque Qamar
Saha, Kamonashish
Amin, Jushan
Iqbal, Muhammad
International Journal of Advanced Computer Science and Applications2020Journal Article, cited 0 times
C_NMC_2019 Dataset: ALL Challenge dataset of ISBI 2019
Acute lymphoblastic leukemia (ALL)
Pathology
Convolutional Neural Network (CNN)
Classification
Computer Aided Diagnosis (CADx)
Machine Learning
Machine learning is now a widely used mechanism, and applying it in sensitive fields like medical and financial data has only made things easier. Accurate diagnosis of cancer is essential to treating it properly. Medical tests regarding cancer are currently quite expensive and not available in many parts of the world. CryptoNets, on the other hand, demonstrates the use of neural networks over data encrypted with homomorphic encryption. This project demonstrates the use of homomorphic encryption for outsourcing neural-network predictions in the case of Acute Lymphoid Leukemia (ALL). By using CryptoNets, the patients or doctors in need of the service can encrypt their data using homomorphic encryption and send only the encrypted message to the service provider (hospital or model owner). Since homomorphic encryption allows the provider to operate on the data while it is encrypted, the provider can make predictions using a pre-trained neural network while the data remains encrypted throughout the process, finally sending the prediction to the user, who can decrypt the result. During the process the service provider (hospital or model owner) gains no knowledge about the data that was used or the result, since everything is encrypted throughout. Our work proposes a neural network model able to predict ALL (Acute Lymphoid Leukemia) with approximately 80% accuracy using the C_NMC Challenge dataset. Prior to building our own model, we pre-processed the dataset using a different approach. We then ran different machine learning and neural network models such as VGG16, SVM, AlexNet, and ResNet50 and compared the validation accuracies of these models with our own model, which ultimately gives better accuracy than the rest of the models used. We then used our own pre-trained neural network to make predictions using CryptoNets. We were able to achieve an encrypted prediction accuracy of about 78%, which is close to the 80% validation accuracy of our own CNN model for prediction of Acute Lymphoid Leukemia (ALL).
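The paper's scheme is CryptoNets; as an illustrative stand-in for the same idea (inference on ciphertexts), the Python sketch below encrypts a feature vector with the TenSEAL CKKS scheme and evaluates a single hypothetical linear layer on the encrypted data. The features and weights are placeholders, not the paper's model.

import tenseal as ts

# client side: build a CKKS context and encrypt the input features
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()

features = [0.12, 0.48, 0.33, 0.07]           # hypothetical patient features
enc_x = ts.ckks_vector(ctx, features)          # ciphertext sent to the server

# server side: evaluate a (hypothetical) trained linear layer on the ciphertext
weights, bias = [0.9, -0.4, 0.2, 0.5], 0.1
enc_score = enc_x.dot(weights) + bias          # computed without decryption

# back on the client: only the secret-key holder can decrypt the score
print(enc_score.decrypt())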
Automatic Colorectal Segmentation with Convolutional Neural Network
Guachi, Lorena
Guachi, Robinson
Bini, Fabiano
Marinozzi, Franco
Computer-Aided Design and Applications2019Journal Article, cited 3 times
Website
CT-COLONOGRAPHY
Segmentation
Convolutional Neural Network (CNN)
This paper presents a new method for colon tissue segmentation on Computed Tomography images which takes advantage of deep, hierarchical learning of colon features through Convolutional Neural Networks (CNN). The proposed method works robustly, reducing misclassified colon-tissue pixels introduced by the presence of noise, artifacts, unclear edges, and other organs or areas characterized by the same intensity value as the colon. Patch analysis allows the classification of each center pixel as a colon tissue or background pixel. Experimental results demonstrate that the proposed method achieves higher sensitivity and specificity with respect to three state-of-the-art methods.
Preliminary Detection and Analysis of Lung Cancer on CT images using MATLAB: A Cost-effective Alternative
Khan, Md Daud Hossain
Ahmed, Mansur
Bach, Christian
Journal of Biomedical Engineering and Medical Imaging2016Journal Article, cited 0 times
LUNG
MATLAB
Computer Aided Detection (CADe)
Non-Small Cell Lung Cancer (NSCLC)
Computed Tomography (CT)
Cancer is the second leading cause of death worldwide. Lung cancer has the highest mortality, with non-small cell lung cancer (NSCLC) being the most prevalent subtype. Despite a gradual reduction in incidence, approximately 585,720 new cancer patients were diagnosed in 2014, the majority from low- and middle-income countries (LMICs). Limited availability of diagnostic equipment, poorly trained medical staff, late revelation of symptoms, difficulty classifying the exact lung cancer subtype, and overall poor patient access to medical providers result in late or terminal-stage diagnosis and delayed treatment. Therefore, the need for an economical, simple, fast computed image-processing system to aid decisions regarding staging and resection, especially for LMICs, is clearly imminent. In this study, we developed a preliminary program using MATLAB that accurately detects cancer cells in CT images of the lungs of affected patients, measures the area of the region of interest (ROI) or tumor mass, and helps determine nodal spread. A preset value for nodal spread was used, which can be altered accordingly.
Automated lung tumor detection and diagnosis in CT Scans using texture feature analysis and SVM
Adams, Tim
Dörpinghaus, Jens
Jacobs, Marc
Steinhage, Volker
Communication Papers of the Federated Conference on Computer Science and Information Systems2018Journal Article, cited 0 times
Website
SPIE-AAPM Lungx
Haralick texture features
support vector machine (SVM)
Radiomics
Quantitative Impact of Label Noise on the Quality of Segmentation of Brain Tumors on MRI scans
Marcinkiewicz, Michał
Mrukwa, Grzegorz
2019Conference Proceedings, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Machine Learning
Over the last few years, deep learning has proven to be a great solution to many problems, such as image or text classification. Recently, deep learning-based solutions have outperformed humans on selected benchmark datasets, yielding a promising future for scientific and real-world applications. Training deep learning models requires vast amounts of high-quality data to achieve such supreme performance. In real-world scenarios, obtaining a large, coherent, and properly labeled dataset is a challenging task. This is especially true in medical applications, where high-quality data and annotations are scarce and the number of expert annotators is limited. In this paper, we investigate the impact of corrupted ground-truth masks on the performance of a neural network for a brain tumor segmentation task. Our findings suggest that a) the performance degrades about 8% less than would be expected from simulations, b) a neural network learns the simulated biases of annotators, and c) biases can be partially mitigated by using an inversely-biased dice loss function.
Medical Imaging Segmentation Assessment via Bayesian Approaches to Fusion, Accuracy and Variability Estimation with Application to Head and Neck Cancer
With the advancement of technology, medical imaging has become a fast-growing area of research. Some imaging questions require little physician analysis, such as diagnosing a broken bone using a 2-D X-ray image. More complicated questions using 3-D scans, such as computerized tomography (CT), can be much more difficult to answer; for example, estimating tumor growth to evaluate malignancy, which informs whether intervention is necessary. This requires careful delineation of different structures in the image, for example what is tumor versus what is normal tissue; this is referred to as segmentation. Currently, the gold standard of segmentation is for a radiologist to manually trace structure edges in the 3-D image; however, this can be extremely time-consuming, and manual segmentation results can differ drastically between and even within radiologists. A more reproducible, less variable, and more time-efficient segmentation approach would drastically improve medical treatment. This potential, as well as the continued increase in computing power, has led to computationally intensive semiautomated segmentation algorithms. Segmentation algorithms' widespread use is limited by the difficulty of validating their performance. Fusion models, such as STAPLE, have been proposed as a way to combine multiple segmentations into a consensus ground truth; this allows for evaluation of both manual and semiautomated segmentation in relation to the consensus ground truth. Once a consensus ground truth is obtained, a multitude of approaches have been proposed for evaluating different aspects of segmentation performance: segmentation accuracy, and between- and within-reader variability. The focus of this dissertation is threefold. First, a simulation-based tool is introduced to allow for the validation of fusion models. The simulation properties closely follow a real dataset in order to ensure that they mimic reality. Second, a statistical hierarchical Bayesian fusion model is proposed in order to estimate a consensus ground truth within a robust statistical framework. The model is validated using the simulation tool and compared to other fusion models, including STAPLE; additionally, the model is applied to real datasets and the consensus ground truth estimates are compared across different fusion models. Third, a statistical hierarchical Bayesian performance model is proposed in order to estimate segmentation-method-specific accuracy and between- and within-reader variability. An extensive simulation study is performed to validate the model's parameter estimation and coverage properties, and the model is fit to a real data source with performance estimates summarized.
Detection of Brain Tumour in MRI Scan Images using Tetrolet Transform and SVM Classifier
Babu, B Shoban
Varadarajan, S
Indian Journal of Science and Technology2017Journal Article, cited 1 times
Website
REMBRANDT
Classification
Support Vector Machine (SVM)
Brain
Comparison of Accuracy of Color Spaces in Cell Features Classificationin Images of Leukemia types ALL and MM
Espinoza-Del Angel, Cinthia
Femat-Diaz, Aurora
Mexican Journal of Biomedical Engineering2022Journal Article, cited 0 times
Website
SN-AM
Leukemia
Pathomics
Classification
Model
Algorithm Development
This study presents a methodology for identifying the color space that provides the best performance in an image processing application. When measurements are performed without selecting the appropriate color model, the accuracy of the results is considerably altered. This matters in computation, especially when a diagnosis is based on stained cell microscopy images. This work shows how proper selection of the color model provides better characterization of two types of cancer, acute lymphoid leukemia and multiple myeloma. The methodology uses images from a public database. First, the nuclei are segmented, and statistical moments are calculated for class identification. Afterwards, a principal component analysis is performed to reduce the extracted features and identify the most significant ones. Finally, the predictive model is evaluated using the k-nearest neighbor algorithm and a confusion matrix. For the images used, the results showed that the CIE L*a*b color space best characterized the analyzed cancer types, with an average accuracy of 95.52%. With an accuracy of 91.81%, the RGB and CMY spaces followed. The HSI and HSV spaces had accuracies of 87.86% and 89.39%, respectively, and the worst performer was grayscale, with an accuracy of 55.56%.
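The color-space comparison reduces to a small feature-extraction-and-classification loop. The Python sketch below converts patches to CIE L*a*b with scikit-image, computes per-channel statistical moments, and scores a k-NN model; the patches and labels are synthetic placeholders rather than the study's nucleus crops.

import numpy as np
from skimage.color import rgb2lab
from scipy.stats import skew
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def channel_moments(rgb_patch):
    lab = rgb2lab(rgb_patch)                 # the best-performing space in the study
    chans = lab.reshape(-1, 3)
    # mean, standard deviation, and skewness per channel
    return np.hstack([chans.mean(0), chans.std(0), skew(chans, axis=0)])

rng = np.random.default_rng(3)
patches = rng.random((40, 16, 16, 3))        # placeholder nucleus patches
y = rng.integers(0, 2, 40)                   # placeholder labels: 0 = ALL, 1 = MM

X = np.vstack([channel_moments(p) for p in patches])
print(cross_val_score(KNeighborsClassifier(), X, y, cv=5).mean())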
Consistency and Comparison of Medical Image Registration-Segmentation and Mathematical Model for Glioblastoma Volume Progression
IRMAK, Emrah
2020Journal Article, cited 0 times
RIDER NEURO MRI
Tumor volume progression and calculation is a very common task in cancer research and image processing. Tumor volume analysis can be carried out in two ways: using mathematical formulas, or using an image registration-segmentation method. In this paper, registration of multiple brain imaging scans with segmentation is used to investigate brain tumor growth in a 3-dimensional (3D) manner. Using a 3D medical image registration-segmentation algorithm, multiple MR scans of a patient who has a brain tumor are registered with MR images of the same patient acquired at a different time, so that growth of the tumor inside the patient's brain can be investigated. Brain tumor volume measurement is also achieved using formulas based on a mathematical model. Both the registration-segmentation method and the mathematical-model-based method were applied to 19 patients, and satisfactory results were obtained. An advantage of the registration-segmentation method for brain tumor investigation is that grown, diminished, and unchanged parts of each patient's brain tumor are investigated and computed individually in a 3D manner over time. This paper is intended to provide a comprehensive reference for researchers involved in medical image registration, segmentation, and tumor growth investigation.
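Longitudinal registration of the kind described is commonly done with SimpleITK; the Python sketch below aligns two time points with mutual information and a rigid transform. It is a generic recipe under assumed file names, not the paper's specific algorithm.

import SimpleITK as sitk

fixed = sitk.ReadImage("scan_t0.nii.gz", sitk.sitkFloat32)    # hypothetical baseline scan
moving = sitk.ReadImage("scan_t1.nii.gz", sitk.sitkFloat32)   # hypothetical follow-up scan

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)
# resample the follow-up scan into the baseline frame before comparing tumor volumes
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)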
Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma
Dunn, William D Jr
Aerts, Hugo J W L
Cooper, Lee A
Holder, Chad A
Hwang, Scott N
Jaffe, Carle C
Brat, Daniel J
Jain, Rajan
Flanders, Adam E
Zinn, Pascal O
Colen, Rivka R
Gutman, David A
J Neuroimaging Psychiatry Neurol2016Journal Article, cited 0 times
Website
Radiogenomics
Magnetic resonance imaging (MRI)
Segmentation
TCGA
3D Slicer
BRAIN
Background: Radiological assessments of biologically relevant regions in glioblastoma have been associated with genotypic characteristics, implying a potential role in personalized medicine. Here, we assess the reproducibility and association with survival of two volumetric segmentation platforms and explore how methodology could impact subsequent interpretation and analysis. Methods: Post-contrast T1- and T2-weighted FLAIR MR images of 67 TCGA patients were segmented into five distinct compartments (necrosis, contrast-enhancement, FLAIR, post-contrast abnormal, and total abnormal tumor volumes) by two quantitative image segmentation platforms - 3D Slicer and a method based on Velocity AI and FSL. We investigated the internal consistency of each platform by correlation statistics, association with survival, and concordance with consensus neuroradiologist ratings using ordinal logistic regression. Results: We found high correlations between the two platforms for FLAIR, post-contrast abnormal, and total abnormal tumor volumes (Spearman's r(67) = 0.952, 0.959, and 0.969, respectively). Only modest agreement was observed for necrosis and contrast-enhancement volumes (r(67) = 0.693 and 0.773, respectively), likely arising from differences in the manual and automated segmentation methods applied to these regions by 3D Slicer and Velocity AI/FSL, respectively. Survival analysis based on AUC revealed significant predictive power of both platforms for the following volumes: contrast-enhancement, post-contrast abnormal, and total abnormal tumor volumes. Finally, ordinal logistic regression demonstrated correspondence to manual ratings for several features. Conclusion: Tumor volume measurements from both volumetric platforms produced highly concordant and reproducible estimates for general features. As automated or semi-automated volumetric measurements replace manual linear or area measurements, it will become increasingly important to keep in mind that measurement differences between segmentation platforms for more detailed features could influence downstream survival or radiogenomic analyses.
Optimization of Deep CNN Techniques to Classify Breast Cancer and Predict Relapse
Prasad, Venkata Vara
Venkataramana, Lokeswari Y.
Keerthana, S.
Subha, R.
Journal of Advanced Zoology2023Journal Article, cited 0 times
Duke-Breast-Cancer-MRI
Magnetic Resonance Imaging (MRI)
Computer Aided Detection (CADe)
Support Vector Machine (SVM)
Breast cancer is a fatal disease with high rates of morbidity and mortality. Finding the right diagnosis is one of the most crucial steps in breast cancer treatment. Doctors can use machine learning (ML) and deep learning techniques to aid diagnosis. This work devises a methodology for classifying breast cancer into its molecular subtypes and predicting relapse. The objective is to compare the performance of a deep CNN, a tuned CNN, and a hypercomplex-valued CNN, and to infer the results, thus automating the classification process. The traditional method doctors use for detection is tedious and time-consuming; it employs multiple procedures, including MRI, CT scanning, aspiration, and blood tests, as well as image testing. The proposed approach uses image processing techniques to detect irregular breast tissues in the MRI. Survivors of breast cancer remain at risk of relapse after remission, and once the disease relapses, the survival rate is much lower. A thorough analysis of data can potentially identify risk factors and reduce the risk of relapse in the first place. An SVM (Support Vector Machine) module with GridSearchCV for hyperparameter tuning is used to identify patterns in patients who experience a relapse, so that these patterns can be used to predict relapse before it occurs. The traditional deep learning CNN model achieved an accuracy of 27%, the tuned CNN model achieved an accuracy of 92%, and the hypercomplex-valued CNN achieved an accuracy of 98%. The SVM model achieved an accuracy of 89%, and after tuning the hyperparameters with GridSearchCV it achieved an accuracy of 98%.
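SVM hyperparameter tuning with GridSearchCV, as named in the abstract, follows a standard scikit-learn pattern. The sketch below is a generic illustration with placeholder data and grid values, not the study's actual configuration.

```python
# Minimal sketch: grid search over SVM hyperparameters with 5-fold CV.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(120, 30)          # per-patient features (placeholder)
y = np.random.randint(0, 2, 120)     # 1 = relapse, 0 = no relapse

pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC())])
grid = {"svm__C": [0.1, 1, 10, 100],
        "svm__gamma": ["scale", 0.01, 0.001],
        "svm__kernel": ["rbf", "linear"]}
search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy").fit(X, y)
print(search.best_params_, search.best_score_)
```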
Classification of Abnormalities in Mammography Images Using Local Binary Patterns and Variants
Tiryaki, Volkan Müjdat
2020Journal Article, cited 0 times
CBIS-DDSM
Machine learning research is of great importance for classifying the abnormalities in mammograms used for breast cancer diagnosis. In this study, mammography images containing masses and calcifications from the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM) were classified. Texture features were extracted from the images in the dataset using the Local Binary Pattern (LBP), Local Derivative Pattern, Local Tetra Pattern, and Noise-Resistant Local Binary Pattern methods. Features were also extracted with a local skewness pattern-based detailed histogram method. The feature vectors were then classified using linear and radial basis function kernel support vector machines (SVMs) and artificial neural networks (ANNs). Five-fold cross-validation was applied to the training and validation data. The threshold levels and window sizes giving the highest classification performance were determined for each feature extraction method, and the times required for feature extraction are tabulated. Using a fusion of LBP vectors computed with different radii and numbers of points as features and an ANN with two hidden layers as the classifier, an accuracy of 85.74% was obtained on the test data. The accuracies obtained are higher than the machine learning results in the literature and comparable to deep learning results.
MRI brain image classification using Linear Vector Quantization Classifier
Rao, R. R.
Pabboju, S.
Raju, A. R.
Cardiometry2022Journal Article, cited 0 times
REMBRANDT
MRI
Wavelet
Apart from metastases, there are no known lifestyle-related or environmental causes of brain tumors; the only factors that may increase brain cancer risk are exposure to high doses of ionizing radiation and a family history of brain disease. Brain cancer is a disorder in which cells form masses called tumors. Early diagnosis of brain cancer from Magnetic Resonance Imaging (MRI) scans is required to reduce the mortality rate. This work presents Dual-Tree M-band Wavelet Transform (DTMBWT)-based feature extraction and Linear Vector Quantization Classifier (LVQC)-based MRI brain image classification. DTMBWT decomposes the MRI brain images into frequency-domain sub-bands, from which fuzzy-based low and high components are used to evaluate the selected features. Sub-band Energy Features (SEF), ranked individually and as subsets, help classify normal and abnormal images, with the LVQC characterizing the output prediction. The results show a classification accuracy of 95% using DTMBWT-based SEF and the LVQC.
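Sub-band energy features of the kind the abstract describes can be illustrated with a standard 2-D discrete wavelet transform; the paper's Dual-Tree M-band transform is not available in common libraries, so PyWavelets' plain DWT serves here as an assumed stand-in, with placeholder data.

```python
# Minimal sketch: Sub-band Energy Features (SEF) from a 2-D wavelet transform.
import numpy as np
import pywt

def subband_energy_features(image, wavelet="db4", levels=3):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = [np.mean(coeffs[0] ** 2)]            # approximation energy
    for (cH, cV, cD) in coeffs[1:]:              # detail energies per level
        feats += [np.mean(c ** 2) for c in (cH, cV, cD)]
    return np.array(feats)

mri_slice = np.random.rand(256, 256)             # placeholder MRI slice
print(subband_energy_features(mri_slice))
```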
ResMLP_GGR: Residual Multilayer Perceptrons- Based Genotype-Guided Recurrence Prediction of Non-small Cell Lung Cancer
Ai, Yang
Li, Yinhao
Chen, Yen-Wei
Aonpong, Panyanat
Han, Xianhua
Journal of Image and Graphics2023Journal Article, cited 1 times
Website
NSCLC Radiogenomics
Deep Learning
Non-Small Cell Lung Cancer (NSCLC)
Predictive model
Radiogenomics
residual neural network
Algorithm Development
Imaging features
Non-small Cell Lung Cancer (NSCLC) is one of the malignant tumors with the highest morbidity and mortality. The postoperative recurrence rate in patients with NSCLC is high, which directly endangers the lives of patients. In recent years, many studies have used Computed Tomography (CT) images to predict NSCLC recurrence. Although this approach is inexpensive, it has low prediction accuracy. Gene expression data can achieve high accuracy. However, gene acquisition is expensive and invasive, and cannot meet the recurrence prediction requirements of all patients. In this study, a low-cost, high-accuracy residual multilayer perceptrons-based genotype-guided recurrence (ResMLP_GGR) prediction method is proposed that uses a gene estimation model to guide recurrence prediction. First, a gene estimation model is proposed to construct a mapping function of mixed features (handcrafted and deep features) and gene data to estimate the genetic information of tumor heterogeneity. Then, from gene estimation data obtained using a regression model, representations related to recurrence are learned to realize NSCLC recurrence prediction. In the testing phase, NSCLC recurrence prediction can be achieved with only CT images. The experimental results show that the proposed method has few parameters, strong generalization ability, and is suitable for small datasets. Compared with state-of-the-art methods, the proposed method significantly improves recurrence prediction accuracy by 3.39% with only 1% of parameters.
Automatic Pancreas Segmentation using A Novel Modified Semantic Deep Learning Bottom-Up Approach
Paithane, Pradip Mukundrao
Kakarwal, S. N.
International Journal of Intelligent Systems and Applications in Engineering2022Journal Article, cited 0 times
Website
Pancreas-CT
Deep Learning
Classification
Algorithm Development
Sharp and smooth pancreas segmentation is a crucial and arduous problem in medical image analysis and investigation. A semantic deep learning bottom-up approach is the most popular and efficient method for pancreas segmentation with smooth and sharp results. Automatic pancreas segmentation is performed through semantic segmentation of abdominal computed tomography (CT) clinical images. A novel semantic segmentation is applied for acute pancreas segmentation with CT images at different angles. The novel modified semantic approach uses 12 layers. The proposed model is executed on a dataset of single-phase CT images from 80 patients; 699 images are taken for training and 150 images for testing, at different angles. The proposed approach can be used to segment many organs from clinical CT scans with high accuracy. The "transposedConv2dLayer" layer is used for up-sampling and down-sampling, so the computation time is reduced relative to the state of the art. The Bfscore, Dice coefficient, and Jaccard coefficient are used to calculate similarity index values between the test image and the expected output image. The proposed approach achieved a Dice similarity index score of up to 81±7.43%. Class balancing is performed with class weights and data augmentation. The novel modified semantic segmentation uses max-pooling, ReLU, softmax, transposed conv2d, and dicePixelClassification layers; the dicePixelClassification layer is newly introduced and incorporated for improved results. The VGG-16, VGG-19, and ResNet-18 deep learning models are used for pancreas segmentation.
Lupsix: A Cascade Framework for Lung Parenchyma Segmentation in Axial CT Images
Koyuncu, Hasan
International Journal of Intelligent Systems and Applications in Engineering2018Journal Article, cited 0 times
Website
LIDC-IDRI
Segmentation
Fusion of CT and MR Liver Images by SURF-Based Registration
Aslan, Muhammet Fatih
Durdu, Akif
International Journal of Intelligent Systems and Applications in Engineering2019Journal Article, cited 3 times
Website
TCGA-LIHC
SVM-PUK Kernel Based MRI-brain Tumor Identification Using Texture and Gabor Wavelets
Chinnam, Siva
Sistla, Venkatramaphanikumar
Kolli, Venkata
Traitement du Signal2019Journal Article, cited 0 times
Website
Algorithm Development
Support Vector Machine (SVM)
BraTS
Segmentation
Brain
Examining the Validity of Input Lung CT Images Submitted to the AI-Based Computerized Diagnosis
Kosareva, Aleksandra A.
Paulenka, Dzmitry A.
Snezhko, Eduard V.
Bratchenko, Ivan A.
Kovalev, Vassili A.
Journal of Biomedical Photonics & Engineering2022Journal Article, cited 0 times
Website
LCTSC
LIDC-IDRI
Pancreas-CT
Head-Neck-PET-CT
ACRIN 6668
ACRIN-NSCLC-FDG-PET
Anti-PD-1_Lung
B-mode-and-CEUS-Liver
Prostate-MRI-US-Biopsy
Breast-MRI-NACT-Pilot
CPTAC-PDA
VICTRE
Classification
Convolutional Neural Network (CNN)
Deep Learning
Computer Aided Diagnosis (CADx)
Computed Tomography (CT)
A well-designed CAD tool should respond to input requests and user actions and perform input checks. An important element of such a tool is therefore the pre-processing of incoming data and the screening out of data that cannot be processed by the application. In this paper, we consider non-trivial methods for verifying chest computed tomography (CT) images: modality checks and human-chest checks. We review sources for developing training datasets, describe the architectures of the convolutional neural networks (CNNs), clarify the pre-processing and augmentation of chest CT scans, and show training results. The developed application showed good results: 100% classification accuracy on the test dataset for the modality check and 89% classification accuracy on the test dataset for checking the presence of lungs. Analysis of wrong predictions showed that the model performs poorly on lung biopsies. In general, the developed input data validation model shows good results on the designed datasets for the CT image modality check and for checking the presence of lungs.
Segmentation, tracking, and kinematics of lung parenchyma and lung tumors from 4D CT with application to radiation treatment planning
This thesis is concerned with development of techniques for efficient computerized analysis of 4-D CT data. The goal is to have a highly automated approach to segmentation of the lung boundary and lung nodules inside the lung. The determination of exact lung tumor location over space and time by image segmentation is an essential step to track thoracic malignancies. Accurate image segmentation helps clinical experts examine the anatomy and structure and determine the disease progress. Since 4-D CT provides structural and anatomical information during tidal breathing, we use the same data to also measure mechanical properties related to deformation of the lung tissue including Jacobian and strain at high resolutions and as a function of time. Radiation Treatment of patients with lung cancer can benefit from knowledge of these measures of regional ventilation. Graph-cuts techniques have been popular for image segmentation since they are able to treat highly textured data via robust global optimization, avoiding local minima in graph based optimization. The graph-cuts methods have been used to extract globally optimal boundaries from images by s/t cut, with energy function based on model-specific visual cues, and useful topological constraints. The method makes N-dimensional globally optimal segmentation possible with good computational efficiency. Even though the graph-cuts method can extract objects where there is a clear intensity difference, segmentation of organs or tumors pose a challenge. For organ segmentation, many segmentation methods using a shape prior have been proposed. However, in the case of lung tumors, the shape varies from patient to patient, and with location. In this thesis, we use a shape prior for tumors through a training step and PCA analysis based on the Active Shape Model (ASM). The method has been tested on real patient data from the Brown Cancer Center at the University of Louisville. We performed temporal B-spline deformable registration of the 4-D CT data - this yielded 3-D deformation fields between successive respiratory phases from which measures of regional lung function were determined. During the respiratory cycle, the lung volume changes and five different lobes of the lung (two in the left and three in the right lung) show different deformation yielding different strain and Jacobian maps. In this thesis, we determine the regional lung mechanics in the Lagrangian frame of reference through different respiratory phases, for example, Phase10 to 20, Phase10 to 30, Phase10 to 40, and Phase10 to 50. Single photon emission computed tomography (SPECT) lung imaging using radioactive tracers with SPECT ventilation and SPECT perfusion imaging also provides functional information. As part of an IRB-approved study therefore, we registered the max-inhale CT volume to both VSPECT and QSPECT data sets using the Demon's non-rigid registration algorithm in patient subjects. Subsequently, statistical correlation between CT ventilation images (Jacobian and strain values), with both VSPECT and QSPECT was undertaken. Through statistical analysis with the Spearman's rank correlation coefficient, we found that Jacobian values have the highest correlation with both VSPECT and QSPECT.
The Impact of Arterial Input Function Determination Variations on Prostate Dynamic Contrast-Enhanced Magnetic Resonance Imaging Pharmacokinetic Modeling: A Multicenter Data Analysis Challenge
Huang, Wei
Chen, Yiyi
Fedorov, Andriy
Li, Xia
Jajamovich, Guido H
Malyarenko, Dariya I
Aryal, Madhava P
LaViolette, Peter S
Oborski, Matthew J
O'Sullivan, Finbarr
Tomography: a journal for imaging research2016Journal Article, cited 21 times
Website
QIN PROSTATE
Automated Segmentation of Hyperintense Regions in FLAIR MRI Using Deep Learning
Korfiatis, Panagiotis
Kline, Timothy L
Erickson, Bradley J
Tomography2016Journal Article, cited 16 times
Website
BraTS
Magnetic Resonance Imaging (MRI)
FLAIR
Convolutional Neural Network (CNN)
Segmentation
We present a deep convolutional neural network application based on autoencoders aimed at segmentation of increased signal regions in fluid-attenuated inversion recovery magnetic resonance imaging images. The convolutional autoencoders were trained on the publicly available Brain Tumor Image Segmentation Benchmark (BRATS) data set, and the accuracy was evaluated on a data set where 3 expert segmentations were available. The simultaneous truth and performance level estimation (STAPLE) algorithm was used to provide the ground truth for comparison, and Dice coefficient, Jaccard coefficient, true positive fraction, and false negative fraction were calculated. The proposed technique was within the interobserver variability with respect to Dice, Jaccard, and true positive fraction. The developed method can be used to produce automatic segmentations of tumor regions corresponding to signal-increased fluid-attenuated inversion recovery regions.
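The overlap metrics this abstract reports (Dice, Jaccard, true positive fraction, false negative fraction) all reduce to simple set operations on binary masks. The sketch below is a generic illustration in NumPy, not the authors' evaluation code.

```python
# Minimal sketch: overlap metrics between a predicted and a reference mask.
import numpy as np

def overlap_metrics(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice = 2 * tp / (pred.sum() + truth.sum() + 1e-12)
    jaccard = tp / (np.logical_or(pred, truth).sum() + 1e-12)
    tpf = tp / (truth.sum() + 1e-12)          # true positive fraction
    fnf = fn / (truth.sum() + 1e-12)          # false negative fraction
    return dice, jaccard, tpf, fnf
```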
Radiomics of Lung Nodules: A Multi-Institutional Study of Robustness and Agreement of Quantitative Imaging Features
Kalpathy-Cramer, J.
Mamomov, A.
Zhao, B.
Lu, L.
Cherezov, D.
Napel, S.
Echegaray, S.
Rubin, D.
McNitt-Gray, M.
Lo, P.
Sieren, J. C.
Uthoff, J.
Dilger, S. K.
Driscoll, B.
Yeung, I.
Hadjiiski, L.
Cha, K.
Balagurunathan, Y.
Gillies, R.
Goldgof, D.
Tomography: a journal for imaging research2016Journal Article, cited 19 times
Website
Radiomics
QIN
LUNG
Segmentation
RIDER Lung CT
Phantom FDA
NSCLC Radiogenomics
LIDC-IDRI
Radiomic features
lung cancer
reproducibility
An Approach Toward Automatic Classification of Tumor Histopathology of Non–Small Cell Lung Cancer Based on Radiomic Features
Patil, Ravindra
Mahadevaiah, Geetha
Dekker, Andre
Tomography: a journal for imaging research2016Journal Article, cited 2 times
Website
NSCLC-Radiomics
lung cancer
tumor histopathology
Radiomics
Computational Challenges and Collaborative Projects in the NCI Quantitative Imaging Network
Farahani, Keyvan
Kalpathy-Cramer, Jayashree
Chenevert, Thomas L
Rubin, Daniel L
Sunderland, John J
Nordstrom, Robert J
Buatti, John
Hylton, Nola
Tomography2016Journal Article, cited 2 times
Website
Radiomics
Quantitative Imaging Network (QIN)
The Quantitative Imaging Network (QIN) of the National Cancer Institute (NCI) conducts research in the development and validation of imaging tools and methods for predicting and evaluating clinical response to cancer therapy. Members of the network examine various imaging and image assessment parameters through network-wide cooperative projects. To use the cooperative power of the network more effectively in conducting computational challenges (benchmarking of tools and methods) and collaborative projects (analytical assessment of imaging technologies), the QIN Challenge Task Force has developed policies and procedures that enhance the value of these activities by providing guidelines and leveraging NCI resources to help administer them and manage the dissemination of results. Challenges and Collaborative Projects (CCPs) are further divided into technical and clinical CCPs. As the first NCI network to engage in CCPs, we anticipate a variety of CCPs to be conducted by QIN teams in the coming years. These will aim to benchmark advanced software tools for clinical decision support, explore new imaging biomarkers for therapeutic assessment, and establish consensus on a range of methods and protocols in support of the use of quantitative imaging to predict and assess response to cancer therapy.
A Population-Based Digital Reference Object (DRO) for Optimizing Dynamic Susceptibility Contrast (DSC)-MRI Methods for Clinical Trials
Semmineh, Natenael B
Stokes, Ashley M
Bell, Laura C
Boxerman, Jerrold L
Quarles, C Chad
Tomography2017Journal Article, cited 5 times
Website
DSC-MRI
PHANTOM
simulated data
digital reference object (DRO)
QIN
Glioblastoma multiforme (GBM)
Algorithm Development
The standardization and broad-scale integration of dynamic susceptibility contrast (DSC)-magnetic resonance imaging (MRI) have been confounded by a lack of consensus on DSC-MRI methodology for preventing potential relative cerebral blood volume inaccuracies, including the choice of acquisition protocols and postprocessing algorithms. Therefore, we developed a digital reference object (DRO), using physiological and kinetic parameters derived from in vivo data, unique voxel-wise 3-dimensional tissue structures, and a validated MRI signal computational approach, aimed at validating image acquisition and analysis methods for accurately measuring relative cerebral blood volume in glioblastomas. To achieve DSC-MRI signals representative of the temporal characteristics, magnitude, and distribution of contrast agent-induced T1 and T2* changes observed across multiple glioblastomas, the DRO's input parameters were trained using DSC-MRI data from 23 glioblastomas (>40 000 voxels). The DRO's ability to produce reliable signals for combinations of pulse sequence parameters and contrast agent dosing schemes unlike those in the training data set was validated by comparison with in vivo dual-echo DSC-MRI data acquired in a separate cohort of patients with glioblastomas. Representative applications of the DRO are presented, including the selection of DSC-MRI acquisition and postprocessing methods that optimize CBV accuracy, determination of the impact of DSC-MRI methodology choices on sample size requirements, and the assessment of treatment response in clinical glioblastoma trials.
[18F] FDG Positron Emission Tomography (PET) Tumor and Penumbra Imaging Features Predict Recurrence in Non-Small Cell Lung Cancer
Mattonen, Sarah A.
Davidzon, Guido A.
Bakr, Shaimaa
Echegaray, Sebastian
Leung, Ann N. C.
Vasanawala, Minal
Horng, George
Napel, Sandy
Nair, Viswam S.
Tomography (Ann Arbor, Mich.)2019Journal Article, cited 0 times
Website
Radiomics
PET
Non-small cell lung cancer (NSCLC)
We identified computational imaging features on 18F-fluorodeoxyglucose positron emission tomography (PET) that predict recurrence/progression in non-small cell lung cancer (NSCLC). We retrospectively identified 291 patients with NSCLC from 2 prospectively acquired cohorts (training, n = 145; validation, n = 146). We contoured the metabolic tumor volume (MTV) on all pretreatment PET images and added a 3-dimensional penumbra region that extended outward 1 cm from the tumor surface. We generated 512 radiomics features, selected 435 features based on robustness to contour variations, and then applied randomized sparse regression (LASSO) to identify features that predicted time to recurrence in the training cohort. We built Cox proportional hazards models in the training cohort and independently evaluated the models in the validation cohort. Two features including stage and a MTV plus penumbra texture feature were selected by LASSO. Both features were significant univariate predictors, with stage being the best predictor (hazard ratio [HR] = 2.15 [95% confidence interval (CI): 1.56-2.95], P < .001). However, adding the MTV plus penumbra texture feature to stage significantly improved prediction (P = .006). This multivariate model was a significant predictor of time to recurrence in the training cohort (concordance = 0.74 [95% CI: 0.66-0.81], P < .001) that was validated in a separate validation cohort (concordance = 0.74 [95% CI: 0.67-0.81], P < .001). A combined radiomics and clinical model improved NSCLC recurrence prediction. FDG PET radiomic features may be useful biomarkers for lung cancer prognosis and add clinical utility for risk stratification.
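The abstract describes LASSO-based feature selection followed by Cox proportional hazards modeling of time to recurrence. The sketch below compresses those two steps into a single L1-penalized Cox fit with the lifelines package; the column names and data are placeholders, not the study's variables.

```python
# Minimal sketch: L1-penalized Cox model on radiomic features.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((145, 5)),
                  columns=[f"texture_{i}" for i in range(4)] + ["stage"])
df["time_to_recurrence"] = rng.exponential(24, 145)   # months (placeholder)
df["recurred"] = rng.integers(0, 2, 145)

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)        # LASSO-style penalty
cph.fit(df, duration_col="time_to_recurrence", event_col="recurred")
print(cph.concordance_index_)                         # concordance, as reported
cph.print_summary()
```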
A Fully Automated Deep Learning Network for Brain Tumor Segmentation
Bangalore Yogananda, C. G.
Shah, B. R.
Vejdani-Jahromi, M.
Nalawade, S. S.
Murugesan, G. K.
Yu, F. F.
Pinho, M. C.
Wagner, B. C.
Emblem, K. E.
Bjornerud, A.
Fei, B.
Madhuranthakam, A. J.
Maldjian, J. A.
Tomography2020Journal Article, cited 40 times
Website
BraTS 2018
BraTS 2017
*Deep Learning
Humans
Image Processing
Computer-Assisted
Magnetic Resonance Imaging (MRI)
Segmentation
Convolutional Neural Network (CNN)
Dense U-Net
Machine learning
We developed a fully automated method for brain tumor segmentation using deep learning; 285 brain tumor cases with multiparametric magnetic resonance images from the BraTS2018 data set were used. We designed 3 separate 3D-Dense-UNets to simplify the complex multiclass segmentation problem into individual binary-segmentation problems for each subcomponent. We implemented a 3-fold cross-validation to generalize the network's performance. The mean cross-validation Dice-scores for whole tumor (WT), tumor core (TC), and enhancing tumor (ET) segmentations were 0.92, 0.84, and 0.80, respectively. We then retrained the individual binary-segmentation networks using 265 of the 285 cases, with 20 cases held-out for testing. We also tested the network on 46 cases from the BraTS2017 validation data set, 66 cases from the BraTS2018 validation data set, and 52 cases from an independent clinical data set. The average Dice-scores for WT, TC, and ET were 0.90, 0.84, and 0.80, respectively, on the 20 held-out testing cases. The average Dice-scores for WT, TC, and ET on the BraTS2017 validation data set, the BraTS2018 validation data set, and the clinical data set were as follows: 0.90, 0.80, and 0.78; 0.90, 0.82, and 0.80; and 0.85, 0.80, and 0.77, respectively. A fully automated deep learning method was developed to segment brain tumors into their subcomponents, which achieved high prediction accuracy on the BraTS data set and on the independent clinical data set. This method is promising for implementation into a clinical workflow.
Stanford DRO Toolkit: Digital Reference Objects for Standardization of Radiomic Features
Jaggi, Akshay
Mattonen, Sarah A.
McNitt-Gray, Michael
Napel, Sandy
Tomography2020Journal Article, cited 0 times
CC-Radiomics-Phantom
DRO-Toolkit
Several institutions have developed image feature extraction software to compute quantitative descriptors of medical images for radiomics analyses. With radiomics increasingly proposed for use in research and clinical contexts, new techniques are necessary for standardizing and replicating radiomics findings across software implementations. We have developed a software toolkit for the creation of 3D digital reference objects with customizable size, shape, intensity, texture, and margin sharpness values. Using user-supplied input parameters, these objects are defined mathematically as continuous functions, discretized, and then saved as DICOM objects. Here, we present the definition of these objects, parameterized derivations of a subset of their radiomics values, computer code for object generation, example use cases, and a user-downloadable sample collection used for the examples cited in this paper.
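A digital reference object of the kind described, an analytic shape with tunable size, intensity, and margin sharpness, discretized on a voxel grid, can be sketched as below. The parameter names are illustrative assumptions, not the toolkit's actual API.

```python
# Minimal sketch: a spherical DRO with a sigmoid margin on a voxel grid.
import numpy as np

def sphere_dro(shape=(64, 64, 64), radius=20.0, intensity=100.0, sharpness=2.0):
    z, y, x = np.indices(shape, dtype=float)
    c = [(s - 1) / 2.0 for s in shape]
    r = np.sqrt((z - c[0])**2 + (y - c[1])**2 + (x - c[2])**2)
    # sigmoid margin: sharpness controls how abruptly intensity falls off
    return intensity / (1.0 + np.exp((r - radius) / sharpness))

vol = sphere_dro()
print(vol.shape, vol.max())
```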
Standardization in Quantitative Imaging: A Multicenter Comparison of Radiomic Features from Different Software Packages on Digital Reference Objects and Patient Data Sets
McNitt-Gray, M.
Napel, S.
Jaggi, A.
Mattonen, S.A.
Hadjiiski, L.
Muzi, M.
Goldgof, D.
Balagurunathan, Y.
Pierce, L.A.
Kinahan, P.E.
Jones, E.F.
Nguyen, A.
Virkud, A.
Chan, H.P.
Emaminejad, N.
Wahi-Anwar, M.
Daly, M.
Abdalah, M.
Yang, H.
Lu, L.
Lv, W.
Rahmim, A.
Gastounioti, A.
Pati, S.
Bakas, S.
Kontos, D.
Zhao, B.
Kalpathy-Cramer, J.
Farahani, K.
Tomography2020Journal Article, cited 0 times
Radiomic-Feature-Standards
Radiomic features are being increasingly studied for clinical applications. We aimed to assess the agreement among radiomic features when computed by several groups by using different software packages under very tightly controlled conditions, which included standardized feature definitions and common image data sets. Ten sites (9 from the NCI's Quantitative Imaging Network positron emission tomography-computed tomography working group, plus 1 site from outside that group) participated in this project. Nine common quantitative imaging features were selected for comparison including features that describe morphology, intensity, shape, and texture. The common image data sets were: three 3D digital reference objects (DROs) and 10 patient image scans from the Lung Image Database Consortium data set using a specific lesion in each scan. Each object (DRO or lesion) was accompanied by an already-defined volume of interest, from which the features were calculated. Feature values for each object (DRO or lesion) were reported. The coefficient of variation (CV), expressed as a percentage, was calculated across software packages for each feature on each object. Thirteen sets of results were obtained for the DROs and patient data sets. Five of the 9 features showed excellent agreement with CV < 1%; 1 feature had moderate agreement (CV < 10%), and 3 features had larger variations (CV ≥ 10%) even after attempts at harmonization of feature calculations. This work highlights the value of feature definition standardization as well as the need to further clarify definitions for some features.
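The agreement statistic used here, the percent coefficient of variation of one feature value across software packages, is a one-liner; the sketch below uses placeholder values, not the study's measurements.

```python
# Minimal sketch: percent CV of a feature value across software packages.
import numpy as np

def percent_cv(values):
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# e.g., "sphericity" of one DRO as reported by several packages (placeholder)
print(f"{percent_cv([0.92, 0.93, 0.92, 0.91, 0.93]):.2f}%")
```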
Multisite Technical and Clinical Performance Evaluation of Quantitative Imaging Biomarkers from 3D FDG PET Segmentations of Head and Neck Cancer Images
Smith, Brian J
Buatti, John M
Bauer, Christian
Ulrich, Ethan J
Ahmadvand, Payam
Budzevich, Mikalai M
Gillies, Robert J
Goldgof, Dmitry
Grkovski, Milan
Hamarneh, Ghassan
Kinahan, Paul E
Muzi, John P
Muzi, Mark
Laymon, Charles M
Mountz, James M
Nehmeh, Sadek
Oborski, Matthew J
Zhao, Binsheng
Sunderland, John J
Beichel, Reinhard R
Tomography2020Journal Article, cited 1 times
Website
QIN-HEADNECK
Radiomics
Segmentation
Quantitative imaging biomarkers (QIBs) provide medical image-derived intensity, texture, shape, and size features that may help characterize cancerous tumors and predict clinical outcomes. Successful clinical translation of QIBs depends on the robustness of their measurements. Biomarkers derived from positron emission tomography images are prone to measurement errors owing to differences in image processing factors such as the tumor segmentation method used to define volumes of interest over which to calculate QIBs. We illustrate a new Bayesian statistical approach to characterize the robustness of QIBs to different processing factors. Study data consist of 22 QIBs measured on 47 head and neck tumors in 10 positron emission tomography/computed tomography scans segmented manually and with semiautomated methods used by 7 institutional members of the NCI Quantitative Imaging Network. QIB performance is estimated and compared across institutions with respect to measurement errors and power to recover statistical associations with clinical outcomes. Analysis findings summarize the performance impact of different segmentation methods used by Quantitative Imaging Network members. Robustness of some advanced biomarkers was found to be similar to conventional markers, such as maximum standardized uptake value. Such similarities support current pursuits to better characterize disease and predict outcomes by developing QIBs that use more imaging information and are robust to different processing factors. Nevertheless, to ensure reproducibility of QIB measurements and measures of association with clinical outcomes, errors owing to segmentation methods need to be reduced.
Efficient CT Image Reconstruction in a GPU Parallel Environment
Valencia Pérez, Tomas A
Hernández López, Javier M
Moreno-Barbosa, Eduardo
de Celis Alonso, Benito
Palomino Merino, Martin R
Castaño Meneses, Victor M
Tomography2020Journal Article, cited 0 times
TCGA-SARC
Computed tomography
Computed tomography is nowadays an indispensable tool in medicine, used to diagnose multiple diseases. In clinical and emergency room environments, the speed of acquisition and information processing is crucial. CUDA is a software architecture used to work with NVIDIA graphics processing units. In this paper, a methodology is presented to accelerate tomographic image reconstruction based on the iterative maximum likelihood expectation maximization (MLEM) algorithm combined with graphics processing units programmed in the CUDA framework. The implementations developed here are used to reconstruct images of clinical use. Timewise, the parallel versions showed improvement with respect to serial implementations; these differences reached, in some cases, 2 orders of magnitude in time while preserving image quality. The image quality and reconstruction times were not affected significantly by the addition of Poisson noise to the projections. Furthermore, our implementations showed good performance when compared with reconstruction methods provided by commercial software. One of the goals of this work was to provide a fast, portable, simple, and cheap image reconstruction system, and our results support the statement that this goal was achieved.
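The MLEM update the paper accelerates on GPU can be written compactly; the sketch below is a plain NumPy reference version with a dense toy system matrix, assuming nothing about the paper's CUDA kernels.

```python
# Minimal sketch: MLEM update x_{k+1} = x_k / (A^T 1) * A^T (y / (A x_k)).
import numpy as np

def mlem(A, y, n_iters=50):
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iters):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

A = np.abs(np.random.rand(64, 32))            # toy projector (placeholder)
x_true = np.random.rand(32)
x_rec = mlem(A, A @ x_true)                   # reconstruct from noiseless data
```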
Evaluating the Use of rCBV as a Tumor Grade and Treatment Response Classifier Across NCI Quantitative Imaging Network Sites: Part II of the DSC-MRI Digital Reference Object (DRO) Challenge
Bell, Laura C
Semmineh, Natenael
An, Hongyu
Eldeniz, Cihat
Wahl, Richard
Schmainda, Kathleen M
Prah, Melissa A
Erickson, Bradley J
Korfiatis, Panagiotis
Wu, Chengyue
Sorace, Anna G
Yankeelov, Thomas E
Rutledge, Neal
Chenevert, Thomas L
Malyarenko, Dariya
Liu, Yichu
Brenner, Andrew
Hu, Leland S
Zhou, Yuxiang
Boxerman, Jerrold L
Yen, Yi-Fen
Kalpathy-Cramer, Jayashree
Beers, Andrew L
Muzi, Mark
Madhuranthakam, Ananth J
Pinho, Marco
Johnson, Brian
Quarles, C Chad
Tomography2020Journal Article, cited 1 times
Website
QIN-BRAIN-DSC-MRI
Classification
BRAIN
We have previously characterized the reproducibility of brain tumor relative cerebral blood volume (rCBV) using a dynamic susceptibility contrast magnetic resonance imaging digital reference object across 12 sites using a range of imaging protocols and software platforms. As expected, reproducibility was highest when imaging protocols and software were consistent, but decreased when they were variable. Our goal in this study was to determine the impact of rCBV reproducibility for tumor grade and treatment response classification. We found that varying imaging protocols and software platforms produced a range of optimal thresholds for both tumor grading and treatment response, but the performance of these thresholds was similar. These findings further underscore the importance of standardizing acquisition and analysis protocols across sites and software benchmarking.
Convolutional Neural Networks for Multi-scale Lung Nodule Classification in CT: Influence of Hyperparameter Tuning on Performance
Hernández-Rodríguez, Jorge
Cabrero-Fraile, Francisco-Javier
Rodríguez-Conde, María-José
TEM Journal2022Journal Article, cited 0 times
Website
LIDC-IDRI
SPIE-AAPM Lung CT Challenge
Algorithm Development
Computer Aided Detection (CADe)
Computed Tomography (CT)
LUNG
In this study, a system based on Convolutional Neural Networks for differentiating lung nodules and non-nodules in Computed Tomography is developed. Multi-scale patches, extracted from the LIDC-IDRI database, are used to train different CNN models. Adjustable hyperparameters are modified sequentially to study their influence, evaluate the learning process, and find the best-performing network for each size. Classification accuracies obtained are above 87% for all sizes, with areas under the Receiver Operating Characteristic curve in the interval (0.936-0.951). Trained models are tested with nodules from an independent database, providing sensitivities above 96%. Performance of the trained models is similar to that in other published articles and shows good classification capacity. As a basis for developing CAD systems, recommendations regarding hyperparameter tuning are provided.
RGU-Net: Computationally Efficient U-Net for Automated Brain Extraction of mpMRI with Presence of Glioblastoma
Brain extraction refers to the process of removing non-brain tissues in brain scans and is one of the initial pre-processing procedures in neuroimage analysis. Since errors produced during this process can be challenging to amend in subsequent analyses, accurate brain extraction is crucial. Most deep learning-based brain extraction models are optimised for performance, leading to computationally expensive models. Such models may be ideal for research; however, they are not ideal in a clinical setting. In this work, we propose a new computationally efficient 2D brain extraction model, named RGU-Net. RGU-Net incorporates Ghost modules and residual paths to accurately extract features and reduce computational cost. Our results show that RGU-Net has 98.26% fewer parameters compared to the original U-Net model, whilst yielding state-of-the-art performance of 97.97 ± 0.84% Dice similarity coefficient. Faster run time was also observed on CPUs, which illustrates the model's practicality in real-world applications.
Radiomics-based prediction of survival in patients with head and neck squamous cell carcinoma based on pre- and post-treatment (18)F-PET/CT
Liu, Z.
Cao, Y.
Diao, W.
Cheng, Y.
Jia, Z.
Peng, X.
Aging (Albany NY)2020Journal Article, cited 0 times
Website
HNSCC
Radiomics
HEAD AND NECK
Positron Emission Tomography (PET)
Computed Tomography (CT)
Classification
BACKGROUND: 18F-fluorodeoxyglucose positron emission tomography/computed tomography ((18)F-PET/CT) has been widely applied for the imaging of head and neck squamous cell carcinoma (HNSCC). This study examined whether pre- and post-treatment (18)F-PET/CT features can help predict the survival of HNSCC patients. RESULTS: Three radiomics features were identified as prognostic factors. A radiomics score calculated from these features significantly predicted overall survival (OS) and disease-free survival (DFS). The nomograms combining clinicopathological characteristics with pre- or post-treatment features showed better ROC curves and decision curves than the nomogram based only on clinicopathological characteristics. CONCLUSIONS: Combining clinicopathological characteristics with radiomics features of pre-treatment PET/CT, or with post-treatment PET/CT assessment of primary tumor sites as positive or negative, may substantially improve prediction of OS and DFS of HNSCC patients. METHODS: 171 patients with HNSCC who received pre-treatment (18)F-PET/CT scans and 154 patients who received post-treatment (18)F-PET/CT scans in The Cancer Imaging Archive (TCIA) were included. Nomograms that combined clinicopathological features with either pre-treatment PET/CT radiomics features or post-treatment assessment of primary tumor sites were constructed using data from 154 HNSCC patients. Receiver operating characteristic (ROC) curves and decision curves were used to compare the predictions of these models with those of a model incorporating only clinicopathological features.
Multiple-response regression analysis links magnetic resonance imaging features to de-regulated protein expression and pathway activity in lower grade glioma
Lehrer, Michael
Bhadra, Anindya
Ravikumar, Visweswaran
Chen, James Y
Wintermark, Max
Hwang, Scott N
Holder, Chad A
Huang, Erich P
Fevrier-Sullivan, Brenda
Freymann, John B
Rao, Arvind
Oncoscience2017Journal Article, cited 1 times
Website
TCGA-LGG
VASARI
Radiogenomics
cBioPortal
imaging-proteomics analysis
signaling pathway activity
multiple-response regression
Radiomics
Lower-grade glioma (LGG)
BACKGROUND AND PURPOSE: Lower grade gliomas (LGGs), lesions of WHO grades II and III, comprise 10-15% of primary brain tumors. In this first-of-a-kind study, we aim to carry out a radioproteomic characterization of LGGs using proteomics data from the TCGA and imaging data from the TCIA cohorts, to obtain an association between tumor MRI characteristics and protein measurements. The availability of linked imaging and molecular data permits the assessment of relationships between tumor genomic/proteomic measurements with phenotypic features. MATERIALS AND METHODS: Multiple-response regression of the image-derived, radiologist scored features with reverse-phase protein array (RPPA) expression levels generated correlation coefficients for each combination of image-feature and protein or phospho-protein in the RPPA dataset. Significantly-associated proteins for VASARI features were analyzed with Ingenuity Pathway Analysis software. Hierarchical clustering of the results of the pathway analysis was used to determine which feature groups were most strongly correlated with pathway activity and cellular functions. RESULTS: The multiple-response regression approach identified multiple proteins associated with each VASARI imaging feature. VASARI features were found to be correlated with expression of IL8, PTEN, PI3K/Akt, Neuregulin, ERK/MAPK, p70S6K and EGF signaling pathways. CONCLUSION: Radioproteomics analysis might enable an insight into the phenotypic consequences of molecular aberrations in LGGs.
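Multiple-response regression of imaging features against protein expression, as described in the methods, amounts to fitting one regression per protein over the same feature matrix. The sketch below is a generic multi-output linear regression in scikit-learn with placeholder dimensions, not the authors' statistical pipeline.

```python
# Minimal sketch: multiple-response regression of VASARI features vs RPPA.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.random((80, 25))         # radiologist-scored VASARI features (placeholder)
Y = rng.random((80, 200))        # RPPA expression of 200 (phospho-)proteins

model = LinearRegression().fit(X, Y)
coef = model.coef_               # shape (200, 25): one coefficient row per protein
print(coef.shape)
```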
High-dimensional regression analysis links magnetic resonance imaging features and protein expression and signaling pathway alterations in breast invasive carcinoma
Lehrer, M.
Bhadra, A.
Aithala, S.
Ravikumar, V.
Zheng, Y.
Dogan, B.
Bonaccio, E.
Burnside, E. S.
Morris, E.
Sutton, E.
Whitman, G. J.
Net, J.
Brandt, K.
Ganott, M.
Zuley, M.
Rao, A.
Tcga Breast Phenotype Research Group
Oncoscience2018Journal Article, cited 0 times
Website
TCGA-BRCA
MRI
Radiogenomics
breast invasive carcinoma
protein expression
signaling pathway analysis
Background: Imaging features derived from MRI scans can be used for not only breast cancer detection and measuring disease extent, but can also determine gene expression and patient outcomes. The relationships between imaging features, gene/protein expression, and response to therapy hold potential to guide personalized medicine. We aim to characterize the relationship between radiologist-annotated tumor phenotypic features (based on MRI) and the underlying biological processes (based on proteomic profiling) in the tumor. Methods: Multiple-response regression of the image-derived, radiologist-scored features with reverse-phase protein array expression levels generated association coefficients for each combination of image-feature and protein in the RPPA dataset. Significantly-associated proteins for features were analyzed with Ingenuity Pathway Analysis software. Hierarchical clustering of the results of the pathway analysis determined which features were most strongly correlated with pathway activity and cellular functions. Results: Each of the twenty-nine imaging features was found to have a set of significantly correlated molecules, associated biological functions, and pathways. Conclusions: We interrogated the pathway alterations represented by the protein expression associated with each imaging feature. Our study demonstrates the relationships between biological processes (via proteomic measurements) and MRI features within breast tumors.
Identifying molecular genetic features and oncogenic pathways of clear cell renal cell carcinoma through the anatomical (PADUA) scoring system
Zhu, H
Chen, H
Lin, Z
Shi, G
Lin, X
Wu, Z
Zhang, X
Zhang, X
Oncotarget2016Journal Article, cited 3 times
Website
TCGA-KIRC
PADUA scoring system
Clear cell renal cell carcinoma (ccRCC)
KIDNEY
Computed Tomography (CT)
Although the preoperative aspects and dimensions used for the PADUA scoring system were successfully applied in macroscopic clinical practice for renal tumor, the relevant molecular genetic basis remained unclear. To uncover meaningful correlations between the genetic aberrations and radiological features, we enrolled 112 patients with clear cell renal cell carcinoma (ccRCC) whose clinicopathological data, genomics data and CT data were obtained from The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA). Overall PADUA score and several radiological features included in the PADUA system were assigned for each ccRCC. Despite having observed no significant association between the gene mutation frequency and the overall PADUA score, correlations between gene mutations and a few radiological features (tumor rim location and tumor size) were identified. A significant association between rim location and miRNA molecular subtypes was also observed. Survival analysis revealed that tumor size > 7 cm was significantly associated with poor survival. In addition, Gene Set Enrichment Analysis (GSEA) on mRNA expression revealed that the high PADUA score was related to numerous cancer-related networks, especially epithelial to mesenchymal transition (EMT) related pathways. This preliminary analysis of ccRCC revealed meaningful correlations between PADUA anatomical features and molecular basis including genomic aberrations and molecular subtypes.
Differential localization of glioblastoma subtype: implications on glioblastoma pathogenesis
Steed, Tyler C
Treiber, Jeffrey M
Patel, Kunal
Ramakrishnan, Valya
Merk, Alexander
Smith, Amanda R
Carter, Bob S
Dale, Anders M
Chow, LM
Chen, Clark C
Oncotarget2016Journal Article, cited 8 times
Website
TCGA-GBM
Magnetic Resonance Imaging (MRI)
BRAIN
Glioblastoma
REMBRANDT
INTRODUCTION: The subventricular zone (SVZ) has been implicated in the pathogenesis of glioblastoma. Whether molecular subtypes of glioblastoma arise from unique niches of the brain relative to the SVZ remains largely unknown. Here, we tested whether these subtypes of glioblastoma occupy distinct regions of the cerebrum and examined glioblastoma localization in relation to the SVZ. METHODS: Pre-operative MR images from 217 glioblastoma patients from The Cancer Imaging Archive were segmented automatically into contrast enhancing (CE) tumor volumes using Iterative Probabilistic Voxel Labeling (IPVL). Probabilistic maps of tumor location were generated for each subtype and distances were calculated from the centroid of CE tumor volumes to the SVZ. Glioblastomas that arose in a Genetically Modified Murine Model (GEMM) model were also analyzed with regard to SVZ distance and molecular subtype. RESULTS: Classical and mesenchymal glioblastomas were more diffusely distributed and located farther from the SVZ. In contrast, proneural and neural glioblastomas were more likely to be located in closer proximity to the SVZ. Moreover, in a GFAP-CreER; PtenloxP/loxP; Trp53loxP/loxP; Rb1loxP/loxP; Rbl1-/- GEMM model of glioblastoma where tumor can spontaneously arise in different regions of the cerebrum, tumors that arose near the SVZ were more likely to be of proneural subtype (p < 0.0001). CONCLUSIONS: Glioblastoma subtypes occupy different regions of the brain and vary in proximity to the SVZ. These findings harbor implications pertaining to the pathogenesis of glioblastoma subtypes.
Identification of biomarkers for pseudo and true progression of GBM based on radiogenomics study
Qian, Xiaohua
Tan, Hua
Zhang, Jian
Liu, Keqin
Yang, Tielin
Wang, Maode
Debinskie, Waldemar
Zhao, Weilin
Chan, Michael D
Zhou, Xiaobo
Oncotarget2016Journal Article, cited 8 times
Website
TCGA-GBM
Radiogenomics
The diagnosis for pseudoprogression (PsP) and true tumor progression (TTP) of GBM is a challenging task in clinical practices. The purpose of this study is to identify potential genetic biomarkers associated with PsP and TTP based on the clinical records, longitudinal imaging features, and genomics data. We are the first to introduce the radiogenomics approach to identify candidate genes for PsP and TTP of GBM. Specifically, a novel longitudinal sparse regression model was developed to construct the relationship between gene expression and imaging features. The imaging features were extracted from tumors along the longitudinal MRI and provided diagnostic information of PsP and TTP. The 33 candidate genes were selected based on their association with the imaging features, reflecting their relation with the development of PsP and TTP. We then conducted biological relevance analysis for 33 candidate genes to identify the potential biomarkers, i.e., Interferon regulatory factor (IRF9) and X-ray repair cross-complementing gene (XRCC1), which were involved in the cancer suppression and prevention, respectively. The IRF9 and XRCC1 were further independently validated in the TCGA data. Our results provided the first substantial evidence that IRF9 and XRCC1 can serve as the potential biomarkers for the development of PsP and TTP.
Association between tumor architecture derived from generalized Q-space MRI and survival in glioblastoma
Taylor, Erik N
Ding, Yao
Zhu, Shan
Cheah, Eric
Alexander, Phillip
Lin, Leon
Aninwene II, George E
Hoffman, Matthew P
Mahajan, Anita
Mohamed, Abdallah SR
Oncotarget2017Journal Article, cited 0 times
Website
TCGA-GBM
Radiomics
glioma
diffusion weighted MRI
Diffusion Weighted Imaging
Contrast enhancement
Magnetic Resonance Imaging (MRI)
While it is recognized that the overall resistance of glioblastoma to treatment may be related to intra-tumor patterns of structural heterogeneity, imaging methods to assess such patterns remain rudimentary. Methods: We utilized a generalized Q-space imaging (GQI) algorithm to analyze magnetic resonance imaging (MRI) derived from a rodent model of glioblastoma and 2 clinical datasets to correlate GQI, histology, and survival. Results: In a rodent glioblastoma model, GQI demonstrated a poorly coherent core region, consisting of diffusion tracts < 5 mm, surrounded by a shell of highly coherent diffusion tracts, 6-25 mm. Histologically, the core region possessed a high degree of necrosis, whereas the shell consisted of organized sheets of anaplastic cells with elevated mitotic index. These attributes define tumor architecture as the macroscopic organization of variably aligned tumor cells. Applied to MRI data from The Cancer Imaging Atlas (TCGA), the core-shell diffusion tract-length ratio (c/s ratio) correlated linearly with necrosis, which, in turn, was inversely associated with survival (p = 0.00002). We confirmed in an independent cohort of patients (n = 62) that the c/s ratio correlated inversely with survival (p = 0.0004). Conclusions: The analysis of MR images by GQI affords insight into tumor architectural patterns in glioblastoma that correlate with biological heterogeneity and clinical outcome.
Tumor image-derived texture features are associated with CD3 T-cell infiltration status in glioblastoma
Narang, Shivali
Kim, Donnie
Aithala, Sathvik
Heimberger, Amy B
Ahmed, Salmaan
Rao, Dinesh
Rao, Ganesh
Rao, Arvind
Oncotarget2017Journal Article, cited 1 times
Website
glioma
imaging-genomics analysis
texture analysis
immune activity
TCGA-LGG
Predicting survival time of lung cancer patients using radiomic analysis
Chaddad, Ahmad
Desrosiers, Christian
Toews, Matthew
Abdulkarim, Bassam
Oncotarget2017Journal Article, cited 4 times
Website
Radiomics
LUNG
Non Small Cell Lung Cancer (NSCLC)
Computed Tomography (CT)
Classification
Computer Assisted Diagnosis (CAD)
Objectives: This study investigates the prediction of Non-small cell lung cancer (NSCLC) patient survival outcomes based on radiomic texture and shape features automatically extracted from tumor image data. Materials and Methods: Retrospective analysis involves CT scans of 315 NSCLC patients from The Cancer Imaging Archive (TCIA). A total of 24 image features are computed from labeled tumor volumes of patients within groups defined using NSCLC subtype and TNM staging information. Spearman's rank correlation, Kaplan-Meier estimation and log-rank tests were used to identify features related to long/short NSCLC patient survival groups. Automatic random forest classification was used to predict patient survival group from multivariate feature data. Significance is assessed at P < 0.05 following Holm-Bonferroni correction for multiple comparisons. Results: Significant correlations between radiomic features and survival were observed for four clinical groups: (group, [absolute correlation range]): (large cell carcinoma (LCC) [0.35, 0.43]), (tumor size T2, [0.31, 0.39]), (non lymph node metastasis N0, [0.3, 0.33]), (TNM stage I, [0.39, 0.48]). Significant log-rank relationships between features and survival time were observed for three clinical groups: (group, hazard ratio): (LCC, 3.0), (LCC, 3.9), (T2, 2.5) and (stage I, 2.9). Automatic survival prediction performance (i.e. below/above median) is superior for combined radiomic features with age-TNM in comparison to standard TNM clinical staging information (clinical group, mean area-under-the-ROC-curve (AUC)): (LCC, 75.73%), (N0, 70.33%), (T2, 70.28%) and (TNM-I, 76.17%). Conclusion: Quantitative lung CT imaging features can be used as indicators of survival, in particular for patients with large-cell-carcinoma (LCC), primary-tumor-sizes (T2) and no lymph-node-metastasis (N0).
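The Kaplan-Meier and log-rank comparison of survival groups described above follows a standard pattern with lifelines; the median-split threshold and all data below are placeholder assumptions, not the study's analysis.

```python
# Minimal sketch: split patients on a radiomic feature, compare survival.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
feature = rng.random(100)                      # one radiomic feature
time = rng.exponential(20, 100)                # survival time in months
event = rng.integers(0, 2, 100)                # 1 = death observed

high = feature > np.median(feature)            # split at the median
res = logrank_test(time[high], time[~high], event[high], event[~high])
print("log-rank p-value:", res.p_value)

km = KaplanMeierFitter().fit(time[high], event[high], label="high")
print(km.median_survival_time_)
```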
Spatial habitats from multiparametric MR imaging are associated with signaling pathway activities and survival in glioblastoma
Dextraze, Katherine
Saha, Abhijoy
Kim, Donnie
Narang, Shivali
Lehrer, Michael
Rao, Anita
Narang, Saphal
Rao, Dinesh
Ahmed, Salmaan
Madhugiri, Venkatesh
Fuller, Clifton David
Kim, Michelle M
Krishnan, Sunil
Rao, Ganesh
Rao, Arvind
Oncotarget2017Journal Article, cited 0 times
Website
Radiomics
Glioblastoma Multiforme (GBM)
TCGA-GBM
Glioblastoma (GBM) show significant inter- and intra-tumoral heterogeneity, impacting response to treatment and overall survival time of 12-15 months. To study glioblastoma phenotypic heterogeneity, multi-parametric magnetic resonance images (MRI) of 85 glioblastoma patients from The Cancer Genome Atlas were analyzed to characterize tumor-derived spatial habitats for their relationship with outcome (overall survival) and to identify their molecular correlates (i.e., determine associated tumor signaling pathways correlated with imaging-derived habitat measurements). Tumor sub-regions based on four sequences (fluid attenuated inversion recovery, T1-weighted, post-contrast T1-weighted, and T2-weighted) were defined by automated segmentation. From relative intensity of pixels in the 3-dimensional tumor region, "imaging habitats" were identified and analyzed for their association to clinical and genetic data using survival modeling and Dirichlet regression, respectively. Sixteen distinct tumor sub-regions ("spatial imaging habitats") were derived, and those associated with overall survival (denoted "relevant" habitats) in glioblastoma patients were identified. Dirichlet regression implicated each relevant habitat with unique pathway alterations. Relevant habitats also had some pathways and cellular processes in common, including phosphorylation of STAT-1 and natural killer cell activity, consistent with cancer hallmarks. This work revealed clinical relevance of MRI-derived spatial habitats and their relationship with oncogenic molecular mechanisms in patients with GBM. Characterizing the associations between imaging-derived phenotypic measurements with the genomic and molecular characteristics of tumors can enable insights into tumor biology, further enabling the practice of personalized cancer treatment. The analytical framework and workflow demonstrated in this study are inherently scalable to multiple MR sequences.
Multi-modal magnetic resonance imaging-based grading analysis for gliomas by integrating radiomics and deep features
Ning, Z.
Luo, J.
Xiao, Q.
Cai, L.
Chen, Y.
Yu, X.
Wang, J.
Zhang, Y.
Ann Transl Med2021Journal Article, cited 0 times
Website
LGG-1p19qDeletion
Algorithm Development
Support Vector Machine (SVM)
Classification
Glioma grading
Radiomics
Deep Learning
Background: To investigate the feasibility of integrating global radiomics and local deep features based on multi-modal magnetic resonance imaging (MRI) for developing a noninvasive glioma grading model. Methods: In this study, 567 patients [211 patients with glioblastomas (GBMs) and 356 patients with low-grade gliomas (LGGs)] enrolled between May 2006 and September 2018 were divided into training (n=186), validation (n=47), and testing (n=334) cohorts. All patients underwent post-contrast enhanced T1-weighted and T2 fluid-attenuated inversion recovery MRI scanning. Radiomics and deep features (trained by 8,510 3D patches) were extracted to quantify the global and local information of gliomas, respectively. A kernel fusion-based support vector machine (SVM) classifier was used to integrate these multi-modal features for grading gliomas. The performance of the grading model was assessed using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, the DeLong test, and the t-test. Results: The AUC, sensitivity, and specificity of the model based on the combination of radiomics and deep features were 0.94 [95% confidence interval (CI): 0.85, 0.99], 86% (95% CI: 64%, 97%), and 92% (95% CI: 75%, 99%), respectively, for the validation cohort; and 0.88 (95% CI: 0.84, 0.91), 88% (95% CI: 80%, 93%), and 81% (95% CI: 76%, 86%), respectively, for the independent testing cohort from a local hospital. The developed model outperformed the models based only on either radiomics or deep features (DeLong test, both P<0.001), and was also comparable to the clinical radiologists. Conclusions: This study demonstrated the feasibility of integrating multi-modal MRI radiomics and deep features to develop a promising noninvasive grading model for gliomas.
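The kernel fusion idea above can be illustrated with a short hedged sketch: one kernel per feature block (global radiomics, local deep features), combined into a single precomputed kernel for an SVM. The fusion weight and gamma values are illustrative assumptions, not the paper's tuned values.

```python
# Kernel fusion sketch: average two RBF kernels, one per feature block,
# and feed the fused kernel to an SVM with kernel="precomputed".
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def fused_kernel(X_radiomics, X_deep, w=0.5, gamma_r=0.01, gamma_d=0.001):
    """Weighted sum of per-block RBF kernels; weights/gammas are illustrative."""
    return w * rbf_kernel(X_radiomics, gamma=gamma_r) + \
           (1 - w) * rbf_kernel(X_deep, gamma=gamma_d)

# Training uses the (n_train, n_train) fused kernel:
#   clf = SVC(kernel="precomputed").fit(K_train, y_train)
# Prediction needs kernel rows between test and training samples, built with
# the same per-block gammas and weight (rbf_kernel(X_test_block, X_train_block)).
```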
Influence of feature calculating parameters on the reproducibility of CT radiomic features: a thoracic phantom study
Li, Ying
Tan, Guanghua
Vangel, Mark
Hall, Jonathan
Cai, Wenli
Quantitative Imaging in Medicine and Surgery2020Journal Article, cited 0 times
Website
Phantom FDA
Radiomic feature
Algorithms applied to spatially registered multi-parametric MRI for prostate tumor volume measurement
Mayer, Rulon
Simone, Charles B., II
Turkbey, Baris
Choyke, Peter
Quantitative Imaging in Medicine and Surgery2021Journal Article, cited 0 times
Website
PROSTATE-MRI
Tumor volume quantification
Prostate Cancer
Multi-parametric MRI
Comparative evaluation of conventional and deep learning methods for semi-automated segmentation of pulmonary nodules on CT
Bianconi, Francesco
Fravolini, Mario Luca
Pizzoli, Sofia
Palumbo, Isabella
Minestrini, Matteo
Rondini, Maria
Nuvoli, Susanna
Spanu, Angela
Palumbo, Barbara
Quant Imaging Med Surg2021Journal Article, cited 2 times
Website
LIDC-IDRI
Segmentation
Algorithm Development
Computed Tomography (CT)
Deep Learning
LUNG
Background: Accurate segmentation of pulmonary nodules on computed tomography (CT) scans plays a crucial role in the evaluation and management of patients with suspicion of lung cancer (LC). When performed manually, the process not only requires highly skilled operators but is also tiresome and time-consuming. To assist the physician in this task, several automated and semi-automated methods have been proposed in the literature. In recent years, in particular, the appearance of deep learning has brought about major advances in the field. Methods: Twenty-four semi-automated 'one-click' methods (12 conventional and 12 based on deep learning) for segmenting pulmonary nodules on CT were evaluated in this study. The experiments were carried out on two datasets: a proprietary one (383 images from a cohort of 111 patients) and a public one (259 images from a cohort of 100 patients). All patients had a positive report for suspected pulmonary nodules. Results: The methods based on deep learning clearly outperformed the conventional ones. The best performance [Sorensen-Dice coefficient (DSC)] in the two datasets was, respectively, 0.853 and 0.763 for the deep learning methods, and 0.761 and 0.704 for the traditional ones. Conclusions: Deep learning is a viable approach for semi-automated segmentation of pulmonary nodules on CT scans.
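For reference, the Sorensen-Dice coefficient (DSC) reported above can be computed for binary masks as follows; this is the generic definition, not code from the study.

```python
# Sorensen-Dice coefficient for two binary segmentation masks (numpy arrays).
import numpy as np

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    # Two empty masks are conventionally treated as a perfect match.
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0
```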
A review of deep learning-based three-dimensional medical image registration methods
Xiao, Haonan
Teng, Xinzhi
Liu, Chenyang
Li, Tian
Ren, Ge
Yang, Ruijie
Shen, Dinggang
Cai, Jing
Quantitative Imaging in Medicine and Surgery2021Journal Article, cited 0 times
Prostate Fused-MRI-Pathology
Prostate-3T
Medical image registration is a vital component of many medical procedures, such as image-guided radiotherapy (IGRT), as it allows for more accurate dose-delivery and better management of side effects. Recently, the successful implementation of deep learning (DL) in various fields has prompted many research groups to apply DL to three-dimensional (3D) medical image registration. Several of these efforts have led to promising results. This review summarizes the progress made in DL-based 3D image registration over the past 5 years and identifies existing challenges and potential avenues for further research. The collected studies were statistically analyzed based on the region of interest (ROI), image modality, supervision method, and registration evaluation metrics. The studies were classified into three categories: deep iterative registration, supervised registration, and unsupervised registration. The studies are thoroughly reviewed and their unique contributions are highlighted. A summary is presented following the review of each category of study, discussing its advantages, challenges, and trends. Finally, the common challenges for all categories are discussed, and potential future research topics are identified.
Correlation of prostate tumor eccentricity and Gleason scoring from prostatectomy and multi-parametric-magnetic resonance imaging
Mayer, Rulon
Simone, Charles B., II
Turkbey, Baris
Choyke, Peter
Quantitative Imaging in Medicine and Surgery2021Journal Article, cited 0 times
Website
PROSTATE-MRI
Radiomics
Radiogenomics
Gleason score
Tumor morphology
histology of wholemount prostatectomy
multi-parametric magnetic resonance imaging (multi-parametric MRI)
prostate cancer (PCa)
BACKGROUND: Prostate tumor volume predicts biochemical recurrence, metastases, and tumor proliferation. A recent study showed that prostate tumor eccentricity (elongation or roundness) correlated with Gleason score. No studies have examined the relationship among the prostate tumor's shape, volume, and potential aggressiveness. METHODS: Of the 26 patients analyzed, 18 had volumes >1 cc for the histology-based study, and 25 took up contrast material for the MRI portion of this study. This retrospective study quantitatively compared tumor eccentricity and volume measurements from pathology assessment of sectioned wholemount prostates and from multi-parametric MRI to Gleason scores. Multi-parametric MRI (T1, T2, diffusion, dynamic contrast-enhanced images) were resized, translated, and stitched to form spatially registered multi-parametric cubes. Multi-parametric signatures that characterize prostate tumors were inserted into a target detection algorithm (Adaptive Cosine Estimator, ACE). Various detection thresholds were applied to discriminate tumor from normal tissue. Pixel-based blobbing and labeling were applied to digitized pathology slides and thresholded ACE images. Tumor volumes were measured by counting voxels within each blob. The eccentricity calculation used moments of inertia from the blobs. RESULTS: From wholemount prostatectomy slides, fitting two sets of independent variables, prostate tumor eccentricity (largest blob eccentricity, weighted eccentricity, filtered weighted eccentricity) and tumor volume (largest blob volume, average blob volume, filtered average blob volume), to Gleason score in a multivariate analysis yields correlation coefficients R=0.798 to 0.879 with P<0.01. The eccentricity t-statistic exceeded the volume t-statistic. Fitting histology-based total prostate tumor volume against Gleason score yields R=0.498, P=0.0098. From multi-parametric MRI, the correlation coefficient R between the Gleason score and the largest blob eccentricity for varying thresholds (0.30 to 0.55) ranged from -0.51 to -0.672 (P<0.01). For varying MRI detection thresholds (0.60 to 0.80), the R between the largest blob volume and the Gleason score ranged from 0.46 to 0.50 (P<0.03). Combining tumor eccentricity and tumor volume in multivariate analysis failed to increase Gleason score prediction. CONCLUSIONS: Prostate tumor eccentricity, determined by histology or MRI, more accurately predicted Gleason score than prostate tumor volume. Combining tumor eccentricity with volume from histology-based analysis enhanced Gleason score prediction, unlike MRI.
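A hedged sketch of the blob-based measurements described in this abstract: connected components are labeled in a thresholded detection map, and an elongation measure is derived from the eigenvalues of the largest blob's coordinate covariance (its moments of inertia). The exact eccentricity definition used by the authors may differ; this formula is an illustrative assumption.

```python
# Label blobs in a thresholded detection map, then derive an eccentricity-style
# elongation measure for the largest blob from its coordinate covariance.
import numpy as np
from scipy import ndimage

def largest_blob_eccentricity(detection, threshold=0.5):
    labels, n = ndimage.label(detection >= threshold)
    if n == 0:
        return None, 0
    sizes = ndimage.sum(np.ones_like(labels), labels, index=range(1, n + 1))
    biggest = int(np.argmax(sizes)) + 1
    coords = np.argwhere(labels == biggest).astype(float)
    evals = np.sort(np.linalg.eigvalsh(np.cov(coords.T)))[::-1]  # descending
    # Elongation from the largest/smallest principal axes (illustrative form).
    ecc = np.sqrt(1.0 - evals[-1] / evals[0]) if evals[0] > 0 else 0.0
    return ecc, int(sizes[biggest - 1])   # eccentricity and blob volume in voxels
```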
Developing and validating a deep learning and radiomic model for glioma grading using multiplanar reconstructed magnetic resonance contrast-enhanced T1-weighted imaging: a robust, multi-institutional study
Ding, J.
Zhao, R.
Qiu, Q.
Chen, J.
Duan, J.
Cao, X.
Yin, Y.
Quant Imaging Med Surg2022Journal Article, cited 2 times
Website
TCGA-LGG
DICOM-Glioma-SEG
Multiplanar reconstruction (MPR)
Deep Learning
BRAIN
Radiomics
Background: Although surgical pathology or biopsy are considered the gold standard for glioma grading, these procedures have limitations. This study set out to evaluate and validate the predictive performance of a deep learning radiomics model based on contrast-enhanced T1-weighted multiplanar reconstruction images for grading gliomas. Methods: Patients from three institutions who were diagnosed with gliomas by surgical specimen and had multiplanar reconstructed (MPR) images were enrolled in this study. The training cohort included 101 patients from institution 1, comprising 43 high-grade glioma (HGG) patients and 58 low-grade glioma (LGG) patients, while the test cohort consisted of 50 patients from institutions 2 and 3 (25 HGG patients, 25 LGG patients). We then extracted radiomics features and deep learning features using six pretrained models from the MPR images. The Spearman correlation test and the recursive feature elimination selection method were used to reduce redundancy and select the most predictive features. Subsequently, three classifiers were used to construct classification models. The performance of the grading models was evaluated using the area under the receiver operating characteristic curve, sensitivity, specificity, accuracy, precision, and negative predictive value. Finally, the prediction performances on the test cohort were compared to determine the optimal classification model. Results: For the training cohort, 62% (13 out of 21) of the classification models constructed with MPR images from multiple planes outperformed those constructed with single-plane MPR images, and 61% (11 out of 18) of classification models constructed with both radiomics features and deep learning features had higher area under the curve (AUC) values than those constructed with only radiomics or deep learning features. The optimal model was a random forest model that combined radiomic features and VGG16 deep learning features derived from MPR images, which achieved an AUC of 0.847 in the training cohort and 0.898 in the test cohort. In the test cohort, the sensitivity, specificity, and accuracy of the optimal model were 0.840, 0.760, and 0.800, respectively. Conclusions: Multiplanar CE-T1W MPR imaging features are more effective than features from single planes when differentiating HGG and LGG. The combination of deep learning features and radiomics features can effectively grade glioma and assist clinical decision-making.
Development and testing quantitative metrics from multi-parametric magnetic resonance imaging that predict Gleason score for prostate tumors
Mayer, R.
Simone, C. B., 2nd
Turkbey, B.
Choyke, P.
Quant Imaging Med Surg2022Journal Article, cited 0 times
Website
PROSTATE-MRI
PROSTATE
Radiogenomics
Radiomics
Gleason score
Supervised target detection
adaptive cosine estimator
histology of wholemount prostatectomy
multi-parametric magnetic resonance imaging (multi-parametric MRI)
prostate cancer (PCa)
Background: Radiologists currently subjectively examine multi-parametric magnetic resonance imaging (MRI) to detect possible clinically significant lesions using the Prostate Imaging Reporting and Data System (PI-RADS) protocol. The assessment of imaging, however, relies on the experience and judgement of radiologists creating opportunity for inter-reader variability. Quantitative metrics, such as z-score and signal to clutter ratio (SCR), are therefore needed. Methods: Multi-parametric MRI (T1, T2, diffusion, dynamic contrast-enhanced images) were resampled, rescaled, translated, and stitched to form spatially registered multi-parametric cubes for patients undergoing radical prostatectomy. Multi-parametric signatures that characterize prostate tumors were inserted into z-score and SCR. The multispectral covariance matrix was computed for the outlined normal prostate. The z-score from each MRI image was computed and summed. To reduce noise in the covariance matrix, following matrix decomposition, the noisy eigenvectors were removed. Also, regularization and modified regularization was applied to the covariance matrix by minimizing the discrimination score. The filtered and regularized covariance matrices were inserted into the SCR calculation. The z-score and SCR were quantitatively compared to Gleason scores from clinical pathology assessment of the histology of sectioned wholemount prostates. Results: Twenty-six consecutive patients were enrolled in this retrospective study. Median patient age was 60 years (range, 49 to 75 years), median prostate-specific antigen (PSA) was 5.8 ng/mL (range, 2.3 to 23.7 ng/mL), and median Gleason score was 7 (range, 6 to 9). A linear fit of the summed z-score against Gleason score found a correlation of R=0.48 and a P value of 0.015. A linear fit of the SCR from regularizing covariance matrix against Gleason score found a correlation of R=0.39 and a P value of 0.058. The SCR employing the modified regularizing covariance matrix against Gleason score found a correlation of R=0.52 and a P value of 0.007. A linear fit of the SCR from filtering out 3 and 4 eigenvectors from the covariance matrix against Gleason score found correlations of R=0.50 and 0.44, respectively, and P values of 0.011 and 0.027, respectively. Conclusions: Z-score and SCR using filtered and regularized covariance matrices derived from spatially registered multi-parametric MRI correlates with Gleason score with highly significant P values.
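The two metrics in this abstract lend themselves to a short illustrative sketch: a summed multispectral z-score, and a Mahalanobis-style SCR computed after suppressing the smallest (noisiest) eigenvalues of the normal-tissue covariance matrix. The eigenvalue-flooring rule here is a stand-in assumption for the paper's filtering and regularization schemes.

```python
# Illustrative z-score and signal-to-clutter ratio (SCR) from a tumor signature
# and pixels sampled from normal prostate tissue (rows = pixels, cols = sequences).
import numpy as np

def summed_z_score(signature, normal_pixels):
    mu = normal_pixels.mean(axis=0)
    sigma = normal_pixels.std(axis=0)
    return float(np.sum(np.abs(signature - mu) / sigma))

def scr_filtered(signature, normal_pixels, n_drop=3):
    mu = normal_pixels.mean(axis=0)
    cov = np.cov(normal_pixels.T)
    evals, evecs = np.linalg.eigh(cov)         # ascending eigenvalues
    evals = evals.copy()
    # Floor the n_drop smallest (noisiest) eigenvalues -- a simple stand-in
    # for the paper's eigenvector filtering / covariance regularization.
    evals[:n_drop] = evals[n_drop]
    cov_inv = evecs @ np.diag(1.0 / evals) @ evecs.T
    d = signature - mu
    return float(np.sqrt(d @ cov_inv @ d))     # Mahalanobis-style SCR
```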
Development and validation of bone-suppressed deep learning classification of COVID-19 presentation in chest radiographs
Lam, Ngo Fung Daniel
Sun, Hongfei
Song, Liming
Yang, Dongrong
Zhi, Shaohua
Ren, Ge
Chou, Pak Hei
Wan, Shiu Bun Nelson
Wong, Man Fung Esther
Chan, King Kwong
Tsang, Hoi Ching Hailey
Kong, Feng-Ming Spring
Wáng, Yì Xiáng J
Qin, Jing
Chan, Lawrence Wing Chi
Ying, Michael
Cai, Jing
Quantitative Imaging in Medicine and Surgery2022Journal Article, cited 0 times
MIDRC-RICORD-1C
Background: Coronavirus disease 2019 (COVID-19) is a pandemic disease. Fast and accurate diagnosis of COVID-19 from chest radiography may enable more efficient allocation of scarce medical resources and hence improved patient outcomes. Deep learning classification of chest radiographs may be a plausible step towards this. We hypothesize that bone suppression of chest radiographs may improve the performance of deep learning classification of COVID-19 phenomena in chest radiographs.
Methods: Two bone suppression methods (Gusarev et al. and Rajaraman et al.) were implemented. The Gusarev and Rajaraman methods were trained on 217 pairs of normal and bone-suppressed chest radiographs from the X-ray Bone Shadow Suppression dataset (https://www.kaggle.com/hmchuong/xray-bone-shadow-supression). Two classifier methods with different network architectures were implemented. Binary classifier models were trained on the public RICORD-1c and RSNA Pneumonia Challenge datasets. An external test dataset was created retrospectively from a set of 320 COVID-19 positive patients from Queen Elizabeth Hospital (Hong Kong, China) and a set of 518 non-COVID-19 patients from Pamela Youde Nethersole Eastern Hospital (Hong Kong, China), and used to evaluate the effect of bone suppression on classifier performance. Classification performance, quantified by sensitivity, specificity, negative predictive value (NPV), accuracy, and area under the receiver operating curve (AUC), for non-suppressed radiographs was compared to that for bone-suppressed radiographs. Some of the pre-trained models used in this study are published at https://github.com/danielnflam.
Results: Bone suppression of external test data was found to significantly (P<0.05) improve AUC for one classifier architecture [from 0.698 (non-suppressed) to 0.732 (Rajaraman-suppressed)]. For the other classifier architecture, suppression did not significantly (P>0.05) improve or worsen classifier performance.
Conclusions: Rajaraman suppression significantly improved classification performance in one classification architecture, and did not significantly worsen classifier performance in the other classifier architecture. This research could be extended to explore the impact of bone suppression on classification of different lung pathologies, and the effect of other image enhancement techniques on classifier performance.
Combining and analyzing novel multi-parametric magnetic resonance imaging metrics for predicting Gleason score
Mayer, Rulon
Turkbey, Baris
Choyke, Peter
Simone, Charles B
Quantitative Imaging in Medicine and Surgery2022Journal Article, cited 0 times
PROSTATE-MRI
Background: Radiologists currently subjectively examine multi-parametric magnetic resonance imaging (MP-MRI) to determine prostate tumor aggressiveness using the Prostate Imaging Reporting and Data System scoring system (PI-RADS). Recent studies showed that modified signal to clutter ratio (SCR), tumor volume, and eccentricity (elongation or roundness) of prostate tumors correlated with Gleason score (GS). No previous studies have combined the prostate tumor's shape, SCR, tumor volume, in order to predict potential tumor aggressiveness and GS.
Methods: MP-MRI (T1, T2, diffusion, dynamic contrast-enhanced images) were obtained, resized, translated, and stitched to form spatially registered multi-parametric cubes. Multi-parametric signatures that characterize prostate tumors were inserted into a target detection algorithm [adaptive cosine estimator (ACE)]. Pixel-based blobbing and labeling were applied to the thresholded ACE images. The eccentricity calculation used moments of inertia from the blobs. Tumor volume was computed by counting pixels within multi-parametric MRI blobs and tumor outlines based on pathologist assessment of wholemount histology. Pathology assessment of GS was performed on wholemount prostatectomy specimens. The covariance matrix and mean of the normal tissue background were computed from normal prostate. Using the signatures and normal tissue statistics, the z-score and noise-corrected SCR [principal component (PC) filtering, modified regularization] were computed for each patient. Eccentricity, tumor volume, and SCR were fitted to GS. Analysis of variance assessed the relationships among the variables.
Results: A multivariate analysis generated correlation coefficients (0.60 to 0.784) and P values (0.00741 to <0.0001) from fitting two sets of independent variates, namely tumor eccentricity (the eccentricity of the largest blob, the weighted average eccentricity) and SCR (removing 3 PCs, removing 4 PCs, modified regularization, and z-score), to GS. The eccentricity t-statistic exceeded the SCR t-statistic. The three-variable fit to GS using tumor volume (histology, MRI) yielded correlation coefficients ranging from 0.724 to 0.819 (P<<0.05). Tumor volumes generated from histology yielded higher correlation coefficients than MRI volumes. Adding volume to eccentricity and SCR provides little improvement in fitting GS, owing to the higher correlations among the independent variables and the little additional independent information volume carries.
Conclusions: Combining prostate tumor eccentricity with SCR correlates relatively highly with GS.
UMRFormer-net: a three-dimensional U-shaped pancreas segmentation method based on a double-layer bridged transformer network
Fang, Kun
He, Baochun
Liu, Libo
Hu, Haoyu
Fang, Chihua
Huang, Xuguang
Jia, Fucang
Quantitative Imaging in Medicine and Surgery2023Journal Article, cited 0 times
CPTAC-PDA
Background: Methods based on the combination of transformer and convolutional neural networks (CNNs) have achieved impressive results in the field of medical image segmentation. However, most of the recently proposed combination segmentation approaches simply treat transformers as auxiliary modules which help to extract long-range information and encode global context into convolutional representations, and there is a lack of investigation on how to optimally combine self-attention with convolution.
Methods: We designed a novel transformer block (MRFormer) that combines a multi-head self-attention layer and a residual depthwise convolutional block as the basic unit to deeply integrate both long-range and local spatial information. The MRFormer block was embedded between the encoder and decoder in U-Net at the last two layers. This framework (UMRFormer-Net) was applied to the segmentation of three-dimensional (3D) pancreas, and its ability to effectively capture the characteristic contextual information of the pancreas and surrounding tissues was investigated.
Results: Experimental results show that the proposed UMRFormer-Net achieved accuracy in pancreas segmentation that was comparable or superior to that of existing state-of-the-art 3D methods in both the Clinical Proteomic Tumor Analysis Consortium Pancreatic Ductal Adenocarcinoma (CPTAC-PDA) dataset and the public Medical Segmentation Decathlon dataset (self-division). UMRFormer-Net statistically significantly outperformed existing transformer-related methods and state-of-the-art 3D methods (P<0.05, P<0.01, or P<0.001), with a higher Dice coefficient (85.54% and 77.36%, respectively) or a lower 95% Hausdorff distance (4.05 and 8.34 mm, respectively).
Conclusions: UMRFormer-Net can obtain better-matched and more accurate boundary and region information in pancreas segmentation, thus improving segmentation accuracy. The code is available at https://github.com/supersunshinefk/UMRFormer-Net.
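A toy PyTorch sketch of the block design this abstract describes, combining multi-head self-attention (long-range context) with a residual depthwise convolution (local detail). It is written in 2D for brevity, whereas UMRFormer-Net is 3D, and all sizes are illustrative; this is not the authors' implementation, which is linked above.

```python
# Toy block: self-attention over flattened tokens + residual depthwise conv.
import torch
import torch.nn as nn

class MRFormerLikeBlock(nn.Module):
    """Long-range context via attention, local spatial detail via depthwise conv."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.dwconv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # depthwise
        self.pwconv = nn.Conv2d(dim, dim, 1)                         # pointwise mix

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, H*W, C) token sequence
        t = self.norm(tokens)
        tokens = tokens + self.attn(t, t, t, need_weights=False)[0]
        spatial = tokens.transpose(1, 2).reshape(b, c, h, w)
        return spatial + self.pwconv(self.dwconv(spatial))

# y = MRFormerLikeBlock()(torch.randn(1, 64, 16, 16))   # shape preserved
```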
Development and acceptability validation of a deep learning-based tool for whole-prostate segmentation on multiparametric MRI: a multicenter study
Xu, L.
Zhang, G.
Zhang, D.
Zhang, J.
Zhang, X.
Bai, X.
Chen, L.
Jin, R.
Mao, L.
Li, X.
Sun, H.
Jin, Z.
Quant Imaging Med Surg2023Journal Article, cited 0 times
PROSTATEx
PROSTATE
Deep Learning
3D U-Net
Magnetic Resonance Imaging (MRI)
Organ segmentation
Segmentation
BACKGROUND: Accurate whole prostate segmentation on magnetic resonance imaging (MRI) is important in the management of prostatic diseases. In this multicenter study, we aimed to develop and evaluate a clinically applicable deep learning-based tool for automatic whole prostate segmentation on T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI). METHODS: In this retrospective study, 3-dimensional (3D) U-Net-based models in the segmentation tool were trained with 223 patients who underwent prostate MRI and subsequent biopsy from 1 hospital and validated in 1 internal testing cohort (n=95) and 3 external testing cohorts: PROSTATEx Challenge for T2WI and DWI (n=141), Tongji Hospital (n=30), and Beijing Hospital for T2WI (n=29). Patients from the latter 2 centers were diagnosed with advanced prostate cancer. The DWI model was further fine-tuned to compensate for the scanner variety in external testing. A quantitative evaluation, including Dice similarity coefficients (DSCs), 95% Hausdorff distance (95HD), and average boundary distance (ABD), and a qualitative analysis were used to evaluate the clinical usefulness. RESULTS: The segmentation tool showed good performance in the testing cohorts on T2WI (DSC: 0.922 for internal testing and 0.897-0.947 for external testing) and DWI (DSC: 0.914 for internal testing and 0.815 for external testing with fine-tuning). The fine-tuning process significantly improved the DWI model's performance in the external testing dataset (DSC: 0.275 vs. 0.815; P<0.01). Across all testing cohorts, the 95HD was <8 mm, and the ABD was <3 mm. The DSCs in the prostate midgland (T2WI: 0.949-0.976; DWI: 0.843-0.942) were significantly higher than those in the apex (T2WI: 0.833-0.926; DWI: 0.755-0.821) and base (T2WI: 0.851-0.922; DWI: 0.810-0.929) (all P values <0.01). The qualitative analysis showed that 98.6% of T2WI and 72.3% of DWI autosegmentation results in the external testing cohort were clinically acceptable. CONCLUSIONS: The 3D U-Net-based segmentation tool can automatically segment the prostate on T2WI with good and robust performance, especially in the prostate midgland. Segmentation on DWI was feasible, but fine-tuning might be needed for different scanners.
Non-annotated renal histopathological image analysis with deep ensemble learning
Koo, Jia Chun
Ke, Qi
Hum, Yan Chai
Goh, Choon Hian
Lai, Khin Wee
Yap, Wun-She
Tee, Yee Kai
Quantitative Imaging in Medicine and Surgery2023Journal Article, cited 0 times
CPTAC-CCRCC
Background: Renal cancer is one of the leading causes of cancer-related deaths worldwide, and early detection of renal cancer can significantly improve patients' survival rate. However, manual analysis of renal tissue in current clinical practice is labor-intensive, prone to inter-pathologist variation, and can easily miss important cancer markers, especially in the early stage.
Methods: In this work, we developed deep convolutional neural network (CNN) based heterogeneous ensemble models for automated analysis of renal histopathological images without detailed annotations. The proposed method first segments the histopathological tissue into patches with different magnification factors, then classifies the generated patches into normal and tumor tissues using pre-trained CNNs, and lastly performs deep ensemble learning to determine the final classification. The heterogeneous ensemble models consisted of CNN models from five deep learning architectures, namely VGG, ResNet, DenseNet, MobileNet, and EfficientNet. These CNN models were fine-tuned and used as base learners; they exhibited different performances and had great diversity in histopathological image analysis. The CNN models with superior classification accuracy (Acc) were then selected to undergo ensemble learning for the final classification. The performance of the investigated ensemble approaches was evaluated against the state-of-the-art literature.
Results: The performance evaluation demonstrated the superiority of the best-performing proposed ensemble model, a five-CNN weighted-averaging model, with an accuracy (Acc) of 99%, specificity (Sp) of 98%, F1-score (F1) of 99%, and area under the receiver operating characteristic (ROC) curve of 98%, alongside a slightly inferior recall (Re) of 99% compared to the literature.
Conclusions: The outstanding robustness and high performance scores of the developed ensemble model across the evaluated metrics suggest its reliability as a diagnostic system for assisting pathologists in analyzing renal histopathological tissues. The proposed ensemble of deep CNN models is expected to greatly improve the early detection of renal cancer by making the diagnostic process more efficient and less prone to misdetection and misdiagnosis, subsequently leading to a higher patient survival rate.
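The weighted-averaging ensemble at the heart of this approach reduces to a few lines: each base CNN contributes class probabilities, which are combined with normalized weights before the argmax. Weighting by validation accuracy, as suggested in the comment below, is an assumption for illustration.

```python
# Weighted soft-voting over per-model class probabilities.
# `probs_per_model` is a list of (n_samples, n_classes) arrays from the
# fine-tuned base learners; `weights` might be their validation accuracies.
import numpy as np

def weighted_average_ensemble(probs_per_model, weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize model weights
    fused = sum(wi * p for wi, p in zip(w, probs_per_model))
    return fused.argmax(axis=1)                  # final class per sample
```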
Transferring U-Net between low-dose CT denoising tasks: a validation study with varied spatial resolutions
Zhang, Xin
Su, Ting
Zhang, Yunxin
Cui, Han
Tan, Yuhang
Zhu, Jiongtao
Xia, Dongmei
Zheng, Hairong
Liang, Dong
Ge, Yongshuai
Quantitative Imaging in Medicine and Surgery2024Journal Article, cited 0 times
Pancreas-CT
LDCT
Background: Recently, deep learning techniques have been widely used in low-dose computed tomography (LDCT) imaging applications for quickly generating high quality computed tomography (CT) images at lower radiation dose levels. The purpose of this study is to validate the reproducibility of the denoising performance of a given network that has been trained in advance across varied LDCT image datasets that are acquired from different imaging systems with different spatial resolutions.
Methods: Specifically, LDCT images with comparable noise levels but different spatial resolutions were prepared to train the U-Net. The numbers of CT images used for network training, validation, and testing were 2,400, 300, and 300, respectively. Afterwards, self- and cross-validations among six selected spatial resolutions (62.5, 125, 250, 375, 500, 625 µm) were studied and compared side by side. The residual variance, peak signal to noise ratio (PSNR), normalized root mean square error (NRMSE), and structural similarity (SSIM) were measured and compared. In addition, network retraining on a small image set was performed to fine-tune the performance of transfer learning among LDCT tasks with varied spatial resolutions.
Results: The results demonstrated that a U-Net trained on LDCT images having a certain spatial resolution can effectively reduce the noise of other LDCT images having different spatial resolutions. However, the results also showed that image artifacts were generated during these cross-validations. For instance, noticeable residual artifacts were present at the margin and central areas of the object as the resolution inconsistency increased. The retraining results showed that the artifacts caused by the resolution mismatch can be greatly reduced by utilizing only about 20% of the original training data size. This quantitative improvement led to a reduction in the NRMSE from 0.1898 to 0.1263 and an increase in the SSIM from 0.7558 to 0.8036.
Conclusions: In conclusion, artifacts would be generated when transferring the U-Net to a LDCT denoising task with different spatial resolution. To maintain the denoising performance, it is recommended to retrain the U-Net with a small amount of datasets having the same target spatial resolution.
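The image-quality metrics used in this validation (PSNR, SSIM, NRMSE) are available in scikit-image; a minimal evaluation helper might look like the following, assuming the denoised output and the reference image share one intensity scale.

```python
# Denoising evaluation with scikit-image metrics; `denoised` and `reference`
# are 2D float arrays on the same intensity scale.
from skimage.metrics import (peak_signal_noise_ratio,
                             structural_similarity,
                             normalized_root_mse)

def evaluate_denoising(denoised, reference):
    rng = reference.max() - reference.min()
    return {
        "PSNR": peak_signal_noise_ratio(reference, denoised, data_range=rng),
        "SSIM": structural_similarity(reference, denoised, data_range=rng),
        "NRMSE": normalized_root_mse(reference, denoised),
    }
```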
An unsupervised semi-automated pulmonary nodule segmentation method based on enhanced region growing
Ren, He
Zhou, Lingxiao
Liu, Gang
Peng, Xueqing
Shi, Weiya
Xu, Huilin
Shan, Fei
Liu, Lei
Quantitative Imaging in Medicine and Surgery2020Journal Article, cited 0 times
Website
LIDC
Segmentation
region growing method
An overview of publicly available patient-centered prostate cancer datasets
Hulsen, Tim
2019Journal Article, cited 0 times
NaF PROSTATE
Prostate Fused-MRI-Pathology
Prostate-3T
PROSTATE-DIAGNOSIS
PROSTATE-MRI
PROSTATEx
QIN PROSTATE
QIN-PROSTATE-Repeatability
TCGA-PRAD
Prostate cancer (PCa) is the second most common cancer in men, and the second leading cause of death from cancer in men. Many studies on PCa have been carried out, each taking much time before the data is collected and ready to be analyzed. However, a wide range of PCa datasets is already available on the internet, which could be used for data mining, predictive modelling, or other purposes, reducing the need to set up new studies to collect data. In the current scientific climate, moving more and more toward the analysis of "big data" and large, international, multi-site projects using a modern IT infrastructure, these datasets could prove extremely valuable. This review presents an overview of publicly available patient-centered PCa datasets, divided into three categories (clinical, genomics, and imaging) and an "overall" section, to enable researchers to select a suitable dataset for analysis without having to go through days of work to find the right data. To acquire a list of human PCa databases, scientific literature databases and academic social network sites were searched. We also used the information from other reviews. All databases in the combined list were then checked for public availability. Only databases that were either directly publicly available or available after signing a research data agreement or retrieving a free login were selected for inclusion in this review. Data should be available to commercial parties as well. This paper focuses on patient-centered data, so the genomics data section does not include gene-centered or pathway-centered databases. We identified 42 publicly available, patient-centered PCa datasets. Some of these consist of different smaller datasets, and some contain combinations of datasets from the three data domains: clinical data, imaging data, and genomics data. Only one dataset contains information from all three domains. This review presents all datasets and their characteristics: number of subjects, clinical fields, imaging modalities, expression data, mutation data, biomarker measurements, etc. Despite all the attention that has been given to making this overview as extensive as possible, it is very likely not complete, and it will soon become outdated. However, this review may help many PCa researchers find suitable datasets to answer their research questions without the need to start a new data collection project. In the coming era of big data analysis, overviews like this are becoming more and more useful.
Radiomics nomogram for prediction disease-free survival and adjuvant chemotherapy benefits in patients with resected stage I lung adenocarcinoma
Xie, Dong
Wang, Ting-Ting
Huang, Shu-Jung
Deng, Jia-Jun
Ren, Yi-Jiu
Yang, Yang
Wu, Jun-Qi
Zhang, Lei
Fei, Ke
Sun, Xi-Wen
She, Yun-Lang
Chen, Chang
2020Journal Article, cited 0 times
NSCLC Radiogenomics
BACKGROUND: Robust imaging biomarkers are needed for risk stratification of stage I lung adenocarcinoma patients in order to select the optimal treatment regimen. We aimed to construct and validate a radiomics nomogram for predicting the disease-free survival (DFS) of patients with resected stage I lung adenocarcinoma, and further to identify candidates who would benefit from adjuvant chemotherapy (ACT).
METHODS: Using a radiomics approach, we analyzed computed tomography (CT) images of 554 patients from three multicenter cohorts. Prognostic radiomics features were extracted from the CT images and selected using a least absolute shrinkage and selection operator (LASSO) Cox regression model to build a radiomics signature for DFS stratification. The biological basis of the radiomics was explored in the Radiogenomics dataset (n=79) by gene set enrichment analysis (GSEA). A nomogram integrating the signature with the clinicopathologic factors significant in the multivariate analysis was then constructed in the training cohort (n=238), and its prognostic accuracy was evaluated in the validation cohort (n=237). Finally, the predictive value of the nomogram for ACT benefit was assessed.
RESULTS: A higher radiomics signature score was significantly associated with worse DFS in both the training and validation cohorts (P<0.001). The GSEA showed that the signature was highly correlated with characteristic metabolic processes and the immune system during cancer progression. Multivariable analysis revealed that age (P=0.031), pathologic TNM stage (P=0.043), histologic subtype (P=0.010), and the signature (P<0.001) were independently associated with patients' DFS. The integrated radiomics nomogram showed good discrimination performance, as well as good calibration and clinical utility, for DFS prediction in the validation cohort. We further found that patients with high points (point ≥8.788) defined by the radiomics nomogram obtained a significantly favorable response to ACT (P=0.04), while patients with low points (point <8.788) showed no survival difference (P=0.7).
CONCLUSIONS: The radiomics nomogram could be used for prognostic prediction and ACT benefits identification for patient with resected stage I lung adenocarcinoma.
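A hedged sketch of the LASSO Cox feature selection step described in the methods, using lifelines' L1-penalized Cox model. The penalizer value, column names, and the signature-as-linear-predictor construction are illustrative assumptions, not the paper's code.

```python
# L1-penalized Cox model for radiomics signature building.
# `df` is assumed to contain only the radiomic feature columns plus the
# DFS time column and the binary event column (names hypothetical).
from lifelines import CoxPHFitter

def lasso_cox_signature(df, time_col="dfs_months", event_col="event",
                        penalizer=0.1):
    cph = CoxPHFitter(penalizer=penalizer, l1_ratio=1.0)  # pure L1 (LASSO)
    cph.fit(df, duration_col=time_col, event_col=event_col)
    coefs = cph.params_
    selected = coefs[coefs.abs() > 1e-6]      # features surviving the penalty
    # Radiomics score = linear predictor over the selected features.
    score = (df[selected.index] * selected).sum(axis=1)
    return selected, score
```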
Breast Cancer MRI Classification Based on Fractional Entropy Image Enhancement and Deep Feature Extraction
Hasan, Ali M.
Qasim, Asaad F.
Jalab, Hamid A.
Ibrahim, Rabha W.
2022Journal Article, cited 0 times
BREAST-DIAGNOSIS
Breast cancer is one of the common fatal diseases among women worldwide, and early detection of breast cancer is one of the most important secondary prevention strategies. Since medical imaging is widely used in diagnosing and monitoring many chronic diseases, numerous image processing algorithms have been proposed over the years to advance the field of medical imaging so that the diagnostic process becomes more accurate and efficient. This study presents a new algorithm for extracting deep features from two types of MRI images, T2W-TSE and STIR, which serve as inputs to the proposed deep neural networks used for feature extraction to distinguish between pathological and healthy breast MRI scans. In this algorithm, breast MRI scans are preprocessed before the feature extraction step to reduce the effects of variations between MRI slices, to separate the right breast from the left, and to isolate the image background. The maximum accuracy achieved for classifying a dataset of 326 breast MRI slices was 98.77%. The model appears efficient and performant and can therefore be considered a candidate for application in a clinical setting.
The contribution of axillary lymph node volume to recurrence-free survival status in breast cancer patients with sub-stratification by molecular subtypes and pathological complete response
Kang, James
Li, Haifang
Cattell, Renee
Talanki, Varsha
Cohen, Jules A.
Bernstein, Clifford S.
Duong, Tim
Breast Cancer Research2020Journal Article, cited 0 times
Website
ISPY1/ACRIN 6657
Purpose: This study sought to examine the contribution of axillary lymph node (LN) volume to recurrence-free survival (RFS) in breast cancer patients with sub-stratification by molecular subtypes and full or nodal pathologic complete response (PCR). Methods: The largest LN volumes per patient at pre-neoadjuvant chemotherapy on standard clinical breast 1.5-Tesla MRI, 3 molecular subtypes, full, breast, and nodal PCR, and 10-year RFS were tabulated (N = 110 patients from MRIs of the I-SPY-1 TRIAL). A volume threshold of two standard deviations was used to categorize large versus small LNs for sub-stratification. In addition, "normal" node volumes were determined from a different cohort of 218 axillary LNs. Results: LN volumes (4.07 ± 5.45 cm3) were significantly larger than normal axillary LN volumes (0.646 ± 0.657 cm3, P = 10^-16). Full and nodal PCR was not dependent on pre-neoadjuvant chemotherapy nodal volume (P > .05). The HR+/HER2- group had smaller axillary LN volumes than the HER2+ and triple-negative groups (P < .05). Survival was not dependent on pre-treatment axillary LN volumes alone (P = .29). However, when sub-stratified by PCR, the large LN group with full (P = .011) or nodal PCR (P = .0026) showed better recurrence-free survival than the small LN group. There was a significant difference in RFS when the small node group was separated by the 3 molecular subtypes (P = .036) but not the large node group (P = .97). Conclusions: This study found an interaction of axillary lymph node volume, pathological complete response, and molecular subtype that informs recurrence-free survival status. Improved characterization of the axillary lymph nodes has the potential to improve the management of breast cancer patients.
Predicting the polybromo-1 (PBRM1) mutation of a clear cell renal cell carcinoma using computed tomography images and KNN classification with random subspace
Ökmen, Harika Beste
Guvenis, Albert
Uysal, Hadi
2019Journal Article, cited 0 times
TCGA-KIRC
Purpose: Molecular genetic knowledge of clear-cell renal-cell carcinoma (CCRCC) plays an important role in predicting prognosis and may be used as a guide in treatment decisions and the conception of clinical trials. It would therefore be desirable to predict these mutations non-invasively from CT images, which are already available for CCRCC patients. Methods: TCGA-KIRC data were obtained from the National Cancer Institute's (NCI) image dataset. We used data from 191 patients, of which 63 were associated with PBRM1 mutations. The tumors were delineated by a radiologist with over 10 years of experience, on slices that displayed the largest diameter of the tumor. Features were extracted and normalized. After feature selection, KNN classification with the Random Subspace method was used, as it is known to have advantages over the simple k-nearest-neighbor method. Results: Prediction accuracy for PBRM1 was found to be 83.8%. Conclusions: A single CT slice of a CCRCC can be used for predicting PBRM1 mutations using KNN classification in Random Subspaces with acceptable accuracy.
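KNN classification with the Random Subspace method, as used above, corresponds to bagging KNN learners over random feature subsets; a minimal scikit-learn sketch follows, with illustrative (untuned) parameter values.

```python
# Random subspace KNN: bag KNN learners over random feature subsets
# (no sample bootstrap, feature resampling only). Values are illustrative.
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

clf = BaggingClassifier(
    KNeighborsClassifier(n_neighbors=5),
    n_estimators=50,
    max_features=0.5,          # each KNN sees a random half of the features
    bootstrap=False,           # keep all samples...
    bootstrap_features=True,   # ...but resample the feature subspace
    random_state=0,
)
# acc = cross_val_score(clf, X, y, cv=5).mean()  # X: CT texture features, y: PBRM1 status
```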
Brain Tumor Automatic Detection from MRI Images Using Transfer Learning Model with Deep Convolutional Neural Network
Bayoumi, Esraa
Abd-Ellah, mahmoud
Khalaf, Ashraf A. M.
Gharieb, Reda
Journal of Advanced Engineering Trends2021Journal Article, cited 1 times
Website
Brain-Tumor-Progression
RIDER NEURO MRI
Computer Aided Detection (CADe)
Convolutional Neural Network (CNN)
Transfer learning
Resnet50
Vgg16
AlexNet
Inceptionv3
Successful early-stage brain tumor detection plays an important role in improving patient treatment and survival. Evaluating magnetic resonance imaging (MRI) images manually is a very difficult task due to the large number of images routinely produced in the clinic. There is therefore a need for a computer-aided diagnosis (CAD) system for early detection and classification of brain tumors as normal or abnormal. This paper aims to design and evaluate transfer learning based on the convolutional neural networks (CNNs) that have achieved state-of-the-art image classification performance in recent years. Five different modifications were applied to five well-known CNNs to determine the most effective modification. Five layer modifications with parameter tuning were applied to each architecture, providing a new CNN architecture for brain tumor detection. Most brain tumor datasets contain too few images to train a deep learning structure; therefore, two datasets were used in the evaluation to ensure the effectiveness of the proposed structures. The first is a standard dataset from the RIDER Neuro MRI database including 349 brain MRI images, with 109 normal images and 240 abnormal images. The second is a collection of 120 brain MRI images including 60 abnormal images and 60 normal images. The results show that the proposed CNN transfer learning with MRIs can learn significant biomarkers of brain tumors; the best accuracy, specificity, and sensitivity attained were 100% for each.
Automatic Lung Segmentation and Lung Nodule Type Identification over LIDC-IDRI dataset
Suji, R. Jenkin
Bhadauria, Sarita Singh
Indian Journal of Computer Science and Engineering2021Journal Article, cited 0 times
Website
LIDC-IDRI
Lung CT Segmentation Challenge 2017
Segmentation
Python
Vasculature
Computer Aided Detection (CADe)
Algorithm Development
Accurate segmentation of lung parenchyma is one of the basic steps for lung nodule detection and diagnosis. Using thresholding and morphology based methods for lung parenchyma segmentation is challenging due to the homogeneous intensities present in lung images. Further, datasets typically do not contain explicit labels of their nodule types, and there is little literature on how to typify nodules into different nodule types, even though identifying nodule types helps to understand and explain the progress and shortcomings of various steps in the computer-aided diagnosis pipeline. Hence, this work also presents methods for identification of nodule types: juxta-vascular, juxta-pleural, and isolated. This work presents thresholding and morphological operation based methods for both lung segmentation and lung nodule type identification. Thresholding and morphology based methods have been chosen over more sophisticated approaches for their simplicity and speed. Qualitative validation of the proposed lung segmentation method is provided in terms of step-by-step output on a scan from the LIDC-IDRI dataset, and the lung nodule type identification method is validated by output volume images. Further, the lung segmentation method is validated by percentage of overlap, and the results on nodule type identification for various lung segmentation outputs have been analysed. The provided analysis offers insight into the ability to analyse lung segmentation algorithms and nodule detection and segmentation algorithms in terms of nodule types, and motivates the need to provide nodule type ground truth for developing better nodule type classification/identification algorithms. Keywords: Lung Segmentation; Juxta-vascular nodules; Juxta-pleural nodules; Thresholding; Morphological operations.
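A minimal sketch of the thresholding-and-morphology style of lung parenchyma segmentation this work builds on, for a single CT slice in Hounsfield units. The -320 HU threshold and the closing radius are common illustrative choices, not the paper's exact parameters.

```python
# Threshold + morphology lung segmentation on one CT slice in Hounsfield units.
import numpy as np
from scipy import ndimage
from skimage import morphology, segmentation

def segment_lungs(ct_slice_hu):
    binary = ct_slice_hu < -320                      # air-like voxels
    binary = segmentation.clear_border(binary)       # drop air outside the body
    labels, n = ndimage.label(binary)
    if n > 2:                                        # keep the two largest regions (lungs)
        sizes = np.bincount(labels.ravel())[1:]
        keep = np.argsort(sizes)[-2:] + 1
        binary = np.isin(labels, keep)
    # Closing reattaches juxta-pleural nodules cut off by the threshold.
    binary = morphology.binary_closing(binary, morphology.disk(10))
    return ndimage.binary_fill_holes(binary)
```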
A scheme for the enhancement of images of malignant neoplasms in tissues formed from the embryonic endoderm [Un esquema para el realce de imágenes de neoplasias malignas en tejidos formados a partir del endodermo embrionario]
Rangel, José Gerardo Chacón
Ramon, Josep Nestor Sequeda
Fernandez, Johel Enrique Rodriguez
Fuentes, Anderson Smith Florez
2021Journal Article, cited 0 times
TCGA-LIHC
The impact of artifacts and noise in computed tomography images determines the quality and comprehensibility of computational procedures for medical image analysis. In the automatic evaluation of cancerous tumors, image enhancement is a necessary preliminary task for the segmentation methods used to locate the tumor and quantify neoplasm volumes. This paper describes a procedure for evaluating the ability of a set of smoothing filters to reduce the impact of artifacts and noise in computed tomography images of the lung, liver, and stomach in the presence of cancerous tumors. The best enhancement filters are determined using a scoring function based on the fusion of full-reference and blind-reference image enhancement measures.
A computational model for texture analysis in images with a reaction-diffusion based filter
Hamid, Lefraich
Fahim, Houda
Zirhem, Mariam
Alaa, Nour Eddine
Journal of Mathematical Modeling2021Journal Article, cited 0 times
Website
TCGA-SARC
RIDER NEURO MRI
Texture analysis
Model
Radiomics
As one of the most important tasks in image processing, texture analysis is related to a class of mathematical models that characterize the spatial variations of an image. In this paper, in order to extract features of interest, we propose a reaction-diffusion based model that uses the variational approach. We first describe the mathematical model and then, aiming to simulate it accurately, propose an efficient numerical scheme. Thereafter, we compare our method with findings from the literature. Finally, we conclude our analysis with a number of experimental results showing the robustness and performance of our algorithm.
Breast Cancer Diagnostic System Based on MR images Using KPCA-Wavelet Transform and Support Vector Machine
Brain tumor segmentation using morphological processing and the discrete wavelet transform
Lojzim, Joshua Michael
Fries, Marcus
Journal of Young Investigators2017Journal Article, cited 0 times
Website
MATLAB
MRI
Segmentation
Brain
BCDNet: A Deep Learning Model with Improved Convolutional Neural Network for Efficient Detection of Bone Cancer Using Histology Images
Bolleddu Devananda, Rao
K. Madhavi
2024Journal Article, cited 0 times
Osteosarcoma-Tumor-Assessment
Bone Cancer
Computer Aided Detection (CADe)
Deep Learning
Convolutional Neural Network (CNN)
Among the several types of cancer, bone cancer is among the most lethal in the world, and its prevention is better than cure. Early detection of bone cancer creates the opportunity for medical intervention to prevent the spread of malignant cells and to help patients recover from the disease. Many medical imaging modalities, such as histology, histopathology, radiology, X-rays, MRI, CT scans, phototherapy, PET, and ultrasound, are being used in bone cancer detection research. However, hematoxylin and eosin stained histology images have proven crucial for early diagnosis of bone cancer. Existing Convolutional Neural Network (CNN) based deep learning techniques are well suited to medical image analytics, but such models are prone to mediocre performance unless configured properly through empirical study. In this article, we suggest a deep learning based framework for automatic bone cancer detection. We also propose a CNN variant known as Bone Cancer Detection Network (BCDNet), configured and optimized for the detection of a common kind of bone cancer named osteosarcoma, together with an algorithm known as Learning based Osteosarcoma Detection (LbOD), which exploits the BCDNet model for both binomial and multi-class classification. Osteosarcoma-Tumor-Assessment is the histology dataset used for our empirical study. The outcomes of our trials showed that BCDNet outperforms baseline models with 96.29% accuracy in binary classification and 94.69% accuracy in multi-class classification.
A NOVEL COMPARATIVE STUDY FOR AUTOMATIC THREE-CLASS AND FOUR-CLASS COVID-19 CLASSIFICATION ON X-RAY IMAGES USING DEEP LEARNING
Yaşar, Hüseyin
Ceylan, Murat
Malaysian Journal of Computer Science2022Journal Article, cited 0 times
Website
COVID-19-AR
COVID-19
Convolutional Neural Network (CNN)
X-Rays
Classification
Deep learning
The contagiousness rate of the COVID-19 virus, which is evaluated to have been transmitted from an animal to a human during the last months of 2019, is higher than that of the MERS-CoV and SARS-CoV viruses originating from the same family. The high rate of contagion has caused the COVID-19 virus to spread rapidly to all countries of the world. It is of great importance to be able to detect cases quickly in order to control the spread of the COVID-19 virus. Therefore, the development of systems that make automatic COVID-19 diagnoses using artificial intelligence approaches based on X-ray, CT scan, and ultrasound images is an urgent and indispensable requirement. In order to increase the number of X-ray images used within the study, a mixed dataset was created by combining eight different datasets, thus maximizing the scope of the study. In the study, a total of 9,667 X-ray images were used, including 3,405 COVID-19 samples, 2,780 bacterial pneumonia samples, 1,493 viral pneumonia samples, and 1,989 healthy samples. In this study, which aims to diagnose COVID-19 disease using X-ray images, automatic classification was performed using two different classification structures: COVID-19 Pneumonia/Other Pneumonia/Healthy and COVID-19 Pneumonia/Bacterial Pneumonia/Viral Pneumonia/Healthy. Convolutional Neural Networks (CNNs), a successful deep learning method, were used as classifiers within the study. A total of seven CNN architectures were used: MobileNetV2, ResNet101, GoogLeNet, Xception, DenseNet201, EfficientNetB0, and InceptionV3. Classification results were obtained from the original X-ray images and from images produced using Local Binary Patterns and local entropy. New classification results were then calculated from the obtained results using a pipeline algorithm. Detailed results were obtained to meet the scope of the study. According to the results of the experiments carried out, the three most successful CNN architectures for both three-class and four-class automatic classification were DenseNet201, Xception, and InceptionV3, respectively. In addition, the pipeline algorithm used in the study proved very useful for improving the results. The study results show that improvements of up to 1.57% were achieved in some comparison parameters.
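The Local Binary Pattern and local entropy representations mentioned in this abstract can be produced with scikit-image; a short sketch follows, with the LBP parameters and the entropy neighborhood as illustrative assumptions.

```python
# Produce LBP and local-entropy maps from a chest X-ray with scikit-image.
from skimage.feature import local_binary_pattern
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def lbp_and_entropy(xray):                       # xray: 2D array scaled to [0, 1]
    img = img_as_ubyte(xray)
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    ent = entropy(img, disk(5))                  # local entropy in a 5-px radius
    return lbp, ent                              # alternative CNN input channels
```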
Multi-View Attention-based Late Fusion (MVALF) CADx system for breast cancer using deep learning
Iftikhar, Hina
Shahid, Ahmad Raza
Raza, Basit
Khan,Hasan Nasir
Machine Graphics & Vision2020Journal Article, cited 0 times
Website
CBIS-DDSM
Computer Aided Diagnosis (CADx)
BREAST
Transfer learning
Mammography
Information fusion
Breast cancer is a leading cause of death among women. Early detection can significantly reduce the mortality rate among women and improve their prognosis. Mammography is the first-line procedure for early diagnosis. Early conventional Computer-Aided Diagnosis (CADx) systems for breast lesion diagnosis were based on single-view information only. The last decade evidenced the use of two mammographic views, the Medio-Lateral Oblique (MLO) and Cranio-Caudal (CC) views, in CADx systems. More recent studies show the effectiveness of using four mammographic views to train CADx systems with a feature fusion strategy for the classification task. In this paper, we propose an end-to-end Multi-View Attention-based Late Fusion (MVALF) CADx system that fuses the predictions of four view models, each trained separately. These separate models have different predictive ability for each class, and the appropriate fusion of multi-view models can achieve better diagnostic performance, so it is necessary to assign proper weights to the multi-view classification models. To resolve this issue, an attention-based weighting mechanism is adopted to assign proper weights to the trained models for the fusion strategy. The proposed methodology is used for the classification of mammograms into normal, mass, calcification, malignant mass, and benign mass classes. The publicly available CBIS-DDSM and mini-MIAS datasets are used for the experimentation. The results show that our proposed system achieved an AUC of 0.996 for normal vs. abnormal, 0.922 for mass vs. calcification, and 0.896 for malignant vs. benign masses. Superior results are seen for the classification of malignant vs. benign masses with our proposed approach, higher than the results of single-view, two-view, and four-view early fusion-based systems. The overall results at each level show the potential of multi-view late fusion with transfer learning in the diagnosis of breast cancer.
Multi-Class Classification Framework for Brain Tumor MR Image Classification by Using Deep CNN with Grid-Search Hyper Parameter Optimization Algorithm
Mukkapati, Naveen
Anbarasi, MS
International Journal of Computer Science & Network Security2022Journal Article, cited 0 times
Website
REMBRANDT
RIDER NEURO MRI
TCGA-LGG
Deep convolutional neural network (DCNN)
Classification
Magnetic Resonance Imaging (MRI)
BRAIN
Softmax
Histopathological analysis of biopsy specimens is still used today for diagnosing and classifying brain tumors. The available procedures are intrusive, time consuming, and prone to human error. To overcome these disadvantages, a fully automated deep learning-based model is needed to classify brain tumors into multiple classes. The proposed CNN model achieves an accuracy of 92.98% in categorizing tumors into five classes: normal, glioma, meningioma, pituitary tumor, and metastatic tumor. Using the grid search optimization approach, all of the critical hyperparameters of the suggested CNN framework were assigned automatically. The suggested CNN model is compared with cutting-edge CNN models such as AlexNet, Inception v3, ResNet-50, VGG-16, and GoogLeNet. Satisfactory classification results were produced using large, publicly available clinical datasets. Physicians and radiologists can use the suggested CNN model to confirm their initial screening for multi-class brain tumor classification.
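Grid-search hyperparameter optimization of the kind described above can be sketched with scikit-learn's GridSearchCV; a small MLP stands in for the paper's deep CNN, and the grid values are illustrative.

```python
# Exhaustive grid search over a small hyperparameter grid with cross-validation.
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# One model is trained per combination of candidate values.
param_grid = {
    "hidden_layer_sizes": [(128,), (256, 128)],
    "alpha": [1e-4, 1e-3],               # L2 regularization strength
    "learning_rate_init": [1e-3, 1e-4],
}
search = GridSearchCV(MLPClassifier(max_iter=300), param_grid,
                      cv=3, scoring="accuracy")
# search.fit(X_train, y_train); print(search.best_params_, search.best_score_)
```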
Semi-Supervised Learning with Pseudo-Labeling for Pancreatic Cancer Detection on CT Scans
Kurasova, Olga
Medvedev, Viktor
Šubonienė, Aušra
Dzemyda, Gintautas
Gulla, Aistė
Samuilis, Artūras
Jagminas, Džiugas
Strupas, Kęstutis
2023Conference Paper, cited 0 times
Pancreas-CT
Semi-supervised learning
PANCREAS
Computer Aided Detection (CADe)
Deep learning techniques have recently gained increasing attention not only among computer science researchers but also across a wide range of applied fields. However, deep learning models demand huge amounts of data, and fully supervised learning requires labeled data to solve classification, recognition, and segmentation problems. Data labeling and annotation in the medical domain are time-consuming and labor-intensive. Semi-supervised learning has demonstrated the ability to improve deep learning performance when labeled data is scarce. Nevertheless, how to leverage not only labeled data but also the huge amount of unlabeled data remains an open and challenging question. In this paper, the problem of pancreatic cancer detection on CT scans is addressed by a semi-supervised learning approach based on pseudo-labeling. Preliminary results are promising and show the potential of semi-supervised deep learning to detect pancreatic cancer at an early stage with a limited amount of labeled data.
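One round of the pseudo-labeling strategy this paper builds on can be sketched as follows: train on the labeled set, keep only high-confidence predictions on the unlabeled pool as pseudo-labels, and retrain on the union. The confidence threshold and the generic scikit-learn classifier interface are illustrative assumptions.

```python
# One pseudo-labeling round: label the unlabeled pool with the current model,
# keep confident predictions, and retrain on the augmented training set.
import numpy as np
from sklearn.base import clone

def pseudo_label_round(clf, X_lab, y_lab, X_unlab, threshold=0.95):
    model = clone(clf).fit(X_lab, y_lab)
    probs = model.predict_proba(X_unlab)
    conf = probs.max(axis=1)
    keep = conf >= threshold                     # confident pseudo-labels only
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, probs[keep].argmax(axis=1)])
    return clone(clf).fit(X_aug, y_aug), int(keep.sum())
```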
Database Acquisition for the Lung Cancer Computer Aided Diagnostic Systems
Meldo, Anna
Utkin, Lev
Lukashin, Aleksey
Muliukha, Vladimir
Zaborovsky, Vladimir
2019Conference Paper, cited 0 times
LIDC-IDRI
TCGA-LUAD
CPTAC-LUAD
LUNG
Computer Aided Diagnosis (CADx)
Algorithm Development
Most computer-aided diagnostic (CAD) systems based on deep learning algorithms are similar from the point of view of their data processing stages. The main typical stages are training data acquisition, pre-processing, segmentation and classification. Homogeneity of a training dataset's structure and its completeness are very important for minimizing inaccuracies in the development of CAD systems. The main difficulties in medical training data acquisition concern their heterogeneity and incompleteness. Another problem is the lack of a sufficiently large amount of data for training the deep neural networks that form the basis of CAD systems. In order to overcome these problems in lung cancer CAD systems, a new methodology of dataset acquisition is proposed, using as an example the database called LIRA, which has been applied to training the intelligent lung cancer CAD system called Dr. AIzimov. One important peculiarity of the LIRA dataset is the morphological confirmation of diseases. Another peculiarity is taking into account and including cases that are "atypical" from the point of view of radiographic features. The database development is carried out in interdisciplinary collaboration between the radiologists and the data scientists developing the CAD system.
Denoising on Low-Dose CT Image Using Deep CNN
Sadamatsu, Yuta
Murakami, Seiichi
Li, Guangxu
Kamiya, Tohru
2022Conference Paper, cited 0 times
Lung Phantom
Image denoising
Deep convolutional neural network (DCNN)
Computed Tomography (CT) scans are widely used in Japan, and they contribute to public health. On the other hand, there is also a risk of radiation exposure, so attempts are being made to reduce the radiation dose during imaging. However, reducing the radiation dose causes noise and degrades image quality. In this paper, we propose an image analysis method that efficiently removes noise by changing the activation function of a Deep Convolutional Neural Network (Deep CNN). Experimental tests using full-body slice CT images of pigs and phantom CT images of lungs with Poisson noise show that the proposed method is helpful, by comparing the results with normal-dose CT images and evaluating image quality using the peak signal-to-noise ratio (PSNR).
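PSNR, the image-quality metric used in this evaluation, is straightforward to compute; a small NumPy sketch:

import numpy as np

def psnr(reference, test, data_range=None):
    # Peak signal-to-noise ratio in dB between a reference and a test image.
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)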
Towards a Whole Body [18F] FDG Positron Emission Tomography Attenuation Correction Map Synthesizing using Deep Neural Networks
Rodríguez Colmeiro, Ramiro Germán
Verrastro, Claudio
Minsky, Daniel
Grosges, Thomas
Journal of Computer Science and Technology2021Journal Article, cited 0 times
HNSCC-3D-CT-RT
Deep Learning
U-Net
The correction of attenuation effects in Positron Emission Tomography (PET) imaging is fundamental to obtaining a correct radiotracer distribution. However, direct measurement of this attenuation map is not error-free and normally results in an additional ionizing radiation dose to the patient. Here, we explore the task of whole-body attenuation map generation using 3D deep neural networks and analyze the advantages that adversarial network training can provide to such models. The networks are trained to learn the mapping from non-attenuation-corrected [18F]-fluorodeoxyglucose PET images to a synthetic Computerized Tomography (sCT) image and to label the input voxel tissue. The sCT image is then further refined using an adversarial training scheme to recover higher-frequency details and lost structures using context information. This work is trained and tested on publicly available datasets containing several PET images from different scanners with different radiotracer administration and reconstruction modalities. The network is trained with 108 samples and validated on 10 samples. The sCT generation was tested on 133 samples from 8 distinct datasets. The resulting mean absolute errors of the networks are 90±20 HU and 103±18 HU, with peak signal-to-noise ratios of 19.3±1.7 dB and 18.6±1.5 dB, for the base model and the adversarial model respectively. The attenuation correction is tested by means of attenuation sinograms, obtaining a line-of-response attenuation mean error lower than 1% with a standard deviation lower than 8%. The proposed deep learning topologies are capable of generating whole-body attenuation maps from uncorrected PET image data. Moreover, the accuracy of both methods holds in the presence of data from multiple sources and modalities when trained on publicly available datasets. Finally, while the adversarial layer enhances the visual appearance of the produced samples, the 3D U-Net achieves higher metric performance.
Brain Tumor Classification Using Deep Neural Network
Çınarer, Gökalp
Emiroğlu, Bülent Gürsel
Arslan, Recep Sinan
Yurttakal, Ahmet Haşim
2020Journal Article, cited 0 times
REMBRANDT
EfficientNetB3-Based Transfer Learning Model for Accurate Classification of Acute Lymphoblastic Leukemia Blasts
Background: Acute lymphoblastic leukemia (ALL) predominantly affects pediatric patients and is characterized by the proliferation of immature lymphoblasts in the bone marrow. This uncontrolled growth impairs normal hematopoiesis, leading to anemia, immunodeficiency, and increased susceptibility to infections. Accurate detection and classification of these immature blasts are crucial for effective treatment planning and monitoring. Methods: This study utilizes transfer learning (TL) to improve the detection of immature ALL blasts in microscopic images. We employed the EfficientNetB3 model, a convolutional neural network (CNN) known for its efficient scaling and superior performance. The model was pre-trained on large datasets and fine-tuned with a dataset of 15,135 images from Kaggle, encompassing ALL-positive and ALL-negative samples. Image preprocessing techniques such as normalization, noise reduction, and segmentation were applied to enhance data quality. Results: The proposed TL model achieved a high training accuracy, indicating effective learning from the provided data. At epoch 19, the model's validation accuracy reached 97.75%, demonstrating strong generalization capabilities. The confusion matrix analysis showed high true positive and true negative rates, with minimal false positives and false negatives, underscoring the model's precision and recall. Conclusion: The use of TL with EfficientNetB3 significantly enhances the accuracy and reliability of detecting immature ALL blasts in microscopic images. This approach addresses the challenges posed by limited labeled data and image quality inconsistencies, providing a robust tool for improving leukemia diagnostics. The findings suggest that TL models can be instrumental in advancing clinical decision-making and patient outcomes in ALL treatment.
Multi-stage AI analysis system to support prostate cancer diagnostic imaging
An Artificial Intelligence (AI) system was developed to support interpretation of pre-biopsy prostate multiparametric MRI (mpMRI), aiming to improve patient selection for biopsy, biopsy target identification, and productivity of segmentation and reporting in the prostate cancer diagnostic pathway.
For segmentation, the system achieved a 92% average Dice score for prostate gland segmentation on held-out test cases from the PROMISE12 dataset (10 patients).
For biopsy assessment, the system identified patients with Gleason ≥3+4 clinically significant prostate cancer (csPCa) with sensitivity 93% (95% CI 82-100%), specificity 76% (64-87%), NPV 95% (88-100%), and AUC 0.92 (0.84-0.98), using biparametric MRI (bpMRI) data from the combined PROSTATEx development validation and test sets (80 patients). Performance on the held-out PROSTATEx test set (40 patients) was higher. Radiologists in major studies achieved 93% per-patient sensitivity at specificities from 18-73%. Equivalent sensitivity is reported for comparable AI/CAD systems at specificities from 6%-42%.
For biopsy targeting, the system identified lesions containing csPCa in the PROSTATEx blinded test set (208 lesions, 140 patients) with AUC 0.84/0.85 with bpMRI/mpMRI data respectively.
The AI system shows promising performance compared to radiologists and the literature. Further development and evaluation with larger, multi-centre datasets is now planned to support regulatory approval of the system.
Defining the biological and clinical basis of radiomics: towards clinical imaging biomarkers
Radiotherapy is one of the main treatment modalities used for cancer. Nowadays, due to emerging artificial intelligence (AI) technologies, radiotherapy has become a broader field. This thesis investigated how AI can make the lives of doctors, physicists and researchers easier. It also showed that clinical routine tasks, such as quality assurance tests, can be automated. Researchers can reuse machine-readable data, while physicists can validate and improve novel treatment techniques such as proton therapy. These three pillars contribute to the improvement of patient care (personalised radiotherapy). In conclusion, this technological revolution requires a re-thinking of the original professional figures in radiotherapy and of the design of AI studies. The thesis concluded that radiotherapy professionals and researchers can improve their ability to perform tasks with AI as a supplementary helping tool.
Generative models improve radiomics: reproducibility and performance in low dose CTs
English summary
Along with the increasing demand for low dose CT in clinical practice, low dose CT radiomics has shown its potential to provide clinical decision support in oncology. As a trade-off of the low radiation exposure in low dose CT imaging, higher noise is present in these images. Noise in low dose CT decreases the texture information of the image, and the reproducibility and performance of CT radiomics. One potential solution worth exploring for improving the reproducibility and performance of radiomics based on low dose CT is denoising the images before extracting radiomic features. As the state-of-the-art method for low dose CT denoising, generative models have been widely used in denoising practice. This thesis investigated the possibility of using generative models to enhance the image quality of low dose CTs and improve radiomics reproducibility and performance.
In the first research chapter (Chapter 2) of this thesis, we investigate the benefits of shortcuts in encoder-decoder networks for CT denoising. An encoder-decoder network (EDN) is an important architecture for the generator in generative models, and this chapter provides some guidelines to help us design generative models. Results showed that over half of the shortcuts are necessary for CT denoising. However, the network should keep sparse connections between the encoder and decoder. Moreover, deeper shortcuts have a higher priority for removal in favor of keeping sparse connections.
Paired training datasets are needed for training most generative models. However, collecting these kinds of datasets is difficult and time-consuming. To investigate the effect of generative models in improving low dose CT radiomics reproducibility (Chapter 3), two generative models, a Conditional Generative Adversarial Network (CGAN) and an EDN, were trained on paired simulated low-high dose CT images. The trained models were applied to simulated noisy CT images and real low dose CT images. Results showed that denoising using EDNs and CGANs can improve the reproducibility of radiomic features from noisy CTs (including simulated data and real low dose CTs).
To test the improvement of enhanced low dose CT radiomics in real applications more comprehensively, low dose CT radiomics was applied to a new application (Chapter 4). The objective of this application is to develop a lung cancer classification model at the subject (patient) level from multiple examined nodules, without the need for specific expert findings reported at the level of each individual nodule. Lung cancer classification was regarded as a multiple instance learning (MIL) problem: CT radiomics were used as biomarkers to extract information from each nodule, and deep attention-based MIL was used as the classification algorithm at the patient level. Results showed that the proposed method achieves the best performance in lung cancer classification compared with other MIL methods and that the introduced attention mechanism can increase the interpretability of results.
To comprehensively investigate the improvements of generative models for CT radiomics performance in real applications, pre-trained generative models were applied to multiple real low dose CT datasets without fine-tuning (Chapter 5). The improved radiomic features were applied to multiple radiomics-related applications: tumor pre-treatment survival prediction and deep attention-based MIL lung cancer diagnosis. The results showed that generative models can improve low dose CT radiomics performance.
To investigate the possibility of using unpaired real low-high dose CT images to establish a denoiser, and of using the thus-trained denoiser to enhance radiomics reproducibility and performance, a Cycle GAN was adopted as the testing model (Chapter 6). The Cycle GAN was trained on paired simulated datasets (for a comparison study with EDN and CGAN) and on unpaired real datasets. The trained models were applied to simulated noisy CT images and real low dose CT images to test the improvement in radiomics reproducibility and performance. The results showed that Cycle GANs trained on both simulated and real data can improve radiomics reproducibility and performance in low dose CT and achieve results similar to CGANs and EDNs.
Finally, the discussion section of this thesis (Chapter 7) summarized the barriers that prevent generative models from being applied to real low dose CT radiomics and proposed possible solutions for these barriers. Moreover, the discussion mentioned other possible methods to improve low dose CT radiomics performance.
DETECTION OF PNEUMONIA IN COVID-19 PATIENTS BASED ON CHEST X-RAY IMAGES USING THE TRANSFER LEARNING METHOD
Suhaeri, Suhaeri
2022Journal Article, cited 0 times
MIDRC-RICORD-1C
ABSTRACT The detection of pneumonia in COVID-19 patients plays an important role because it can be used to determine the appropriate treatment for patients, helping to ensure that patients receive the highest quality of care possible. A machine learning-based detection method using chest X-ray images is one approach that can be taken; however, developing a classification model with a high level of accuracy is currently difficult, because chest X-ray images have a complicated structure that simple machine learning models cannot capture. The purpose of this research is to evaluate the performance of the transfer learning approach in determining whether the chest X-ray image of a COVID-19 patient contains a pneumonia infection. The findings indicate that, by utilizing transfer learning architectures, an accuracy of approximately 86.36% can be achieved. Keywords: Pneumonia, Detection, Transfer Learning, Covid-19.
Detection of Tumor Slice in Brain Magnetic Resonance Images by Feature Optimized Transfer Learning
Celik, Salih
Kasim, Ömer
Aksaray University Journal of Science and Engineering2020Journal Article, cited 0 times
Website
REMBRANDT
RIDER NEURO MRI
Brain-Tumor-Progression
radiomic features
Improving Classification with CNNs using Wavelet Pooling with Nesterov-Accelerated Adam
Zhou, Wenjin
Rossetto, Allison
2019Conference Proceedings, cited 0 times
LIDC-IDRI
Computed Tomography (CT)
Convolutional Neural Network (CNN)
Wavelet Analysis
machine learning
Wavelet pooling methods can improve the classification accuracy of Convolutional Neural Networks (CNNs), and combining wavelet pooling with the Nesterov-accelerated Adam (NAdam) gradient calculation method can further improve the accuracy of the CNN. In this work we have implemented wavelet pooling with NAdam using both a Haar wavelet (WavPool-NH) and a Shannon wavelet (WavPool-NS). The WavPool-NH and WavPool-NS methods are the most accurate of the methods we considered for the MNIST and LIDC-IDRI lung tumor datasets. The WavPool-NH and WavPool-NS implementations have accuracies of 95.92% and 95.52%, respectively, on the LIDC-IDRI dataset. This is an improvement over the 92.93% accuracy obtained on this dataset with the max pooling method. The WavPool methods also avoid overfitting, which is a concern with max pooling. We also found WavPool performed fairly well on the CIFAR-10 dataset; however, overfitting was an issue with all the methods we considered. Wavelet pooling, especially when combined with an adaptive gradient and wavelets chosen specifically for the data, has the potential to outperform current methods.
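The core of Haar wavelet pooling is a single-level 2D Haar transform in which only the low-low (approximation) subband is kept as the pooled map; a minimal NumPy sketch of that step follows (the full WavPool layer and training loop are not reproduced here). The NAdam optimizer itself is available off the shelf, for example as tf.keras.optimizers.Nadam in Keras.

import numpy as np

def haar_wavelet_pool(x):
    # Single-level 2D Haar decomposition; keep only the low-low subband,
    # halving each spatial dimension (x is an (H, W) map with even H, W).
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    return (a + b + c + d) / 2.0   # orthonormal Haar approximation coefficients

feature_map = np.random.rand(8, 8)
print(haar_wavelet_pool(feature_map).shape)   # (4, 4), like 2x2 pooling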
Classification of Acute Lymphoblastic Leukemia based on White Blood Cell Images using InceptionV3 Model
Rizki Firdaus, Mulya
Ema, Utami
Dhani, Ariatmanto
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi)2023Journal Article, cited 0 times
C-NMC 2019
Convolutional Neural Network (CNN)
Acute lymphoblastic leukemia (ALL)
Computer Aided Detection (CADe)
Acute lymphoblastic leukemia (ALL) is the most common form of leukemia that occurs in children. Detection of ALL through white blood cell image analysis can help with the prognosis and appropriate treatment. In this study, the author proposes an approach to classifying ALL based on white blood cell images using a convolutional neural network (CNN) model called InceptionV3. The dataset used in this research consists of white blood cell images collected from patients with ALL and healthy individuals. These images were obtained from The Cancer Imaging Archive (TCIA), which is a service that stores large-scale cancer medical images available to the public. During the evaluation phase, the author used training data evaluation metrics such as accuracy and loss to measure the model's performance. The research results show that the InceptionV3 model is capable of classifying white blood cell images with a high level of accuracy. This model achieves an average ALL recognition accuracy of 0.9896 with a loss of 0.031. The use of CNN models such as InceptionV3 in medical image analysis has the potential to improve the efficiency and precision of image-based disease diagnosis.
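A minimal Keras sketch of the kind of InceptionV3 transfer-learning setup this abstract describes (not the authors' exact configuration; the input size, frozen backbone, dropout rate, and binary head are assumptions):

import tensorflow as tf

# Pre-trained InceptionV3 backbone with a new binary head (ALL vs. normal).
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False                          # freeze the ImageNet features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)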
IMAGE FUSION BASED LUNG NODULE DETECTION USING STRUCTURAL SIMILARITY AND MAX RULE
Mohana, P
Venkatesan, P
INTERNATIONAL JOURNAL OF ADVANCES IN SIGNAL AND IMAGE SCIENCES2019Journal Article, cited 0 times
Website
RIDER Lung CT
PET-CT
image fusion
Registration
Computer Assisted Detection (CAD)
Uncontrolled cell growth in the lungs is the main cause of lung cancer, which reduces the ability to breathe. In this study, fusion of Computed Tomography (CT) and Positron Emission Tomography (PET) lung images using their structural similarity is presented. The fused image carries more information than the individual CT and PET lung images, which helps radiologists make decisions quickly. Initially, the CT and PET images are divided into blocks of predefined size in an overlapping manner. The structural similarity between each pair of CT and PET blocks is computed for fusion. Image fusion is performed using a combination of structural similarity and the MAX rule: if the structural similarity between a CT and PET block is greater than a particular threshold, the MAX rule is applied; otherwise the pixel intensities of the CT image are used. A simple thresholding approach is employed to detect lung nodules from the fused image. The qualitative analyses show that the fusion approach provides more information and accurate detection of lung nodules.
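A simplified sketch of the block-wise SSIM/MAX-rule fusion described above, using non-overlapping blocks for brevity (the paper uses overlapping blocks, and the block size and threshold here are assumed values):

import numpy as np
from skimage.metrics import structural_similarity

def fuse_ct_pet(ct, pet, block=16, thresh=0.5):
    # Block-wise fusion: MAX rule where blocks are structurally similar,
    # otherwise keep the CT block.
    fused = ct.astype(np.float64).copy()
    pet = pet.astype(np.float64)
    for i in range(0, ct.shape[0] - block + 1, block):
        for j in range(0, ct.shape[1] - block + 1, block):
            c = fused[i:i + block, j:j + block]
            p = pet[i:i + block, j:j + block]
            rng = float(max(np.ptp(c), np.ptp(p), 1e-6))
            if structural_similarity(c, p, data_range=rng) > thresh:
                fused[i:i + block, j:j + block] = np.maximum(c, p)  # MAX rule
    return fused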
Convolutional Neural Network featuring VGG-16 Model for Glioma Classification
Agus, Minarno Eko
Bagas, Sasongko Yoni
Yuda, Munarko
Hanung, Nugroho Adi
Ibrahim, Zaidah
JOIV : International Journal on Informatics Visualization2022Journal Article, cited 0 times
Website
REMBRANDT
Magnetic Resonance Imaging (MRI)
VGG-16 Convolutional Neural Network
Machine Learning
BRAIN
Computer Aided Detection (CADe)
Magnetic Resonance Imaging (MRI) is a body sensing technique that can produce detailed images of the condition of organs and tissues. Specifically for brain tumors, the resulting images can be analyzed using image detection techniques so that tumor stages can be classified automatically. Detection of brain tumors requires a high level of accuracy because it is related to the effectiveness of medical treatment and patient safety. So far, the Convolutional Neural Network (CNN), or its combination with GA, has given good results. For this reason, in this study we used a similar method but with a variant of the VGG-16 architecture. The VGG-16 variant adds 16 layers and modifies the dropout layer (using softmax activation) to reduce overfitting and avoid using a large number of hyperparameters. We also experimented with augmentation techniques to compensate for data limitations. The experiments used The Cancer Imaging Archive (TCIA) - The Repository of Molecular Brain Neoplasia Data (REMBRANDT), which contains 520 MRI images of 130 patients with different ailments, grades, races, and ages. The tumor type was glioma, and the images were divided into grades II, III, and IV, with 226, 101, and 193 images, respectively. The data were divided 68% and 32% for training and testing purposes. We found that this VGG-16 variant was more effective for brain tumor image classification, with an accuracy of up to 100%.
An Integrated Machine Learning Framework Identifies Prognostic Gene Pair Biomarkers Associated with Programmed Cell Death Modalities in Clear Cell Renal Cell Carcinoma
Chen, B.
Zhou, M.
Guo, L.
Huang, H.
Sun, X.
Peng, Z.
Wu, D.
Chen, W.
Front Biosci (Landmark Ed)2024Journal Article, cited 0 times
TCGA-KIRC
Humans
*Carcinoma
Renal Cell/genetics
Prognosis
Apoptosis
Machine Learning
*Kidney Neoplasms/genetics
Biomarkers
Prss23
Clear cell renal cell carcinoma (ccRCC)
programmed cell death
Radiomics
single-cell RNA-seq
BACKGROUND: Clear cell renal cell carcinoma (ccRCC) is a common and lethal urological malignancy for which there are no effective personalized therapeutic strategies. Programmed cell death (PCD) patterns have emerged as critical determinants of clinical prognosis and immunotherapy responses. However, the actual clinical relevance of PCD processes in ccRCC is still poorly understood. METHODS: We screened for PCD-related gene pairs through single-sample gene set enrichment analysis (ssGSEA), consensus cluster analysis, and univariate Cox regression analysis. A novel machine learning framework incorporating 12 algorithms and 113 unique combinations was used to develop the cell death-related gene pair score (CDRGPS). Additionally, a radiomic score (Rad_Score) derived from computed tomography (CT) image features was used to classify CDRGPS status as high or low. Finally, we conclusively verified the function of PRSS23 in ccRCC. RESULTS: The CDRGPS was developed through an integrated machine learning approach that leveraged 113 algorithm combinations. CDRGPS represents an independent prognostic biomarker for overall survival and demonstrated consistent performance between training and external validation cohorts. Moreover, CDRGPS showed better prognostic accuracy compared to seven previously published cell death-related signatures. In addition, patients classified as high-risk by CDRGPS exhibited increased responsiveness to tyrosine kinase inhibitors (TKIs), mammalian Target of Rapamycin (mTOR) inhibitors, and immunotherapy. The Rad_Score demonstrated excellent discrimination for predicting high versus low CDRGPS status, with an area under the curve (AUC) value of 0.813 in the Cancer Imaging Archive (TCIA) database. PRSS23 was identified as a significant factor in the metastasis and immune response of ccRCC, a finding validated by in vitro experiments. CONCLUSIONS: CDRGPS is a robust and non-invasive tool that has the potential to improve clinical outcomes and enable personalized medicine in ccRCC patients.
Domain-Based Analysis of Colon Polyp in CT Colonography Using Image-Processing Techniques
Manjunath, K N
Siddalingaswamy, PC
Prabhu, GK
Asian Pacific Journal of Cancer Prevention2019Journal Article, cited 0 times
Computer-aided detection and diagnosis
Segmentation
shape analysis
colon polyp
COLON
Background: The purpose of this research was to improve polyp detection accuracy in CT Colonography (CTC) through effective colon segmentation, removal of tagged fecal matter through Electronic Cleansing (EC), and measurement of smaller polyps. Methods: An improved method of boundary-based semi-automatic colon segmentation with knowledge of colon distension, an adaptive multistep method for the virtual cleansing of the segmented colon based on knowledge of Hounsfield Units, and an automated method of smaller polyp measurement using a skeletonization technique have been implemented. Results: The techniques were evaluated on 40 CTC datasets. The segmentation method was able to delineate the colon wall accurately. The submerged colonic structures were preserved without soft tissue erosion, pseudo-enhanced voxels were corrected, and the air-contrast layer was removed without losing the adjacent tissues. Smaller polyps were validated qualitatively and quantitatively. Segmented colons were validated through volumetric overlap computation, and an accuracy of 95.826±0.6854% was achieved. In polyp measurement, the paired t-test method was applied to compare the difference with ground truth, and at α=5%, t=0.9937 and p=0.098 were achieved. Statistical values of TPR=90%, TNR=82.3% and accuracy=88.31% were achieved. Conclusion: An automated system of polyp measurement has been developed, starting from colon segmentation, to improve existing CTC solutions. The domain-based analysis of polyps gave good results. A prototype software, which can be used as a low-cost polyp diagnosis tool, has been developed.
Data Integrity of Radiology Images Over an Insecure Network Using AES Technique
Prabhu, Pavithra
K N, Manjunath
Rajarama, Chitra
Kulkarni, Anjali
Kurady, Rajendra
Asian Pacific Journal of Cancer Prevention2021Journal Article, cited 0 times
CT COLONOGRAPHY
BACKGROUND: While medical images are being transmitted in radiology information systems, an adversary can break the CIA (Confidentiality, Integrity, and Availability) triad of information security. The objective of the study was to transmit the complete set of image objects in a dataset without violating data integrity.
METHODS: In this paper a hybrid cryptographic technique is proposed which combines prime details from the patient dataset (a stack of axial 2D images) with the Advanced Encryption Standard (AES) method. The steps include a) creating an artificial X-ray image (DRR) from the 3D volume, b) dividing the DRR image equally into four regions in the x and y directions, c) applying a zig-zag pattern to each quadrant, and d) encrypting each quadrant in block cipher mode using the AES algorithm. After dataset transmission, the DRR image was regenerated at the receiver and each of the deciphered (transmitted) blocks was compared using the histogram technique.
RESULTS: The technique was tested on CT and MRI scans of sixty datasets. The image injection techniques, such as adding and deleting an image from the dataset and modifying the image pixels, were tested. The results were validated statistically using mean square error and histogram matching techniques.
CONCLUSION: The combination of the DRR and the AES technique has ensured the secured transmission of the entire dataset and not an individual image.
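A minimal sketch of the AES step on one quadrant, assuming the pycryptodome library and omitting the DRR generation and zig-zag reordering stages described in the paper:

import numpy as np
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

def encrypt_quadrant(quadrant, key):
    # Encrypt one image quadrant (uint8 array) with AES in CBC block-cipher mode.
    cipher = AES.new(key, AES.MODE_CBC)
    ct = cipher.encrypt(pad(quadrant.tobytes(), AES.block_size))
    return cipher.iv, ct

def decrypt_quadrant(iv, ct, key, shape):
    cipher = AES.new(key, AES.MODE_CBC, iv=iv)
    data = unpad(cipher.decrypt(ct), AES.block_size)
    return np.frombuffer(data, dtype=np.uint8).reshape(shape)

key = get_random_bytes(16)                                   # AES-128 key
drr = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in DRR
h, w = drr.shape
quadrants = [drr[:h//2, :w//2], drr[:h//2, w//2:],
             drr[h//2:, :w//2], drr[h//2:, w//2:]]
payload = [encrypt_quadrant(q, key) for q in quadrants]
restored = decrypt_quadrant(*payload[0], key, (h//2, w//2))
assert np.array_equal(restored, quadrants[0])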
Validation of Segmented Brain Tumor from MRI Images Using 3D Printing
Nayak, U. A.
Balachandra, M.
K, N. M.
Kurady, R.
Asian Pac J Cancer Prev2021Journal Article, cited 0 times
Website
RIDER NEURO MRI
Magnetic Resonance Imaging (MRI)
Segmentation
3D printing
Otsu's thresholding method
BraTS
BACKGROUND: Early diagnosis of a brain tumor is important for improving treatment possibilities. Manually segmenting the tumor from volumetric data is time-consuming, and visualization of the tumor is rather challenging. METHODS: This paper proposes a user-guided brain tumour segmentation from MRI (Magnetic Resonance Imaging) images, developed using the Medical Imaging Interaction Toolkit (MITK), and printing the segmented object using a 3D printer for tumour quantification. The proposed method includes segmenting the tumour interactively using the connected threshold method, then printing the physical object from the segmented volume of interest. The distance between two voxels was measured using electronic calipers on the 3D volume in a specific direction, and the same distance was then measured in the same direction on the 3D-printed object. RESULTS: The technique was tested with n=5 samples (20 readings) of brain MRI images from the RIDER Neuro MRI dataset of the National Cancer Institute. MITK provides various tools that enable image visualization, registration, and contouring. We were able to achieve the same measurements using both approaches, and this was tested statistically with the paired t-test method. Through this, and the observers' opinion, the accuracy of the segmentation was proved. CONCLUSION: That the difference between the tumor measurements obtained with the electronic calipers and with the 3D-printed object equates to zero proves that the segmentation technique is accurate. This helps to delineate the tumor more accurately during radiotherapy.
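The paired t-test used in this validation can be reproduced in a few lines with SciPy; the measurement values below are illustrative only, not the study's data:

from scipy import stats

# Paired measurements (mm): electronic calipers on the 3D volume vs. the same
# distances on the 3D-printed object.
volume_mm  = [34.2, 28.7, 41.5, 22.9, 36.8]
printed_mm = [34.0, 28.9, 41.3, 23.1, 36.5]

t, p = stats.ttest_rel(volume_mm, printed_mm)
print(f"t = {t:.4f}, p = {p:.4f}")   # p > 0.05: no significant difference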
A Systematic Approach of Data Collection and Analysis in Medical Imaging Research
Manjunath, K.
Manuel, C.
Hegde, G.
Kulkarni, A.
Kurady, R.
K, M.
Asian Pac J Cancer Prev2021Journal Article, cited 0 times
Website
CT COLONOGRAPHY
Segmentation
Classification
Computer Aided Detection (CADe)
BACKGROUND: Systematically obtaining the right image dataset for medical image research is a tedious task. Anatomy segmentation is the key step before extracting radiomic features from these images. OBJECTIVE: The purpose of the study was to segment the 3D colon from CT images and to measure smaller polyps using image processing techniques. This requires a huge number of samples for statistical analysis. Our objective was to systematically classify and arrange the dataset based on the parameters of interest so that empirical testing becomes easier in medical image research. MATERIALS AND METHODS: This paper discusses a systematic approach to data collection and analysis before using the data for empirical testing. In this research the images were taken from the National Cancer Institute (NCI). TCIA, from NCI, has a vast collection of diagnostic-quality images for the research community. These datasets were classified before empirical testing of the research objectives. The images in the TCIA collection were acquired as per the standard protocol defined by the American College of Radiology. Patients in the age group of 50-80 years were involved in various (multicenter) clinical trials. The dataset collection has more than 10 billion DICOM images of various anatomies. In this study, the number of samples considered for empirical testing was n=300, acquired from both supine and prone positions. The datasets were classified based on the parameters of interest; the classified dataset makes dataset selection easier during empirical testing. The images were validated for data completeness as per the DICOM standard, 2020b version. A case study of a CT Colonography dataset is discussed. CONCLUSION: With this systematic approach to data collection and classification, analysis will become easier during empirical testing.
Quality Assurance of Image Registration Using Combinatorial Rigid Registration Optimization (CORRO)
Yorke, Afua A.
McDonald, Gary C.
Solis, David
Guerrero, Thomas
Cancer Research and Cellular Therapeutics2021Journal Article, cited 0 times
Pelvic-Reference-Data
Purpose: Expert-selected landmark points on clinical image pairs provide a basis for rigid registration validation. Using combinatorial rigid registration optimization (CORRO), we provide a statistically characterized reference dataset for image registration of the pelvis by estimating the optimal registration.
Materials and Methods: Landmarks for each CT/CBCT image pair for 58 cases were identified. From the landmark pairs, combination subsets of k landmark pairs were generated without repeats, forming k-sets for k=4, 8, and 12. A rigid registration between the image pairs was computed for each k-combination set (2,000-8,000,000 combinations). The mean and standard deviation of the registrations were used as the final registration for each image pair. Joint entropy was used to validate the output results.
Results: An average of 154 (range: 91-212) landmark pairs were selected for each CT/CBCT image pair. The mean standard deviation of the registration output decreased as the k-size increased for all cases. In general, the joint entropy evaluated was found to be lower than the results from commercially available software. Of all 58 cases, 58.3% of the k=4, 15% of the k=8, and 18.3% of the k=12 sets resulted in better registration using CORRO, as compared to 8.3% from commercial registration software. The minimum joint entropy was determined for one case and found to exist at the estimated registration mean, in agreement with the CORRO algorithm.
Conclusion: The results demonstrate that CORRO works even in the extreme case of the pelvic anatomy, where the CBCT suffers from reduced quality due to increased noise levels. The estimated optimal registration using CORRO was found to be better than commercially available software for all k-sets tested. Additionally, the k-set of 4 resulted in the overall best outcomes when compared to k=8 and 12, which is anticipated because k=8 and 12 are more likely to include combinations that reduce the accuracy of the registration.
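A minimal sketch of the CORRO idea: estimate a least-squares rigid transform (here via the Kabsch algorithm) from many k-subsets of landmark pairs and pool the results. This is not the authors' implementation; random subsets stand in for exhaustive combinations, and pooling rotation matrices element-wise is a simplification:

import numpy as np

def rigid_from_landmarks(P, Q):
    # Least-squares rigid transform (Kabsch) mapping P to Q: Q ~ R @ P + t.
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

def corro_estimate(ct_pts, cbct_pts, k=4, n_subsets=2000, seed=0):
    # Pool rigid registrations over many k-subsets of the landmark pairs
    # and report their mean and standard deviation.
    rng = np.random.default_rng(seed)
    params = []
    for _ in range(n_subsets):
        idx = rng.choice(len(ct_pts), size=k, replace=False)
        R, t = rigid_from_landmarks(ct_pts[idx], cbct_pts[idx])
        params.append(np.concatenate([R.ravel(), t]))
    params = np.array(params)
    return params.mean(axis=0), params.std(axis=0)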
An Efficient Framework for Accurate Arterial Input Selection in DSC-MRI of Glioma Brain Tumors
Rahimzadeh, H
Kazerooni, A Fathi
Deevband, MR
Rad, H Saligheh
Journal of Biomedical Physics and Engineering2018Journal Article, cited 0 times
Website
DSC-MRI
glioma
TCGA
arterial input function (AIF)
Algorithm Development
Predicting Lung Cancer Patients’ Survival Time via Logistic Regression-based Models in a Quantitative Radiomic Framework
Shayesteh, S. P.
Shiri, I.
Karami, A. H.
Hashemian, R.
Kooranifar, S.
Ghaznavi, H.
Shakeri-Zadeh, A.
Journal of Biomedical Physics and Engineering2019Journal Article, cited 0 times
LungCT-Diagnosis
Algorithm Development
Classification
Radiomics
Objectives: The aim of this study was to predict the survival time of lung cancer patients using the advantages of both radiomics and logistic regression-based classification models.
Material and Methods: Fifty-nine patients with primary lung adenocarcinoma were included in this retrospective study, and pre-treatment contrast-enhanced CT images were acquired. Patients who lived more than 2 years were assigned to the 'Alive' class and otherwise to the 'Dead' class. In our proposed quantitative radiomic framework, we first extracted the associated regions of each lung lesion from the pre-treatment CT images of each patient via the grow cut segmentation algorithm. Then, 40 radiomic features were extracted from the segmented lung lesions. In order to enhance the generalizability of the classification models, mutual information-based feature selection was applied to each feature vector. We investigated the performance of six logistic regression-based classification models with respect to evaluation measures such as F1 score and accuracy.
Results: It was observed that the mutual information feature selection method can help the classifiers achieve better predictive results. In our study, the Logistic Regression (LR) and Dual Coordinate Descent method for Logistic Regression (DCD-LR) models achieved the best results, indicating that these classification models have strong potential for classifying the more important class (i.e., the 'Alive' class).
Conclusion: The proposed quantitative radiomic framework yielded promising results, which can guide physicians to make better and more precise decisions and increase the chance of treatment success.
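A scikit-learn sketch of the mutual information feature selection plus logistic regression pipeline described above (not the authors' code; the number of kept features and the placeholder X/y arrays are assumptions):

from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_patients, 40) radiomic feature matrix; y: 1 = 'Alive' (>2 yrs), 0 = 'Dead'.
pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),  # keep the most informative features
    LogisticRegression(max_iter=1000),
)
print("mean F1:", cross_val_score(pipe, X, y, cv=5, scoring="f1").mean())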
Prediction of Human Papillomavirus (HPV) Status in Oropharyngeal Squamous Cell Carcinoma Based on Radiomics and Machine Learning Algorithms: A Multi-Cohort Study
Zhinan, Liang
Wei, Zhang
Yudi, You
Yabing, Dong
Yuanzhe, Xiao
Xiulan, Liu
Systematic Reviews in Pharmacy2022Journal Article, cited 0 times
Website
OPC-Radiomics
Head-Neck-Radiomics-HN1
Classification
Image analysis
Background: Human Papillomavirus (HPV) status has significant implications for prognostic evaluation and clinical decision-making for Oropharyngeal Squamous Cell Carcinoma patients. As a novel method, radiomics provides a possibility for non-invasive diagnosis. The aim of this study was to examine whether Computed Tomography (CT) radiomics and machine learning classifiers can effectively predict Human Papillomavirus status, validated on external data, in patients with Oropharyngeal Squamous Cell Carcinoma, based on imaging data from multi-institutional and multi-national cohorts.
Materials and methods: 651 patients from three multi-institutional and multi-national cohorts were collected in this retrospective study: the OPC-Radiomics cohort (n=497), the MAASTRO cohort (n=74), and the SNPH cohort (n=80). The OPC-Radiomics cohort was randomized into a training cohort and a validation cohort at a ratio of 2:1. The MAASTRO and SNPH cohorts were used as independent external testing cohorts. 1316 quantitative features were extracted from the CT images of primary tumors. After feature selection using Logistic Regression and Recursive Feature Elimination algorithms, 10 different machine-learning classifiers were trained and compared across the cohorts.
Results: Comparing the 10 machine-learning classifiers, we found that the best performance was achieved with a Random Forest-based model, with Areas Under the Receiver Operating Characteristic (ROC) Curve (AUCs) of 0.97, 0.72, 0.63, and 0.78 in the training cohort, validation cohort, testing cohort 1 (MAASTRO), and testing cohort 2 (SNPH), respectively.
Conclusion: The Random Forest-based radiomics model was effective in differentiating Human Papillomavirus status in Oropharyngeal Squamous Cell Carcinoma in a multi-national population, raising the possibility that this non-invasive method could be widely applied in clinical practice.
Automated Systems of High-Productive Identification of Image Objects by Geometric Features
Poplavskyi, Oleksandr
Bondar, Olena
Pavlov, Sergiy
Poplavska, Anna
Applied Geometry and Engineering Graphics2020Journal Article, cited 0 times
CPTAC-GBM
Machine Learning
Magnetic Resonance Imaging (MRI)
The article substantiates the feasibility and practical value of using a specific simulation modeling methodology that provides for digital processing based on neural network technology. A brain tumor is a serious disease, and the number of people who die from brain tumors remains high despite significant progress in treatment. This research presents in detail a developed algorithm for high-performance identification of objects (early detection and identification of tumors) in MRI images by geometric features. The algorithm, built on image pre-processing, analyzes the data array using a convolutional neural network (CNN) and recognizes pathologies in the images. The obtained algorithm is a step towards the creation of autonomous automatic identification and decision-making systems for the diagnosis of malignant tumors and other neoplasms in the brain by geometric features.
Automatic Finding of Brain-Tumour Group Using CNN Segmentation and Moth-Flame-Algorithm, Selected Deep and Handcrafted Features
Naimi, Imad Saud Al
Junid, Syed Alwee Aljunid Syed
Ahmad, Muhammad Imran
Manic, K. Suresh
2024Journal Article, cited 0 times
BraTS 2015
TCGA-LGG
TCGA-GBM
The proliferation of abnormal cells in the brain causes brain tumors (BT), and early screening and treatment will reduce their severity in patients. Clinical-level BT screening is usually performed with Magnetic Resonance Imaging (MRI) due to its multi-modality nature. The overall aim of the study is to introduce, test and verify an advanced image processing technique, with algorithms to automatically extract tumour sections from brain MRI scans, facilitating improved accuracy. The research intends to devise a reliable framework for detecting the BT region in two-dimensional (2D) MRI slices and identifying its class with improved accuracy. The methodology of the devised framework comprises the phases of: (i) collection and resizing of images, (ii) implementation and segmentation with a Convolutional Neural Network (CNN), (iii) deep feature extraction, (iv) handcrafted feature extraction, (v) Moth-Flame-Algorithm (MFA)-supported feature reduction, and (vi) performance evaluation. This study utilized clinical-grade brain MRI from the BRATS and TCIA datasets for the investigation. The framework segments and detects glioma (low/high grade) and glioblastoma class BT. This work achieved a segmentation accuracy of over 98% with VGG-UNet and a classification accuracy of over 98% with the VGG16 scheme. The study confirms that the implemented framework is very efficient in detecting BT in MRI slices with/without the skull section.
Hybrid Optimized Learning for Lung Cancer Classification
Vidhya, R.
Mirnalinee, T. T.
2022Journal Article, cited 0 times
LIDC-IDRI
Computed tomography (CT) scan images can provide helpful diagnostic information regarding lung cancers, and many machine learning and deep learning algorithms have been formulated using CT input scan images to improve the diagnosis and treatment process. However, designing an accurate and intelligent system remains an open research challenge. This paper proposes a novel classification model that works on the principle of fused features and an optimized learning network. The proposed framework incorporates saliency maps as a first-tier segmentation, which is then fused with deep convolutional neural networks to improve the classification maps and eventually reduce the risk of overfitting. Furthermore, the proposed work replaces the traditional neural network with Ant-Lion Optimized Feed-Forward Layers (ALO_FFL) to obtain the best classification of cancers in lung CT scan images. The proposed algorithm has been implemented using Tensorflow 1.8 and the Keras API with Python 3.8. Extensive experiments are carried out using the LIDC-IDRI image datasets, and various performance metrics such as accuracy, sensitivity, specificity, precision and F1-score are calculated and analyzed. Simulation results show the proposed framework achieves 99.89% accuracy, 99.8% sensitivity, 99.76% specificity, 99.8% precision and a 99.88% F1-score. Finally, comparative analysis with other existing models demonstrates the excellence of the proposed framework.
Multistage Lung Cancer Detection and Prediction Using Deep Learning
Jawarkar, Jay
Solanki, Nishit
Vaishnav, Meet
Vichare, Harsh
Degadwala, Sheshang
International Journal of Scientific Research in Science, Engineering and Technology2021Journal Article, cited 0 times
Website
TCGA-LUAD
Machine Learning
K Nearest Neighbor (KNN)
Random forest classifier
LUNG
Radiomics
Lung cancer has long been one of the leading causes of cancer death worldwide, with more than a million people dying from it every year, and it has caused hardship that people have faced for a very long time. When a person develops lung cancer, erratic cells clump together to form a tumor. A malignant tumor is a collection of abnormal, proliferating cells that can invade and attack the tissue near them. Detecting the lung tumor region at an early stage has therefore become essential. At present, various systems based on trained models and machine learning methodologies are used for lung cancer imaging. For this, CT scan images are used to detect malignant lung nodules and to stage their growth. In this paper, we present a method for detecting lung cancer patients at an early stage. We consider the shape and texture features of CT scan images for classification. The classification is performed using several machine learning methodologies, and their outcomes are compared.
Keywords: Decision Tree, KNN, RF, DF, Machine Learning
Detection of Lung Nodules on CT Images based on the Convolutional Neural Network with Attention Mechanism
Lai, Khai Dinh
Nguyen, Thuy Thanh
Le, Thai Hoang
2021Journal Article, cited 0 times
LIDC-IDRI
The development of computer-aided diagnosis (CAD) systems for automatic lung nodule detection in thoracic computed tomography (CT) scans has been an active area of research in recent years. Lung Nodule Analysis 2016 (the LUNA16 challenge) encourages researchers to propose a variety of successful nodule detection algorithms based on two key stages: (1) candidate detection, and (2) false-positive reduction. In the scope of this paper, a new convolutional neural network (CNN) architecture is proposed to efficiently solve the second challenge of LUNA16. Specifically, we find that typical CNN models pay little attention to the characteristics of the input data; to address this constraint, we apply an attention mechanism: a technique to attach a Squeeze-and-Excitation block (SE-Block) after each convolution layer of the CNN to emphasize important feature maps related to the characteristics of the input image, forming Attention sub-Convnets. The new CNN architecture is constructed by connecting the Attention sub-Convnets. In addition, we analyze the choice between the triplet loss and softmax loss functions to boost the rating performance of the proposed CNN. From this study, we conclude that softmax loss should be selected during the CNN training phase and triplet loss for the testing phase. Our suggested CNN is used to minimize the number of redundant candidates in order to improve the efficiency of false-positive reduction on the LUNA database. The results obtained, in comparison with previous models, indicate the feasibility of the proposed model.
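A minimal Keras sketch of a Squeeze-and-Excitation block attached after a convolution layer, the mechanism from which the abstract builds its Attention sub-Convnets (the layer sizes and the reduction ratio are assumed values, not the paper's configuration):

import tensorflow as tf

def se_block(x, ratio=8):
    # Squeeze-and-Excitation: reweight feature maps by channel-wise attention.
    channels = x.shape[-1]
    s = tf.keras.layers.GlobalAveragePooling2D()(x)               # squeeze
    s = tf.keras.layers.Dense(channels // ratio, activation="relu")(s)
    s = tf.keras.layers.Dense(channels, activation="sigmoid")(s)  # excitation
    s = tf.keras.layers.Reshape((1, 1, channels))(s)
    return tf.keras.layers.Multiply()([x, s])                     # recalibrate

inputs = tf.keras.Input(shape=(32, 32, 1))
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = se_block(x)                        # attach SE after the convolution layer
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # nodule vs. not
model = tf.keras.Model(inputs, outputs)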
The prognostic value of CT radiomic features from primary tumours and pathological lymph nodes in head and neck cancer patients
Head and neck cancer (HNC) is responsible for about 0.83 million new cancer cases and 0.43 million cancer deaths worldwide every year. Around 30%-50% of patients with locally advanced HNC experience treatment failures, predominantly occurring at the site of the primary tumor, followed by regional failures and distant metastases. In order to optimize treatment strategy, the overall aim of this thesis is to identify the patients who are at high risk of treatment failure. We developed and externally validated a series of models on the different patterns of failure to predict the risk of local failures, regional failures, distant metastases and individual nodal failures in HNC patients. New types of radiomic features based on the CT image were included in our modelling analysis, and we showed for the first time that the radiomic features significantly improved the prognostic performance of models containing clinical factors. Our studies provide clinicians with new tools to predict the risk of treatment failures. This may support optimization of the treatment strategy for this disease and subsequently improve patient survival.
BME Frontiers2022Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Synthetic images
Algorithm Development
Model
MNI152
Magnetic Resonance Imaging (MRI)
Objective. Seven types of MRI artifacts, including acquisition and preprocessing errors, were simulated to test a machine learning brain tumor segmentation model for potential failure modes. Introduction. Real-world medical deployments of machine learning algorithms are less common than the number of medical research papers using machine learning. Part of the gap between the performance of models in research and deployment comes from a lack of hard test cases in the data used to train a model. Methods. These failure modes were simulated for a pretrained brain tumor segmentation model that utilizes standard MRI and used to evaluate the performance of the model under duress. These simulated MRI artifacts consisted of motion, susceptibility induced signal loss, aliasing, field inhomogeneity, sequence mislabeling, sequence misalignment, and skull stripping failures. Results. The artifact with the largest effect was the simplest, sequence mislabeling, though motion, field inhomogeneity, and sequence misalignment also caused significant performance decreases. The model was most susceptible to artifacts affecting the FLAIR (fluid attenuation inversion recovery) sequence. Conclusion. Overall, these simulated artifacts could be used to test other brain MRI models, but this approach could be used across medical imaging applications.
GRAPH-BASED SIGNAL PROCESSING TO CONVOLUTIONAL NEURAL NETWORKS FOR MEDICAL IMAGE SEGMENTATION
Le-Tien, Thuong
To, Thanh-Nha
Vo, Giang
SEATUC journal of science and engineering2022Journal Article, cited 0 times
TCGA-LGG
Graph Signal Processing
Graph Convolutional Neural Network
Deep Learning
Image Segmentation
Medical Image
Automatic medical image segmentation is normally a difficult task because medical images are complex in nature, and many researchers have therefore studied a variety of approaches to analyze patterns in images. Among them, applications of deep learning in medicine are a growing trend, especially Convolutional Neural Networks (CNNs) in the field of Computer Vision, yielding many remarkable results. In this paper, we propose a method that applies graph-based signal processing to a CNN architecture for medical image segmentation. In particular, the proposed architecture is based on graph convolution to extract features in the image, instead of the traditional convolution in DSP (Digital Signal Processing). The proposed solution is effective in learning neighboring links. We also introduce a back-propagation algorithm that optimizes the weights of the graph filter and finds the adjacency matrix that fits the training data. The network model is then applied to a dataset of medical images to help detect abnormal areas.
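A minimal NumPy sketch of one graph-convolution layer using the common symmetric-normalized propagation rule H = ReLU(D^-1/2 (A+I) D^-1/2 X W); the paper's exact graph filter and back-propagation scheme are not reproduced here, and the toy graph is invented:

import numpy as np

def gcn_layer(A, X, W):
    # One graph-convolution layer with self-loops and symmetric normalization.
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)  # ReLU

# Toy graph: 4 pixels/superpixels, 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = np.random.rand(4, 3)
W = np.random.rand(3, 2)
print(gcn_layer(A, X, W).shape)   # (4, 2)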
A Diagnostic Study of Content-Based Image Retrieval Technique for Studying the CT Images of Lung Nodules and Prediction of Lung Cancer as a Biometric Tool
Dixit, Rajeev
Kumar, Dr Pankaj
Ojha, Dr Shashank
International Journal of Electrical and Electronics Research2023Journal Article, cited 0 times
Website
LIDC-IDRI
Content based image retrieval (CBIR)
LUNG
Automatic detection
Content-Based Medical Image Retrieval (CBMIR) can be defined as a digital image search using the contents of the images. CBMIR plays a very important part in medical applications such as retrieving CT images and more accurately diagnosing aberrant lung tissues in CT images. The CBMIR method can aid radiotherapists in examining a patient's CT image by retrieving comparable pulmonary nodules more precisely using query nodules. Given a particular query nodule, the CBMIR system searches a large chest CT image database for comparable nodules. The prime aim of this research is to evaluate an end-to-end method for developing a CBIR system for lung cancer diagnosis.
Development and external validation of a deep learning-based computed tomography classification system for COVID-19
Kataoka, Yuki
Baba, Tomohisa
Ikenoue, Tatsuyoshi
Matsuoka, Yoshinori
Matsumoto, Junichi
Kumasawa, Junji
Tochitani, Kentaro
Funakoshi, Hiraku
Hosoda, Tomohiro
Kugimiya, Aiko
Shirano, Michinori
Hamabe, Fumiko
Iwata, Sachiyo
Kitamura, Yoshiro
Goto, Tsubasa
Hamaguchi, Shingo
Haraguchi, Takafumi
Yamamoto, Shungo
Sumikawa, Hiromitsu
Nishida, Koji
Nishida, Haruka
Ariyoshi, Koichi
Sugiura, Hiroaki
Nakagawa, Hidenori
Asaoka, Tomohiro
Yoshida, Naofumi
Oda, Rentaro
Koyama, Takashi
Iwai, Yui
Miyashita, Yoshihiro
Okazaki, Koya
Tanizawa, Kiminobu
Handa, Tomohiro
Kido, Shoji
Fukuma, Shingo
Tomiyama, Noriyuki
Hirai, Toyohiro
Ogura, Takashi
2022Journal Article, cited 0 times
CT Images in COVID-19
NSCLC-Radiomics
PleThora
BACKGROUND: We aimed to develop and externally validate a novel machine learning model that can classify CT image findings as positive or negative for SARS-CoV-2 reverse transcription polymerase chain reaction (RT-PCR).
METHODS: We used 2,928 images from a wide variety of case-control type data sources for the development and internal validation of the machine learning model. A total of 633 COVID-19 cases and 2,295 non-COVID-19 cases were included in the study. We randomly divided cases into training and tuning sets at a ratio of 8:2. For external validation, we used 893 images from 740 consecutive patients at 11 acute care hospitals suspected of having COVID-19 at the time of diagnosis. The dataset included 343 COVID-19 patients. The reference standard was RT-PCR.
RESULTS: In external validation, the sensitivity and specificity of the model were 0.869 and 0.432 at the low-level cutoff, and 0.724 and 0.721 at the high-level cutoff. The area under the receiver operating characteristic curve was 0.76.
CONCLUSIONS: Our machine learning model exhibited a high sensitivity in external validation datasets and may assist physicians to rule out COVID-19 diagnosis in a timely manner at emergency departments. Further studies are warranted to improve model specificity.
Journal of Student Research2021Journal Article, cited 0 times
Website
TCGA-GBM
Algorithm Development
Magnetic Resonance Imaging (MRI)
Pathomics
Radiomics
Digital pathology
Machine Learning
Computer Aided Diagnosis (CADx)
Cancer is the common name used to categorize a collection of diseases. In the United States, there were an estimated 1.8 million new cancer cases and 600,000 cancer deaths in 2020. Though it has been proven that an early diagnosis can significantly reduce cancer mortality, cancer screening is inaccessible to much of the world’s population. Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. A literature search with the Google Scholar and PubMed databases from January 2020 to June 2021 determined that currently, no machine learning model (n=0/417) has an accuracy of 90% or higher in diagnosing multiple cancers. We propose our model HOPE, the Heuristic Oncological Prognosis Evaluator, a transfer learning diagnostic tool for the screening of patients with common cancers. By applying this approach to magnetic resonance (MRI) and digital whole slide pathology images, HOPE 2.0 demonstrates an overall accuracy of 95.52% in classifying brain, breast, colorectal, and lung cancer. HOPE 2.0 is a unique state-of-the-art model, as it possesses the ability to analyze multiple types of image data (radiology and pathology) and has an accuracy higher than existing models. HOPE 2.0 may ultimately aid in accelerating the diagnosis of multiple cancer types, resulting in improved clinical outcomes compared to previous research that focused on singular cancer diagnosis.
Improved Predictive Sparse Decomposition Method with Densenet for Prediction of Lung Cancer
Mienye, Ibomoiye Domor
Sun, Yanxia
Wang, Zenghui
International Journal of Computing2020Journal Article, cited 0 times
Website
LUNG
Convolutional Neural Network (CNN)
Deep Learning
DenseNet
Computed Tomography (CT)
NSCLC Radiogenomics
Lung cancer is the second most common form of cancer in both men and women. It is responsible for at least 25% of all cancer-related deaths in the United States alone. Accurate and early diagnosis of this form of cancer can increase the rate of survival. Computed tomography (CT) imaging is one of the most accurate techniques for diagnosing the disease. In order to improve the classification accuracy of pulmonary lesions indicating lung cancer, this paper presents an improved method for training a densely connected convolutional network (DenseNet). The optimized setting ensures that code prediction error and reconstruction error within hidden layers are simultaneously minimized. To achieve this and improve the classification accuracy of the DenseNet, we propose an improved predictive sparse decomposition (PSD) approach for extracting sparse features from the medical images. The sparse decomposition is achieved by using a linear combination of basis functions over the L2 norm. The effect of dropout and hidden layer expansion on the classification accuracy of the DenseNet is also investigated. CT scans of human lung samples are obtained from The Cancer Imaging Archive (TCIA) hosted by the University of Arkansas for Medical Sciences (UAMS). The proposed method outperforms seven other neural network architectures and machine learning algorithms with a classification accuracy of 95%.
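As a rough illustration of the sparse-feature idea in this abstract, the sketch below uses scikit-learn's DictionaryLearning to learn a dictionary and sparse codes for flattened CT patches; it stands in for the paper's predictive sparse decomposition, and all sizes and settings are assumptions.

import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
patches = rng.random((100, 64))            # 100 flattened 8x8 CT patches (simulated)

dl = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20, random_state=0)
codes = dl.fit_transform(patches)          # sparse codes, i.e., the extracted features
recon = codes @ dl.components_             # linear combination of learned basis functions
print(codes.shape, float(np.mean((patches - recon) ** 2)))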
Lung cancer classification with Convolutional Neural Network Architectures
Mohammed, Shivan H. M.
Çinar, Ahmet
Qubahan Academic Journal2021Journal Article, cited 0 times
Website
One of the most common malignant tumors in the world today is lung cancer, and it is the primary cause of death from cancer. With the continuous advancement of urbanization and industrialization, the problem of air pollution has become more and more serious. The best treatment window for lung cancer is the early stage; however, early-stage lung cancer often presents no clinical symptoms and is difficult to detect. In this paper, lung nodule classification has been performed; the CT image data used are from the SPIE-AAPM Lung dataset. In recent years, deep learning (DL) has been a popular approach to the classification process. One DL approach used here is transfer learning (TL), which eliminates the cost of training from scratch and allows deep networks to be trained with small amounts of data. Nowadays, researchers are trying various deep learning techniques to improve the efficiency of computer-aided diagnosis (CAD) with computed tomography in lung cancer screening. In this work, we implemented the pre-trained CNN models AlexNet, ResNet18, GoogleNet, and ResNet50. These networks are used for training and CT image classification. CNNs and TL are used to achieve high-performance lung cancer detection on CT images. The models are evaluated with metrics such as the confusion matrix, precision, recall, specificity, and F1-score.
An Automated Prostate-cancer Prediction System (APPS) Based on Advanced DFO-ConGA2L Model using MRI Imaging Technique
Prostate cancer is a deadly disease that kills a significant number of men, in part because of weaknesses in the identification process. Images from people with cancer contain important and intricate details that are difficult for conventional diagnostic methods to extract. This work establishes a novel Automated Prostate-cancer Prediction System (APPS) model for detecting and classifying prostate cancer using MRI imaging sequences. The supplied medical image is normalized using a Coherence Diffusion Filtering (CDFilter) approach for improved quality and contrast. The relevant properties are then extracted from the normalized image using a morphological and texture feature extraction approach, which helps to increase the classifier's accuracy. To train the classifier, the most important properties are selected using the Dragon Fly Optimized Feature Selection (DFO-FS) algorithm, which greatly improves the classifier's overall disease-diagnosis performance in less time and with faster processing. More specifically, based on the selected features, the new Convoluted Gated Axial Attention Learning Model (ConGA2L) categorizes the provided MRI input data into prostate cancer-affected and healthy tissues. This study compares and validates the performance of the APPS model across several aspects using publicly available prostate cancer data.
Is sarcopenia a predictor of overall survival in primary IDH-wildtype GBM patients with and without MGMT promoter hypermethylation?
Korkmaz, Serhat
Demirel, Emin
Neurology Asia2023Journal Article, cited 0 times
UCSF-PDGM
UPENN-GBM
Sarcopenia
Glioblastoma
MGMT methylation status
Magnetic Resonance Imaging (MRI)
Survival
Radiomic features
Radiogenomics
Background: In this study, we aimed to examine the success of temporal muscle thickness (TMT) and masseter muscle thickness (MMT) in predicting overall survival (OS) in primary IDH-wildtype glioblastoma (GBM) patients with and without MGMT promoter hypermethylation through publicly available datasets. Methods: We included 345 primary IDH-wildtype GBM patients with known MGMT promoter hypermethylation status who underwent gross-total resection and standard treatment, whose data were obtained from the open datasets. TMT was evaluated on axial thin section postcontrast T1-weighted images, and MMT was evaluated on axial T2-weighted images. The median TMT and MMT were used to determine the cut-off point. Results: The findings showed that median TMT 9.5 mm and median MMT 12.7 mm determined the cut-off value in predicting survival. Both TMT and MMT values less than the median muscle thickness were negatively associated with OS (TMT<9.5: HR 3.63 CI 2.34–4.23, p <0.001, MMT<12.7: HR 3.53 CI 2.27–4.07, p <0.001). When patients were classified according to MGMT positivity, the findings showed MGMT-negative patients (TMT<9.5: HR 2.54 CI 1.89–3.56, p <0.001, MMT<12.7: HR 2.65 CI 2.07–3.62, p <0.001) and MGMT-positive patients (TMT<9.5: HR 3.84 CI 2.48–4.28, p <0.001, MMT<12.7: HR 3.73 CI 2.98–4.71, p <0.001). Conclusion: Both TMT and MMT successfully predict survival in primary GBM patients. In addition, they can successfully predict survival in patients with and without MGMT promoter hypermethylation.
Improving the Pulmonary Nodule Classification Based on KPCA-CNN Model
Jiang, Peichen
Highlights in Science, Engineering and Technology2022Journal Article, cited 0 times
Website
LIDC-IDRI
Deep Learning
Convolutional Neural Network (CNN)
Principal component analysis (PCA)
LUNG
Segmentation
LUNA16 Challenge
Lung cancer mortality, the main cause of cancer-associated death all over the world, can be reduced by screening risky patients with low-dose computed tomography (CT) scans for lung cancer. In CT screening, radiologists have to examine millions of CT pictures, putting a great load on them. Convolutional neural networks (CNNs) with deep convolutions have the potential to improve screening efficiency. In the examination of lung cancer screening CT images, estimating the chance of a malignant nodule at a specific location on a CT scan is a critical step. Low-dimensional convolutional neural networks and other methods are unable to provide sufficient estimation for this task, while the most advanced 3-dimensional CNNs (3D-CNNs) have extremely high computing requirements. This article presents a novel strategy for reducing false positives in automatic pulmonary nodule diagnosis from 3-dimensional CT imaging by merging a kernel Principal Component Analysis (kPCA) approach with a 2-dimensional CNN (2D-CNN). To recreate 3-dimensional CT images, the kPCA method is utilized, with the goal of reducing the dimension of the data, minimizing noise from raw sensory data while maintaining neoplastic information. The CNN can diagnose new CT scans with an accuracy of up to 90% when trained with the regenerated data, which is better than existing 2D-CNNs and on par with the best 3D-CNNs. The short training time and consistent accuracy show the potential of the kPCA-CNN to adapt to CT scans with different parameters in practice. The study shows that the kPCA-CNN modeling technique can improve the efficiency of lung cancer diagnosis.
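A minimal sketch of the kPCA preprocessing idea described above: kernel PCA reduces flattened CT patches and regenerates them for a downstream 2D classifier. The patch size, kernel, and component count are assumptions, not the paper's settings.

import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(42)
patches = rng.random((200, 32 * 32))  # 200 flattened 32x32 CT patches (simulated)

kpca = KernelPCA(n_components=64, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True)
codes = kpca.fit_transform(patches)        # low-dimensional representation
denoised = kpca.inverse_transform(codes)   # regenerated patches fed to the 2D-CNN

print(codes.shape, denoised.shape)  # (200, 64) (200, 1024)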
CCCD: Corner detection and curve reconstruction for improved 3D surface reconstruction from 2D medical images
Sarmah, Mriganka
Neelima, Arambam
Turkish Journal of Electrical Engineering and Computer Sciences2023Journal Article, cited 0 times
RIDER Lung PET-CT
UPENN-GBM
LiTS
Computed Tomography (CT)
Segmentation
Graph Convolutional Neural Network
The conventional approach to creating 3D surfaces from 2D medical images is the marching cube algorithm, but it often results in rough surfaces. On the other hand, B-spline curves and nonuniform rational B-splines (NURBSs) offer a smoother alternative for 3D surface reconstruction. However, NURBSs use control points (CTPs) to define the object shape and corners play an important role in defining the boundary shape as well. Thus, in order to fill the research gap in applying corner detection (CD) methods to generate the most favorable CTPs, in this paper corner points are identified to predict organ shape. However, CTPs must be in ordered coordinate pairs. This ordering problem is resolved using curve reconstruction (CR) or chain code (CC) algorithms. Existing CR methods lead to issues like holes, while some chain codes have junction-induced errors that need preprocessing. To address the above issues, a new graph neural network (GNN)-based approach named curvature and chain code-based corner detection (CCCD) is introduced that not only orders the CTPs but also removes junction errors. The goal is to improve accuracy and reliability in generating smooth surfaces. The paper fuses well-known CD methods with a curve generation technique and compares these alternative fused methods with CCCD. CCCD is also compared against other curve reconstruction techniques to establish its superiority. For validation, CCCD's accuracy in predicting boundaries is compared with deep learning models like Polar U-Net, KiU-Net 3D, and HdenseUnet, achieving an impressive Dice score of 98.49%, even with only 39.13% boundary points.
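To illustrate the role that ordered control points play above, the sketch below fits a smooth closed B-spline through ordered boundary points (a noisy circle) with SciPy's splprep/splev; it is a generic example, not the CCCD pipeline.

import numpy as np
from scipy.interpolate import splprep, splev

theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
x = np.cos(theta) + 0.02 * np.random.default_rng(1).standard_normal(40)
y = np.sin(theta) + 0.02 * np.random.default_rng(2).standard_normal(40)

tck, _ = splprep([x, y], s=0.01, per=True)   # periodic smoothing B-spline
u_fine = np.linspace(0, 1, 500)
xs, ys = splev(u_fine, tck)                  # densely sampled smooth boundary
print(len(xs))  # 500 points on the reconstructed curve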
Prostate Cancer Classifier based on Three-Dimensional Magnetic Resonance Imaging and Convolutional Neural Networks
Minda, Ana-Maria
Albu, Adriana
Computer Science Journal of Moldova2023Journal Article, cited 0 times
Website
PROSTATEx
prostate cancer
Classification
Computer Aided Diagnosis (CADx)
Convolutional Neural Network (CNN)
Magnetic Resonance Imaging (MRI)
Algorithm Development
The motivation for this research is the large number of prostate cancer cases worldwide. This article underlines how valuable medical imaging is, in association with artificial intelligence, for early detection of this medical condition. The diagnosis of a patient with prostate cancer is conventionally made based on multiple biopsies, histopathologic tests, and other procedures that are time consuming and directly dependent on the experience level of the radiologist. Deep learning algorithms reduce the investigation time and could help medical staff. This work proposes a binary classification algorithm which uses convolutional neural networks to predict whether a 3D MRI scan contains a malignant lesion or not. The provided result can be a starting point in the diagnosis phase. The investigation, however, should be finalized by a human expert.
Brain Tumor Segmentation and Classification Using ResNet50 and U-Net with TCGA-LGG and TCIA MRI Scans
Interreader Variability of Dynamic Contrast-enhanced MRI of Recurrent Glioblastoma: The Multicenter ACRIN 6677/RTOG 0625 Study
Barboriak, Daniel P
Zhang, Zheng
Desai, Pratikkumar
Snyder, Bradley S
Safriel, Yair
McKinstry, Robert C
Bokstein, Felix
Sorensen, Gregory
Gilbert, Mark R
Boxerman, Jerrold L
Radiology2019Journal Article, cited 2 times
Website
ACRIN-DSC-MR-Brain
ACRIN 6677
Purpose To evaluate factors contributing to interreader variation (IRV) in parameters measured at dynamic contrast material-enhanced (DCE) MRI in patients with glioblastoma who were participating in a multicenter trial. Materials and Methods A total of 18 patients (mean age, 57 years +/- 13 [standard deviation]; 10 men) who volunteered for the advanced imaging arm of ACRIN 6677, a substudy of the RTOG 0625 clinical trial for recurrent glioblastoma treatment, underwent analyzable DCE MRI at one of four centers. The 78 imaging studies were analyzed centrally to derive the volume transfer constant (K(trans)) for gadolinium between blood plasma and tissue extravascular extracellular space, fractional volume of the extracellular extravascular space (ve), and initial area under the gadolinium concentration curve (IAUGC). Two independently trained teams consisting of a neuroradiologist and a technologist segmented the enhancing tumor on three-dimensional spoiled gradient-recalled acquisition in the steady-state images. Mean and median parameter values in the enhancing tumor were extracted after registering segmentations to parameter maps. The effect of imaging time relative to treatment, map quality, imager magnet and sequence, average tumor volume, and reader variability in tumor volume on IRV was studied by using intraclass correlation coefficients (ICCs) and linear mixed models. Results Mean interreader variations (+/- standard deviation) (difference as a percentage of the mean) for mean and median IAUGC, mean and median K(trans), and median ve were 18% +/- 24, 17% +/- 23, 27% +/- 34, 16% +/- 27, and 27% +/- 34, respectively. ICCs for these metrics ranged from 0.90 to 1.0 for baseline and from 0.48 to 0.76 for posttreatment examinations. Variability in reader-derived tumor volume was significantly related to IRV for all parameters. Conclusion Differences in reader tumor segmentations are a significant source of interreader variation for all dynamic contrast-enhanced MRI parameters. (c) RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Wolf in this issue.
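The interreader-variation metric quoted above (the absolute reader difference as a percentage of the reader mean) can be sketched as follows on simulated per-study Ktrans values; the numbers are illustrative, not the study's data.

import numpy as np

rng = np.random.default_rng(1)
reader_a = rng.gamma(2.0, 0.05, 78)                 # Ktrans, reading team A (simulated)
reader_b = reader_a * rng.normal(1.0, 0.15, 78)     # team B with ~15% scatter

pct_diff = 100 * np.abs(reader_a - reader_b) / ((reader_a + reader_b) / 2)
print(f"IRV: {pct_diff.mean():.0f}% +/- {pct_diff.std():.0f}%")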
Magnetic resonance spectroscopy as an early indicator of response to anti-angiogenic therapy in patients with recurrent glioblastoma: RTOG 0625/ACRIN 6677
Ratai, E. M.
Zhang, Z.
Snyder, B. S.
Boxerman, J. L.
Safriel, Y.
McKinstry, R. C.
Bokstein, F.
Gilbert, M. R.
Sorensen, A. G.
Barboriak, D. P.
Neuro-Oncology2013Journal Article, cited 0 times
ACRIN-DSC-MR-Brain
ACRIN 6677
Background. The prognosis for patients with recurrent glioblastoma remains poor. The purpose of this study was to assess the potential role of MR spectroscopy as an early indicator of response to anti-angiogenic therapy.; Methods. Thirteen patients with recurrent glioblastoma were enrolled in RTOG 0625/ACRIN 6677, a prospective multicenter trial in which bevacizumab was used in combination with either temozolomide or irinotecan. Patients were scanned prior to treatment and at specific timepoints during the treatment regimen. Postcontrast T1-weighted MRI was used to assess 6-month progression-free survival. Spectra from the enhancing tumor and peritumoral regions were defined on the postcontrast T1-weighted images. Changes in the concentration ratios of N-acetylaspartate/creatine (NAA/Cr), choline-containing compounds (Cho)/Cr, and NAA/Cho were quantified in comparison with pretreatment values.; Results. NAA/Cho levels increased and Cho/Cr levels decreased within enhancing tumor at 2 weeks relative to pretreatment levels (P = .048 and P = .016, respectively), suggesting a possible antitumor effect of bevacizumab with cytotoxic chemotherapy. Nine of the 13 patients were alive and progression free at 6 months. Analysis of receiver operating characteristic curves for NAA/Cho changes in tumor at 8 weeks revealed higher levels in patients progression free at 6 months (area under the curve = 0.85), suggesting that NAA/Cho is associated with treatment response. Similar results were observed for receiver operating characteristic curve analyses against 1-year survival. In addition, decreased Cho/Cr and increased NAA/Cr and NAA/Cho in tumor periphery at 16 weeks posttreatment were associated with both 6-month progression-free survival and 1-year survival.; Conclusion. Changes in NAA and Cho by MR spectroscopy may potentially be useful as imaging biomarkers in assessing response to anti-angiogenic treatment.
Dynamic susceptibility contrast MRI measures of relative cerebral blood volume as a prognostic marker for overall survival in recurrent glioblastoma: results from the ACRIN 6677/RTOG 0625 multicenter trial
Schmainda, K. M.
Zhang, Z.
Prah, M.
Snyder, B. S.
Gilbert, M. R.
Sorensen, A. G.
Barboriak, D. P.
Boxerman, J. L.
Neuro Oncol2015Journal Article, cited 0 times
ACRIN-DSC-MR-Brain
ACRIN 6677
Glioblastoma Multiforme (GBM)
Background. The study goal was to determine whether changes in relative cerebral blood volume (rCBV) derived from dynamic susceptibility contrast (DSC) MRI are predictive of overall survival (OS) in patients with recurrent glioblastoma multiforme (GBM) when measured 2, 8, and 16 weeks after treatment initiation.; Methods. Patients with recurrent GBM (37/123) enrolled in ACRIN 6677/RTOG 0625, a multicenter, randomized, phase II trial of bevacizumab with irinotecan or temozolomide, consented to DSC-MRI plus conventional MRI, 21 with DSC-MRI at baseline and at least 1 postbaseline scan. Contrast-enhancing regions of interest were determined semi-automatically using pre- and postcontrast T1-weighted images. Mean tumor rCBV normalized to white matter (nRCBV) and standardized rCBV (sRCBV) were determined for these regions of interest. The OS rates for patients with positive versus negative changes from baseline in nRCBV and sRCBV were compared using Wilcoxon rank-sum and Kaplan-Meier survival estimates with log-rank tests.; Results. Patients surviving at least 1 year (OS-1) had significantly larger decreases in nRCBV at week 2 (P=.0451) and sRCBV at week 16 (P=.014). Receiver operating characteristic analysis found the percent changes of nRCBV and sRCBV at week 2 and sRCBV at week 16, but not rCBV data at week 8, to be good prognostic markers for OS-1. Patients with positive change from baseline rCBV had significantly shorter OS than those with negative change at both week 2 and week 16 (P=.0015 and P=.0067 for nRCBV and P=.0251 and P=.0004 for sRCBV, respectively).; Conclusions. Early decreases in rCBV are predictive of improved survival in patients with recurrent GBM treated with bevacizumab.
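A minimal sketch, assuming simulated data and the lifelines package, of the survival comparison used in this trial: Kaplan-Meier estimation and a log-rank test between patients with positive versus negative rCBV change.

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
increase = rng.integers(0, 2, 37).astype(bool)           # positive rCBV change flag
time = rng.exponential(scale=np.where(increase, 8, 14))  # OS in months (simulated)
event = rng.random(37) < 0.8                             # death observed

km = KaplanMeierFitter()
km.fit(time[increase], event[increase], label="rCBV increase")
print(km.median_survival_time_)

result = logrank_test(time[increase], time[~increase],
                      event_observed_A=event[increase],
                      event_observed_B=event[~increase])
print(result.p_value)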
Quantitative Delta T1 (dT1) as a Replacement for Adjudicated Central Reader Analysis of Contrast-Enhancing Tumor Burden: A Subanalysis of the American College of Radiology Imaging Network 6677/Radiation Therapy Oncology Group 0625 Multicenter Brain Tumor Trial.
Schmainda, K M
Prah, M A
Zhang, Z
Snyder, B S
Rand, S D
Jensen, T R
Barboriak, D P
Boxerman, J L
AJNR Am J Neuroradiol2019Journal Article, cited 0 times
ACRIN-DSC-MR-Brain
ACRIN 6677
BACKGROUND AND PURPOSE: Brain tumor clinical trials requiring solid tumor assessment typically rely on the 2D manual delineation of enhancing tumors by >/=2 expert readers, a time-consuming step with poor interreader agreement. As a solution, we developed quantitative dT1 maps for the delineation of enhancing lesions. This retrospective analysis compares dT1 with 2D manual delineation of enhancing tumors acquired at 2 time points during the post therapeutic surveillance period of the American College of Radiology Imaging Network 6677/Radiation Therapy Oncology Group 0625 (ACRIN 6677/RTOG 0625) clinical trial. MATERIALS AND METHODS: Patients enrolled in ACRIN 6677/RTOG 0625, a multicenter, randomized Phase II trial of bevacizumab in recurrent glioblastoma, underwent standard MR imaging before and after treatment initiation. For 123 patients from 23 institutions, both 2D manual delineation of enhancing tumors and dT1 datasets were evaluable at weeks 8 (n = 74) and 16 (n = 57). Using dT1, we assessed the radiologic response and progression at each time point. Percentage agreement with adjudicated 2D manual delineation of enhancing tumor reads and association between progression status and overall survival were determined. RESULTS: For identification of progression, dT1 and adjudicated 2D manual delineation of enhancing tumor reads were in perfect agreement at week 8, with 73.7% agreement at week 16. Both methods showed significant differences in overall survival at each time point. When nonprogressors were further divided into responders versus nonresponders/nonprogressors, the agreement decreased to 70.3% and 52.6%, yet dT1 showed a significant difference in overall survival at week 8 (P = .01), suggesting that dT1 may provide greater sensitivity for stratifying subpopulations. CONCLUSIONS: This study shows that dT1 can predict early progression comparable with the standard method but offers the potential for substantial time and cost savings for clinical trials.
Diffusion MRI quality control and functional diffusion map results in ACRIN 6677/RTOG 0625: a multicenter, randomized, phase II trial of bevacizumab and chemotherapy in recurrent glioblastoma
Ellingson, Benjamin M
Kim, Eunhee
Woodworth, Davis C
Marques, Helga
Boxerman, Jerrold L
Safriel, Yair
McKinstry, Robert C
Bokstein, Felix
Jain, Rajan
Chi, T Linda
Sorensen, A Gregory
Gilbert, Mark R
Barboriak, Daniel P
Int J Oncol2015Journal Article, cited 27 times
Website
ACRIN-DSC-MR-Brain
ACRIN 6677
BRAIN
Glioblastoma Multiforme (GBM)
Magnetic Resonance Imaging (MRI)
Functional diffusion mapping (fDM) is a cancer imaging technique that quantifies voxelwise changes in apparent diffusion coefficient (ADC). Previous studies have shown value of fDMs in bevacizumab therapy for recurrent glioblastoma multiforme (GBM). The aim of the present study was to implement explicit criteria for diffusion MRI quality control and independently evaluate fDM performance in a multicenter clinical trial (RTOG 0625/ACRIN 6677). A total of 123 patients were enrolled in the current multicenter trial and signed institutional review board-approved informed consent at their respective institutions. MRI was acquired prior to and 8 weeks following therapy. A 5-point QC scoring system was used to evaluate DWI quality. fDM performance was evaluated according to the correlation of these metrics with PFS and OS at the first follow-up time-point. Results showed ADC variability of 7.3% in NAWM and 10.5% in CSF. A total of 68% of patients had usable DWI data and 47% of patients had high quality DWI data when also excluding patients that progressed before the first follow-up. fDM performance was improved by using only the highest quality DWI. High pre-treatment contrast enhancing tumor volume was associated with shorter PFS and OS. A high volume fraction of increasing ADC after therapy was associated with shorter PFS, while a high volume fraction of decreasing ADC was associated with shorter OS. In summary, DWI in multicenter trials are currently of limited value due to image quality. Improvements in consistency of image quality in multicenter trials are necessary for further advancement of DWI biomarkers.
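As a toy illustration of the fDM volume-fraction metrics referenced above, the sketch below thresholds voxelwise ADC change in a simulated tumor ROI; the threshold and data are assumptions, not the trial's values.

import numpy as np

rng = np.random.default_rng(0)
adc_pre = rng.normal(1.2e-3, 2e-4, 5000)          # baseline ADC (mm^2/s), ROI voxels
adc_post = adc_pre + rng.normal(0, 1.5e-4, 5000)  # post-treatment ADC

delta = adc_post - adc_pre
thr = 0.4e-4                                      # illustrative change threshold
frac_up = np.mean(delta > thr)                    # volume fraction of increasing ADC
frac_down = np.mean(delta < -thr)                 # volume fraction of decreasing ADC
print(f"fDM fractions: +{frac_up:.1%} / -{frac_down:.1%}")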
Deep learning-based convolutional neural network for intramodality brain MRI synthesis
Osman, A. F. I.
Tamam, N. M.
J Appl Clin Med Phys2022Journal Article, cited 2 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Computer Aided Diagnosis (CADx)
U-Net
*Deep Learning
Humans
Image Processing, Computer-Assisted/methods
Magnetic Resonance Imaging/methods
Neural Networks, Computer
BRAIN
Convolutional Neural Network (CNN)
deep learning
magnetic resonance imaging (MRI)
medical imaging synthesis
PURPOSE: The existence of multicontrast magnetic resonance (MR) images increases the level of clinical information available for the diagnosis and treatment of brain cancer patients. However, acquiring the complete set of multicontrast MR images is not always practically feasible. In this study, we developed a state-of-the-art deep learning convolutional neural network (CNN) for image-to-image translation across three standard MRI contrasts for the brain. METHODS: The BRATS'2018 MRI dataset of 477 patients clinically diagnosed with glioma brain cancer was used in this study, with each patient having T1-weighted (T1), T2-weighted (T2), and FLAIR contrasts. It was randomly split into 64%, 16%, and 20% as training, validation, and test set, respectively. We developed a U-Net model to learn the nonlinear mapping of a source image contrast to a target image contrast across three MRI contrasts. The model was trained and validated with 2D paired MR images using a mean-squared error (MSE) cost function, Adam optimizer with 0.001 learning rate, and 120 epochs with a batch size of 32. The generated synthetic-MR images were evaluated against the ground-truth images by computing the MSE, mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). RESULTS: The generated synthetic-MR images with our model were nearly indistinguishable from the real images on the testing dataset for all translations, except synthetic FLAIR images had slightly lower quality and exhibited loss of details. The range of average PSNR, MSE, MAE, and SSIM values over the six translations were 29.44-33.25 dB, 0.0005-0.0012, 0.0086-0.0149, and 0.932-0.946, respectively. Our results were as good as the best-reported results by other deep learning models on BRATS datasets. CONCLUSIONS: Our U-Net model demonstrated that it can accurately perform image-to-image translation across brain MRI contrasts. It could hold great promise for clinical use for improved clinical decision-making and better diagnosis of brain cancer patients due to the availability of multicontrast MRIs. This approach may be clinically relevant, and it represents a significant step toward efficiently filling the gap of absent MR sequences without additional scanning.
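The reported training configuration (MSE loss, Adam at a 0.001 learning rate, batches of 32 paired 2D slices) can be sketched as a single optimization step; the tiny convolutional stack below is a stand-in, not the paper's U-Net.

import torch
import torch.nn as nn

model = nn.Sequential(  # placeholder for the paper's U-Net
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

src = torch.rand(32, 1, 128, 128)   # e.g., T1 slices (simulated batch of 32)
tgt = torch.rand(32, 1, 128, 128)   # paired T2 slices

pred = model(src)                   # source-to-target contrast translation
loss = loss_fn(pred, tgt)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))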
An efficient magnetic resonance image data quality screening dashboard
Gates, E. D. H.
Celaya, A.
Suki, D.
Schellingerhout, D.
Fuentes, D.
J Appl Clin Med Phys2022Journal Article, cited 0 times
Website
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Magnetic Resonance Imaging (MRI)
Quality control
NIfTI
ITK
BRAIN
PURPOSE: Complex data processing and curation for artificial intelligence applications rely on high-quality data sets for training and analysis. Manually reviewing images and their associated annotations is a very laborious task and existing quality control tools for data review are generally limited to raw images only. The purpose of this work was to develop an imaging informatics dashboard for the easy and fast review of processed magnetic resonance (MR) imaging data sets; we demonstrated its ability in a large-scale data review. METHODS: We developed a custom R Shiny dashboard that displays key static snapshots of each imaging study and its annotations. A graphical interface allows the structured entry of review data and download of tabulated review results. We evaluated the dashboard using two large data sets: 1380 processed MR imaging studies from our institution and 285 studies from the 2018 MICCAI Brain Tumor Segmentation Challenge (BraTS). RESULTS: Studies were reviewed at an average rate of 100/h using the dashboard, 10 times faster than using existing data viewers. For data from our institution, 1181 of the 1380 (86%) studies were of acceptable quality. The most commonly identified failure modes were tumor segmentation (9.6% of cases) and image registration (4.6% of cases). Tumor segmentation without visible errors on the dashboard had much better agreement with reference tumor volume measurements (root-mean-square error 12.2 cm³) than did segmentations with minor errors (20.5 cm³) or failed segmentations (27.4 cm³). In the BraTS data, 242 of 285 (85%) studies were acceptable quality after processing. Among the 43 cases that failed review, 14 had unacceptable raw image quality. CONCLUSION: Our dashboard provides a fast, effective tool for reviewing complex processed MR imaging data sets. It is freely available for download at https://github.com/EGates1/MRDQED.
Modified fast adaptive scatter kernel superposition (mfASKS) correction and its dosimetric impact on CBCT‐based proton therapy dose calculation
Nomura, Yusuke
Xu, Qiong
Peng, Hao
Takao, Seishin
Shimizu, Shinichi
Xing, Lei
Shirato, Hiroki
Medical Physics2020Journal Article, cited 0 times
Website
Radiation Dosage
Proton Radiation Therapy
lung CT
Auto‐segmentation of organs at risk for head and neck radiotherapy planning: from atlas‐based to deep learning methods
Vrtovec, Tomaž
Močnik, Domen
Strojan, Primož
Pernuš, Franjo
Ibragimov, Bulat
Medical Physics2020Journal Article, cited 2 times
Website
Head-Neck Cetuximab
TCGA-HNSC
QIN-HEADNECK
Data from Head and Neck Cancer CT Atlas
AAPM RT-MAC Grand Challenge 2019
Head-Neck-PET-CT
HEAD AND NECK
MRI-based prostate and dominant lesion segmentation using cascaded scoring convolutional neural network
Eidex, Z. A.
Wang, T.
Lei, Y.
Axente, M.
Akin-Akintayo, O. O.
Ojo, O. A. A.
Akintayo, A. A.
Roper, J.
Bradley, J. D.
Liu, T.
Schuster, D. M.
Yang, X.
Med Phys2022Journal Article, cited 0 times
PROSTATEx
Humans
Image Processing, Computer-Assisted/methods
Magnetic Resonance Imaging (MRI)
Male
Neural Networks, Computer
PET/CT
*Positron Emission Tomography Computed Tomography
*Prostate/diagnostic imaging
Retrospective Studies
Deep learning
prostate and dominant lesion segmentation
PURPOSE: Dose escalation to dominant intraprostatic lesions (DILs) is a novel treatment strategy to improve the treatment outcome of prostate radiation therapy. Treatment planning requires accurate and fast delineation of the prostate and DILs. In this study, a 3D cascaded scoring convolutional neural network is proposed to automatically segment the prostate and DILs from MRI. METHODS AND MATERIALS: The proposed cascaded scoring convolutional neural network performs end-to-end segmentation by locating a region-of-interest (ROI), identifying the object within the ROI, and defining the target. A scoring strategy, learned to judge the segmentation quality of the DIL, is integrated into the cascaded convolutional neural network to address the challenge of segmenting the DIL's irregular shapes. To evaluate the proposed method, 77 patients who underwent MRI and PET/CT were retrospectively investigated. The prostate and DIL ground-truth contours were delineated by experienced radiologists. The proposed method was evaluated with fivefold cross-validation and holdout testing. RESULTS: The average centroid distance, volume difference, and Dice similarity coefficient (DSC) values for prostate/DIL are 4.3 +/- 7.5/3.73 +/- 3.78 mm, 4.5 +/- 7.9/0.41 +/- 0.59 cc, and 89.6 +/- 8.9/84.3 +/- 11.9%, respectively. Comparable results were obtained in the holdout test. Similar or superior segmentation outcomes were seen when comparing the results of the proposed method with those of competing segmentation approaches. CONCLUSIONS: The proposed automatic segmentation method can accurately and simultaneously segment both the prostate and DILs. The intended future use for this algorithm is focal-boost prostate radiation therapy.
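For reference, the Dice similarity coefficient (DSC) reported above is straightforward to compute on binary masks; the toy masks below are illustrative.

import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    return 2 * np.sum(a & b) / (np.sum(a) + np.sum(b))

a = np.zeros((64, 64), bool); a[20:40, 20:40] = True   # ground-truth mask
b = np.zeros((64, 64), bool); b[22:42, 21:41] = True   # predicted mask
print(f"DSC = {dice(a, b):.3f}")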
Ultralow-parameter denoising: Trainable bilateral filter layers in computed tomography
Wagner, F.
Thies, M.
Gu, M.
Huang, Y.
Pechmann, S.
Patwari, M.
Ploner, S.
Aust, O.
Uderhardt, S.
Schett, G.
Christiansen, S.
Maier, A.
Med Phys2022Journal Article, cited 1 times
Website
LDCT-and-Projection-data
bilateral filter
denoising
known operator learning
low-dose CT
BACKGROUND: Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with expressive bone-soft tissue contrast. However, CT resolution can be severely degraded through low-dose acquisitions, highlighting the importance of effective denoising algorithms. PURPOSE: Most data-driven denoising techniques are based on deep neural networks, and therefore, contain hundreds of thousands of trainable parameters, making them incomprehensible and prone to prediction failures. Developing understandable and robust denoising algorithms achieving state-of-the-art performance helps to minimize radiation dose while maintaining data integrity. METHODS: This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input. Denoising in pure image-to-image pipelines and across different domains such as raw detector data and reconstructed volume, using a differentiable backprojection layer, is demonstrated. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and constrains the applied operation to follow the traditional bilateral filter algorithm by design. RESULTS: Although only using three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures with several hundred thousand parameters. Competitive denoising performance is achieved on x-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set. We report structural similarity index measures of 0.7094 and 0.9674 and peak signal-to-noise ratio values of 33.17 and 43.07 on the respective data sets. CONCLUSIONS: Due to the extremely low number of trainable parameters with well-defined effect, prediction reliance and data integrity is guaranteed at any time in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.
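For orientation, here is a plain NumPy sketch of the classic bilateral filter that the paper's trainable layer constrains itself to, with per-axis spatial sigmas plus one intensity-range sigma (two spatial sigmas in this 2D sketch; the paper uses three for 3D volumes). The paper's contribution, making these parameters differentiable, is not shown.

import numpy as np

def bilateral_2d(img, sigma_x, sigma_y, sigma_r, radius=3):
    """Brute-force bilateral filter with separate x/y spatial sigmas."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 / (2 * sigma_x**2) + ys**2 / (2 * sigma_y**2)))
    pad = np.pad(img, radius, mode="reflect")
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            wgt = spatial * rng_w            # combined spatial and range weights
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out

noisy = np.random.default_rng(0).random((64, 64))
print(bilateral_2d(noisy, 2.0, 2.0, 0.1).shape)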
Pharmacokinetic modeling of dynamic contrast‐enhanced MRI using a reference region and input function tail
Ahmed, Zaki
Levesque, Ives R
Magnetic Resonance in Medicine2020Journal Article, cited 0 times
Website
TCGA-GBM
3D Reconstruction from CT Images Using Free Software Tools
Automatic classification of solitary pulmonary nodules in PET/CT imaging employing transfer learning techniques
Apostolopoulos, Ioannis D
Pintelas, Emmanuel G
Livieris, Ioannis E
Apostolopoulos, Dimitris J
Papathanasiou, Nikolaos D
Pintelas, Panagiotis E
Panayiotakis, George S
Medical & Biological Engineering & Computing2021Journal Article, cited 0 times
Website
LIDC-IDRI
machine learning
Transfer learning
Automated pulmonary nodule CT image characterization in lung cancer screening
Reeves, Anthony P
Xie, Yiting
Jirapatnakul, Artit
International Journal of Computer Assisted Radiology and Surgery2016Journal Article, cited 19 times
Website
NLST
Radiomic feature
Large-scale retrieval for medical image analytics: A comprehensive review
Li, Zhongyu
Zhang, Xiaofan
Müller, Henning
Zhang, Shaoting
Medical Image Analysis2018Journal Article, cited 23 times
Website
Medical image analysis
Information retrieval
Large scale
Computer aided diagnosis
Three-dimensional visualization of brain tumor progression based accurate segmentation via comparative holographic projection
Abdelazeem, R. M.
Youssef, D.
El-Azab, J.
Hassab-Elnaby, S.
Agour, M.
PLoS One2020Journal Article, cited 0 times
Website
Brain-Tumor-Progression
We propose a new optical method based on comparative holographic projection for visual comparison between two abnormal follow-up magnetic resonance (MR) exams of glioblastoma patients to effectively visualize and assess tumor progression. First, the brain tissue and tumor areas are segmented from the MR exams using the fast marching method (FMM). The FMM approach is implemented on a computed pixel-weight matrix based on an automated selection of a set of initialized target points. Thereafter, the associated phase holograms are calculated for the segmented structures based on an adaptive iterative Fourier transform algorithm (AIFTA). Within this approach, spatial multiplexing is applied to reduce speckle noise. Furthermore, hologram modulation is performed to represent two different reconstruction schemes. In both schemes, all calculated holograms are superimposed into a single two-dimensional (2D) hologram, which is then displayed on a reflective phase-only spatial light modulator (SLM) for optical reconstruction. The optical reconstruction of the first scheme displays a 3D map of the tumor, allowing visualization of the tumor volume after treatment and at progression. The second scheme displays the follow-up exams side by side with tumor areas highlighted, so each case can be assessed quickly. The proposed system can be used as a valuable tool for interpreting and assessing tumor progression with respect to the treatment method, providing an improvement in diagnosis and treatment planning.
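The adaptive iterative Fourier transform algorithm (AIFTA) named above belongs to the Gerchberg-Saxton family of phase-retrieval methods; the generic sketch below shows such an iterative loop on a toy target and is not the authors' adaptive variant.

import numpy as np

rng = np.random.default_rng(0)
target = np.zeros((128, 128))
target[40:90, 40:90] = 1.0                                 # toy segmented tumor amplitude
phase = rng.uniform(0, 2 * np.pi, target.shape)            # random initial phase

for _ in range(50):
    field = np.fft.ifft2(target * np.exp(1j * phase))      # back to hologram plane
    holo_phase = np.angle(field)                           # keep phase only (for the SLM)
    image = np.fft.fft2(np.exp(1j * holo_phase))           # propagate to image plane
    phase = np.angle(image)                                # retain phase, re-impose amplitude

print(np.corrcoef(np.abs(image).ravel(), target.ravel())[0, 1])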
Prognostic value of baseline [18F]-fluorodeoxyglucose positron emission tomography parameters MTV, TLG and asphericity in an international multicenter cohort of nasopharyngeal carcinoma patients
Zschaeck, S.
Li, Y.
Lin, Q.
Beck, M.
Amthauer, H.
Bauersachs, L.
Hajiyianni, M.
Rogasch, J.
Ehrhardt, V. H.
Kalinauskaite, G.
Weingartner, J.
Hartmann, V.
van den Hoff, J.
Budach, V.
Stromberger, C.
Hofheinz, F.
PLoS One2020Journal Article, cited 1 times
Website
Head-Neck-PET-CT
QIN-HEADNECK
HNSCC
PURPOSE: [18F]-fluorodeoxyglucose (FDG) positron emission tomography (PET) parameters have shown prognostic value in nasopharyngeal carcinomas (NPC), mostly in monocenter studies. The aim of this study was to assess the prognostic impact of standard and novel PET parameters in a multicenter cohort of patients. METHODS: The established PET parameters metabolic tumor volume (MTV), total lesion glycolysis (TLG) and maximal standardized uptake value (SUVmax) as well as the novel parameter tumor asphericity (ASP) were evaluated in a retrospective multicenter cohort of 114 NPC patients with FDG-PET staging, treated with (chemo)radiation at 8 international institutions. Uni- and multivariable Cox regression and Kaplan-Meier analysis with respect to overall survival (OS), event-free survival (EFS), distant metastases-free survival (FFDM), and locoregional control (LRC) was performed for clinical and PET parameters. RESULTS: When analyzing metric PET parameters, ASP showed a significant association with EFS (p = 0.035) and a trend for OS (p = 0.058). MTV was significantly associated with EFS (p = 0.026), OS (p = 0.008) and LRC (p = 0.012) and TLG with LRC (p = 0.019). TLG and MTV showed a very high correlation (Spearman's rho = 0.95), therefore TLG was subsequently not further analysed. Optimal cutoff values for defining high and low risk groups were determined by maximization of the p-value in univariate Cox regression considering all possible cutoff values. Generation of stable cutoff values was feasible for MTV (p<0.001), ASP (p = 0.023) and combination of both (MTV+ASP = occurrence of one or both risk factors, p<0.001) for OS and for MTV regarding the endpoints OS (p<0.001) and LRC (p<0.001). In multivariable Cox (age >55 years + one binarized PET parameter), MTV >11.1ml (hazard ratio (HR): 3.57, p<0.001) and ASP > 14.4% (HR: 3.2, p = 0.031) remained prognostic for OS. MTV additionally remained prognostic for LRC (HR: 4.86 p<0.001) and EFS (HR: 2.51 p = 0.004). Bootstrapping analyses showed that a combination of high MTV and ASP improved prognostic value for OS compared to each single variable significantly (p = 0.005 and p = 0.04, respectively). When using the cohort from China (n = 57 patients) for establishment of prognostic parameters and all other patients for validation (n = 57 patients), MTV could be successfully validated as prognostic parameter regarding OS, EFS and LRC (all p-values <0.05 for both cohorts). CONCLUSIONS: In this analysis, PET parameters were associated with outcome of NPC patients. MTV showed a robust association with OS, EFS and LRC. Our data suggest that combination of MTV and ASP may potentially further improve the risk stratification of NPC patients.
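The cutoff determination described above (scanning all candidate cutoffs and keeping the most significant univariate Cox binarization) can be sketched as follows with simulated MTV data and the lifelines package; as the authors note, such maximally selected cutoffs require independent validation.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
mtv = rng.gamma(2.0, 8.0, 114)                  # metabolic tumor volume (ml), simulated
time = rng.exponential(30 / (1 + mtv / 20))     # OS months, worse for high MTV
event = rng.random(114) < 0.7

best = (None, 1.0)
for cut in np.quantile(mtv, np.linspace(0.2, 0.8, 25)):   # candidate cutoffs
    df = pd.DataFrame({"high": (mtv > cut).astype(int),
                       "time": time, "event": event})
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    p = cph.summary.loc["high", "p"]
    if p < best[1]:
        best = (cut, p)
print(best)  # the selected cutoff still needs validation in a held-out cohort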
Artificial intelligence in cancer imaging: Clinical challenges and applications
Bi, Wenya Linda
Hosny, Ahmed
Schabath, Matthew B
Giger, Maryellen L
Birkbak, Nicolai J
Mehrtash, Alireza
Allison, Tavis
Arnaout, Omar
Abbosh, Christopher
Dunn, Ian F
CA: a cancer journal for clinicians2019Journal Article, cited 0 times
Website
Radiomics
Challenge
Automated 3-D Tissue Segmentation Via Clustering
Edwards, Samuel
Brown, Scott
Lee, Michael
Journal of Biomedical Engineering and Medical Imaging2018Journal Article, cited 0 times
Head-Neck Cetuximab
Segmentation
clustering
Non-invasive tumor genotyping using radiogenomic biomarkers, a systematic review and oncology-wide pathway analysis
Jansen, Robin W
van Amstel, Paul
Martens, Roland M
Kooi, Irsan E
Wesseling, Pieter
de Langen, Adrianus J
Menke-Van der Houven, Catharina W
Oncotarget2018Journal Article, cited 0 times
Website
Radiogenomics
meta-analysis
Parallel CNN‐deep learning clinical‐imaging signature for assessing pathologic grade and prognosis of soft tissue sarcoma patients
Guo, Jia
Li, Yi‐ming
Guo, Hongwei
Hao, Da‐peng
Xu, Jing‐xu
Huang, Chen‐cui
Han, Hua‐wei
Hou, Feng
Yang, Shi‐feng
Cui, Jian‐ling
Journal of Magnetic Resonance Imaging2024Journal Article, cited 1 times
Website
Soft-tissue-Sarcoma
Variations of dynamic contrast-enhanced magnetic resonance imaging in evaluation of breast cancer therapy response: a multicenter data analysis challenge
Huang, W.
Li, X.
Chen, Y.
Li, X.
Chang, M. C.
Oborski, M. J.
Malyarenko, D. I.
Muzi, M.
Jajamovich, G. H.
Fedorov, A.
Tudorica, A.
Gupta, S. N.
Laymon, C. M.
Marro, K. I.
Dyvorne, H. A.
Miller, J. V.
Barboriak, D. P.
Chenevert, T. L.
Yankeelov, T. E.
Mountz, J. M.
Kinahan, P. E.
Kikinis, R.
Taouli, B.
Fennessy, F.
Kalpathy-Cramer, J.
Transl Oncol2014Journal Article, cited 60 times
Website
QIN Breast DCE-MRI
DCE-MRI
Pharmacokinetic analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) time-course data allows estimation of quantitative parameters such as Ktrans (rate constant for plasma/interstitium contrast agent transfer), ve (extravascular extracellular volume fraction), and vp (plasma volume fraction). A plethora of factors in DCE-MRI data acquisition and analysis can affect accuracy and precision of these parameters and, consequently, the utility of quantitative DCE-MRI for assessing therapy response. In this multicenter data analysis challenge, DCE-MRI data acquired at one center from 10 patients with breast cancer before and after the first cycle of neoadjuvant chemotherapy were shared and processed with 12 software tools based on the Tofts model (TM), extended TM, and Shutter-Speed model. Inputs of tumor region of interest definition, pre-contrast T1, and arterial input function were controlled to focus on the variations in parameter value and response prediction capability caused by differences in models and associated algorithms. Considerable parameter variations were observed with the within-subject coefficient of variation (wCV) values for Ktrans and vp being as high as 0.59 and 0.82, respectively. Parameter agreement improved when only algorithms based on the same model were compared, e.g., the Ktrans intraclass correlation coefficient increased to as high as 0.84. Agreement in parameter percentage change was much better than that in absolute parameter value, e.g., the pairwise concordance correlation coefficient improved from 0.047 (for Ktrans) to 0.92 (for Ktrans percentage change) in comparing two TM algorithms. Nearly all algorithms provided good to excellent (univariate logistic regression c-statistic value ranging from 0.8 to 1.0) early prediction of therapy response using the metrics of mean tumor Ktrans and kep (=Ktrans/ve, intravasation rate constant) after the first therapy cycle and the corresponding percentage changes. The results suggest that the interalgorithm parameter variations are largely systematic, which are not likely to significantly affect the utility of DCE-MRI for assessment of therapy response.
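For context, the standard Tofts model shared by the compared algorithms expresses tissue concentration as Ktrans times the arterial input function convolved with exp(-kep t), where kep = Ktrans/ve. The sketch below uses a toy AIF and illustrative parameter values, not the challenge's data.

import numpy as np

def tofts_ct(t, cp, ktrans, ve):
    """Standard Tofts model via discrete convolution on a uniform time grid."""
    kep = ktrans / ve
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

t = np.arange(0, 5, 0.05)                   # minutes
cp = 5.0 * t * np.exp(-t / 0.5)             # toy arterial input function
ct = tofts_ct(t, cp, ktrans=0.25, ve=0.3)   # Ktrans in /min, ve dimensionless
print(ct.max())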
Brain tumor segmentation in multi‐spectral MRI using convolutional neural networks (CNN)
Iqbal, Sajid
Ghani, M Usman
Saba, Tanzila
Rehman, Amjad
Microscopy research and technique2018Journal Article, cited 8 times
Website
TCGA-LGG
MICCAI-BraTS
BraTS datasets
Convolutional Neural Network (CNN)
deep learning
feature mining
tumor segmentation
Malignancy Classification of Lung Nodule Based on Accumulated Multi Planar Views and Canonical Correlation Analysis
The appearance of a small round or oval shape in a computed tomography (CT) scan of the lung raises suspicion of lung cancer. To avoid misdiagnosis of lung cancer at an early stage, Computer Aided Diagnosis (CAD) assists oncologists in classifying pulmonary nodules as malignant (cancerous) or benign (noncancerous). This paper introduces a novel approach to pulmonary nodule classification employing three accumulated views (top, front, and side) of CT slices and Canonical Correlation Analysis (CCA). The nodule is extracted from the 2D CT slice to obtain a Region of Interest (ROI) patch, and patches from sequential slices are accumulated for the three views. The vector representation of each view is correlated with two training sets, a malignant set and a benign set, employing CCA in the spatial and Radon Transform (RT) domains. Each view is classified according to its correlation coefficients, and the final classification decision is made by a priority-based rule. For training and testing, scans of 1,010 patients were downloaded from the Lung Image Database Consortium (LIDC). The final results show that the proposed method achieved the best performance among compared methods, with an accuracy of 90.93%.
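A simplified stand-in for the correlation-based decision described above: canonical correlations between a probe view matrix and each class training matrix decide the label. This uses scikit-learn's CCA on random toy data and is not the authors' formulation.

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(7)
malig = rng.random((50, 64)) + 0.5   # training view vectors, malignant (simulated)
benign = rng.random((50, 64))        # training view vectors, benign (simulated)
probe = rng.random((50, 64)) + 0.4   # test view vectors

def first_canonical_corr(X, Y):
    """Correlation of the first pair of canonical variates between X and Y."""
    cca = CCA(n_components=1).fit(X, Y)
    u, v = cca.transform(X, Y)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

scores = {"malignant": first_canonical_corr(probe, malig),
          "benign": first_canonical_corr(probe, benign)}
print(max(scores, key=scores.get), scores)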
A robust index for metal artifact quantification in computed tomography
Cammin, Jochen
Journal of applied clinical medical physics2024Journal Article, cited 0 times
Website
TCGA-PRAD
NaF PROSTATE
COVID-19-AR
Risk assessment model based on nucleotide metabolism-related genes highlights SLC27A2 as a potential therapeutic target in breast cancer
Zhang, B.
Zhang, Y.
Chang, K.
Hou, N.
Fan, P.
Ji, C.
Liu, L.
Wang, Z.
Li, R.
Wang, Y.
Zhang, J.
Ling, R.
J Cancer Res Clin Oncol2024Journal Article, cited 0 times
Website
TCGA-BRCA
Phenotype
Humans
Female
*Breast Neoplasms/genetics/pathology
Prognosis
Risk Assessment/methods
Nucleotides/genetics
Nomograms
Biomarkers, Tumor/genetics/metabolism
Animals
Gene Expression Regulation, Neoplastic
Mice
Cell Line, Tumor
Breast cancer
Cox regression
Gene signature
LASSO regression
Nucleotide metabolism
Slc27a2
PURPOSE: Breast cancer (BC) is the most prevalent malignant tumor worldwide among women, with the highest incidence rate. The mechanisms underlying nucleotide metabolism on biological functions in BC remain incompletely elucidated. MATERIALS AND METHODS: We harnessed differentially expressed nucleotide metabolism-related genes from The Cancer Genome Atlas-BRCA, constructing a prognostic risk model through univariate Cox regression and LASSO regression analyses. A validation set and the GSE7390 dataset were used to validate the risk model. Clinical relevance, survival and prognosis, immune infiltration, functional enrichment, and drug sensitivity analyses were conducted. RESULTS: Our findings identified four signature genes (DCTPP1, IFNG, SLC27A2, and MYH3) as nucleotide metabolism-related prognostic genes. Subsequently, patients were stratified into high- and low-risk groups, revealing the risk model's independence as a prognostic factor. Nomogram calibration underscored superior prediction accuracy. Gene Set Variation Analysis (GSVA) uncovered activated pathways in low-risk cohorts and mobilized pathways in high-risk cohorts. Distinctions in immune cells were noted between risk cohorts. Subsequent experiments validated that reducing SLC27A2 expression in BC cell lines or using the SLC27A2 inhibitor, Lipofermata, effectively inhibited tumor growth. CONCLUSIONS: We pinpointed four nucleotide metabolism-related prognostic genes, demonstrating promising accuracy as a risk prediction tool for patients with BC. SLC27A2 appears to be a potential therapeutic target for BC among these genes.
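The LASSO-penalized Cox selection step described above can be sketched with lifelines' L1-penalized CoxPHFitter on simulated expression data; the penalizer value and gene names are illustrative, and genes with nonzero coefficients form the signature.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
X = pd.DataFrame(rng.standard_normal((120, 10)),
                 columns=[f"gene{i}" for i in range(10)])
risk = 0.8 * X["gene0"] - 0.6 * X["gene1"]          # two truly prognostic genes
X["time"] = rng.exponential(np.exp(-risk) * 20)     # survival times (months)
X["event"] = rng.random(120) < 0.7

cph = CoxPHFitter(penalizer=0.2, l1_ratio=1.0)      # pure L1 penalty (LASSO)
cph.fit(X, duration_col="time", event_col="event")
selected = cph.params_[cph.params_.abs() > 1e-4].index.tolist()
print(selected)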
Invariant Content Representation for Generalizable Medical Image Segmentation
Cheng, Z.
Wang, S.
Gao, Y.
Zhu, Z.
Yan, C.
J Imaging Inform Med2024Journal Article, cited 0 times
Website
ISBI-MR-Prostate-2013
Algorithm Development
Data augmentation
Domain generalization
Invariant content mining
Medical image segmentation
Domain generalization (DG) for medical image segmentation, owing to privacy preservation, prefers learning from a single source domain and expects good robustness on unseen target domains. To achieve this goal, previous methods mainly use data augmentation to expand the distribution of samples and learn invariant content from them. However, most of these methods commonly perform global augmentation, leading to limited augmented-sample diversity. In addition, the style of the augmented image is more scattered than the source domain, which may cause the model to overfit the style of the source domain. To address the above issues, we propose an invariant content representation network (ICRN) to enhance the learning of invariant content and suppress the learning of variable styles. Specifically, we first design a gamma correction-based local style augmentation (LSA) to expand the distribution of samples by augmenting foreground and background styles, respectively. Then, based on the augmented samples, we introduce invariant content learning (ICL) to learn generalizable invariant content from both augmented and source-domain samples. Finally, we design domain-specific batch normalization (DSBN) based style adversarial learning (SAL) to suppress the learning of preferences for source-domain styles. Experimental results show that our proposed method improves the overall Dice coefficient (Dice) by 8.74% and 11.33% and reduces the overall average surface distance (ASD) by 15.88 mm and 3.87 mm on two publicly available cross-domain datasets, Fundus and Prostate, compared to state-of-the-art DG methods. The code is available at https://github.com/ZMC-IIIM/ICRN-DG.
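A minimal sketch of the gamma-based local style augmentation (LSA) idea named above, assuming a binary foreground mask and illustrative gamma ranges; the paper's exact formulation may differ.

import numpy as np

def local_gamma_augment(img, fg_mask, rng, lo=0.5, hi=2.0):
    """Apply separate random gamma corrections inside and outside the mask."""
    g_fg, g_bg = rng.uniform(lo, hi, size=2)
    out = np.where(fg_mask, img ** g_fg, img ** g_bg)
    return out.astype(img.dtype)

rng = np.random.default_rng(0)
img = rng.random((128, 128)).astype(np.float32)   # intensities in [0, 1]
mask = img > 0.6                                  # toy foreground mask
print(local_gamma_augment(img, mask, rng).mean())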
Pure Vision Transformer (CT-ViT) with Noise2Neighbors Interpolation for Low-Dose CT Image Denoising
Marcos, L.
Babyn, P.
Alirezaie, J.
J Imaging Inform Med2024Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Image denoising
CT denoising
Convolutional Neural Networks (CNN)
Deep learning
Machine learning
Medical image processing
Vision transformers
Convolutional neural networks (CNNs) have been used for a wide variety of deep learning applications, especially in computer vision. For medical image processing, researchers have identified certain challenges associated with CNNs. These challenges encompass the generation of less informative features, limitations in capturing both high- and low-frequency information within feature maps, and the computational cost incurred when enhancing receptive fields by deepening the network. Transformers have emerged as an approach aiming to address and overcome these specific limitations of CNNs in the context of medical image analysis. Preservation of all spatial details of medical images is necessary to ensure accurate patient diagnosis. Hence, this research introduces a pure Vision Transformer (ViT) denoising network for medical image processing, specifically low-dose computed tomography (LDCT) image denoising. The proposed model follows a U-Net framework that contains ViT modules with an integrated Noise2Neighbor (N2N) interpolation operation. Five different datasets containing LDCT and normal-dose CT (NDCT) image pairs were used to carry out this experiment. To test the efficacy of the proposed model, the experiment includes comparisons of quantitative and visual results among CNN-based (BM3D, RED-CNN, DRL-E-MP), hybrid CNN-ViT-based (TED-Net), and the proposed pure ViT-based denoising models. The findings of this study showed an increase of about 15-20% in SSIM and PSNR when using self-attention transformers compared with a typical pure CNN. Visual results also showed improvements, especially in rendering fine structural details of CT images.
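The quantitative comparison used above (PSNR and SSIM between a denoised LDCT slice and its NDCT reference) can be sketched with scikit-image; the arrays below are simulated stand-ins for real image pairs.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ndct = rng.random((256, 256)).astype(np.float32)   # reference NDCT slice (simulated)
denoised = np.clip(ndct + 0.05 * rng.standard_normal((256, 256)), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(ndct, denoised, data_range=1.0)
ssim = structural_similarity(ndct, denoised, data_range=1.0)
print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.4f}")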
CbcErDL: Classification of breast cancer from mammograms using enhance image reduction and deep learning framework
Agrawal, Rohit
Singh, Navneet Pratap
Shelke, Nitin Arvind
Tripathi, Kuldeep Narayan
Singh, Ranjeet Kumar
Multimedia Tools and Applications2024Journal Article, cited 0 times
Website
CBIS-DDSM
Breast cancer is a major health concern for women worldwide, and early detection is vital to improve treatment outcomes. While existing techniques in mammogram classification have demonstrated promising results, their limitations become apparent when applied to larger datasets. The decline in performance with increased dataset size highlights the need for further research and advancements in the field to enhance the scalability and generalizability of these techniques. In this study, we propose a framework to classify breast cancer from mammograms using techniques such as mammogram enhancement, discrete cosine transform (DCT) dimensionality reduction, and a deep convolutional neural network (DCNN). The first step is to enhance the mammogram, improving the visibility of key features and reducing noise; for this, we use two-stage Contrast Limited Adaptive Histogram Equalization (CLAHE). DCT is then applied to the enhanced mammograms to reduce redundant data, providing effective reduction while preserving important diagnostic information. In this way, we reduce the computational complexity and improve the results of subsequent classification algorithms. Finally, a DCNN is applied to the size-reduced DCT coefficients to learn feature discrimination and classify the mammograms. The DCNN architecture has been optimized with various techniques to improve its performance, including regularization and hyperparameter tuning. We perform experiments on the DDSM dataset, a large dataset containing approximately 55,000 mammogram images, and demonstrate the effectiveness of the proposed method. We assess the proposed model's performance by computing the precision, recall, accuracy, F1-score, and area under the receiver operating characteristic curve (AUC). We achieve precision and recall values of 0.929 and 0.963, respectively. The classification accuracy of the proposed model is 0.963. Moreover, the F1-score and AUC values are 0.962 and 0.987, respectively. These results are better than those of standard techniques and techniques from the literature. The proposed approach has the potential to assist radiologists in accurately diagnosing breast cancer, thereby facilitating early detection and timely intervention.
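The enhancement-plus-reduction front end can be sketched with OpenCV's CLAHE and SciPy's 2D DCT; the tile sizes, the two-pass CLAHE stand-in, and the retained low-frequency block are assumptions, not the paper's settings.

import numpy as np
import cv2
from scipy.fft import dctn

mammo = (np.random.default_rng(0).random((512, 512)) * 255).astype(np.uint8)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(clahe.apply(mammo))   # two passes as a stand-in for 2-stage CLAHE

coeffs = dctn(enhanced.astype(np.float32), norm="ortho")
reduced = coeffs[:64, :64]                   # low-frequency block kept for the DCNN
print(reduced.shape)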
Integrated Dataset-Preparation System for ML-Based Medical Image Diagnosis with High Clinical Applicability in Various Modalities and Diagnoses
Nguyen, My N
Harada, Kotori
Yoshimoto, Takahiro
Duong, Nam Phong
Sowa, Yoshihiro
Sakai, Koji
Fukuzawa, Masayuki
SN Computer Science2024Journal Article, cited 0 times
Website
Duke-Breast-Cancer-MRI
Multi kernel cross sparse graph attention convolutional neural network for brain magnetic resonance imaging super-resolution
Hua, Xin
Du, Zhijiang
Ma, Jixin
Yu, Hongjian
Biomedical Signal Processing and Control2024Journal Article, cited 0 times
Website
BraTS 2019
GammaKnife-Hippocampal
Brain MRI
Sparse Graph Attention
Deep Learning
Medical image super-resolution
High-resolution Magnetic Resonance Imaging (MRI) is pivotal in both diagnosing and treating brain tumors, assisting physicians by displaying anatomical structures. Utilizing convolutional neural network-based super-resolution methods enables the efficient acquisition of high-resolution MRI images. However, convolutional neural networks are limited by their kernel size, which restricts their ability to capture a wider field of view, potentially leading to feature omission and difficulties in establishing relationships between global and local features. To overcome these shortcomings, we designed a novel network architecture highlighting three main modules: (i) a Multiple Convolutional Feature (MCF) extraction module, which diversifies convolution operations for extracting image features, achieving comprehensive feature representation; (ii) Multiple Groups of Cross-Iterative Feature (MGCIF) modules, promoting inter-channel feature interactions and emphasizing crucial features needed for subsequent learning; and (iii) a graph neural network module based on a sparse attention mechanism, capable of connecting distant pixel features and identifying influential neighboring pixels for target-pixel inpainting. To evaluate the accuracy of the proposed network, we conducted tests on four datasets, comprising two sets of brain tumor data and two sets of healthy head MRI data, all of which underwent varying degrees of degradation. We conducted experiments using nineteen super-resolution (SR) models. The results demonstrate that our method outperforms current leading-edge methods. Across the four datasets, our model improved Peak Signal-to-Noise Ratio (PSNR) scores over the second-place model by 1.16%, 1.08%, 0.19%, and 0.53% for ×2, and by 2.26%, 1.67%, 0.13%, and 0.45% for ×4, respectively.
Multi-scale signaling and tumor evolution in high-grade gliomas
Liu, Jingxian
Cao, Song
Imbach, Kathleen J.
Gritsenko, Marina A.
Lih, Tung-Shing M.
Kyle, Jennifer E.
Yaron-Barir, Tomer M.
Binder, Zev A.
Li, Yize
Strunilin, Ilya
Wang, Yi-Ting
Tsai, Chia-Feng
Ma, Weiping
Chen, Lijun
Clark, Natalie M.
Shinkle, Andrew
Naser Al Deen, Nataly
Caravan, Wagma
Houston, Andrew
Simin, Faria Anjum
Wyczalkowski, Matthew A.
Wang, Liang-Bo
Storrs, Erik
Chen, Siqi
Illindala, Ritvik
Li, Yuping D.
Jayasinghe, Reyka G.
Rykunov, Dmitry
Cottingham, Sandra L.
Chu, Rosalie K.
Weitz, Karl K.
Moore, Ronald J.
Sagendorf, Tyler
Petyuk, Vladislav A.
Nestor, Michael
Bramer, Lisa M.
Stratton, Kelly G.
Schepmoes, Athena A.
Couvillion, Sneha P.
Eder, Josie
Kim, Young-Mo
Gao, Yuqian
Fillmore, Thomas L.
Zhao, Rui
Monroe, Matthew E.
Southard-Smith, Austin N.
Li, Yang E.
Jui-Hsien Lu, Rita
Johnson, Jared L.
Wiznerowicz, Maciej
Hostetter, Galen
Newton, Chelsea J.
Ketchum, Karen A.
Thangudu, Ratna R.
Barnholtz-Sloan, Jill S.
Wang, Pei
Fenyö, David
An, Eunkyung
Thiagarajan, Mathangi
Robles, Ana I.
Mani, D. R.
Smith, Richard D.
Porta-Pardo, Eduard
Cantley, Lewis C.
Iavarone, Antonio
Chen, Feng
Mesri, Mehdi
Nasrallah, MacLean P.
Zhang, Hui
Resnick, Adam C.
Chheda, Milan G.
Rodland, Karin D.
Liu, Tao
Ding, Li
Cancer Cell2024Journal Article, cited 0 times
Website
CPTAC-GBM
UPENN-GBM
glioblastoma
glycoproteomics
tumor recurrence
lipidome
metabolome
proteomics
single nuclei RNA-seq
single nuclei ATAC-seq
Although genomic anomalies in glioblastoma (GBM) have been well studied for over a decade, its 5-year survival rate remains lower than 5%. We seek to expand the molecular landscape of high-grade glioma, composed of IDH-wildtype GBM and IDH-mutant grade 4 astrocytoma, by integrating proteomic, metabolomic, lipidomic, and post-translational modifications (PTMs) with genomic and transcriptomic measurements to uncover multi-scale regulatory interactions governing tumor development and evolution. Applying 14 proteogenomic and metabolomic platforms to 228 tumors (212 GBM and 16 grade 4 IDH-mutant astrocytoma), including 28 at recurrence, plus 18 normal brain samples and 14 brain metastases as comparators, reveals heterogeneous upstream alterations converging on common downstream events at the proteomic and metabolomic levels and changes in protein-protein interactions and glycosylation site occupancy at recurrence. Recurrent genetic alterations and phosphorylation events on PTPN11 map to important regulatory domains in three dimensions, suggesting a central role for PTPN11 signaling across high-grade gliomas.
Quality-driven deep cross-supervised learning network for semi-supervised medical image segmentation
Zhang, Zhenxi
Zhou, Heng
Shi, Xiaoran
Ran, Ran
Tian, Chunna
Zhou, Feng
Computers in Biology and Medicine2024Journal Article, cited 0 times
Website
Pancreas-CT
Semi-supervised learning
Medical image segmentation
PyTorch
Cardiac segmentation
Semi-supervised medical image segmentation presents a compelling approach to streamline large-scale image analysis, alleviating annotation burdens while maintaining comparable performance. Despite recent strides in cross-supervised training paradigms, challenges persist in addressing sub-network disagreement and training efficiency and reliability. In response, our paper introduces a novel cross-supervised learning framework, the Quality-driven Deep Cross-supervised Learning Network (QDC-Net). QDC-Net incorporates both an evidential sub-network and a vanilla sub-network, leveraging their complementary strengths to effectively handle disagreement. To improve the reliability and efficiency of semi-supervised training, we introduce a real-time quality estimation of the model's segmentation performance and propose a directional cross-training approach through the design of directional weights. We further design a truncated form of sample-wise loss weighting to mitigate the impact of inaccurate predictions and collapsed samples in semi-supervised training. Extensive experiments on the LA and Pancreas-CT datasets demonstrate that QDC-Net surpasses other state-of-the-art methods in semi-supervised medical image segmentation. Code release is imminent at https://github.com/Medsemiseg.
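To make the "truncated sample-wise loss weighting" idea concrete, a hedged PyTorch sketch follows; the agreement-based weighting rule and the quantile threshold are illustrative assumptions, not QDC-Net's exact formulation:

```python
# Sketch: truncated sample-wise loss weighting for cross-supervision.
import torch
import torch.nn.functional as F

def truncated_weighted_loss(logits_a, logits_b, trunc_quantile=0.9):
    """Weight each unlabeled sample by cross-network agreement and
    truncate the largest per-sample losses (likely noisy or collapsed)."""
    pseudo = logits_b.detach().argmax(dim=1)               # B supervises A
    per_sample = F.cross_entropy(logits_a, pseudo, reduction="none")
    agreement = (logits_a.argmax(dim=1) == pseudo).float() # agreement weight
    cutoff = torch.quantile(per_sample, trunc_quantile)
    keep = (per_sample <= cutoff).float()                  # drop extreme losses
    weights = agreement * keep
    return (weights * per_sample).sum() / weights.sum().clamp(min=1.0)
```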
Addressing label noise in leukemia image classification using small loss approach and pLOF with weighted-average ensemble
Aziz, Md Tarek
Mahmud, S. M. Hasan
Goh, Kah Ong Michael
Nandi, Dip
Egyptian Informatics Journal2024Journal Article, cited 0 times
Website
C-NMC 2019
Algorithm Development
Pathomics
Classification
Label noise
Model uncertainty
Feature extraction
Shapley values
Ensemble learning
Explainable AI
Acute Lymphoblastic Leukemia (ALL)
Machine learning (ML) and deep learning (DL) models have been extensively explored for the early diagnosis of various cancer diseases, including leukemia, with many of them achieving significant performance improvements comparable to those of human experts. However, challenges like limited image data, inaccurate annotations, and prediction reliability still hinder their broad implementation to establish a trustworthy computer-aided diagnosis (CAD) system. This paper introduces a novel weighted-average ensemble model for classifying Acute Lymphoblastic Leukemia, along with a reliable CAD system that combines the strengths of both ML and DL approaches. First, a variety of filtering methods are extensively analyzed to determine the most suitable image representation, with subsequent data augmentation techniques to expand the training data. Second, a modified, fine-tuned VGG-19 model is used as a feature extractor to obtain meaningful features from the training samples. Third, a small-loss approach and probabilistic local outlier factor (pLOF) are applied to the extracted features to address the label-noise issue. Fourth, we propose a weighted-average ensemble model built on the top five models as base learners, with weights calculated from their model uncertainty to ensure reliable predictions. Fifth, we calculate Shapley values based on cooperative game theory and perform feature selection with different feature combinations to determine the optimal number of features using SHAP. Finally, we integrate these strategies to develop an interpretable CAD system. This system not only predicts the disease but also generates Grad-CAM images to visualize potentially affected areas, enhancing both clarity and diagnostic insight. All of our code is provided in the following repository: https://github.com/taareek/leukemia-classification
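A minimal sketch of an uncertainty-weighted average ensemble of the kind described; using mean predictive entropy as the uncertainty measure is an illustrative assumption, not necessarily the paper's exact weighting:

```python
# Sketch: weight each base learner inversely to its mean predictive entropy.
import numpy as np

def entropy(probs, eps=1e-12):
    return -np.sum(probs * np.log(probs + eps), axis=1)  # per-sample entropy

def uncertainty_weighted_average(prob_list):
    """prob_list: list of (n_samples, n_classes) arrays, one per base learner."""
    uncertainties = np.array([entropy(p).mean() for p in prob_list])
    weights = 1.0 / (uncertainties + 1e-12)   # less uncertain -> more weight
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, prob_list))

# Usage: final = uncertainty_weighted_average([m.predict_proba(X) for m in models])
```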
The prognostic relevance of a gene expression signature in MRI-defined highly vascularized Glioblastoma
Background
The vascular heterogeneity of glioblastomas (GB) remains an important area of research, since tumor progression and patient prognosis are closely tied to this feature. With this study, we aim to identify gene expression profiles associated with MRI-defined tumor vascularity and to investigate their relationship with patient prognosis.
Methods
The study employed MRI parameters calculated with the DSC Perfusion Quantification of the ONCOhabitats glioma analysis software and RNA-seq data from the TCGA-GBM project dataset. In total, we had 147 RNA-seq samples, 15 of which also had MRI parameter information. We analyzed the gene expression profiles associated with MRI-defined tumor vascularity using differential gene expression analysis and performed log-rank tests to assess the correlation between the identified genes and patient prognosis.
Results
The findings of our research reveal a set of 21 overexpressed genes associated with the high vascularity pattern. Notably, several of these overexpressed genes have been previously implicated in worse prognosis based on existing literature. Our log-rank test further validates that the collective upregulation of these genes is indeed correlated with an unfavorable prognosis. This set of genes includes a variety of molecules, such as cytokines, receptors, ligands, and other molecules with diverse functions.
Conclusions
Our findings suggest that the set of 21 genes overexpressed in the high-vascularity group could potentially serve as prognostic markers for GB patients. These results highlight the importance of further investigating the relationship between the molecules underlying vascularity in GB, such as cytokines and receptors, and its observation through MRI, and of developing targeted therapies for this aggressive disease.
Fair evaluation of federated learning algorithms for automated breast density classification: The results of the 2022 ACR-NCI-NVIDIA federated learning challenge
Schmidt, Kendall
Bearce, Benjamin
Chang, Ken
Coombs, Laura
Farahani, Keyvan
Elbatel, Marawan
Mouheb, Kaouther
Marti, Robert
Zhang, Ruipeng
Zhang, Yao
Wang, Yanfeng
Hu, Yaojun
Ying, Haochao
Xu, Yuyang
Testagrose, Conrad
Demirer, Mutlu
Gupta, Vikash
Akünal, Ünal
Bujotzek, Markus
Maier-Hein, Klaus H.
Qin, Yi
Li, Xiaomeng
Kalpathy-Cramer, Jayashree
Roth, Holger R.
Medical Image Analysis2024Journal Article, cited 0 times
Website
CBIS-DDSM
Federated learning
Challenge
Algorithm Development
BREAST
Mammography
Radiomic features
The correct interpretation of breast density is important in the assessment of breast cancer risk. AI has been shown capable of accurately predicting breast density; however, due to differences in imaging characteristics across mammography systems, models built using data from one system do not generalize well to others. Though federated learning (FL) has emerged as a way to improve the generalizability of AI without the need to share data, the best way to preserve features from all training data during FL is an active area of research. To explore FL methodology, the breast density classification FL challenge was hosted in partnership with the American College of Radiology, Harvard Medical School's Mass General Brigham, University of Colorado, NVIDIA, and the National Institutes of Health National Cancer Institute. Challenge participants were able to submit Docker containers capable of implementing FL on three simulated medical facilities, each containing a unique large mammography dataset. The breast density FL challenge ran from June 15 to September 5, 2022, attracting seven finalists from around the world. The winning FL submission reached a linear kappa score of 0.653 on the challenge test data and 0.413 on an external testing dataset, scoring comparably to a model trained on the same data in a central location.
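As background for the kind of aggregation such challenges build on, a minimal FedAvg-style sketch follows; it averages site models weighted by local dataset size, and is not any finalist's actual algorithm:

```python
# Sketch: FedAvg aggregation across simulated sites.
import numpy as np

def fedavg(site_weights, site_sizes):
    """site_weights: one list of np.ndarrays (model layers) per site;
    site_sizes: number of training samples at each site."""
    total = float(sum(site_sizes))
    n_layers = len(site_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = np.zeros_like(site_weights[0][layer], dtype=np.float64)
        for w, n in zip(site_weights, site_sizes):
            acc += (n / total) * w[layer]   # size-weighted average
        averaged.append(acc)
    return averaged  # broadcast back to all sites for the next round
```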
Prognostication of colorectal cancer liver metastasis by CE-based radiomics and machine learning
Luo, Xijun
Deng, Hui
Xie, Fei
Wang, Liyan
Liang, Junjie
Zhu, Xianjun
Li, Tao
Tang, Xingkui
Liang, Weixiong
Xiang, Zhiming
He, Jialin
Translational Oncology2024Journal Article, cited 0 times
Website
Colorectal-Liver-Metastases
Colorectal cancer
Liver metastasis
Machine learning
Radiomics
Disease-free survival
Cox regression
Elastic net and random survival forest
The liver is the most common organ for the formation of colorectal cancer metastasis. Non-invasive prognostication of colorectal cancer liver metastasis (CRLM) may better inform clinicians for decision-making. Contrast-enhanced computed tomography images of 180 CRLM cases were included in the final analyses. Radiomics features, including shape, first-order, wavelet, and texture, were extracted with Pyradiomics, followed by feature engineering by penalized Cox regression. Radiomics signatures were constructed for disease-free survival (DFS) by both elastic net (EN) and random survival forest (RSF) algorithms. The prognostic potential of the radiomics signatures was demonstrated by Kaplan-Meier curves and multivariate Cox regression. Eleven radiomics features were selected for prognostic modelling by the EN algorithm, and 835 features by the RSF algorithm. The survival heatmap indicates a negative correlation between EN or RSF risk scores and DFS. The radiomics signature from the EN algorithm successfully separates the DFS of high-risk and low-risk cases in the training dataset (log-rank test: p < 0.01, hazard ratio: 1.45 (1.07-1.96), p < 0.01) and the test dataset (hazard ratio: 1.89 (1.17-3.04), p < 0.05). The RSF algorithm shows better prognostic implication potential for DFS in the training dataset (log-rank test: p < 0.001, hazard ratio: 2.54 (1.80-3.61), p < 0.0001) and the test dataset (log-rank test: p < 0.05, hazard ratio: 1.84 (1.15-2.96), p < 0.05). Radiomics features have the potential to predict DFS in CRLM cases.
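A hedged sketch of the elastic-net-penalized Cox step used here for radiomics feature engineering, via the lifelines package; the file and column names are hypothetical:

```python
# Sketch: penalized Cox regression for radiomics feature selection.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("radiomics_features.csv")  # hypothetical file:
# one row per patient, feature columns plus 'dfs_months' and 'progressed'

cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)  # elastic net: L1/L2 mix
cph.fit(df, duration_col="dfs_months", event_col="progressed")

selected = cph.params_[cph.params_.abs() > 1e-6].index  # surviving features
risk_scores = cph.predict_partial_hazard(df)            # per-patient risk
```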
SAROS: A dataset for whole-body region and organ segmentation in CT imaging
Koitka, S.
Baldini, G.
Kroll, L.
van Landeghem, N.
Pollok, O. B.
Haubold, J.
Pelka, O.
Kim, M.
Kleesiek, J.
Nensa, F.
Hosch, R.
Sci Data2024Journal Article, cited 0 times
Website
SAROS
ACRIN-NSCLC-FDG-PET
CPTAC-LSCC
Soft-tissue-Sarcoma
NSCLC Radiogenomics
Lung-PET-CT-Dx
NSCLC-Radiomics
LIDC-IDRI
TCGA-LUAD
TCGA-STAD
Anti-PD-1_MELANOMA
TCGA-UCEC
CPTAC-CM
TCGA-LUSC
ACRIN-FLT-Breast
Anti-PD-1_Lung
HNSCC
QIN-HEADNECK
CPTAC-LUAD
C4KC-KiTS
Head-Neck Cetuximab
TCGA-LIHC
CPTAC-PDA
NSCLC-Radiomics-Genomics
ACRIN-HNSCC-FDG-PET-CT
Pancreas-CT
TCGA-HNSC
COVID-19-NY-SBU
Female
Humans
Male
Segmentation
Algorithm Development
Model
Image Processing, Computer-Assisted
*Tomography, X-Ray Computed
*Whole Body Imaging
The Sparsely Annotated Region and Organ Segmentation (SAROS) dataset was created using data from The Cancer Imaging Archive (TCIA) to provide a large open-access CT dataset with high-quality annotations of body landmarks. In-house segmentation models were employed to generate annotation proposals on randomly selected cases from TCIA. The dataset includes 13 semantic body region labels (abdominal/thoracic cavity, bones, brain, breast implant, mediastinum, muscle, parotid/submandibular/thyroid glands, pericardium, spinal cord, subcutaneous tissue) and six body part labels (left/right arm/leg, head, torso). Case selection was based on the DICOM series description, gender, and imaging protocol, resulting in 882 patients (438 female) for a total of 900 CTs. Manual review and correction of proposals were conducted in a continuous quality control cycle. Only every fifth axial slice was annotated, yielding 20,150 annotated slices from 28 data collections. For reproducibility on downstream tasks, five cross-validation folds and a test set were pre-defined. The SAROS dataset serves as an open-access resource for training and evaluating novel segmentation models, covering various scanner vendors and diseases.
Comprehensive Collection of Whole-Slide Images and Genomic Profiles for Patients with Bladder Cancer
Xu, Pei-Hang
Li, Tianqi
Qu, Fengmei
Tian, Mingkang
Wang, Jun
Gan, Hualei
Ye, Dingwei
Ren, Fei
Shen, Yijun
Scientific Data2024Journal Article, cited 0 times
Website
TCGA-BLCA
3D medical image encryption algorithm using biometric key and cubic S-box
Liu, Yunhao
Xue, Ru
Physica Scripta2024Journal Article, cited 0 times
Website
QIN BREAST
StageII-Colorectal-CT
TCGA-CESC
Security
Simulation
Considering the scarcity of research on 3D medical image encryption, this paper proposes a novel 3D medical image encryption scheme based on a biometric key and a cubic S-box. To enhance data security, biometric keys are utilized to overcome the limitations of traditional methods, whose secret keys have no practical meaning, fixed length, and finite key space, while a cubic S-box is constructed to increase the nonlinearity of the image cryptosystem. The proposed cryptosystem consists of four phases: pseudo-random sequence generation, confusion, substitution, and diffusion. First, a stepwise iterative algorithm based on coupled chaotic systems generates pseudo-random sequences for confusion and diffusion. Second, a confusion algorithm based on multiple sorting scrambles pixel positions in the 3D images. Third, guided by the designed cubic S-box, pixel substitution is executed sequentially. Last, a diffusion algorithm based on ECA and finite-field multiplication increases the plaintext sensitivity of the cryptosystem by concealing the statistical characteristics of the plaintext. Simulation experiments performed on multiple 3D medical images demonstrate that the proposed encryption scheme exhibits favorable statistical performance, a sufficiently large key space, and strong system sensitivity and robustness, and can resist various typical cryptographic attacks.
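To illustrate the general pattern such schemes build on (chaotic keystream driving pixel diffusion), a hedged sketch follows; the logistic map and XOR diffusion here are simplifications, whereas the paper's actual scheme uses coupled chaotic systems, a cubic S-box, ECA, and finite-field multiplication:

```python
# Sketch: chaotic-map keystream with XOR diffusion over a 3D volume.
import numpy as np

def logistic_keystream(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x) and quantize to bytes."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return (xs * 255).astype(np.uint8)

def diffuse(volume_u8, x0=0.3141592, r=3.99):
    """XOR every voxel with the keystream; the operation is self-inverse."""
    flat = volume_u8.ravel()
    ks = logistic_keystream(x0, r, flat.size)
    return np.bitwise_xor(flat, ks).reshape(volume_u8.shape)
```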
A Multi-View Deep Evidential Learning Approach for Mammogram Density Classification
Gudhe, Naga Raju
Mazen, Sudah
Sund, Reijo
Kosma, Veli-Matti
Behravan, Hamid
Mannermaa, Arto
IEEE Access2024Journal Article, cited 0 times
Website
CBIS-DDSM
CMMD
Mammography
Classification
Image normalization
Convolutional Neural Network (CNN)
ResNet-101
ResNet-50
ResNet18
DenseNet
EfficientNet-B3
EfficientNet
Artificial intelligence algorithms, specifically deep learning, can assist radiologists by automating mammogram density assessment. However, trust in such algorithms must be established before they are widely adopted in clinical settings. In this study, we present an evidential deep learning approach called MV-DEFEAT, incorporating the strengths of Dempster-Shafer evidential theory and subjective logic, for the mammogram density classification task. The framework combines evidence from multiple mammogram views to mimic a radiologist's decision-making process. In this study, we utilized four open-source datasets, namely VinDr-Mammo, DDSM, CMMD, and VTB, to mitigate inherent biases and provide a diverse representation of the data. Our experimental findings demonstrate MV-DEFEAT's superior performance in terms of weighted macro-average area under the receiver operating characteristic curve (AUC) compared to the state-of-the-art multi-view deep learning model, referred to as MVDL. MV-DEFEAT yields a relative improvement of 12.57%, 14.51%, 19.9%, and 22.53% on the VTB, VinDr-Mammo, CMMD, and DDSM datasets, respectively, for the mammogram density classification task. Additionally, for BIRADS classification and the classification of mammograms as benign or malignant, MV-DEFEAT exhibits substantial enhancements compared to MVDL, with relative improvements of 31.46% and 50.78% on the DDSM and VinDr-Mammo datasets, respectively. These results underscore the efficacy of our approach. Through meticulous curation of diverse datasets and comprehensive comparative analyses, we ensure the robustness and reliability of our findings, thereby enhancing trust in adopting the MV-DEFEAT framework for various mammogram assessment tasks in clinical settings.
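For readers unfamiliar with the evidential formulation, a short sketch of the standard subjective-logic quantities such approaches work with (non-negative evidence parameterizing a Dirichlet, yielding per-class belief masses and an uncertainty mass); this illustrates the general formulation, not MV-DEFEAT's exact multi-view fusion:

```python
# Sketch: Dirichlet-based belief and uncertainty in evidential deep learning.
import torch

def dirichlet_opinion(evidence):
    """evidence: (batch, n_classes) non-negative network outputs."""
    alpha = evidence + 1.0                      # Dirichlet parameters
    strength = alpha.sum(dim=1, keepdim=True)   # Dirichlet strength S
    belief = evidence / strength                # per-class belief b_k = e_k / S
    uncertainty = evidence.shape[1] / strength  # u = K / S
    prob = alpha / strength                     # expected class probabilities
    return belief, uncertainty, prob
```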
Two-Stage Training Framework Using Multicontrast MRI Radiomics for IDH Mutation Status Prediction in Glioma
Truong, Nghi CD
Bangalore Yogananda, Chandan Ganesh
Wagner, Benjamin C
Holcomb, James M
Reddy, Divya
Saadat, Niloufar
Hatanpaa, Kimmo J
Patel, Toral R
Fei, Baowei
Lee, Matthew D
Radiology: Artificial Intelligence2024Journal Article, cited 1 times
Website
IvyGAP
Prostate Cancer Detection from MRI Using Efficient Feature Extraction with Transfer Learning
Islam, Rafiqul
Imran, Al
Rabbi, Md Fazle
Farhan, Mohd
Prostate Cancer2024Journal Article, cited 0 times
Website
PROSTATE-MRI
Computer Aided Detection (CADe)
Random forest classifier
Transfer learning
Radiomic features
Machine Learning
Deep Learning
Prostate cancer is a common cancer with significant implications for global health. Prompt and precise identification is crucial for efficient treatment planning and enhanced patient outcomes. This study investigates the utilization of machine learning techniques to diagnose prostate cancer, emphasizing deep learning models, namely VGG16, VGG19, ResNet50, and ResNet50V2, to extract relevant features; the random forest approach then uses these features for classification. The study begins with a thorough comparative examination of the deep learning architectures outlined above to evaluate their effectiveness in extracting significant characteristics from prostate cancer imaging data. Key metrics such as sensitivity, specificity, and accuracy are used to assess the models' efficacy. With an accuracy of 99.64%, ResNet50 outperformed the other tested models at identifying important features in images of prostate cancer. Furthermore, interpretability analysis aims to offer valuable insights into the decision-making process, addressing a critical barrier to acceptance in clinical practice. The random forest classifier, a powerful ensemble learning method renowned for its adaptability and ability to handle intricate datasets, then uses the extracted characteristics as input; it seeks to identify patterns in the feature space and produce precise predictions on the presence or absence of prostate cancer. In addition, the study tackles the restricted availability of datasets by utilizing transfer learning methods to refine the deep learning models using a small amount of annotated prostate cancer data, with the objective of improving the models' ability to generalize across different patient populations and clinical situations. This study's results are useful because they show how well VGG16, VGG19, ResNet50, and ResNet50V2 work for feature extraction in prostate cancer diagnosis when combined with random forest classification. The results provide a basis for creating reliable and easily understandable machine learning-based diagnostic tools for detecting prostate cancer, enhancing the possibility of early and precise diagnosis in clinical settings.
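A minimal sketch of the transfer-learning pipeline this abstract describes: a frozen ImageNet ResNet50 backbone as feature extractor feeding a random forest; array names, shapes, and hyperparameters are illustrative assumptions:

```python
# Sketch: ResNet50 feature extraction + random forest classification.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
backbone.trainable = False  # frozen feature extractor

def extract_features(images_u8):
    """images_u8: (n, 224, 224, 3) uint8 MRI slices rendered as RGB."""
    x = tf.keras.applications.resnet50.preprocess_input(
        images_u8.astype(np.float32))
    return backbone.predict(x)  # (n, 2048) pooled features

# X_train, y_train assumed prepared elsewhere:
# clf = RandomForestClassifier(n_estimators=500, random_state=0)
# clf.fit(extract_features(X_train), y_train)
```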
Enhancing pathological complete response prediction in breast cancer: the role of dynamic characterization of DCE-MRI and its association with tumor heterogeneity
Zhang, X.
Teng, X.
Zhang, J.
Lai, Q.
Cai, J.
Breast Cancer Res2024Journal Article, cited 0 times
Website
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Radiomics
Treatment response prediction
BACKGROUND: Early prediction of pathological complete response (pCR) is important for deciding appropriate treatment strategies for patients. In this study, we aimed to quantify the dynamic characteristics of dynamic contrast-enhanced magnetic resonance images (DCE-MRI) and investigate their value for improving pCR prediction as well as their association with tumor heterogeneity in breast cancer patients. METHODS: The DCE-MRI, clinicopathologic records, and full transcriptomic data of 785 breast cancer patients receiving neoadjuvant chemotherapy were retrospectively included from a public dataset. Dynamic features of DCE-MRI were computed from extracted phase-varying radiomic feature series using the 22 CAnonical Time-series CHaracteristics (catch22). A dynamic model and a radiomic model were developed by logistic regression using dynamic features and traditional radiomic features, respectively. Various combined models with clinical factors were also developed to find the optimal combination, and the significance of each component was evaluated. All models were evaluated on an independent test set in terms of area under the receiver operating characteristic curve (AUC). To explore the potential underlying biological mechanisms, radiogenomic analysis was implemented on patient subgroups stratified by the dynamic model to identify differentially expressed genes (DEGs) and enriched pathways. RESULTS: A 10-feature dynamic model and a 4-feature radiomic model were developed (AUC = 0.688, 95%CI: 0.635-0.741 and AUC = 0.650, 95%CI: 0.595-0.705) and tested (AUC = 0.686, 95%CI: 0.594-0.778 and AUC = 0.626, 95%CI: 0.529-0.722), with the dynamic model showing slightly higher AUC (train p = 0.181, test p = 0.222). The combined model of clinical, radiomic, and dynamic features achieved the highest AUC in pCR prediction (train: 0.769, 95%CI: 0.722-0.816 and test: 0.762, 95%CI: 0.679-0.845). Compared with the clinical-radiomic combined model (train AUC = 0.716, 95%CI: 0.665-0.767 and test AUC = 0.695, 95%CI: 0.656-0.714), adding the dynamic component brought significant improvement in model performance (train p < 0.001 and test p = 0.005). Radiogenomic analysis identified 297 DEGs, including CXCL9, CCL18, and HLA-DPB1, which are known to be associated with breast cancer prognosis or angiogenesis. Gene set enrichment analysis further revealed enrichment of gene ontology terms and pathways related to the immune system. CONCLUSION: Dynamic characteristics of DCE-MRI were quantified and used to develop a dynamic model for improving pCR prediction in breast cancer patients. The dynamic model was associated with tumor heterogeneity in prognosis-related gene expression and immune-related pathways.
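As a pointer to how the catch22 dynamic features can be computed in practice, a hedged sketch using the pycatch22 package; the feature series shown is invented for illustration, and the exact API usage is an assumption about that package rather than the paper's code:

```python
# Sketch: catch22 descriptors of a phase-varying radiomic feature series.
import pycatch22

# One radiomic feature's value across the DCE-MRI phases of a patient
feature_series = [412.5, 455.1, 502.8, 489.3, 470.2, 461.7]

res = pycatch22.catch22_all(feature_series)
dynamic_features = dict(zip(res["names"], res["values"]))  # 22 descriptors
```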
Multi-institutional validation of a radiomics signature for identification of postoperative progression of soft tissue sarcoma
Yu, Y.
Guo, H.
Zhang, M.
Hou, F.
Yang, S.
Huang, C.
Duan, L.
Wang, H.
Cancer Imaging2024Journal Article, cited 0 times
Website
BACKGROUND: To develop a magnetic resonance imaging (MRI)-based radiomics signature for evaluating the risk of soft tissue sarcoma (STS) disease progression. METHODS: We retrospectively enrolled 335 patients with STS (training, validation, and The Cancer Imaging Archive sets, n = 168, n = 123, and n = 44, respectively) who underwent surgical resection. Regions of interest were manually delineated using two MRI sequences. Among 12 machine learning-predicted signatures, the best signature was selected, and its prediction score was input into Cox regression analysis to build the radiomics signature. A nomogram was created by combining the radiomics signature with a clinical model constructed using MRI and clinical features. Progression-free survival was analyzed in all patients. We assessed the performance and clinical utility of the models with reference to the time-dependent receiver operating characteristic curve, area under the curve, concordance index, integrated Brier score, and decision curve analysis. RESULTS: For the combined features subset, the minimum redundancy maximum relevance-least absolute shrinkage and selection operator regression algorithm + decision tree classifier had the best prediction performance. The radiomics signature based on the optimal machine learning-predicted signature, and built using Cox regression analysis, had greater prognostic capability and lower error than the nomogram and clinical model (concordance index, 0.758 and 0.812; area under the curve, 0.724 and 0.757; integrated Brier score, 0.080 and 0.143, in the validation and The Cancer Imaging Archive sets, respectively). The optimal cutoff was -0.03 and cumulative risk rates were calculated. DATA CONCLUSION: To assess the risk of STS progression, the radiomics signature may have better prognostic power than a nomogram/clinical model.
Precision Lung Cancer Segmentation from CT & PET Images Using Mask2Former
Lung cancer is a leading cause of death worldwide, highlighting the critical need for early diagnosis. Lung image analysis and segmentation are essential steps in this process, but manual segmentation of medical images is extremely time-consuming for radiation oncologists. The complexity of this task is heightened by the significant variability in lung tumors, which can differ greatly in size, shape, and texture due to factors like tumor subtype, stage, and patient-specific characteristics. Traditional segmentation methods often struggle to accurately capture this diversity. To address these challenges, we propose a lung cancer diagnosis system based on Mask2Former, utilizing CT (Computed Tomography) and PET (Positron Emission Tomography) images. This system excels in generating high-quality instance segmentation masks, enabling it to better adapt to the heterogeneous nature of lung tumors compared to traditional methods. Additionally, our system classifies the segmented output as either benign or malignant, leveraging a self-supervised network. The proposed approach offers a powerful tool for early diagnosis and effective management of lung cancer using CT and PET data. Extensive experiments demonstrate its effectiveness in achieving improved segmentation and classification results.
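For orientation, a hedged sketch of running a pretrained Mask2Former instance-segmentation model via Hugging Face transformers; the COCO checkpoint and the input file shown are generic stand-ins, not the paper's CT/PET-finetuned weights:

```python
# Sketch: Mask2Former instance segmentation with transformers.
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

ckpt = "facebook/mask2former-swin-tiny-coco-instance"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = Mask2FormerForUniversalSegmentation.from_pretrained(ckpt)

image = Image.open("ct_slice.png").convert("RGB")  # hypothetical slice
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
result = processor.post_process_instance_segmentation(
    outputs, target_sizes=[image.size[::-1]])[0]
masks = result["segmentation"]  # per-pixel instance ids
```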
Multimodal Data-Driven Intelligent Systems for Breast Cancer Prediction
A Novel Deep Learning Model for Pancreas Segmentation: Pascal U-Net
Kurnaz, Ender
Ceylan, Rahime
Bozkurt, Mustafa Alper
Cebeci, Hakan
Koplay, Mustafa
Inteligencia Artificial2024Journal Article, cited 0 times
Website
Pancreas-CT
Convolutional Neural Network (CNN)
U-Net
Segmentation
Organ segmentation
A robust and reliable automated organ segmentation from abdominal images is a crucial problem in both quantitative imaging analysis and computer-aided diagnosis. In particular, automatic pancreas segmentation from abdominal CT images is a most challenging task, due to two main aspects: (1) high variability in anatomy (such as shape and size) and location across different patients, and (2) low contrast with neighboring tissues. For these reasons, achieving high accuracy in pancreas segmentation is a hard image segmentation problem. In this paper, we propose a novel deep learning model, a convolutional neural network-based model called Pascal U-Net, for pancreas segmentation. Performance of the proposed model is evaluated on The Cancer Imaging Archive (TCIA) Pancreas CT database and an abdominal CT dataset taken from the Selcuk University Medicine Faculty Radiology Department. During the experimental studies, k-fold cross-validation is used. Furthermore, results of the proposed model are compared with those of the traditional U-Net. Comparing the results obtained by Pascal U-Net and the traditional U-Net for different batch sizes and fold numbers shows that experiments on both datasets validate the effectiveness of the Pascal U-Net model for pancreas segmentation.
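As a reference point for the family of models Pascal U-Net extends, a minimal U-Net-style encoder-decoder in PyTorch; depths and channel widths are illustrative, and this is not the paper's architecture:

```python
# Sketch: tiny U-Net with skip connections for single-channel CT slices.
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, n_classes=1):
        super().__init__()
        self.enc1, self.enc2 = double_conv(1, 32), double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)  # per-pixel pancreas logits
```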
Studies on the treatment planning method for online adaptive proton therapy
This research integrates specialized attention mechanisms into deep neural network architectures for detecting indications of lung carcinoma in grayscale computed tomography images. We propose several adaptations of the traditional non-local block, infusing it with attention mechanisms tailored to the idiosyncrasies of medical imaging data. These adaptations yielded measurable improvements in the performance metrics of the underlying deep neural network model. Our solution reduced the model parameter count without significantly compromising classification efficiency, and enabled a streamlined approach to feature extraction, contributing to enhanced interpretability and efficiency in the recognition process. These advancements were validated on test subsets curated from the Open Joint Monochrome Lungs Computer Tomography dataset, the Lung Image Database Consortium and Image Database Resource Initiative dataset, the Iraq-Oncology Teaching Hospital / National Center for Cancer Diseases dataset, Radiology Moscow, The Cancer Imaging Archive, and several others.
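For context, a sketch of the standard non-local block (in the style of Wang et al., 2018) that such adaptations start from; this is the generic building block, not the paper's modified variant:

```python
# Sketch: standard 2-D non-local (self-attention) block in PyTorch.
import torch
import torch.nn as nn

class NonLocalBlock2d(nn.Module):
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv2d(channels, reduced, 1)  # query projection
        self.phi = nn.Conv2d(channels, reduced, 1)    # key projection
        self.g = nn.Conv2d(channels, reduced, 1)      # value projection
        self.out = nn.Conv2d(reduced, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (b, hw, c')
        k = self.phi(x).flatten(2)                    # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)      # (b, hw, c')
        attn = torch.softmax(q @ k, dim=-1)           # pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                        # residual connection
```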
Deep learning infers clinically relevant protein levels and drug response in breast cancer from unannotated pathology images
Liu, H.
Xie, X.
Wang, B.
NPJ Breast Cancer2024Journal Article, cited 0 times
Website
CPTAC-BRCA
HER2 tumor ROIs
TCGA-BRCA
BREAST
Breast cancer
Imaging features
Deep Learning
Weakly supervised learning
Whole Slide Imaging (WSI)
Biomarker
Pathomics
Algorithm Development
Computational pathology has been demonstrated to effectively uncover tumor-related genomic alterations and transcriptomic patterns. Although proteomics has shown great potential in the field of precision medicine, few studies have focused on the computational prediction of protein levels from pathology images. In this paper, we assume that deep learning-based pathological features imply the protein levels of tumor biomarkers that are indicative of prognosis and drug response. For this purpose, we propose wsi2rppa, a weakly supervised contrastive learning framework to infer the protein levels of tumor biomarkers from whole slide images (WSIs) in breast cancer. We first conducted contrastive learning-based pre-training on tessellated tiles to extract pathological features, which are then aggregated by attention pooling and adapted to downstream tasks. We conducted extensive evaluation experiments on the TCGA-BRCA cohort (1978 WSIs of 1093 patients with protein levels of 223 biomarkers) and the CPTAC-BRCA cohort (642 WSIs of 134 patients). The results showed that our method achieved state-of-the-art performance in tumor diagnostic tasks and performed well in predicting clinically relevant protein levels and drug response. To show the model's interpretability, we spatially visualized the WSIs with tiles colored by their attention scores, and found that the regions with high scores were highly consistent with the tumor and necrotic regions annotated by a pathologist with 10 years of experience. Moreover, spatial transcriptomic data further verified that the heatmap generated by attention scores agrees closely with the spatial expression landscape of two typical tumor biomarker genes. In predicting the response to trastuzumab treatment, our method achieved an AUC of 0.79, much higher than the 0.68 reported in a previous study. These findings show the remarkable potential of computational pathology for the prediction of clinically relevant protein levels, drug response, and clinical outcomes.
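A minimal sketch of the attention-pooling aggregation step described above, which combines tile-level features into a slide-level representation (in the style of attention-based multiple-instance learning, Ilse et al., 2018); dimensions are illustrative and this is not the wsi2rppa code:

```python
# Sketch: attention pooling of tile features into a slide-level feature.
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, tile_feats):
        """tile_feats: (n_tiles, dim) features from the pretrained encoder."""
        scores = torch.softmax(self.attn(tile_feats), dim=0)  # (n_tiles, 1)
        slide_feat = (scores * tile_feats).sum(dim=0)         # (dim,)
        return slide_feat, scores  # scores also drive the WSI heatmaps
```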