Cancer is a leading cause of death globally, and early detection is crucial for better outcomes. This research aims to improve Region Of Interest (ROI) segmentation and feature extraction in medical image analysis using Radiomics techniques
with 3D Slicer, Pyradiomics, and Python. Dimension reduction methods, including PCA, K-means, t-SNE, ISOMAP, and Hierarchical Clustering, were applied to high-dimensional features to enhance interpretability and efficiency. The study assessed the ability of the reduced feature set to predict T-staging, an essential component of the TNM system for cancer diagnosis. Multinomial logistic regression models were developed and evaluated using MSE, AIC, BIC, and the Deviance Test. The dataset consisted of CT and PET-CT DICOM images from 131 lung cancer patients. Results showed that PCA identified 14 features, Hierarchical Clustering 17, t-SNE 58, and ISOMAP 40, with texture-based features being the most critical. This study highlights the potential of integrating Radiomics and unsupervised learning techniques to enhance cancer prediction from medical images.
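A minimal sketch of the modeling pipeline this abstract describes, assuming a CSV of Pyradiomics features with a T-stage label column (file and column names are hypothetical, not from the study): PCA for dimension reduction followed by multinomial logistic regression.

```python
# Hedged sketch: PCA-reduced radiomic features -> multinomial logistic regression.
# "features.csv" and the "t_stage" column are placeholders, not the study's data.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("features.csv")
X = StandardScaler().fit_transform(df.drop(columns=["t_stage"]))
y = df["t_stage"]

# Keep enough components to explain 95% of variance (the study reports 14 for PCA).
pca = PCA(n_components=0.95)
X_red = pca.fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_red, y, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # multinomial for >2 classes
print("held-out accuracy:", clf.score(X_te, y_te))
```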
An Augmentation in the Diagnostic Potency of Breast Cancer through A Deep Learning Cloud-Based AI Framework to Compute Tumor Malignancy & Risk
Agarwal, O
International Research Journal of Innovations in Engineering and Technology (IRJIET), 2019, Journal Article, cited 0 times
CBIS-DDSM
This research project focuses on developing a web-based multi-platform solution for augmenting prognostic strategies to diagnose breast cancer (BC) from a variety of different tests, including histology, mammography, cytopathology, and fine-needle aspiration cytology, all in an automated fashion. The respective application utilizes tensor-based data representations and deep learning architectural algorithms to produce optimized models for the prediction of novel instances against each of these medical tests. This system has been designed in a way that all of its computation can be integrated seamlessly into a clinical setting, without posing any disruption to a clinician's productivity or workflow, but rather enhancing their capabilities. This software can make the diagnostic process automated, standardized, faster, and even more accurate than current benchmarks achieved by both pathologists and radiologists, which makes it invaluable from a clinical standpoint for making well-informed diagnostic decisions with nominal resources.
Adaptive Multi-Column Deep Neural Networks with Application to Robust Image Denoising
Agostinelli, Forest
Anderson, Michael R
Lee, Honglak
2013, Conference Proceedings, cited 118 times
Website
Head-Neck Cetuximab
Algorithm Development
Image denoising
Machine Learning
Deep Learning
Stacked sparse denoising auto-encoders (SSDAs) have recently been shown to be successful at removing noise from corrupted images. However, like most denoising techniques, the SSDA is not robust to variation in noise types beyond what it has seen during training. We present the multi-column stacked sparse denoising autoencoder, a novel technique of combining multiple SSDAs into a multi-column SSDA (MC-SSDA) by combining the outputs of each SSDA. We eliminate the need to determine the type of noise, let alone its statistics, at test time. We show that good denoising performance can be achieved with a single system on a variety of different noise types, including ones not seen in the training set. Additionally, we experimentally demonstrate the efficacy of MC-SSDA denoising by achieving MNIST digit error rates on denoised images close to those of the uncorrupted images.
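A toy sketch of the multi-column idea (not the authors' code): several denoising autoencoders, each nominally trained on a different noise type, are combined at test time. Uniform output averaging stands in here for whatever combination the paper learns.

```python
# Hedged sketch of an MC-SSDA-style ensemble in PyTorch; columns are untrained
# here, but each would be trained on one noise type in the paper's setup.
import torch
import torch.nn as nn

class SSDA(nn.Module):
    def __init__(self, dim=784, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.Sigmoid())
        self.dec = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())
    def forward(self, x):
        return self.dec(self.enc(x))

columns = [SSDA() for _ in range(3)]  # one column per noise type

def mc_ssda_denoise(x):
    # The paper learns optimal combination weights; uniform averaging shown here.
    with torch.no_grad():
        return torch.stack([col(x) for col in columns]).mean(dim=0)

noisy = torch.rand(8, 784)  # stand-in for a corrupted MNIST batch
clean_est = mc_ssda_denoise(noisy)
print(clean_est.shape)  # torch.Size([8, 784])
```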
FEATURE EXTRACTION OF LUNG CANCER USING IMAGE ANALYSIS TECHNIQUES
ALAYUE, L.T.
GOSHU, B.S.
TAJU, ENDRIS
Romanian Journal of Biophysics, 2022, Journal Article, cited 0 times
Website
TCGA-LUSC
Computed Tomography (CT)
Lung Cancer
Computer Aided Detection (CADe)
MATLAB
Lung cancer is one of the most life-threatening diseases. It is a medical problem that needs accurate diagnosis and timely treatment by healthcare professionals. Although CT is preferred over other imaging modalities, visual interpretation of CT scan images may be subject to error and can cause a delay in lung cancer detection. Therefore, image processing techniques are widely used for early-stage detection of lung tumors. This study was conducted to perform pre-processing, segmentation, and feature extraction of lung CT images using image processing techniques. We used the MATLAB programming language to devise a stepwise approach that included image acquisition, pre-processing, segmentation, and feature extraction. A total of 14 lung CT scan images in the age group of 55–75 years were downloaded from an open access repository. The analyzed images were grayscale, 8-bit, with resolutions ranging from 151 × 213 to 721 × 900, in Digital Imaging and Communications in Medicine (DICOM) format. In the pre-processing stage, a median filter was used to remove noise from the original image since it preserves the edges of the image, whereas segmentation was done through edge detection and threshold analysis. The results show that solid tumors were detected in three CT images corresponding to patients aged between 71 and 75 years. Our study indicates that image processing plays a significant role in lung cancer recognition and early-stage treatment. Health professionals need to work closely with medical physicists to improve the accuracy of diagnosis.
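The pipeline the abstract outlines (median filter, then threshold- and edge-based segmentation) was implemented in MATLAB; a rough Python analogue using pydicom and scikit-image, with a placeholder file name, might look like this:

```python
# Hedged Python analogue of the paper's MATLAB pipeline (not the authors' code):
# median filter -> Otsu threshold -> edge map. The DICOM path is a placeholder.
import pydicom
from skimage.filters import median, threshold_otsu, sobel
from skimage.morphology import disk

img = pydicom.dcmread("lung_ct_slice.dcm").pixel_array.astype(float)
img = (img - img.min()) / (img.max() - img.min())   # normalise to [0, 1]

smoothed = median(img, disk(3))              # edge-preserving noise removal
mask = smoothed > threshold_otsu(smoothed)   # threshold-based segmentation
edges = sobel(smoothed)                      # edge map for boundary analysis
```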
Self-organizing Approach to Learn a Level-set Function for Object Segmentation in Complex Background Environments
Boundary extraction for object region segmentation is one of the most challenging tasks in image processing and computer vision. The complexity of large variations in the appearance of the object and the background in a typical image causes performance degradation in existing segmentation algorithms. One of the goals of computer vision studies is to produce algorithms that segment object regions and produce accurate object boundaries that can be utilized in feature extraction and classification.

This dissertation research considers the incorporation of prior knowledge of the intensity/color of objects of interest within a segmentation framework to enhance the performance of object region and boundary extraction of targets in unconstrained environments. The information about the intensity/color of the object of interest is taken from small patches used as seeds to train a neural network. The main challenge is accounting for the projection transformation between the limited amount of prior information and the appearance of the real object of interest in the testing data. We address this problem with a Self-organizing Map (SOM), an unsupervised learning neural network. The segmentation process is achieved by the construction of a local fitted image level-set cost function, in which the dynamic variable is a Best Matching Unit (BMU) coming from the SOM map.

The proposed method is demonstrated on the challenging PASCAL 2011 dataset, in which images contain objects with variations of illumination, shadows, occlusions and clutter. In addition, our method is tested on different types of imagery including thermal, hyperspectral, and medical imagery. Metrics illustrate the effectiveness and accuracy of the proposed algorithm in improving the efficiency of boundary extraction and object region detection.

In order to reduce computational time, a lattice Boltzmann Method (LBM) convergence criterion is used along with the proposed self-organized active contour model for producing faster and effective segmentation. The lattice Boltzmann method is utilized to evolve the level-set function rapidly and terminate the evolution of the curve at the most optimal region. Experiments performed on our test datasets show promising results in terms of time and quality of the segmentation when compared to other state-of-the-art learning-based active contour model approaches. Our method is more than 53% faster than other state-of-the-art methods. Research is in progress to employ a Time Adaptive Self-Organizing Map (TASOM) for improved segmentation and to utilize the parallelization property of the LBM to achieve real-time segmentation.
Multi-modal Multi-temporal Brain Tumor Segmentation, Growth Analysis and Texture-based Classification
Brain tumor analysis is an active field of research, which has received a lot of attention from both the medical and the technical communities in the past decades. The purpose of this thesis is to investigate brain tumor segmentation, growth analysis and tumor classification based on multi-modal magnetic resonance (MR) image datasets of low- and high-grade glioma, making use of computer vision and machine learning methodologies. Brain tumor segmentation involves the delineation of tumorous structures, such as edema, active tumor and necrotic tumor core, and healthy brain tissues, often categorized into gray matter, white matter and cerebrospinal fluid. Deep learning frameworks have proven to be among the most accurate brain tumor segmentation techniques, performing particularly well when large, accurately annotated image datasets are available. A first project is designed to build a more flexible model, which allows for intuitive semi-automated user-interaction, is less dependent on training data, and can handle missing MR modalities. The framework is based on a Bayesian network with hidden variables optimized by the expectation-maximization algorithm, and is tailored to handle non-Gaussian multivariate distributions using the concept of Gaussian copulas. To generate reliable priors for the generative probabilistic model and to spatially regularize the segmentation results, it is extended with an initialization and a post-processing module, both based on supervoxels classified by random forests. Brain tumor segmentation allows to assess tumor volumetry over time, which is important to identify disease progression (tumor regrowth) after therapy. In a second project, a dataset of temporal MR sequences is analyzed. To that end, brain tumor segmentation and brain tumor growth assessment are unified within a single framework using a conditional random field (CRF). The CRF extends over the temporal patient datasets and includes directed links with infinite weight in order to incorporate growth or shrinkage constraints. The model is shown to obtain temporally coherent tumor segmentation and aids in estimating the likelihood of disease progression after therapy. Recent studies classify brain tumors based on their genotypic parameters, which are reported to have an important impact on the prognosis and the therapy of patients. A third project aims to investigate whether the genetic profile of glioma can be predicted based on the MR images only, which would eliminate the need to take biopsies. A multi-modal medical image classification framework is built, classifying glioma into three genetic classes based on DNA methylation status. The framework makes use of short local image descriptors as well as deep-learned features acquired by denoising auto-encoders to generate meaningful image features. The framework is successfully validated and shown to obtain high accuracies even though the same image-based classification task is hardly possible for medical experts.
Application of Fuzzy c-means and Neural networks to categorize tumor affected breast MR Images
Anand, Shruthi
Vinod, Viji
Rampure, Anand
International Journal of Applied Engineering Research, 2015, Journal Article, cited 4 times
Website
The image semantic segmentation challenge consists of classifying each pixel of an image (or just some of them) into an instance, where each instance (or category) corresponds to an object. This task is part of the concept of scene understanding, or better explaining the global context of an image. In the medical image analysis domain, image segmentation can be used for image-guided interventions, radiotherapy, or improved radiological diagnostics. Following a comprehensive review of state-of-the-art deep learning-based medical and non-medical image segmentation solutions, we make the following contributions. A typical deep learning-based (medical) image segmentation pipeline includes designing layers (A), designing an architecture (B), and defining a loss function (C). A clean, modified (D), or adversarially perturbed (E) image is fed into a model (consisting of layers and a loss function) to predict a segmentation mask for scene understanding and related tasks. In some cases where the number of segmentation annotations is limited, weakly supervised approaches (F) are leveraged. For some applications where further analysis is needed, e.g., predicting volumes and object burden, the segmentation mask is fed into another post-processing step (G). In this thesis, we tackle each of the steps (A-G). I) As for steps (A and E), we studied the effect of adversarial perturbation on image segmentation models and proposed a method that improves segmentation performance via a non-linear radial basis convolutional feature mapping, learning a Mahalanobis-like distance function on both adversarially perturbed and unperturbed images. Our method then maps the convolutional features onto a linearly well-separated manifold, which prevents small adversarial perturbations from forcing a sample to cross the decision boundary. II) As for step (B), we propose light, learnable skip connections which learn first to select the most discriminative channels and then aggregate the selected ones into a single channel attending to the most discriminative regions of the input. Compared to heavy classical skip connections, our method reduces the computation cost and memory usage while improving segmentation performance. III) As for step (C), we examined the critical choice of a loss function in order to handle the notorious imbalance problem that plagues both the input and output of a learning model. In order to tackle both types of imbalance during training and inference, we introduce a new curriculum learning-based loss function (see the sketch after this entry). Specifically, we leverage the Dice similarity coefficient to deter model parameters from being held at bad local minima and, at the same time, gradually learn better model parameters by penalizing false positives/negatives using a cross-entropy term. IV) As for step (D), we propose a new segmentation performance-boosting paradigm that relies on optimally modifying the network's input instead of the network itself. In particular, we leverage the gradients of a trained segmentation network with respect to the input to transfer it into a space where the segmentation accuracy improves. V) As for step (F), we propose a weakly supervised image segmentation model with a learned spatial masking mechanism to filter out irrelevant background signals from attention maps. The proposed method minimizes mutual information between a masked variational representation and the input while maximizing the information between the masked representation and class labels.
VI) Although many semi-automatic segmentation-based methods have been developed, as for step (G), we introduce a method that completely eliminates the segmentation step and directly estimates the volume and activity of the lesions from positron emission tomography scans.
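As an illustration of the combined objective in contribution III, here is a hedged PyTorch sketch of a Dice plus cross-entropy loss; the curriculum aspect (annealing the weighting over training) is only indicated in a comment, and none of this is the author's actual code.

```python
# Hedged sketch of a Dice + cross-entropy combination loss of the kind the
# thesis describes; the curriculum weighting schedule is simplified away.
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, alpha=0.5, eps=1e-6):
    """logits: (N,1,H,W) raw scores; target: (N,1,H,W) binary float mask."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    ce = F.binary_cross_entropy_with_logits(logits, target)
    # alpha could be annealed over epochs to realise the curriculum idea.
    return alpha * (1 - dice) + (1 - alpha) * ce

loss = dice_ce_loss(torch.randn(2, 1, 64, 64), torch.ones(2, 1, 64, 64))
print(loss.item())
```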
Computer Aided Detection Scheme To Improve The Prognosis Assessment Of Early Stage Lung Cancer Patients
Athira, KV
Nithin, SS
Computer, 2018, Journal Article, cited 0 times
Website
Radiomics
non-small cell lung cancer
Machine learning
The aim is to develop a computer-aided detection scheme to predict the recurrence risk of stage 1 non-small cell lung cancer in patients after surgery. Using chest computed tomography images taken before surgery, the system automatically segments the tumor seen on the CT images and extracts tumor-related morphological and texture-based image features. We trained a Naïve Bayesian network classifier using six image features and an ANN classifier using two genomic biomarkers, namely protein expression of the excision repair cross-complementing 1 gene (ERCC1) and of a regulatory subunit of ribonucleotide reductase (RRM1), to predict the cancer recurrence risk. We developed a new approach that has a high potential to assist doctors in more effectively managing stage 1 NSCLC patients to reduce the cancer recurrence risk.
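A minimal stand-in for the image-feature branch: a Gaussian Naïve Bayes classifier over six features, with a synthetic feature matrix in place of the paper's morphological and texture features.

```python
# Illustrative recurrence-risk classifier in the spirit of the paper's Naïve
# Bayesian network (six image features); the data here is synthetic.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))        # six morphological/texture features
y = rng.integers(0, 2, size=100)     # 1 = recurrence within follow-up

nb = GaussianNB()
print("CV AUC:", cross_val_score(nb, X, y, cv=5, scoring="roc_auc").mean())
```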
BIOMEDICAL IMAGE RETRIEVAL USING LBWP
Babu, Joyce Sarah
Mathew, Soumya
Simon, Rini
International Research Journal of Engineering and Technology, 2017, Journal Article, cited 0 times
Website
Glioblastoma (GBM) is an aggressive cancer with an average 5-year survival rate of about 5%. Following treatment with surgery, radiation, and chemotherapy, diagnosing tumor recurrence requires serial magnetic resonance imaging (MRI) scans. Infiltrative tumor cells beyond gadolinium enhancement on T1-weighted MRI are difficult to detect. This study therefore aims to improve tumor detection beyond traditional tumor margins. To accomplish this, a neural network model was trained to classify tissue samples as ‘tumor’ or ‘not tumor’. This model was then used to classify thousands of tiles from histology samples acquired at autopsy with known MRI locations on the patient’s final clinical MRI scan. This combined radiological-pathological (rad-path) dataset was then treated as a ground truth to train a second model for predicting tumor presence from MRI alone. Predictive maps were created for seven patients left out of the training steps, and tissue samples were tested to determine the model’s accuracy. The final model produced a receiver operator characteristic (ROC) area under the curve (AUC) of 0.70. This study demonstrates a new method for detecting infiltrative tumor beyond conventional radiologist defined margins based on neural networks applied to rad-path datasets in glioblastoma.
Machine Intelligence for Advanced Medical Data Analysis: Manifold Learning Approach
In the current work, linear and non-linear manifold learning techniques, specifically Principal Component Analysis (PCA) and Laplacian Eigenmaps, are studied in detail. Their applications in medical image and shape analysis are investigated.

In the first contribution, a manifold learning-based multi-modal image registration technique is developed, which results in a unified intensity system through intensity transformation between the reference and sensed images. The transformation eliminates intensity variations in multi-modal medical scans and hence facilitates employing well-studied monomodal registration techniques. The method can be used for registering multi-modal images with full and partial data.

Next, a manifold learning-based scale invariant global shape descriptor is introduced. The proposed descriptor benefits from the capability of the Laplacian Eigenmap in dealing with high dimensional data by introducing an exponential weighting scheme. It eliminates the limitations tied to the well-known cotangent weighting scheme, namely dependency on triangular mesh representation and high intra-class quality of 3D models.

In the end, a novel descriptive model for diagnostic classification of pulmonary nodules is presented. The descriptive model benefits from structural differences between benign and malignant nodules for automatic and accurate prediction of a candidate nodule. It extracts concise and discriminative features automatically from the 3D surface structure of a nodule using spectral features studied in the previous work combined with a point cloud-based deep learning network.

Extensive experiments have been conducted and have shown that the proposed algorithms based on manifold learning outperform several state-of-the-art methods. Advanced computational techniques with a combination of manifold learning and deep networks can play a vital role in effective healthcare delivery by providing a framework for several fundamental tasks in image and shape processing, namely registration, classification, and detection of features of interest.
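For the Laplacian Eigenmaps component, scikit-learn's SpectralEmbedding gives a quick stand-in (RBF affinity here, not the thesis' exponential weighting scheme; the data is synthetic):

```python
# Laplacian Eigenmaps via scikit-learn, as a rough stand-in for the thesis'
# exponentially weighted variant; input is a synthetic feature matrix.
import numpy as np
from sklearn.manifold import SpectralEmbedding

X = np.random.rand(200, 50)            # e.g. vertex features of a 3D shape
emb = SpectralEmbedding(n_components=2, affinity="rbf").fit_transform(X)
print(emb.shape)                       # (200, 2) low-dimensional descriptor
```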
Towards High Performing and Reliable Deep Convolutional Neural Network Models for Typically Limited Medical Imaging Datasets
Artificial Intelligence (AI) is “the science and engineering of making intelligent machines, especially intelligent computer programs” [93]. Artificial Intelligence has been applied in a wide range of fields including automobiles, space, robotics, and healthcare. According to recent reports, AI will have a huge impact on increasing the world economy by 2030, and it is expected that the greatest impact will be in the field of healthcare. The global market size of AI in healthcare was estimated at USD 10.4 billion in 2021 and is expected to grow at a high rate from 2022 to 2030 (CAGR of 38.4%) [124]. Applications of AI in healthcare include robot-assisted surgery, disease detection, health monitoring, and automatic medical image analysis. Healthcare organizations are becoming increasingly interested in how artificial intelligence can support better patient care while reducing costs and improving efficiencies. Deep learning is a subset of AI that is becoming transformative for healthcare. Deep learning offers fast and accurate data analysis, and is based on the concept of artificial neural networks to solve complex problems. In this dissertation, we propose deep learning-based solutions to the problems of limited medical imaging in two clinical contexts: brain tumor prognosis and COVID-19 diagnosis. For brain tumor prognosis, we suggest novel systems for overall survival prediction of Glioblastoma patients from small magnetic resonance imaging (MRI) datasets based on ensembles of convolutional neural networks (CNNs). For COVID-19 diagnosis, we reveal one critical problem with CNN-based approaches for predicting COVID-19 from chest X-ray (CXR) imaging: shortcut learning. Then, we experimentally suggest methods to mitigate this problem to build fair, reliable, robust, and transparent deep learning-based clinical decision support systems. We discovered this problem with CNNs and chest X-ray imaging; however, the issue and solutions generally apply to other imaging modalities and recognition problems.
Detection of Motion Artifacts in Thoracic CT Scans
The analysis of a lung CT scan can be a complicated task due to the presence of certain image artifacts such as cardiac motion, respiratory motion, beam hardening artifacts, and so on. In this project, we have built a deep learning-based model for the detection of these motion artifacts in the image. Using biomedical image segmentation models, we have trained the model on lung CT scans from the LIDC dataset. The developed model is able to identify the regions in the scan which are affected by motion by segmenting the image. Further, it is also able to separate normal (or easy to analyze) CT scans from CT scans that may produce incorrect quantitative analysis, even when the examples of image artifacts or low-quality scans are scarce. In addition, the model is able to evaluate a quality score for the scan based on the amount of artifacts detected, which could otherwise hamper its reliability for the further diagnosis of disease or disease progression. We used two main approaches during the experimentation process, 2D slice-based approaches and 2D patch-based approaches, of which the patch-based approaches yielded the final model. The final model gave an AUC of 0.814 in the ROC analysis of the evaluation study conducted. Discussions on the approaches and findings of the final model are provided and future directions are proposed.
COMPARISON OF A PATIENT-SPECIFIC COMPUTED TOMOGRAPHY ORGAN DOSE SOFTWARE WITH COMMERCIAL PHANTOM-BASED TOOLS
Computed Tomography imaging is an important diagnostic tool but carries some risk due to the radiation dose used to form the image. Currently, CT scanners report a measure of radiation dose for each scan that reflects the radiation emitted by the scanner, not the radiation dose absorbed by the patient. The radiation dose absorbed by organs, known as organ dose, is a more relevant metric that is important for risk assessment and CT protocol optimization. Tools for rapid organ-dose estimation are available but are limited to using general patient models. These publicly available tools are unable to model patient-specific anatomy and positioning within the scanner. To address these limitations, the Personalized Rapid Estimator of Dose in Computed Tomography (PREDICT) dosimetry tool was recently developed. This study validated the organ doses estimated by PREDICT against ground truth values. The patient-specific PREDICT performance was also compared to two publicly available phantom-based methods: VirtualDose and NCICT. The PREDICT tool demonstrated lower organ dose errors compared to the phantom-based methods, demonstrating the benefit of patient-specific modeling. This study also developed a method to extract the walls of cavity organs, such as the bladder and the intestines, and quantified the effect of organ wall extraction on organ dose. The study found that the exogenous material within a cavity organ can affect the organ dose estimate, therefore demonstrating the importance of boundary wall extraction in dosimetry tools such as PREDICT.
High Capacity and Reversible Fragile Watermarking Method for Medical Image Authentication and Patient Data Hiding
Bouarroudj, Riadh
Bellala, Fatma Zohra
Souami, Feryel
Journal of Medical Systems, 2024, Journal Article, cited 0 times
Website
TCGA-LUAD
Selection of an Algorithm for the Classification of Solitary Pulmonary Nodules
Castro, Arelys Rivero
Correa, Luis Manuel Cruz
Lezcano, Jeffrey Artiles
Revista Cubana de Informática Médica, 2016, Journal Article, cited 0 times
Website
LIDC-IDRI
Optimizations for Deep Learning-Based CT Image Enhancement
Computed tomography (CT) combined with deep learning (DL) has recently shown great potential in biomedical imaging. Complex DL models with varying architectures inspired by the human brain are improving imaging software and aiding diagnosis. However, the accuracy of these DL models heavily relies on the datasets used for training, which often contain low-quality CT images from low-dose CT (LDCT) scans. Moreover, in contrast to the neural architecture of the human brain, DL models today are dense and complex, resulting in a significant computational footprint. Therefore, in this work, we propose sparse optimizations to minimize the complexity of the DL models and leverage architecture-aware optimization to reduce the total training time of these DL models. To that end, we leverage a DL model called DenseNet and Deconvolution Network (DDNet). The model enhances LDCT chest images into high-quality (HQ) ones but requires many hours to train. To further improve the quality of final HQ images, we first modified DDNet's architecture with a more robust multi-level VGG (ML-VGG) loss function to achieve state-of-the-art CT image enhancement. However, improving the loss function results in increased computational cost. Hence, we introduce sparse optimizations to reduce the complexity of the improved DL model and then propose architecture-aware optimizations to efficiently utilize the underlying computing hardware to reduce the overall training time. Finally, we evaluate our techniques for performance and accuracy using state-of-the-art hardware resources.
MRI prostate cancer radiomics: Assessment of effectiveness and perspectives
Concussion is a traumatic brain injury usually caused by a direct or indirect blow to the head that affects brain function. The maximum mechanical impedance of the brain tissue occurs at 450±50 Hz and may be affected by the skull resonant frequencies. After an impact to the head, vibration resonance of the skull damages the underlying cortex. The skull deforms and vibrates, like a bell, for 3 to 5 milliseconds, bruising the cortex. Furthermore, the deceleration forces the frontal and temporal cortex against the skull, eliminating a layer of cerebrospinal fluid. When the skull vibrates, the force spreads directly to the cortex, with no layer of cerebrospinal fluid to reflect the wave or cushion its force. To date, there is little research investigating the effect of transient vibration of the skull. Therefore, the overall goal of the proposed research is to gain a better understanding of the role of transient vibration of the skull on concussion. This goal will be achieved by addressing three research objectives. First, an automatic MRI skull and brain segmentation technique is developed. Due to bone's weak magnetic resonance signal, MRI scans struggle with differentiating bone tissue from other structures. One of the most important components for a successful segmentation is high-quality ground truth labels. Therefore, we introduce a deep learning framework for skull segmentation where the ground truth labels are created from CT imaging using the standard tessellation language (STL). Furthermore, as the brain region will be important for future work, we explore a new initialization concept for the convolutional neural network (CNN) based on orthogonal moments to improve brain segmentation in MRI. Second, a novel 2D and 3D automatic method to align the facial skeleton is introduced. An important aspect of further impact analysis is the ability to precisely simulate the same point of impact on multiple bone models. To perform this task, the skull must be precisely aligned in all anatomical planes. Therefore, we introduce a 2D/3D technique to align the facial skeleton that was initially developed for automatically calculating the craniofacial symmetry midline. In the 2D version, the entire concept of using cephalometric landmarks and manual image grid alignment to construct the training dataset was introduced. Then, this concept was extended to a 3D version where coronal and transverse planes are aligned using a CNN approach. As the alignment in the sagittal plane is still undefined, a new alignment based on these techniques will be created to align the sagittal plane using the Frankfort plane as a framework. Finally, the resonant frequencies of multiple skulls are assessed to determine how the skull resonant frequency vibrations propagate into the brain tissue. After applying material properties and a mesh to the skull, modal analysis is performed to assess the skull's natural frequencies. Finally, theories will be raised regarding the relation between skull geometry, such as shape and thickness, and vibration with brain tissue injury, which may result in concussive injury.

Summary for Lay Audience: A concussion is a traumatic brain injury usually caused by a direct or indirect blow to the head that affects brain function. As the maximum mechanical impedance of the brain tissue occurs at 450±50 Hz, skull resonant frequencies may play an important role in the propagation of this vibration into the brain tissue. The overall goal of the proposed research is to gain a better understanding of the role of transient vibration of the skull on concussion. This goal will be achieved by addressing three research objectives: I) develop an automatic method to segment/extract the skull and brain from magnetic resonance imaging (MRI), II) create a novel 2D and 3D automatic method to align the facial skeleton, and III) identify the skull resonant frequencies and raise the theory of how these vibrations may propagate into brain tissue. For objective 1, 58 MRI scans and their respective computed tomography (CT) scans were used to create a convolutional neural network framework for skull and brain segmentation in MRI. Moreover, an invariant moment kernel was introduced to improve the brain segmentation accuracy in MRI. For objective 2, a 2D and 3D technique for automatically calculating the craniofacial symmetry midline from head CT scans using deep learning techniques was used to precisely align the facial skeleton for future impact analysis. In objective 3, several segmented skulls were tested to identify their natural resonant frequencies. Those with a resonant frequency of 450±50 Hz were selected to improve understanding of how their shapes and thickness may help the vibration propagate deeply into the brain tissue. The results from this study will improve our understanding of the role of transient vibration of the skull on concussion.
Feature Extraction In Medical Images by Using Deep Learning Approach
Dara, S
Tumma, P
Eluri, NR
Kancharla, GR
International Journal of Pure and Applied Mathematics, 2018, Journal Article, cited 0 times
Website
TCGA-LUAD
Machine Learning
Deep Learning
Feature Extraction
Impact of GAN-based Lesion-Focused Medical Image Super-Resolution on Radiomic Feature Robustness
Robust machine learning models based on radiomic features might allow for accurate diagnosis, prognosis, and medical decision-making. Unfortunately, the lack of standardized radiomic feature extraction has hampered their clinical use. Since radiomic features tend to be affected by low voxel statistics in regions of interest, increasing the sample size would improve their robustness in clinical studies. Therefore, we propose a Generative Adversarial Network (GAN)-based lesion-focused framework for Computed Tomography (CT) image Super-Resolution (SR); for the lesion (i.e., cancer) patch-focused training, we incorporate Spatial Pyramid Pooling (SPP) into GAN-Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE). At 2× SR, the proposed model achieved better perceptual quality with less blurring than the other considered state-of-the-art SR methods, while producing comparable results at 4× SR. We also evaluated the robustness of our model's radiomic features in terms of quantization on a different lung cancer CT dataset using Principal Component Analysis (PCA). Intriguingly, the most important radiomic features in our PCA-based analysis were the most robust features extracted from the GAN-super-resolved images. These achievements pave the way for the application of GAN-based image Super-Resolution techniques in radiomics studies for robust biomarker discovery.
COMPUTATIONAL IMAGING AND MULTIOMIC BIOMARKERS FOR PRECISION MEDICINE: CHARACTERIZING HETEROGENEITY IN LUNG CANCER
Lung cancer is the leading cause of cancer deaths and is the third most diagnosed cancer in both men and women in the United States. Non-small cell lung cancer (NSCLC) accounts for 84% of all lung cancer cases. The inherent intra-tumor and inter-tumor heterogeneity in lung tumors has been linked with adverse clinical outcomes. A well-rounded characterization of tumor heterogeneity by personalized biomarkers is needed to develop precision medicine treatment strategies for cancer. Large-scale genome-based characterization poses the disadvantages of high cost and technical complexity. Further, a histopathological sample from a tumor biopsy may not be able to fully represent the structural and functional properties of the entire tumor. Medical imaging is now emerging as a key player in the field of personalized medicine, due to its ability to non-invasively characterize the anatomical and physiological properties of the tumor regions. The studies included in this thesis introduce analytical tools developed through machine learning and bioinformatics and use information from diagnostic images and other “omic” sources to develop computational imaging and multiomic biomarkers that characterize intratumor heterogeneity. A novel radiomic biomarker, integrated with PDL1 expression, ECOG status, BMI, and smoking status, enhances the ability to predict progression-free survival in a preliminary cohort of patients with stage 4 NSCLC treated with the first-line anti-PD1/PDL1 checkpoint inhibitor therapy pembrolizumab. This study also showed that mitigation of the heterogeneity introduced by voxel spacing and image acquisition parameters improves the prognostic performance of the radiomic phenotypes. We further performed a detailed investigation of the effects of heterogeneity in image parameters on the reproducibility of the prognostic performance of models built using radiomic biomarkers. The results of this second study indicated that accounting for heterogeneity in image parameters is important to obtain more reproducible prognostic scores, irrespective of image site or modality. In the third study, we developed novel multiomic phenotypes in a larger cohort of patients with stage 4 NSCLC treated with pembrolizumab. These multiomic phenotypes, formed by integration of radiomics with radiological and pathological information of the patients, enhanced precision in progression-free survival prediction upon combination with prognostic clinical variables. To our knowledge, our study is the first to construct a “multiomic” signature for prognosis of NSCLC patient response to immunotherapy, in contrast to prior radiogenomic approaches leveraging a radiomics signature to identify patient categories based on a genomic biomarker-based classification. In the exploratory fourth study, we evaluated the performance of radiomics analyses of part-solid lung nodules to detect nodule invasiveness using several approaches: radiomics analysis of the presurgical CT scan, delta radiomics over three time-points leading up to surgical resection, and nodule volumetry. The best performing model for the prediction of nodule invasiveness was built using a combination of immediate pre-surgical features, delta radiomics, delta volumes and clinical assessment. The study showed that the combined utilization of clinical, volumetric and radiomic features may facilitate complex decision making in the management of subsolid lung nodules.
To summarize, the studies included in this thesis demonstrate the value of computational radiomic and multiomic biomarkers in the characterization of lung tumor heterogeneity and have the potential to be utilized in the advancement of precision medicine in oncology.
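The progression-free-survival modeling described above combines radiomic and clinical covariates; a hedged sketch using the lifelines Cox proportional hazards model (toy data, hypothetical column names, not the thesis' actual signature) might look like this:

```python
# Hedged sketch: Cox PH model over a radiomic score plus a clinical covariate.
# Data and column names are invented for illustration only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "pfs_months":     [4.0, 11.2, 7.5, 20.1, 2.3, 15.8, 9.4, 18.6],
    "progressed":     [1,   0,    1,   0,    1,   1,    1,   0],
    "radiomic_score": [0.8, 0.2,  0.6, 0.1,  0.9, 0.4,  0.7, 0.2],
    "ecog":           [1,   0,    1,   0,    2,   1,    1,   0],
})

# A small penalizer stabilises the fit on this tiny toy dataset.
cph = CoxPHFitter(penalizer=0.1).fit(df, duration_col="pfs_months",
                                     event_col="progressed")
cph.print_summary()  # hazard ratios for the radiomic and clinical covariates
```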
An introduction to Topological Object Data Analysis
Summary and analysis are important foundations in Statistics, but typical methods may prove ineffective at providing thorough summaries of complex object data. Topological data analysis (TDA) (also called topological object data analysis (TODA) when applied to object data) provides additional topological summaries, such as the persistence diagram and persistence landscape, that can be useful in distinguishing distributions based on data sets. The main tool is persistent homology, which tracks the births and deaths of various homology classes as one steps through a filtered simplicial complex that covers the sample. The persistence diagrams and landscapes can also be used to provide confidence sets for “significant” features and two-sample tests between groups. An example application is provided by analyzing mammogram images for patients with benign and malignant masses.
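A minimal persistent-homology example with the ripser library (one of several TDA packages; the mammogram application would use image-derived point clouds rather than the synthetic circle below):

```python
# Hedged sketch: persistence diagrams of a noisy circle via ripser.
# A circle has one long-lived H1 (loop) class, visible as the longest bar.
import numpy as np
from ripser import ripser

theta = np.random.uniform(0, 2 * np.pi, 100)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.randn(100, 2)

dgms = ripser(X)["dgms"]          # H0 and H1 persistence diagrams
h1 = dgms[1]
print("most persistent H1 bar:", h1[(h1[:, 1] - h1[:, 0]).argmax()])
```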
Collaborative learning of joint medical image segmentation tasks from heterogeneous and weakly-annotated data
Convolutional Neural Networks (CNNs) have become the state of the art for most image segmentation tasks, and one would therefore expect them to be able to learn joint tasks, such as brain structure and pathology segmentation. However, annotated databases required to train CNNs are usually dedicated to a single task, leading to partial annotations (e.g. brain structure or pathology delineation but not both for joint tasks). Moreover, the information required for these tasks may come from distinct magnetic resonance (MR) sequences to emphasise different types of tissue contrast, leading to datasets with different sets of image modalities. Similarly, the scans may have been acquired at different centres, with different MR parameters, leading to differences in resolution and visual appearance among databases (domain shift). Given the large amount of resources, time and expertise required to carefully annotate medical images, it is unlikely that large and fully-annotated databases will become readily available for every joint problem. For this reason, there is a need to develop collaborative approaches that exploit existing heterogeneous and task-specific datasets, as well as weak annotations instead of time-consuming pixel-wise annotations.

In this thesis, I present methods to learn joint medical segmentation tasks from task-specific, domain-shifted, hetero-modal and weakly-annotated datasets. The problem lies at the intersection of several branches of Machine Learning: Multi-Task Learning, Hetero-Modal Learning, Domain Adaptation and Weakly Supervised Learning. First, I introduce a mathematical formulation of a joint segmentation problem under the constraint of missing modalities and partial annotations, in which Domain Adaptation techniques can be directly integrated, and a procedure to optimise it. Secondly, I propose a principled approach to handle missing modalities based on Hetero-Modal Variational Auto-Encoders. Thirdly, I focus on Weakly Supervised Learning techniques and present a novel approach to train deep image segmentation networks using particularly weak train-time annotations: only 4 (2D) or 6 (3D) extreme clicks at the boundary of the objects of interest. The proposed framework connects the extreme points using a new formulation of geodesics that integrates the network outputs and uses the generated paths for supervision. Fourthly, I introduce a new weakly-supervised Domain Adaptation technique using scribbles on the target domain and formulate it as a cross-domain CRF optimisation problem. Finally, I led the organisation of the first medical segmentation challenge for unsupervised cross-modality domain adaptation (crossMoDA). The benchmark reported in this thesis provides a comprehensive characterisation of cross-modality domain adaptation techniques.

Experiments are performed on brain MR images from patients with different types of brain diseases: gliomas, white matter lesions and vestibular schwannoma. The results demonstrate the broad applicability of the presented frameworks to learn joint segmentation tasks, with the potential to improve brain disease diagnosis and patient management in clinical practice.
3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks
Training deep convolutional neural networks requires a large amount of data to obtain good performance and generalisable results. Transfer learning approaches from datasets such as ImageNet have become important in increasing accuracy and lowering the number of training samples required. However, as of now, there has not been a popular dataset for training on 3D volumetric medical images. This is mainly due to the time and expert knowledge required to accurately annotate medical images. In this study, we present a method for extracting labels from DICOM metadata that contain information on the appearance of the scans, in order to train a medical domain 3D convolutional neural network. The labels include imaging modalities and sequences, patient orientation and view, presence of contrast agent, scan target and coverage, and slice spacing. We applied our method to extract labels from a large amount of cancer imaging data from TCIA and trained a medical domain 3D deep convolutional neural network. We evaluated the effectiveness of our proposed network by transfer learning on a liver segmentation task and found that it achieved superior segmentation performance (DICE=90.0) compared to training from scratch (DICE=41.8). Our proposed network shows promising results for use as a backbone network for transfer learning to other tasks. Our approach, along with our network, can potentially be used to extract features from large-scale unlabelled DICOM datasets.
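A hedged sketch of the metadata-to-label idea using pydicom: the attributes below are standard DICOM header fields, but the exact tag set and any derived thresholds in the paper may differ, and the file path is a placeholder.

```python
# Sketch of label extraction from DICOM headers, in the spirit of the paper.
# Only standard DICOM attributes are read; pixel data is skipped for speed.
import pydicom

def extract_labels(path):
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    return {
        "modality": ds.get("Modality", ""),             # e.g. CT, MR, PT
        "orientation": ds.get("PatientPosition", ""),   # e.g. HFS, FFS
        "contrast": bool(ds.get("ContrastBolusAgent", "")),
        "slice_thickness_mm": float(ds.get("SliceThickness", 0) or 0),
        "body_part": ds.get("BodyPartExamined", ""),
    }

print(extract_labels("series/IM0001.dcm"))  # path is a placeholder
```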
A Content-Based-Image-Retrieval Approach for Medical Image Repositories
Despite recent advances in life sciences and technology, the amount of time and money spent in the drug development process remains drastically inflated. Thus, there is a need to rapidly recognize characteristics that will help identify novel therapies. First, we address the increased need for drug repurposing, the approach of identifying new indications for approved or investigational drugs. We present a novel drug repurposing method called Creating A Translational Network for Indication Prediction (CATNIP), which relies solely on biological and chemical drug characteristics to identify disease areas for specific drugs and drug classes. This drug-focused approach allows our method to be used for both FDA-approved drugs and investigational drugs. Our method, trained with 2,576 diverse small molecules, is built using easily interpretable features, such as chemical structure and targets, allowing probable drug-disease mechanisms to be discovered from the predictions made. The strength of this method's approach is demonstrated through a repurposing network that can be utilized to identify drug class candidate opportunities. In order to treat many of these conditions, a drug compound is orally ingested by a patient. One of the major absorption sites for drugs is the small intestine, and drug properties such as permeability are proven important to maximize treatment efforts. Poor absorption of drug candidates is likely to lead to failure in the drug development process, so we propose an innovative approach to predict the permeability of a drug. The Caco-2 cell model is a standard surrogate for predicting in vitro intestinal permeability. We collected one of the largest experimentally based datasets of Caco-2 values to create a computational model. Using an approach called graph convolutional networks, which treats molecules as graphs, we are able to take in a line-notation molecular structure and successfully make predictions about a drug compound's permeability. Altogether, this work demonstrates how the integration of diverse datasets can aid in addressing the multitude of challenging problems in the field of drug discovery. Computational approaches such as these, which prioritize applicability and interpretability, have the strong potential to transform and improve upon the drug development pipeline.
A COMPUTER AIDED DIAGNOSIS SYSTEM FOR LUNG CANCER DETECTION USING SVM
Computer aided diagnosis is starting to be implemented broadly in the diagnosis and detection of many varieties of abnormalities acquired during various imaging procedures. The main aim of CAD systems is to increase the accuracy and decrease the time of diagnosis, while the general goals of CAD systems are to find the location of nodules and to determine their characteristic features. As lung cancer is one of the most fatal and leading cancer types, there have been plenty of studies on the usage of CAD systems to detect lung cancer. Yet, CAD systems still need considerable development in order to identify different shapes of nodules, segment the lung, and achieve higher levels of sensitivity, specificity and accuracy. This challenge is the motivation of this study in implementing a CAD system for lung cancer detection. In the study, the LIDC database is used, which comprises documented thoracic CT scans of lung cancer. The presented CAD system consists of CT image reading, image pre-processing, segmentation, feature extraction and classification steps. To avoid losing important features, the CT images were read in raw form in the DICOM file format. Then, filtration and enhancement techniques were used for image processing. Otsu's algorithm, edge detection and morphological operations are applied for the segmentation, followed by the feature extraction step. Finally, a support vector machine with a Gaussian RBF kernel, widely used as a supervised classifier, is utilized for the classification step.
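A hedged sketch of the final classification step: an RBF-kernel SVM over nodule features extracted upstream (the feature matrix below is synthetic, standing in for the paper's segmentation-derived features).

```python
# Illustrative RBF-SVM classification stage; data is synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 10))      # shape/texture features per candidate
y = rng.integers(0, 2, size=150)    # 1 = nodule, 0 = non-nodule

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
print("CV accuracy:", cross_val_score(svm, X, y, cv=5).mean())
```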
Attention P-Net for Segmentation of Post-operative Glioblastoma in MRI
Segmentation of post-operative glioblastoma is important for follow-up treatment. In this thesis, Fully Convolutional Networks (FCNs) are utilised together with attention modules for segmentation of post-operative glioblastoma in MR images. Attention-based modules help the FCN focus on relevant features to improve segmentation results. Channel and spatial attention combine the spatial context with the semantic information in MR images. P-Net is used as a backbone for creating an architecture with existing bottleneck attention modules, named attention P-Net. The proposed network and competing techniques were evaluated on an Uppsala University database containing T1-weighted MR images of the brain from 12 subjects. The proposed framework shows substantial improvement over the existing techniques.
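A simplified channel-plus-spatial attention block of the kind the thesis builds on (modelled after CBAM/BAM-style bottleneck attention; this is not the exact attention P-Net module):

```python
# Hedged sketch of a channel + spatial attention block in PyTorch.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze spatially, excite per channel.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        # Spatial attention: one weight per location.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)      # reweight feature maps (semantic context)
        return x * self.spatial(x)   # reweight locations (spatial context)

feat = torch.randn(2, 32, 64, 64)
print(ChannelSpatialAttention(32)(feat).shape)  # torch.Size([2, 32, 64, 64])
```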
Sparse View Deep Differentiated Backprojection for Circular Trajectories in CBCT
In this paper, we present a method for removing streak artifacts from reconstructions of sparse cone beam CT (CBCT) projections along circular trajectories. The differentiated backprojection on 2-D planes is combined with convolutional neural networks for both artifact reduction and the ill-posed inversion of the Hilbert transform. Undersampling errors occur at different stages of the algorithm, so the influence of applying the neural networks at these stages is investigated. Spectral blending is used to combine coronal and sagittal planes to a full 3-D reconstruction. Experimental results show that using a neural network to reconstruct a plane-of-interest from the differentiated backprojection of few projections works best by additionally providing FDK reconstructed planes to the network. This approach reduces streaking and cone beam artifacts compared to the direct FDK reconstruction and is also superior to post-processing CNNs.
A study of machine learning and deep learning models for solving medical imaging problems
Application of machine learning and deep learning methods to medical imaging aims to create systems that can help in the diagnosis of disease and automate the analysis of medical images in order to facilitate treatment planning. Deep learning methods do well in image recognition, but medical images present unique challenges. The lack of large amounts of data, the image size, and the high class imbalance in most datasets make training a machine learning model to recognize a particular pattern that is typically present only in case images a formidable task. Experiments are conducted to classify breast cancer images as healthy or non-healthy, and to detect lesions in damaged brain MRI (Magnetic Resonance Imaging) scans. Random Forest, Logistic Regression and Support Vector Machine perform competitively in the classification experiments, but in general, deep neural networks beat all conventional methods. Gaussian Naïve Bayes (GNB) and the Lesion Identification with Neighborhood Data Analysis (LINDA) methods produce better lesion detection results than single-path neural networks, but a multi-modal, multi-path deep neural network beats all other methods. The importance of pre-processing training data is also highlighted and demonstrated, especially for medical images, which require extensive preparation to improve classifier and detector performance. Only a more complex and deeper neural network combined with properly pre-processed data can produce the desired accuracy levels that can rival and maybe exceed those of human experts.
LCD-OpenPACS: an integrated teleradiology system with computer-aided diagnosis of pulmonary nodules in computed tomography exams
Machine Learning Methods for Image Analysis in Medical Applications From Alzheimer’s Disease, Brain Tumors, to Assisted Living
Chenjie Ge
2020, Thesis, cited 0 times
Thesis
Dissertation
Machine learning
Supervised
Convolutional Neural Network (CNN)
BraTS
Classification
Generative Adversarial Network (GAN)
ADNI
Healthcare has progressed greatly in recent years owing to technological advances, where machine learning plays an important role in processing and analyzing a large amount of medical data. This thesis investigates four healthcare-related issues (Alzheimer's disease detection, glioma classification, human fall detection, and obstacle avoidance in prosthetic vision), where the underlying methodologies are associated with machine learning and computer vision. For Alzheimer's disease (AD) diagnosis, apart from the symptoms of patients, Magnetic Resonance Images (MRIs) also play an important role. Inspired by the success of deep learning, a new multi-stream multi-scale Convolutional Neural Network (CNN) architecture is proposed for AD detection from MRIs, where AD features are characterized at both the tissue level and the scale level for improved feature learning. Good classification performance is obtained for AD/NC (normal control) classification, with a test accuracy of 94.74%. In glioma subtype classification, biopsies are usually needed for determining different molecular-based glioma subtypes. We investigate non-invasive glioma subtype prediction from MRIs by using deep learning. A 2D multi-stream CNN architecture is used to learn the features of gliomas from multi-modal MRIs, where the training dataset is enlarged with synthetic brain MRIs generated by pairwise Generative Adversarial Networks (GANs). A test accuracy of 88.82% has been achieved for IDH mutation (a molecular-based subtype) prediction. A new deep semi-supervised learning method is also proposed to tackle the problem of missing molecular-related labels in training datasets for improving the performance of glioma classification. In the other two applications, we also address video-based human fall detection by using co-saliency-enhanced Recurrent Convolutional Networks (RCNs), as well as obstacle avoidance in prosthetic vision by characterizing obstacle-related video features using a Spiking Neural Network (SNN). These investigations can benefit future research, where artificial intelligence/deep learning may open a new way for real medical applications.
Brain tumor detection from MRI image: An approach
Ghosh, Debjyoti
Bandyopadhyay, Samir Kumar
International Journal of Applied Research, 2017, Journal Article, cited 0 times
Website
Algorithm Development
REMBRANDT
BRAIN
Magnetic Resonance Imaging (MRI)
Segmentation
Computer Aided Detection (CADe)
A brain tumor is an abnormal growth of cells within the brain, which can be cancerous or non-cancerous (benign). This paper detects different types of tumors and cancerous growth within the brain and associated areas by using computerized methods on MRI images of a patient. It is also possible to track the growth patterns of such tumors.
When the machine does not know: measuring uncertainty in deep learning models of medical images
Recently, Deep Learning (DL), which involves powerful black box predictors, has outperformed human experts in several medical diagnostic problems. However, these methods focus exclusively on improving the accuracy of point predictions without assessing their outputs' quality, and ignore the asymmetric cost involved in different types of misclassification errors. Neural networks also do not deliver confidence in predictions and suffer from over- and under-confidence, i.e. they are not well calibrated. Knowing how much confidence there is in a prediction is essential for gaining clinicians' trust in the technology. Calibrated uncertainty quantification is a challenging problem as no ground truth is available. To address this, we make two observations: (i) cost-sensitive deep neural networks with Dropweights models better quantify calibrated predictive uncertainty, and (ii) estimated uncertainty with point predictions in Deep Ensembles Bayesian Neural Networks with DropWeights can lead to a more informed decision and improve prediction quality. This dissertation focuses on quantifying uncertainty using concepts from cost-sensitive neural networks, calibration of confidence, and the Dropweights ensemble method. First, we show how to improve predictive uncertainty through deep ensembles of neural networks with Dropweights, learning an approximate distribution over their weights, in medical image segmentation and its application in active learning. Second, we use the Jackknife resampling technique to correct bias in quantified uncertainty in image classification and propose metrics to measure uncertainty performance. The third part of the thesis is motivated by the discrepancy between the model predictive error and the objective in quantified uncertainty when costs for misclassification errors are asymmetric or datasets are unbalanced. We develop cost-sensitive modifications of the neural networks in disease detection and propose metrics to measure the quality of quantified uncertainty. Finally, we leverage an adaptive binning strategy to measure uncertainty calibration error that directly corresponds to estimated uncertainty performance and address problematic evaluation methods. We evaluate the effectiveness of the tools on nuclei image segmentation, multi-class brain MRI image classification, multi-level cell type-specific protein expression prediction in ImmunoHistoChemistry (IHC) images, and cost-sensitive classification for COVID-19 detection from X-ray and CT image datasets. Our approach is thoroughly validated by measuring the quality of uncertainty. It produces equally good or better results and paves the way for future work that addresses the practical problems at the intersection of deep learning and Bayesian decision theory. In conclusion, our study highlights the opportunities and challenges of applying estimated uncertainty in deep learning models of medical images, representing the confidence of the model's prediction, and the uncertainty quality metrics show a significant improvement when using Deep Ensembles Bayesian Neural Networks with DropWeights.
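A hedged sketch of the ensemble-with-dropout idea in the spirit of the thesis' Dropweights ensembles; standard MC-dropout is shown as a stand-in (the thesis' Dropweights variant drops weights rather than activations):

```python
# Hedged sketch: Monte Carlo dropout as a stand-in for Dropweights ensembles.
# Keeping dropout active at test time yields a distribution over predictions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(),
                      nn.Dropout(p=0.3), nn.Linear(32, 2))

def predict_with_uncertainty(x, n_samples=50):
    model.train()                       # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)  # prediction and its spread

mean, std = predict_with_uncertainty(torch.randn(4, 64))
print(mean, std)                        # high std flags uncertain cases
```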
Deep Learning Architecture to Improve Edge Accuracy of Auto-Contouring for Head and Neck Radiotherapy
The manual delineation of the gross tumor volume (GTV) for Head and Neck Cancer (HNC) patients is an essential step in the radiotherapy treatment process. Methods to automate this process have the potential to decrease the amount of time it takes for a clinician to complete a plan, while also decreasing the inter-observer variability between clinicians. Deep learning (DL) methods have shown great promise in auto-segmentation problems. For HNC, we show that DL methods systematically fail at the axial edges of GTV where the segmentation is dependent on both information from the center of the tumor and nearby slices. These failures may decrease trust and usage of proposed Auto-Contouring Systems if not accounted for. In this paper we propose a modified version of the U-Net, a fully convolutional network for image segmentation, which can more accurately process dependence between slices to create a more robust GTV contour. We also show that it can outperform the current proposed methods that capture slice dependencies by leveraging 3D convolutions. Our method uses Convolutional Recurrent Neural Networks throughout the decoder section of the U-Net to capture both spatial and adjacent-slice information when considering a contour. To account for shifts in anatomical structures through adjacent CT slices, we allow an affine transformation to the adjacent feature space using Spatial Transformer Networks. Our proposed model increases accuracy at the edges by 12% inferiorly and 26% superiorly over a baseline 2D U-Net, which has no inherent way to capture information between adjacent slices.
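As an illustration of the affine alignment step described above, the following PyTorch sketch warps an adjacent slice's feature map onto the current slice with a Spatial Transformer built from F.affine_grid and F.grid_sample; the localisation head and layer sizes are assumptions, not the paper's exact design:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Warp an adjacent slice's feature map onto the current slice before
    # recurrent decoding. The localisation head below is illustrative.
    class AffineAlign(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.loc = nn.Sequential(
                nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 6),
            )
            # Start at the identity transform so training is stable.
            self.loc[-1].weight.data.zero_()
            self.loc[-1].bias.data.copy_(
                torch.tensor([1., 0., 0., 0., 1., 0.]))

        def forward(self, current, adjacent):
            theta = self.loc(torch.cat([current, adjacent], 1)).view(-1, 2, 3)
            grid = F.affine_grid(theta, adjacent.size(), align_corners=False)
            return F.grid_sample(adjacent, grid, align_corners=False)

    cur, adj = torch.randn(2, 16, 64, 64), torch.randn(2, 16, 64, 64)
    aligned = AffineAlign(16)(cur, adj)  # adj warped toward cur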
Targeted Design Choices in Machine Learning Architectures Can Both Improve Model Performance and Support Joint Activity
Opaque models do not support Joint Activity and create brittle systems that fail rapidly when the model reaches the edges of its operating conditions. Instead, we should use models which are observable, directable, and predictable, qualities better served by transparent or ‘explainable’ models. However, using explainable models has traditionally been seen as a trade-off against machine performance, a view that ignores the potential benefits to the performance of the human-machine team. While the cost to model performance is negligible when weighed against the cost to the human-machine team, machine learning with increased accuracy or capability is still beneficial when designed appropriately to deal with failure. Increased accuracy can indicate better alignment with the world, and increased capability allows generalization across a broader variety of cases. Increased capability does not always have to come at the cost of explainability, and this dissertation discusses approaches to make traditionally opaque models more usable in human-machine teaming architectures.
Real-Time Computed Tomography-based Medical Diagnosis Using Deep Learning
Computed tomography has been widely used in medical diagnosis to generate accurate images of the body's internal organs. However, cancer risk is associated with high X-ray dose CT scans, limiting their applicability in medical diagnosis and telemedicine applications. CT scans acquired at low X-ray dose generate low-quality images with noise and streaking artifacts. Therefore, we develop a deep learning-based CT image enhancement algorithm for improving the quality of low-dose CT images. Our algorithm uses a convolutional neural network called DenseNet and Deconvolution network (DDnet) to remove noise and artifacts from the input image. To evaluate its advantages in medical diagnosis, we use DDnet to enhance chest CT scans of COVID-19 patients. We show that image enhancement can improve the accuracy of COVID-19 diagnosis (~5% improvement), using a framework consisting of AI-based tools. For training and inference of the image enhancement AI model, we use a heterogeneous computing platform to accelerate execution and decrease turnaround time. Specifically, we use multiple GPUs in a distributed setup to exploit batch-level parallelism during training. We achieve approximately 7x speedup with 8 GPUs running in parallel compared to training DDnet on a single GPU. For inference, we implement DDnet using OpenCL and evaluate its performance on multi-core CPU, many-core GPU, and FPGA. Our OpenCL implementation is at least 2x faster than the analogous PyTorch implementation on each platform and achieves comparable performance between CPU and FPGA, while the FPGA operates at a much lower frequency.
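A compact sketch of the batch-level multi-GPU parallelism described above, using PyTorch's nn.DataParallel for brevity (the thesis used a distributed setup, for which DistributedDataParallel would be the modern route; the tiny stand-in network and training step are illustrative):

    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    # Tiny stand-in for DDnet: a denoising image-to-image network.
    model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1)).to(device)
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)  # splits each batch across GPUs

    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    x = torch.randn(8, 1, 64, 64, device=device)  # noisy low-dose patches
    y = torch.randn(8, 1, 64, 64, device=device)  # clean reference patches
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()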
Pulmonary nodule segmentation in computed tomography with deep learning
Early detection of lung cancer is essential for treating the disease. Lung nodule segmentation systems can be used together with Computer-Aided Detection (CAD) systems and help doctors diagnose and manage lung cancer. In this work, we create a lung nodule segmentation system based on deep learning, a sub-field of machine learning that has produced state-of-the-art results on several segmentation datasets such as PASCAL VOC 2012. Our model is a modified 3D U-Net, trained on the LIDC-IDRI dataset using the intersection-over-union (IoU) loss function. We show our model works for multiple types of lung nodules. Our model achieves state-of-the-art performance on the LIDC test set, using nodules annotated by at least 3 radiologists and with a consensus truth of 50%.
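A minimal sketch of a differentiable IoU objective of the kind named above (the thesis may differ in details such as smoothing or multi-class handling):

    import torch

    def soft_iou_loss(logits, target, eps=1e-6):
        # logits: (N, 1, D, H, W) raw scores; target: same shape, in {0, 1}.
        probs = torch.sigmoid(logits)
        inter = (probs * target).sum(dim=(1, 2, 3, 4))
        union = (probs + target - probs * target).sum(dim=(1, 2, 3, 4))
        return (1 - (inter + eps) / (union + eps)).mean()

    logits = torch.randn(2, 1, 32, 64, 64, requires_grad=True)
    target = (torch.rand(2, 1, 32, 64, 64) > 0.5).float()
    soft_iou_loss(logits, target).backward()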
Privacy-Preserving Dashboard for F.A.I.R Head and Neck Cancer data supporting multi-centered collaborations
Research in modern healthcare requires vast volumes of data from various healthcare centers across the globe. It is not always feasible to centralize clinical data without compromising privacy, so a tool that addresses these issues and facilitates reuse of clinical data is urgently needed. The Federated Learning approach, governed by a set of agreements such as the Personal Health Train (PHT), tackles these concerns by distributing models to the data centers instead of the traditional approach of centralizing datasets. One of the prerequisites of PHT is using semantically interoperable datasets so that the models are able to find them. FAIR (Findable, Accessible, Interoperable, Reusable) principles help in building interoperable and reusable data by adding knowledge representation and providing descriptive metadata. However, the process of making data FAIR is not always easy and straightforward. Our main objective is to disentangle this process by using domain and technical expertise and get data prepared for federated learning. This paper introduces applications that are easily deployable as Docker containers, which automate parts of the aforementioned process and significantly simplify the task of creating FAIR clinical data. Our method bypasses the need for clinical researchers to have a high degree of technical skill. We demonstrate the FAIR-ification process by applying it to five Head and Neck cancer datasets (four public and one private). The PHT paradigm is explored by building a distributed visualization dashboard from the aggregated summaries of the FAIR-ified datasets. Using the PHT infrastructure for exchanging only statistical summaries or model coefficients allows researchers to explore data from multiple centers without breaching privacy.
Interoperable encoding and 3D printing of anatomical structures resulting from manual or automated delineation
Gregoir, Thibault
2023Thesis, cited 0 times
Thesis
Pancreatic-CT-CBCT-SEG
Segmentation
3D printing
ChatGPT
Computed Tomography (CT)
RTSTRUCT
Surface reconstruction
Interoperable encoding
Manual or automated delineation
The understanding and visualization of the human body have been instrumental in the progress of medical science. Over time, the shift from cumbersome and invasive methods to modern scanners highlights the significance of expertise in retrieving, utilizing, and comprehending the resulting data. 3D rendering and printing of organic structures offer promising applications such as surgical planning and medical education. However, challenges arise as technological advancements generate increasingly vast amounts of data, necessitating seamless manipulation and transfer within the medical field. The goal of this master's thesis is to explore interoperability in encoding 3D models and the ability to print models resulting from 3D reconstruction of medical input data. This exploration covers models originally segmented either by manual delineation or in an automated way. Individual parts of this topic, such as surface reconstruction and automatic segmentation, have already been explored separately; the idea here is to combine the different aspects of this thesis into a single tool available and usable by everyone.
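As a pointer to how one step of such a pipeline can look, here is a minimal sketch that converts a binary segmentation mask (for example, one rasterized from a DICOM RTSTRUCT) into a printable STL surface; the library choices (scikit-image, trimesh) and the toy mask are illustrative, not the thesis's implementation:

    import numpy as np
    from skimage import measure
    import trimesh

    mask = np.zeros((64, 64, 64), dtype=np.uint8)
    mask[16:48, 16:48, 16:48] = 1  # stand-in for an organ/tumor mask

    # Extract a triangle mesh at the mask boundary; spacing = voxel size (mm).
    verts, faces, normals, _ = measure.marching_cubes(
        mask, level=0.5, spacing=(1.0, 1.0, 1.0))
    mesh = trimesh.Trimesh(vertices=verts, faces=faces,
                           vertex_normals=normals)
    mesh.export("structure.stl")  # ready for a slicer / 3D printer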
Using Deep Learning for Pulmonary Nodule Detection & Diagnosis
Gruetzemacher, Richard
Gupta, Ashish
2016Conference Paper, cited 0 times
LIDC-IDRI
Generative Models and Feature Extraction on Patient Images and Structure Data in Radiation Therapy
The aim of this thesis was to examine and enhance the scientific groundwork for translating deep learning (DL) algorithms for brain tumour segmentation into clinical decision support tools. Paper II describes a scoping review conducted to map the field of automatic brain lesion segmentation on magnetic resonance (MR) images according to a predefined and peer-reviewed study protocol (Paper I). Insufficient preprocessing description was identified as one factor hindering clinical implementation of the reviewed algorithms. A reproducibility and replicability analysis of two algorithms was described in Paper III. The two algorithms and their validation studies were previously assessed as reproducible. In this experimental investigation, the original validation results were reproduced and replicated for one algorithm. Analysing the reasons for failure to reproduce validation of the second algorithm led to a suggested update to a commonly used reproducibility checklist; the importance of a thorough description of preprocessing was highlighted. In Paper IV, radiologists' perception of DL-generated brain tumour labels in tumour volume growth assessment was examined. Ten radiologists participated in a reading/questionnaire session of 20 MR examination cases. The readers were confident that the label-derived volume change is more accurate than their visual assessment, even when the inter-rater agreement on the label quality was poor. In Paper V, the broad theme of trust in artificial intelligence (AI) in radiology was explored. A semi-structured interview study with twenty-six AI implementation stakeholders was conducted. Four requirements of the implemented tools and procedures were identified that promote trust in AI: reliability, quality control, transparency, and inter-organisational compatibility. The findings indicate that current strategies to validate DL algorithms do not suffice to assess their accuracy in a clinical setting. Despite the recognition from radiologists that DL algorithms can improve the accuracy of tumour volume assessment, implementation strategies require more work and the involvement of multiple stakeholders.
User-centered design and evaluation of interactive segmentation methods for medical images
Segmentation of medical images is a challenging task that aims to identify a particular structure present in the image. Among the existing methods involving the user at different levels, from fully manual to fully automated, interactive segmentation methods provide assistance to the user during the task to reduce the variability in the results and allow occasional corrections of segmentation failures. They therefore offer a compromise between segmentation efficiency and the accuracy of the results. It is the user who judges whether the results are satisfactory and how to correct them during the segmentation, making the process subject to human factors. Despite the strong influence of the user on the outcomes of a segmentation task, the impact of such factors has received little attention, with the literature focusing the assessment of segmentation processes on computational performance. Yet involving the user's performance in the analysis is more representative of a realistic scenario. Our goal is to explore user behaviour in order to improve the efficiency of interactive image segmentation processes. This is achieved through three contributions. First, we developed a method based on a new user interaction mechanism to provide hints as to where to concentrate the computations. This significantly improves computational efficiency without sacrificing the quality of the segmentation. The benefits of using such hints are twofold: (i) because our contribution is based on user interaction, it generalizes to a wide range of segmentation methods, and (ii) it gives comprehensive indications about where to focus the segmentation search. The latter advantage is used to achieve the second contribution. We developed an automated method based on a multi-scale strategy to (i) reduce the user's workload and (ii) improve the computational time up to tenfold, allowing real-time segmentation feedback. Third, we investigated the effects of such improvements in computation on the user's performance. We report an experiment that manipulates the delay induced by the computation time while performing an interactive segmentation task. Results reveal that the influence of this delay can be significantly reduced with an appropriate interaction mechanism design. In conclusion, this project provides an effective image segmentation solution that has been developed in compliance with user performance requirements. We validated our approach through multiple user studies that provided a step forward in understanding user behaviour during interactive image segmentation.
Efficient Transfer Learning using Pre-trained Models on CT/MRI
The medical imaging field has unique obstacles to face when performing computer vision classification tasks. The retrieval of the data, be it CT scans or MRI, is not only expensive but also limited due to the lack of publicly available labeled data. In spite of this, clinicians often need this medical imaging data to make diagnoses and treatment recommendations. This motivates the use of efficient transfer learning techniques, both to condense the complexity of the often volumetric data and to achieve better results faster through established machine learning techniques like transfer learning, fine-tuning, and shallow deep learning. In this paper, we introduce a three-step process to perform classification using CT scans and MRI data. The process makes use of fine-tuning to align the pretrained model with the target class, feature extraction to preserve learned information for downstream classification tasks, and shallow deep learning to perform subsequent training. Experiments compare the performance of the proposed methodology as well as the time-cost trade-offs of our technique against other baseline methods. Through these experiments we find that our proposed method outperforms all other baselines while achieving a substantial speed-up in overall training time.
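A minimal sketch of the three-step recipe described above, with torchvision's pretrained ResNet-18 standing in for whichever backbone the paper used (the class count and head sizes are assumptions):

    import torch
    import torch.nn as nn
    from torchvision import models

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Step 1: fine-tune toward the target classes (train briefly, small LR).
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)

    # Step 2: freeze the backbone and reuse it as a fixed feature extractor.
    backbone.fc = nn.Identity()
    backbone.eval()
    with torch.no_grad():
        feats = backbone(torch.randn(4, 3, 224, 224))  # (4, 512) features

    # Step 3: train a shallow network on the cached features.
    head = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 2))
    logits = head(feats)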
Brain Tumor Detection using Curvelet Transform and Support Vector Machine
Gupta, Bhawna
Tiwari, Shamik
International Journal of Computer Science and Mobile Computing2014Journal Article, cited 8 times
Website
Artificial Intelligence for Detection of Lung and Airway Nodules in Clinical Chest CT scans
Segmentation of the prostate and its internal anatomical zones in magnetic resonance images is an important step in many diagnostic applications. This task can be time-consuming and is therefore a good candidate for automation. The aim of this thesis has been to train a three-dimensional Convolutional Neural Network (CNN) that segments the prostate and its four anatomical zones, according to the global PI-RADS standard, for use as decision support in the delineation process. This was performed on a publicly available data set that included images for training (n=78) and validation (n=20). For the evaluation, an internal data set from the University Hospital of Umeå consisting of forty patients was used to test the generalization capability of the model. Prior to training, the delineations of the anterior fibromuscular stroma (AFS), the peripheral (PZ), central (CZ) and transitional (TZ) zones, as well as the prostatic urethra, were validated in collaboration with an experienced radiologist. On the test dataset, the Dice score for the whole prostate was 0.88, and for the internal zones PZ 0.72, CZ 0.40, TZ 0.72, urethra 0.05, and AFS 0.34. Accurate segmentation of the urethra was challenging due to the structural differences between the data sets, so these results can be discarded and viewed as less relevant when reviewing the structures. In conclusion, the trained CNN can be used as decision support for prostate zone delineation.
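For reference, a small numpy sketch of the Dice similarity coefficient used above, computed per label of an integer-coded multi-zone mask (the label assignments are illustrative):

    import numpy as np

    def dice(pred, truth, label):
        p, t = (pred == label), (truth == label)
        denom = p.sum() + t.sum()
        return 2.0 * np.logical_and(p, t).sum() / denom if denom else np.nan

    # Toy masks: 0=background, 1=PZ, 2=CZ, 3=TZ, 4=urethra, 5=AFS
    pred = np.random.randint(0, 6, size=(16, 64, 64))
    truth = np.random.randint(0, 6, size=(16, 64, 64))
    scores = {label: dice(pred, truth, label) for label in range(1, 6)}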
Renal Cancer Cell Nuclei Detection from Cytological Images Using Convolutional Neural Network for Estimating Proliferation Rate
Hossain, Shamim
Jalab, Hamid A.
Zulfiqar, Fariha
Pervin, Mahfuza
Journal of Telecommunication, Electronic and Computer Engineering2019Journal Article, cited 0 times
Website
TCGA-KIRC
Kidney
Convolutional Neural Network (CNN)
Machine Learning
Cytological images play an essential role in monitoring the progress of cancer cell mutation, and the proliferation rate of the cancer cells is a prerequisite for cancer treatment. It is hard to identify the nuclei of abnormal cells quickly and accurately and to find the correct proliferation rate, since this requires in-depth manual examination, observation and cell counting, which are tedious and time-consuming. The proposed method starts with segmentation, separating the background and object regions with K-means clustering. Small candidate regions that contain cells are then detected automatically with a support vector machine. The sets of cell regions, whether overlapping or non-overlapping, are marked with selective search according to the local distance between the nucleus and the cell boundary. After that, a regional convolutional neural network learns normal and abnormal cell nuclei separately from the selectively segmented cell features. Finally, the proliferation rate in the invasive cancer area is calculated based on the number of abnormal cells. A set of renal cancer cell cytological images was obtained from the National Cancer Institute, USA, and this data set is available for research. Quantitative evaluation is performed by comparing the method's accuracy with that of other state-of-the-art cancer cell nuclei detection methods; qualitative assessment is based on human observation. The proposed method detects renal cancer cell nuclei accurately and provides an automatic proliferation rate.
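A sketch of the first stage only: K-means clustering on pixel intensities to separate background from candidate cell regions (the later stages, SVM candidate filtering, selective search and the regional CNN, are not reproduced; the toy image and two-cluster setup are assumptions):

    import numpy as np
    from sklearn.cluster import KMeans

    img = np.random.rand(256, 256)  # stand-in for a grayscale cytology image
    km = KMeans(n_clusters=2, n_init=10, random_state=0)
    labels = km.fit_predict(img.reshape(-1, 1)).reshape(img.shape)

    # Take the brighter cluster as foreground (object) regions.
    fg_cluster = np.argmax(km.cluster_centers_.ravel())
    foreground = labels == fg_cluster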
The Study on Data Hiding in Medical Images
Huang, Li-Chin
Tseng, Lin-Yu
Hwang, Min-Shiang
International Journal of Network Security2012Journal Article, cited 25 times
Website
Algorithm Development
Image analysis
Reversible data hiding plays an important role in medical image systems. Many hospitals already use electronic medical information in their healthcare systems, and reversible data hiding is one of the feasible methodologies to protect individual privacy and confidential information. With the application of several high-quality medical devices, diseases can be detected and treated at an earlier stage, and demand has been rising for recognizing complicated anatomical structures in high-quality images. However, most data hiding methods are still applied to 8-bit-depth medical images with 256 intensity levels. This paper summarizes the existing reversible data hiding algorithms and introduces basic knowledge of medical images.
Radiomics of NSCLC: Quantitative CT Image Feature Characterization and Tumor Shrinkage Prediction
APPLICATION OF MAGNETIC RESONANCE RADIOMICS PLATFORM (MRP) FOR MACHINE LEARNING BASED FEATURES EXTRACTION FROM BRAIN TUMOR IMAGES
Idowu, B.A.
Dada, O. M.
Awojoyogbe, O.B.
Journal of Science, Technology, Mathematics and Education (JOSTMED)2021Journal Article, cited 0 times
Website
TCGA-GBM
TCGA-LGG
BRAIN
Magnetic Resonance Imaging (MRI)
Machine Learning
Radiomic features
NIfTI
This study investigated the implementation of the magnetic resonance radiomics platform (MRP) for machine learning-based feature extraction from brain tumor images. Magnetic resonance imaging data publicly available in The Cancer Imaging Archive (TCIA) were downloaded and used to perform image coregistration, multi-modality fusion, image interpolation, morphology operations, and extraction of radiomic features with MRP tools. Radiomics analyses were then applied to the data (containing AX-T1-POST, diffusion-weighted, AX-T2-FSE and AX-T2-FLAIR sequences) using wavelet decomposition principles. The results employing different configurations of low-pass and high-pass filters were exported to Microsoft Excel data sheets, and the exported data were visualized using MATLAB's Classification Learner tool. Together, the exported data and visualizations provide a new way of assessing image data in depth as well as easier interpretation of image scans. Findings from this study revealed that the machine learning radiomics platform is valuable for characterizing and visualizing brain tumors and provides adequate information about them.
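For orientation, a minimal PyRadiomics extraction with the wavelet image type enabled, loosely mirroring the workflow above (the file paths and default settings are placeholders, not the study's configuration):

    from radiomics import featureextractor

    extractor = featureextractor.RadiomicsFeatureExtractor()
    extractor.enableImageTypeByName("Wavelet")  # LL/LH/HL/HH decompositions

    # NIfTI image and tumor mask; replace with real co-registered volumes.
    features = extractor.execute("brain_t1.nii.gz", "tumor_mask.nii.gz")
    wavelet_features = {k: v for k, v in features.items()
                        if k.startswith("wavelet")}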
X-ray CT scatter correction by a physics-motivated deep neural network
A fundamental problem in X-ray Computed Tomography (CT) is the scatter occurring due to the interaction of photons with the imaged object. Unless it is corrected, this phenomenon manifests itself as degradations of the reconstructions in the form of various artifacts, which makes scatter correction a critical step in obtaining the desired reconstruction quality. Scatter correction methods fall into two groups: hardware-based and software-based. Despite success in specific settings, hardware-based methods require modifications to the hardware or an increase in the scan time or dose, which makes software-based methods attractive. In this context, Monte-Carlo-based scatter estimation, analytical-numerical, and kernel-based methods were developed, and the capacity of data-driven approaches to tackle this problem was recently demonstrated. In this thesis, two novel physics-motivated deep-learning-based methods are proposed. The methods estimate and correct for the scatter in the obtained projection measurements, incorporating both an initial reconstruction of the object of interest and the scatter-corrupted measurements related to it. They share a common deep neural network architecture and a cost function adapted to the problem. Numerical experiments with data obtained by Monte-Carlo simulations of the imaging of phantoms reveal a noticeable improvement over a recent projection-domain deep neural network correction method.
Lung Cancer Detection and Classification using Machine Learning Algorithm
Ismail, Meraj Begum Shaikh
Turkish Journal of Computer and Mathematics Education (TURCOMAT)2021Journal Article, cited 0 times
Website
LungCT-Diagnosis
Machine Learning
Segmentation
LUNG
co-occurrence matrix
The main objective of this research paper is to detect lung cancer at an early stage and to explore the accuracy levels of various machine learning algorithms. A systematic literature study showed that some classifiers have low accuracy while others achieve higher accuracy, yet approaching 100% remains difficult; low accuracy and high implementation cost often stem from improper handling of DICOM images. Many different types of images are used in medical image processing, but Computed Tomography (CT) scans are generally preferred because of their lower noise. Deep learning has proven to be an effective method for medical image processing, lung nodule detection and classification, feature extraction, and lung cancer stage prediction. In the first stage, this system uses image processing techniques to extract lung regions, with segmentation done using K-means. Features are extracted from the segmented images, and classification is performed using various machine learning algorithms. The performance of the proposed approaches is evaluated based on accuracy, sensitivity, specificity, and classification time.
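Given the co-occurrence-matrix keyword above, here is a small sketch of GLCM texture features with scikit-image; the ROI, distances, angles and chosen properties are illustrative, not the paper's exact feature set:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    roi = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in lung ROI
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    features = {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy",
                             "correlation")}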
Retina U-Net: Embarrassingly Simple Exploitation of Segmentation Supervision for Medical Object Detection
The task of localizing and categorizing objects in medical images often remains formulated as a semantic segmentation problem. This approach, however, only indirectly solves the coarse localization task by predicting pixel-level scores, requiring ad-hoc heuristics when mapping back to object-level scores. State-of-the-art object detectors on the other hand, allow for individual object scoring in an end-to-end fashion, while ironically trading in the ability to exploit the full pixel-wise supervision signal. This can be particularly disadvantageous in the setting of medical image analysis, where data sets are notoriously small. In this paper, we propose Retina U-Net, a simple architecture, which naturally fuses the Retina Net one-stage detector with the U-Net architecture widely used for semantic segmentation in medical images. The proposed architecture recaptures discarded supervision signals by complementing object detection with an auxiliary task in the form of semantic segmentation without introducing the additional complexity of previously proposed two-stage detectors. We evaluate the importance of full segmentation supervision on two medical data sets, provide an in-depth analysis on a series of toy experiments and show how the corresponding performance gain grows in the limit of small data sets. Retina U-Net yields strong detection performance only reached by its more complex two-staged counterparts. Our framework including all methods implemented for operation on 2D and 3D images is available at github.com/pfjaeger/medicaldetectiontoolkit.
Quantitative imaging in radiation oncology: An emerging science and clinical service
We present an AI-assisted approach for classifying the malignancy of lung nodules in CT scans for explainable AI-assisted lung cancer screening. We evaluate this explainable classification of lung nodule malignancy against the LIDC-IDRI dataset. The LIDC-IDRI dataset includes biomarkers from radiologists' annotations, thereby providing a training dataset for nodule malignancy suspicion and other findings. The algorithm employs a 3D Convolutional Neural Network (CNN) to predict both the malignancy suspicion level and the biomarker attributes. Some biomarkers, such as malignancy and subtlety, are ordinal in nature, while others, such as internal structure and calcification, are categorical. Our approach is uniquely able to predict a multitude of fields, not only estimating malignancy but also many other correlated biomarker variables. We evaluate the malignancy classification algorithm in several ways, including presenting the accuracy of malignancy screening as well as comparable metrics for the biomarker fields.
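A sketch of the multi-output idea: a shared 3D convolutional trunk with separate heads for ordinal attributes (treated as regression here) and categorical ones. Layer sizes, head choices and the six calcification types are assumptions, not the paper's architecture:

    import torch
    import torch.nn as nn

    class NoduleNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
            self.malignancy = nn.Linear(32, 1)      # ordinal -> regression
            self.subtlety = nn.Linear(32, 1)        # ordinal -> regression
            self.calcification = nn.Linear(32, 6)   # categorical -> softmax

        def forward(self, x):
            z = self.trunk(x)
            return (self.malignancy(z), self.subtlety(z),
                    self.calcification(z))

    mal, sub, calc = NoduleNet()(torch.randn(2, 1, 32, 32, 32))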
A First Step Towards an Algorithm for Breast Cancer Reoperation Prediction Using Machine Learning and Mammographic Images
Cancer is the second leading cause of death worldwide, and 30% of all cancer cases among women are breast cancer. A popular treatment is breast-conserving surgery, where only part of the breast is surgically removed. Surgery is expensive and has a significant impact on the body, and some women need a reoperation. The aim of this thesis was to investigate the possibility of predicting whether a person will need a reoperation, using whole mammographic images and deep learning. The data used in this thesis were collected from two open sources: (1) the Chinese Mammography Database (CMMD), from which 1052 benign and 1090 malignant images were used, and (2) the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), from which 182 benign and 145 malignant images were used. With those images, both a simple convolutional neural network (CNN) and a transfer learning network using the pre-trained model MobileNet were trained to classify the images as benign or malignant. All networks were evaluated using learning curves, confusion matrices, accuracy, sensitivity, specificity, AUC and ROC curves. The best result, an AUC of 0.599, was obtained by a transfer learning network that used the pre-trained model MobileNet and trained on the CMMD data set.
Radiogenomic correlation for prognosis in patients with glioblastoma multiforme
Training of deep convolutional neural nets to extract radiomic signatures of tumors
Kim, J.
Seo, S.
Ashrafinia, S.
Rahmim, A.
Sossi, V.
Klyuzhin, I.
Journal of Nuclear Medicine2019Journal Article, cited 0 times
Head-Neck-PET-CT
Radiomics
Objectives: Radiomics-based analysis of FDG PET images has been shown to improve the assessment and prediction of tumor growth rate, response to treatment and other patient outcomes [1]. An alternative new approach to image analysis involves the use of convolutional neural networks (CNNs), wherein relevant image features are learned implicitly and automatically in the process of network training [2]; this is in contrast to radiomics analyses, where the features are “hand-crafted” and explicitly computed (EC). Although CNNs represent a more general approach, it is not clear whether the implicitly learned features can include radiomics features (RFs) as a subset. If this is the case, CNN-based approaches may eventually obviate the use of EC RFs. Further, the use of CNNs instead of RFs may completely eliminate the need for feature selection and tumor delineation, enabling high-throughput data analyses. Thus, our objective was to test whether CNNs can learn to act similarly to several commonly used RFs. Using a set of simulated and real FDG PET images of tumors, we train the CNNs to estimate the values of RFs from the images without explicit computation, and then compare the values of the CNN-estimated and EC features. Methods: Using a stochastic volumetric model for tumor growth, 2000 FDG images of tumors confined to a bounding box (BB) were simulated (40x40x40 voxels, voxel size 2.0 mm), and 10 RFs (3 morphology, 4 intensity-histogram, and 3 texture features) were computed for each image using the SERA library [3] (compliant with the Image Biomarker Standardization Initiative, IBSI [4]). A 3D CNN with 4 convolutional layers and a total of 164 filters was implemented in Python using the Keras library with TensorFlow backend (https://www.keras.io). The mean absolute error was the optimized loss function. The CNN was trained to automatically estimate the values of each of the 10 RFs for each image; 1,900 images were used for training and 100 for testing, to compare the CNN-estimated values to the EC feature values. We also used a secondary test set comprising 133 real tumor images, obtained from the head and neck PET/CT imaging study [5] publicly available at The Cancer Imaging Archive. The tumors were cropped to a BB, and the images were resampled to yield an image size similar to the simulated set. Results: After the training procedure, on the simulated test set the CNN was able to estimate the values of most EC RFs with 10-20% error (relative to the range). In the morphology group, the errors were 3.8% for volume, 12.0% for compactness, and 15.7% for flatness. In the intensity group, the errors were 13.7% for the mean, 15.4% for variance, 12.3% for skewness, and 13.1% for kurtosis. In the texture group, the error was 10.6% for GLCM contrast, 13.4% for cluster tendency, and 21.7% for angular momentum. For all features, the differences between the CNN-estimated and EC feature values were statistically insignificant (two-sample t-test), and the correlation between the feature values was highly significant (p<0.01). On the real image test set, we observed higher error rates, on the order of 20-30%; however, for all but one feature (angular momentum), there was a significant correlation between the CNN-estimated and EC features (p<0.01). Conclusions: Our results suggest that CNNs can be trained to act similarly to several widely used RFs.
While the accuracy of CNN-based estimates varied between the features, in general the CNN showed a good propensity for learning. Thus, it is likely that with more complex network architectures and more training data, features can be estimated more accurately. While a greater number of RFs need to be similarly tested in the future, these initial experiments provide first evidence that, given sufficient quality and quantity of training data, CNNs indeed represent a more general approach to feature extraction, and may potentially replace radiomics-based analyses without compromising descriptive thoroughness.
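A rough Keras re-sketch of the network described above, mapping a 40x40x40 tumour volume to the 10 radiomic-feature values with a mean-absolute-error loss; the per-layer filter counts are assumptions chosen only so the total matches the reported 164:

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        keras.Input(shape=(40, 40, 40, 1)),
        layers.Conv3D(16, 3, activation="relu"), layers.MaxPooling3D(2),
        layers.Conv3D(32, 3, activation="relu"), layers.MaxPooling3D(2),
        layers.Conv3D(52, 3, activation="relu"),
        layers.Conv3D(64, 3, activation="relu"),
        layers.GlobalAveragePooling3D(),
        layers.Dense(10),  # one output per radiomic feature
    ])
    model.compile(optimizer="adam", loss="mean_absolute_error")

    x = np.random.rand(8, 40, 40, 40, 1).astype("float32")  # toy volumes
    y = np.random.rand(8, 10).astype("float32")             # EC feature values
    model.fit(x, y, epochs=1, verbose=0)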
A Study on the Geometrical Limits and Modern Approaches to External Beam Radiotherapy
Radiation therapy is integral to treating cancer and improving survival probability, and improving treatment methods and modalities can significantly affect the quality of life of cancer patients. One such method is stereotactic radiotherapy, a form of External Beam Radiotherapy (EBRT) that delivers a highly conformal dose of radiation to a target from beams arranged at many different angles. The goal of any radiotherapy treatment is to deliver radiation only to the cancerous cells while maximally sparing other tissues; however, such a perfect treatment outcome is difficult to achieve due to the physical limitations of EBRT. The quality of treatment depends on the characteristics of these beams and the number of angles from which radiation is delivered, although as technology and techniques have improved, the dependence on beam quality and beam coverage may have become less critical.

This thesis investigates different geometric aspects of stereotactic radiotherapy and their impacts on treatment quality. The specific aims are: (1) to explore the treatment outcome of a virtual stereotactic delivery with no geometric limits in the sense of physical collisions, which allows the full solid-angle treatment space to be investigated and tests whether a large solid-angle space is necessary to improve treatment; (2) to evaluate the effect of a reduced solid angle with a specific radiotherapy device using real clinical cases; (3) to investigate how the quality of a single beam influences treatment outcome when multiple overlapping beams are in use; and (4) to study the feasibility of using a novel treatment method of lattice radiotherapy with an existing stereotactic device for treating breast cancer. All these aims were investigated with the use of inverse planning optimization and Monte-Carlo-based particle transport simulations.
An Enhanced Convolutional Neural Architecture with Residual Module for Mri Brain Image Classification System
Kumar, S Mohan
Yadav, K.P.
Turkish Journal of Physiotherapy and Rehabilitation2021Journal Article, cited 0 times
Website
Deep Learning
Classification
REMBRANDT
Computer Aided Diagnosis (CADx)
Deep Neural Networks (DNNs) have played an important role in the analysis of images and signals, owing to their ability to abstract features very deeply. In the field of medical image processing, DNNs provide a recognition method for classifying abnormalities in medical images. In this paper, a DNN-based Magnetic Resonance Imaging (MRI) brain image classification system with a modified residual module, named the Pyramid Design of Residual (PDR) system, is developed; the conventional residual modules are arranged in a pyramid-like architecture. Classification tests performed on the REpository of Molecular BRAin Neoplasia DaTa (REMBRANDT) database demonstrate that the DNN-PDR system improves accuracy, with notable results in terms of accuracy (99.5%), specificity (100%) and sensitivity (99%). A comparison between the DNN-PDR system and existing systems is also given.
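For reference, a standard residual module in PyTorch; the abstract does not specify the pyramid arrangement in detail, so only the basic building block is sketched, with illustrative layer sizes:

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
            )

        def forward(self, x):
            return torch.relu(x + self.body(x))  # identity shortcut

    out = ResidualBlock(16)(torch.randn(1, 16, 64, 64))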
Textural Analysis of Tumour Imaging: A Radiomics Approach
Conventionally, tumour characteristics are assessed by performing a biopsy. These biopsies are invasive and subject to the problem of tumour heterogeneity. However, analysis of imaging data may render the need for such biopsies obsolete. This master's dissertation describes how images of tumour masses can be post-processed to classify the tumours into a variety of clinical response classes. Tumour images obtained using both computed tomography and magnetic resonance imaging are analysed. The analysis of these images is done using a radiomics approach, which converts the imaging data into a high-dimensional mineable feature space. The features considered are first-order statistics, texture features, wavelet-based features and shape parameters. Post-processing techniques applied to this feature space include k-means clustering, assessment of stability and prognostic performance, and machine learning techniques; both random forests and neural networks are included. Results from these analyses show that the radiomics features can be correlated with different clinical response classes and can serve as input data to create predictive models with correct prediction rates of up to 63.9% in CT and 66.0% in MRI. Furthermore, a radiomics signature can be created that consists of four features and is capable of predicting clinical response factors with almost the same accuracy as obtained using the entire data space.
Keywords - Radiomics, texture analysis, lung tumour, CT, brain tumour, MRI, clustering, random forest, neural network, machine learning, radiomics signature, biopsy, tumour heterogeneity
Conditional random fields improve the CNN-based prostate cancer classification performance
Prostate cancer is a condition with life-threatening implications but without clearly identified causes. Several diagnostic procedures can be used, ranging from human-dependent and very invasive to state-of-the-art non-invasive medical imaging. With recent academic and industry focus on the deep learning field, novel research has investigated how to improve prostate cancer diagnosis using Convolutional Neural Networks to interpret Magnetic Resonance images. Conditional Random Fields have achieved outstanding results in the image segmentation task by promoting homogeneous classification at the pixel level. A new implementation, CRF-RNN, defines Conditional Random Fields by means of convolutional layers, allowing end-to-end training of the feature extractor and classifier models. This work repurposes CRFs for the image classification task, a more traditional sub-field of image analysis, in a way that, to the best of the author's knowledge, has not been implemented before. To achieve this, a purpose-built architecture was refitted, adding a CRF-RNN layer as a feature-extraction step. As the implementation's benchmark, a multi-parametric Magnetic Resonance Imaging dataset was used, initially provided for the PROSTATEx Challenge 2017 and collected by Radboud University. The results are very promising, showing an increase in the network's classification quality.
Automatic Prostate Cancer Segmentation Using Kinetic Analysis in Dynamic Contrast-Enhanced MRI
Lavasani, S Navaei
Mostaar, A
Ashtiyani, M
Journal of Biomedical Physics & Engineering2018Journal Article, cited 0 times
Website
QIN PROSTATE
DCE-MRI
Prostate Cancer
Semi-quantitative Feature
Wavelet Kinetic Feature
Segmentation
Quantitative neuroimaging with handcrafted and deep radiomics in neurological diseases
Lavrova, Elizaveta
2024Thesis, cited 0 times
Dissertation
Thesis
Radiomics
LGG-1p19qDeletion
TCGA-LGG
neuroimaging
medical image analysis
clinical decision support
Magnetic Resonance Imaging (MRI)
Deep learning
The motivation behind this thesis is to explore the potential of "radiomics" in the field of neurology, where early diagnosis and accurate treatment selection are crucial for improving patient outcomes. Neurological diseases are a major cause of disability and death globally, and there is a pressing need for reliable imaging biomarkers to aid in disease detection and monitoring. While radiomics has shown promising results in oncology, its application in neurology remains relatively unexplored. Therefore, this work aims to investigate the feasibility and challenges of implementing radiomics in the neurological context, addressing various limitations and proposing potential solutions. The thesis begins with a demonstration of the predictive power of radiomics for identifying important diagnostic biomarkers in neuro-oncology. Building on this foundation, the research then delves into radiomics in non-oncological neurology, providing an overview of the pipeline steps, potential clinical applications, and existing challenges. Despite promising results in proof-of-concept studies, the field faces limitations, mostly data-related, such as small sample sizes, retrospective designs, and lack of external validation. To explore the predictive power of radiomics in non-oncological tasks, a radiomics approach was implemented to distinguish between multiple sclerosis patients and normal controls. Notably, radiomic features extracted from normal-appearing white matter were found to contain distinctive information for multiple sclerosis detection, confirming the hypothesis of the thesis. To overcome the data harmonization challenge, quantitative mapping of the brain was used in this work. Unlike traditional imaging methods, quantitative mapping involves measuring the physical properties of brain tissues, providing a more standardized and consistent data representation. By reconstructing the physical properties of each voxel based on multi-echo MRI acquisition, quantitative mapping produces data that is less susceptible to domain-specific biases and scanner variability. Additionally, the insights gained from quantitative mapping build a bridge toward the physical and biological properties of brain tissues, providing a deeper understanding of the underlying pathology. Another crucial challenge in radiomics is robust and fast data labeling, particularly segmentation. A deep learning method was proposed to perform automated carotid artery segmentation in stroke at-risk patients, surpassing current state-of-the-art approaches. This novel method showcases the potential of automated segmentation to enhance radiomics pipeline implementation. In addition to addressing specific challenges, the thesis also proposes a community-driven open-source toolbox for radiomics, aimed at enhancing pipeline standardization and transparency. This software package facilitates data curation and exploratory analysis, fostering collaboration and reproducibility in radiomics research. Through an in-depth exploration of radiomics in neuroimaging, this thesis demonstrates its potential to enhance neurological disease diagnosis and monitoring. By uncovering valuable information from seemingly normal brain tissues, radiomics holds promise for early disease detection. Furthermore, the development of innovative tools and methods, including deep learning and quantitative mapping, has the potential to address data labeling and harmonization challenges.
Looking to the future, embracing larger, diverse datasets and longitudinal studies will further enhance the generalizability and predictive power of radiomics in neurology. By addressing the challenges identified in this thesis and fostering collaboration within the research community, radiomics can advance toward clinical implementation, revolutionizing precision medicine in neurology.
Machine Learning Models on Prognostic Outcome Prediction for Cancer Images with Multiple Modalities
Machine learning algorithms have been applied to predict different prognostic outcomes for many different diseases directly from medical images. However, the higher resolution of various medical imaging modalities and new imaging feature extraction frameworks bring new challenges for predicting prognostic outcomes. Compared to traditional radiology practice, which is based only on visual interpretation and simple quantitative measurements, medical imaging features can dig deeper within medical images and potentially provide further objective support for clinical decisions. In this dissertation, we cover three projects that apply or design machine learning models for predicting prognostic outcomes using various types of medical images.
Deep learning for magnetic resonance imaging-genomic mapping of invasive breast carcinoma
To identify MRI-based radiomic features that could be obtained automatically by a deep learning (DL) model and could predict the clinical characteristics of breast cancer (BC), and to explain the potential underlying genomic mechanisms of the predictive radiomic features. A denoising autoencoder (DA) was developed to retrospectively extract 4,096 phenotypes from the MRI of 110 BC patients collected by The Cancer Imaging Archive (TCIA). The associations of these phenotypes with genomic features (commercialized gene signatures, expression of risk genes, and biological pathway activities extracted from the same patients' mRNA expression collected by The Cancer Genome Atlas (TCGA)) were tested with linear mixed-effect (LME) models. A least absolute shrinkage and selection operator (LASSO) model was used to identify the most predictive MRI phenotypes for each clinical phenotype (tumor size (T), lymph node metastasis (N), and status of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2)). More than 1,000 of the 4,096 MRI phenotypes were associated with the activities of risk genes, gene signatures, and biological pathways (adjusted P-value < 0.05). High performance was obtained in predicting the status of T, N, ER, PR, and HER2 (AUC>0.9). The identified MRI phenotypes also showed significant power to stratify BC tumors. DL-based automatic MRI features performed very well in predicting the clinical characteristics of BC, and these phenotypes were identified to have genomic significance.
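A minimal sketch of the LASSO selection step: an L1-penalised classifier keeps only the phenotypes with non-zero coefficients for one clinical endpoint (the shapes mirror the abstract's 110 patients by 4,096 phenotypes; the penalty strength is an assumption):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.random.rand(110, 4096)          # autoencoder-derived MRI phenotypes
    y = np.random.randint(0, 2, size=110)  # e.g. ER-positive vs ER-negative

    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    lasso.fit(X, y)
    selected = np.flatnonzero(lasso.coef_.ravel())  # predictive phenotypes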
Evaluating the Interference of Noise when Performing MRI Segmentation
Lung cancer is diagnosed through the detection and interpretation of (pulmonary) lung nodules, small masses of tissue in a patient's lung. In order to determine a patient's risk of lung cancer, radiologists assess each nodule's malignancy risk based on characteristics such as location, size and shape. The task of lung nodule malignancy classification has been shown to be successfully solved by deep learning models, but these models are still susceptible to over-confident or wrong predictions. It is difficult to understand the reasoning behind these predictions because of the models' black-box nature. As a result, medical experts lack trust in these models, which hinders their adoption in practice. This lack of trust can be addressed through the fields of explainable AI (XAI) and visual analytics (VA). Explainable AI addresses the reasoning about the decisions of machine learning models through several explainability techniques. Visual analytics, on the other hand, focuses on the transparent communication of the model's predictions as well as on solving complex analysis tasks. We propose LungVISX, a system designed to explain lung nodule malignancy classification by implementing explainability techniques in a visual analytics tool, enabling experts to explore and analyze the predictions of a nodule malignancy classification model. We address explainability through a model that incorporates the nodule characteristics in its decisions. Moreover, ensembles, which provide the uncertainty of predictions, and attribution methods, which provide location-based information for these predictions, are used to explain the model's decisions. The visual analytics tool allows for complex analysis of the explanations of the models. A nodule can be compared to its cohort in terms of characteristics and malignancy, both for the prediction score and its uncertainty. Moreover, detection and analysis of important and uncertain areas of a nodule, related to characteristic and malignancy predictions, can be performed. To our knowledge, no tool has been proposed that provides such an exploration of explainable methods in the context of lung nodule malignancy classification. The value of the proposed system has been assessed based on use cases, model performance and a user study with three radiologists. The use cases explore and illustrate the capabilities of the visual tool. Model performance and model interpretability face a trade-off, as incorporating characteristic predictions in the model led to lower performance. However, the radiologists evaluated the final system as interpretable and effective, highlighting the potential of the tool for explaining the reasoning of a lung cancer malignancy classification model.
Quantitative cone-beam computed tomography reconstruction for radiotherapy planning
Radiotherapy planning involves the calculation of dose deposition throughout the patient, based upon quantitative electron density images from computed tomography (CT) scans taken before treatment. Cone-beam CT (CBCT), consisting of a point source and a flat panel detector, is often built onto radiotherapy delivery machines and used during a treatment session to ensure alignment of the patient to the plan. If the plan could be recalculated throughout the course of treatment, then margins of uncertainty and toxicity to healthy tissues could be reduced. However, CBCT reconstructions are normally too poor to be used as the basis of planning, due to their insufficient sampling, beam hardening and high levels of scatter. In this work, we investigate reconstruction techniques to enable dose calculation from CBCT. Firstly, we develop an iterative method for directly inferring electron density from the raw X-ray measurements, which is robust to both low doses and polyenergetic artefacts from hard bone and metallic implants. Secondly, we supplement this with a fast integrated scatter model, also able to take into account the polyenergetic nature of the diagnostic X-ray source. Finally, we demonstrate the ability to provide accurate dose calculation using our methodology in numerical and physical experiments. Not only does this unlock the capability to perform CBCT radiotherapy planning, offering more targeted and less toxic treatment, but the developed techniques are also applicable and beneficial for many other CT applications.
“One Stop Shop” for Prostate Cancer Staging using Imaging Biomarkers and Spatially Registered Multi-Parametric MRI
Mayer, Rulon
2020Patent, cited 0 times
Prostate
Biomarker
Multi-parametric MRI
patent
EQUIPMENT TO ADDRESS INFRASTRUCTURE AND HUMAN RESOURCE CHALLENGES FOR RADIOTHERAPY IN LOW-RESOURCE SETTINGS
Millions of people in low- and middle-income countries (LMICs) are without access to radiation therapy, and as population growth in these regions increases and lifestyle factors indicative of cancer become more common, the cancer burden will only rise. There are many reasons for this lack of access, but two recurring themes are the lack of affordable and reliable teletherapy units and insufficient properly trained staff to deliver high-quality care. The purpose of this work was to investigate two proposed efforts to improve access to radiotherapy in low-resource areas: an upright radiotherapy chair (to facilitate low-cost treatment devices) and a fully automated treatment planning strategy. A fixed-beam patient treatment device would reduce the upfront and ongoing cost of teletherapy machines. The enabling technology for such a device is the immobilization chair. A rotating seated patient not only allows for a low-cost fixed treatment machine but also has dosimetric and comfort advantages. We examined the inter- and intra-fraction setup reproducibility and showed both are less than 3 mm, similar to reports for the supine position. The head-and-neck treatment site, one of the most challenging to plan, benefits greatly from advanced treatment planning strategies. These strategies, however, require time-consuming normal tissue and target contouring and complex plan optimization. An automated treatment planning approach could reduce the additional number of medical physicists (the primary treatment planners) needed in LMICs by up to half. We used in-house algorithms, including multi-atlas contouring and quality assurance checks, combined with tools in the Eclipse Treatment Planning System®, to automate every step of the treatment planning process for head-and-neck cancers. Requiring only the patient CT scan, patient details including dose and fractionation, and contours of the gross tumor volume, high-quality treatment plans can be created in less than 40 minutes.
A Neural Network Approach to Deformable Image Registration
Deformable image registration (DIR) is an important component of a patient's radiation therapy treatment. During the planning stage it combines complementary information from different imaging modalities and time points. During treatment, it aligns the patient to a reproducible position for accurate dose delivery. As the treatment progresses, it can inform clinicians of important anatomical changes that trigger plan adjustment. Finally, after treatment is complete, registering images at subsequent time points can help monitor the patient's health. The body's natural non-rigid motion makes DIR a complex challenge. Recently, neural networks have shown impressive improvements in image processing and have been leveraged for DIR tasks. This thesis is a compilation of neural network-based approaches addressing lingering issues in medical DIR, namely 1) multi-modality registration, 2) registration with different scan extents, and 3) modeling large motion in registration. For the first task we employed a cycle-consistent generative adversarial network to translate images from the MRI domain to the CT domain, so that the moving and target images were in a common domain and DIR could proceed as a synthetically bridged mono-modality registration. The second task used advances in network-based inpainting to artificially extend images beyond their scan extent. The third task leveraged axial self-attention networks' ability to learn long-range interactions to predict the deformation in the presence of large motion. For all these studies we used images from the head and neck, which exhibit all of these challenges, although the results can be generalized to other parts of the anatomy. Our experiments yielded networks that showed significant improvements in multi-modal DIR relative to traditional methods. We also produced a network which can successfully predict missing tissue, and demonstrated a DIR workflow that is independent of scan length. Finally, we trained a network whose accuracy balances large and small motion prediction, and which opens the door to non-convolution-based DIR. By leveraging the power of artificial intelligence, we demonstrate a new paradigm in deformable image registration. Neural networks learn patterns and connections in imaging data which go beyond the hand-crafted features of traditional image processing. This thesis shows how each step of registration, from image pre-processing to the registration itself, can benefit from this exciting and cutting-edge approach.
Detection of Lung Cancer Nodule on CT scan Images by using Region Growing Method
Mhetre, Rajani R
Sache, Rukhsana G
International Journal of Current Trends in Engineering & Research2016Journal Article, cited 0 times
Website
LIDC-IDRI
Radiomics
Predicting survival status of lung cancer patients using machine learning
The 5-year survival rate of patients with metastasized non-small cell lung cancer (NSCLC) who received chemotherapy was less than 5% (Kathryn C. Arbour, 2019). The ability to predict a patient's survival status, i.e., alive or deceased, at any future time is important from at least two standpoints: a) clinically, it enables clinicians to provide optimal delivery of healthcare, and b) personally, it provides the patient's family with opportunities to plan ahead and potentially cope with the emotional aspect of loss of life. In this thesis, we investigate different approaches for predicting the survival status of patients suffering from non-small cell lung cancer. In Chapter 2, we review the background of machine learning and related work in cancer prediction, followed by the steps to take before applying machine learning classifiers to a training dataset. In Chapter 3, we present the classifiers on which our analysis is performed and list the evaluation metrics used to measure performance. In Chapter 4, the dataset and the results of different tests performed on the training data are discussed. In the last chapter, we conclude our findings and present suggestions for future work.
In this work, we present a novel method to segment brain tumors using deep learning. Accurate brain tumor segmentation is key for a patient to get the right treatment and for the doctor who must perform surgery. Due to the genetic differences that exist between patients, even within the same kind of tumor, an accurate segmentation is crucial. To beat state-of-the-art methods, we use deep learning, a branch of machine learning that attempts to model high-level abstractions in data and has provided major breakthroughs in many different areas, including segmentation. We use Convolutional Neural Networks (CNNs) and evaluate our results by comparing our method against the best results obtained in the Brain Tumor Segmentation Challenge (BRATS).
Towards Explainable Deep Learning in Oncology: Integrating EfficientNet-B7 with XAI techniques for Acute Lymphoblastic Leukaemia
Acute Lymphoblastic Leukaemia (ALL) presents a potential risk to human health due to its rapid progression and impact on the body's blood-producing system. An accurate diagnosis derived through investigations plays a crucial role in formulating effective treatment plans that can influence the likelihood of patient recovery. In the pursuit of improved diagnostic accuracy, diverse Machine Learning (ML) and Deep Learning (DL) approaches have been employed, demonstrating significant improvement in analyzing intricate biomedical data for identifying ALL. However, the complex nature of these algorithms often makes them difficult to comprehend, posing challenges for patients, medical professionals, and the wider community. To address this issue, it is essential to clarify the functioning of these ML/DL models, strengthening trust and providing users with a clearer understanding of diagnostic outcomes. This paper introduces an innovative framework for ALL diagnosis that combines the EfficientNet-B7 architecture with Explainable Artificial Intelligence (XAI) methods. Firstly, the proposed model accurately classified ALL using the C-NMC-19 and Taleqani Hospital datasets. Its efficacy was rigorously validated using established evaluation metrics, notably AUC, mAP, Accuracy, Precision, Recall, and F1-score. Secondly, the XAI approaches Grad-CAM, LIME and IG were applied to explain the proposed model's decisions. Our contributions, pioneering the explanation of EfficientNet-B7 decisions using XAI for the diagnosis of ALL, set a new benchmark for trust and transparency in the medical field.
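As a concrete illustration of the IG component, the sketch below applies Integrated Gradients via the Captum library to an arbitrary trained classifier. The paper does not state which implementation it used; `model`, `image`, and `target` are placeholders.

```python
import torch
from captum.attr import IntegratedGradients

def explain_prediction(model: torch.nn.Module, image: torch.Tensor, target: int) -> torch.Tensor:
    """Attribute a classifier's decision to input pixels with Integrated Gradients.

    `model` is any trained classifier (e.g. an EfficientNet-B7) in eval mode and
    `image` a normalized (1, C, H, W) tensor; both are placeholders.
    """
    ig = IntegratedGradients(model)
    attributions = ig.attribute(image, target=target, n_steps=50)
    return attributions.abs().sum(dim=1)  # (1, H, W) relevance heatmap
```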
Lung Nodule Segmentation for Explainable AI-Based Cancer Screening
We present a novel approach for segmentation and identification of lung nodules in CT scans, for the purpose of explainable-AI-assisted screening. Our segmentation approach combines the U-Net segmentation architecture with a graph-based connected component analysis for false-positive nodule identification, since CADe systems with a high true-nodule detection rate and few false positives are desired. We also develop a 3D nodule dataset that can be used to build explainable classification models for nodule malignancy and biomarker estimation. We train and evaluate the segmentation model based on the percentage of true nodules it identifies within the LIDC dataset, which contains 1018 CT scans with nodule annotations marked by four board-certified radiologists. We further present results of the segmentation and nodule filtering algorithm and a description of the generated 3D nodule dataset.
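A minimal sketch of the false-positive filtering idea, using a plain size-based connected component analysis (SciPy) rather than the authors' graph-based variant; the `min_voxels` threshold is hypothetical.

```python
import numpy as np
from scipy import ndimage

def filter_candidates(mask: np.ndarray, min_voxels: int = 27) -> np.ndarray:
    """Drop small connected components from a binary segmentation output.

    Components below the (hypothetical) `min_voxels` threshold are treated as
    false-positive detections; the survivors are kept as nodule candidates.
    """
    labeled, n = ndimage.label(mask)
    sizes = np.asarray(ndimage.sum(mask > 0, labeled, range(1, n + 1)))
    keep_ids = 1 + np.flatnonzero(sizes >= min_voxels)  # label ids are 1-based
    return np.isin(labeled, keep_ids)
```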
Automated Brain Lesion Detection and Segmentation Using Magnetic Resonance Images
A Proposal for the Use of Scientific Workflows to Define Pipelines for Content-Based Medical Image Retrieval in a Distributed Environment
A Neuro-Fuzzy Based System for the Classification of Cells as Cancerous or Non-Cancerous
Omotosho, Adebayo
Oluwatobi, Asani Emmanuel
Oluwaseun, Ogundokun Roseline
Chukwuka, Ananti Emmanuel
Adekanmi, Adegun
International Journal of Medical Research & Health Sciences2018Journal Article, cited 0 times
Website
Algorithm Development
lung cancer
neuro-fuzzy
Differential diagnosis of low- and high-grade gliomas using radiomics and deep learning fusion signatures based on multiple magnetic resonance imaging sequences
Cancer is hard to cure, and radiation therapy is one of the most popular treatment modalities. Even though the benefits of radiation therapy are undeniable, it can still have severe side effects, so delivering optimal, clinically supported radiation doses to patients is crucial. Intensity-modulated radiation therapy (IMRT) is an advanced radiation therapy technique and is the focus of this thesis. One important step when creating an IMRT treatment plan is radiation beam geometry generation: choosing the number of radiation beams and their directions. The primary goal of this thesis was to find good gantry angles for IMRT plans by combining computer graphics and machine learning. To aid the plan generation process, a new method called reverse beam was introduced in this work. It consists of two stages: angle discovery and angle selection. In the first stage, an algorithm based on the ray casting technique finds all potential beam angles. In the second stage, given a predefined beam number, the K-means clustering algorithm selects the gantry angles based on the clusters. The proposed method was tested on a non-small cell lung cancer dataset from The Cancer Imaging Archive. Using IMRT plans with seven equidistant fields and 45° collimator rotations, generated by the Ethos therapy system from Varian Medical Systems, as a baseline for comparison, the plans generated by the reverse beam method performed well, avoiding organs while targeting tumors.
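The angle-selection stage can be illustrated as follows. This is a sketch under the assumption that candidate angles from the ray-casting stage are given in degrees; angles are embedded on the unit circle so K-means respects the 360° wrap-around, a detail the abstract does not specify.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_gantry_angles(candidate_angles_deg: np.ndarray, n_beams: int = 7) -> np.ndarray:
    """Pick representative gantry angles from ray-casting candidates via K-means.

    Angles are mapped to (cos, sin) pairs so that 359° and 1° cluster together;
    cluster centres are mapped back to degrees. Only the selection stage of the
    reverse beam method is sketched here.
    """
    theta = np.deg2rad(candidate_angles_deg)
    xy = np.column_stack([np.cos(theta), np.sin(theta)])
    km = KMeans(n_clusters=n_beams, n_init=10, random_state=0).fit(xy)
    centres = np.rad2deg(np.arctan2(km.cluster_centers_[:, 1],
                                    km.cluster_centers_[:, 0]))
    return np.sort(centres % 360.0)
```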
A Reversible and Imperceptible Watermarking Approach for Ensuring the Integrity and Authenticity of Brain MR Images
Qasim, Asaad Flayyih
2019Thesis, cited 0 times
Thesis
Dissertation
BRAIN
Magnetic Resonance Imaging (MRI)
The digital medical workflow has many circumstances in which image data can be manipulated, both within secured Hospital Information Systems (HIS) and outside, as images are viewed, extracted and exchanged. This raises ethical and legal concerns regarding the modification of image details that are crucial in medical examinations. Digital watermarking is recognised as a robust technique for enhancing trust within medical imaging by detecting alterations applied to medical images. Despite its efficiency, digital watermarking has not been widely used in medical imaging, and existing watermarking approaches often lack validation of their appropriateness to medical domains. In particular, several research gaps have been identified: (i) essential requirements for the watermarking of medical images are not well defined; (ii) no standard approach can be found in the literature to evaluate the imperceptibility of watermarked images; and (iii) no study has been conducted before to test digital watermarking in a medical imaging workflow. This research investigates digital watermarking by designing, analysing and applying it to medical images to confirm that manipulations can be detected and tracked. In addressing these gaps, a number of original contributions are presented. A new reversible and imperceptible watermarking approach is presented to detect manipulations of brain Magnetic Resonance (MR) images, based on the Difference Expansion (DE) technique. Experimental results show that the proposed method, whilst fully reversible, can also realise a watermarked image with low degradation for reasonable and controllable embedding capacity. This is achieved by encoding the data into smooth regions (blocks with the smallest differences between their pixel values) inside the Region of Interest (ROI) of medical images, and by eliminating the large location map (the locations of pixels used for encoding) otherwise required at extraction to retrieve the encoded data. This compares favourably to outcomes reported for current state-of-the-art techniques in terms of the visual quality of watermarked images, and was also evaluated through a novel visual assessment based on relative Visual Grading Analysis (relative VGA) to define a perceptual threshold at which modifications become noticeable to radiographers. The proposed approach is then integrated into medical systems to verify its validity and applicability in a real application scenario where medical images are generated, exchanged and archived. This enhanced security measure, an imperceptible and reversible watermarking approach, therefore enables the detection of image manipulations and may establish increased trust in the digital medical imaging workflow.
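The core Difference Expansion step is compact enough to sketch. Below is a minimal Python version of Tian-style DE on a single pixel pair (integer arithmetic only), the primitive the thesis builds on; the smooth-region selection and location-map elimination are not shown.

```python
def embed_bit_de(a: int, b: int, bit: int):
    """Difference-expansion embedding on one pixel pair: double the difference,
    append the watermark bit, and keep the integer average, so the step is
    exactly invertible."""
    l = (a + b) // 2            # integer average (preserved)
    h2 = 2 * (a - b) + bit      # expanded difference carrying the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def extract_bit_de(a2: int, b2: int):
    """Recover the watermark bit and the original pixel pair."""
    l = (a2 + b2) // 2
    h2 = a2 - b2
    bit, h = h2 & 1, h2 >> 1    # floor-halving undoes the expansion
    return bit, l + (h + 1) // 2, l - h // 2

bit, a, b = extract_bit_de(*embed_bit_de(10, 7, 1))
assert (bit, a, b) == (1, 10, 7)
```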
Detection, quantification, malignancy prediction and growth forecasting of pulmonary nodules using deep learning in follow-up CT scans
Nowadays, lung cancer assessment is a complex and tedious task mainly performed by radiological visual inspection of suspicious pulmonary nodules, using computed tomography (CT) scan images taken of patients over time. Several computational tools relying on conventional artificial intelligence and computer vision algorithms have been proposed to support lung cancer detection and classification. These solutions mostly rely on the analysis of individual lung CT images and on hand-crafted image descriptors, which unfortunately makes them unable to cope with the complexity and variability of the problem. Recently, the advent of deep learning has led to a major breakthrough in the medical image domain, outperforming conventional approaches. Despite recent promising achievements in nodule detection, segmentation, and lung cancer classification, radiologists are still reluctant to use these solutions in day-to-day clinical practice. One of the main reasons is that current solutions do not support automatic analysis of the temporal evolution of lung tumours. The difficulty of collecting and annotating longitudinal lung CT cases to train models may partially explain the lack of deep learning studies that address this issue. In this dissertation, we investigate how to automatically provide lung cancer assessment through deep learning algorithms and computer vision pipelines, especially taking into consideration the temporal evolution of pulmonary nodules. To this end, our first goal was to obtain accurate methods for lung cancer assessment (diagnostic ground truth) based on individual lung CT images. Since these labels are expensive and difficult to collect (e.g. usually after biopsy), we trained different deep learning models, based on 3D convolutional neural networks (CNN), to predict nodule malignancy from radiologist visual inspection annotations (which are reasonable to obtain). These classifiers were built on ground truth consisting of the malignancy, position and size of the nodules to classify. Next, we evaluated different ways of synthesizing the knowledge embedded in the nodule malignancy network into an end-to-end pipeline that detects pulmonary nodules and predicts lung cancer at the patient level, given a lung CT image. The positive results confirmed the convenience of using CNNs to model nodule malignancy, according to radiologists, for the automatic prediction of lung cancer. Next, we focused on the analysis of lung CT image series. We first faced the problem of automatically re-identifying pulmonary nodules across different lung CT scans of the same patient. To do this, we present a novel method based on a Siamese neural network (SNN) that ranks similarity between nodules, bypassing the need for image registration. This change of paradigm avoids introducing potentially erroneous image deformations and provides computationally faster results. Different configurations of the SNN were examined, including the application of transfer learning, different loss functions, and the combination of feature maps from several network levels. This method obtained state-of-the-art performance for nodule matching, both in isolation and embedded in an end-to-end nodule growth detection pipeline. Afterwards, we moved to the core problem of supporting radiologists in the longitudinal management of lung cancer.
For this purpose, we created a novel end-to-end deep learning pipeline, composed of four stages that fully automate the process from nodule detection to cancer classification, through the detection of nodule growth. The pipeline integrates a novel approach for nodule growth detection, which relies on a recent hierarchical probabilistic segmentation network adapted to report uncertainty estimates, and a second novel method for lung cancer nodule classification, which integrates into a two-stream 3D-CNN the estimated nodule malignancy probabilities derived from a pre-trained nodule malignancy network. The pipeline was evaluated in a longitudinal cohort, and the reported outcomes (i.e. nodule detection, re-identification, growth quantification, and malignancy prediction) were comparable with state-of-the-art work focused on solving one or a few of its functionalities. Thereafter, we investigated how to help clinicians prescribe more accurate tumour treatments and surgical planning. We created a novel method to forecast nodule growth given a single image of the nodule. The method relies on a hierarchical, probabilistic and generative deep neural network able to produce multiple consistent future segmentations of the nodule at a given time. To do this, the network learns to model the multimodal posterior distribution of future lung tumour segmentations by using variational inference and injecting the posterior latent features. Finally, by applying Monte-Carlo sampling to the outputs of the trained network, we estimate the expected tumour growth mean and the uncertainty associated with the prediction. Although further evaluation in a larger cohort would be highly recommended, the proposed methods reported results accurate enough to adequately support the radiological workflow of pulmonary nodule follow-up. Beyond this specific application, the outlined innovations, such as the methods for integrating CNNs into computer vision pipelines, the re-identification of suspicious regions over time based on SNNs without the need to warp the inherent image structure, or the proposed deep generative and probabilistic network to model tumour growth considering ambiguous images and label uncertainty, could easily be applied to other types of cancer (e.g. pancreas), clinical diseases (e.g. Covid-19) or medical applications (e.g. therapy follow-up).
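To make the registration-free re-identification idea concrete, here is a toy PyTorch sketch of a Siamese embedding branch with cosine-similarity ranking. The architecture and names are illustrative placeholders, not the dissertation's actual network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoduleEmbedder(nn.Module):
    """Toy 3D CNN branch of a Siamese network (a sketch, not the thesis model)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(4),
        )
        self.fc = nn.Linear(16 * 4 ** 3, dim)

    def forward(self, x):
        # L2-normalized embeddings make dot products equal cosine similarity.
        return F.normalize(self.fc(self.conv(x).flatten(1)), dim=1)

def rank_matches(embedder, query_patch, candidate_patches):
    """Rank follow-up nodule patches by similarity to a baseline nodule,
    so re-identification needs no image registration."""
    q = embedder(query_patch)            # (1, dim)
    c = embedder(candidate_patches)      # (N, dim)
    sims = (c @ q.t()).squeeze(1)        # cosine similarities
    return torch.argsort(sims, descending=True)
```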
REPRESENTATION LEARNING FOR BREAST CANCER LESION DETECTION
Raimundo, João Nuno Centeno
2022Thesis, cited 0 times
Thesis
Duke-Breast-Cancer-MRI
Computer Aided Detection (CADe)
BREAST
Machine Learning
Convolutional Neural Network (CNN)
Magnetic Resonance Imaging (MRI)
Graphics Processing Units (GPU)
Breast Cancer (BC) is the type of cancer with the second-highest incidence in women and is responsible for the deaths of hundreds of thousands of women every year. However, when detected in the early stages of the disease, treatment methods have proven very effective in increasing life expectancy and, in many cases, patients fully recover. Several medical image modalities, such as MG (Mammography, X-rays), US (Ultrasound), CT (Computed Tomography), MRI (Magnetic Resonance Imaging), and Tomosynthesis, have been explored to support radiologists/physicians in clinical decision-making workflows for the detection and diagnosis of BC. MG is the most widely used imaging modality worldwide; however, recent research has demonstrated that breast MRI is more sensitive than mammography at finding pathological lesions and is not limited/affected by breast density issues. It is therefore currently a trend to introduce MRI-based breast assessment into clinical workflows (screening and diagnosis), but compared to MG the workload of radiologists/physicians increases: MRI assessment is a more time-consuming task, and its effectiveness is affected not only by the variety of morphological characteristics of each specific tumor phenotype and its origin, but also by human fatigue. Computer-Aided Detection (CADe) methods have been widely explored, primarily in mammography screening tasks, but detection remains an unsolved problem in breast MRI settings. This work aims to explore and validate BC detection models using Machine (Deep) Learning algorithms. As the main contribution, we have developed and validated an innovative method that improves the breast MRI preprocessing phase to select the patient's image slices and the bounding boxes representing pathological lesions. With this, it is possible to build a more robust training dataset to feed the deep learning models, reducing the computation time and the size of the dataset and, more importantly, identifying with high accuracy the specific regions (bounding boxes) of each patient image in which a possible pathological lesion (tumor) has been identified. In experimental settings using a fully annotated, publicly released dataset comprising a total of 922 MRI-based BC patient cases, the most accurate trained model achieved an accuracy rate of 97.83%; subsequently, applying a ten-fold cross-validation method, the trained models achieved a mean accuracy of 94.46% with a standard deviation of 2.43%.
Intelligent texture feature extraction and indexing for MRI image retrieval using curvelet and PCA with HTF
Rajakumar, K
Muttan, S
Deepa, G
Revathy, S
Priya, B Shanmuga
Advances in Natural and Applied Sciences2015Journal Article, cited 0 times
Website
Radiomics
Content based image retrieval (CBIR)
Magnetic Resonance Imaging (MRI)
BRAIN
BREAST
PROSTATE
PHANTOM
MATLAB
With the development of multimedia network technology and the rapid increase of image applications, Content-Based Image Retrieval (CBIR) has become the most active area in image retrieval research, and its fields of application are becoming more exhaustive and wide. Most traditional image retrieval systems use color, texture, shape and spatial relationships. At present, texture features play a very important role in computer vision and pattern recognition, especially in describing the content of images, yet most texture-based retrieval systems return results with insufficient retrieval accuracy. We address this problem by proposing an image retrieval system based on the curvelet transform and PCA with Haralick Texture Features (HTF). The combined approach of curvelet and PCA using HTF produced better results than other proposed techniques.
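As an illustration of the HTF stage, the sketch below computes a small Haralick-style descriptor from a grey-level co-occurrence matrix with scikit-image (recent releases name the functions `graycomatrix`/`graycoprops`; older ones spell them `greycomatrix`/`greycoprops`). The curvelet and PCA stages are omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_texture_vector(img: np.ndarray) -> np.ndarray:
    """Small Haralick-style texture descriptor for one 8-bit 2D image.

    Builds a co-occurrence matrix at distance 1 for two directions, then
    reads off four classical Haralick properties.
    """
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```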
Improving semi-supervised deep learning under distribution mismatch for medical image analysis applications
Deep learning methodologies have shown outstanding success in different image analysis applications. They rely on an abundance of labelled observations to build the model; however, it is frequently expensive to gather labelled observations, making the usage of deep learning models imprudent. Practical examples of this challenge abound in medical image analysis: labelling images for medical imaging problems requires expensive labelling effort, as experts (i.e., radiologists) are needed to produce reliable labels. Semi-supervised learning is an increasingly popular approach for dealing with small labelled datasets and increasing model test accuracy by leveraging unlabelled data. However, in real-world settings, the unlabelled dataset might follow a different distribution than the labelled dataset (e.g., the labelled dataset was sampled from a target clinic and the unlabelled dataset from a source clinic). A distribution mismatch between the labelled and unlabelled datasets has several possible causes: a prior probability shift, observations from classes unseen in the labelled dataset, and a covariate shift of the features. In this work, we assess the impact of this phenomenon on MixMatch, a state-of-the-art semi-supervised model. We evaluate the impact of both label and feature distribution mismatch on MixMatch in a real-world application, the classification of chest X-ray images for COVID-19 detection, and also test the performance gain of using MixMatch for malignant cancer detection in mammograms. For both study cases we built new datasets from a private clinic in Costa Rica. We propose different approaches to address the different causes of a distribution mismatch between the labelled and unlabelled datasets. First, regarding the prior probability shift, a simple model-oriented approach is proposed; according to our experiments, it yielded statistically significant accuracy gains of up to 14%. For the more challenging mismatch settings caused by a covariate shift in feature space and by unseen classes in the unlabelled dataset, we propose a data-oriented approach. As an assessment tool, we propose a set of dataset dissimilarity metrics designed to measure how much performance benefit a semi-supervised training regime can get from one unlabelled dataset over another. We also propose two techniques for scoring each unlabelled observation according to how much accuracy it might contribute when included in the unlabelled dataset for semi-supervised training; these scores can be used to discard harmful unlabelled observations. The novel methods use a generic feature extractor to build a feature space where the metrics and scores are computed. The dataset dissimilarity metrics yielded a linear correlation of up to 90% with the performance of the state-of-the-art MixMatch semi-supervised training algorithm, suggesting that such metrics can be used to assess the quality of an unlabelled dataset. As for the scoring methods, using them to discard harmful unlabelled data increased the performance of MixMatch by around 20% in our tests, in the context of medical image analysis applications.
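One simple instance of such a dissimilarity metric, the distance between dataset means in a generic feature-extractor space, can be written as below; this is an illustrative reduction, not the thesis's exact formulation.

```python
import numpy as np

def feature_dissimilarity(labelled: np.ndarray, unlabelled: np.ndarray) -> float:
    """Crude distribution-mismatch score between two datasets.

    Rows of each array are per-image feature vectors from the same generic
    feature extractor; the score is the Euclidean distance between the two
    dataset means.
    """
    return float(np.linalg.norm(labelled.mean(axis=0) - unlabelled.mean(axis=0)))
```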
Accelerating Machine Learning with Training Data Management
One of the biggest bottlenecks in developing machine learning applications today is the need for large hand-labeled training datasets. Even at the world's most sophisticated technology companies, and especially at other organizations across science, medicine, industry, and government, the time and monetary cost of labeling and managing large training datasets is often the blocking factor in using machine learning. In this thesis, we describe work on training data management systems that enable users to programmatically build and manage training datasets, rather than labeling and managing them by hand, and present algorithms and supporting theory for automatically modeling this noisier process of training set specification in order to improve the resulting training set quality. We then describe extensive empirical results and real-world deployments demonstrating that programmatically building, managing, and modeling training sets in this way can lead to radically faster, more flexible, and more accessible ways of developing machine learning applications. We start by describing data programming, a paradigm for labeling training datasets programmatically rather than by hand, and Snorkel, an open source training data management system built around data programming that has been used by major technology companies, academic labs, and government agencies to build machine learning applications in days or weeks rather than months or years. In Snorkel, rather than hand-labeling training data, users write programmatic operators called labeling functions, which label data using various heuristic or weak supervision strategies such as pattern matching, distant supervision, and other models. These labeling functions can have noisy, conflicting, and correlated outputs, which Snorkel models and combines into clean training labels without requiring any ground truth using theoretically consistent modeling approaches we develop. We then report on extensive empirical validations, user studies, and real-world applications of Snorkel in industrial, scientific, medical, and other use cases ranging from knowledge base construction from text data to medical monitoring over image and video data. Next, we will describe two other approaches for enabling users to programmatically build and manage training datasets, both currently integrated into the Snorkel open source framework: Snorkel MeTaL, an extension of data programming and Snorkel to the setting where users have multiple related classification tasks, in particular focusing on multi-task learning; and TANDA, a system for optimizing and managing strategies for data augmentation, a critical training dataset management technique wherein a labeled dataset is artificially expanded by transforming data points. Finally, we will conclude by outlining future research directions for further accelerating and democratizing machine learning workflows, such as higher-level programmatic interfaces and massively multi-task frameworks.
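Data programming is concrete enough to sketch. Below is a minimal, self-contained example using the open-source Snorkel API (v0.9-style imports); the report texts, heuristics and label names are all hypothetical.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, BENIGN, MALIGNANT = -1, 0, 1

@labeling_function()
def lf_spiculated(x):
    # Noisy pattern-matching heuristic over (hypothetical) report text.
    return MALIGNANT if "spiculated" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_benign_phrase(x):
    return BENIGN if "no suspicious findings" in x.text.lower() else ABSTAIN

df_train = pd.DataFrame({"text": [
    "Spiculated mass in the left upper lobe.",
    "No suspicious findings.",
    "Smooth, well-circumscribed nodule; no suspicious findings.",
]})
L_train = PandasLFApplier([lf_spiculated, lf_benign_phrase]).apply(df_train)

# Model the noisy, conflicting LF outputs and emit probabilistic labels.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=200, seed=0)
print(label_model.predict_proba(L_train))
```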
Segmentation of candidates for pulmonary nodules based on computed tomography
Rocha, Maura G. R. da
Saraiva, Willyams M.
Drumond, Patrícia M. L de L.
Carvalho Filho, Antonio O. de
de Sousa, Alcilene D.
2016Conference Paper, cited 0 times
LIDC-IDRI
Computed Tomography (CT)
Image Processing
Segmentation
Automated computer aided diagnosis
Automatic detection
Abstract: This work presents a methodology for automatic segmentation of candidate solitary pulmonary nodules using a cellular automaton. Early detection of solitary pulmonary nodules that may become cancerous is essential for patient survival. To assist experts in identifying these nodules, computer-aided systems are being developed that aim to automate the work of detection and classification. The segmentation stage plays a key role in automatic detection of lung nodules, as it separates the image into regions that share the same property or characteristic. The methodology in this article includes image acquisition, noise elimination, pulmonary parenchyma segmentation, and segmentation of candidate solitary pulmonary nodules. Tests were conducted using a set of images from the LIDC-IDRI base containing 739 nodules, and the results show a sensitivity of 95.66%.
High Level Mammographic Information Fusion For Real World Ontology Population
Salem, Yosra Ben
Idodi, Rihab
Ettabaa, Karim Saheb
Hamrouni, Kamel
Solaiman, Basel
Journal of Digital Information Management2017Journal Article, cited 1 times
Website
Ontology
BREAST
Imaging features
Mammography
Magnetic Resonance Imaging (MRI)
In this paper, we propose a novel approach for ontology instantiation from real data in the mammographic domain. Our study handles two modalities of mammographic imaging: mammography and breast MRI. First, we model the content of both image types in ontological representations, since ontologies allow the description of objects from a common perspective. To overcome the ambiguity in representing image entities, we take advantage of possibility theory applied to the ontological representation. Second, both locally generated ontologies are merged into a unique formal representation using two similarity measures: a syntactic measure and a possibilistic measure. The candidate instances are finally used to populate the global domain ontology in order to enrich the mammographic knowledge base. The approach was validated on a real-world domain and the results were evaluated in terms of precision and recall by an expert.
Towards Generation, Management, and Exploration of Combined Radiomics and Pathomics Datasets for Cancer Research
Saltz, Joel
Almeida, Jonas
Gao, Yi
Sharma, Ashish
Bremer, Erich
DiPrima, Tammy
Saltz, Mary
Kalpathy-Cramer, Jayashree
Kurc, Tahsin
AMIA Summits on Translational Science Proceedings2017Journal Article, cited 4 times
Website
Radiomics
Pathomics
Glioblastoma Multiforme (GBM)
TCGA-LUSC
TCGA-GBM
Non Small Cell Lung Cancer (NSCLC)
Cancer is a complex multifactorial disease state and the ability to anticipate and steer treatment results will require information synthesis across multiple scales from the host to the molecular level. Radiomics and Pathomics, where image features are extracted from routine diagnostic Radiology and Pathology studies, are also evolving as valuable diagnostic and prognostic indicators in cancer. This information explosion provides new opportunities for integrated, multi-scale investigation of cancer, but also mandates a need to build systematic and integrated approaches to manage, query and mine combined Radiomics and Pathomics data. In this paper, we describe a suite of tools and web-based applications towards building a comprehensive framework to support the generation, management and interrogation of large volumes of Radiomics and Pathomics feature sets and the investigation of correlations between image features, molecular data, and clinical outcome.
Classification of Lung CT Images using BRISK Features
Sambasivarao, B.
Prathiba, G.
International Journal of Engineering and Advanced Technology (IJEAT)2019Journal Article, cited 0 times
Website
Lung cancer is a major cause of death in humans, and early detection of cancer is required to increase the survival rate. Lung cancer that starts in the cells of the lung is mainly of two types: cancerous (malignant) and non-cancerous (benign). In this paper, we work on lung images obtained from the Society of Photographic Instrumentation Engineers (SPIE) database, which contains normal, benign and malignant images. We use 300 images from the database, of which 150 are benign and 150 malignant. Feature points of lung tumor images are extracted using Binary Robust Invariant Scalable Keypoints (BRISK), which achieves matching quality comparable to state-of-the-art algorithms at much lower computation time. BRISK divides the pairs of pixels surrounding a keypoint into two subsets: short-distance and long-distance pairs. The orientation of the feature point is calculated from the local intensity gradients of the long-distance pairs, and this orientation is used to rotate the short-distance pairs. These BRISK features are then used by a classifier to label lung tumors as either benign or malignant. Performance is evaluated by calculating the accuracy.
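Extracting BRISK keypoints and descriptors is a one-liner in OpenCV; the sketch below uses a placeholder file name and leaves the pooling and classification stages out.

```python
import cv2

# The file name is a placeholder for one SPIE database image.
img = cv2.imread("lung_slice.png", cv2.IMREAD_GRAYSCALE)
brisk = cv2.BRISK_create()
keypoints, descriptors = brisk.detectAndCompute(img, None)
# Each descriptor is a 64-byte binary vector; pooling them into a fixed-length
# representation (e.g. a bag of visual words) yields the classifier input.
print(len(keypoints), None if descriptors is None else descriptors.shape)
```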
Lung Cancer Detection on CT Scan Images Using Artificial Neural Network
These days, image processing techniques are widely utilized in several clinical areas for image improvement in early detection and treatment stages, where the time factor is critical for finding abnormalities in target images, particularly for various malignant tumors such as lung cancer. Image quality and precision are the core elements of this research: image quality assessment and improvement depend on the enhancement stage, where low-level pre-processing based on filters is applied. Following segmentation, an improved region of the object of interest is obtained and used as the foundation for feature extraction. Based on general features, a normality comparison is made. In this work, the main identified feature for accurate image analysis is the pixel rate, which helps to recognize the malignant nodules present in the CT scan images and to distinguish whether an image containing a nodule is benign or malignant.
Resolving the molecular complexity of brain tumors through machine learning approaches for precision medicine
Glioblastoma (GBM) tumors are highly aggressive malignant brain tumors and are resistant to conventional therapies. The Cancer Genome Atlas (TCGA) efforts distinguished histologically similar GBM tumors into unique molecular subtypes. The World Health Organization (WHO) has also since incorporated key molecular indicators such as IDH mutations and 1p/19q co-deletions in the clinical classification scheme. The National Neuroscience Institute (NNI) Brain Tumor Resource distinguishes itself as the exclusive collection of patient tumors with corresponding live cells capable of re-creating the full spectrum of the original patient tumor molecular heterogeneity. These cells are thus important to re-create “mouse-patient tumor replicas” that can be prospectively tested with novel compounds, yet have retrospective clinical history, transcriptomic data and tissue paraffin blocks for data mining. My thesis aims to establish a computational framework for the molecular subtyping of brain tumors using machine learning approaches. The applicability of the empirical Bayes model has been demonstrated in the integration of various transcriptomic databases. We utilize predictive algorithms such as template-based, centroid-based, connectivity map (CMAP) and recursive feature elimination combined with random forest approaches to stratify primary tumors and GBM cells. These subtyping approaches serve as key factors for the development of predictive models and eventually, improving precision medicine strategies. We validate the robustness and clinical relevance of our Brain Tumor Resource by evaluating two critical pathways for GBM maintenance. We identify a sialyltransferase enzyme (ST3Gal1) transcriptomic program contributing to tumorigenicity and tumor cell invasiveness. Further, we generate a STAT3 functionally-tuned signature and demonstrate its pivotal role in patient prognosis and chemoresistance. We show that IGF1-R mediates resistance in non-responders to STAT3 inhibitors. Taken together, our studies demonstrate the application of machine learning approaches in revealing molecular insights into brain tumors and subsequently, the translation of these integrative analyses into more effective targeted therapies in the clinics.
BRAIN CANCER DETECTION FROM MRI: A MACHINE LEARNING APPROACH (TENSORFLOW)
COMPUTER AIDED DETECTION OF LUNG CYSTS USING CONVOLUTIONAL NEURAL NETWORK (CNN)
Kishore Sebastian
S. Devi
Turkish Journal of Physiotherapy and Rehabilitation2021Journal Article, cited 0 times
Website
LIDC-IDRI
LUNG
Algorithm Development
Support Vector Machine (SVM)
Lung cancer is one of the most baleful diseases. The survival rate is low if the diagnosis and treatment of a lung tumour are delayed, but survival and the saving of lives can be enhanced with timely diagnosis and prompt treatment. The seriousness of the disease calls for a highly efficient system that can identify cancerous growth with high accuracy. Computed Tomography (CT) scans are used to obtain detailed pictures of different body parts; however, it is difficult to scrutinize the presence and extent of cancerous cells in the lungs using such scans, even for professionals. We therefore propose a new model based on the Mumford-Shah model with convolutional neural network (CNN) classification. The proposed model provides output with higher efficiency and accuracy in less time. The seven assessment metrics used in this system are classification accuracy, sensitivity, AUC, F-measure, specificity, precision, Brier score and MCC. The results obtained using SVM are compared, in terms of these seven metrics, with the results obtained using Decision Tree, KNN, CNN and Adaptive Boosting algorithms, and this clearly shows the higher accuracy of the proposed system over existing systems.
Deep Learning Architectures for Automated Image Segmentation
Image segmentation is widely used in a variety of computer vision tasks, such as object localization and recognition, boundary detection, and medical imaging. This thesis proposes deep learning architectures to improve automatic object localization and boundary delineation for salient object segmentation in natural images and for 2D medical image segmentation. First, we propose and evaluate a novel dilated dense encoder-decoder architecture with a custom dilated spatial pyramid pooling block to accurately localize and delineate boundaries for salient object segmentation. The dilation offers better spatial understanding and the dense connectivity preserves features learned at shallower levels of the network for better localization. Tested on three publicly available datasets, our architecture outperforms the state-of-the-art on one and is very competitive on the other two. Second, we propose and evaluate a custom 2D dilated dense UNet architecture for accurate lesion localization and segmentation in medical images. This architecture can be utilized as a standalone segmentation framework or as a rich feature-extracting backbone to aid other models in medical image segmentation. Our architecture outperforms all baseline models for accurate lesion localization and segmentation on a new dataset. We furthermore explore the main considerations for 3D medical image segmentation, among them preprocessing techniques and specialized loss functions.
Analysis and Application of clustering and visualization methods of computed tomography radiomic features to contribute to the characterization of patients with non-metastatic Non-small-cell lung cancer.
Serra, Maria Mercedes
2022Thesis, cited 0 times
Thesis
NSCLC-Radiomics
Radiomic feature
Visualization
Non-Small Cell Lung Cancer (NSCLC)
Background: The lung is the most common site for cancer and has the highest worldwide cancer-related mortality. The routine workup of patients with lung cancer usually includes at least one computed tomography (CT) study prior to the histopathological diagnosis. In the last decade, tools that extract quantitative measures from medical imaging, known as radiomic characteristics, have become increasingly relevant in this domain, including mathematically extracted measures of volume, shape and texture. Radiomics can quantify tumor phenotypic characteristics non-invasively and could potentially contribute objective elements to support the diagnosis, management and prognosis of these patients in routine clinical practice. Methodology: The LUNG1 dataset from the University of Maastricht, publicly available in The Cancer Imaging Archive, was obtained. Radiomic feature extraction was performed with the pyRadiomics package v3.0.1 using CT scans from 422 non-small cell lung cancer (NSCLC) patients, including manual segmentations of the gross tumor volume. A single data frame was constructed including clinical data, radiomic feature output, CT manufacturer and study acquisition date. Exploratory data analysis, curation, feature selection, modeling and visualization were performed using R. Model-based clustering was performed with the VarSelLCM library, both with and without wrapper feature selection. Results: During exploratory data analysis, lack of independence was found between histology and age and overall stage, and between survival curves and scanner manufacturer model. Features related to the manufacturer model were excluded from further analysis, and additional feature filtering was performed using the MRMR algorithm. In the clustering analysis, both models, with and without variable selection, showed significant association between the generated partitions and survival curves; the significance of this association was greater for the model with wrapper variable selection, which selected only radiomic variables. The original_shape_VoxelVolume feature showed the highest discriminative power for both models, along with log.sigma.5.0.mm.3D_glzm_LargeAreaLowGrayLevelEmphasis and wavelet_LHL_glzm_LargeAreaHighGrayLevelEmphasis. Clusters with significantly lower median survival were also related to higher clinical T stages, greater mean values of original_shape_VoxelVolume, log.sigma.5.0.mm.3D_glzm_LargeAreaLowGrayLevelEmphasis and wavelet_LHL_glzm_LargeAreaHighGrayLevelEmphasis, and lower mean wavelet.HHl_glcm_ClusterProminence. A weaker relationship was found between histology and the selected clusters. Conclusions: Potential sources of bias given by relationships between variables of interest and technical sources should be taken into account when analyzing this dataset. Aside from the original_shape_VoxelVolume feature, texture features computed on LoG- and wavelet-filtered images were the most significantly associated with clinical characteristics in the present analysis. Value: This work highlights the relevance of analyzing clinical data and technical sources when performing radiomic analysis. It also walks through the steps needed to extract, analyze and visualize a high-dimensional dataset of radiomic features, and describes associations between radiomic features and clinical variables, establishing the basis for future work.
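A sketch of the extraction step with pyRadiomics, restricted to the feature families highlighted in the abstract (shape, gray-level size-zone textures, LoG- and wavelet-filtered images); the file paths are placeholders for a LUNG1-style CT volume and GTV mask, and pyRadiomics' canonical class name is `glszm` even though the R-mangled names above read `glzm`.

```python
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("shape")
extractor.enableFeatureClassByName("glszm")  # size-zone texture features
extractor.enableImageTypes(Original={}, LoG={"sigma": [5.0]}, Wavelet={})

# Paths are placeholders for one CT volume and its GTV segmentation (e.g. NRRD).
features = extractor.execute("ct_volume.nrrd", "gtv_mask.nrrd")
print(features["original_shape_VoxelVolume"])
```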
Topological Data Analysis for Medical Imaging and RNA Data Analysis on Tree Spaces
Ideas from the algebraic topology of studying object data are used to introduce a framework for vectorizing objects with persistence landscapes. These methods are applied to analyze data from The Cancer Imaging Archive (TCIA), using a technique developed earlier for regular digital images. Our study aims at tumor differentiation from medical images, including brain images from CPTAC Glioblastoma patients. The results show that persistence landscapes capturing topological features distinguish, on average, between tumor and normal brains. Besides topological object data analysis, asymptotics of sample means on stratified spaces are also introduced and developed in this dissertation. A stratified space is a metric space that admits a filtration by closed subspaces, such that the difference between the d-th indexed subspace and the (d − 1)-th indexed subspace is empty or is a d-dimensional manifold, called the d-th stratum. Examples of stratified sample spaces which are not themselves manifolds include similarity shape spaces, affine shape spaces, projective shape spaces, phylogenetic tree spaces, and graphs. The behavior of Fréchet sample means differs around singular Fréchet mean points in some stratified spaces, such as open books. The asymptotic results for the Fréchet sample mean are extended from data on spiders, which are open books, to a more general class of stratified spaces that are not open books. Phylogenetic tree spaces are typically stratified spaces, encoding genetic information from nucleotide data such as DNA and RNA. Coronavirus disease 2019 (Covid-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The raw RNA sequences of SARS-CoV-2 are studied, and ideas from phylogenetic trees and statistical analysis on stratified spaces are applied to study distributions on phylogenetic tree spaces. A framework is also presented for computing means and applying the Central Limit Theorem (CLT) to provide statistical inference on data. We apply these methods to analyze RNA sequences of SARS-CoV-2 from multiple sources. By building sample trees and applying the ensuing statistical analysis, we can compare evolutionary results for SARS-CoV-2 versus other coronaviruses.
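The vectorization step the dissertation builds on can be sketched with the GUDHI library: a persistence diagram (an array of birth/death pairs, here made up rather than derived from TCIA data) is turned into a fixed-length persistence-landscape vector suitable for averaging and hypothesis testing.

```python
import numpy as np
from gudhi.representations import Landscape

# A made-up persistence diagram: rows are (birth, death) pairs.
diagram = np.array([[0.0, 1.0], [0.2, 0.9], [0.5, 1.4]])

landscape = Landscape(num_landscapes=3, resolution=100)
vec = landscape.fit_transform([diagram])
print(vec.shape)  # (1, 300): a fixed-length vector amenable to means and tests
```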
Combination of fuzzy c-means clustering and texture pattern matrix for brain MRI segmentation
Shijin Kumar, P.S.
Dharun, V.S.
Biomedical Research2017Journal Article, cited 0 times
RIDER NEURO MRI
MRI
BRAIN
Radiomic feature
The process of image segmentation can be defined as splitting an image into different regions, and it is an important step in medical image analysis. We introduce a hybrid tumor tracking and segmentation algorithm for Magnetic Resonance Images (MRI), based on the Fuzzy C-means clustering algorithm (FCM) and a Texture Pattern Matrix (TPM). The key idea is to use texture features along with intensity while performing segmentation: FCM obtains homogeneous regions of an image based on intensity and is capable of predicting tumor cells with high accuracy, while the TPM provides details about the spatial distribution of pixels, improving the performance parameters. Experimental results obtained by applying the proposed segmentation method to tumor tracking are presented, and various performance parameters are evaluated by comparing the outputs of the proposed method and the plain Fuzzy C-means algorithm. The hybrid segmentation method also reduces computational complexity and computation time.
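For reference, a bare-bones NumPy implementation of the FCM half of the method; the TPM texture features would enter simply as extra columns of the feature matrix X. This follows the standard FCM update equations, not the paper's exact code.

```python
import numpy as np

def fuzzy_cmeans(X: np.ndarray, c: int = 3, m: float = 2.0,
                 n_iter: int = 100, seed: int = 0):
    """Plain fuzzy c-means on feature vectors X of shape (n_samples, n_features).

    Alternates the two classical updates: cluster centers as membership-weighted
    means, then memberships from inverse relative distances.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-10
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=0)
    return centers, U
```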
A Novel Imaging-Genomic Approach to Predict Outcomes of Radiation Therapy
Singh, Apurva
Goyal, Sharad
Rao, Yuan James
Loew, Murray
2019Thesis, cited 0 times
Thesis
Radiogenomics
Radiomics
HNSCC
Head-Neck-PET-CT
TCGA-HNSC
TCGA-LUSC
TCGA-LUAD
TCGA-CESC
K Nearest Neighbor (KNN)
Support Vector Machine (SVM)
Introduction: Tumor regions are populated by various cellular species. Intra-tumor radiogenomic heterogeneity can be attributed to factors including variations in blood flow to different parts of the tumor and variations in gene mutation frequencies. This heterogeneity is further propagated by cancer cells, which adopt an "evolutionarily enlightened" growth approach. This growth, which focuses on developing an adaptive mechanism to progressively build a strong resistance to therapy, follows a unique pattern in each patient. This makes the development of a uniform treatment technique very challenging and makes the concept of "precision medicine", developed from information unique to each patient, crucial to effective cancer treatment. Our study aims to determine whether information present in the heterogeneity of tumor regions in pre-treatment PET scans and in gene mutation status can measure the efficacy of radiation therapy. We wish to develop a scheme that predicts the effectiveness of therapy at the pre-treatment stage, reduces unnecessary exposure of patients to radiation that would ultimately not be helpful, and thus helps in choosing alternative cancer therapies for the patients under consideration. Materials and methods: Our radiomics analysis was developed using PET scans of 20 patients from the HNSCC database in TCIA (The Cancer Imaging Archive). Clinical data were used to divide the patients into two categories based on tumor recurrence status. Radiation structures are overlain on the PET scans for tumor delineation. Texture features extracted from tumor regions are reduced using a correlation matrix-based technique and are classified by methods including weighted KNN, linear SVM and bagged trees. Slice-wise classification results are computed, treating each slice as a 2D image and the collection of slices as a 3D volume. Patient-wise results are computed by a voting scheme which assigns to each patient the class label possessed by more than half of its slices; the assigned labels are then compared to the actual labels to compute patient-wise classification accuracies. This workflow was tested on a group of 53 patients from the Head-Neck-PET-CT database. We further developed a radiogenomic workflow by combining gene expression features with tumor texture features for a group of 11 patients from a third database, TCGA-HNSC. We developed a geometric transform-based database augmentation method and used it to generate PET scans from images in the existing dataset. To evaluate our analysis on tumors at different sites and scans of different modalities, we included PET scans of 24 lung cancer patients (15 from the TCGA-LUSC (Lung Squamous Cell Carcinoma) and 9 from the TCGA-LUAD (Lung Adenocarcinoma) databases), using wavelet features along with the existing texture features to improve classification scores and non-rigid transform-based techniques for database augmentation. We also included MR scans of 54 cervical cancer patients (from the TCGA-CESC (Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma) database) and employed Fisher-based selection to reduce the high-dimensional feature space.
Results: The classification accuracy obtained by the 2D and 3D texture analysis is about 70% for slice-wise classification and 80% for patient-wise classification for the head and neck cancer patients (HNSCC and Head-Neck-PET-CT databases). The overall classification accuracies obtained from the transformed tumor slices are comparable to those from the original slices; geometric transformation is thus an effective method for database augmentation. Adding binary genomic features to the texture features (TCGA-HNSC patients) increases the classification accuracies (from 80% to 100% for 2D and from 60% to 100% for 3D patient-wise classification). For the lung cancer patients, accuracies increase from 58% to 84% (2D slice-wise) and from 58% to 70% (2D patient-wise) with the inclusion of wavelet features and with augmentation (non-rigid transformation) to balance the number of patients and slices in the recurrent and non-recurrent categories. For the cervical cancer patients, accuracies are about 64% for 2D slice-wise and patient-wise classification using correlation matrix-based feature selection, and increase to about 72% using Fisher-based selection. Conclusion: Our study introduces the novel approach of fusing the information present in The Cancer Imaging Archive (TCIA) and TCGA to develop a combined imaging phenotype and genotype expression for therapy personalization. Texture measures quantify tumor heterogeneity, which can be used to predict recurrence status. Gene expression information, when combined with texture measures, provides a unique radiogenomic feature which substantially improves therapy response prediction scores.
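The patient-wise voting rule described above is simple enough to state in code; the sketch below uses hypothetical patient IDs and 0/1 slice labels.

```python
import numpy as np

def patient_vote(slice_labels_by_patient: dict) -> dict:
    """Majority-vote scheme: a patient receives the class label possessed by
    more than half of their tumour slices. Keys and values are hypothetical
    (patient id -> list of 0/1 slice-level predictions)."""
    return {pid: int(np.mean(labels) > 0.5)
            for pid, labels in slice_labels_by_patient.items()}

print(patient_vote({"HN-01": [1, 1, 0], "HN-02": [0, 0, 1, 0]}))
# {'HN-01': 1, 'HN-02': 0}
```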
Brain Tumor Segmentation Using Deep Learning Technique
Evaluating anatomical variations in structures like the nasal passage and sinuses is challenging because their complexity can often make it difficult to differentiate normal and abnormal anatomy. By statistically modeling these variations and estimating individual patient anatomy using these models, quantitative estimates of similarity or dissimilarity between the patient and the sample population can be made. In order to do this, a spatial alignment, or registration, between patient anatomy and the statistical model must first be computed.; In this dissertation, a deformable most likely point paradigm is introduced that incorporates statistical variations into probabilistic feature-based registration algorithms. This paradigm is a variant of the most likely point paradigm, which incorporates feature uncertainty into the registration process. The deformable registration algorithms optimize the probability of feature alignment as well as the probability of model deformation, allowing statistical models of anatomy to estimate, for instance, structures seen in endoscopic video without the need for patient-specific computed tomography (CT) scans. The probabilistic framework also enables the algorithms to assess the quality of registrations produced, allowing users to know when an alignment can be trusted. This dissertation covers three algorithms built within this paradigm and evaluated in simulation and in-vivo experiments.
Simultaneous segmentation and correspondence improvement using statistical modes
Lung nodule detection using fuzzy clustering and support vector machines
Sivakumar, S
Chandrasekar, C
International Journal of Engineering and Technology2013Journal Article, cited 43 times
Website
Algorithm Development
Computer Aided Detection (CADe)
Computed Tomography (CT)
LUNG
Machine Learning
Lung cancer is the primary cause of tumor deaths for both sexes in most countries. A lung nodule, an abnormality which can lead to lung cancer, is detected by various medical imaging techniques like X-ray, Computed Tomography (CT), etc. Detection of lung nodules is a challenging task since the nodules are commonly attached to blood vessels. Many studies have shown that early diagnosis is the most efficient way to cure this disease. This paper aims to develop an efficient lung nodule detection scheme by performing nodule segmentation through fuzzy-based clustering models and classification using a machine learning technique called the Support Vector Machine (SVM). The methodology uses three different types of kernels, among which the RBF kernel gives the best classification performance.
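A minimal sketch of the kernel-comparison step, assuming scikit-learn and synthetic stand-in feature vectors; the paper's fuzzy-clustering segmentation stage is not reproduced here.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder feature matrix (e.g. texture/shape features per candidate nodule)
X = rng.normal(size=(200, 12))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.3).astype(int)  # synthetic labels

# Compare the three kernel types by cross-validated accuracy.
for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel, gamma="scale")
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{kernel:>6}: mean accuracy {scores.mean():.3f}")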
A STUDY ON IMAGE DENOISING FOR LUNG CT SCAN IMAGES
Sivakumar, S
Chandrasekar, C
International Journal of Emerging Technologies in Computational and Applied Sciences2014Journal Article, cited 1 times
Website
LIDC-IDRI
Image denoising
Computed Tomography (CT)
Medical imaging is the technique and process used to create images of the human body for clinical purposes and diagnosis. Medical imaging is often perceived to designate the set of techniques that non-invasively produce images of the internal aspect of the body. The x-ray computed tomographic (CT) scanner has made it possible to detect the presence of lesions of very low contrast. The noise in the reconstructed CT images is significantly reduced through the use of efficient x-ray detectors and electronic processing. The CT reconstruction technique almost completely eliminates the superposition of anatomic structures, leading to a reduction of "structural" noise. It is the random noise in a CT image that ultimately limits the ability of the radiologist to discriminate between two regions of different density. Because of its unpredictable nature, such noise cannot be completely eliminated from the image and will always lead to some uncertainty in the interpretation of the image. The noise present in the images may appear as additive or multiplicative components, and the main purpose of denoising is to remove these noisy components while preserving the important signal as much as possible. In this paper we analyze denoising filters such as the mean, median, midpoint and Wiener filters, together with three modified filter approaches, on lung CT scan images to remove the noise present in the images, and compare them using image quality parameters.
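The classical filters compared in this paper are available in SciPy; below is a small sketch on a synthetic noisy image, using PSNR as one illustrative quality parameter. The paper's modified filters and exact metrics are not reproduced.

import numpy as np
from scipy.ndimage import median_filter, uniform_filter
from scipy.signal import wiener

def psnr(reference, test):
    """Peak signal-to-noise ratio, one common image quality parameter."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(reference.max() ** 2 / mse)

# Synthetic stand-in for a lung CT slice with additive Gaussian noise.
rng = np.random.default_rng(1)
clean = np.clip(rng.normal(100, 20, (128, 128)), 0, 255)
noisy = clean + rng.normal(0, 10, clean.shape)

for name, filtered in [
    ("mean", uniform_filter(noisy, size=3)),
    ("median", median_filter(noisy, size=3)),
    ("wiener", wiener(noisy, mysize=3)),
]:
    print(f"{name:>6}: PSNR {psnr(clean, filtered):.2f} dB")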
A Novel Noise Removal Method for Lung CT SCAN Images Using Statistical Filtering Techniques
Sivakumar, S
Chandrasekar, C
International Journal of Algorithms Design and Analysis2015Journal Article, cited 0 times
LIDC-IDRI
Automatic detection and segmentation of malignant lesions from [18F]FDG PET/CT images using machine learning techniques: application in lymphomas
New studies have arisen trying to automatically perform some clinical tasks, such as the detection and segmentation of medical images. Manual and, sometimes, semi-automatic methods are very time-consuming and prone to inter-observer variability. This is especially significant when the lesions spread throughout the entire body, as happens with lymphomas. The main goal was to develop fully automatic deep learning-based models (U-Net and ResU-Net) for detecting and segmenting lymphoma lesions in [18F]FDG PET images. A secondary goal was to study the impact the training data has on the final performance, namely the impact of the patient's primary tumour type, the acquisition scanner, the number of images, and the use of transfer learning. The Dice similarity coefficient (DSC) and the lesion detection index (LDI) were used to study the models' performance. The training dataset contains 491 [18F]FDG PET images from the MICCAI AutoPET 2022 Challenge and 87 [18F]FDG PET images from the Champalimaud Clinical Centre (CCC). Primary tumours are lymphoma, melanoma, and lung cancer, among others. The test set contains 39 [18F]FDG PET images from lymphoma patients from the CCC. Regarding the results, using data from the lymphoma patients during training positively impacts the performance of both models on lymphoma lesion segmentation. The results also showed that when the training dataset increases in size and has images acquired on the same equipment as the images used in the test dataset, both DSC and LDI increase. The best model using a U-Net achieved a DSC of 0.593 and an LDI of 0.186. When using a ResU-Net, the best model had a DSC of 0.524 and an LDI of 0.200. In conclusion, this study confirms the adequacy of the U-Net and ResU-Net architectures for lesion segmentation in [18F]FDG PET/CT images of patients with lymphoma. Moreover, it pointed out some clues for future training strategies.
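The Dice similarity coefficient used above can be computed directly from binary masks; a minimal NumPy sketch follows (the lesion detection index is study-specific and omitted).

import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy 3D example with two overlapping boxes:
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros_like(a); b[1:3, 1:3, 1:4] = True
print(f"DSC = {dice(a, b):.3f}")  # 0.800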
Information Fusion of Magnetic Resonance Images and Mammographic Scans for Improved Diagnostic Management of Breast Cancer
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
mini-MIAS
InBreast
Computer Aided Detection (CADe)
Medical imaging is critical to non-invasive diagnosis and treatment of a wide spectrum of medical conditions. However, different modalities of medical imaging employ different contrast mechanisms and, consequently, provide different depictions of bodily anatomy. As a result, there is a frequent problem where the same pathology can be detected by one type of medical imaging while being missed by others. This problem brings forward the importance of the development of image processing tools for integrating the information provided by different imaging modalities via the process of information fusion. One particularly important example of clinical application of such tools is in the diagnostic management of breast cancer, which is a prevailing cause of cancer-related mortality in women. Currently, the diagnosis of breast cancer relies mainly on X-ray mammography and Magnetic Resonance Imaging (MRI), which are both important throughout different stages of detection, localization, and treatment of the disease. The sensitivity of mammography, however, is known to be limited in the case of relatively dense breasts, while contrast enhanced MRI tends to yield frequent 'false alarms' due to its high sensitivity. Given this situation, it is critical to find reliable ways of fusing the mammography and MRI scans in order to improve the sensitivity of the former while boosting the specificity of the latter. Unfortunately, fusing the above types of medical images is known to be a difficult computational problem. Indeed, while MRI scans are usually volumetric (i.e., 3-D), digital mammograms are always planar (2-D). Moreover, mammograms are invariably acquired under the force of compression paddles, thus making the breast anatomy undergo sizeable deformations. In the case of MRI, on the other hand, the breast is rarely constrained and imaged in a pendulous state. Finally, X-ray mammography and MRI exploit two completely different physical mechanisms, which produce distinct diagnostic contrasts which are related in a non-trivial way. Under such conditions, the success of information fusion depends on one's ability to establish spatial correspondences between mammograms and their related MRI volumes in a cross-modal cross-dimensional (CMCD) setting in the presence of spatial deformations (+SD). Solving the problem of information fusion in the CMCD+SD setting is a very challenging analytical/computational problem, still in need of efficient solutions. In the literature, there is a lack of a generic and consistent solution to the problem of fusing mammograms and breast MRIs and using their complementary information. Most of the existing MRI to mammogram registration techniques are based on a biomechanical approach which builds a specific model for each patient to simulate the effect of mammographic compression. The biomechanical model is not optimal as it ignores the common characteristics of breast deformation across different cases. Breast deformation is essentially the planarization of a 3-D volume between two paddles, which is common in all patients. Regardless of the size, shape, or internal configuration of the breast tissue, one can predict the major part of the deformation only by considering the geometry of the breast tissue. In contrast with complex standard methods relying on patient-specific biomechanical modeling, we developed a new and relatively simple approach to estimate the deformation and find the correspondences.
We consider the total deformation to consist of two components: a large-magnitude global deformation due to mammographic compression and a residual deformation of relatively smaller amplitude. We propose a much simpler way of predicting the global deformation, which compares favorably to finite element modeling (FEM) in terms of accuracy. The residual deformation, on the other hand, is recovered in a variational framework using an elastic transformation model. The proposed algorithm provides us with a computational pipeline that takes breast MRIs and mammograms as inputs and returns the spatial transformation which establishes the correspondences between them. This spatial transformation can be applied in different applications, e.g., producing 'MRI-enhanced' mammograms (which can improve the quality of surgical care) and correlating between different types of mammograms. We investigate the performance of our proposed pipeline on the application of enhancing mammograms by means of MRIs, and we show improvements over the state of the art.
Dynamic Co-occurrence of Local Anisotropic Gradient Orientations (DyCoLIAGe) Descriptors from Pre-treatment Perfusion DSC-MRI to Predict Overall Survival in Glioblastoma
A significant clinical challenge in glioblastoma is to risk-stratify patients for clinical trials, preferably using MRI scans. Radiomics involves mining of sub-visual features that could serve as surrogate markers of tumor heterogeneity from routine imaging. Previously our group had developed a gradient-based radiomic descriptor, Co-occurrence of Local Anisotropic Gradient Orientations (CoLIAGe), to capture tumor heterogeneity on structural MRI. I present an extension of CoLIAGe to perfusion MRI, termed dynamic CoLIAGe (DyCoLIAGe), and demonstrate its application in predicting overall survival in glioblastoma. Following manual segmentation, 52 CoLIAGe features were extracted from edema and enhancing tumor at different time phases during contrast administration of perfusion MRI. Each feature was separately plotted across the different time points, and a 3rd-order polynomial was fit to each feature curve. The corresponding polynomial coefficients were evaluated in terms of their prognostic performance. My results suggest that DyCoLIAGe may be prognostic of overall survival in glioblastoma.
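The temporal encoding step described above (fitting a 3rd-order polynomial to each feature's curve over the perfusion time points and keeping the coefficients) reduces to a per-feature polyfit; an illustrative sketch with hypothetical array shapes:

import numpy as np

def dynamic_coefficients(feature_curves, timepoints, degree=3):
    """Fit a polynomial of the given degree to each feature's time curve.

    feature_curves: array of shape (n_features, n_timepoints), e.g. one
    CoLIAGe statistic per row measured at each phase of contrast uptake.
    Returns an (n_features, degree + 1) array of polynomial coefficients.
    """
    return np.stack([np.polyfit(timepoints, curve, degree)
                     for curve in feature_curves])

# 52 features sampled at 8 perfusion time points (synthetic values):
rng = np.random.default_rng(42)
curves = rng.normal(size=(52, 8))
t = np.arange(8, dtype=float)
coeffs = dynamic_coefficients(curves, t)
print(coeffs.shape)  # (52, 4) -> candidate prognostic descriptors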
Self-supervised pre-training of an attention-based model for 3D medical image segmentation
Sund Aillet, Albert
2023Thesis, cited 0 times
Thesis
TCGA-OV
TCGA-UCEC
CPTAC-UCEC
CPTAC-PDA
KiTS
CPTAC-LSCC
HNSCC
LCTSC
Computer vision
Deep learning
Segmentation
Algorithm Development
Self-supervised
Abstract [en]; Accurate segmentation of anatomical structures is crucial for radiation therapy in cancer treatment. Deep learning methods have been demonstrated effective for segmentation of 3D medical images, establishing the current standard. However, they require large amounts of labelled data and suffer from reduced performance under domain shift. A possible solution to these challenges is self-supervised learning, which uses unlabelled data to learn representations, potentially reducing the need for labelled data and producing more robust segmentation models. This thesis investigates the impact of self-supervised pre-training on an attention-based model for 3D medical image segmentation, specifically focusing on single-organ semantic segmentation, exploring whether self-supervised pre-training enhances segmentation performance on CT scans with and without domain shift. The Swin UNETR is chosen as the deep learning model since it has been shown to be a successful attention-based architecture for semantic segmentation. During the pre-training stage, the contracting path is trained on three self-supervised pretext tasks using a large dataset of 5,465 unlabelled CT scans. The model is then fine-tuned using labelled datasets with 97, 142 and 288 segmentations of the stomach, the sternum and the pancreas. The results indicate that a substantial performance gain from self-supervised pre-training is not evident. Parameter freezing of the contracting path suggests that the representational power of the contracting path is not as critical for model performance as expected. Decreasing the amount of supervised training data shows that while pre-training improves model performance when the amount of training data is restricted, the improvements strongly decrease when more supervised training data is used.
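One experiment above freezes the pre-trained contracting path during fine-tuning; the following is a generic PyTorch sketch, assuming the encoder parameters share a common name prefix (the actual Swin UNETR attribute names may differ, and the toy model here is a stand-in).

import torch.nn as nn

class TinySegNet(nn.Module):
    """Stand-in for an encoder-decoder segmentation model."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv3d(8, 2, 1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def freeze_contracting_path(model, prefix="encoder"):
    """Disable gradients for the pre-trained encoder so that only the
    decoder is updated during supervised fine-tuning."""
    frozen = 0
    for name, param in model.named_parameters():
        if name.startswith(prefix):
            param.requires_grad = False
            frozen += param.numel()
    return frozen

model = TinySegNet()  # pre-trained weights would be loaded here
print(freeze_contracting_path(model), "parameters frozen")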
Classification of Benign and Malignant Tumors of Lung Using Bag of Features
Suzan, A Melody
Prathibha, G
Journal of Scientific & Engineering Research2017Journal Article, cited 0 times
Website
This paper presents a novel approach for feature extraction and classification of lung cancer, i.e., benign or malignant. Classification of lung cancer is based on a codebook generated using the bag-of-features algorithm. In this paper, 300 regions of interest (ROIs) from lung cancer images from The Cancer Imaging Archive (TCIA), sponsored by SPIE, are used. In this approach, the Scale-Invariant Feature Transform (SIFT) is used for feature extraction, and these coefficients are quantized using a bag of features into a predefined codebook. This codebook is given as input to a KNN classifier. The overall performance of the system in classifying lung tumors is evaluated using the Receiver Operating Characteristic (ROC) curve. The area under the curve (AUC) is Az = 0.95.
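A hedged sketch of the bag-of-features pipeline described here: local descriptors quantized against a k-means codebook, with codeword histograms fed to a KNN classifier. Random arrays stand in for SIFT descriptors; with OpenCV they would come from cv2.SIFT_create().detectAndCompute().

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def encode(descriptor_sets, codebook):
    """Map each image's local descriptors to a normalized codeword histogram."""
    histograms = []
    for descriptors in descriptor_sets:
        words = codebook.predict(descriptors)
        hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
        histograms.append(hist / max(hist.sum(), 1))
    return np.vstack(histograms)

# Synthetic stand-ins for 128-D SIFT descriptors from 20 training ROIs.
rng = np.random.default_rng(7)
train_desc = [rng.normal(size=(50, 128)) for _ in range(20)]
train_labels = rng.integers(0, 2, size=20)  # 0 = benign, 1 = malignant

codebook = KMeans(n_clusters=32, n_init=10, random_state=0)
codebook.fit(np.vstack(train_desc))

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(encode(train_desc, codebook), train_labels)
test_desc = [rng.normal(size=(50, 128))]
print(knn.predict(encode(test_desc, codebook)))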
Five Classifications of Mammography Images Based on Deep Cooperation Convolutional Neural Network
Tang, Chun-ming
Cui, Xiao-Mei
Yu, Xiang
Yang, Fan
American Scientific Research Journal for Engineering, Technology, and Sciences (ASRJETS)2019Journal Article, cited 0 times
CBIS-DDSM
Convolutional Neural Network (CNN)
Mammography is currently the preferred imaging method for breast cancer screening. Masses and calcification are the main positive signs in mammography. Due to the variable appearance of masses and calcification, a significant number of breast cancer cases are missed or misdiagnosed if diagnosis depends only on the radiologists' subjective judgement. At present, most studies are based on classical Convolutional Neural Networks (CNN), using transfer learning to classify the benign and malignant masses in mammography images. However, the CNN is designed for natural images, which are substantially different from medical images. Therefore, we propose a Deep Cooperation CNN (DCCNN) to classify mammography images of a data set into five categories: benign calcification, benign mass, malignant calcification, malignant mass and normal breast. The data set consists of 695 normal cases from DDSM, and 753 calcification cases and 891 mass cases from CBIS-DDSM. Finally, DCCNN achieves 91% accuracy and 0.98 AUC on the test set, and its performance is superior to the VGG16, GoogLeNet and InceptionV3 models. Therefore, DCCNN can aid radiologists in making more accurate judgments, greatly reducing the rates of missed diagnosis and misdiagnosis.
Automated Detection of Early Pulmonary Nodule in Computed Tomography Images
Tariq, Ahmed Usama
2019Thesis, cited 0 times
Thesis
LIDC-IDRI
LUNA16 Challenge
Classification
Classification of lung cancer in CT scans has two major steps: detect all suspicious lesions, also known as pulmonary nodules, and calculate the malignancy. Currently, many studies address nodule detection, but only some address the evaluation of nodule malignancy. Since the presence of a nodule does not unquestionably indicate the presence of lung cancer, and the morphology of a nodule has a complex association with malignancy, the diagnosis of lung cancer requires careful examination of each suspicious nodule and the integration of information from every nodule. We propose a 3D CNN CAD system to solve this problem. The system consists of two modules: a 3D CNN for nodule detection, which outputs all suspicious nodules for a subject, and a second module that trains an XGBoost classifier on selected data to acquire the probability of lung malignancy for the subject.
DESIGNING AND TESTING A MOLECULARLY TARGETED GLIOBLASTOMA THERANOSTIC: EXPERIMENTAL AND COMPUTATIONAL STUDIES
With an extremely poor patient prognosis, glioblastoma multiforme (GBM) is one of the most aggressive forms of brain tumor, with a median patient survival of less than 15 months. While new diagnostic and therapeutic approaches continue to emerge, the progress to reduce the mortality associated with the disease is insufficient. Thus, developing new methods having the potential to overcome problems that limit effective imaging and therapeutic efficacy in GBM is still a critical need. The overall goal of this research was therefore to develop targeted glioblastoma theranostics capable of imaging disease progression and simultaneously killing cancer cells. To achieve this, the state of the art of liposome-based cancer theranostics is reviewed in detail and potential glioblastoma biomarkers for theranostic delivery are identified by querying different databases and by reviewing the literature. Then tumor-targeting liposomes loaded with Gd3N@C80 and doxorubicin (DXR) are developed and tested in vitro. Finally, the stability of these formulations in different physiological salt solutions is evaluated using computational techniques including area per lipid, lipid interdigitation, carbon-deuterium order parameter, radial distribution of ions, as well as steered molecular dynamics simulations. In conclusion, the experimental and computational studies of this dissertation demonstrated that DXR and Gd3N@C80-OH loaded, lactoferrin & transferrin dual-tagged, PEGylated liposomes might be potential drug and imaging agent delivery systems for GBM treatment.
Lung Nodule Detection and Classification using Machine Learning Techniques
Tekade, Ruchita
ASIAN JOURNAL FOR CONVERGENCE IN TECHNOLOGY (AJCT)-UGC LISTED2018Journal Article, cited 0 times
Website
LIDC-IDRI
Machine learning
Computer Aided Detection (CADe)
As lung cancer is the second leading cause of death, early detection of lung cancer has become necessary in many computer-aided diagnosis (CAD) systems. Recently many CAD systems have been implemented to detect lung nodules using Computed Tomography (CT) scan images [2]. In this paper, some image pre-processing methods such as thresholding, clearing borders, and morphological operations (viz., erosion, closing, opening) are discussed to detect lung nodule regions, i.e., Regions of Interest (ROIs), in patient lung CT scan images. Also, machine learning techniques such as the Support Vector Machine (SVM) and the Convolutional Neural Network (CNN) are discussed for classifying nodule and non-nodule objects in patient lung CT scan images using the sets of lung nodule regions. In this study, the Lung Image Database Consortium image collection (LIDC-IDRI) dataset of patient CT scan images has been used to detect and classify lung nodules. The lung nodule classification accuracy of the SVM is 90% and that of the CNN is 91.66%.
Improving radiomic model reliability and generalizability using perturbations in head and neck carcinoma
Teng, Xinzhi
2023Thesis, cited 0 times
Dissertation
RIDER Lung CT
Head-Neck-PET-CT
OPC-Radiomics
ACRIN 6698
I-SPY 2
Medical Radiology
Algorithm Development
Classification
Risk assessment
Background: Radiomic models for clinical applications need to be reliable. However, model reliability is conventionally established in prospective settings, requiring the proposal and special design of a separate study. As prospective studies are rare, the reliability of most proposed models is unknown. Facilitating the assessment of radiomic model reliability during development would help to identify the most promising models for prospective studies.; Purpose: This thesis aims to propose a framework to build reliable radiomic models using a perturbation method. The aim was separated into three studies: 1) develop a perturbation-based assessment method to quantitatively evaluate the reliability of radiomic models, 2) evaluate the perturbation-based method against the test-retest method for developing reliable radiomic models, and 3) evaluate radiomic model reliability and generalizability after removing low-reliability radiomic features.; Methods and Materials: Four publicly available head-and-neck carcinoma (HNC) datasets and one breast cancer dataset, totalling 1,641 patients, were retrospectively recruited from The Cancer Imaging Archive (TCIA). The computed tomography (CT) images, their gross tumor volume (GTV) segmentations, and distant metastasis (DM) and local/regional recurrence (LR) status after definitive treatment were collected from the HNC datasets. Multi-parametric diffusion-weighted images (DWI), test-retest DWI scans, and pathological complete response (pCR) status were collected from the breast cancer dataset. To develop the reliability assessment method, one dataset with the DM outcome as the clinical task was used to build a survival model. Sixty perturbed datasets were simulated by randomly translating, rotating, and adding noise to the original images and randomizing the GTV segmentations. The perturbed features were subsequently extracted from the perturbed datasets. The radiomic survival model was developed for DM risk prediction, and its reliability was quantified with the intraclass correlation coefficient (ICC) to evaluate the model prediction consistency on perturbed features. In addition, a sensitivity analysis was performed to verify the relationship between input feature reliability and output prediction reliability. Then, a new radiomic model to predict pCR with DWI-derived apparent diffusion coefficient (ADC) maps was developed, and its reliability was quantified with the ICC to measure the model prediction consistency on perturbed image features and test-retest image features, respectively. Following the establishment of the perturbation-based model reliability assessment (ICC), the model reliability and generalizability after removing low-reliability features (ICC thresholds of 0, 0.75 and 0.95) were evaluated under repeated stratified cross-validation with the HNC datasets. The model reliability is evaluated with the perturbation-based ICC and the model generalizability is evaluated by the average train-test area under the receiver operating characteristic curve (AUC) difference in cross-validation. The experiment was conducted on all four HNC datasets, two clinical outcomes and five classification algorithms.; Results: In the development of the model reliability assessment method, the reliability index ICC was used to quantify the model output consistency on features extracted from the perturbed images and segmentations. In a six-feature radiomic model, the concordance indexes (C-indexes) of the survival model were 0.742 and 0.769 for the training and testing cohorts, respectively.
For the perturbed training and testing datasets, the respective mean C-indexes were 0.686 and 0.678. This yielded ICC values of 0.565 (0.518–0.615) and 0.596 (0.527–0.670) for the perturbed training and testing datasets, respectively. When only highly reliable features were used for radiomic modeling, the model's ICC increased to 0.782 (0.759–0.815) and 0.825 (0.782–0.867) and its C-index decreased to 0.712 and 0.642 for the training and testing data, respectively. This shows our assessment method is sensitive to the reliability of the input. In the comparison between the perturbation-based and test-retest methods, the perturbation method achieved a radiomic model with comparable reliability (ICC: 0.90 vs. 0.91, P-value > 0.05) and classification performance (AUC: 0.76 vs. 0.77, P-value > 0.05) to the test-retest method. For the evaluation of model reliability and generalizability after removing low-reliability features, the average model reliability ICC showed significant improvements from 0.65 to 0.78 (ICC threshold 0 vs. 0.75, P-value < 0.01) and 0.91 (ICC threshold 0 vs. 0.95, P-value < 0.01) under the increasing reliability thresholds. Additionally, model generalizability increased substantially, as the mean train-test AUC difference was reduced from 0.21 to 0.18 (P-value < 0.01) and 0.12 (P-value < 0.01), while the testing AUCs were maintained at the same level (P-value > 0.05).; Conclusions: We proposed a perturbation-based framework to evaluate radiomic model reliability and to develop more reliable and generalizable radiomic models. The perturbation-based method is a practical alternative to test-retest scans in assessing radiomic model reliability. Our results also suggest that pre-screening low-reliability radiomic features prior to modeling is a necessary step to improve final model reliability and generalizability to unseen datasets.
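The feature-screening step can be illustrated with a one-way random-effects ICC over repeated (perturbed) feature extractions; the sketch below assumes the ICC(1,1) form, while the thesis may use a different ICC variant.

import numpy as np

def icc_one_way(measurements):
    """One-way random-effects ICC(1,1) for repeated feature measurements.

    measurements: array of shape (n_subjects, n_repeats), e.g. one radiomic
    feature extracted from each patient under every perturbed dataset.
    """
    n, k = measurements.shape
    grand = measurements.mean()
    subject_means = measurements.mean(axis=1)
    msb = k * ((subject_means - grand) ** 2).sum() / (n - 1)  # between-subject
    msw = ((measurements - subject_means[:, None]) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

def reliable_feature_mask(feature_tensor, threshold=0.75):
    """feature_tensor: (n_features, n_subjects, n_repeats) -> boolean mask."""
    return np.array([icc_one_way(f) >= threshold for f in feature_tensor])

# 100 features, 50 patients, 60 perturbed extractions (synthetic):
rng = np.random.default_rng(3)
feats = rng.normal(size=(100, 50, 60)) + rng.normal(size=(100, 50, 1)) * 2
print(reliable_feature_mask(feats).sum(), "features pass the ICC cutoff")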
Extraction of Tumor in Brain MRI using Support Vector Machine and Performance Evaluation
Tunga, Prakash
Visvesvaraya Technological University Journal of Engineering Sciences and Management2019Journal Article, cited 0 times
Website
BraTS
Segmentation
Support Vector Machine (SVM)
BRAIN
In this article, we mainly discuss the extraction of tumors in brain MRI (Magnetic Resonance Imaging) images based on the Support Vector Machine (SVM) technique. The work performs computer-assisted demarcation of tumors in brain MRI and aims to become part of a routine which would otherwise be performed manually by specialists. Here we focus on one of the common types of brain tumors, the gliomas. These tumors have proved to be life threatening in advanced stages. MRI, being a non-invasive procedure, can provide very good soft tissue contrast and so forms a suitable imaging method for processing which leads to brain tumor detection and description. First, we preprocess the given MRI image using the anisotropic diffusion method, and then the SVM technique is applied, which classifies the image into tumorous and non-tumorous regions. Next, we extract the tumor, referred to as the Region of Interest (ROI), and describe it by calculating its size and position in the image. The remaining part, i.e., the brain region with no tumor presence, is referred to as the Non-Region of Interest (NROI). Separation of the ROI and NROI parts aids further processing such as ROI-based compression. We also calculate parameters that reflect the performance of the approach.
Cancer Risk Assessment Using Quantitative Imaging Features from Solid Tumors and Surrounding Structures
Medical imaging is a powerful tool for clinical practice allowing in-vivo insight into a patient's disease state. Many modalities exist, allowing for the collection of diverse information about the underlying tissue structure and/or function. Traditionally, medical professionals use visual assessment of scans to search for disease, assess relevant disease predictors and propose clinical intervention steps. However, the imaging data contain potentially useful information beyond visual assessment by a trained professional. To better use the full depth of information contained in the image sets, quantitative imaging characteristics (QICs) can be extracted using mathematical and statistical operations on regions or volumes of interest. The process of using QICs is a pipeline typically involving image acquisition, segmentation, feature extraction, set qualification and analysis of informatics. These descriptors can be integrated into classification methods focused on differentiating between disease states. Lung cancer, a leading cause of death worldwide, is a clear application for advanced in-vivo imaging-based classification methods.; ; We hypothesize that QICs extracted from spatially-linked and size-standardized regions of surrounding lung tissue can improve risk assessment quality over features extracted from only the lung tumor, or nodule, regions. We require a robust and flexible pipeline for the extraction and selection of disease QICs in computed tomography (CT). This includes creating an optimized method for feature extraction, reduction, selection, and predictive analysis which could be applied to a multitude of disease imaging problems. This thesis expanded a developmental pipeline for machine learning using a large multicenter controlled CT dataset of lung nodules to extract CT QICs from the nodule, surrounding parenchyma, and greater lung volume, and to explore CT feature interconnectivity. Furthermore, it created a validated pipeline that is more computationally and time efficient, with stable performance. The modularity of the optimized pipeline facilitates broader application of the tool beyond CT-identified pulmonary nodules.; ; We have developed a flexible and robust pipeline for the extraction and selection of Quantitative Imaging Characteristics for Risk Assessment from the Tumor and its Environment (QIC-RATE). The results presented in this thesis support our hypothesis, showing that classification of lung and breast tumors is improved through inclusion of peritumoral signal. Optimal performance in the lung application was achieved with the QIC-RATE tool incorporating 75% of the nodule diameter equivalent in perinodular parenchyma, with a development performance of 100% accuracy. The stability of performance was reflected in the maintained high accuracy (98%) in the independent validation dataset of 100 CTs from a separate institution. In the breast QIC-RATE application, optimal performance was achieved using 25% of the tumor diameter in breast tissue, with 90% accuracy in development and 82% in validation. We address the need for more complex assessments of medically imaged tumors through the QIC-RATE pipeline: a modular, scalable, transferrable pipeline for extracting, reducing and selecting QICs, and training a classification tool based on them. Altogether, this research has resulted in a risk assessment methodology that is validated, stable, high performing, adaptable, and transparent.
Implementación de algoritmos de reconstrucción tomográfica mediante programación paralela (CUDA)
“Medical image reconstruction is key to a wide range of technologies. For classical computed tomography systems, the number of signals measured per second has increased exponentially over the last four decades, while the computational complexity of most of the algorithms used has not changed significantly. Providing optimal image quality with the lowest possible radiation dose to the patient is of great interest and a major challenge. One solution, and an active field of research addressing this problem, is iterative methods for medical image reconstruction. Their complexity is many times that of the classical analytical methods used in almost all commercially available systems. This thesis investigates the use of graphics cards in the field of iterative medical image reconstruction. The different approaches to image reconstruction algorithms accelerated by the GPU (Graphics Processing Unit) are presented and evaluated.”
Brain Tumor Classification using Support Vector Machine
Vani, N
Sowmya, A
Jayamma, N
International Research Journal of Engineering and Technology2017Journal Article, cited 0 times
Website
BRAIN
Classification
MATLAB
Computer Aided Detection (CADe)
image processing
Radiomics
Support Vector Machine (SVM)
Classification of benign and malignant lung nodules using image processing techniques
Vas, Moffy Crispin
Dessai, Amita
International Research Journal of Engineering and Technology2017Journal Article, cited 0 times
Website
LUNG
Computed Tomography (CT)
Segmentation
Haralick feature
Artificial Neural Network (ANN)
Cancer is the second leading cause of death worldwide after heart disease, and among all cancer types, lung cancer is the leading cause of cancer deaths. Hence, lung cancer is of global concern, and this work deals with the detection of malignant lung nodules and tries to distinguish them from benign nodules by processing computed tomography (CT) images with the help of Haar wavelet decomposition and Haralick feature extraction, followed by artificial neural networks (ANN).
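An illustrative sketch of the two feature-extraction stages named above, Haar wavelet decomposition and Haralick-style GLCM texture features, assuming PyWavelets and scikit-image are available; the paper's exact feature set and ANN classifier are not reproduced.

import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(5)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in nodule ROI

# One level of Haar wavelet decomposition (approximation + detail bands).
approx, (horiz, vert, diag) = pywt.dwt2(roi.astype(float), "haar")
print(approx.shape, horiz.shape)  # (32, 32) each

# Haralick-style texture features from a gray-level co-occurrence matrix.
glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)  # feature vector that would feed the ANN classifier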
Una metodología para el análisis y selección de características extraídas mediante Deep Learning de imágenes de Tomografía Computerizada de pulmón.
Vega Gonzalo, María
2018Thesis, cited 0 times
Thesis
Dissertation
LUNG
Deep Learning
Computed Tomography (CT)
Segmentation
Radiomics
Classification
Algorithm Development
This project is part of the European research project IASIS, in which the Medical Data Analysis Laboratory (MEDAL) of the Centro de Tecnología Biomédica of the UPM participates. The IASIS project aims to structure medical information related to lung cancer and Alzheimer's disease, with the goal of analysing it and, based on the knowledge extracted, improving the diagnosis and treatment of these diseases. The objective of this TFG is to establish a methodology for reducing the dimensionality of features extracted by Deep Learning from Computerized Axial Tomography images. The motivation for reducing the number of variables is that the extracted features are intended to be used to classify the nodules present in the images with a classifier; however, the high dimensionality of the data can impair classification accuracy, in addition to incurring a high computational cost.
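The kind of dimensionality reduction discussed here can be sketched with scikit-learn's PCA on stand-in deep features; the shapes and the 95% variance threshold are illustrative, not necessarily the thesis's chosen method.

import numpy as np
from sklearn.decomposition import PCA

# Stand-in for deep features extracted from CT nodule patches
# (e.g. 1,024-D activations from a pre-trained CNN layer).
rng = np.random.default_rng(11)
deep_features = rng.normal(size=(500, 1024))

# Keep enough components to explain 95% of the variance.
pca = PCA(n_components=0.95, svd_solver="full")
reduced = pca.fit_transform(deep_features)
print(deep_features.shape, "->", reduced.shape)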
Using Radiomics to improve the 2-year survival of Non-Small Cell Lung Cancer Patients
This thesis both exploits and further contributes enhancements to the utilization of radiomics (extracted quantitative features of radiological imaging data) for improving cancer survival prediction. Several machine learning methods were compared in this analysis, including but not limited to support vector machines, convolutional neural networks and logistic regression. A technique for analysing prognostic image characteristics for non-small cell lung cancer based on the edge regions, as well as tissues immediately surrounding visible tumours, is developed. Regions external to and neighbouring a tumour were shown to also have prognostic value. By using the additional texture features, an increase in accuracy of 3% is shown over previous approaches for predicting two-year survival, determined by comparing the volume including an outside rind of tissue around the tumour with the volume without the rind. This indicates that while the centre of the tumour is currently the main clinical target for radiotherapy treatment, the tissue immediately around the tumour is also clinically important for survival analysis. Further, it was found that prediction improved up to some 6 pixels outside the tumour volume, a distance of approximately 5 mm outside the original gross tumour volume (GTV), when applying a support vector machine, which achieved the highest accuracy of 71.18%. This research indicates the periphery of the tumour is highly predictive of survival. To our knowledge this is the first study that has concentrically expanded and analysed the NSCLC rind for radiomic analysis.
Classificação Multirrótulo na Anotação Automática de Nódulo Pulmonar Solitário
Villani, Leonardo
Prati, Ronaldo Cristiano
2012Conference Proceedings, cited 0 times
Multi-Modality Automatic Lung Tumor Segmentation Method Using Deep Learning and Radiomics
Wang, Siqiu
Radiation Oncology2022Thesis, cited 0 times
Website
Dissertation
NSCLC Radiogenomics
Thesis
Inter-observer variability
Radiotherapy
Segmentation
Delineation of the tumor volume is the initial and fundamental step in the radiotherapy planning process. The current clinical practice of manual delineation is time-consuming and suffers from observer variability. This work seeks to develop an effective automatic framework to produce clinically usable lung tumor segmentations. First, to facilitate the development and validation of our methodology, an expansive database of planning CTs, diagnostic PETs, and manual tumor segmentations was curated, and an image registration and preprocessing pipeline was established. Then a deep learning neural network was constructed and optimized to utilize dual-modality PET and CT images for lung tumor segmentation. The feasibility of incorporating radiomics and other mechanisms, such as a tumor volume-based stratification scheme for training/validation/testing, was investigated to improve the segmentation performance. The proposed methodology was evaluated both quantitatively with similarity metrics and clinically with physician reviews. In addition, external validation with an independent database was also conducted. Our work addressed some of the major limitations that restricted the clinical applicability of existing approaches and produced automatic segmentations that were consistent with the manually contoured ground truth and were highly clinically acceptable according to both the quantitative and clinical evaluations. Both novel approaches, implementing a tumor volume-based training/validation/testing stratification strategy and incorporating voxel-wise radiomics feature images, were shown to improve the segmentation performance. The results showed that the proposed method was effective and robust, producing automatic lung tumor segmentations that could potentially improve both the quality and consistency of manual tumor delineation.
A Gaussian Mixture Model based Level Set Method for Volume Segmentation in Medical Images
This thesis proposes a probabilistic level set method to be used in segmentation of tumors with heterogeneous intensities. It models the intensities of the tumor and surrounding tissue using Gaussian mixture models. Through a contour-based initialization procedure, samples are gathered to be used in expectation maximization of the mixture model parameters. The proposed method is compared against a threshold-based segmentation method using MRI images retrieved from The Cancer Imaging Archive. The cases are manually segmented, and an automated testing procedure is used to find optimal parameters for the proposed method, which is then tested against the threshold-based method. Segmentation times, Dice coefficients, and volume errors are compared. The evaluation reveals that the proposed method has a comparable mean segmentation time to the threshold-based method, and performs faster in cases where the volume error does not exceed 40%. The mean Dice coefficient and volume error are also improved while achieving lower deviation.
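A sketch of the intensity-modeling step, assuming scikit-learn: two Gaussian mixtures fitted by EM to samples from the initialization, whose log-likelihood ratio could serve as the region term of a level set. The level-set evolution itself is omitted, and all numbers are synthetic.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Intensity samples gathered from the contour-based initialization:
tumor_samples = np.concatenate([rng.normal(180, 15, 300),
                                rng.normal(120, 10, 200)])[:, None]
background_samples = rng.normal(60, 20, 500)[:, None]

gmm_in = GaussianMixture(n_components=2, random_state=0).fit(tumor_samples)
gmm_out = GaussianMixture(n_components=2, random_state=0).fit(background_samples)

# Region term: log-likelihood ratio per intensity; positive values
# favour the tumor phase of the level set.
intensities = np.linspace(0, 255, 5)[:, None]
ratio = gmm_in.score_samples(intensities) - gmm_out.score_samples(intensities)
print(dict(zip(intensities.ravel(), ratio.round(2))))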
Proton radiotherapy spot order optimization to maximize the FLASH effect
Widenfalk, Oscar
2023Thesis, cited 0 times
Thesis
NSCLC-Radiomics-Interobserver1
Radiotherapy
Optimization
PROSTATE
BRAIN
LUNG
Proton Radiation Therapy
Electron Radiation Therapy
Algorithm Development
Cancer is a group of deadly diseases, for which one treatment method is radiotherapy. Recent studies indicate advantages of delivering so-called FLASH treatments using ultra-high dose rates (> 40 Gy/s), with a normal-tissue-sparing FLASH effect. Delivering a high dose in a short time imposes requirements on both the treatment machine and the treatment plan. To see as much of the FLASH effect as possible, the delivery pattern should be optimized, which is the focus of this thesis. The optimization method was applied to 17 lung plans, and the results show that a local-search-based optimization achieves overall good results, achieving a mean FLASH coverage of 31.7% outside of the CTV after a mean optimization time of 8.75 s. This is faster than published results using a genetic algorithm.
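A toy sketch of a pairwise-swap local search over a spot ordering; the objective here is a placeholder distance score, not the thesis's dose-rate-based FLASH coverage objective.

import random

def local_search(order, objective, max_iters=1000, seed=0):
    """Pairwise-swap local search: keep a swap whenever it improves the
    objective, as a stand-in for the spot-order optimization loop."""
    rng = random.Random(seed)
    best = list(order)
    best_score = objective(best)
    for _ in range(max_iters):
        i, j = rng.sample(range(len(best)), 2)
        best[i], best[j] = best[j], best[i]
        score = objective(best)
        if score > best_score:
            best_score = score
        else:
            best[i], best[j] = best[j], best[i]  # revert the swap
    return best, best_score

# Toy objective: prefer consecutive spots that are close together
# (a real FLASH objective would score voxel dose rates instead).
spots = [(0, 0), (3, 1), (1, 0), (2, 2), (0, 1)]
def toy_objective(idx):
    path = [spots[i] for i in idx]
    return -sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
                for a, b in zip(path, path[1:]))

order, score = local_search(list(range(len(spots))), toy_objective)
print(order, score)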
Supervised Machine Learning Approach Utilizing Artificial Neural Networks for Automated Prostate Zone Segmentation in Abdominal MR images
Development of a method for automating effective patient diameter estimation for digital radiography
Worrall, Mark
2019Thesis, cited 0 times
Thesis
Dissertation
TCGA-SARC
RIDER Lung CT
Algorithm Development
National patient dose audit of paediatric radiographic examinations is complicated by a lack of data containing a direct measurement of the patient diameter in the examination orientation, or height and weight. This has meant that National Diagnostic Reference Levels (NDRLs) for paediatric radiographic examinations have not been updated in the UK since 2000, despite significant changes in imaging technology over that period.; This work is the first step in the development of a computational model intended to automate an estimate of paediatric patient diameter. Whilst the application is intended for a paediatric population, its development within this thesis uses an adult cohort. The computational model uses the radiographic image, the examination exposure factors and a priori information relating to the x-ray system and the digital detector.; The computational model uses the Beer-Lambert law. A hypothesis was developed that this would work for clinical exposures despite its single energy photon basis. Values of initial air kerma are estimated from the examination exposure factors and measurements made on the x-ray system. Values of kerma at the image receptor are estimated from a measurement of pixel value made at the centre of the radiograph and the measured calibration between pixel value and kerma for the image receptor. Values of effective linear attenuation coefficient are estimated from Monte Carlo simulations. Monte Carlo simulations were created for two x-ray systems. The simulations were optimised and thoroughly validated to ensure that any result obtained is accurate. The validation process compared simulation results with measurements made on the x-ray units themselves, producing values for effective linear attenuation coefficient that were demonstrated to be accurate.; Estimates of attenuator thickness can be made using the estimated values for each variable.; The computational model was demonstrated to accurately estimate the thickness of single composition attenuators across a range of thicknesses and exposure factors on three different x-ray systems. The computational model was used in a clinical validation study of 20 adult patients undergoing AP abdominal x-ray examinations. For 19 of these examinations, it estimated the true patient thickness to within ±9%. This work presents a feasible computational model that could be used to automate the estimation of paediatric patient thickness during radiographic examinations, allowing for automation of paediatric radiographic dose audit.
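The core estimation step inverts the Beer-Lambert law for thickness; a minimal sketch with illustrative numbers only (in the thesis, the kerma estimates and effective attenuation coefficient come from the calibration and Monte Carlo steps described above).

import numpy as np

def estimate_thickness(k_incident, k_detector, mu_effective):
    """Invert the Beer-Lambert law K = K0 * exp(-mu * t) for thickness t (cm).

    k_incident: estimated initial air kerma from the exposure factors,
    k_detector: kerma at the receptor from the calibrated pixel value,
    mu_effective: effective linear attenuation coefficient (1/cm).
    """
    return np.log(k_incident / k_detector) / mu_effective

# Illustrative values, not taken from the thesis:
print(f"{estimate_thickness(10.0, 0.8, 0.12):.1f} cm")  # ~21.1 cm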
VoCo: A simple-yet-effective volume contrastive learning framework for 3D medical image analysis
Wu, Linshan
Zhuang, Jiaxin
Chen, Hao
2024Conference Proceedings, cited 0 times
CT Images in COVID-19
Deep Learning
Deep Domain Adaptation Learning Framework for Associating Image Features to Tumour Gene Profile
Computational Models for Automated Histopathological Assessment of Colorectal Liver Metastasis Progression
XU, Xiaoyang
2019Thesis, cited 0 times
Thesis
Dissertation
Histopathology imaging features
COLON
Histopathology imaging is a type of microscopy imaging commonly used for the micro-level clinical examination of a patient's pathology. Due to the extremely large size of histopathology images, especially whole slide images (WSIs), it is difficult for pathologists to make a quantitative assessment by inspecting the details of a WSI. Hence, a computer-aided system is necessary to provide an objective and consistent assessment of the WSI for personalised treatment decisions. In this thesis, a deep learning framework for the automatic analysis of whole slide histopathology images is presented for the first time, which aims to address the challenging task of assessing and grading colorectal liver metastasis (CRLM). Quantitative evaluations of a patient's condition with CRLM are conducted through quantifying different tissue components in resected tumorous specimens. This study mimics the visual examination process of human experts by focusing on three levels of information, the tissue level, cell level and pixel level, to achieve the step-by-step segmentation of histopathology images.; At the tissue level, patches with category information are utilised to analyse the WSIs. Both classification-based approaches and segmentation-based approaches are investigated to locate the metastasis region and quantify different components of the WSI. For the classification-based method, different factors that might affect the classification accuracy are explored using state-of-the-art deep convolutional neural networks (DCNNs). Furthermore, a novel network is proposed to merge the information from different magnification levels to include contextual information to support the final decision. With the support of the segmentation-based method, edge information from the image is integrated with the proposed fully convolutional neural network to further enhance the segmentation results.; At the cell level, nuclei-related information is examined to tackle the challenge of inadequate annotations. The problem is approached from two aspects: a weakly supervised nuclei detection and classification method is presented to model the nuclei in the CRLM by integrating a traditional image processing method and a variational auto-encoder (VAE), and a novel nuclei instance segmentation framework is proposed to boost the accuracy of nuclei detection and segmentation using the idea of transfer learning. Afterwards, a fusion framework is proposed to enhance the tissue level segmentation results by leveraging the statistical and spatial properties of the cells.; At the pixel level, the segmentation problem is tackled by introducing information from immunohistochemistry (IHC) stained images. Firstly, two data augmentation approaches, synthesis-based and transfer-based, are proposed to address the problem of insufficient pixel level segmentation data. Afterwards, with the paired images and masks having been obtained, an end-to-end model is trained to achieve pixel level segmentation. Secondly, another novel weakly supervised approach based on the generative adversarial network (GAN) is proposed to explore the feasibility of transforming unpaired haematoxylin and eosin (HE) images to IHC stained images. Extensive experiments reveal that the virtually stained images can also be used for pixel level segmentation.
Accelerating Brain DTI and GYN MRI Studies Using Neural Network
There always exists a demand to accelerate the time-consuming MRI acquisition process. Many methods have been proposed to achieve this goal, including deep learning, which appears to be a robust tool compared to conventional methods. While much work has been done to evaluate the performance of neural networks on standard anatomical MR images, little attention has been paid to accelerating other, less conventional MR image acquisitions. This work aims to evaluate the feasibility of neural networks for accelerating brain DTI and gynecological brachytherapy MRI. Three neural networks, including U-net, Cascade-net and PD-net, were evaluated. Brain DTI data were acquired from the public database RIDER NEURO MRI, while cervix gynecological MRI data were acquired from Duke University Hospital clinical data. A 25% Cartesian undersampling strategy was applied to all the training and test data. Diffusion-weighted images and quantitative functional maps in brain DTI, and T1-spgr and T2 images in the GYN studies, were reconstructed. The performance of the neural networks was evaluated by quantitatively calculating the similarity between the reconstructed images and the reference images, using the metric Total Relative Error (TRE). Results showed that, with the architectures and parameters set in this work, all three neural networks could accelerate brain DTI and GYN T2 MR imaging. Generally, PD-net slightly outperformed Cascade-net, and both outperformed U-net with respect to image reconstruction performance. While this was also true for reconstruction of quantitative functional diffusion-weighted maps and GYN T1-spgr images, the overall performance of the three neural networks on these two tasks needed further improvement. In conclusion, PD-net is very promising for accelerating T2-weighted-based MR imaging. Future work can focus on adjusting the parameters and architectures of the neural networks to improve performance on accelerating GYN T1-spgr MR imaging, and on adopting more robust undersampling strategies, such as radial undersampling, to further improve overall acceleration performance.
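Two pieces of this evaluation are easy to sketch: a 25% Cartesian undersampling mask and the TRE metric, here assumed to be the l2 error norm relative to the reference norm (the thesis may define both differently).

import numpy as np

def total_relative_error(recon, reference):
    """TRE as the l2 norm of the error relative to the reference norm
    (one common definition; normalisation conventions vary)."""
    return np.linalg.norm(recon - reference) / np.linalg.norm(reference)

def cartesian_mask(shape, fraction=0.25, centre_lines=16, seed=0):
    """Keep a fraction of phase-encoding lines: a fully sampled
    low-frequency band plus randomly chosen outer lines."""
    rng = np.random.default_rng(seed)
    ny, nx = shape
    mask = np.zeros(shape, dtype=bool)
    centre = slice(ny // 2 - centre_lines // 2, ny // 2 + centre_lines // 2)
    mask[centre, :] = True
    n_random = int(fraction * ny) - centre_lines
    outer = [i for i in range(ny) if not mask[i, 0]]
    mask[rng.choice(outer, size=n_random, replace=False), :] = True
    return mask

mask = cartesian_mask((256, 256))
print(mask.mean())  # ~0.25 of k-space lines retained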
Non-invasive Profiling of Molecular Markers in Brain Gliomas using Deep Learning and Magnetic Resonance Images
Gliomas account for the most common malignant primary brain tumors in both pediatric and adult populations. They arise from glial cells and are divided into low-grade and high-grade gliomas, with significant differences in patient survival. Patients with aggressive high-grade gliomas have life expectancies of less than 2 years. Glioblastoma (GBM) is an aggressive brain tumor classified by the World Health Organization (WHO) as grade IV brain cancer. The overall survival for GBM patients is poor and is in the range of 12 to 15 months. These tumors are typically treated by surgery, followed by radiotherapy and chemotherapy. Gliomas often consist of active tumor tissue, necrotic tissue, and surrounding edema. Magnetic Resonance Imaging (MRI) is the most commonly used modality to assess brain tumors because of its superior soft tissue contrast. MRI tumor segmentation is used to identify the subcomponents as enhancing, necrotic or edematous tissue. Due to the heterogeneity and tissue relaxation differences in these subcomponents, multi-parametric (or multi-contrast) MRI is often used for accurate segmentation. Manual brain tumor segmentation is a challenging and tedious task for human experts due to the variability of tumor appearance, unclear tumor borders and the need to evaluate multiple MR images with different contrasts simultaneously. In addition, manual segmentation is often prone to significant intra- and inter-rater variability. To address these issues, Chapter 2 of my dissertation aims at designing and developing a highly accurate 3D Dense-Unet Convolutional Neural Network (CNN) for segmenting brain tumors into subcomponents that can easily be incorporated into a clinical workflow. Primary brain tumors demonstrate broad variations in imaging features, response to therapy, and prognosis. It has become evident that this heterogeneity is associated with specific molecular and genetic profiles. For example, isocitrate dehydrogenase 1 and 2 (IDH 1/2) mutated gliomas demonstrate increased survival compared to wild-type gliomas of the same histologic grade. Identification of the IDH mutation status as a marker for therapy and prognosis is considered one of the most important recent discoveries in brain glioma biology. Additionally, 1p/19q co-deletion and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation are associated with differences in response to specific chemoradiation regimens. Currently, the only reliable way of determining a molecular marker is by obtaining glioma tissue either via an invasive brain biopsy or following open surgical resection. Although the molecular profiling of gliomas is now a routine part of the evaluation of specimens obtained at biopsy or tumor resection, it would be helpful to have this information prior to surgery. In some cases, the information would aid in planning the extent of tumor resection. In others, for tumors in locations where resection is not possible and the risk of a biopsy is high, accurate delineation of the molecular and genetic profile of the tumor might be used to guide empiric treatment with radiation and/or chemotherapy. The ability to non-invasively profile these molecular markers using only T2w MRI has significant implications in determining therapy, predicting prognosis, and feasible clinical translation. Thus, Chapters 3, 4 and 5 of my dissertation focus on developing and evaluating deep learning algorithms for non-invasive profiling of molecular markers in brain gliomas using T2w MRI only.
This includes developing highly accurate, fully automated deep learning networks for: (i) classification of IDH mutation status (Chapter 3), (ii) classification of 1p/19q co-deletion status (Chapter 4), and (iii) classification of MGMT promoter status in brain gliomas (Chapter 5). An important caveat of using MRI is the effect of image degradation, such as motion artifact, on the performance of deep learning-based algorithms. Motion artifacts are an especially pervasive source of MR image quality degradation and can be due to gross patient movements as well as cardiac and respiratory motion. In clinical practice, these artifacts can interfere with diagnostic interpretation, necessitating repeat imaging. The effect of motion artifacts on medical images and on deep learning-based molecular profiling algorithms has not been studied systematically. It is likely that motion corruption will also lead to reduced performance of deep learning algorithms in classifying brain tumor images. Deep learning-based brain tumor segmentation and molecular profiling algorithms generally perform well only on specific datasets. Clinical translation of such algorithms has the potential to reduce interobserver variability, improve planning for radiation therapy, and speed the assessment of response to therapy. Although these algorithms perform very well on several publicly available datasets, their generalization to clinical datasets or tasks has been poor, preventing easy clinical translation. Thus, Chapter 6 of my dissertation focuses on evaluating the performance of the molecular profiling algorithms on motion-corrupted, motion-corrected, and clinical T2w MRI. This includes: (i) evaluating the effect of motion corruption on the molecular profiling algorithms, (ii) determining if deep learning-based motion correction can recover the performance of these algorithms to levels similar to non-corrupted images, and (iii) evaluating the performance of these algorithms on clinical T2w MRI before and after motion correction. This chapter investigates the effects of induced motion artifact on deep learning-based molecular classification and the relative importance of robust correction methods in recovering accuracy for potential clinical applicability. Deep learning studies typically require a very large amount of data to achieve good performance. The number of subjects available from the TCIA database is relatively small compared to the sample sizes typically required for deep learning. Despite this caveat, the data are representative of real-world clinical experience, with multiparametric MR images from multiple institutions, and represent one of the largest publicly available brain tumor databases. Additionally, the acquisition parameters and imaging vendor platforms are diverse across the imaging centers contributing data to TCIA. This study provides a framework for training, evaluating, and benchmarking any new artifact-correction architecture for potential insertion into a workflow. Although our results show promise for expeditious clinical translation, it will be essential to train and validate the algorithms using additional independent datasets. Thus, Chapter 7 of my dissertation discusses the limitations and possible future directions for this work.
Co-Segmentation Methods for Improving Tumor Target Delineation in PET-CT Images
Renal cancer is the seventh most prevalent cancer among men and the tenth most frequent cancer among women, accounting for 5% and 3% of all adult malignancies, respectively. Kidney cancer is increasing dramatically in developing countries due to inadequate living conditions, and in developed countries due to unhealthy lifestyles, smoking, obesity, and hypertension. For decades, radical nephrectomy (RN) was the standard method to address the high incidence of kidney cancer. However, the utilization of minimally invasive partial nephrectomy (PN) for the treatment of localized small renal masses has increased with the advent of laparoscopic and robotic-assisted procedures. In this framework, certain factors must be considered in the surgical planning and decision-making of partial nephrectomies, such as the morphology and location of the tumor. Advanced technologies such as automatic image segmentation, image and surface reconstruction, and 3D printing have been developed to assess the tumor anatomy before surgery and its relationship to surrounding structures, such as the arteriovenous system, with the aim of preventing damage. Overall, 3D printed anatomical kidney models are very useful to urologists, surgeons, and researchers as a reference point for preoperative planning and intraoperative visualization, enabling more efficient treatment and a high standard of care. Furthermore, they can provide considerable comfort in education, in patient counseling, and in delivering therapeutic methods customized to the needs of each individual patient. In this context, the fundamental objective of this thesis is to provide an analytical and general pipeline for the generation of a renal 3D printed model from CT images. In addition, methods are proposed to enhance preoperative planning and help surgeons prepare the surgical procedure with increased accuracy so as to improve their performance. Keywords: Medical Image, Computed Tomography (CT), Semantic Segmentation, Convolutional Neural Networks (CNNs), Surface Reconstruction, Mesh Processing, 3D Printing of Kidney, Operative assistance
Deep Learning for Automated Medical Image Analysis
Medical imaging is an essential tool in many areas of medical application, used for both diagnosis and treatment. However, reading medical images and making diagnosis or treatment recommendations requires specially trained medical specialists. The current practice of reading medical images is labor-intensive, time-consuming, costly, and error-prone. It would be more desirable to have a computer-aided system that can automatically make diagnosis and treatment recommendations. Recent advances in deep learning enable us to rethink the ways of clinician diagnosis based on medical images. Early detection has proven to be critical to give patients the best chance of recovery and survival. Advanced computer-aided diagnosis systems are expected to have high sensitivities and low false positive rates. How to provide accurate diagnosis results and explore different types of clinical data is an important topic in current computer-aided diagnosis research. In this thesis, we will introduce 1) mammograms for detecting breast cancer, the most frequently diagnosed solid cancer for U.S. women, 2) lung Computed Tomography (CT) images for detecting lung cancer, the most frequently diagnosed malignant cancer, and 3) head and neck CT images for automated delineation of organs at risk in radiotherapy. First, we will show how to employ the adversarial concept to generate hard examples that improve mammogram mass segmentation. Second, we will demonstrate how to use weakly labelled data for mammogram breast cancer diagnosis by efficiently designing deep learning for multi-instance learning. Third, the thesis will walk through the DeepLung system, which combines deep 3D ConvNets and Gradient Boosting Machines (GBM) for automated lung nodule detection and classification. Fourth, we will show how to use weakly labelled data to improve an existing lung nodule detection system by integrating deep learning with a probabilistic graphical model. Lastly, we will demonstrate AnatomyNet, which is thousands of times faster and more accurate than previous methods for automated anatomy segmentation.
New Diagnostics for Bipedality: The hominin ilium displays landmarks of a modified growth trajectory
Association of Peritumoral Radiomics With Tumor Biology and Pathologic Response to Preoperative Targeted Therapy for HER2 (ERBB2)-Positive Breast Cancer
Braman, Nathaniel
Prasanna, Prateek
Whitney, Jon
Singh, Salendra
Beig, Niha
Etesami, Maryam
Bates, David D. B.
Gallagher, Katherine
Bloch, B. Nicolas
Vulchi, Manasa
Turk, Paulette
Bera, Kaustav
Abraham, Jame
Sikov, William M.
Somlo, George
Harris, Lyndsay N.
Gilmore, Hannah
Plecha, Donna
Varadan, Vinay
Madabhushi, Anant
JAMA Netw Open2019Journal Article, cited 0 times
Website
Radiogenomics
TCGA-BRCA
Importance: There has been significant recent interest in understanding the utility of quantitative imaging to delineate breast cancer intrinsic biological factors and therapeutic response. No clinically accepted biomarkers are as yet available for estimation of response to human epidermal growth factor receptor 2 (currently known as ERBB2, but referred to as HER2 in this study)-targeted therapy in breast cancer. Objective: To determine whether imaging signatures on clinical breast magnetic resonance imaging (MRI) could noninvasively characterize HER2-positive tumor biological factors and estimate response to HER2-targeted neoadjuvant therapy. Design, Setting, and Participants: In a retrospective diagnostic study encompassing 209 patients with breast cancer, textural imaging features extracted within the tumor and annular peritumoral tissue regions on MRI were examined as a means to identify increasingly granular breast cancer subgroups relevant to therapeutic approach and response. First, among a cohort of 117 patients who received an MRI prior to neoadjuvant chemotherapy (NAC) at a single institution from April 27, 2012, through September 4, 2015, imaging features that distinguished HER2+ tumors from other receptor subtypes were identified. Next, among a cohort of 42 patients with HER2+ breast cancers with available MRI and RNA-seq data accumulated from a multicenter, preoperative clinical trial (BrUOG 211B), a signature of the response-associated HER2-enriched (HER2-E) molecular subtype within HER2+ tumors (n = 42) was identified. The association of this signature with pathologic complete response was explored in 2 patient cohorts from different institutions, where all patients received HER2-targeted NAC (n = 28, n = 50). Finally, the association between significant peritumoral features and lymphocyte distribution was explored in patients within the BrUOG 211B trial who had corresponding biopsy hematoxylin-eosin-stained slide images. Data analysis was conducted from January 15, 2017, to February 14, 2019. Main Outcomes and Measures: Evaluation of imaging signatures by the area under the receiver operating characteristic curve (AUC) in identifying HER2+ molecular subtypes and distinguishing pathologic complete response (ypT0/is) to NAC with HER2-targeting. Results: In the 209 patients included (mean [SD] age, 51.1 [11.7] years), features from the peritumoral regions better discriminated HER2-E tumors (maximum AUC, 0.85; 95% CI, 0.79-0.90; 9-12 mm from the tumor) compared with intratumoral features (AUC, 0.76; 95% CI, 0.69-0.84). A classifier combining peritumoral and intratumoral features identified the HER2-E subtype (AUC, 0.89; 95% CI, 0.84-0.93) and was significantly associated with response to HER2-targeted therapy in both validation cohorts (AUC, 0.80; 95% CI, 0.61-0.98 and AUC, 0.69; 95% CI, 0.53-0.84). Features from the 0- to 3-mm peritumoral region were significantly associated with the density of tumor-infiltrating lymphocytes (R2 = 0.57; 95% CI, 0.39-0.75; P = .002). Conclusions and Relevance: A combination of peritumoral and intratumoral characteristics appears to identify intrinsic molecular subtypes of HER2+ breast cancers from imaging, offering insights into immune response within the peritumoral environment and suggesting potential benefit for treatment guidance.
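For readers reimplementing the annular peritumoral analysis above, the ring-shaped shells (e.g., the 0- to 3-mm and 9- to 12-mm regions) can be approximated by morphological dilation of the tumor mask. A minimal sketch, assuming a 2D boolean mask and isotropic in-plane pixel spacing; the function name and spacing value are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def annular_ring(tumor_mask, inner_mm, outer_mm, pixel_spacing_mm):
    """Boolean mask of the annulus between inner_mm and outer_mm outside the
    tumor boundary. The default cross-shaped structuring element means n
    iterations approximate city-block (not Euclidean) distance."""
    inner_it = int(round(inner_mm / pixel_spacing_mm))
    outer_it = int(round(outer_mm / pixel_spacing_mm))
    inner = binary_dilation(tumor_mask, iterations=inner_it) if inner_it > 0 else tumor_mask
    outer = binary_dilation(tumor_mask, iterations=outer_it)
    return outer & ~inner

# Toy example: the 9-12 mm shell around a square "tumor" on a 1 mm grid
mask = np.zeros((64, 64), dtype=bool)
mask[28:36, 28:36] = True
shell = annular_ring(mask, 9.0, 12.0, 1.0)
```

A distance-transform-based ring (e.g., scipy.ndimage.distance_transform_edt on the inverted mask) would give true Euclidean shells and may be closer to what radiomics pipelines actually use.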
A Competition, Benchmark, Code, and Data for Using Artificial Intelligence to Detect Lesions in Digital Breast Tomosynthesis
Konz, N.
Buda, M.
Gu, H.
Saha, A.
Yang, J.
Chledowski, J.
Park, J.
Witowski, J.
Geras, K. J.
Shoshan, Y.
Gilboa-Solomon, F.
Khapun, D.
Ratner, V.
Barkan, E.
Ozery-Flato, M.
Marti, R.
Omigbodun, A.
Marasinou, C.
Nakhaei, N.
Hsu, W.
Sahu, P.
Hossain, M. B.
Lee, J.
Santos, C.
Przelaskowski, A.
Kalpathy-Cramer, J.
Bearce, B.
Cha, K.
Farahani, K.
Petrick, N.
Hadjiiski, L.
Drukker, K.
Armato, S. G., 3rd
Mazurowski, M. A.
JAMA Netw Open2023Journal Article, cited 0 times
Website
Breast-Cancer-Screening-DBT
Challenge
Humans
Computer Aided Detection (CADe)
Benchmarking
Mammography/methods
Algorithm Development
Radiographic Image Interpretation
Computer-Assisted/methods
*Breast Neoplasms/diagnostic imaging
IMPORTANCE: An accurate and robust artificial intelligence (AI) algorithm for detecting cancer in digital breast tomosynthesis (DBT) could significantly improve detection accuracy and reduce health care costs worldwide. OBJECTIVES: To make training and evaluation data for the development of AI algorithms for DBT analysis available, to develop well-defined benchmarks, and to create publicly available code for existing methods. DESIGN, SETTING, AND PARTICIPANTS: This diagnostic study is based on a multi-institutional international grand challenge in which research teams developed algorithms to detect lesions in DBT. A data set of 22 032 reconstructed DBT volumes was made available to research teams. Phase 1, in which teams were provided 700 scans from the training set, 120 from the validation set, and 180 from the test set, took place from December 2020 to January 2021, and phase 2, in which teams were given the full data set, took place from May to July 2021. MAIN OUTCOMES AND MEASURES: The overall performance was evaluated by mean sensitivity for biopsied lesions using only DBT volumes with biopsied lesions; ties were broken by including all DBT volumes. RESULTS: A total of 8 teams participated in the challenge. The team with the highest mean sensitivity for biopsied lesions was the NYU B-Team, with 0.957 (95% CI, 0.924-0.984), and the second-place team, ZeDuS, had a mean sensitivity of 0.926 (95% CI, 0.881-0.964). When the results were aggregated, the mean sensitivity for all submitted algorithms was 0.879; for only those who participated in phase 2, it was 0.926. CONCLUSIONS AND RELEVANCE: In this diagnostic study, an international competition produced algorithms with high sensitivity for using AI to detect lesions on DBT images. A standardized performance benchmark for the detection task using publicly available clinical imaging data was released, with detailed descriptions and analyses of submitted algorithms accompanied by a public release of their predictions and code for selected methods. These resources will serve as a foundation for future research on computer-assisted diagnosis methods for DBT, significantly lowering the barrier of entry for new researchers.
Development and Validation of an Automated Image-Based Deep Learning Platform for Sarcopenia Assessment in Head and Neck Cancer
Ye, Zezhong
Saraf, Anurag
Ravipati, Yashwanth
Hoebers, Frank
Catalano, Paul J.
Zha, Yining
Zapaishchykova, Anna
Likitlersuang, Jirapat
Guthier, Christian
Tishler, Roy B.
Schoenfeld, Jonathan D.
Margalit, Danielle N.
Haddad, Robert I.
Mak, Raymond H.
Naser, Mohamed
Wahid, Kareem A.
Sahlsten, Jaakko
Jaskari, Joel
Kaski, Kimmo
Mäkitie, Antti A.
Fuller, Clifton D.
Aerts, Hugo J. W. L.
Kann, Benjamin H.
JAMA Network Open2023Journal Article, cited 0 times
HNSCC
sarcopenia
Deep Learning
Importance: Sarcopenia is an established prognostic factor in patients with head and neck squamous cell carcinoma (HNSCC); the quantification of sarcopenia assessed by imaging is typically achieved through the skeletal muscle index (SMI), which can be derived from cervical skeletal muscle segmentation and cross-sectional area. However, manual muscle segmentation is labor intensive, prone to interobserver variability, and impractical for large-scale clinical use. Objective: To develop and externally validate a fully automated image-based deep learning platform for cervical vertebral muscle segmentation and SMI calculation and evaluate associations with survival and treatment toxicity outcomes. Design, Setting, and Participants: For this prognostic study, a model development data set was curated from publicly available and deidentified data from patients with HNSCC treated at MD Anderson Cancer Center between January 1, 2003, and December 31, 2013. A total of 899 patients undergoing primary radiation for HNSCC with abdominal computed tomography scans and complete clinical information were selected. An external validation data set was retrospectively collected from patients undergoing primary radiation therapy between January 1, 1996, and December 31, 2013, at Brigham and Women’s Hospital. The data analysis was performed between May 1, 2022, and March 31, 2023. Exposures: C3 vertebral skeletal muscle segmentation during radiation therapy for HNSCC. Main Outcomes and Measures: Overall survival and treatment toxicity outcomes of HNSCC. Results: The total patient cohort comprised 899 patients with HNSCC (median [range] age, 58 [24-90] years; 140 female [15.6%] and 755 male [84.0%]). Dice similarity coefficients for the validation set (n = 96) and internal test set (n = 48) were 0.90 (95% CI, 0.90-0.91) and 0.90 (95% CI, 0.89-0.91), respectively, with a mean 96.2% acceptable rate between 2 reviewers on external clinical testing (n = 377). Estimated cross-sectional area and SMI values were associated with manually annotated values (Pearson r = 0.99; P < .001) across data sets. On multivariable Cox proportional hazards regression, SMI-derived sarcopenia was associated with worse overall survival (hazard ratio, 2.05; 95% CI, 1.04-4.04; P = .04) and longer feeding tube duration (median [range], 162 [6-1477] vs 134 [15-1255] days; hazard ratio, 0.66; 95% CI, 0.48-0.89; P = .006) than no sarcopenia. Conclusions and Relevance: This prognostic study’s findings show external validation of a fully automated deep learning pipeline to accurately measure sarcopenia in HNSCC and an association with important disease outcomes. The pipeline could enable the integration of sarcopenia assessment into clinical decision making for individuals with HNSCC.
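As background for the SMI values discussed above: skeletal muscle index is conventionally the segmented muscle cross-sectional area normalized by height squared. A minimal sketch of that generic step, assuming a 2D binary segmentation and known pixel spacing; the study's C3-based measurement conventions and cohort-specific cutoffs are omitted, and all names here are illustrative.

```python
import numpy as np

def skeletal_muscle_index(muscle_mask, row_spacing_mm, col_spacing_mm, height_m):
    """SMI (cm^2/m^2): muscle cross-sectional area in cm^2 from a binary
    mask, divided by patient height squared in m^2."""
    pixel_area_cm2 = (row_spacing_mm / 10.0) * (col_spacing_mm / 10.0)
    csa_cm2 = float(np.count_nonzero(muscle_mask)) * pixel_area_cm2
    return csa_cm2 / (height_m ** 2)

# Toy example: a 40 cm^2 muscle area for a 1.70 m patient
mask = np.ones((100, 40), dtype=bool)               # 4000 px * 1 mm^2 = 40 cm^2
print(skeletal_muscle_index(mask, 1.0, 1.0, 1.70))  # ~13.8 cm^2/m^2
```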
Quantitative variations in texture analysis features dependent on MRI scanning parameters: A phantom model
Buch, Karen
Kuno, Hirofumi
Qureshi, Muhammad M
Li, Baojun
Sakai, Osamu
Journal of applied clinical medical physics2018Journal Article, cited 0 times
Website
RIDER
TCGA
texture analysis
MRI
Assessment of prostate cancer prognostic Gleason grade group using zonal-specific features extracted from biparametric MRI using a KNN classifier
Jensen, C.
Carl, J.
Boesen, L.
Langkilde, N. C.
Ostergaard, L. R.
J Appl Clin Med Phys2019Journal Article, cited 0 times
Website
SPIE-AAPM PROSTATEx Challenge
PROSTATE
K Nearest Neighbor (KNN)
Classification
PURPOSE: To automatically assess the aggressiveness of prostate cancer (PCa) lesions using zonal-specific image features extracted from diffusion weighted imaging (DWI) and T2W MRI. METHODS: Region of interest was extracted from DWI (peripheral zone) and T2W MRI (transitional zone and anterior fibromuscular stroma) around the center of 112 PCa lesions from 99 patients. Image histogram and texture features, 38 in total, were used together with a k-nearest neighbor classifier to classify lesions into their respective prognostic Grade Group (GG) (proposed by the International Society of Urological Pathology 2014 consensus conference). A semi-exhaustive feature search was performed (1-6 features in each feature set) and validated using threefold stratified cross validation in a one-versus-rest classification setup. RESULTS: Classifying PCa lesions into GGs resulted in AUC of 0.87, 0.88, 0.96, 0.98, and 0.91 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5 for the peripheral zone, respectively. The results for transitional zone and anterior fibromuscular stroma were AUC of 0.85, 0.89, 0.83, 0.94, and 0.86 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5, respectively. CONCLUSION: This study showed promising results with reasonable AUC values for classification of all GG indicating that zonal-specific imaging features from DWI and T2W MRI can be used to differentiate between PCa lesions of various aggressiveness.
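To make the evaluation setup above concrete, here is a minimal sketch of one-versus-rest k-nearest-neighbor classification with threefold stratified cross-validation, in the spirit of the study's pipeline. The synthetic features, neighbor count, and scaling step are assumptions of this sketch, not the authors' exact configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(112, 38))      # stand-in for 38 histogram/texture features
y = rng.integers(0, 3, size=112)    # stand-in for grade-group labels

target = 1                          # one-vs-rest: the grade group of interest
aucs = []
for tr, te in StratifiedKFold(n_splits=3, shuffle=True, random_state=0).split(X, y):
    clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
    clf.fit(X[tr], (y[tr] == target).astype(int))
    prob = clf.predict_proba(X[te])[:, 1]
    aucs.append(roc_auc_score((y[te] == target).astype(int), prob))
print(f"mean one-vs-rest AUC: {np.mean(aucs):.2f}")   # ~0.5 on random data
```

The paper additionally runs a semi-exhaustive search over 1-6 feature subsets; that outer loop would wrap the cross-validation shown here.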
Impact of image preprocessing methods on reproducibility of radiomic features in multimodal magnetic resonance imaging in glioblastoma
Moradmand, Hajar
Aghamiri, Seyed Mahmoud Reza
Ghaderi, Reza
J Appl Clin Med Phys2019Journal Article, cited 0 times
TCGA-GBM
GLISTR
Radiomics
Magnetic Resonance Imaging (MRI)
To investigate the effect of image preprocessing, with respect to intensity inhomogeneity correction and noise filtering, on the robustness and reproducibility of radiomics features extracted from the Glioblastoma (GBM) tumor in multimodal MR images (mMRI). In this study, for each patient 1461 radiomics features were extracted from GBM subregions (i.e., edema, necrosis, enhancement, and tumor) of mMRI (i.e., FLAIR, T1, T1C, and T2) volumes for five preprocessing combinations (in total 116 880 radiomics features). The robustness and reproducibility of the radiomics features were assessed under four comparisons: (a) baseline versus modified bias field; (b) baseline versus modified bias field followed by noise filtering; (c) baseline versus modified noise; and (d) baseline versus modified noise followed by bias field correction. The concordance correlation coefficient (CCC), dynamic range (DR), and interclass correlation coefficient (ICC) were used as metrics. Shape features and, subsequently, local binary pattern (LBP) filtered images were highly stable and reproducible against bias field correction and noise filtering in all measurements. Across all MRI modalities, necrosis regions (NC: n ~ 449/1461, 30%) had the highest number of highly robust features (CCC and DR >= 0.9) compared with edema (ED: n ~ 296/1461, 20%), enhanced (EN: n ~ 281/1461, 19%), and active-tumor (TM: n ~ 254/1461, 17%) regions. Furthermore, our results identified that the percentage of highly reproducible features with ICC >= 0.9 was higher after bias field correction (23.2%) and bias field correction followed by noise filtering (22.4%) than after noise smoothing alone or noise smoothing followed by bias field correction. These preliminary findings imply that preprocessing sequences can have a significant impact on the robustness and reproducibility of mMRI-based radiomics features, and that identification of generalizable and consistent preprocessing algorithms is a pivotal step before bringing radiomics biomarkers into the clinic for GBM patients.
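The concordance correlation coefficient used as a robustness metric above has a simple closed form (Lin's CCC); a small sketch using population moments:

```python
import numpy as np

def concordance_correlation(x, y):
    """Lin's CCC between paired measurements of a radiomic feature,
    e.g., values before and after a preprocessing step."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)
```

Unlike Pearson correlation, the denominator penalizes both scale and location shifts, so CCC drops when preprocessing changes feature values systematically even if they remain perfectly correlated.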
Dynamic conformal arcs for lung stereotactic body radiation therapy: A comparison with volumetric-modulated arc therapy
Bokrantz, R.
Wedenberg, M.
Sandwall, P.
J Appl Clin Med Phys2020Journal Article, cited 1 times
Website
4D-Lung
Computed Tomography (CT)
This study constitutes a feasibility assessment of dynamic conformal arc (DCA) therapy as an alternative to volumetric-modulated arc therapy (VMAT) for stereotactic body radiation therapy (SBRT) of lung cancer. The rationale for DCA is lower geometric complexity and hence reduced risk for interplay errors induced by respiratory motion. Forward planned DCA and inverse planned DCA based on segment-weight optimization were compared to VMAT for single arc treatments of five lung patients. Analysis of dose-volume histograms and clinical goal fulfillment revealed that DCA can generate satisfactory and near equivalent dosimetric quality to VMAT, except for complex tumor geometries. Segment-weight optimized DCA provided spatial dose distributions qualitatively similar to those for VMAT. Our results show that DCA, and particularly segment-weight optimized DCA, may be an attractive alternative to VMAT for lung SBRT treatments if the patient anatomy is favorable.
A feasibility study to estimate optimal rigid-body registration using combinatorial rigid registration optimization (CORRO)
Yorke, A. A.
Solis, D., Jr.
Guerrero, T.
J Appl Clin Med Phys2020Journal Article, cited 0 times
PURPOSE: Clinical image pairs provide the most realistic test data for image registration evaluation. However, the optimal registration is unknown. Using combinatorial rigid registration optimization (CORRO), we demonstrate a method to estimate the optimal alignment for rigid registration of clinical image pairs. METHODS: Expert-selected landmark pairs were identified for each CT/CBCT image pair for six cases representing head and neck, thoracic, and pelvic anatomic regions. Combination subsets of k landmark pairs (k-combination sets) were generated without repetition to form a large set of k-combination sets (k-set) for k = 4, 8, 12. The rigid transformation between the image pairs was calculated for each k-combination set. The mean and standard deviation of these transformations were used to derive the final registration for each k-set. RESULTS: The standard deviation of the registration output decreased as the k-size increased for all cases. The joint entropy evaluated for each k-set of each case was smaller than those from two commercially available registration programs, indicating a stronger correlation between the image pair after CORRO was used. A joint histogram plot of all three algorithms showed high correlation between them. As further proof of the efficacy of CORRO, the joint entropy of each member of 30 000 k-combination sets for k = 4 was calculated for one of the thoracic cases. The minimum joint entropy was found to exist at the estimated mean of the registrations, indicating that CORRO converges to the optimal rigid-registration result. CONCLUSIONS: We have developed a methodology called CORRO that estimates the optimal alignment for rigid registration of clinical image pairs using a large set of landmark points. The rigid-body registration results have been shown to be comparable to results from commercially available algorithms for all six cases. CORRO can serve as an excellent tool for testing and validating rigid registration algorithms.
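A simplified reading of the CORRO procedure described above: enumerate k-combinations of expert landmark pairs, fit a rigid transform to each combination, and summarize the transforms by their mean (and standard deviation). The sketch below makes several assumptions not stated in the abstract: Kabsch least-squares fitting, random capping of the combination count, and a naive element-wise average of rotation matrices that would need re-projection onto a true rotation in practice.

```python
import numpy as np
from itertools import combinations

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping 3D points P onto Q (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def corro_estimate(fixed_pts, moving_pts, k=4, max_sets=5000, seed=0):
    """Mean rigid transform over (a capped sample of) k-combinations of landmarks."""
    combos = list(combinations(range(len(fixed_pts)), k))
    np.random.default_rng(seed).shuffle(combos)
    Rs, ts = [], []
    for idx in combos[:max_sets]:
        R, t = rigid_fit(moving_pts[list(idx)], fixed_pts[list(idx)])
        Rs.append(R); ts.append(t)
    # np.std over Rs/ts gives the spread the paper reports shrinking with k
    return np.mean(Rs, axis=0), np.mean(ts, axis=0)
```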
SBRT of ventricular tachycardia using 4pi optimized trajectories
Reis, C.
Little, B.
Lee MacDonald, R.
Syme, A.
Thomas, C. G.
Robar, J. L.
J Appl Clin Med Phys2021Journal Article, cited 0 times
Website
CT Lymph Nodes
Radiation Therapy
Segmentation
radiosurgery
ventricular tachycardia
HEART
PURPOSE: To investigate the possible advantages of using 4pi-optimized arc trajectories in stereotactic body radiation therapy of ventricular tachycardia (VT-SBRT) to minimize exposure of healthy tissues. METHODS AND MATERIALS: Thorax computed tomography (CT) data for 15 patients were used for contouring organs at risk (OARs) and defining realistic planning target volumes (PTVs). A conventional trajectory plan, defined as two full coplanar arcs was compared to an optimized-trajectory plan provided by a 4pi algorithm that penalizes geometric overlap of PTV and OARs in the beam's-eye-view. A single fraction of 25 Gy was prescribed to the PTV in both plans and a comparison of dose sparing to OARs was performed based on comparisons of maximum, mean, and median dose. RESULTS: A significant average reduction in maximum dose was observed for esophagus (18%), spinal cord (26%), and trachea (22%) when using 4pi-optimized trajectories. Mean doses were also found to decrease for esophagus (19%), spinal cord (33%), skin (18%), liver (59%), lungs (19%), trachea (43%), aorta (11%), inferior vena cava (25%), superior vena cava (33%), and pulmonary trunk (26%). A median dose reduction was observed for esophagus (40%), spinal cord (48%), skin (36%), liver (72%), lungs (41%), stomach (45%), trachea (53%), aorta (45%), superior vena cava (38%), pulmonary veins (32%), and pulmonary trunk (39%). No significant difference was observed for maximum dose (p = 0.650) and homogeneity index (p = 0.156) for the PTV. Average values of conformity number were 0.86 +/- 0.05 and 0.77 +/- 0.09 for the conventional and 4pi optimized plans respectively. CONCLUSIONS: 4pi optimized trajectories provided significant reduction to mean and median doses to cardiac structures close to the target but did not decrease maximum dose. Significant improvement in maximum, mean and median doses for noncardiac OARs makes 4pi optimized trajectories a suitable delivery technique for treating VT.
Attention-guided duplex adversarial U-net for pancreatic segmentation from computed tomography images
Li, M.
Lian, F.
Li, Y.
Guo, S.
J Appl Clin Med Phys2022Journal Article, cited 0 times
Website
Pancreas-CT
Machine Learning
Generative adversarial network
Segmentation
PURPOSE: Segmenting organs from computed tomography (CT) images is crucial to early diagnosis and treatment. Pancreas segmentation is especially challenging because the pancreas has a small volume and a large variation in shape. METHODS: To mitigate this issue, an attention-guided duplex adversarial U-Net (ADAU-Net) for pancreas segmentation is proposed in this work. First, two adversarial networks are integrated into the baseline U-Net to ensure the obtained prediction maps resemble the ground truths. Then, attention blocks are applied to preserve contextual information for segmentation. The implementation of the proposed ADAU-Net consists of two steps: 1) a backbone segmentor selection scheme is introduced to select an optimal backbone segmentor from three two-dimensional segmentation model variants based on a conventional U-Net; and 2) attention blocks are integrated into the backbone segmentor at several locations to enhance the interdependency among pixels for better segmentation performance, and the optimal structure is selected as the final version. RESULTS: The experimental results on the National Institutes of Health Pancreas-CT dataset show that our proposed ADAU-Net outperforms the baseline segmentation network by 6.39% in Dice similarity coefficient and obtains competitive performance compared with state-of-the-art methods for pancreas segmentation. CONCLUSION: The ADAU-Net achieves satisfactory segmentation results on the public pancreas dataset, indicating that the proposed model can segment pancreas outlines from CT images accurately.
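The abstract above does not specify the attention block architecture; for illustration only, here is a generic additive attention gate of the kind commonly inserted at U-Net skip connections (a sketch, not the ADAU-Net block itself). It assumes the gating features have already been resampled to the skip connection's spatial size.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Generic additive attention gate: learn a [0,1] map from skip and
    gating features, then re-weight the skip features pixel-wise."""
    def __init__(self, ch_skip, ch_gate, ch_inter):
        super().__init__()
        self.w_x = nn.Conv2d(ch_skip, ch_inter, kernel_size=1)
        self.w_g = nn.Conv2d(ch_gate, ch_inter, kernel_size=1)
        self.psi = nn.Conv2d(ch_inter, 1, kernel_size=1)

    def forward(self, skip, gate):
        attn = torch.sigmoid(self.psi(torch.relu(self.w_x(skip) + self.w_g(gate))))
        return skip * attn

skip = torch.randn(1, 64, 32, 32)    # encoder features via skip connection
gate = torch.randn(1, 128, 32, 32)   # coarser decoder features, upsampled
print(AttentionGate(64, 128, 32)(skip, gate).shape)  # torch.Size([1, 64, 32, 32])
```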
Deep learning-based auto segmentation using generative adversarial network on magnetic resonance images obtained for head and neck cancer patients
Kawahara, D.
Tsuneda, M.
Ozawa, S.
Okamoto, H.
Nakamura, M.
Nishio, T.
Nagata, Y.
J Appl Clin Med Phys2022Journal Article, cited 0 times
Website
AAPM RT-MAC
*Deep Learning
*Head and Neck Neoplasms/diagnostic imaging/radiotherapy
Humans
Image Processing
Computer-Assisted/methods
Magnetic Resonance Imaging
Organs at Risk
Convolutional Neural Network (CNN)
Generative Adversarial Network (GAN)
deep learning
segmentation
PURPOSE: Adaptive radiotherapy requires auto-segmentation in patients with head and neck (HN) cancer. In the current study, we propose an auto-segmentation model using a generative adversarial network (GAN) on magnetic resonance (MR) images of HN cancer for MR-guided radiotherapy (MRgRT). MATERIAL AND METHODS: We used a dataset from the American Association of Physicists in Medicine MRI Auto-Contouring (RT-MAC) Grand Challenge 2019. Specifically, eight structures in the MR images of the HN region, namely the submandibular glands, lymph node levels II and III, and parotid glands, were segmented with deep learning models using a GAN and a fully convolutional network with a U-net. These segmentations were compared with the clinically used atlas-based segmentation. RESULTS: The mean Dice similarity coefficient (DSC) of the U-net and GAN models was significantly higher than that of the atlas-based method for all structures (p < 0.05), and the maximum Hausdorff distance (HD) was significantly lower (p < 0.05). Comparing the 2.5D and 3D U-nets, the 3D U-net was superior in segmenting the organs at risk (OARs) for HN patients. The DSC was highest (0.75-0.85) and the HD was lowest (within 5.4 mm) for the 2.5D GAN model in all OARs. CONCLUSIONS: We investigated the auto-segmentation of the OARs for HN patients using U-net and GAN models on MR images. Our proposed model is potentially valuable for improving the efficiency of HN radiotherapy treatment planning.
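The Dice similarity coefficient reported above is computed from two binary masks as twice the overlap divided by the total foreground; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```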
Improving reproducibility and performance of radiomics in low‐dose CT using cycle GANs
Chen, Junhua
Wee, Leonard
Dekker, Andre
Bermejo, Inigo
Journal of applied clinical medical physics2022Journal Article, cited 0 times
LDCT-and-Projection-data
NSCLC-Radiomics
TCGA-LUAD
BACKGROUND: As a means to extract biomarkers from medical imaging, radiomics has attracted increased attention from researchers. However, reproducibility and performance of radiomics in low-dose CT scans are still poor, mostly due to noise. Deep learning generative models can be used to denoise these images and in turn improve radiomics' reproducibility and performance. However, most generative models are trained on paired data, which can be difficult or impossible to collect.
PURPOSE: In this article, we investigate the possibility of denoising low-dose CTs using cycle generative adversarial networks (GANs) to improve radiomics reproducibility and performance based on unpaired datasets.
METHODS AND MATERIALS: Two cycle GANs were trained: (1) from paired data, by simulating low-dose CTs (i.e., introducing noise) from high-dose CTs, and (2) from unpaired real low-dose CTs. To accelerate convergence, a slice-paired training strategy was introduced during GAN training. The trained GANs were applied to three scenarios: (1) improving radiomics reproducibility in simulated low-dose CT images, (2) improving radiomics reproducibility in same-day repeat low-dose CTs (RIDER dataset), and (3) improving radiomics performance in survival prediction. Cycle GAN results were compared with a conditional GAN (CGAN) and an encoder-decoder network (EDN) trained on simulated paired data.
RESULTS: The cycle GAN trained on simulated data improved the concordance correlation coefficients (CCC) of radiomic features from 0.87 (95% CI, [0.833, 0.901]) to 0.93 (95% CI, [0.916, 0.949]) on simulated noise CT and from 0.89 (95% CI, [0.881, 0.914]) to 0.92 (95% CI, [0.908, 0.937]) on the RIDER dataset, as well as improving the area under the receiver operating characteristic curve (AUC) of survival prediction from 0.52 (95% CI, [0.511, 0.538]) to 0.59 (95% CI, [0.578, 0.602]). The cycle GAN trained on real data increased the CCCs of features in RIDER to 0.95 (95% CI, [0.933, 0.961]) and the AUC of survival prediction to 0.58 (95% CI, [0.576, 0.596]).
CONCLUSION: The results show that cycle GANs trained on both simulated and real data can improve radiomics' reproducibility and performance in low-dose CT and achieve similar results compared to CGANs and EDNs.
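For readers unfamiliar with the cycle GAN objective referenced above, the term that enables training on unpaired low-dose and high-dose CTs is the cycle-consistency loss: images mapped to the other domain and back should return to themselves. A sketch of that term alone, with the generator names and the weight lam as placeholders and the adversarial terms omitted:

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(g_ld2hd, g_hd2ld, ld, hd, lam=10.0):
    """L1 cycle term of a CycleGAN for LDCT denoising: LD -> HD -> LD and
    HD -> LD -> HD reconstructions should match the originals."""
    l1 = nn.L1Loss()
    return lam * (l1(g_hd2ld(g_ld2hd(ld)), ld) + l1(g_ld2hd(g_hd2ld(hd)), hd))

# Usage sketch: add this to the adversarial losses of both generators, e.g.
# total = adv_ld + adv_hd + cycle_consistency_loss(G, F, ld_batch, hd_batch)
```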
Self‐adaption and texture generation: A hybrid loss function for low‐dose CT denoising
Wang, Zhenchuan
Liu, Minghui
Cheng, Xuan
Zhu, Jinqi
Wang, Xiaomin
Gong, Haigang
Liu, Ming
Xu, Lifeng
Journal of applied clinical medical physics2023Journal Article, cited 0 times
LDCT-and-Projection-data
BACKGROUND: Deep learning has been successfully applied to low-dose CT (LDCT) denoising, but model training is heavily dependent on an appropriate loss function. Existing denoising models often use per-pixel losses, including mean absolute error (MAE) and mean squared error (MSE). These ignore the difference in denoising difficulty between different regions of the CT images and lead to the loss of texture information in the generated image.
PURPOSE: In this paper, we propose a new hybrid loss function that adapts to the noise in different regions of CT images to balance the denoising difficulty and preserve texture details, thus acquiring CT images with high-quality diagnostic value using LDCT images, providing strong support for condition diagnosis.
METHODS: We propose a hybrid loss function consisting of weighted patch loss (WPLoss) and high-frequency information loss (HFLoss). To enhance the model's denoising ability of the local areas which are difficult to denoise, we improve the MAE to obtain WPLoss. After the generated image and the target image are divided into several patches, the loss weight of each patch is adaptively and dynamically adjusted according to its loss ratio. In addition, considering that texture details are contained in the high-frequency information of the image, we use HFLoss to calculate the difference between CT images in the high-frequency information part.
RESULTS: Our hybrid loss function improves the denoising performance of several models in the experiment, and obtains a higher peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Moreover, through visual inspection of the generated results of the comparison experiment, the proposed hybrid function can effectively suppress noise and retain image details.
CONCLUSIONS: We propose a hybrid loss function for LDCT image denoising that has good interpretability and can improve the denoising performance of existing models. Validation results from multiple models on different datasets show that it generalizes well. Using this loss function, high-quality CT images are achieved at low radiation dose, avoiding the hazards of radiation while supporting reliable disease diagnosis.
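A simplified reading of the weighted patch loss idea described above: compute a per-patch MAE, then re-weight each patch by its share of the total loss so that harder-to-denoise regions contribute more. The patch size, normalization, and detached weights below are choices of this sketch, not necessarily the paper's exact formulation; inputs are assumed to have sides divisible by the patch size.

```python
import torch

def weighted_patch_loss(pred, target, patch=16):
    """Patch-wise MAE with adaptive per-patch weights (WPLoss-style sketch)."""
    err = (pred - target).abs()                                    # (B, C, H, W)
    patches = err.unfold(2, patch, patch).unfold(3, patch, patch)  # (B, C, h, w, p, p)
    per_patch = patches.mean(dim=(-1, -2))                         # (B, C, h, w)
    weights = (per_patch / per_patch.sum(dim=(-1, -2), keepdim=True)).detach()
    return (weights * per_patch).sum(dim=(-1, -2)).mean()

x = torch.rand(2, 1, 64, 64)
y = torch.rand(2, 1, 64, 64)
print(weighted_patch_loss(x, y))
```

The high-frequency term (HFLoss) would be computed analogously on a high-pass-filtered version of both images and added to this with a balancing weight.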
Contrast-enhanced MRI synthesis using dense-dilated residual convolutions based 3D network toward elimination of gadolinium in neuro-oncology
Osman, A. F. I.
Tamam, N. M.
J Appl Clin Med Phys2023Journal Article, cited 0 times
Website
BraTS 2021
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
Deep learning
dilated convolution
gadolinium-based contrast agents
glioma
medical image synthesis
neuro-oncology
residual connection
Recent studies have raised broad safety and health concerns about the use of gadolinium contrast agents during magnetic resonance imaging (MRI) to enhance identification of active tumors. In this paper, we developed a deep learning-based method for three-dimensional (3D) contrast-enhanced T1-weighted (T1) image synthesis from contrast-free image(s). The MR images of 1251 patients with glioma from the RSNA-ASNR-MICCAI BraTS Challenge 2021 dataset were used in this study. A 3D dense-dilated residual U-Net (DD-Res U-Net) was developed for contrast-enhanced T1 image synthesis from contrast-free image(s). The model was trained on a randomly split training set (n = 800) using a customized loss function and validated on a validation set (n = 200) to improve its generalizability. The generated images were quantitatively assessed against the ground truth on a test set (n = 251) using the mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), normalized mutual information (NMI), and Hausdorff distance (HDD) metrics. We also performed a qualitative visual similarity assessment between the synthetic and ground-truth images. The effectiveness of the proposed model was compared with a 3D U-Net baseline model and existing deep learning-based methods in the literature. Our proposed DD-Res U-Net model achieved promising performance for contrast-enhanced T1 synthesis in both quantitative metrics and perceptual evaluation on the test set (n = 251). Analysis of results on the whole brain region showed a PSNR (in dB) of 29.882 +/- 5.924, a SSIM of 0.901 +/- 0.071, a MAE of 0.018 +/- 0.013, a MSE of 0.002 +/- 0.002, a HDD of 2.329 +/- 9.623, and a NMI of 1.352 +/- 0.091 when using only T1 as input; and a PSNR (in dB) of 30.284 +/- 4.934, a SSIM of 0.915 +/- 0.063, a MAE of 0.017 +/- 0.013, a MSE of 0.001 +/- 0.002, a HDD of 1.323 +/- 3.551, and a NMI of 1.364 +/- 0.089 when combining T1 with other MRI sequences. Compared to the U-Net baseline model, our model revealed superior performance. Our model demonstrated excellent capability in generating synthetic contrast-enhanced T1 images of the whole brain region when using multiple contrast-free images as input. However, without tumor mask information incorporated during network training, its performance in the tumor regions was inferior to that on the whole brain; further improvement is required before it can replace gadolinium administration in neuro-oncology.
A deep learning approach to remove contrast from contrast-enhanced CT for proton dose calculation
Wang, X.
Hao, Y.
Duan, Y.
Yang, D.
J Appl Clin Med Phys2024Journal Article, cited 0 times
CPTAC-PDA
TCGA-STAD
Generative Adversarial Network (GAN)
Contrast enhancement
Computed Tomography (CT)
Deep learning
medical image processing
proton dose calculation
radiation therapy
Radiotherapy
PURPOSE: Non-Contrast Enhanced CT (NCECT) is normally required for proton dose calculation while Contrast Enhanced CT (CECT) is often scanned for tumor and organ delineation. Possible tissue motion between these two CTs raises dosimetry uncertainties, especially for moving tumors in the thorax and abdomen. Here we report a deep learning approach to generate NCECT directly from CECT. This method could be useful to avoid the NCECT scan, reduce CT simulation time and imaging dose, and decrease the uncertainties caused by tissue motion between otherwise two different CT scans. METHODS: A deep network was developed to convert CECT to NCECT. The network receives a 3D patch from the CECT images as input and generates a corresponding contrast-removed NCECT image patch. Abdominal CECT and NCECT image pairs of 20 patients were deformably registered, and 8000 image patch pairs extracted from the registered image pairs were utilized to train and test the model. CTs of clinical proton patients and their treatment plans were employed to evaluate the dosimetric impact of using the generated NCECT for proton dose calculation. RESULTS: Our approach achieved a cosine similarity score of 0.988 and an MSE value of 0.002. A quantitative comparison of clinical proton dose plans computed on the CECT and the generated NCECT for five proton patients revealed significant dose differences at the distal ends of beam paths. V100% of PTV and GTV changed by 3.5% and 5.5%, respectively. The mean HU difference for all five patients between the generated and the scanned NCECTs was approximately 4.72, whereas the difference between CECT and the scanned NCECT was approximately 64.52, indicating an approximately 93% reduction in mean HU difference. CONCLUSIONS: A deep learning approach was developed to generate NCECTs from CECTs. This approach could be useful for proton dose calculation, reducing uncertainties caused by tissue motion between CECT and NCECT.
A deep learning-based framework (Co-ReTr) for auto-segmentation of non-small cell-lung cancer in computed tomography images
Kunkyab, T.
Bahrami, Z.
Zhang, H.
Liu, Z.
Hyde, D.
J Appl Clin Med Phys2024Journal Article, cited 0 times
Website
NSCLC-Radiomics
NSCLC Radiogenomics
Gross Tumor Volume (GTV)
Computed Tomography (CT)
Auto-segmentation
Model
Deep convolutional neural network (DCNN)
U-Net
encoder-decoder
Non-Small Cell Lung Cancer (NSCLC)
PURPOSE: Deep learning-based auto-segmentation algorithms can improve clinical workflow by defining accurate regions of interest while reducing manual labor. Over the past decade, convolutional neural networks (CNNs) have become prominent in medical image segmentation applications. However, CNNs have limitations in learning long-range spatial dependencies due to the locality of the convolutional layers. Transformers were introduced to address this challenge. In transformers with self-attention mechanism, even the first layer of information processing makes connections between distant image locations. Our paper presents a novel framework that bridges these two unique techniques, CNNs and transformers, to segment the gross tumor volume (GTV) accurately and efficiently in computed tomography (CT) images of non-small cell-lung cancer (NSCLC) patients. METHODS: Under this framework, input of multiple resolution images was used with multi-depth backbones to retain the benefits of high-resolution and low-resolution images in the deep learning architecture. Furthermore, a deformable transformer was utilized to learn the long-range dependency on the extracted features. To reduce computational complexity and to efficiently process multi-scale, multi-depth, high-resolution 3D images, this transformer pays attention to small key positions, which were identified by a self-attention mechanism. We evaluated the performance of the proposed framework on a NSCLC dataset which contains 563 training images and 113 test images. Our novel deep learning algorithm was benchmarked against five other similar deep learning models. RESULTS: The experimental results indicate that our proposed framework outperforms other CNN-based, transformer-based, and hybrid methods in terms of Dice score (0.92) and Hausdorff Distance (1.33). Therefore, our proposed model could potentially improve the efficiency of auto-segmentation of early-stage NSCLC during the clinical workflow. This type of framework may potentially facilitate online adaptive radiotherapy, where an efficient auto-segmentation workflow is required. CONCLUSIONS: Our deep learning framework, based on CNN and transformer, performs auto-segmentation efficiently and could potentially assist clinical radiotherapy workflow.
Molecular profiles of tumor contrast enhancement: A radiogenomic analysis in anaplastic gliomas
Liu, Xing
Li, Yiming
Sun, Zhiyan
Li, Shaowu
Wang, Kai
Fan, Xing
Liu, Yuqing
Wang, Lei
Wang, Yinyan
Jiang, Tao
Cancer medicine2018Journal Article, cited 0 times
Website
glioma
radiogenomics
gene set enrichment analysis (GSEA)
Molecular Signatures Database v5.1 (MSigDB)
radiomic features
Multiregional radiomics profiling from multiparametric MRI: Identifying an imaging predictor of IDH1 mutation status in glioblastoma
Li, Zhi‐Cheng
Bai, Hongmin
Sun, Qiuchang
Zhao, Yuanshen
Lv, Yanchun
Zhou, Jian
Liang, Chaofeng
Chen, Yinsheng
Liang, Dong
Zheng, Hairong
Cancer medicine2018Journal Article, cited 0 times
Website
TCGA-GBM
Radiogenomics
Glioblastoma multiforme (GBM)
Magnetic Resonance Imaging (MRI)
ITK
Random forest
Isocitrate dehydrogenase (IDH) mutation
PURPOSE: Isocitrate dehydrogenase 1 (IDH1) has been proven as a prognostic and predictive marker in glioblastoma (GBM) patients. The purpose was to preoperatively predict IDH mutation status in GBM using multiregional radiomics features from multiparametric magnetic resonance imaging (MRI). METHODS: In this retrospective multicenter study, 225 patients were included. A total of 1614 multiregional features were extracted from enhancement area, non-enhancement area, necrosis, edema, tumor core, and whole tumor in multiparametric MRI. Three multiregional radiomics models were built from tumor core, whole tumor, and all regions using an all-relevant feature selection and a random forest classification for predicting IDH1. Four single-region models and a model combining all-region features with clinical factors (age, sex, and Karnofsky performance status) were also built. All models were built from a training cohort (118 patients) and tested on an independent validation cohort (107 patients). RESULTS: Among the four single-region radiomics models, the edema model achieved the best accuracy of 96% and the best F1-score of 0.75 while the non-enhancement model achieved the best area under the receiver operating characteristic curve (AUC) of 0.88 in the validation cohort. The overall performance of the tumor-core model (accuracy 0.96, AUC 0.86 and F1-score 0.75) and the whole-tumor model (accuracy 0.96, AUC 0.88 and F1-score 0.75) was slightly better than the single-regional models. The 8-feature all-region radiomics model achieved an improved overall performance of an accuracy 96%, an AUC 0.90, and an F1-score 0.78. Among all models, the model combining all-region imaging features with age achieved the best performance of an accuracy 97%, an AUC 0.96, and an F1-score 0.84. CONCLUSIONS: The radiomics model built with multiregional features from multiparametric MRI has the potential to preoperatively detect the IDH1 mutation status in GBM patients. The multiregional model built with all-region features performed better than the single-region models, while combining age with all-region features achieved the best performance.
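The "all-relevant feature selection" plus random forest pipeline above can be approximated with a Boruta-style shadow-feature heuristic: permuted copies of the features set an importance floor, and only real features beating it are kept. A crude, illustrative stand-in for the authors' exact algorithm:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shadow_feature_select(X, y, n_trees=500, seed=0):
    """Keep features whose random forest importance exceeds the best
    importance among column-shuffled 'shadow' copies (Boruta-like)."""
    rng = np.random.default_rng(seed)
    shadows = rng.permuted(X, axis=0)                # break feature-label links
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    rf.fit(np.hstack([X, shadows]), y)
    imp = rf.feature_importances_
    real, shadow = imp[:X.shape[1]], imp[X.shape[1]:]
    return np.where(real > shadow.max())[0]          # indices of kept features

# The surviving feature subset would then feed a fresh random forest
# trained on the training cohort and tested on the validation cohort.
```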
Using computer‐extracted image phenotypes from tumors on breast magnetic resonance imaging to predict breast cancer pathologic stage
Biomechanical model for computing deformations for whole‐body image registration: A meshless approach
Li, Mao
Miller, Karol
Joldes, Grand Roman
Kikinis, Ron
Wittek, Adam
International Journal for Numerical Methods in Biomedical Engineering2016Journal Article, cited 13 times
Website
Algorithm Development
Fuzzy C-means clustering (FCM)
Segmentation
Computed Tomography (CT)
Machine Learning
Novel approaches for glioblastoma treatment: Focus on tumor heterogeneity, treatment resistance, and computational tools
Valdebenito, Silvana
D'Amico, Daniela
Eugenin, Eliseo
Cancer Reports2019Journal Article, cited 0 times
TCGA-GBM
Radiogenomics
Background: Glioblastoma (GBM) is a highly aggressive primary brain tumor. Currently, the suggested line of action is surgical resection followed by radiotherapy and treatment with the adjuvant temozolomide, a DNA alkylating agent. However, the ability of tumor cells to deeply infiltrate the surrounding tissue makes complete resection practically impossible; in consequence, the probability of tumor recurrence is high, and the prognosis is poor. GBM is highly heterogeneous and adapts to treatment in most individuals. Nevertheless, these mechanisms of adaptation are unknown. Recent findings: In this review, we discuss the recent discoveries in molecular and cellular heterogeneity, mechanisms of therapeutic resistance, and new technological approaches to identify new treatments for GBM. The combination of biology and computational resources allows the use of algorithms to apply artificial intelligence and machine learning approaches to identify potential therapeutic pathways and new drug candidates. Conclusion: These new approaches will generate a better understanding of GBM pathogenesis and will result in novel treatments to reduce or block the devastating consequences of brain cancers.
Transferable HMM probability matrices in multi‐orientation geometric medical volumes segmentation
AlZu'bi, Shadi
AlQatawneh, Sokyna
ElBes, Mohammad
Alsmirat, Mohammad
Concurrency and Computation: Practice and Experience2019Journal Article, cited 0 times
Website
Head-Neck Cetuximab
Hidden Markov Model
Segmentation
machine learning
Achieving acceptable error rates, high quality, and low time complexity remain major open problems in image segmentation. A variety of acceleration techniques have been applied and achieve real-time results, but they are still limited in 3D. The hidden Markov model (HMM) is one of the best statistical techniques and has played a significant role recently. The problem associated with HMMs is time complexity, which has been addressed using different accelerators. In this research, we propose a methodology for transferring HMM matrices from one image to another, skipping the training time for the rest of the 3D volume: one HMM is trained and generalized to the whole volume. The concepts behind multi-orientation geometric segmentation have been employed here to improve the quality of HMM segmentation. Axial, sagittal, and coronal orientations have been considered, individually and together, to achieve accurate segmentation results in less processing time and with superior detection accuracy.
Binary differential evolution with self learning and deep neural network for breast cancer classification
Pullaiah, Nagaraja Rao Pamula
Venkatasekhar, Dorai
Venkatramana, Padarthi
Sudhakar, Balaraj
2022Journal Article, cited 0 times
BREAST-DIAGNOSIS
Early classification of breast cancer helps to treat the patient effectively and increases the survival rate. Existing methods apply feature selection and deep learning to improve the performance of breast cancer classification. In this research, binary differential evolution with self learning (BDE-SL) combined with a deep neural network (DNN) is proposed to improve classification performance. The BDE-SL feature selection method selects the relevant features based on the measure of probability difference for each feature and non-dominated sorting. The DNN has the advantage of effectively analyzing the non-linear relationship among the selected features and the output. The BI-RADS MRI breast cancer dataset was used to test the performance of the proposed method. Adaptive histogram equalization and region growing were applied to the input images for enhancement. The dual-tree complex wavelet transform, gray-level co-occurrence matrix, and local directional ternary pattern were the feature extraction methods used for classification. The results show that BDE-SL with the DNN achieves an accuracy of 99.12%, while the existing convolutional neural network achieves 98.33%.
Improving lung cancer detection using faster region‐based convolutional neural network aided with fuzzy butterfly optimization algorithm
Sinthia, P.
Malathi, M.
K, Anitha
Suresh Anand, M.
Concurrency and Computation: Practice and Experience2022Journal Article, cited 0 times
Website
LIDC-IDRI
Anti-PD-1_Lung
Convolutional Neural Network (CNN)
Lung cancer is the deadliest type of cancer and is caused by genetic variations in lung tissues; other causes include alcohol, smoking, and exposure to hazardous gases. The diagnosis of lung cancer is an intricate task, and early detection can help patients receive the right treatment in advance. Computer-aided diagnosis helps to predict lung cancer earlier; nonetheless, existing approaches do not provide adequate accuracy, as feature overfitting and high dimensionality can prevent them from reaching maximum accuracy. Hence, we propose a novel faster region-based convolutional neural network (RCNN) combined with a fuzzy butterfly optimization algorithm (FBOA) to achieve better prediction accuracy and effectiveness. The proposed Faster RCNN provides better localization of lung cancer swiftly and effectively, and the FBOA approach performs two-stage classification. The fuzzy rules used in the FBOA can be utilized to assess the severity of the lung cancer and differentiate the benign and malignant stages effectively. The experimental analyses are performed in MATLAB simulation, with image preprocessing carried out using MATLAB tools to format the images as required. The Cancer Imaging Archive (TCIA) dataset is utilized to analyze the performance of the proposed method, which is compared with various state-of-the-art works. The performance is evaluated using different metrics such as precision, recall, F-measure, and accuracy, attaining 99%, 98%, 99%, and 97%, respectively. Thus, our proposed method outperforms all the other approaches.
Special issue “The advance of solid tumor research in China”: Prognosis prediction for stage II colorectal cancer by fusing computed tomography radiomics and deep‐learning features of primary lesions and peripheral lymph nodes
Li, Menglei
Gong, Jing
Bao, Yichao
Huang, Dan
Peng, Junjie
Tong, Tong
2022Journal Article, cited 0 times
StageII-Colorectal-CT
Currently, the prognosis assessment of stage II colorectal cancer (CRC) remains a difficult clinical problem; therefore, more accurate prognostic predictors must be developed. In our study, we developed a prognostic prediction model for stage II CRC by fusing radiomics and deep-learning (DL) features of primary lesions and peripheral lymph nodes (LNs) in computed tomography (CT) scans. First, two CT radiomics models were built using primary lesion and LN image features. Subsequently, an information fusion method was used to build a fusion radiomics model by combining the tumor and LN image features. Furthermore, a transfer learning method was applied to build a deep convolutional neural network (CNN) model. Finally, the prediction scores generated by the radiomics and CNN models were fused to improve the prognosis prediction performance. The disease-free survival (DFS) and overall survival (OS) prediction areas under the curves (AUCs) generated by the fusion model improved to 0.76 ± 0.08 and 0.91 ± 0.05, respectively. These were significantly higher than the AUCs generated by the models using the individual CT radiomics and deep image features. Applying the survival analysis method, the DFS and OS fusion models yielded concordance index (C-index) values of 0.73 and 0.9, respectively. Hence, the combined model exhibited good predictive efficacy; therefore, it could be used for the accurate assessment of the prognosis of stage II CRC patients. Moreover, it could be used to screen out high-risk patients with poor prognoses, and assist in the formulation of clinical treatment decisions in a timely manner to achieve precision medicine.
Improving the diagnosis of ductal carcinoma in situ with microinvasion without immunohistochemistry: An innovative method with H&E‐stained and multiphoton microscopy images
Han, Xiahui
Liu, Yulan
Zhang, Shichao
Li, Lianhuang
Zheng, Liqin
Qiu, Lida
Chen, Jianhua
Zhan, Zhenlin
Wang, Shu
Ma, Jianli
Kang, Deyong
Chen, Jianxin
2024Journal Article, cited 0 times
HE-vs-MPM
Immunohistochemistry
Breast
Ductal carcinoma in situ with microinvasion (DCISM) is a challenging subtype of breast cancer with controversial invasiveness and prognosis. Accurate diagnosis of DCISM from ductal carcinoma in situ (DCIS) is crucial for optimal treatment and improved clinical outcomes. However, there are often some suspicious small cancer nests in DCIS, and it is difficult to diagnose the presence of intact myoepithelium by conventional hematoxylin and eosin (H&E) stained images. Although a variety of biomarkers are available for immunohistochemical (IHC) staining of myoepithelial cells, no single biomarker is consistently sensitive to all tumor lesions. Here, we introduced a new diagnostic method that provides rapid and accurate diagnosis of DCISM using multiphoton microscopy (MPM). Suspicious foci in H&E-stained images were labeled as regions of interest (ROIs), and the nuclei within these ROIs were segmented using a deep learning model. MPM was used to capture images of the ROIs in H&E-stained sections. The intensity of two-photon excitation fluorescence (TPEF) in the myoepithelium was significantly different from that in tumor parenchyma and tumor stroma. Through the use of MPM, the myoepithelium and basement membrane can be easily observed via TPEF and second-harmonic generation (SHG), respectively. By fusing the nuclei in H&E-stained images with MPM images, DCISM can be differentiated from suspicious small cancer clusters in DCIS. The proposed method demonstrated good consistency with the cytokeratin 5/6 (CK5/6) myoepithelial staining method (kappa coefficient = 0.818).
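The agreement with CK5/6 staining reported above (kappa coefficient = 0.818) is Cohen's kappa. A minimal sketch of computing such a score with scikit-learn follows; the per-lesion calls are hypothetical.

```python
# Cohen's kappa between two sets of binary diagnostic calls (hypothetical data).
from sklearn.metrics import cohen_kappa_score

mpm_calls = [1, 0, 1, 1, 0, 1, 0, 0]  # MPM-based DCISM calls
ihc_calls = [1, 0, 1, 0, 0, 1, 0, 0]  # CK5/6 IHC reference calls
print(cohen_kappa_score(mpm_calls, ihc_calls))
```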
Automated detection of glioblastoma tumor in brain magnetic imaging using ANFIS classifier
Thirumurugan, P
Ramkumar, D
Batri, K
Sundhara Raja, D
International Journal of Imaging Systems and Technology2016Journal Article, cited 3 times
Website
Algorithm Development
BRAIN
Classification
This article proposes a novel and efficient methodology for the detection of Glioblastoma tumor in brain MRI images. The proposed method consists of the following stages: preprocessing, non-subsampled contourlet transform (NSCT) decomposition, feature extraction, and adaptive neuro-fuzzy inference system (ANFIS) classification. The Euclidean direction algorithm is used to remove impulse noise introduced during image acquisition. NSCT decomposes the denoised brain image into approximation bands and high-frequency bands. The mean, standard deviation, and energy of the extracted coefficients are computed and given as input to the classifier, which labels each brain MRI image as normal or Glioblastoma based on this feature set. The proposed system achieves 99.8% sensitivity, 99.7% specificity, and 99.8% accuracy with respect to the ground-truth images available in the dataset.
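The per-subband statistics named above (mean, standard deviation, and energy) are simple to compute once the decomposition is available. A minimal NumPy sketch follows; since NSCT is not part of standard Python libraries, a random array stands in for one subband's coefficients.

```python
import numpy as np

def subband_features(coeffs):
    """Mean, standard deviation, and energy of one subband's coefficients."""
    c = np.asarray(coeffs, dtype=float)
    return c.mean(), c.std(), np.sum(c ** 2)

band = np.random.randn(64, 64)  # stand-in for one NSCT subband
print(subband_features(band))
```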
Automated delineation of non‐small cell lung cancer: A step toward quantitative reasoning in medical decision science
Saad, Maliazurina
Lee, Ik Hyun
Choi, Tae‐Sun
International Journal of Imaging Systems and Technology2019Journal Article, cited 0 times
Website
NSCLC-Radiomics
Radiomics
Non Small Cell Lung Cancer (NSCLC)
Segmentation
U-Net
Convolutional Neural Network (CNN)
Algorithm Development
Quantitative reasoning in medical decision science relies on the delineation of pathological objects. For example, evidence‐based clinical decisions regarding lung diseases require the segmentation of nodules, tumors, or cancers. Non‐small cell lung cancer (NSCLC) tends to be large sized, irregularly shaped, and grows against surrounding structures imposing challenges in the segmentation, even for expert clinicians. An automated delineation tool based on spatial analysis was developed and studied on 25 sets of computed tomography scans of NSCLC. Manual and automated delineations were compared, and the proposed method exhibited robustness in terms of the tumor size (5.32–18.24 mm), shape (spherical or irregular), contouring (lobulated, spiculated, or cavitated), localization (solitary, pleural, mediastinal, endobronchial, or tagging), and laterality (left or right lobe) with accuracy between 80% and 99%. Small discrepancies observed between the manual and automated delineations may arise from the variability in the practitioners' definitions of region of interest or imaging artifacts that reduced the tissue resolution.
Optimizing deep belief network parameters using grasshopper algorithm for liver disease classification
Renukadevi, Thangavel
Karunakaran, Saminathan
International Journal of Imaging Systems and Technology2019Journal Article, cited 0 times
TCGA-LIHC
Deep Learning
Algorithm Development
Computer Assisted Detection (CAD)
Image processing plays a vital role in many areas, such as healthcare, military, scientific, and business applications, due to its wide variety of advantages and applications. Detecting liver disease in computed tomography (CT) images is one of the most difficult tasks in the medical field. Previous approaches classify liver disease using handcrafted features and conventional classifiers, but their results are not optimal. In this article, we propose a novel method utilizing a deep belief network (DBN) with the grasshopper optimization algorithm (GOA) for liver disease classification. Initially, the image quality is enhanced by preprocessing techniques, and then features such as texture, color, and shape are extracted. The extracted features are reduced using a dimensionality reduction method, principal component analysis (PCA). The DBN parameters are then optimized using GOA for recognizing liver disease. The experiments are performed on real-time and open-source CT image datasets comprising normal, cyst, hepatoma, cavernous hemangioma, fatty liver, metastasis, cirrhosis, and tumor samples. The proposed method yields 98% accuracy, 95.82% sensitivity, 97.52% specificity, 98.53% precision, and a 96.8% F-1 score in simulation when compared with other existing techniques.
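The PCA step mentioned above reduces the extracted texture, color, and shape features before classification. A minimal scikit-learn sketch follows; the feature-matrix size and the 95% explained-variance threshold are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 50)       # hypothetical: 100 images x 50 extracted features
pca = PCA(n_components=0.95)      # keep enough components for 95% of the variance
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)            # (100, k) with k chosen automatically
```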
Volumetric medical image compression using inter‐slice correlation switched prediction approach
Sharma, Urvashi
Sood, Meenakshi
Puthooran, Emjee
International Journal of Imaging Systems and Technology2020Journal Article, cited 0 times
LungCT-Diagnosis
RIDER Breast MRI
RIDER NEURO MRI
With the advancement in medical data acquisition and telemedicine systems, image compression has become an important tool for image handling, as the tremendous amount of data generated in the medical field needs to be stored and transmitted effectively. Volumetric MRI and CT images comprise a set of image slices that are correlated with each other. The prediction of the pixels in a slice depends not only upon the spatial information of the slice, but also on the inter-slice information, to achieve compression. This article proposes an inter-slice correlation switched predictor (ICSP) with a block adaptive arithmetic encoding (BAAE) technique for 3D medical image data. The proposed ICSP exploits both inter-slice and intra-slice redundancies from the volumetric images efficiently. The novelty of the proposed technique lies in selecting the correlation coefficient threshold (Tγ) for switching of the ICSP. A resolution independent gradient edge detector (RIGED) at the optimal prediction threshold value is proposed for intra-slice prediction. Use of RIGED, which is modality and resolution independent, brings novelty and improved performance for 3D prediction of volumetric images. BAAE is employed for encoding of the prediction error image, resulting in higher compression efficiency. The proposed technique is also extended to higher bit depth volumetric medical images (16-bit depth), presenting significant compression gain for 3D images. The performance of the proposed technique is compared with state-of-the-art techniques in terms of bits per pixel (BPP) for 8-bit depth and was found to be 31.21%, 27.55%, 21.89%, and 2.39% better than JPEG-2000, CALIC, JPEG-LS, M-CALIC, and 3D-CALIC respectively. The proposed technique is 11.86%, 8.56%, 7.97%, 6.80%, and 4.86% better than M-CALIC, 3D-CALIC, JPEG-2000, JPEG-LS, and CALIC respectively for 16-bit depth image datasets. The average compression ratio for the 8-bit and 16-bit image datasets is 3.70 and 3.11 respectively with the proposed technique.
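The switching idea, using inter-slice prediction only when adjacent slices are sufficiently correlated, can be illustrated as follows. This is a loose sketch under stated assumptions: Pearson correlation over whole slices and a fixed threshold stand in for the paper's exact ICSP/RIGED formulation.

```python
import numpy as np

def use_inter_slice(prev_slice, cur_slice, threshold=0.9):
    """Return True when the correlation between adjacent slices exceeds
    the switching threshold, so inter-slice prediction is preferred."""
    r = np.corrcoef(prev_slice.ravel(), cur_slice.ravel())[0, 1]
    return r >= threshold

a = np.random.rand(8, 8)
b = a + 0.01 * np.random.rand(8, 8)   # nearly identical neighboring slice
print(use_inter_slice(a, b))          # True; fall back to intra-slice otherwise
```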
A serialized classification method for pulmonary nodules based on lightweight cascaded convolutional neural network‐long short‐term memory
Ni, Zihao
Peng, Yanjun
International Journal of Imaging Systems and Technology2020Journal Article, cited 0 times
LIDC-IDRI
Computer Assisted Diagnosis (CAD) is an effective method to detect lung cancer from computed tomography (CT) scans. The development of artificial neural networks has made CAD more accurate in detecting pathological changes. Due to the complexity of the lung environment, existing neural network training still requires large datasets, excessive time, and memory space. To meet this challenge, we analyze 3D volumes as serialized 2D slices and present a new lightweight convolutional neural network (CNN)-long short-term memory (LSTM) structure for lung nodule classification. Our network contains two main components: (a) optimized lightweight CNN layers with a tiny parameter space for extracting visual features of serialized 2D images, and (b) an LSTM network for learning relevant information among the 2D images. In all experiments, we compared the training results of several models, and our model achieved an accuracy of 91.78% for lung nodule classification with an AUC of 93%. We used fewer samples and less memory space to train the model, and we achieved faster convergence. Finally, we analyzed and discussed the feasibility of migrating this framework to mobile devices. The framework can also be applied to cope with limited training data and to support the development of mobile health devices in the future.
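The serialized-slice design, a per-slice CNN feeding an LSTM, can be sketched in PyTorch. The layer sizes, slice count, and classification head below are structural assumptions for illustration, not the authors' actual network.

```python
import torch
import torch.nn as nn

class SliceCNNLSTM(nn.Module):
    """Lightweight per-slice CNN followed by an LSTM over the slice sequence."""
    def __init__(self, hidden=64, classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, classes)

    def forward(self, x):                              # x: (batch, slices, 1, H, W)
        b, s = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)   # (batch*slices, 16)
        out, _ = self.lstm(feats.view(b, s, -1))       # sequence over slices
        return self.fc(out[:, -1])                     # classify from last state

model = SliceCNNLSTM()
print(model(torch.randn(2, 10, 1, 64, 64)).shape)      # torch.Size([2, 2])
```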
Deeply supervised U‐Net for mass segmentation in digital mammograms
Ravitha Rajalakshmi, N.
Vidhyapriya, R.
Elango, N.
Ramesh, Nikhil
International Journal of Imaging Systems and Technology2020Journal Article, cited 0 times
Website
CBIS-DDSM
BREAST
Computer Aided Detection (CADe)
Mass detection is a critical process in the examination of mammograms. The shape and texture of the mass are key parameters used in the diagnosis of breast cancer. To recover the shape of the mass, semantic segmentation is found to be more useful than mere object detection or localization. The main challenges involved in mass segmentation include: (a) low signal-to-noise ratio, (b) indiscernible mass boundaries, and (c) more false positives. These problems arise due to the significant overlap in the intensities of both the normal parenchymal region and the mass region. To address these challenges, a deeply supervised U-Net model (DS U-Net) coupled with dense conditional random fields (CRFs) is proposed. Here, the input images are preprocessed using CLAHE and a modified encoder-decoder-based deep learning model is used for segmentation. In general, the encoder captures the textural information of various regions in an input image, whereas the decoder recovers the spatial location of the desired region of interest. Encoder-decoder-based models lack the ability to recover non-conspicuous and spiculated mass boundaries. In the proposed work, deep supervision is integrated with a popular encoder-decoder model (U-Net) to improve the attention of the network toward the boundary of the suspicious regions. The final segmentation map is also created as a linear combination of the intermediate feature maps and the output feature map. The dense CRF is then used to fine-tune the segmentation map for the recovery of definite edges. The DS U-Net with dense CRF is evaluated on two publicly available benchmark datasets, CBIS-DDSM and INBREAST. It provides a Dice score of 82.9% for CBIS-DDSM and 79% for INBREAST.
Glioma grade detection using grasshopper optimization algorithm‐optimized machine learning methods: The Cancer Imaging Archive study
Hedyehzadeh, Mohammadreza
Maghooli, Keivan
MomenGharibvand, Mohammad
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
TCGA-GBM
TCGA-LGG
Detection of a brain tumor's grade is a very important task in treatment plan design, and it has traditionally been done using invasive methods such as pathological examination. This examination requires a resection procedure and can result in pain, hemorrhage, and infection. The aim of this study is to provide an automated, non-invasive method for estimating brain tumor grade from Magnetic Resonance Images (MRI). After preprocessing, the tumor region was extracted from the processed images using the Fuzzy C-Means (FCM) segmentation method. In feature extraction, texture, Local Binary Pattern (LBP), and fractal-based features were extracted using Matlab software. Then, using the Grasshopper Optimization Algorithm (GOA), the parameters of three different classification methods, Random Forest (RF), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM), were optimized. Finally, the performance of the three classifiers before and after optimization was compared. The results showed that the random forest, with an accuracy of 99.09%, achieved the best performance among the compared classification methods.
Improved pulmonary lung nodules risk stratification in computed tomography images by fusing shape and texture features in a machine-learning paradigm
Sahu, Satya Prakash
Londhe, Narendra D.
Verma, Shrish
Singh, Bikesh K.
Banchhor, Sumit Kumar
International Journal of Imaging Systems and Technology2020Journal Article, cited 0 times
Website
LIDC-IDRI
Lung cancer
CAD
radiomic features
Lung cancer is one of the deadliest cancers in both men and women. Accurate and early diagnosis of pulmonary lung nodules is critical. This study presents an accurate computer-aided diagnosis (CADx) system for risk stratification of pulmonary nodules in computed tomography (CT) lung images by fusing shape- and texture-based features in a machine-learning (ML) paradigm. A database with 114 (28 high-risk) patients acquired from the Lung Image Database Consortium (LIDC) is used in this study. After nodule segmentation using K-means clustering, features based on shape and texture attributes are extracted. Seven different filter and wrapper-based feature selection techniques are used for dominant feature selection. Lastly, the classification of nodules is performed by a support vector machine using six different kernel functions. The classification results are evaluated using 10-fold cross-validation and hold-out data division protocols. The performance of the proposed system is evaluated using accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). Using 30 dominant features from the pool of shape- and texture-based features, the proposed system achieves the highest classification accuracy and AUC of 89% and 0.92, respectively. The proposed ML-based system showed an improvement in risk stratification accuracy by fusing shape- and texture-based features.
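The texture attributes referenced above are commonly derived from a gray-level co-occurrence matrix. A minimal scikit-image sketch follows (function names per skimage >= 0.19); the random patch stands in for a segmented nodule ROI.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in nodule patch
glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop)[0, 0])
```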
Novel computer‐aided lung cancer detection based on convolutional neural network‐based and feature‐based classifiers using metaheuristics
Guo, Z. Q.
Xu, L. A.
Si, Y. J.
Razmjooy, N.
International Journal of Imaging Systems and Technology2021Journal Article, cited 1 times
Website
LungCT-Diagnosis
Computer Aided Diagnosis (CADx)
optimization
Classification
Algorithm Development
This study proposes a lung cancer diagnosis system based on computed tomography (CT) scan images for the detection of the disease. The proposed method uses a sequential approach to achieve this goal. Consequently, two complementary classifiers, a convolutional neural network (CNN) and a feature-based method, are used. In the first step, the CNN classifier is optimized using a newly designed optimization method called the improved Harris hawk optimizer. This method is applied to the dataset, and the classification commences. If the disease cannot be detected via this method, the results are conveyed to the second classifier, that is, the feature-based method. This classifier, built on Haralick and LBP features, is subsequently applied to the dataset received from the CNN classifier. Finally, if the feature-based method also does not detect cancer, the case is classified as healthy; otherwise, it is classified as cancerous.
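The LBP features named in the second-stage classifier can be sketched with scikit-image. The patch, radius, and histogram binning below are assumptions for illustration.

```python
import numpy as np
from skimage.feature import local_binary_pattern

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in CT patch
lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
# 'uniform' with P=8 yields codes 0..9, so a 10-bin normalized histogram:
hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
print(hist)
```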
Detection of lung tumor using dual tree complex wavelet transform and co‐active adaptive neuro fuzzy inference system classification approach
Kailasam, Manoj Senthil
Thiagarajan, MeeraDevi
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
Website
LIDC-IDRI
Wavelet
Computed Tomography (CT)
Automatic segmentation
LUNG
The automatic detection and localization of tumor regions in lung images is important for providing timely medical treatment to patients in order to save their lives. In this article, a machine learning-based lung tumor detection, classification, and segmentation algorithm is proposed. The tumor classification phase first smooths the source lung computed tomography image using an adaptive median filter, and then a dual-tree complex wavelet transform (DT-CWT) is applied to the smoothed lung image to decompose the entire image into a number of sub-bands. Along with the decomposed sub-bands, DWT, pattern, and co-occurrence features are computed and classified using a co-active adaptive neuro-fuzzy inference system (CANFIS). The tumor segmentation phase uses morphological functions on the classified abnormal lung image to locate the tumor regions. Multiple evaluation parameters are used to evaluate the proposed method, which is compared with other state-of-the-art methods on the same lung images from an open-access dataset.
Accelerated brain tumor dynamic contrast-enhanced MRI using Adaptive Pharmaco-Kinetic Model Constrained method
Liu, Fan
Li, Dongxiao
Jin, Xinyu
Qiu, Wenyuan
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
Website
RIDER Neuro MRI
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI)
In brain tumor dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), spatiotemporally resolved, high-quality reconstruction is required for quantitative analysis of physiological characteristics of brain tissue. By exploiting sparsity priors, compressed sensing methods can achieve high spatiotemporal DCE-MRI image reconstruction from undersampled k-space data. Recently, as prior information about contrast agent (CA) concentration dynamics, pharmacokinetic (PK) models have been explored for undersampled DCE-MRI reconstruction. This paper presents a novel dictionary learning-based reconstruction method with Adaptive Pharmaco-Kinetic Model Constraints (APKMC). In APKMC, the prior knowledge about CA dynamics is incorporated into a novel dictionary, which consists of PK model-based atoms and adaptive atoms. The PK atoms are constructed based on the Patlak model and the K-SVD dimension reduction algorithm, and the adaptive ones are used to resolve PK model inconsistencies. To solve APKMC, an optimization algorithm based on variable splitting and alternating iterative optimization is presented. The proposed method has been validated on three brain tumor DCE-MRI data sets by comparison with two state-of-the-art methods. As demonstrated by quantitative and qualitative analysis of the results, APKMC achieved substantially better quality in the reconstruction of brain DCE-MRI images, as well as in the reconstruction of PK model parameter maps.
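The Patlak model behind the PK atoms relates tissue concentration to the plasma input as Ct(t) = Ktrans * integral(Cp) + vp * Cp(t), which is linear in (Ktrans, vp) and can be fit by least squares. A minimal NumPy sketch follows; the synthetic curves and parameter values are illustrative only.

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral, starting at zero."""
    return np.concatenate(([0.0], np.cumsum(0.5 * np.diff(t) * (y[1:] + y[:-1]))))

def patlak_fit(ct, cp, t):
    """Least-squares fit of Ct(t) = Ktrans * integral(Cp) + vp * Cp(t)."""
    A = np.column_stack([cumtrapz(cp, t), cp])
    (ktrans, vp), *_ = np.linalg.lstsq(A, ct, rcond=None)
    return ktrans, vp

t = np.linspace(0, 5, 60)               # minutes (synthetic)
cp = 5 * t * np.exp(-t)                 # synthetic plasma input function
ct = 0.2 * cumtrapz(cp, t) + 0.05 * cp  # built with Ktrans=0.2, vp=0.05
print(patlak_fit(ct, cp, t))            # recovers ~(0.2, 0.05)
```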
COLI‐Net: Deep learning‐assisted fully automated COVID‐19 lung and infection pneumonia lesion detection and segmentation from chest computed tomography images
Shiri, Isaac
Arabi, Hossein
Salimi, Yazdan
Sanaat, Amirhossein
Akhavanallaf, Azadeh
Hajianfar, Ghasem
Askari, Dariush
Moradi, Shakiba
Mansouri, Zahra
Pakbin, Masoumeh
Sandoughdaran, Saleh
Abdollahi, Hamid
Radmard, Amir Reza
Rezaei‐Kalantari, Kiara
Ghelich Oghli, Mostafa
Zaidi, Habib
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
Website
NSCLC-Radiomics
PleThora
Computed Tomography (CT)
Deep residual neural network
TensorFlow
COVID-19
2D segmentation
3D segmentation
LUNG
Radiomics
Imaging features
We present a deep learning (DL)-based automated whole lung and COVID-19 pneumonia infectious lesions (COLI-Net) detection and segmentation from chest computed tomography (CT) images. This multicenter/multiscanner study involved 2368 (347,259 2D slices) and 190 (17,341 2D slices) volumetric CT exams along with their corresponding manual segmentation of lungs and lesions, respectively. All images were cropped, resized, and the intensity values clipped and normalized. A residual network with non-square Dice loss function built upon TensorFlow was employed. The accuracy of lung and COVID-19 lesions segmentation was evaluated on an external reverse transcription-polymerase chain reaction positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98–0.99) and 0.91 ± 0.038 (95% CI, 0.90–0.91) for lung and lesions segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, −0.12 to 0.18) and −0.18 ± 3.4% (95% CI, −0.8 to 0.44) for the lung and lesions, respectively. The relative volume differences for lung and lesions were 0.38 ± 1.2% (95% CI, 0.16–0.59) and 0.81 ± 6.6% (95% CI, −0.39 to 2), respectively. Most radiomic features had a mean relative error less than 5%, with the highest mean relative error achieved for the lung for the range first-order feature (−6.95%) and the least axis length shape feature (8.68%) for lesions. We developed an automated DL-guided three-dimensional whole lung and infected regions segmentation in COVID-19 patients to provide a fast, consistent, robust, and human-error-immune framework for lung and pneumonia lesion detection and quantification.
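The Dice coefficients reported above measure overlap between predicted and manual binary masks. A minimal NumPy sketch follows; the toy masks are illustrative.

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
print(dice(a, b))   # 2*9 / (16+16) = 0.5625
```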
Pathological categorization of lung carcinoma from multimodality images using convolutional neural networks
Jacob, Chinnu
Menon, Gopakumar Chandrasekhara
International Journal of Imaging Systems and Technology2021Journal Article, cited 0 times
Lung-PET-CT-Dx
Accurate diagnosis and treatment of lung carcinoma depend on its pathological type and staging. Normally, pathological analysis is performed either by needle biopsy or surgery; therefore, a noninvasive method to detect pathological types would be a good alternative. Hence, this work aims at categorizing different types of lung cancer from multimodality images. The proposed approach involves two stages. Initially, a Blind/Referenceless Image Spatial Quality Evaluator-based approach is adopted to extract the slices having lung abnormalities from the dataset. The slices are then transferred to a novel shallow convolutional neural network model to detect adenocarcinoma, squamous cell carcinoma, and small cell carcinoma from multimodality images. The classifier efficacy is then investigated by comparing precision, recall, area under curve, and accuracy with pretrained models and existing methods. The results show that the suggested system outperformed with a testing accuracy of 95% in positron emission tomography/computed tomography (PET/CT), 93% in CT images of the Lung-PET-CT-Dx dataset, and 98% in the Lung3 dataset. Furthermore, a kappa score of 0.92 in PET/CT of Lung-PET-CT-Dx and 0.98 in CT of Lung3 exhibited the effectiveness of the presented system in the field of lung cancer classification.
Lung cancer classification using exponential mean saturation linear unit activation function in various generative adversarial network models
Thirumagal, Egambaram
Saruladha, Krishnamurthy
International Journal of Imaging Systems and Technology2022Journal Article, cited 0 times
Website
SPIE-AAPM Lung CT Challenge
Generative Adversarial Network (GAN)
Classification
Algorithm Development
Nowadays, the mortality rate due to lung cancer is increasing rapidly worldwide, as the disease can often be classified only at later stages. Early classification of lung cancer helps patients receive treatment and decreases the death rate. Limited datasets and the diversity of data samples are the bottlenecks for early classification. In this paper, robust deep learning generative adversarial network (GAN) models are employed to enhance the dataset and to increase classification accuracy. The activation function plays an important feature-learning role in neural networks. Since existing activation functions suffer from various drawbacks such as vanishing gradients, dead neurons, and output offset, this paper proposes a novel activation function, the exponential mean saturation linear unit (EMSLU), which aims to speed up training, reduce network running time, and improve classification accuracy. The experiments were conducted using the vanilla GAN, Wasserstein generative adversarial network, Wasserstein generative adversarial network with gradient penalty, conditional generative adversarial network, and deep convolutional generative adversarial network. Each GAN is tested with the rectified linear unit, exponential linear unit, and proposed EMSLU activation functions. The results show that all the GANs with EMSLU yield improved precision, recall, F1-score, and accuracy.
A multilevel self‐attention based segmentation and classification technique using Directional Hexagonal Mixed Pattern algorithm for lung nodule detection in thoracic CT image
Sahaya Jeniba, J.
Milton, A.
International Journal of Imaging Systems and Technology2022Journal Article, cited 0 times
Website
LIDC-IDRI
LUNG
Classification
Pulmonary nodules are abnormal growths of tissue, originating in one or both lungs, that appear as small, round masses of soft tissue in the lung area. Often, pulmonary nodules are indications of lung tumors, but they may be benign. When identified early and treated in time, the patient's life expectancy increases. The anatomy of the lung is highly interconnected, which makes it difficult to diagnose pulmonary nodules with diverse clinical imaging practices. A network model is presented in this paper for accurate classification of pulmonary nodules from computed tomography scan images. The lung images are subjected to semantic segmentation using Attention U-Net to isolate the pulmonary nodules. The proposed Directional Hexagonal Mixed Pattern is applied to generate a new texture pattern. Then, the nodules are classified by combining the proposed multilevel network model with the self-attention network. This paper also demonstrates an experimental arrangement called tenfold cross-validation without a segmentation mask, in which nodules marked as less than 3 mm by radiologists are discarded; this obtained an improved result. The experimental results show that, with and without segmentation masks, the proposed classifier scores accuracies of 90.48% and 91.83%, respectively. In addition, it efficiently produced an area under the curve of 98.08%.
Detection of liver abnormalities—A new paradigm in medical image processing and classification techniques
R, Karthikamani
Rajaguru, Harikumar
International Journal of Imaging Systems and Technology2022Journal Article, cited 0 times
B-mode-and-CEUS-Liver
The liver is one of the body's most essential organs, and normal liver function underpins all human activity. Any malfunction of the liver may lead to fatal disease; therefore, early detection of liver abnormalities is essential. Modern medical imaging techniques combined with engineering procedures are reducing the human suffering caused by liver disease. This study uses multiple classifiers to detect liver cirrhosis in ultrasound images. The ultrasound images were obtained from The Cancer Imaging Archive database. A gray-level co-occurrence matrix (GLCM) and statistical approaches are used to extract features from normal and liver-cirrhosis images. The extracted GLCM features are normalized and classified using nonlinear regression, linear regression, logistic regression, Bayesian Linear Discriminant Classifier (BLDC), Gaussian Mixture Model (GMM), Firefly, Cuckoo search, Particle Swarm Optimization (PSO), Elephant search, Dragonfly, Firefly GMM, Cuckoo search GMM, PSO GMM, Elephant search GMM, and Dragonfly GMM classifiers. Benchmark metrics, such as sensitivity, specificity, accuracy, precision, negative predictive value, false-negative rate, balanced accuracy, F1 score, Matthews correlation coefficient, F measure, error rate, Jaccard metric, and classifier success index, are assessed to identify the best-performing classifier. The GMM classifier outperformed the other classifiers for statistical features, achieving the highest accuracy (98.39%) and lowest error rate (1.61%). Moreover, the Dragonfly GMM classifier achieved 90.69% for the GLCM features used to classify liver cirrhosis.
SABOS-Net: Self-supervised attention based network for automatic organ segmentation of head and neck CT images
Francis, S.
Pooloth, G.
Singam, S. B. S.
Puzhakkal, N.
Narayanan, P. P.
Balakrishnan, J. P.
International Journal of Imaging Systems and Technology2022Journal Article, cited 0 times
OPC-Radiomics
auto-contouring
Deep Learning
head and neck ct
organs at risk(oar)
radiation therapy
residual u-net
self supervision
auto-segmentation
framework
Algorithm Development
Atlas
Radiotherapy
The segmentation of Organs At Risk (OAR) in Computed Tomography (CT) images is an essential part of the planning phase of radiation treatment, needed to avoid the adverse effects of cancer radiotherapy. Accurate segmentation is a tedious task in the head and neck region due to the large number of small and sensitive organs and the low contrast of CT images. Deep learning-based automatic contouring algorithms can ease this task even when the organs have irregular shapes and size variations. This paper proposes a fully automatic deep learning-based self-supervised 3D Residual UNet architecture with CBAM (Convolutional Block Attention Module) for organ segmentation in head and neck CT images. The Model Genesis structure and image context restoration techniques are used for self-supervision, which can help the network learn image features from unlabeled data, hence addressing the annotated medical data scarcity problem in deep networks. A new loss function is applied for training by integrating Focal loss, Tversky loss, and Cross-entropy loss. The proposed model outperforms the state-of-the-art methods in terms of Dice similarity coefficient in segmenting the organs. Our self-supervised model achieved a 4% increase in the Dice score of the chiasm, a small organ that is present in only a few CT slices. The proposed model exhibited better accuracy for 5 out of 7 OARs than recent state-of-the-art models, and it can simultaneously segment all seven organs in an average time of 0.02 s. The source code of this work is made available at .
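The integrated loss described above combines Focal, Tversky, and cross-entropy terms. A minimal binary PyTorch sketch follows; the equal weights and the alpha/beta/gamma values are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn.functional as F

def tversky_loss(probs, target, alpha=0.5, beta=0.5, eps=1e-6):
    """Tversky loss: weighted trade-off between false positives and negatives."""
    tp = (probs * target).sum()
    fp = (probs * (1 - target)).sum()
    fn = ((1 - probs) * target).sum()
    return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def focal_loss(logits, target, gamma=2.0):
    """Focal loss: down-weights easy examples via the (1 - pt)^gamma factor."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    pt = torch.exp(-bce)                      # probability of the true class
    return ((1 - pt) ** gamma * bce).mean()

def combined_loss(logits, target, w=(1.0, 1.0, 1.0)):
    probs = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, target)
    return (w[0] * focal_loss(logits, target)
            + w[1] * tversky_loss(probs, target) + w[2] * ce)

logits = torch.randn(2, 1, 16, 16)
target = (torch.rand(2, 1, 16, 16) > 0.5).float()
print(combined_loss(logits, target).item())
```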
FFCAEs : An efficient feature fusion framework using cascaded autoencoders for the identification of gliomas
Gudigar, Anjan
Raghavendra, U.
Rao, Tejaswi N.
Samanth, Jyothi
Rajinikanth, Venkatesan
Satapathy, Suresh Chandra
Ciaccio, Edward J.
Wai Yee, Chan
Acharya, U. Rajendra
International Journal of Imaging Systems and Technology2022Journal Article, cited 0 times
Website
TCGA-LGG
TCGA-GBM
BRAIN
Computer Aided Diagnosis (CADx)
Computer Aided Detection (CADe)
Intracranial tumors arise from constituents of the brain and its meninges. Glioblastoma (GBM) is the most common adult primary intracranial neoplasm and is categorized as high-grade astrocytoma according to the World Health Organization (WHO). The survival rate 5 and 10 years after diagnosis is under 10%, contributing to its grave prognosis. Early detection of GBM enables early intervention, prognostication, and treatment monitoring. Computer-aided diagnosis (CAD) is a computerized process that helps to differentiate between GBM and low-grade gliomas (LGG) through analysis of magnetic resonance (MR) images of the brain. This study proposes a framework consisting of a feature fusion algorithm with cascaded autoencoders (CAEs), referred to as FFCAEs. Here we utilized two CAEs and extracted the relevant features from multiple CAEs. Inspired by existing work on fusion algorithms, the obtained features are then fused using a novel fusion algorithm. Finally, the resultant fused features are classified with a Softmax classifier, arriving at an average classification accuracy of 96.7%, which is 2.45% more than the previously best-performing model. The method is shown to be efficacious; thus, it can be useful as a utility program for doctors.
Histopathological carcinoma classification using parallel, cross‐concatenated and grouped convolutions deep neural network
Kadirappa, Ravindranath
Subbian, Deivalakshmi
Ramasamy, Pandeeswari
Ko, Seok‐Bum
International Journal of Imaging Systems and Technology2023Journal Article, cited 0 times
TCGA-LIHC
Cancer is especially alarming in modern times because it is often identified only at later stages. Among cancers, lung, liver, and colon cancers are the leading causes of untimely death. Manual cancer identification from histopathological images is time-consuming and labour-intensive. Therefore, computer-aided decision support systems are desired. A deep learning model is proposed in this paper to accurately identify cancer. Convolutional neural networks have shown great ability to identify significant patterns for cancer classification. The proposed Parallel, Cross-Concatenated and Grouped Convolutions Deep Neural Network (PC²GCDN²) has been developed to obtain accurate patterns for classification. To prove the robustness of the model, it is evaluated on the KMC and TCGA-LIHC liver datasets and the LC25000 dataset for lung and colon cancer classification. The proposed PC²GCDN² model outperforms state-of-the-art methods, providing 5.5% improved accuracy compared to the LiverNet proposed by Aatresh et al. on the KMC dataset. On the LC25000 dataset, a 2% improvement is observed compared to existing models. Performance evaluation metrics such as sensitivity, specificity, recall, F1-score, and intersection-over-union are used to evaluate the performance. To the best of our knowledge, PC²GCDN² can be considered a gold standard for multiple histopathology image classification tasks. It classifies the KMC and TCGA-LIHC liver datasets with 96.4% and 98.6% accuracy, respectively, the best results obtained so far. Performance on the LC25000 dataset has been superior, with 99.5% and 100% classification accuracy on the lung and colon subsets, while utilizing fewer than 0.5 million parameters.
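Grouped convolutions, one of the building blocks named in the title, split input channels into independent groups to cut parameters. A minimal PyTorch illustration (not the paper's architecture) follows.

```python
import torch
import torch.nn as nn

# 32 input channels split into 4 groups of 8; each group is convolved
# independently, reducing weights roughly 4x versus a standard convolution.
grouped = nn.Conv2d(32, 64, kernel_size=3, padding=1, groups=4)
standard = nn.Conv2d(32, 64, kernel_size=3, padding=1, groups=1)

x = torch.randn(1, 32, 56, 56)
print(grouped(x).shape)                               # torch.Size([1, 64, 56, 56])
print(sum(p.numel() for p in grouped.parameters()),   # ~4x fewer weights
      sum(p.numel() for p in standard.parameters()))
```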
A transformer-based deep neural network for detection and classification of lung cancer via PET/CT images
Barbouchi, Khalil
Hamdi, Dhekra El
Elouedi, Ines
Aïcha, Takwa Ben
Echi, Afef Kacem
Slim, Ihsen
International Journal of Imaging Systems and Technology2023Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
Algorithm Development
LUNG
Deep Learning
Radiomics
Classification
Lung cancer is the leading cause of cancer death for men and women worldwide and the second most frequent cancer; early detection of the disease therefore increases the cure rate. This paper presents a new approach that evaluates the ability of positron emission tomography/computed tomography (PET/CT) images to classify and detect lung cancer using deep learning techniques. Our approach aims to fully automate the anatomical localization of lung cancer from PET/CT images. It also seeks to classify the tumor, which is essential as it makes it possible to determine the disease's speed of progression and the best treatments to adopt. In this work, we have built an approach based on transformers by implementing the DETR model as a tool to detect the tumor and assist physicians in staging patients with lung cancer. The TNM staging system and histologic subtype classification were both taken as standards for classification. Experimental results demonstrate that our approach achieves sound results on tumor localization, T staging, and histology classification. Our proposed approach detects tumors with an intersection over union (IOU) of 0.8 when tested on the Lung-PET-CT-Dx dataset. It also yielded better accuracy than state-of-the-art T-staging and histologic classification methods, classifying T-stage and histologic subtypes with accuracies of 0.97 and 0.94, respectively.
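The intersection over union (IOU) of 0.8 quoted above is the standard detection overlap measure. A minimal sketch for axis-aligned boxes follows; the coordinates are illustrative.

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

print(box_iou((0, 0, 10, 10), (2, 2, 12, 12)))  # ~0.47
```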
Brain tumor image pixel segmentation and detection using an aggregation of GAN models with vision transformer
Datta, Priyanka
Rohilla, Rajesh
International Journal of Imaging Systems and Technology2023Journal Article, cited 0 times
Website
BraTS 2020
Magnetic Resonance Imaging (MRI)
Imaging features
Image Enhancement/methods
Classification
Algorithm Development
Generative Adversarial Network (GAN)
A number of applications in the field of medical analysis require the difficult and crucial tasks of brain tumor detection and segmentation from magnetic resonance imaging (MRI). Given that each type of brain imaging provides distinctive information about each tumor component, in order to create a flexible and successful brain tumor segmentation system, we first suggest a normalization preprocessing method along with pixel segmentation. Creating synthetic images with generative adversarial networks (GANs) is advantageous in many fields; however, combining different GANs may capture distributed features at the cost of a very complex model, while a standalone GAN may retrieve only localized features in the latent representation of an image. To achieve global and local feature extraction in a single model, we use a vision transformer (ViT) along with a standalone GAN, which further improves the similarity of the generated images and can increase the performance of the model for tumor detection. By effectively overcoming the constraints of data scarcity, high computational time, and low discrimination capability, our suggested model achieves better accuracy and lower computational time, and also gives an understanding of the information variance in various representations of the original images. The proposed model was evaluated on the BraTS 2020 dataset and the Masoud2021 dataset, that is, a combination of the three datasets SARTAJ, Figshare, and BR35H. The obtained results demonstrate that the suggested model is capable of producing fine-quality images with accuracy and sensitivity scores of 0.9765 and 0.977 on the BraTS 2020 dataset, and 0.9899 and 0.9683 on the Masoud2021 dataset.
An intelligent system of pelvic lymph node detection
Wang, Han
Huang, Hao
Wang, Jingling
Wei, Mingtian
Yi, Zhang
Wang, Ziqiang
Zhang, Haixian
2021Journal Article, cited 0 times
CT Lymph Nodes
Computed tomography (CT) scanning is a fast and painless procedure that can capture clear imaging information beneath the abdomen and is widely used to help diagnose and monitor disease progress. The pelvic lymph node is a key indicator of colorectal cancer metastasis. In the traditional process, an experienced radiologist must read all the CT scanning images slice by slice to track the lymph nodes for future diagnosis. However, this process is time‐consuming, exhausting, and subjective due to the complex pelvic structure, numerous blood vessels, and small lymph nodes. Therefore, automated methods are desirable to make this process easier. Currently, the available open‐source CTLNDataset only contains large lymph nodes. Consequently, a new data set called PLNDataset, which is dedicated to lymph nodes within the pelvis, is constructed to solve this issue. A two‐level annotation calibration method is proposed to guarantee the quality and correctness of pelvic lymph node annotation. Moreover, a novel system composed of a keyframe localization network and a lymph node detection network is proposed to detect pelvic lymph nodes in CT scanning images. The proposed method makes full use of two kinds of prior knowledge: spatial prior knowledge for keyframe localization and anchor prior knowledge for lymph node detection. A series of experiments are carried out to evaluate the proposed method, including ablation experiments, comparing other state‐of‐the‐art methods, and visualization of results. The experimental results demonstrate that our proposed method outperforms other methods on PLNDataset and CTLNDataset. This system is expected to be applied in future clinical practice.
Optimizing interstitial photodynamic therapy with custom cylindrical diffusers
Yassine, Abdul‐Amir
Lilge, Lothar
Betz, Vaughn
Journal of biophotonics2018Journal Article, cited 0 times
Website
Brain
Model
Algorithm Development
Multiparametric MRI of prostate cancer: An update on state‐of‐the‐art techniques and their performance in detecting and localizing prostate cancer
Hegde, John V
Mulkern, Robert V
Panych, Lawrence P
Fennessy, Fiona M
Fedorov, Andriy
Maier, Stephan E
Tempany, Clare
Journal of Magnetic Resonance Imaging2013Journal Article, cited 164 times
Website
Breast cancer molecular subtype classifier that incorporates MRI features
Sutton, Elizabeth J
Dashevsky, Brittany Z
Oh, Jung Hun
Veeraraghavan, Harini
Apte, Aditya P
Thakur, Sunitha B
Morris, Elizabeth A
Deasy, Joseph O
Journal of Magnetic Resonance Imaging2016Journal Article, cited 34 times
Website
Radiomics
Imaging features
BREAST
Machine learning
Radiogenomics
Purpose: To use features extracted from magnetic resonance (MR) images and a machine-learning method to assist in differentiating breast cancer molecular subtypes. Materials and Methods: This retrospective Health Insurance Portability and Accountability Act (HIPAA)-compliant study received Institutional Review Board (IRB) approval. We identified 178 breast cancer patients between 2006-2011 with: 1) ERPR+ (n=95, 53.4%), ERPR-/HER2+ (n=35, 19.6%), or triple negative (TN, n=48, 27.0%) invasive ductal carcinoma (IDC), and 2) preoperative breast MRI at 1.5T or 3.0T. Shape, texture, and histogram-based features were extracted from each tumor contoured on pre- and three postcontrast MR images using in-house software. Clinical and pathologic features were also collected. Machine-learning-based (support vector machine) models were used to identify significant imaging features and to build models that predict IDC subtype. Leave-one-out cross-validation (LOOCV) was used to avoid model overfitting. Statistical significance was determined using the Kruskal-Wallis test. Results: Each support vector machine fit in the LOOCV process generated a model with varying features. Eleven of the top 20 ranked features were significantly different between IDC subtypes with P < 0.05. When the top nine pathologic and imaging features were incorporated, the predictive model distinguished IDC subtypes with an overall accuracy on LOOCV of 83.4%; its accuracy for each subtype was 89.2% (ERPR+), 63.6% (ERPR-/HER2+), and 82.5% (TN). When only the top nine imaging features were incorporated, the predictive model distinguished IDC subtypes with an overall accuracy on LOOCV of 71.2%, with per-subtype accuracies of 69.9% (ERPR+), 62.9% (ERPR-/HER2+), and 81.0% (TN). Conclusion: We developed a machine-learning-based predictive model using features extracted from MRI that can distinguish IDC subtypes with significant predictive power.
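The LOOCV protocol used above is directly available in scikit-learn. A minimal sketch follows; the feature matrix, labels, and linear kernel are hypothetical stand-ins for the study's data and SVM configuration.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X = np.random.rand(30, 9)            # hypothetical: 30 tumors x 9 selected features
y = np.random.randint(0, 2, 30)      # hypothetical binary subtype labels
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut())
print("LOOCV accuracy:", acc.mean())
```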
Intratumor partitioning and texture analysis of dynamic contrast‐enhanced (DCE)‐MRI identifies relevant tumor subregions to predict pathological response of breast cancer to neoadjuvant chemotherapy
Wu, Jia
Gong, Guanghua
Cui, Yi
Li, Ruijiang
Journal of Magnetic Resonance Imaging2016Journal Article, cited 43 times
Website
Algorithm Development
BREAST
PURPOSE: To predict pathological response of breast cancer to neoadjuvant chemotherapy (NAC) based on quantitative, multiregion analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). MATERIALS AND METHODS: In this Institutional Review Board-approved study, 35 patients diagnosed with stage II/III breast cancer were retrospectively investigated using 3T DCE-MR images acquired before and after the first cycle of NAC. First, principal component analysis (PCA) was used to reduce the dimensionality of the DCE-MRI data with high temporal resolution. We then partitioned the whole tumor into multiple subregions using k-means clustering based on the PCA-defined eigenmaps. Within each tumor subregion, we extracted four quantitative Haralick texture features based on the gray-level co-occurrence matrix (GLCM). The change in texture features in each tumor subregion between pre- and during-NAC was used to predict pathological complete response after NAC. RESULTS: Three tumor subregions were identified through clustering, each with distinct enhancement characteristics. In univariate analysis, all imaging predictors except one extracted from the tumor subregion associated with fast washout were statistically significant (P < 0.05) after correcting for multiple testing, with areas under the receiver operating characteristic (ROC) curve (AUCs) between 0.75 and 0.80. In multivariate analysis, the proposed imaging predictors achieved an AUC of 0.79 (P = 0.002) in leave-one-out cross-validation. This improved upon conventional imaging predictors such as tumor volume (AUC = 0.53) and texture features based on whole-tumor analysis (AUC = 0.65). CONCLUSION: The heterogeneity of the tumor subregion associated with fast washout on DCE-MRI predicted pathological response to NAC in breast cancer. J. Magn. Reson. Imaging 2016;44:1107-1115.
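The PCA-then-k-means partitioning described above can be sketched in a few lines of scikit-learn; the voxel-by-timepoint matrix and the component count below are illustrative, with three clusters matching the three subregions reported.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

curves = np.random.rand(5000, 40)          # hypothetical: voxels x DCE time points
eigen = PCA(n_components=3).fit_transform(curves)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(eigen)
print(np.bincount(labels))                 # voxel count per tumor subregion
```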
Identifying relations between imaging phenotypes and molecular subtypes of breast cancer: Model discovery and external validation
Wu, Jia
Sun, Xiaoli
Wang, Jeff
Cui, Yi
Kato, Fumi
Shirato, Hiroki
Ikeda, Debra M
Li, Ruijiang
Journal of Magnetic Resonance Imaging2017Journal Article, cited 17 times
Website
TCGA-BRCA
DCE-MRI
Radiomics
Radiogenomics
BREAST
Classification
Purpose: To determine whether dynamic contrast enhancement magnetic resonance imaging (DCE-MRI) characteristics of the breast tumor and background parenchyma can distinguish molecular subtypes (ie, luminal A/B or basal) of breast cancer.; ; Materials and methods: In all, 84 patients from one institution and 126 patients from The Cancer Genome Atlas (TCGA) were used for discovery and external validation, respectively. Thirty-five quantitative image features were extracted from DCE-MRI (1.5 or 3T) including morphology, texture, and volumetric features, which capture both tumor and background parenchymal enhancement (BPE) characteristics. Multiple testing was corrected using the Benjamini-Hochberg method to control the false-discovery rate (FDR). Sparse logistic regression models were built using the discovery cohort to distinguish each of the three studied molecular subtypes versus the rest, and the models were evaluated in the validation cohort.; ; Results: On univariate analysis in discovery and validation cohorts, two features characterizing tumor and two characterizing BPE were statistically significant in separating luminal A versus nonluminal A cancers; two features characterizing tumor were statistically significant for separating luminal B; one feature characterizing tumor and one characterizing BPE reached statistical significance for distinguishing basal (Wilcoxon P < 0.05, FDR < 0.25). In discovery and validation cohorts, multivariate logistic regression models achieved an area under the receiver operator characteristic curve (AUC) of 0.71 and 0.73 for luminal A cancer, 0.67 and 0.69 for luminal B cancer, and 0.66 and 0.79 for basal cancer, respectively.; ; Conclusion: DCE-MRI characteristics of breast cancer and BPE may potentially be used to distinguish among molecular subtypes of breast cancer.; ; Level of evidence: 3 Technical Efficacy: Stage 3 J. Magn. Reson. Imaging 2017;46:1017-1027.; ; Keywords: breast cancer; classification; dynamic contrast enhanced MRI; imaging genomics; molecular subtype.
Radiomics Strategy for Molecular Subtype Stratification of Lower‐Grade Glioma: Detecting IDH and TP53 Mutations Based on Multimodal MRI
Zhang, Xi
Tian, Qiang
Wang, Liang
Liu, Yang
Li, Baojuan
Liang, Zhengrong
Gao, Peng
Zheng, Kaizhong
Zhao, Bofeng
Lu, Hongbing
Journal of Magnetic Resonance Imaging2018Journal Article, cited 5 times
Website
LGG
Radiomics
Computer‐aided diagnosis of prostate cancer using a deep convolutional neural network from multiparametric MRI
Song, Yang
Zhang, Yu‐Dong
Yan, Xu
Liu, Hui
Zhou, Minxiong
Hu, Bingwen
Yang, Guang
Journal of Magnetic Resonance Imaging2018Journal Article, cited 0 times
PROSTATEx
BACKGROUND: Deep learning is the most promising methodology for automatic computer-aided diagnosis of prostate cancer (PCa) with multiparametric MRI (mp-MRI).
PURPOSE: To develop an automatic approach based on deep convolutional neural network (DCNN) to classify PCa and noncancerous tissues (NC) with mp-MRI.
STUDY TYPE: Retrospective.
SUBJECTS: In all, 195 patients with localized PCa were collected from a PROSTATEx database. In total, 159/17/19 patients with 444/48/55 observations (215/23/23 PCas and 229/25/32 NCs) were randomly selected for training/validation/testing, respectively.
SEQUENCE: T2 -weighted, diffusion-weighted, and apparent diffusion coefficient images.
ASSESSMENT: A radiologist manually labeled the regions of interest of PCas and NCs and estimated the Prostate Imaging Reporting and Data System (PI-RADS) scores for each region. Inspired by VGG-Net, we designed a patch-based DCNN model to distinguish between PCa and NCs based on a combination of mp-MRI data. Additionally, an enhanced prediction method was used to improve the prediction accuracy. The performance of DCNN prediction was tested using a receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Moreover, the predicted result was compared with the PI-RADS score to evaluate its clinical value using decision curve analysis.
STATISTICAL TEST: Two-sided Wilcoxon signed-rank test with statistical significance set at 0.05.
RESULTS: The DCNN produced excellent diagnostic performance in distinguishing between PCa and NC for testing datasets with an AUC of 0.944 (95% confidence interval: 0.876-0.994), sensitivity of 87.0%, specificity of 90.6%, PPV of 87.0%, and NPV of 90.6%. The decision curve analysis revealed that the joint model of PI-RADS and DCNN provided additional net benefits compared with the DCNN model and the PI-RADS scheme.
DATA CONCLUSION: The proposed DCNN-based model with enhanced prediction yielded high performance in statistical analysis, suggesting that DCNN could be used in computer-aided diagnosis (CAD) for PCa classification.
LEVEL OF EVIDENCE: 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018;48:1570-1577.
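The test-set sensitivity, specificity, PPV, and NPV above follow directly from a 2x2 confusion matrix. A minimal sketch follows; the counts are hypothetical but chosen so that, with 23 PCas and 32 NCs, they reproduce the reported values.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# 23 PCa and 32 NC test observations: 20 true positives, 29 true negatives.
print(diagnostic_metrics(tp=20, fp=3, tn=29, fn=3))
# sensitivity/PPV ~0.87, specificity/NPV ~0.906, matching the reported values.
```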
Analysis of postprocessing steps for residue function dependent dynamic susceptibility contrast (DSC)‐MRI biomarkers and their clinical impact on glioma grading for both 1.5 and 3T
Bell, Laura C
Stokes, Ashley M
Quarles, C Chad
Journal of Magnetic Resonance Imaging2020Journal Article, cited 0 times
Website
QIN-BRAIN-DSC-MRI
Brain-Tumor-Progression
Classification
Deep Learning Whole-Gland and Zonal Prostate Segmentation on a Public MRI Dataset
Cuocolo, Renato
Comelli, Albert
Stefano, Alessandro
Benfante, Viviana
Dahiya, Navdeep
Stanzione, Arnaldo
Castaldo, Anna
De Lucia, Davide Raffaele
Yezzi, Anthony
Imbriaco, Massimo
Journal of Magnetic Resonance Imaging2021Journal Article, cited 0 times
Website
ProstateX
Deep learning
segmentation
Background: Prostate volume, as determined by magnetic resonance imaging (MRI), is a useful biomarker both for distinguishing between benign and malignant pathology and can be used either alone or combined with other parameters such as prostate-specific antigen.
Purpose: This study compared different deep learning methods for whole-gland and zonal prostate segmentation.
Study Type: Retrospective.
Population: A total of 204 patients (train/test = 99/105) from the PROSTATEx public dataset.
Field Strength/Sequence: A 3 T, TSE T2-weighted.
Assessment: Four operators performed manual segmentation of the whole gland, central zone + anterior stroma + transition zone (TZ), and peripheral zone (PZ). U-net, efficient neural network (ENet), and efficient residual factorized ConvNet (ERFNet) were trained and tuned on the training data through 5-fold cross-validation to segment the whole gland and TZ separately, while PZ automated masks were obtained by the subtraction of the first two.
Statistical Tests: Networks were evaluated on the test set using various accuracy metrics, including the Dice similarity coefficient (DSC). Model DSC was compared in both the training and test sets using the analysis of variance test (ANOVA) and post hoc tests. Parameter number, disk size, training, and inference times determined network computational complexity and were also used to assess the model performance differences. A P < 0.05 was selected to indicate statistical significance.
Results: The best DSC (P < 0.05) in the test set was achieved by ENet: 91% ± 4% for the whole gland, 87% ± 5% for the TZ, and 71% ± 8% for the PZ. U-net and ERFNet obtained, respectively, 88% ± 6% and 87% ± 6% for the whole gland, 86% ± 7% and 84% ± 7% for the TZ, and 70% ± 8% and 65% ± 8% for the PZ. Training and inference time were lowest for ENet.
Data Conclusion: Deep learning networks can accurately segment the prostate using T2-weighted images.
Evidence Level: 4. Technical Efficacy: Stage 2.
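The PZ-by-subtraction step in the Assessment section is a simple boolean mask operation. A minimal NumPy sketch follows with toy masks.

```python
import numpy as np

whole_gland = np.zeros((4, 4), bool); whole_gland[1:4, 1:4] = True
tz = np.zeros((4, 4), bool); tz[2:4, 2:4] = True

pz = np.logical_and(whole_gland, np.logical_not(tz))  # PZ = whole gland minus TZ
print(pz.sum())  # 9 gland voxels - 4 TZ voxels = 5 PZ voxels
```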
Prospective Evaluation of Repeatability and Robustness of Radiomic Descriptors in Healthy Brain Tissue Regions In Vivo Across Systematic Variations in T2‐Weighted Magnetic Resonance Imaging Acquisition Parameters
Eck, Brendan
Chirra, Prathyush V.
Muchhala, Avani
Hall, Sophia
Bera, Kaustav
Tiwari, Pallavi
Madabhushi, Anant
Seiberlich, Nicole
Viswanath, Satish E.
Journal of Magnetic Resonance Imaging2021Journal Article, cited 0 times
TCGA-GBM
BACKGROUND: Radiomic descriptors from magnetic resonance imaging (MRI) are promising for disease diagnosis and characterization but may be sensitive to differences in imaging parameters.
OBJECTIVE: To evaluate the repeatability and robustness of radiomic descriptors within healthy brain tissue regions on prospectively acquired MRI scans; in a test-retest setting, under controlled systematic variations of MRI acquisition parameters, and after postprocessing.
STUDY TYPE: Prospective.
SUBJECTS: Fifteen healthy participants.
FIELD STRENGTH/SEQUENCE: A 3.0 T, axial T2 -weighted 2D turbo spin-echo pulse sequence, 181 scans acquired (2 test/retest reference scans and 12 with systematic variations in contrast weighting, resolution, and acceleration per participant; removing scans with artifacts).
ASSESSMENT: One hundred and forty-six radiomic descriptors were extracted from a contiguous 2D region of white matter in each scan, before and after postprocessing.
STATISTICAL TESTS: Repeatability was assessed in a test/retest setting and between manual and automated annotations for the reference scan. Robustness was evaluated between the reference scan and each group of variant scans (contrast weighting, resolution, and acceleration). Both repeatability and robustness were quantified as the proportion of radiomic descriptors that fell into distinct ranges of the concordance correlation coefficient (CCC): excellent (CCC > 0.85), good (0.7 ≤ CCC ≤ 0.85), moderate (0.5 ≤ CCC < 0.7), and poor (CCC < 0.5); for unprocessed and postprocessed scans separately.
RESULTS: Good to excellent repeatability was observed for 52% of radiomic descriptors between test/retest scans and 48% of descriptors between automated vs. manual annotations, respectively. Contrast weighting (TR/TE) changes were associated with the largest proportion of highly robust radiomic descriptors (21%, after processing). Image resolution changes resulted in the largest proportion of poorly robust radiomic descriptors (97%, before postprocessing). Postprocessing of images with only resolution/acceleration differences resulted in 73% of radiomic descriptors showing poor robustness.
DATA CONCLUSIONS: Many radiomic descriptors appear to be nonrobust across variations in MR contrast weighting, resolution, and acceleration, as well as in test-retest settings, depending on feature formulation and postprocessing.
EVIDENCE LEVEL: 2 TECHNICAL EFFICACY: Stage 2.
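For readers reproducing this kind of analysis, a minimal sketch of Lin's concordance correlation coefficient and the study's CCC bins could look as follows (helper names are ours; the paper's exact implementation is not specified):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired feature values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def robustness_bin(c):
    """Map a CCC value onto the ranges used in the study."""
    if c > 0.85:
        return "excellent"
    if c >= 0.7:
        return "good"
    if c >= 0.5:
        return "moderate"
    return "poor"
```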
Four‐Dimensional Machine Learning Radiomics for the Pretreatment Assessment of Breast Cancer Pathologic Complete Response to Neoadjuvant Chemotherapy in Dynamic Contrast‐Enhanced MRI
Caballo, Marco
Sanderink, Wendelien BG
Han, Luyi
Gao, Yuan
Athanasiou, Alexandra
Mann, Ritse M
Journal of Magnetic Resonance Imaging2022Journal Article, cited 1 times
Website
Duke-Breast-Cancer-MRI
Machine Learning
Radiomic feature
breast cancer
Noninvasive Evaluation of the Notch Signaling Pathway via Radiomic Signatures Based on Multiparametric MRI in Association With Biological Functions of Patients With Glioma: A Multi-institutional Study
Shen, N.
Lv, W.
Li, S.
Liu, D.
Xie, Y.
Zhang, J.
Zhang, J.
Jiang, J.
Jiang, R.
Zhu, W.
J Magn Reson Imaging2022Journal Article, cited 0 times
Website
CPTAC-GBM
TCGA-GBM
Notch signaling pathway
glioma
multi-parametric magnetic resonance imaging (multi-parametric MRI)
Radiogenomics
Radiomics
BACKGROUND: Noninvasive determination of Notch signaling is important for prognostic evaluation and therapeutic intervention in glioma. PURPOSE: To predict Notch signaling using multiparametric (mp) MRI radiomics and to correlate the findings with biological characteristics in gliomas. STUDY TYPE: Retrospective. POPULATION: A total of 63 patients for model construction and 47 patients from two public databases for external testing. FIELD STRENGTH/SEQUENCE: A 1.5 T and 3.0 T, T1-weighted imaging (T1WI), T2WI, T2 fluid-attenuated inversion recovery (FLAIR), contrast-enhanced (CE)-T1WI. ASSESSMENT: Radiomic features were extracted from CE-T1WI, T1WI, T2WI, and T2 FLAIR, and imaging signatures were selected using the least absolute shrinkage and selection operator (LASSO). Diagnostic performance was compared between single-modality models and a combined mpMRI radiomics model. A radiomic-clinical nomogram was constructed incorporating the mpMRI radiomic signature and the Karnofsky Performance score (KPS). Its performance was validated in the test set. The radiomic signatures were correlated with immunohistochemistry (IHC) analysis of downstream Notch pathway components. STATISTICAL TESTS: Receiver operating characteristic curve, decision curve analysis (DCA), Pearson correlation, and Hosmer-Lemeshow test. A P value < 0.05 was considered statistically significant. RESULTS: The radiomic signature derived from the combination of all sequences showed the highest area under the curve (AUC) in both the training and external test sets (AUCs of 0.857 and 0.823). The radiomics nomogram that incorporated the mpMRI radiomic signature and KPS achieved AUCs of 0.891 and 0.859 in the training and test sets. The calibration curves showed good agreement between prediction and observation in both sets (P = 0.279 and 0.170, respectively). DCA confirmed the clinical usefulness of the nomogram. IHC identified Notch pathway inactivation, and the expression levels of Hes1 correlated with combined radiomic scores (r = -0.711) in Notch1-mutant tumors. DATA CONCLUSION: The mpMRI-based radiomics nomogram may reflect the intratumoral heterogeneity associated with downstream biofunction that predicts Notch signaling in a noninvasive manner. EVIDENCE LEVEL: 3 TECHNICAL EFFICACY: Stage 2.
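A hedged sketch of the LASSO-based signature construction described above, using scikit-learn on stand-in data (the paper's exact preprocessing, labels, and tuning are not reproduced):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(63, 200))    # stand-in for the extracted radiomic features
y = rng.integers(0, 2, size=63)   # stand-in for Notch status labels

# Standardize before the L1 penalty so shrinkage is comparable across
# radiomic features with very different scales.
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0))
lasso.fit(X, y)
coef = lasso[-1].coef_
selected = np.flatnonzero(coef)             # features surviving the penalty
rad_score = X[:, selected] @ coef[selected]  # radiomic score per patient
```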
Glioma Tumor Grading Using Radiomics on Conventional MRI: A Comparative Study of WHO 2021 and WHO 2016 Classification of Central Nervous Tumors
Moodi, F.
Khodadadi Shoushtari, F.
Ghadimi, D. J.
Valizadeh, G.
Khormali, E.
Salari, H. M.
Ohadi, M. A. D.
Nilipour, Y.
Jahanbakhshi, A.
Rad, H. S.
J Magn Reson Imaging2023Journal Article, cited 0 times
TCGA-LGG
TCGA-GBM
BraTS-TCGA-GBM
BraTS-TCGA-LGG
Classification
WHO CNS tumor classification
artificial intelligence
glioma
machine learning
neoplasm grading
Radiomics
BACKGROUND: Glioma grading was transformed in the World Health Organization (WHO) 2021 CNS tumor classification, which integrates molecular markers. However, the impact of this change on radiomics-based machine learning (ML) classifiers remains unexplored. PURPOSE: To assess the performance of ML in classifying glioma tumor grades under various WHO criteria. STUDY TYPE: Retrospective. SUBJECTS: Gliomas of 237 patients, regraded by a neuropathologist from WHO 2007 into WHO 2016 and WHO 2021 criteria. FIELD STRENGTH/SEQUENCE: Multicentric 0.5 to 3 Tesla; pre- and post-contrast T1-weighted, T2-weighted, and fluid-attenuated inversion recovery. ASSESSMENT: Radiomic features were selected using random forest-recursive feature elimination. The synthetic minority over-sampling technique (SMOTE) was implemented for data augmentation. Stratified 10-fold cross-validation, with and without SMOTE, was used to evaluate 11 classifiers for 3-grade (2, 3, and 4; WHO 2016 and 2021) and 2-grade (low and high grade; WHO 2007 and 2021) classification. Additionally, we developed the models on data randomly divided into training and test sets (mixed-data analysis) or divided by center (independent-data analysis). STATISTICAL TESTS: We assessed ML classifiers using sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve (AUC). Top performances were compared with a t-test and categorical data with the chi-square test, using a significance level of P < 0.05. RESULTS: In the mixed-data analysis, the Stacking Classifier without SMOTE achieved the highest accuracy (0.86) and AUC (0.92) in 3-grade WHO 2021 grouping. The WHO 2021 results were significantly better than those for WHO 2016 (P < 0.0001). In the 2-grade analysis, ML achieved 1.00 in all metrics. In the independent-data analysis, ML classifiers showed strong discrimination between grades 2 and 4, despite lower performance metrics than in the mixed analysis. DATA CONCLUSION: ML algorithms performed better in glioma tumor grading based on WHO 2021 criteria. Nonetheless, the clinical use of ML classifiers needs further investigation. LEVEL OF EVIDENCE: 3 TECHNICAL EFFICACY: Stage 2.
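Because SMOTE must only see the training folds to avoid leakage, a pipeline-based sketch of the stratified 10-fold evaluation might look like the following (stand-in data; the classifier is illustrative, not the paper's full set of 11):

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(237, 50))      # stand-in radiomic features
y = rng.integers(2, 5, size=237)    # stand-in WHO 2021 grades 2/3/4

# SMOTE inside the pipeline so oversampling happens only on training folds.
model = Pipeline([("smote", SMOTE(random_state=0)),
                  ("rf", RandomForestClassifier(random_state=0))])
scores = cross_val_score(
    model, X, y,
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
    scoring="accuracy")
print(scores.mean())
```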
Prediction of COVID-19 patients in danger of death using radiomic features of portable chest radiographs
Nakashima, M.
Uchiyama, Y.
Minami, H.
Kasai, S.
J Med Radiat Sci2022Journal Article, cited 0 times
Website
COVID-19-AR
Artificial intelligence
Covid-19
portable chest X-ray
prognosis prediction
Radiomics
INTRODUCTION: Computer-aided diagnostic systems have been developed for the detection and differential diagnosis of coronavirus disease 2019 (COVID-19) pneumonia using imaging studies to characterise a patient's current condition. In this radiomic study, we propose a system for predicting COVID-19 patients in danger of death using portable chest X-ray images. METHODS: In this retrospective study, we selected 100 patients, 10 who died and 90 who recovered, from the COVID-19-AR database of The Cancer Imaging Archive. Since it can be difficult to analyse portable chest X-ray images of patients with COVID-19 because bone components overlap with the abnormal patterns of this disease, we employed a bone-suppression technique during pre-processing. A total of 620 radiomic features were measured in the left and right lung regions, and four radiomic features were selected using the least absolute shrinkage and selection operator technique. We distinguished death from recovery cases using linear discriminant analysis (LDA) and a support vector machine (SVM). The leave-one-out method was used to train and test the classifiers, and the area under the receiver-operating characteristic curve (AUC) was used to evaluate discriminative performance. RESULTS: The AUCs for LDA and SVM were 0.756 and 0.959, respectively. The discriminative performance was improved when the bone-suppression technique was employed. When the SVM was used, the sensitivity for predicting disease severity was 90.9% (9/10), and the specificity was 95.6% (86/90). CONCLUSIONS: We believe that the radiomic features of portable chest X-ray images can predict COVID-19 patients in danger of death.
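A minimal sketch of the leave-one-out SVM evaluation described above (synthetic stand-in features; the paper's feature values and SVM settings are not reproduced):

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))          # the four selected radiomic features
y = np.r_[np.ones(10), np.zeros(90)]   # 10 deaths, 90 recoveries

# Leave-one-out: each patient is scored by a model trained on the other 99.
clf = make_pipeline(StandardScaler(), SVC(probability=True))
proba = cross_val_predict(clf, X, y, cv=LeaveOneOut(),
                          method="predict_proba")[:, 1]
print("AUC:", roc_auc_score(y, proba))
```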
Influence of Contrast Administration on Computed Tomography–Based Analysis of Visceral Adipose and Skeletal Muscle Tissue in Clear Cell Renal Cell Carcinoma
Paris, Michael T
Furberg, Helena F
Petruzella, Stacey
Akin, Oguz
Hötker, Andreas M
Mourtzakis, Marina
Journal of Parenteral and Enteral Nutrition2018Journal Article, cited 0 times
Website
TCGA_RCC
body composition
sarcopenia
visceral adipose
muscle quality
Introducing the Medical Physics Dataset Article
Williamson, Jeffrey F
Das, Shiva K
Goodsitt, Mitchell S
Deasy, Joseph O
Medical Physics2017Journal Article, cited 7 times
Website
Multi‐site quality and variability analysis of 3D FDG PET segmentations based on phantom and clinical image data
Beichel, Reinhard R
Smith, Brian J
Bauer, Christian
Ulrich, Ethan J
Ahmadvand, Payam
Budzevich, Mikalai M
Gillies, Robert J
Goldgof, Dmitry
Grkovski, Milan
Hamarneh, Ghassan
Medical Physics2017Journal Article, cited 7 times
Website
QIN PET Phantom
PURPOSE: Radiomics utilizes a large number of image-derived features for quantifying tumor characteristics that can in turn be correlated with response and prognosis. Unfortunately, extraction and analysis of such image-based features is subject to measurement variability and bias. The challenge for radiomics is particularly acute in Positron Emission Tomography (PET), where limited resolution, a high noise component related to the limited stochastic nature of the raw data, and the wide variety of reconstruction options confound quantitative feature metrics. Extracted feature quality is also affected by the tumor segmentation methods used to define the regions over which features are calculated, making it challenging to produce consistent radiomics analysis results across multiple institutions that use different segmentation algorithms in their PET image analysis. Understanding each element contributing to these inconsistencies in quantitative image feature and metric generation is paramount for the ultimate utilization of these methods in multi-institutional trials and clinical oncology decision making. METHODS: To assess segmentation quality and consistency at the multi-institutional level, we conducted a study of seven institutional members of the National Cancer Institute Quantitative Imaging Network. For the study, members were asked to segment a common set of phantom PET scans acquired over a range of imaging conditions, as well as a second set of head and neck cancer (HNC) PET scans. Segmentations were generated at each institution using their preferred approach. In addition, participants were asked to repeat the segmentations after a time interval between initial and repeat segmentation. This procedure resulted in a total of 806 phantom insert and 641 lesion segmentations. Subsequently, the volume was computed from each segmentation and compared to the corresponding reference volume by means of statistical analysis. RESULTS: On the two test sets (phantom and HNC PET scans), the performance of the seven segmentation approaches was as follows. On the phantom test set, the mean relative volume errors ranged from 29.9% to 87.8% of the ground truth reference volumes, and the repeat difference for each institution ranged from -36.4% to 39.9%. On the HNC test set, the mean relative volume error ranged from -50.5% to 701.5%, and the repeat difference for each institution ranged from -37.7% to 31.5%. In addition, performance measures per phantom insert/lesion size category are given in the paper. On phantom data, regression analysis resulted in coefficient of variation (CV) components of 42.5% for scanners, 26.8% for institutional approaches, 21.1% for repeated segmentations, 14.3% for relative contrasts, 5.3% for count statistics (acquisition times), and 0.0% for repeated scans. Analysis showed that the CV components for approaches and repeated segmentations were significantly larger on the HNC test set, with increases of 112.7% and 102.4%, respectively. CONCLUSION: The analysis results underline the importance of PET scanner reconstruction harmonization and imaging protocol standardization for quantification of lesion volumes. In addition, to enable a distributed multi-site analysis of FDG PET images, harmonization of analysis approaches and operator training in combination with highly automated segmentation methods seems advisable. Future work will focus on quantifying the impact of segmentation variation on radiomics system performance.
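The volume-based metrics are simple to reproduce; a small sketch follows (helper names are ours, and the paper's precise definition of the repeat difference may differ):

```python
import numpy as np

def relative_volume_error(v_seg, v_ref):
    """Signed segmentation volume error as a percentage of the reference volume."""
    return 100.0 * (np.asarray(v_seg, float) - v_ref) / v_ref

def repeat_difference(v_first, v_repeat, v_ref):
    """Difference between repeated segmentations relative to the reference
    volume (one plausible reading of the paper's repeat metric)."""
    return 100.0 * (np.asarray(v_repeat, float) - v_first) / v_ref
```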
A supervoxel‐based segmentation method for prostate MR images
Tian, Zhiqiang
Liu, Lizhi
Zhang, Zhenfeng
Xue, Jianru
Fei, Baowei
Medical Physics2017Journal Article, cited 57 times
Website
ISBI-MR-Prostate-2013
Magnetic Resonance Imaging
Prostate
PURPOSE: Segmentation of the prostate on MR images has many applications in prostate cancer management. In this work, we propose a supervoxel-based segmentation method for prostate MR images.
METHODS: A supervoxel is a set of pixels that have similar intensities, locations, and textures in a 3D image volume. The prostate segmentation problem is considered as assigning a binary label to each supervoxel, which is either the prostate or background. A supervoxel-based energy function with data and smoothness terms is used to model the labels. The data term estimates the likelihood of a supervoxel belonging to the prostate by using a supervoxel-based shape feature. The geometric relationship between two neighboring supervoxels is used to build the smoothness term. A 3D graph cut is used to minimize the energy function to obtain the labels of the supervoxels, which yields the prostate segmentation. A 3D active contour model is then used to obtain a smooth surface, using the output of the graph cut as initialization. The performance of the proposed algorithm was evaluated on 30 in-house MR volumes and the PROMISE12 dataset.
RESULTS: The mean Dice similarity coefficients are 87.2 ± 2.3% and 88.2 ± 2.8% for our 30 in-house MR volumes and the PROMISE12 dataset, respectively. The proposed segmentation method yields a satisfactory result for prostate MR images.
CONCLUSION: The proposed supervoxel-based method can accurately segment prostate MR images and has a variety of applications in prostate cancer diagnosis and therapy.
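In notation we introduce here (not the authors'), the supervoxel labeling energy minimized by the graph cut has the standard data-plus-smoothness form

```latex
E(L) = \sum_{s \in \mathcal{S}} D_s(l_s)
     + \lambda \sum_{(s,t) \in \mathcal{N}} V_{s,t}(l_s, l_t),
\qquad l_s \in \{\text{prostate}, \text{background}\},
```

where D_s is the shape-feature data term over supervoxels S, V_{s,t} is the smoothness term over neighboring supervoxel pairs N, and lambda is a balance weight we add for exposition.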
A longitudinal four‐dimensional computed tomography and cone beam computed tomography dataset for image‐guided radiation therapy research in lung cancer
Hugo, Geoffrey D
Weiss, Elisabeth
Sleeman, William C
Balik, Salim
Keall, Paul J
Lu, Jun
Williamson, Jeffrey F
Medical Physics2017Journal Article, cited 8 times
Website
4D-Lung
Computed Tomography (CT)
PURPOSE: To describe in detail a dataset consisting of serial four-dimensional computed tomography (4DCT) and 4D cone beam CT (4DCBCT) images acquired during chemoradiotherapy of 20 locally advanced, non-small cell lung cancer patients, which we have collected at our institution and shared publicly with the research community. ACQUISITION AND VALIDATION METHODS: As part of an NCI-sponsored research study, 82 4DCT and 507 4DCBCT images were acquired in a population of 20 locally advanced non-small cell lung cancer patients undergoing radiation therapy. All subjects underwent concurrent radiochemotherapy to a total dose of 59.4-70.2 Gy using daily 1.8 or 2 Gy fractions. Audio-visual biofeedback was used to minimize breathing irregularity during all fractions, including acquisition of all 4DCT and 4DCBCT scans in all subjects. Target, organs at risk, and implanted fiducial markers were delineated by a physician in the 4DCT images. Image coordinate system origins between 4DCT and 4DCBCT were manipulated in such a way that the images can be used to simulate initial patient setup in the treatment position. 4DCT images were acquired on a 16-slice helical CT simulator with 10 breathing phases and 3 mm slice thickness during simulation. In 13 of the 20 subjects, 4DCTs were also acquired weekly on the same scanner during therapy. Every day, 4DCBCT images were acquired on a commercial onboard CBCT scanner. An optically tracked external surrogate was synchronized with CBCT acquisition so that each CBCT projection was time-stamped with the surrogate respiratory signal through in-house software and hardware tools. Approximately 2500 projections were acquired over a period of 8-10 minutes in half-fan mode with the half bow-tie filter. Using the external surrogate, the CBCT projections were sorted into 10 breathing phases and reconstructed with an in-house FDK reconstruction algorithm. Errors in respiration sorting, reconstruction, and acquisition were carefully identified and corrected. DATA FORMAT AND USAGE NOTES: 4DCT and 4DCBCT images are available in DICOM format, and structures through the DICOM-RT RTSTRUCT format. All data are stored in The Cancer Imaging Archive (TCIA, http://www.cancerimagingarchive.net/) as collection 4D-Lung and are publicly available. DISCUSSION: Due to high temporal frequency sampling, redundant (4DCT and 4DCBCT) data at similar timepoints, oversampled 4DCBCT, and fiducial markers, this dataset can support studies in image-guided and image-guided adaptive radiotherapy, assessment of 4D voxel trajectory variability, and development and validation of new tools for image registration and motion management.
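A simplified sketch of surrogate-based phase sorting (a common scheme we assume here; the in-house tools may differ in detail):

```python
import numpy as np

def sort_into_phases(signal, n_phases=10):
    """Assign each projection a breathing phase (0..n_phases-1) from the
    external-surrogate sample recorded at its acquisition time. Phases are
    spread evenly between consecutive end-inhale peaks."""
    signal = np.asarray(signal, float)
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i - 1] < signal[i] >= signal[i + 1]]
    phase = np.full(len(signal), -1)   # -1 marks samples outside a full cycle
    for a, b in zip(peaks[:-1], peaks[1:]):
        idx = np.arange(a, b)
        phase[idx] = (n_phases * (idx - a)) // (b - a)
    return phase  # projections sharing a phase are reconstructed together
```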
Automatic intensity windowing of mammographic images based on a perceptual metric
Albiol, Alberto
Corbi, Alberto
Albiol, Francisco
Medical Physics2017Journal Article, cited 0 times
Website
Algorithm Development
Computer Aided Diagnosis (CADx)
BI-RADS
mutual information
Mammography
Gabor filter
BREAST
Radiomic feature
PURPOSE: To automatically adjust the initial window level (WL) and window width (WW) applied to mammographic images. The proposed intensity windowing (IW) method is based on the maximization of the mutual information (MI) between a perceptual decomposition of the original 12-bit sources and their screen-displayed 8-bit version. Besides zoom, color inversion, and panning operations, IW is the most commonly performed task in daily screening and has a direct impact on diagnosis and the time involved in the process. METHODS: The authors present a human visual system and perception-based algorithm named GRAIL (Gabor-relying adjustment of image levels). GRAIL initially measures a mammogram's quality based on the MI between the original instance and its Gabor-filtered derivations. From this point on, the algorithm performs an automatic intensity windowing process that outputs the WL/WW that best displays each mammogram for screening. GRAIL starts with the default, high-contrast, wide-dynamic-range 12-bit data, and then maximizes the graphical information presented in ordinary 8-bit displays. Tests have been carried out with several mammogram databases. They comprise correlations and an ANOVA analysis with the manual IW levels established by a group of radiologists. A complete MATLAB implementation of GRAIL is available at https://github.com/TheAnswerIsFortyTwo/GRAIL. RESULTS: Auto-leveled images show superior quality, both perceptually and objectively, compared to their full intensity range and compared to the application of other common methods like global contrast stretching (GCS). The correlations between the human-determined intensity values and the ones estimated by our method surpass those of GCS. The ANOVA analysis with the upper intensity thresholds also reveals a similar outcome. GRAIL has also proven to perform especially well with images that contain micro-calcifications and/or foreign X-ray-opaque elements and with healthy BI-RADS A-type mammograms. It can also speed up the initial screening time by a mean of 4.5 s per image. CONCLUSIONS: A novel methodology is introduced that enables a quality-driven balancing of the WL/WW of mammographic images. This correction seeks the representation that maximizes the amount of graphical information contained in each image. The presented technique can contribute to the diagnosis and the overall efficiency of the breast screening session by suggesting, at the outset, an optimal and customized windowing setting for each mammogram.
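A much-simplified stand-in for GRAIL's search, which scores WL/WW pairs by the MI between the 12-bit image and its windowed 8-bit rendering while omitting the Gabor decomposition stage (all function names are ours):

```python
import numpy as np

def window(img12, wl, ww):
    """Map a 12-bit image to 8 bits with a given window level/width."""
    lo, hi = wl - ww / 2.0, wl + ww / 2.0
    out = np.clip((img12 - lo) / (hi - lo), 0, 1)
    return (255 * out).astype(np.uint8)

def mutual_info(a, b, bins=64):
    """MI between two images from their joint histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum()

def grail_like_search(img12, wls, wws):
    """Grid-search WL/WW maximizing MI (Gabor stage omitted for brevity)."""
    return max((mutual_info(img12, window(img12, wl, ww)), wl, ww)
               for wl in wls for ww in wws)
```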
Quantifying the reproducibility of lung ventilation images between 4-Dimensional Cone Beam CT and 4-Dimensional CT
Woodruff, Henry C.
Shieh, Chun-Chien
Hegi-Johnson, Fiona
Keall, Paul J.
Kipritidis, John
Medical Physics2017Journal Article, cited 2 times
Website
4D-Lung
lung radiation therapy
functional imaging
ventilation
4D cone beam CT
deformable image registration
Fully automatic and accurate detection of lung nodules in CT images using a hybrid feature set
Shaukat, Furqan
Raja, Gulistan
Gooya, Ali
Frangi, Alejandro F
Medical Physics2017Journal Article, cited 2 times
Website
LIDC-IDRI
Segmentation
optimal thresholding
Support Vector Machine (SVM)
K-Nearest-Neighbour (KNN)
Linear Discriminant Analysis (LDA)
A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction
Kang, E.
Min, J.
Ye, J. C.
Med Phys2017Journal Article, cited 568 times
Website
LDCT-and-Projection-data
*Radiation Dosage
Signal-To-Noise Ratio
Computed Tomography (CT)
Wavelet Analysis
Convolutional Neural Network (CNN)
Deep Learning
PURPOSE: Due to the potential risk of inducing cancer, radiation exposure from X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, a high-quality reconstruction method from low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based denoising approaches are, however, computationally very expensive, and image-domain denoising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we aim to develop a new low-dose X-ray CT algorithm based on a deep-learning approach. METHODS: We propose an algorithm that applies a deep convolutional neural network (CNN) to the wavelet transform coefficients of low-dose CT images. More specifically, by using a directional wavelet transform to extract the directional components of artifacts and exploit the intra- and inter-band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. RESULTS: Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images derived from a reduced X-ray dose. In addition, we show that the wavelet-domain CNN is efficient for removing noise from low-dose CT compared to existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place at the 2016 "Low-Dose CT Grand Challenge." CONCLUSIONS: To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction that has been rigorously evaluated and proven to be effective. In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from large datasets. Therefore, we believe that the proposed algorithm opens a new direction in the area of low-dose CT research.
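A toy sketch of the wavelet-domain residual idea (a plain separable DWT via PyWavelets stands in for the paper's directional transform, and the network is far shallower than the 25-layer original):

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

class WaveletResidualCNN(nn.Module):
    """Tiny stand-in for the paper's residual CNN: it predicts the noise
    component of the wavelet subbands and subtracts it (residual learning)."""
    def __init__(self, ch=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ch, 3, padding=1))

    def forward(self, x):
        return x - self.net(x)  # subtract the predicted noise subbands

def to_subbands(img):
    """Stack one-level DWT subbands ('db1' here, not the directional
    transform used in the paper) as CNN input channels."""
    ll, (lh, hl, hh) = pywt.dwt2(img, "db1")
    return torch.from_numpy(np.stack([ll, lh, hl, hh])[None]).float()
```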
Segmentation and tracking of lung nodules via graph‐cuts incorporating shape prior and motion from 4D CT
Cha, Jungwon
Farhangi, Mohammad Mehdi
Dunlap, Neal
Amini, Amir A
Medical Physics2018Journal Article, cited 5 times
Website
LIDC-IDRI
Automated image quality assessment for chest CT scans
Reeves, A. P.
Xie, Y.
Liu, S.
Med Phys2018Journal Article, cited 0 times
Website
FDA-Phantom
Lung Image Database Consortium (LIDC)
lung cancer
segmentation
CT image calibration assessment
CT image noise assessment
automatic image quality measurement
PURPOSE: Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. METHODS: For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. RESULTS: The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. CONCLUSIONS: Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods.
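The core measurements are straightforward; a minimal sketch assuming the three regions have already been segmented (the reference HU values below are nominal, not the paper's calibration targets):

```python
import numpy as np

def region_stats(ct_hu, mask):
    """Mean HU (calibration) and HU standard deviation (noise) in a region."""
    vals = ct_hu[mask.astype(bool)]
    return float(vals.mean()), float(vals.std())

# Nominal reference HU values for the three automatically segmented regions
# (illustrative only; the paper's exact calibration targets may differ).
EXPECTED_HU = {"external_air": -1000.0,
               "trachea_air": -1000.0,
               "aorta_blood": 50.0}
```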
Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing
AlBadawy, E. A.
Saha, A.
Mazurowski, M. A.
Med Phys2018Journal Article, cited 5 times
Website
TCGA-GBM
MICCAI BraTS challenge
Convolutional neural network (CNN)
FMRIB Software Library (FSL)
Dice similarity coefficient
Average Hausdorff Distance
BRAIN
Segmentation
Glioblastoma Multiforme (GBM)
magnetic resonance imaging (MRI)
BACKGROUND AND PURPOSE: Convolutional neural networks (CNNs) are commonly used for segmentation of brain tumors. In this work, we assess the effect of cross-institutional training on the performance of CNNs. METHODS: We selected 44 glioblastoma (GBM) patients from two institutions in The Cancer Imaging Archive dataset. The images were manually annotated by outlining each tumor component to form the ground truth. To automatically segment the tumors in each patient, we trained three CNNs: (a) one using data from the same institution as the test data, (b) one using data from the other institution, and (c) one using data from both institutions. The performance of the trained models was evaluated using Dice similarity coefficients as well as the Average Hausdorff Distance between the ground truth and automatic segmentations. A 10-fold cross-validation scheme was used to compare the performance of the different approaches. RESULTS: Performance of the model significantly decreased (P < 0.0001) when it was trained on data from a different institution (Dice coefficients: 0.68 +/- 0.19 and 0.59 +/- 0.19) as compared to training with data from the same institution (Dice coefficients: 0.72 +/- 0.17 and 0.76 +/- 0.12). This trend persisted for segmentation of the entire tumor as well as its individual components. CONCLUSIONS: There is a very strong effect of training-data selection on the performance of CNNs in a multi-institutional setting. Determining the reasons behind this effect requires additional comprehensive investigation.
Synthetic Head and Neck and Phantom Images for Determining Deformable Image Registration Accuracy in Magnetic Resonance Imaging
Ger, Rachel B
Yang, Jinzhong
Ding, Yao
Jacobsen, Megan C
Cardenas, Carlos E
Fuller, Clifton D
Howell, Rebecca M
Li, Heng
Stafford, R Jason
Zhou, Shouhao
Medical Physics2018Journal Article, cited 0 times
Website
MRI-DIR
head and neck cancer
mri
T1-weighted
T2-weighted
porcine phantom
4D robust optimization including uncertainties in time structures can reduce the interplay effect in proton pencil beam scanning radiation therapy
Engwall, Erik
Fredriksson, Albin
Glimelius, Lars
Medical Physics2018Journal Article, cited 2 times
Website
non-small-cell lung cancer
4D-Lung
Opportunities and challenges to utilization of quantitative imaging: Report of the AAPM practical big data workshop
Mackie, Thomas R
Jackson, Edward F
Giger, Maryellen
Medical Physics2018Journal Article, cited 1 times
Website
LIDC
Quantitative Imaging Network (QIN)
reference image database to evaluate response (RIDER)
Autosegmentation for thoracic radiation treatment planning: A grand challenge at AAPM 2017
Yang, J.
Veeraraghavan, H.
Armato, S. G., 3rd
Farahani, K.
Kirby, J. S.
Kalpathy-Kramer, J.
van Elmpt, W.
Dekker, A.
Han, X.
Feng, X.
Aljabar, P.
Oliveira, B.
van der Heyden, B.
Zamdborg, L.
Lam, D.
Gooding, M.
Sharp, G. C.
Med Phys2018Journal Article, cited 172 times
Website
LCTSC
Lung CT Segmentation Challenge 2017
Algorithm Development
Humans
Organs at Risk/radiation effects
Radiotherapy Planning
Computer-Assisted/*methods
Radiotherapy
Image-Guided/adverse effects/*methods
Thorax/*diagnostic imaging/*radiation effects
Tomography
X-Ray Computed
automatic segmentation
grand challenge
lung cancer
radiation therapy
PURPOSE: This report presents the methods and results of the Thoracic Auto-Segmentation Challenge organized at the 2017 Annual Meeting of American Association of Physicists in Medicine. The purpose of the challenge was to provide a benchmark dataset and platform for evaluating performance of autosegmentation methods of organs at risk (OARs) in thoracic CT images. METHODS: Sixty thoracic CT scans provided by three different institutions were separated into 36 training, 12 offline testing, and 12 online testing scans. Eleven participants completed the offline challenge, and seven completed the online challenge. The OARs were left and right lungs, heart, esophagus, and spinal cord. Clinical contours used for treatment planning were quality checked and edited to adhere to the RTOG 1106 contouring guidelines. Algorithms were evaluated using the Dice coefficient, Hausdorff distance, and mean surface distance. A consolidated score was computed by normalizing the metrics against interrater variability and averaging over all patients and structures. RESULTS: The interrater study revealed highest variability in Dice for the esophagus and spinal cord, and in surface distances for lungs and heart. Five out of seven algorithms that participated in the online challenge employed deep-learning methods. Although the top three participants using deep learning produced the best segmentation for all structures, there was no significant difference in the performance among them. The fourth place participant used a multi-atlas-based approach. The highest Dice scores were produced for lungs, with averages ranging from 0.95 to 0.98, while the lowest Dice scores were produced for esophagus, with a range of 0.55-0.72. CONCLUSION: The results of the challenge showed that the lungs and heart can be segmented fairly accurately by various algorithms, while deep-learning methods performed better on the esophagus. Our dataset together with the manual contours for all training cases continues to be available publicly as an ongoing benchmarking resource.
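One plausible form of the consolidated score described above (the exact normalization is defined by the challenge organizers; this is only a sketch with our own names):

```python
import numpy as np

def consolidated_score(metric, interrater_mean, interrater_sd):
    """Normalize a metric against interrater variability, then average over
    all patients and structures.

    metric: array shaped (patients, structures); interrater_mean/sd are the
    corresponding interrater statistics per structure (broadcastable).
    """
    z = (metric - interrater_mean) / interrater_sd
    return float(np.mean(z))
```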
Identification of optimal mother wavelets in survival prediction of lung cancer patients using wavelet decomposition‐based radiomic features
Soufi, Mazen
Arimura, Hidetaka
Nagami, Noriyuki
Medical Physics2018Journal Article, cited 1 times
Website
Radiomics
LIDC-IDRI
QIN LUNG CT
RIDER Lung CT
High quality imaging from sparsely sampled computed tomography data with deep learning and wavelet transform in various domains
Lee, Donghoong
Choi, Sunghoon
Kim, Hee‐Joung
Medical Physics2018Journal Article, cited 0 times
Website
LungCT-Diagnosis
wavelet
deep learning
Radiomics
More accurate and efficient segmentation of organs‐at‐risk in radiotherapy with Convolutional Neural Networks Cascades
Men, Kuo
Geng, Huaizhi
Cheng, Chingyun
Zhong, Haoyu
Huang, Mi
Fan, Yong
Plastaras, John P
Lin, Alexander
Xiao, Ying
Medical Physics2018Journal Article, cited 0 times
Website
HNSCC
segmentation
CNN
AnatomyNet: Deep learning for fast and fully automated whole‐volume segmentation of head and neck anatomy
Zhu, Wentao
Huang, Yufang
Zeng, Liang
Chen, Xuming
Liu, Yong
Qian, Zhen
Du, Nan
Fan, Wei
Xie, Xiaohui
Medical Physics2018Journal Article, cited 4 times
Website
Segmentation
Deep learning
Head and Neck Neoplasms
Radiation Therapy
U-Net
Head-Neck Cetuximab
MICCAI 2015
Multicenter CT phantoms public dataset for radiomics reproducibility tests
Kalendralis, Petros
Traverso, Alberto
Shi, Zhenwei
Zhovannik, Ivan
Monshouwer, Rene
Starmans, Martijn P A
Klein, Stefan
Pfaehler, Elisabeth
Boellaard, Ronald
Dekker, Andre
Wee, Leonard
Med Phys2019Journal Article, cited 0 times
Credence-Cartridge-Radiomics-Phantom
Algorithm Development
Reproducibility
PURPOSE: The aim of this paper is to describe a public, open-access, computed tomography (CT) phantom image set acquired at three centers and collected especially for radiomics reproducibility research. The dataset is useful for testing radiomic feature reproducibility with respect to various parameters, such as acquisition settings, scanners, and reconstruction algorithms. ACQUISITION AND VALIDATION METHODS: Three phantoms were scanned in three independent institutions. Images of the following phantoms were acquired: Catphan 700 and COPDGene Phantom II (Phantom Laboratory, Greenwich, NY, USA), and the Triple Modality 3D Abdominal Phantom (CIRS, Norfolk, VA, USA). Data were collected at three Dutch medical centers: MAASTRO Clinic (Maastricht, NL), Radboud University Medical Center (Nijmegen, NL), and University Medical Center Groningen (Groningen, NL), with scanners from two different manufacturers, Siemens Healthcare and Philips Healthcare. The following acquisition parameters were varied in the phantom scans: slice thickness, reconstruction kernels, and tube current. DATA FORMAT AND USAGE NOTES: We made the dataset publicly available on the Dutch instance of "Extensible Neuroimaging Archive Toolkit-XNAT" (https://xnat.bmia.nl). The dataset is freely available and reusable with attribution (Creative Commons 3.0 license). POTENTIAL APPLICATIONS: Our goal was to provide a findable, open-access, annotated, and reusable CT phantom dataset for radiomics reproducibility studies. Reproducibility testing and harmonization are fundamental requirements for wide generalizability of radiomics-based clinical prediction models. It is highly desirable to include only reproducible features in models, to be more assured of external validity across hitherto unseen contexts. In this view, phantom data from different centers represent a valuable source of information for excluding CT radiomic features that are already unstable with respect to simplified structures and tightly controlled scan settings. The intended extension of our shared dataset is to include other modalities and phantoms with more realistic lesion simulations.
Medical Physics2019Journal Article, cited 0 times
Website
HNSCC-3D-CT-RT
squamous cell carcinoma
HEAD AND NECK
computed tomography
PURPOSE: To describe in detail a dataset consisting of longitudinal fan-beam computed tomography (CT) imaging to visualize anatomical changes in head-and-neck squamous cell carcinoma (HNSCC) patients throughout the radiotherapy (RT) treatment course. ACQUISITION AND VALIDATION METHODS: This dataset consists of CT images from 31 HNSCC patients who underwent volumetric modulated arc therapy (VMAT). Each patient had three CT scans acquired throughout the treatment course: a pretreatment planning CT a median of 13 days before treatment (range: 2–27), a mid-treatment CT 22 days after the start of treatment (range: 13–38), and a post-treatment CT 65 days after the start of treatment (range: 35–192). Patients received RT to a total dose of 58–70 Gy, using daily 2.0–2.20 Gy fractions over 30–35 fractions. The fan-beam CT images were acquired using a Siemens 16-slice CT scanner head protocol at 120 kV with a tube current of 400 mAs. A helical scan with one rotation per second was used, with a slice thickness of 2 mm and a table increment of 1.2 mm. In addition to the imaging data, contours of anatomical structures for RT, demographics, and outcome measurements are provided. DATA FORMAT AND USAGE NOTES: The dataset, with DICOM files including images, RTSTRUCT files, and RTDOSE files, can be found and publicly accessed in The Cancer Imaging Archive (TCIA, http://www.cancerimagingarchive.net/) as the collection Head-and-neck squamous cell carcinoma patients with CT taken during pretreatment, mid-treatment, and post-treatment (HNSCC-3DCT-RT). DISCUSSION: This is the first dataset in TCIA to date that provides a collection of multiple CT imaging studies (pretreatment, mid-treatment, and post-treatment) throughout the treatment course. The dataset can serve a wide array of research projects, including (but not limited to): quantitative imaging assessment, investigation of anatomical changes as treatment progresses, dosimetry of target volumes and/or normal structures due to anatomical changes occurring during treatment, investigation of RT toxicity, and the effects of concurrent chemotherapy and RT on head-and-neck patients.
Multi-subtype classification model for non-small cell lung cancer based on radiomics: SLS model
Liu, J.
Cui, J.
Liu, F.
Yuan, Y.
Guo, F.
Zhang, G.
Med Phys2019Journal Article, cited 0 times
Website
NSCLC-Radiomics
Non Small Cell Lung Cancer (NSCLC)
Radiomics
Radiomic feature
PURPOSE: Histological subtypes of non-small cell lung cancer (NSCLC) are crucial for systematic treatment decisions. However, current studies using noninvasive radiomic methods to classify NSCLC histology subtypes have mainly focused on the two main subtypes, squamous cell carcinoma (SCC) and adenocarcinoma (ADC), while multi-subtype classifications that include the other two subtypes of NSCLC, large cell carcinoma (LCC) and not otherwise specified (NOS), have been very rare. The aim of this work is to establish a multi-subtype classification model for the four main subtypes of NSCLC and to improve the classification performance and generalization ability compared with previous studies. METHODS: In this work, we extracted 1029 features from regions of interest in computed tomography (CT) images of 349 patients from two different datasets using radiomic methods. Based on a 'three-in-one' concept, we proposed a model called SLS, wrapping three algorithms (synthetic minority oversampling technique, l2,1-norm minimization, and support vector machines) into one hybrid technique to classify the four main subtypes of NSCLC: SCC, ADC, LCC, and NOS, covering the whole range of NSCLC. RESULTS: We analyzed the 247 features obtained by dimension reduction and found that the features extracted by three methods, first-order statistics, gray level co-occurrence matrix, and gray level size zone matrix, were more conducive to the classification of NSCLC subtypes. The proposed SLS model achieved an average classification accuracy of 0.89 on the training set (95% confidence interval [CI]: 0.846 to 0.912) and a classification accuracy of 0.86 on the test set (95% CI: 0.779 to 0.941). CONCLUSIONS: The experimental results showed that the subtypes of NSCLC can be well classified by radiomic methods. Our SLS model can accurately classify and diagnose the four subtypes of NSCLC based on CT images, and thus it has the potential to be used in clinical practice to provide valuable information for lung cancer treatment and further promote personalized medicine.
Projection-domain scatter correction for cone beam computed tomography using a residual convolutional neural network
PURPOSE: Scatter is a major factor degrading the image quality of cone beam computed tomography (CBCT). Conventional scatter correction strategies require handcrafted analytical models with ad hoc assumptions, which often leads to less accurate scatter removal. This study aims to develop an effective scatter correction method using a residual convolutional neural network (CNN). METHODS: A U-net based 25-layer CNN was constructed for CBCT scatter correction. The establishment of the model consists of three steps: model training, validation, and testing. For model training, a total of 1800 pairs of X-ray projections and the corresponding scatter-only distributions in nonanthropomorphic phantoms, taken in full-fan scan, were generated using Monte Carlo simulation of a CBCT scanner installed with a proton therapy system. End-to-end CNN training was implemented with two major loss functions for 100 epochs with a mini-batch size of 10. Image rotations and flips were randomly applied to augment the training datasets during training. For validation, 200 projections of a digital head phantom were collected. The proposed CNN-based method was compared to a conventional projection-domain scatter correction method named fast adaptive scatter kernel superposition (fASKS), using 360 projections of an anthropomorphic head phantom. Two different loss functions were applied to the same CNN to evaluate the impact of loss functions on the final results. Furthermore, the CNN model trained with full-fan projections was fine-tuned for scatter correction in half-fan scans by using transfer learning with an additional 360 half-fan projection pairs of nonanthropomorphic phantoms. The tuned CNN model for half-fan scans was compared with the fASKS method as well as with the CNN-based method without fine-tuning, using additional lung phantom projections. RESULTS: The CNN-based method provides projections with significantly reduced scatter and CBCT images with more accurate Hounsfield Units (HUs) than the fASKS-based method. The root mean squared error of the CNN-corrected projections was improved to 0.0862, compared to 0.278 for uncorrected projections and 0.117 for fASKS-corrected projections. The CNN-corrected reconstruction provided better HU quantification, especially in regions near air or bone interfaces. All four image quality measures, which include mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM), indicated that the CNN-corrected images were significantly better than the fASKS-corrected images. Moreover, the proposed transfer learning technique made it possible for the CNN model trained with full-fan projections to be applied to scatter removal in half-fan projections after fine-tuning with only a small number of additional half-fan training datasets. The SSIM value of the tuned-CNN-corrected images was 0.9993, compared to 0.9984 for the non-tuned-CNN-corrected images and 0.9990 for the fASKS-corrected images. Finally, the CNN-based method is computationally efficient: correcting the 360 projections took less than 5 s in the reported experiments on a PC (4.20 GHz Intel Core-i7 CPU) with a single NVIDIA GTX 1070 GPU. CONCLUSIONS: The proposed deep learning-based method provides an effective tool for CBCT scatter correction and holds significant value for quantitative imaging and image-guided radiation therapy.
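The four reported image quality measures can be computed with scikit-image; a small sketch (array inputs and data ranges are assumptions):

```python
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

def image_quality(reference, corrected):
    """MAE, MSE, PSNR, and SSIM between a reference and a corrected image."""
    data_range = float(reference.max() - reference.min())
    return {"MAE": float(np.abs(reference - corrected).mean()),
            "MSE": float(mean_squared_error(reference, corrected)),
            "PSNR": peak_signal_noise_ratio(reference, corrected,
                                            data_range=data_range),
            "SSIM": structural_similarity(reference, corrected,
                                          data_range=data_range)}
```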
Machine learning approach for distinguishing malignant and benign lung nodules utilizing standardized perinodular parenchymal features from CT
Uthoff, J.
Stephens, M. J.
Newell, J. D., Jr.
Hoffman, E. A.
Larson, J.
Koehn, N.
De Stefano, F. A.
Lusk, C. M.
Wenzlaff, A. S.
Watza, D.
Neslund-Dudas, C.
Carr, L. L.
Lynch, D. A.
Schwartz, A. G.
Sieren, J. C.
Med Phys2019Journal Article, cited 62 times
Website
PURPOSE: Computed tomography (CT) is an effective method for detecting and characterizing lung nodules in vivo. With the growing use of chest CT, the detection frequency of lung nodules is increasing. Noninvasive methods to distinguish malignant from benign nodules have the potential to decrease the clinical burden, risk, and cost involved in follow-up procedures on the large number of false-positive lesions detected. This study examined the benefit of including perinodular parenchymal features in machine learning (ML) tools for pulmonary nodule assessment. METHODS: Lung nodule cases with pathology-confirmed diagnoses (74 malignant, 289 benign) were used to extract quantitative imaging characteristics from computed tomography scans of the nodule and perinodular parenchyma tissue. An ML tool development pipeline was employed, using k-medoids clustering and information theory to determine efficient predictor sets for different amounts of parenchyma inclusion and to build an artificial neural network classifier. The resulting ML tool was validated using an independent cohort (50 malignant, 50 benign). RESULTS: The inclusion of parenchymal imaging features improved the performance of the ML tool over exclusively nodular features (P < 0.01). The best-performing ML tool included features derived from nodule-diameter-based surrounding parenchyma tissue quartile bands. We demonstrate similarly high performance on the independent validation cohort (AUC-ROC = 0.965). A comparison using the independent validation cohort with the Fleischner pulmonary nodule follow-up guidelines demonstrated a theoretical reduction in recommended follow-up imaging and procedures. CONCLUSIONS: Radiomic features extracted from the parenchyma surrounding lung nodules contain valid signals with spatial relevance for the task of lung cancer risk classification. Through standardization of feature extraction regions from the parenchyma, ML tool validation performance of 100% sensitivity and 96% specificity was achieved.
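An illustrative approximation of peritumoral band extraction using morphological dilation (the paper's diameter-based quartile bands are defined differently, so treat this only as a sketch):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_bands(nodule_mask, spacing_mm, band_edges_mm=(0, 3, 6, 9, 12)):
    """Rings of parenchyma at increasing distance from the nodule surface.

    Uses isotropic dilation with an iteration count derived from voxel
    spacing (assumed roughly isotropic); each returned band is the shell
    between two consecutive distance edges.
    """
    bands = []
    prev = nodule_mask.astype(bool)
    for lo, hi in zip(band_edges_mm[:-1], band_edges_mm[1:]):
        iterations = max(1, int(round((hi - lo) / spacing_mm)))
        grown = binary_dilation(prev, iterations=iterations)
        bands.append(grown & ~prev)   # shell between the two distances
        prev = grown
    return bands
```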
Reliability of tumor segmentation in glioblastoma: impact on the robustness of MRI‐radiomic features
Tixier, Florent
Um, Hyemin
Young, Robert J
Veeraraghavan, Harini
Med Phys2019Journal Article, cited 0 times
Website
TCGA-GBM
Radiomics
Glioblastoma Multiforme (GBM)
PURPOSE: The use of radiomic features as biomarkers of treatment response and outcome or as correlates to genomic variations requires that the computed features are robust and reproducible. Segmentation, a crucial step in radiomic analysis, is a major source of variability in the computed radiomic features. Therefore, we studied the impact of tumor segmentation variability on the robustness of MRI radiomic features. METHODS: Fluid-attenuated inversion recovery (FLAIR) and contrast-enhanced T1-weighted (T1WICE) MRI of 90 patients diagnosed with glioblastoma were segmented using a semi-automatic algorithm and an interactive segmentation with two different raters. We analyzed the robustness of 108 radiomic features from 5 categories (intensity histogram, gray-level co-occurrence matrix, gray-level size-zone matrix (GLSZM), edge maps, and shape) using the intra-class correlation coefficient (ICC) and Bland and Altman analysis. RESULTS: Our results show that both segmentation methods are reliable, with ICC ≥ 0.96 and standard deviation (SD) of mean differences between the two raters (SDdiffs) ≤ 30%. Features computed from the histogram and co-occurrence matrices were found to be the most robust (ICC ≥ 0.8 and SDdiffs ≤ 30% for most features in these groups). Features from the GLSZM were shown to have mixed robustness. Edge, shape, and GLSZM features were the most impacted by the choice of segmentation method, with the interactive method resulting in more robust features than the semi-automatic method. Finally, features computed from T1WICE and FLAIR images were found to have similar robustness when computed with the interactive segmentation method. CONCLUSION: Semi-automatic and interactive segmentation methods using two raters are both reliable. The interactive method produced more robust features than the semi-automatic method. We also found that the robustness of radiomic features varied by category. Therefore, this study could help motivate segmentation methods and feature selection in MRI radiomic studies.
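A minimal sketch of the Bland and Altman analysis used above (helper names are ours):

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman statistics for a feature measured by two raters:
    returns the bias (mean difference) and the 95% limits of agreement."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x - y
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)
```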
Technical Note‐In silico imaging tools from the VICTRE clinical trial
Sharma, Diksha
Graff, Christian G.
Badal, Andreu
Zeng, Rongping
Sawant, Purva
Sengupta, Aunnasha
Dahal, Eshan
Badano, Aldo
Medical Physics2019Journal Article, cited 0 times
VICTRE
BREAST
Model
PURPOSE: In silico imaging clinical trials are emerging alternative sources of evidence for regulatory evaluation and are typically cheaper and faster than human trials. In this Note, we describe the set of in silico imaging software tools used in the VICTRE (Virtual Clinical Trial for Regulatory Evaluation) which replicated a traditional trial using a computational pipeline. MATERIALS AND METHODS: We describe a complete imaging clinical trial software package for comparing two breast imaging modalities (digital mammography and digital breast tomosynthesis). First, digital breast models were developed based on procedural generation techniques for normal anatomy. Second, lesions were inserted in a subset of breast models. The breasts were imaged using GPU-accelerated Monte Carlo transport methods and read using image interpretation models for the presence of lesions. All in silico components were assembled into a computational pipeline. The VICTRE images were made available in DICOM format for ease of use and visualization. RESULTS: We describe an open-source collection of in silico tools for running imaging clinical trials. All tools and source codes have been made freely available. CONCLUSION: The open-source tools distributed as part of the VICTRE project facilitate the design and execution of other in silico imaging clinical trials. The entire pipeline can be run as a complete imaging chain, modified to match needs of other trial designs, or used as independent components to build additional pipelines.
ALTIS: A fast and automatic lung and trachea CT-image segmentation method
Sousa, A. M.
Martins, S. B.
Falcão, A. X.
Reis, F.
Bagatin, E.
Irion, K.
Med Phys2019Journal Article, cited 0 times
LIDC-IDRI
Algorithm Development
Segmentation
PURPOSE: The automated segmentation of each lung and the trachea in CT scans is commonly taken as a solved problem. Indeed, existing approaches may easily fail in the presence of abnormalities caused by disease, trauma, or previous surgery. For robustness, we present ALTIS (implementation available at http://lids.ic.unicamp.br/downloads), a fast automatic lung and trachea CT-image segmentation method that relies on image features and relative shape- and intensity-based characteristics that are less affected by most appearance variations of abnormal lungs and tracheas. METHODS: ALTIS consists of a sequence of image foresting transforms (IFTs) organized in three main steps: (a) lung-and-trachea extraction, (b) seed estimation inside the background, trachea, left lung, and right lung, and (c) their delineation such that each object is defined by an optimum-path forest rooted at its internal seeds. We compare ALTIS with two methods based on shape models (SOSM-S and MALF) and one algorithm based on seeded region growing (PTK). RESULTS: The experiments involve the highest number of scans found in the literature: 1255 scans from multiple public datasets containing many anomalous cases, with only 50 normal scans used for training and 1205 scans used for testing the methods. Quantitative experiments are based on two metrics, DICE and ASSD. Furthermore, we also demonstrate the robustness of ALTIS in seed estimation. Considering the test set, the proposed method achieves an average DICE of 0.987 for both lungs and 0.898 for the trachea, and an average ASSD of 0.938 for the right lung, 0.856 for the left lung, and 1.316 for the trachea. These results indicate that ALTIS is statistically more accurate and considerably faster than the compared methods, being able to complete segmentation in a few seconds on modern PCs. CONCLUSION: ALTIS is the most effective and efficient choice among the compared methods for segmenting the left lung, right lung, and trachea in anomalous CT scans for subsequent detection, segmentation, and quantitative analysis of abnormal structures in the lung parenchyma and pleural space.
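A compact sketch of the ASSD metric used in this evaluation (a standard surface-distance formulation; implementation details are ours):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def assd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance between two binary 3D masks,
    in the physical units given by the voxel spacing."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~binary_erosion(a)   # boundary voxels of each mask
    surf_b = b & ~binary_erosion(b)
    d_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    d_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    dists = np.r_[d_to_b[surf_a], d_to_a[surf_b]]
    return float(dists.mean())
```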
Stability and reproducibility of computed tomography radiomic features extracted from peritumoral regions of lung cancer lesions
Tunali, Ilke
Hall, Lawrence O
Napel, Sandy
Cherezov, Dmitry
Guvenis, Albert
Gillies, Robert J
Schabath, Matthew B
Med Phys2019Journal Article, cited 0 times
LUNG
Radiomics
PURPOSE: Recent efforts have demonstrated that radiomic features extracted from the peritumoral region, the area surrounding the tumor parenchyma, have clinical utility in various cancer types. However, like any radiomic features, peritumoral features can be unstable and/or nonreproducible. Hence, the purpose of this study was to assess the stability and reproducibility of computed tomography (CT) radiomic features extracted from the peritumoral regions of lung lesions, where stability was defined as the consistency of a feature across different segmentations, and reproducibility was defined as the consistency of a feature across different image acquisitions. METHODS: Stability was measured utilizing the "moist run" dataset and reproducibility was measured utilizing the Reference Image Database to Evaluate Therapy Response test-retest dataset. Peritumoral radiomic features were extracted from incremental distances of 3-12 mm outside the tumor segmentation. A total of 264 statistical, histogram, and texture radiomic features were assessed from the selected peritumoral regions-of-interest (ROIs). All features (except wavelet texture features) were extracted using standardized algorithms defined by the Image Biomarker Standardisation Initiative. Stability and reproducibility of features were assessed using the concordance correlation coefficient. The clinical utility of stable and reproducible peritumoral features was tested in three previously published lung cancer datasets using overall survival as the endpoint. RESULTS: Features found to be stable and reproducible, regardless of the peritumoral distance, included statistical, histogram, and a subset of texture features, suggesting that these features are less affected by changes (e.g., size or shape) of the peritumoral region due to different segmentations and image acquisitions. The stability and reproducibility of Laws and wavelet texture features were inconsistent across all peritumoral distances. The analyses also revealed that a subset of features was consistently stable irrespective of the initial parameters (e.g., seed point) for a given segmentation algorithm. No significant differences were found in stability between features extracted from ROIs bounded by a lung parenchyma mask and those from unbounded ROIs (i.e., peritumoral regions that extended outside the lung parenchyma). After testing the clinical utility of peritumoral features, stable and reproducible features were shown to be more likely to yield repeatable models than unstable and nonreproducible features. CONCLUSIONS: This study identified a subset of stable and reproducible CT radiomic features extracted from the peritumoral region of lung lesions. The stable and reproducible features identified in this study could be applied to a feature selection pipeline for CT radiomic analyses. According to our findings, the top-performing features in survival models were more likely to be stable and reproducible; hence, it may be best practice to utilize them to achieve repeatable studies and reduce the chance of overfitting.
Automatic classification of lung nodule candidates based on a novel 3D convolution network and knowledge transferred from a 2D network
Zuo, Wangxia
Zhou, Fuqiang
He, Yuzhu
Li, Xiaosong
Med Phys2019Journal Article, cited 0 times
LIDC-IDRI
Algorithm Development
Computer Aided Detection (CADe)
OBJECTIVE: An automatic lung nodule detection system must judge the authenticity of a large number of nodule candidates, which is a classification task. However, the variable shapes and sizes of lung nodules pose a great challenge to candidate classification. To solve this problem, we propose a method for classifying nodule candidates with a three-dimensional (3D) convolutional neural network (ConvNet) model that is trained by transferring knowledge from a multiresolution two-dimensional (2D) ConvNet model. METHODS: In this scheme, a novel 3D ConvNet model is preweighted with the weights of a trained 2D ConvNet model, and then the 3D ConvNet model is trained with 3D image volumes. In this way, the knowledge transfer method can make the 3D network easier to converge and make full use of the spatial information of nodules with different sizes and shapes to improve classification accuracy. RESULTS: The experimental results on 551,065 pulmonary nodule candidates in the LUNA16 dataset show that our method gains a competitive average score in the false-positive reduction track of lung nodule detection, with sensitivities of 0.619 and 0.642 at 0.125 and 0.25 FPs per scan, respectively. CONCLUSIONS: The proposed method can maintain satisfactory classification accuracy even when the false-positive rate is extremely small, in the face of nodules of different sizes and shapes. Moreover, as a transfer learning approach, the method of transferring knowledge from a 2D ConvNet to a 3D ConvNet is the first attempt to carry out a full migration of parameters across layers, including convolution layers, fully connected layers, and the classifier, between models of different dimensionality, which is more conducive to utilizing existing 2D ConvNet resources and generalizing transfer learning schemes.
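An I3D-style sketch of preweighting a 3D convolution from a trained 2D kernel (the paper transfers all layer types; only the convolution case is shown here, and the details are assumptions):

```python
import torch
import torch.nn as nn

def inflate_conv(conv2d: nn.Conv2d, depth: int = 3) -> nn.Conv3d:
    """Build a 3D convolution preweighted from a trained 2D kernel by
    replicating it along the new depth axis and rescaling by 1/depth so
    the initial 3D response matches the 2D one on depth-constant inputs.
    Assumes tuple-style kernel_size/padding on the source layer."""
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       (depth, *conv2d.kernel_size),
                       padding=(depth // 2, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        w = conv2d.weight.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
        conv3d.weight.copy_(w)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d
```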
A fast and scalable method for quality assurance of deformable image registration on lung CT scans using convolutional neural networks
Galib, Shaikat M
Lee, Hyoung K
Guy, Christopher L
Riblett, Matthew J
Hugo, Geoffrey D
Med Phys2020Journal Article, cited 1 times
Website
4D-Lung
Deep Learning
Image registration
PURPOSE: To develop and evaluate a method to automatically identify and quantify deformable image registration (DIR) errors between lung computed tomography (CT) scans for quality assurance (QA) purposes. METHODS: We propose a deep learning method to flag registration errors. The method involves preparation of a dataset for machine learning model training and testing, design of a three-dimensional (3D) convolutional neural network architecture that classifies registrations into good or poor classes, and evaluation of a metric called the registration error index (REI), which provides a quantitative measure of registration error. RESULTS: Our study shows that, despite having a limited number of training images available (10 CT scan pairs for training and 17 CT scan pairs for testing), the method achieves 0.882 AUC-ROC on the test dataset. Furthermore, the combined standard uncertainty of the REI estimated by our model lies within +/- 0.11 (+/- 11% of the true REI value), with a confidence level of approximately 68%. CONCLUSIONS: We have developed and evaluated our method using original clinical registrations without generating any synthetic/simulated data. Moreover, test data were acquired from a different environment than that of the training data, so that the method was validated robustly. The results of this study showed that our algorithm performs reasonably well in challenging scenarios.
Head and neck cancer patient images for determining auto-segmentation accuracy in T2-weighted magnetic resonance imaging through expert manual segmentations
PURPOSE: The use of magnetic resonance imaging (MRI) in radiotherapy treatment planning has rapidly increased due to its ability to evaluate a patient's anatomy without the use of ionizing radiation and due to its high soft tissue contrast. For these reasons, MRI has become the modality of choice for longitudinal and adaptive treatment studies. Automatic segmentation could offer many benefits for these studies. In this work, we describe a T2-weighted MRI dataset of head and neck cancer patients that can be used to evaluate the accuracy of head and neck normal tissue auto-segmentation systems through comparisons to available expert manual segmentations. ACQUISITION AND VALIDATION METHODS: T2-weighted MRI images were acquired for 55 head and neck cancer patients. These scans were collected after radiotherapy computed tomography (CT) simulation scans using a thermoplastic mask to replicate the patient treatment position. All scans were acquired on a single 1.5 T Siemens MAGNETOM Aera MRI with two large four-channel flex phased-array coils. The scans covered the region encompassing the nasopharynx region cranially and the supraclavicular lymph node region caudally, when possible, in the superior-inferior direction. Manual contours were created for the left/right submandibular gland, left/right parotids, left/right lymph node level II, and left/right lymph node level III. These contours underwent quality assurance to ensure adherence to predefined guidelines, and were corrected if edits were necessary. DATA FORMAT AND USAGE NOTES: The T2-weighted images and RTSTRUCT files are available in DICOM format. The regions of interest are named based on AAPM's Task Group 263 nomenclature recommendations (Glnd_Submand_L, Glnd_Submand_R, LN_Neck_II_L, Parotid_L, Parotid_R, LN_Neck_II_R, LN_Neck_III_L, LN_Neck_III_R). This dataset is available on The Cancer Imaging Archive (TCIA) by the National Cancer Institute under the collection "AAPM RT-MAC Grand Challenge 2019" (https://doi.org/10.7937/tcia.2019.bcfjqfqb). POTENTIAL APPLICATIONS: This dataset provides head and neck patient MRI scans to evaluate auto-segmentation systems on T2-weighted images. Additional anatomies could be provided at a later time to enhance the existing library of contours.
Spline curve deformation model with prior shapes for identifying adhesion boundaries between large lung tumors and tissues around lungs in CT images
Zhang, Xin
Wang, Jie
Yang, Ying
Wang, Bing
Gu, Lixu
Med Phys2020Journal Article, cited 0 times
Website
LIDC-IDRI
RIDER Lung CT
Segmentation
PURPOSE: Automated segmentation of lung tumors attached to anatomic structures such as the chest wall or mediastinum remains a technical challenge because of the similar Hounsfield units of these structures. To address this challenge, we propose herein a spline curve deformation model that combines prior shapes to correct large spatially contiguous errors (LSCEs) in input shapes derived from image-appearance cues. The model is then used to identify the adhesion boundaries between large lung tumors and tissue around the lungs. METHODS: The deformation of the whole curve is driven by the transformation of the control points (CPs) of the spline curve, which are influenced by external and internal forces. The external force drives the model to fit the positions of the non-LSCEs of the input shapes, while the internal force ensures the local similarity of the displacements of the neighboring CPs. The proposed model corrects the gross errors in the lung input shape caused by large lung tumors, where the initial lung shape for the model is inferred from the training shapes by shape group-based sparse prior information and the input lung shape is inferred by adaptive-thresholding-based segmentation followed by morphological refinement. RESULTS: The accuracy of the proposed model is verified by applying it to images of lungs with either moderate large-sized (ML) tumors or giant large-sized (GL) tumors. The quantitative results in terms of the averages of the dice similarity coefficient (DSC) and the Jaccard similarity index (SI) are 0.982 +/- 0.006 and 0.965 +/- 0.012 for segmentation of lungs adhered by ML tumors, and 0.952 +/- 0.048 and 0.926 +/- 0.059 for segmentation of lungs adhered by GL tumors, which give 0.943 +/- 0.021 and 0.897 +/- 0.041 for segmentation of the ML tumors, and 0.907 +/- 0.057 and 0.888 +/- 0.091 for segmentation of the GL tumors, respectively. In addition, the bidirectional Hausdorff distances are 5.7 +/- 1.4 and 11.3 +/- 2.5 mm for segmentation of lungs with ML and GL tumors, respectively. CONCLUSIONS: When combined with prior shapes, the proposed spline curve deformation can deal with large spatially consecutive errors in object shapes obtained from image-appearance information. We verified this method by applying it to the segmentation of lungs with large tumors adhered to the tissue around the lungs and the large tumors. Both the qualitative and quantitative results are more accurate and repeatable than results obtained with current state-of-the-art techniques.
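For readers unfamiliar with control-point-driven deformation, the sketch below builds a cubic B-spline boundary from a handful of control points and displaces one of them; the paper's external/internal force formulation is not reproduced, and the control points are toy values:

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                   # cubic spline
ctrl = np.array([[0., 0.], [1., 2.], [3., 2.5],
                 [4., 0.5], [2., -1.], [0., 0.]])   # hypothetical boundary CPs
n = len(ctrl)
t = np.concatenate(([0.] * k, np.linspace(0., 1., n - k + 1), [1.] * k))  # clamped knots

ctrl_new = ctrl.copy()
ctrl_new[2] += [0.0, -0.8]              # one CP displaced by an "external force"
deformed = BSpline(t, ctrl_new, k)

s = np.linspace(0., 1., 200)
boundary = deformed(s)                  # (200, 2) samples of the deformed curve
```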
Stationary computed tomography with source and detector in linear symmetric geometry: Direct filtered backprojection reconstruction
Zhang, Tao
Xing, Yuxiang
Zhang, Li
Jin, Xin
Gao, Hewei
Chen, Zhiqiang
Medical Physics2020Journal Article, cited 0 times
Pancreas-CT
PURPOSE: Inverse-geometry computed tomography (IGCT) could have great potential in medical applications and security inspections, and has been actively investigated in recent years. In this work, we explore a special architecture of IGCT in a stationary configuration: symmetric-geometry computed tomography (SGCT), where the x-ray source and detector are linearly distributed in a symmetric design. A direct filtered backprojection (FBP)-type algorithm is developed to analytically reconstruct images from the SGCT projections.
METHODS: In our proposed SGCT system, a large number of x-ray source points equally distributed along a straight-line trajectory fire sequentially in an ultra-fast manner on one side, and an equispaced detector whose total length is comparable to that of the source continuously collects data on the opposite side, as the object to be scanned moves into the imaging plane. We first present the overall design of SGCT. An FBP-type reconstruction algorithm is then derived for this unique imaging configuration. With finite lengths of the x-ray source and detector arrays, projection data from one segment of an SGCT scan are insufficient for an exact reconstruction. As a result, in practical applications, a dual-SGCT scan, whose detector segments are placed perpendicular to each other, is of particular interest and is proposed. Two segments of SGCT together can ensure that the passing rays cover at least 180 degrees for each and every point if carefully designed. In general, however, there exists a data redundancy problem for a dual-SGCT. So a weighting strategy is developed to maximize the use of the projection data collected while avoiding image artifacts. In addition, we further extend the fan-beam SGCT to cone beam and obtain a Feldkamp-Davis-Kress (FDK)-type reconstruction algorithm. Finally, we conduct a set of experimental studies both in simulation and on a prototype SGCT system and validate our proposed methods.
RESULTS: A simulation study using the Shepp-Logan head phantom confirms that CT images can be exactly reconstructed from a dual-SGCT scan and that our proposed weighting strategy is able to handle the data redundancy properly. Compared with the rebinning-to-parallel-beam method using the forward projection of an abdominal CT dataset, our proposed method is seen to be less sensitive to data truncation. Our algorithm can achieve 10.64 lp/cm of spatial resolution at the 50% modulation transfer function point, higher than that of the rebinning method, which only reaches 9.42 lp/cm even with extremely fine interpolation. Real experiments with a cylindrical object on a prototype SGCT further demonstrate the effectiveness and practicability of the proposed direct FBP method, with a noise performance similar to that of the rebinning algorithm.
CONCLUSIONS: A new concept of SGCT with linearly distributed source and detector is investigated in this work, in which spinning of sources and detectors is no longer needed during data acquisition, simplifying system design, development, and manufacturing. A direct FBP-type algorithm is developed for analytical reconstruction from SGCT projection data. Numerical and real experiments validate our method and show that an exact CT image can be reconstructed from a dual-SGCT scan, where the data redundancy problem can be solved by our proposed weighting function.
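As a point of reference for the reconstruction pipeline (filtering followed by backprojection), the sketch below runs a conventional rotating-geometry FBP on the Shepp-Logan phantom with scikit-image; SGCT's linear-geometry filtering, redundancy weighting, and the FDK extension follow the paper's own derivation and are not shown:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()                        # 400 x 400 test image
theta = np.linspace(0., 180., 400, endpoint=False)     # projection angles (deg)
sinogram = radon(phantom, theta=theta)                 # forward projection
recon = iradon(sinogram, theta=theta,
               filter_name="ramp")                     # scikit-image >= 0.19 API
print("mean abs error:", np.abs(recon - phantom).mean())
```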
Recurrent Attention Network for False Positive Reduction in the Detection of Pulmonary Nodules in Thoracic CT Scans
M. Mehdi Farhangi
Nicholas Petrick
Berkman Sahiner
Hichem Frigui
Amir A. Amini
Aria Pezeshk
Med Phys2020Journal Article, cited 0 times
Website
LIDC-IDRI
LUNA16 Challenge
National Lung Screening Trial (NLST)
PURPOSE: Multi-view 2-D Convolutional Neural Networks (CNNs) and 3-D CNNs have been successfully used for analyzing volumetric data in many state-of-the-art medical imaging applications. We propose an alternative modular framework that analyzes volumetric data with an approach that is analogous to radiologists' interpretation, and apply the framework to reduce false positives that are generated in Computer-Aided Detection (CADe) systems for pulmonary nodules in thoracic CT scans. METHODS: In our approach, a deep network consisting of 2-D CNNs first processes slices individually. The features extracted in this stage are then passed to a Recurrent Neural Network (RNN), thereby modeling consecutive slices as a sequence of temporal data and capturing the contextual information across all three dimensions in the volume of interest. Outputs of the RNN layer are weighted before the final fully connected layer, enabling the network to scale the importance of different slices within a volume of interest in an end-to-end training framework. RESULTS: We validated the proposed architecture on the false positive reduction track of the Lung Nodule Analysis (LUNA) challenge for pulmonary nodule detection in chest CT scans, and obtained competitive results compared to 3-D CNNs. Our results show that the proposed approach can encode the 3-D information in volumetric data effectively by achieving a sensitivity > 0.8 with just 1/8 false positives per scan. CONCLUSIONS: Our experimental results demonstrate the effectiveness of temporal analysis of volumetric images for the application of false positive reduction in chest CT scans and show that state-of-the-art 2-D architectures from the literature can be directly applied to analyzing volumetric medical data. As newer and better 2-D architectures are being developed at a much faster rate compared to 3-D architectures, our approach makes it easy to obtain state-of-the-art performance on volumetric data using new 2-D architectures.
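A minimal PyTorch sketch of the slice-sequence idea described here: a 2D CNN encodes each slice, a recurrent layer runs across slices, and learned slice weights pool the sequence before the classifier. Layer sizes are illustrative, not the authors' architecture:

```python
import torch
import torch.nn as nn

class SliceSequenceClassifier(nn.Module):
    """2D CNN per slice -> GRU across slices -> learned slice weighting -> FC."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, feat_dim), nn.ReLU())
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each slice's contribution
        self.fc = nn.Linear(hidden, 2)     # nodule vs false positive

    def forward(self, x):                  # x: (batch, slices, 1, H, W)
        b, s = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(b, s, -1)
        h, _ = self.rnn(f)                 # (batch, slices, hidden)
        w = torch.softmax(self.attn(h), dim=1)
        return self.fc((w * h).sum(dim=1))

logits = SliceSequenceClassifier()(torch.randn(2, 9, 1, 32, 32))
```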
Homology-based radiomic features for prediction of the prognosis of lung cancer based on CT-based radiomics
Kadoya, Noriyuki
Tanaka, Shohei
Kajikawa, Tomohiro
Tanabe, Shunpei
Abe, Kota
Nakajima, Yujiro
Yamamoto, Takaya
Takahashi, Noriyoshi
Takeda, Kazuya
Dobashi, Suguru
Takeda, Ken
Nakane, Kazuaki
Jingu, Keiichi
Med Phys2020Journal Article, cited 0 times
Website
NSCLC Radiogenomics
RIDER Lung CT
QIN LUNG CT
Radiomics
PURPOSE: Radiomics is a new technique that enables noninvasive prognostic prediction by extracting features from medical images. Homology is a concept used in many branches of algebra and topology that can quantify the degree of contact. In the present study, we developed homology-based radiomic features to predict the prognosis of non-small-cell lung cancer (NSCLC) patients and then evaluated the accuracy of this prediction method. METHODS: Four data sets were used: two to provide training and test data and two for the selection of robust radiomic features. All the data sets were downloaded from The Cancer Imaging Archive (TCIA). In two-dimensional cases, the Betti numbers consist of two values: b0 (zero-dimensional Betti number), which is the number of isolated components, and b1 (one-dimensional Betti number), which is the number of one-dimensional or "circular" holes. For homology-based evaluation, CT images must be converted to binarized images in which each pixel has two possible values: 0 or 1. All CT slices of the gross tumor volume were used for calculating the homology histogram. First, by changing the threshold of the CT value (range: -150 to 300 HU) for all its slices, we developed homology-based histograms for b0, b1, and b1/b0 using binarized images. All histograms were then summed, and the summed histogram was normalized by the number of slices. A total of 144 homology-based radiomic features were defined from the histogram. For comparison with the standard radiomics technique, 107 standard radiomic features were also calculated. To clarify the prognostic power, the relationship between the values of the homology-based radiomic features and overall survival was evaluated using a LASSO Cox regression model and the Kaplan-Meier method. The retained features with nonzero coefficients calculated by the LASSO Cox regression model were used for fitting the regression model. Moreover, these features were then integrated into a radiomics signature. An individualized rad score was calculated from a linear combination of the selected features, which were weighted by their respective coefficients. RESULTS: When the patients in the training and test data sets were stratified into high-risk and low-risk groups according to the rad scores, the overall survival of the groups was significantly different. The C-index values for the homology-based features (rad score), standard features (rad score), and tumor size were 0.625, 0.603, and 0.607, respectively, for the training data sets and 0.689, 0.668, and 0.667 for the test data sets. This result showed that homology-based radiomic features had slightly higher prediction power than the standard radiomic features. CONCLUSIONS: Prediction using homology-based radiomic features had comparable or slightly higher power than standard radiomic features. These findings suggest that homology-based radiomic features may have great potential for improving the prognostic prediction accuracy of CT-based radiomics. It should be noted, however, that this study has some limitations.
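The Betti numbers of a binarized slice reduce to connected-component counts: b0 is the number of foreground components, and b1 is the number of holes (background components other than the exterior). A sketch using the usual 4-/8-connectivity pairing, with a random stand-in for a GTV slice:

```python
import numpy as np
from scipy import ndimage

def betti_numbers_2d(binary):
    """b0 = foreground components (4-connected); b1 = holes in the foreground."""
    four = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
    eight = np.ones((3, 3), int)           # dual connectivity for the background
    _, b0 = ndimage.label(binary, structure=four)
    bg = np.pad(~binary.astype(bool), 1, constant_values=True)
    _, n_bg = ndimage.label(bg, structure=eight)
    return b0, n_bg - 1                    # subtract the exterior component

ct_slice = np.random.uniform(-200., 350., (64, 64))   # stand-in HU values
for hu in range(-150, 301, 50):            # threshold sweep as in the abstract
    b0, b1 = betti_numbers_2d(ct_slice >= hu)
```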
CT images with expert manual contours of thoracic cancer for benchmarking auto-segmentation accuracy
Yang, J.
Veeraraghavan, H.
van Elmpt, W.
Dekker, A.
Gooding, M.
Sharp, G.
Med Phys2020Journal Article, cited 0 times
LCTSC
Lung CT Segmentation Challenge 2017
Automatic segmentation
Computed Tomography (CT)
Algorithm Development
PURPOSE: Automatic segmentation offers many benefits for radiotherapy treatment planning; however, the lack of publicly available benchmark datasets limits the clinical use of automatic segmentation. In this work, we present a well-curated computed tomography (CT) dataset of high-quality manually drawn contours from patients with thoracic cancer that can be used to evaluate the accuracy of thoracic normal tissue auto-segmentation systems. ACQUISITION AND VALIDATION METHODS: Computed tomography scans of 60 patients undergoing treatment simulation for thoracic radiotherapy were acquired from three institutions: MD Anderson Cancer Center, Memorial Sloan Kettering Cancer Center, and the MAASTRO clinic. Each institution provided CT scans from 20 patients, including mean intensity projection four-dimensional CT (4D CT), exhale phase (4D CT), or free-breathing CT scans depending on their clinical practice. All CT scans covered the entire thoracic region with a 50-cm field of view and slice spacing of 1, 2.5, or 3 mm. Manual contours of left/right lungs, esophagus, heart, and spinal cord were retrieved from the clinical treatment plans. These contours were checked for quality and edited if necessary to ensure adherence to RTOG 1106 contouring guidelines. DATA FORMAT AND USAGE NOTES: The CT images and RTSTRUCT files are available in DICOM format. The regions of interest were named according to the nomenclature recommended by American Association of Physicists in Medicine Task Group 263 as Lung_L, Lung_R, Esophagus, Heart, and SpinalCord. This dataset is available on The Cancer Imaging Archive (funded by the National Cancer Institute) under Lung CT Segmentation Challenge 2017 (http://doi.org/10.7937/K9/TCIA.2017.3r3fvz08). POTENTIAL APPLICATIONS: This dataset provides CT scans with well-delineated manually drawn contours from patients with thoracic cancer that can be used to evaluate auto-segmentation systems. Additional anatomies could be supplied in the future to enhance the existing library of contours.
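Since the contours ship as DICOM RTSTRUCT with TG-263 names, a few lines of pydicom are enough to enumerate them (the file name is hypothetical):

```python
import pydicom

ds = pydicom.dcmread("RS.lctsc_case.dcm")       # hypothetical RTSTRUCT file name
for roi in ds.StructureSetROISequence:
    print(roi.ROINumber, roi.ROIName)           # e.g., Lung_L, Esophagus, SpinalCord
```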
A method of rapid quantification of patient-specific organ doses for CT using deep-learning-based multi-organ segmentation and GPU-accelerated Monte Carlo dose computing
Peng, Z.
Fang, X.
Yan, P.
Shan, H.
Liu, T.
Pei, X.
Wang, G.
Liu, B.
Kalra, M. K.
Xu, X. G.
Med Phys2020Journal Article, cited 0 times
Website
Lung CT Segmentation Challenge 2017
LCTSC
Segmentation
Algorithm Development
PURPOSE: One technical barrier to patient-specific computed tomography (CT) dosimetry has been the lack of computational tools for the automatic patient-specific multi-organ segmentation of CT images and rapid organ dose quantification. When previous CT images are available for the same body region of the patient, the ability to obtain patient-specific organ doses for CT - in a similar manner as radiation therapy treatment planning - will open the door to personalized and prospective CT scan protocols. This study aims to demonstrate the feasibility of combining deep-learning algorithms for automatic segmentation of multiple radiosensitive organs from CT images with the GPU-based Monte Carlo rapid organ dose calculation. METHODS: A deep convolutional neural network (CNN) based on the U-Net for organ segmentation is developed and trained to automatically delineate multiple radiosensitive organs from CT images. Two databases are used: The lung CT segmentation challenge 2017 (LCTSC) dataset that contains 60 thoracic CT scan patients, each consisting of five segmented organs, and the Pancreas-CT (PCT) dataset, which contains 43 abdominal CT scan patients each consisting of eight segmented organs. A fivefold cross-validation method is performed on both sets of data. Dice similarity coefficients (DSCs) are used to evaluate the segmentation performance against the ground truth. A GPU-based Monte Carlo dose code, ARCHER, is used to calculate patient-specific CT organ doses. The proposed method is evaluated in terms of relative dose errors (RDEs). To demonstrate the potential improvement of the new method, organ dose results are compared against those obtained for population-average patient phantoms used in an off-line dose reporting software, VirtualDose, at Massachusetts General Hospital. RESULTS: The median DSCs are found to be 0.97 (right lung), 0.96 (left lung), 0.92 (heart), 0.86 (spinal cord), 0.76 (esophagus) for the LCTSC dataset, along with 0.96 (spleen), 0.96 (liver), 0.95 (left kidney), 0.90 (stomach), 0.87 (gall bladder), 0.80 (pancreas), 0.75 (esophagus), and 0.61 (duodenum) for the PCT dataset. Comparing with organ dose results from population-averaged phantoms, the new patient-specific method achieved smaller absolute RDEs (mean +/- standard deviation) for all organs: 1.8% +/- 1.4% (vs 16.0% +/- 11.8%) for the lung, 0.8% +/- 0.7% (vs 34.0% +/- 31.1%) for the heart, 1.6% +/- 1.7% (vs 45.7% +/- 29.3%) for the esophagus, 0.6% +/- 1.2% (vs 15.8% +/- 12.7%) for the spleen, 1.2% +/- 1.0% (vs 18.1% +/- 15.7%) for the pancreas, 0.9% +/- 0.6% (vs 20.0% +/- 15.2%) for the left kidney, 1.7% +/- 3.1% (vs 19.1% +/- 9.8%) for the gallbladder, 0.3% +/- 0.3% (vs 24.2% +/- 18.7%) for the liver, and 1.6% +/- 1.7% (vs 19.3% +/- 13.6%) for the stomach. The trained automatic segmentation tool takes <5 s per patient for all 103 patients in the dataset. The Monte Carlo radiation dose calculations performed in parallel to the segmentation process using the GPU-accelerated ARCHER code take <4 s per patient to achieve <0.5% statistical uncertainty in all organ doses for all 103 patients in the database. CONCLUSION: This work shows the feasibility to perform combined automatic patient-specific multi-organ segmentation of CT images and rapid GPU-based Monte Carlo dose quantification with clinically acceptable accuracy and efficiency.
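Both the segmentation and the dosimetric evaluation in this paper reduce to two small formulas; below are minimal implementations of the Dice similarity coefficient and a relative dose error, under the usual definitions (the abstract does not spell out its exact RDE formula):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def relative_dose_error(d_est, d_ref):
    """Relative dose error (assumed definition: |estimate - reference| / reference)."""
    return abs(d_est - d_ref) / d_ref
```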
Automated proton treatment planning with robust optimization using constrained hierarchical optimization
Taasti, Vicki T.
Hong, Linda
Deasy, Joseph O.
Zarepisheh, Masoud
Medical Physics2020Journal Article, cited 0 times
HNSCC-3DCT-RT
PURPOSE: We present a method for fully automated generation of high-quality robust proton treatment plans using hierarchical optimization. To fill the gap between the two common extreme robust optimization approaches, that is, stochastic and worst-case, a robust optimization approach based on the p-norm function is used whereby a single parameter, p, can be used to control the level of robustness in an intuitive way.
METHODS: A fully automated approach to treatment planning using Expedited Constrained Hierarchical Optimization (ECHO) is implemented in our clinic for photon treatments. ECHO strictly enforces critical (inviolable) clinical criteria as hard constraints and improves the desirable clinical criteria sequentially, as much as is feasible. We extend our in-house developed ECHO codes for proton therapy and integrate them with a new approach for robust optimization. Multiple scenarios accounting for both setup and range uncertainties are included (13 scenarios), and the maximum/mean/dose-volume constraints on organs-at-risk (OARs) and the target are fulfilled in all scenarios. We combine the objective functions of the individual scenarios using the p-norm function. The p-norm with parameter p = 1 or p = ∞ results in the stochastic or the worst-case approach, respectively; an intermediate robustness level is obtained by employing p-values in between. While the worst-case approach only focuses on the worst-case scenario(s), the p-norm approach with a large p value (p ≈ 20) resembles the worst-case approach without completely neglecting the other scenarios. The proposed approach is evaluated on three head-and-neck (HN) patients and one water phantom with different parameters, p ∈ {1, 2, 5, 10, 20}. The results are compared against the stochastic approach (the p-norm approach with p = 1) and the worst-case approach, as well as the nonrobust approach (optimized solely on the nominal scenario).
RESULTS: The proposed algorithm successfully generates automated robust proton plans for all cases. As opposed to the nonrobust plans, the robust plans have narrower dose volume histogram (DVH) bands across all 13 scenarios, and meet all hard constraints (i.e., maximum/mean/dose-volume constraints) on OARs and the target for all scenarios. The spread in the objective function values is largest for the stochastic approach (p = 1) and decreases with increasing p toward the worst-case approach. Compared to the worst-case approach, the p-norm approach results in DVH bands for the clinical target volume (CTV) which are closer to the prescription dose at a negligible cost in the DVH for the worst scenario, thereby improving the overall plan quality. On average, going from the worst-case approach to the p-norm approach with p = 20, the median objective function value across all the scenarios is improved by 15% while the objective function value for the worst scenario is only degraded by 3%.
CONCLUSION: An automated treatment planning approach for proton therapy is developed, including robustness, dose-volume constraints, and the ability to control the robustness level using the p-norm parameter p, to fit the priorities deemed most important.
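The p-norm aggregation at the heart of the method is a one-liner; this toy sweep shows how the combined objective moves from the stochastic mean toward the worst case as p grows (the scenario values are made up):

```python
import numpy as np

def pnorm_objective(scenario_costs, p):
    """Combine per-scenario objectives; p = 1 ~ stochastic mean, p -> inf ~ worst case."""
    f = np.asarray(scenario_costs, float)
    return np.mean(f ** p) ** (1.0 / p)

f = np.array([1.0, 1.2, 0.9, 3.0])       # hypothetical per-scenario objective values
for p in (1, 2, 5, 10, 20):
    print(p, round(pnorm_objective(f, p), 3))   # sweeps from 1.525 toward 3.0
```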
Automating proton treatment planning with beam angle selection using Bayesian optimization
Taasti, Vicki T.
Hong, Linda
Shim, Jin Sup
Deasy, Joseph O.
Zarepisheh, Masoud
Medical Physics2020Journal Article, cited 0 times
HNSCC-3DCT-RT
PURPOSE: To present a fully automated treatment planning process for proton therapy including beam angle selection using a novel Bayesian optimization approach and previously developed constrained hierarchical fluence optimization method.
METHODS: We adapted our in-house automated intensity modulated radiation therapy (IMRT) treatment planning system, which is based on constrained hierarchical optimization and referred to as ECHO (expedited constrained hierarchical optimization), for proton therapy. To couple this to beam angle selection, we propose using a novel Bayesian approach. By integrating ECHO with this Bayesian beam selection approach, we obtain a fully automated treatment planning framework including beam angle selection. Bayesian optimization is a global optimization technique which only needs to search a small fraction of the search space for slowly varying objective functions (i.e., smooth functions). Expedited constrained hierarchical optimization is run for some initial beam angle candidates and the resultant treatment plan for each beam configuration is rated using a clinically relevant treatment score function. Bayesian optimization iteratively predicts the treatment score for not-yet-evaluated candidates to find the best candidate to be optimized next with ECHO. We tested this technique on five head-and-neck (HN) patients with two coplanar beams. In addition, tests were performed with two noncoplanar and three coplanar beams for two patients.
RESULTS: For the two coplanar configurations, the Bayesian optimization found the optimal beam configuration after running ECHO for, at most, 4% of all potential configurations (23 iterations) for all patients (range: 2%-4%). Compared with the beam configurations chosen by the planner, the optimal configurations reduced the mandible maximum dose by 6.6 Gy and high dose to the unspecified normal tissues by 3.8 Gy, on average. For the two noncoplanar and three coplanar beam configurations, the algorithm converged after 45 iterations (examining <1% of all potential configurations).
CONCLUSIONS: A fully automated and efficient treatment planning process for proton therapy, including beam angle optimization was developed. The algorithm automatically generates high-quality plans with optimal beam angle configuration by combining Bayesian optimization and ECHO. As the Bayesian optimization is capable of handling complex nonconvex functions, the treatment score function which is used in the algorithm to evaluate the dose distribution corresponding to each beam configuration can contain any clinically relevant metric.
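A minimal sketch of the outer search loop using scikit-optimize's Gaussian-process minimizer; plan_score here is a stand-in for running ECHO and scoring the resulting plan, which is the expensive step the Bayesian search is designed to call sparingly:

```python
from skopt import gp_minimize
from skopt.space import Integer

def plan_score(angles):
    """Stand-in for: run ECHO for this beam configuration, score the plan (lower = better).
    Toy objective: prefer well-separated coplanar beams."""
    a1, a2 = angles
    return -min(abs(a1 - a2), 360 - abs(a1 - a2))

res = gp_minimize(plan_score,
                  [Integer(0, 359), Integer(0, 359)],  # two coplanar gantry angles
                  n_calls=23, random_state=0)           # ~ the iteration budget reported
print(res.x, res.fun)
```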
FAIR-compliant clinical, radiomics and DICOM metadata of RIDER, interobserver, Lung1 and head-Neck1 TCIA collections
Kalendralis, Petros
Shi, Zhenwei
Traverso, Alberto
Choudhury, Ananya
Sloep, Matthijs
Zhovannik, Ivan
Starmans, Martijn P A
Grittner, Detlef
Feltens, Peter
Monshouwer, Rene
Klein, Stefan
Fijten, Rianne
Aerts, Hugo
Dekker, Andre
van Soest, Johan
Wee, Leonard
Med Phys2020Journal Article, cited 0 times
Website
Radiomics
NSCLC-Radiomics
RIDER Lung CT
Head-Neck-Radiomics-HN1
NSCLC-Radiomics- Interobserver1
Imaging features
PURPOSE: One of the most frequently cited radiomics investigations showed that features automatically extracted from routine clinical images could be used in prognostic modeling. These images have been made publicly accessible via The Cancer Imaging Archive (TCIA). There have been numerous requests for additional explanatory metadata on the following datasets - RIDER, Interobserver, Lung1, and Head-Neck1. To support repeatability, reproducibility, generalizability, and transparency in radiomics research, we publish the subjects' clinical data, extracted radiomics features, and digital imaging and communications in medicine (DICOM) headers of these four datasets with descriptive metadata, in order to be more compliant with findable, accessible, interoperable, and reusable (FAIR) data management principles. ACQUISITION AND VALIDATION METHODS: Overall survival time intervals were updated using a national citizens registry after internal ethics board approval. Spatial offsets of the primary gross tumor volume (GTV) regions of interest (ROIs) associated with the Lung1 CT series were improved on the TCIA. GTV radiomics features were extracted using the open-source Ontology-Guided Radiomics Analysis Workflow (O-RAW). We reshaped the output of O-RAW to map features and extraction settings to the latest version of Radiomics Ontology, so as to be consistent with the Image Biomarker Standardization Initiative (IBSI). Digital imaging and communications in medicine metadata was extracted using a research version of Semantic DICOM (SOHARD, GmbH, Fuerth; Germany). Subjects' clinical data were described with metadata using the Radiation Oncology Ontology. All of the above were published in Resource Descriptor Format (RDF), that is, triples. Example SPARQL queries are shared with the reader to use on the online triples archive, which are intended to illustrate how to exploit this data submission. DATA FORMAT: The accumulated RDF data are publicly accessible through a SPARQL endpoint where the triples are archived. The endpoint is remotely queried through a graph database web application at http://sparql.cancerdata.org. SPARQL queries are intrinsically federated, such that we can efficiently cross-reference clinical, DICOM, and radiomics data within a single query, while being agnostic to the original data format and coding system. The federated queries work in the same way even if the RDF data were partitioned across multiple servers and dispersed physical locations. POTENTIAL APPLICATIONS: The public availability of these data resources is intended to support radiomics features replication, repeatability, and reproducibility studies by the academic community. The example SPARQL queries may be freely used and modified by readers depending on their research question. Data interoperability and reusability are supported by referencing existing public ontologies. The RDF data are readily findable and accessible through the aforementioned link. Scripts used to create the RDF are made available at a code repository linked to this submission: https://gitlab.com/UM-CDS/FAIR-compliant_clinical_radiomics_and_DICOM_metadata.
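Queries against the published endpoint can be issued with SPARQLWrapper; the generic triple pattern below only shows the mechanics (the paper ships real example queries tailored to its ontologies, which should be preferred):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://sparql.cancerdata.org")  # endpoint named in the paper
sparql.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```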
Investigation of inter-fraction target motion variations in the context of pencil beam scanned proton therapy in non-small cell lung cancer patients
den Otter, L. A.
Anakotta, R. M.
Weessies, M.
Roos, C. T. G.
Sijtsema, N. M.
Muijs, C. T.
Dieters, M.
Wijsman, R.
Troost, E. G. C.
Richter, C.
Meijers, A.
Langendijk, J. A.
Both, S.
Knopf, A. C.
Med Phys2020Journal Article, cited 0 times
Website
4D-Lung
PURPOSE: For locally advanced-stage non-small cell lung cancer (NSCLC), inter-fraction target motion variations during the whole time span of a fractionated treatment course are assessed in a large and representative patient cohort. The primary objective is to develop a suitable motion monitoring strategy for pencil beam scanning proton therapy (PBS-PT) treatments of NSCLC patients during free breathing. METHODS: Weekly 4D computed tomography (4DCT; 41 patients) and daily 4D cone beam computed tomography (4DCBCT; 10 of 41 patients) scans were analyzed for a fully fractionated treatment course. Gross tumor volumes (GTVs) were contoured and the 3D displacement vectors of the centroid positions were compared for all scans. Furthermore, motion amplitude variations in different lung segments were statistically analyzed. The dosimetric impact of target motion variations and target motion assessment was investigated in exemplary patient cases. RESULTS: The median observed centroid motion was 3.4 mm (range: 0.2-12.4 mm) with an average variation of 2.2 mm (range: 0.1-8.8 mm). Ten of 32 patients (31.3%) with an initial motion <5 mm increased beyond a 5-mm motion amplitude during the treatment course. Motion observed in the 4DCBCT scans deviated on average 1.5 mm (range: 0.0-6.0 mm) from the motion observed in the 4DCTs. Larger motion variations for one example patient compromised treatment plan robustness while no dosimetric influence was seen due to motion assessment biases in another example case. CONCLUSIONS: Target motion variations were investigated during the course of radiotherapy for NSCLC patients. Patients with initial GTV motion amplitudes of < 2 mm can be assumed to be stable in motion during the treatment course. For treatments of NSCLC patients who exhibit motion amplitudes of > 2 mm, 4DCBCT should be considered for motion monitoring due to substantial motion variations observed.
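The reported 3D centroid displacements come down to a few lines of NumPy once GTV masks and voxel spacing are available (the masks and spacing below are toy values):

```python
import numpy as np

def centroid_mm(mask, spacing):
    """GTV centroid in mm: mean voxel index scaled by voxel spacing (z, y, x)."""
    return np.argwhere(mask).mean(axis=0) * np.asarray(spacing)

spacing = (2.5, 1.0, 1.0)                                  # hypothetical mm spacing
gtv_w0 = np.zeros((50, 128, 128), bool); gtv_w0[20:25, 60:70, 60:70] = True
gtv_w3 = np.zeros((50, 128, 128), bool); gtv_w3[21:26, 62:72, 61:71] = True
d = centroid_mm(gtv_w3, spacing) - centroid_mm(gtv_w0, spacing)
print(np.linalg.norm(d), "mm 3D centroid displacement")
```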
A multi-objective radiomics model for the prediction of locoregional recurrence in head and neck squamous cell cancer
Wang, K.
Zhou, Z.
Wang, R.
Chen, L.
Zhang, Q.
Sher, D.
Wang, J.
Med Phys2020Journal Article, cited 0 times
Website
Classification
Radiomics
HNSCC
PURPOSE: Locoregional recurrence (LRR) is the predominant pattern of relapse after nonsurgical treatment of head and neck squamous cell cancer (HNSCC). Therefore, accurately identifying patients with HNSCC who are at high risk for LRR is important for optimizing personalized treatment plans. In this work, we developed a multi-classifier, multi-objective, and multi-modality (mCOM) radiomics-based outcome prediction model for HNSCC LRR. METHODS: In mCOM, we considered sensitivity and specificity simultaneously as the objectives to guide the model optimization. We used multiple classifiers, comprising support vector machine (SVM), discriminant analysis (DA), and logistic regression (LR), to build the model. We used features from multiple modalities as model inputs, comprising clinical parameters and radiomics feature extracted from X-ray computed tomography (CT) images and positron emission tomography (PET) images. We proposed a multi-task multi-objective immune algorithm (mTO) to train the mCOM model and used an evidential reasoning (ER)-based method to fuse the output probabilities from different classifiers and modalities in mCOM. We evaluated the effectiveness of the developed method using a retrospective public pretreatment HNSCC dataset downloaded from The Cancer Imaging Archive (TCIA). The input for our model included radiomics features extracted from pretreatment PET and CT using an open source radiomics software and clinical characteristics such as sex, age, stage, primary disease site, human papillomavirus (HPV) status, and treatment paradigm. In our experiment, 190 patients from two institutions were used for model training while the remaining 87 patients from the other two institutions were used for testing. RESULTS: When we built the predictive model using features from single modality, the multi-classifier (MC) models achieved better performance over the models built with the three base-classifiers individually. When we built the model using features from multiple modalities, the proposed method achieved area under the receiver operating characteristic curve (AUC) values of 0.76 for the radiomics-only model, and 0.77 for the model built with radiomics and clinical features, which is significantly higher than the AUCs of models built with single-modality features. The statistical analysis was performed using MATLAB software. CONCLUSIONS: Comparisons with other methods demonstrated the efficiency of the mTO algorithm and the superior performance of the proposed mCOM model for predicting HNSCC LRR.
Deep model with Siamese network for viable and necrotic tumor regions assessment in osteosarcoma
Fu, Yu
Xue, Peng
Ji, Huizhong
Cui, Wentao
Dong, Enqing
Medical Physics2020Journal Article, cited 0 times
Osteosarcoma-Tumor-Assessment
PURPOSE: To achieve automatic classification of viable and necrotic tumor regions in osteosarcoma, most existing deep learning methods can only use simple models to prevent overfitting on small datasets, which weakens their ability to extract image features and lowers the accuracy of the models. To solve this problem, a deep model with Siamese network (DS-Net) was designed in this paper.
METHODS: The DS-Net constructed on the basis of full convolutional networks is composed of an auxiliary supervision network (ASN) and a classification network. The construction of the ASN based on the Siamese network aims to solve the problem of a small training set (the main bottleneck of deep learning in medical images). It uses paired data as the input and updates the network through combined labels. The classification network uses the features extracted by the ASN to perform accurate classification.
RESULTS: Pathological diagnosis is the most accurate method to identify osteosarcoma. However, due to intraclass variation and interclass similarity, it is challenging for pathologists to accurately identify osteosarcoma. Through the experiments on hematoxylin and eosin (H&E)-stained osteosarcoma histology slides, the DS-Net we constructed can achieve an average accuracy of 95.1%. Compared with existing methods, the DS-Net performs best in the test dataset.
CONCLUSIONS: The DS-Net we constructed can not only effectively realize the histological classification of osteosarcoma, but also be applicable to many other medical image classification tasks affected by small datasets.
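A minimal PyTorch sketch of the auxiliary-supervision idea: a shared encoder sees image pairs and trains on a combined same/different label, which multiplies the effective number of training samples. This illustrates the Siamese principle only, not the published DS-Net:

```python
import torch
import torch.nn as nn

class SiameseAux(nn.Module):
    """Shared encoder on image pairs; predicts whether the pair shares a class."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 1)

    def forward(self, x1, x2):
        f1, f2 = self.encoder(x1), self.encoder(x2)
        return self.head(torch.abs(f1 - f2))   # logit for same/different

model = SiameseAux()
x1, x2 = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
same = torch.tensor([[1.], [0.], [1.], [0.]])  # combined pair labels (toy)
loss = nn.BCEWithLogitsLoss()(model(x1, x2), same)
```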
PleThora: Pleural effusion and thoracic cavity segmentations in diseased lungs for benchmarking chest CT processing pipelines
Kiser, K. J.
Ahmed, S.
Stieb, S.
Mohamed, A. S. R.
Elhalawani, H.
Park, P. Y. S.
Doyle, N. S.
Wang, B. J.
Barman, A.
Li, Z.
Zheng, W. J.
Fuller, C. D.
Giancardo, L.
Med Phys2020Journal Article, cited 0 times
Website
PleThora
NSCLC-Radiomics
Analysis Results
LUNG
U-Net
This manuscript describes a dataset of thoracic cavity segmentations and discrete pleural effusion segmentations we have annotated on 402 computed tomography (CT) scans acquired from patients with non-small cell lung cancer. The segmentation of these anatomic regions precedes fundamental tasks in image analysis pipelines such as lung structure segmentation, lesion detection, and radiomics feature extraction. Bilateral thoracic cavity volumes and pleural effusion volumes were manually segmented on CT scans acquired from The Cancer Imaging Archive "NSCLC Radiomics" data collection. Four hundred and two thoracic segmentations were first generated automatically by a U-Net based algorithm trained on chest CTs without cancer, manually corrected by a medical student to include the complete thoracic cavity (normal, pathologic, and atelectatic lung parenchyma, lung hilum, pleural effusion, fibrosis, nodules, tumor, and other anatomic anomalies), and revised by a radiation oncologist or a radiologist. Seventy-eight pleural effusions were manually segmented by a medical student and revised by a radiologist or radiation oncologist. Interobserver agreement between the radiation oncologist and radiologist corrections was acceptable. All expert-vetted segmentations are publicly available in NIfTI format through The Cancer Imaging Archive at https://doi.org/10.7937/tcia.2020.6c7y-gq39. Tabular data detailing clinical and technical metadata linked to segmentation cases are also available. Thoracic cavity segmentations will be valuable for developing image analysis pipelines on pathologic lungs - where current automated algorithms struggle most. In conjunction with gross tumor volume segmentations already available from "NSCLC Radiomics," pleural effusion segmentations may be valuable for investigating radiomics profile differences between effusion and primary tumor or training algorithms to discriminate between them.
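Because the segmentations are NIfTI files, volumes can be computed directly with nibabel (the file name is hypothetical):

```python
import nibabel as nib
import numpy as np

seg = nib.load("thoracic_cavity.nii.gz")     # hypothetical PleThora file name
mask = np.asanyarray(seg.dataobj) > 0
voxel_ml = np.prod(seg.header.get_zooms()[:3]) / 1000.0  # mm^3 -> mL
print("segmented volume:", mask.sum() * voxel_ml, "mL")
```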
PURPOSE: The dataset contains annotations for lung nodules collected by the Lung Imaging Data Consortium and Image Database Resource Initiative (LIDC) stored as standard DICOM objects. The annotations accompany a collection of computed tomography (CT) scans for over 1000 subjects annotated by multiple expert readers, and correspond to "nodules ≥ 3 mm", defined as any lesion considered to be a nodule with greatest in-plane dimension in the range 3-30 mm regardless of presumed histology. The present dataset aims to simplify reuse of the data with the readily available tools, and is targeted towards researchers interested in the analysis of lung CT images.
ACQUISITION AND VALIDATION METHODS: Open source tools were utilized to parse the project-specific XML representation of LIDC-IDRI annotations and save the result as standard DICOM objects. Validation procedures focused on establishing compliance of the resulting objects with the standard, consistency of the data between the DICOM and project-specific representation, and evaluating interoperability with the existing tools.
DATA FORMAT AND USAGE NOTES: The dataset utilizes DICOM Segmentation objects for storing annotations of the lung nodules, and DICOM Structured Reporting objects for communicating qualitative evaluations (nine attributes) and quantitative measurements (three attributes) associated with the nodules. In total, 875 subjects contain 6859 nodule annotations. Clustering of the neighboring annotations resulted in 2651 distinct nodules. The data are available in TCIA at https://doi.org/10.7937/TCIA.2018.h7umfurq.
POTENTIAL APPLICATIONS: The standardized dataset maintains the content of the original contribution of the LIDC-IDRI consortium, and should be helpful in developing automated tools for characterization of lung lesions and image phenotyping. In addition to those properties, the representation of the present dataset makes it more FAIR (Findable, Accessible, Interoperable, Reusable) for the research community, and enables its integration with other standardized data collections.
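Because the annotations are standard DICOM Segmentation objects, plain pydicom can already enumerate the segments (the file name is hypothetical; dedicated SEG readers such as highdicom give richer access):

```python
import pydicom

seg = pydicom.dcmread("lidc_nodule_seg.dcm")   # hypothetical DICOM SEG file name
for s in seg.SegmentSequence:                  # one item per annotated segment
    print(s.SegmentNumber, s.SegmentLabel)
frames = seg.pixel_array                       # stacked binary frames (n, rows, cols)
```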
Comparison of iterative parametric and indirect deep learning-based reconstruction methods in highly undersampled DCE-MR Imaging of the breast
Rastogi, A.
Yalavarthy, P. K.
Med Phys2020Journal Article, cited 0 times
Website
QIN Breast DCE-MRI
PURPOSE: To compare the performance of iterative direct and indirect parametric reconstruction methods with indirect deep learning-based reconstruction methods in estimating tracer-kinetic parameters from highly undersampled DCE-MR imaging breast data, and to provide a systematic comparison of the same. METHODS: Estimation of tracer-kinetic parameters from undersampled data using indirect methods requires the anatomical images to be reconstructed first by solving an inverse problem. These reconstructed images are then used to estimate the tracer-kinetic parameters. In direct estimation, the parameters are estimated without reconstructing the anatomical images. Both problems are ill-posed and are typically solved using prior-based regularization or deep learning. In this study, for indirect estimation, two deep learning-based reconstruction frameworks, namely ISTA-Net(+) and MODL, were utilized. For direct and indirect parametric estimation, sparsity-inducing priors (L1 and total variation) were deployed with the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm as the solver. The performance of these techniques was compared systematically in the estimation of vascular permeability (Ktrans) from undersampled DCE-MRI breast data using Patlak as the pharmacokinetic model. The experiments involved retrospective undersampling of the data at 20x, 50x, and 100x, and compared the results using the PSNR, nRMSE, SSIM, and Xydeas metrics. The Ktrans maps estimated from fully sampled data were utilized as ground truth. The developed code is available open source at https://github.com/Medical-Imaging-Group/DCE-MRI-Compare. RESULTS: The performance of the reconstruction methods was evaluated using breast data from ten patients (five each for training and testing). Consistent with other studies, the results indicate that direct parametric reconstruction methods provide improved performance compared to the indirect parametric reconstruction methods. The results also indicate that for 20x undersampling, deep learning-based methods perform better than or on par with direct estimation in terms of PSNR, SSIM, and nRMSE. However, for higher undersampling rates (50x and 100x), direct estimation performs better on all metrics. For all undersampling rates, direct reconstruction performed better in terms of the Xydeas metric, which indicates fidelity in the magnitude and orientation of edges. CONCLUSION: Deep learning-based indirect techniques perform on par with direct estimation techniques at lower undersampling rates in breast DCE-MR imaging. At higher undersampling rates, they are not able to provide the much needed generalization. Direct estimation techniques are able to provide more accurate results than both deep learning-based and parametric-based indirect methods in these high undersampling scenarios.
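The Patlak model used here is linear in its parameters, C_t(t) = Ktrans * integral of Cp plus vp * Cp(t), so both parameters drop out of a least-squares fit; a sketch with a toy arterial input function:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def patlak_fit(ct, cp, t):
    """Fit C_t(t) = Ktrans * int_0^t Cp d(tau) + vp * Cp(t) by linear least squares."""
    icp = cumulative_trapezoid(cp, t, initial=0.0)
    A = np.column_stack([icp, cp])
    (ktrans, vp), *_ = np.linalg.lstsq(A, ct, rcond=None)
    return ktrans, vp

t = np.linspace(0., 300., 60)                 # seconds
cp = (t / 20.0) * np.exp(-t / 80.0)           # toy arterial input function
ct = 0.08 * cumulative_trapezoid(cp, t, initial=0.0) + 0.03 * cp
print(patlak_fit(ct, cp, t))                  # recovers ~ (0.08, 0.03)
```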
Technical Note: Automatic segmentation of CT images for ventral body composition analysis
Fu, Yabo
Ippolito, Joseph E.
Ludwig, Daniel R.
Nizamuddin, Rehan
Li, Harold H.
Yang, Deshan
Medical Physics2020Journal Article, cited 0 times
TCGA-KIRC
PURPOSE: Body composition is known to be associated with many diseases including diabetes, cancers, and cardiovascular diseases. In this paper, we developed a fully automatic body tissue decomposition procedure to segment three major compartments that are related to body composition analysis - subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and muscle. Three additional compartments - the ventral cavity, lung, and bones - were also segmented during the segmentation process to assist segmentation of the major compartments.
METHODS: A convolutional neural network (CNN) model with densely connected layers was developed to perform ventral cavity segmentation. An image processing workflow was developed to segment the ventral cavity in any patient's computed tomography (CT) scan using the CNN model, and then to further segment the body tissue into multiple compartments using hysteresis thresholding followed by morphological operations. It is important to segment the ventral cavity first to allow accurate separation of compartments with similar Hounsfield units (HU) inside and outside the ventral cavity.
RESULTS: The ventral cavity segmentation CNN model was trained and tested with manually labeled ventral cavities in 60 CTs. Dice scores (mean ± standard deviation) for ventral cavity segmentation were 0.966 ± 0.012. Tested on CT datasets with intravenous (IV) and oral contrast, the Dice scores were 0.96 ± 0.02, 0.94 ± 0.06, 0.96 ± 0.04, 0.95 ± 0.04, and 0.99 ± 0.01 for bone, VAT, SAT, muscle, and lung, respectively. The respective Dice scores were 0.97 ± 0.02, 0.94 ± 0.07, 0.93 ± 0.06, 0.91 ± 0.04, and 0.99 ± 0.01 for non-contrast CT datasets.
CONCLUSION: A body tissue decomposition procedure was developed to automatically segment multiple compartments of the ventral body. The proposed method enables fully automated quantification of three-dimensional (3D) ventral body composition metrics from CT images.
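Hysteresis details aside, the core of HU-based tissue separation looks like the sketch below; the adipose and muscle HU windows are commonly cited literature values, not necessarily this paper's settings, and the cavity mask is what splits SAT from VAT:

```python
import numpy as np

ADIPOSE_HU = (-190, -30)   # commonly used adipose window (assumption)
MUSCLE_HU = (-29, 150)     # commonly used muscle window (assumption)

def tissue_mask(ct_hu, lo, hi):
    return (ct_hu >= lo) & (ct_hu <= hi)

ct = np.random.randint(-1000, 400, (5, 64, 64))        # stand-in CT volume in HU
cavity = np.zeros_like(ct, bool); cavity[:, 16:48, 16:48] = True  # toy cavity mask
fat = tissue_mask(ct, *ADIPOSE_HU)
vat, sat = fat & cavity, fat & ~cavity                 # inside vs outside the cavity
muscle = tissue_mask(ct, *MUSCLE_HU)
```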
Generating anthropomorphic phantoms using fully unsupervised deformable image registration with convolutional neural networks
Chen, Junyu
Li, Ye
Du, Yong
Frey, Eric C
Med Phys2020Journal Article, cited 0 times
Website
NaF-Prostate
PHANTOM
Image Registration
Medical Image Simulation
Deep convolutional neural network (DCNN)
PURPOSE: Computerized phantoms have been widely used in nuclear medicine imaging for imaging system optimization and validation. Although the existing computerized phantoms can model anatomical variations through organ and phantom scaling, they do not provide a way to fully reproduce the anatomical variations and details seen in humans. In this work, we present a novel registration-based method for creating highly anatomically detailed computerized phantoms. We experimentally show substantially improved image similarity of the generated phantom to a patient image. METHODS: We propose a deep-learning-based unsupervised registration method to generate a highly anatomically detailed computerized phantom by warping an XCAT phantom to a patient computed tomography (CT) scan. We implemented and evaluated the proposed method using the NURBS-based XCAT phantom and a publicly available low-dose CT dataset from TCIA. A rigorous tradeoff analysis between image similarity and deformation regularization was conducted to select the loss function and regularization term for the proposed method. A novel SSIM-based unsupervised objective function was proposed. Finally, ablation studies were conducted to evaluate the performance of the proposed method (using the optimal regularization and loss function) and the current state-of-the-art unsupervised registration methods. RESULTS: The proposed method outperformed the state-of-the-art registration methods, such as SyN and VoxelMorph, by more than 8% as measured by the SSIM, and by less than 30% as measured by the MSE. The phantom generated by the proposed method was highly detailed and was almost identical in appearance to a patient image. CONCLUSIONS: A deep-learning-based unsupervised registration method was developed to create anthropomorphic phantoms with anatomy labels that can be used as the basis for modeling organ properties. Experimental results demonstrate the effectiveness of the proposed method. The resulting anthropomorphic phantom is highly realistic. Combined with realistic simulations of the image formation process, the generated phantoms could serve in many applications of medical imaging research.
Reproducibility analysis of multi‐institutional paired expert annotations and radiomic features of the Ivy Glioblastoma Atlas Project (Ivy GAP) dataset
Pati, Sarthak
Verma, Ruchika
Akbari, Hamed
Bilello, Michel
Hill, Virginia B.
Sako, Chiharu
Correa, Ramon
Beig, Niha
Venet, Ludovic
Thakur, Siddhesh
Serai, Prashant
Ha, Sung Min
Blake, Geri D.
Shinohara, Russell Taki
Tiwari, Pallavi
Bakas, Spyridon
Medical Physics2020Journal Article, cited 0 times
IvyGAP
IvyGAP-Radiomics
PURPOSE: The availability of radiographic magnetic resonance imaging (MRI) scans for the Ivy Glioblastoma Atlas Project (Ivy GAP) has opened up opportunities for development of radiomic markers for prognostic/predictive applications in glioblastoma (GBM). In this work, we address two critical challenges with regard to developing robust radiomic approaches: (a) the lack of availability of reliable segmentation labels for glioblastoma tumor sub-compartments (i.e., enhancing tumor, non-enhancing tumor core, peritumoral edematous/infiltrated tissue) and (b) identifying "reproducible" radiomic features that are robust to segmentation variability across readers/sites.
ACQUISITION AND VALIDATION METHODS: From TCIA's Ivy GAP cohort, we obtained a paired set (n = 31) of expert annotations approved by two board-certified neuroradiologists at the Hospital of the University of Pennsylvania (UPenn) and at Case Western Reserve University (CWRU). For these studies, we performed a reproducibility study that assessed the variability in (a) segmentation labels and (b) radiomic features between these paired annotations. The radiomic variability was assessed on a comprehensive panel of 11 700 radiomic features, including intensity, volumetric, morphologic, histogram-based, and textural parameters, extracted for each of the paired sets of annotations. Our results demonstrated (a) a high level of inter-rater agreement (median value of DICE ≥0.8 for all sub-compartments), and (b) ≈24% of the extracted radiomic features being highly correlated (based on Spearman's rank correlation coefficient) across the paired annotations, that is, robust to annotation variations. These robust features largely belonged to the morphology (describing shape characteristics), intensity (capturing intensity profile statistics), and COLLAGE (capturing heterogeneity in gradient orientations) feature families.
DATA FORMAT AND USAGE NOTES: We make publicly available on TCIA's Analysis Results Directory (https://doi.org/10.7937/9j41-7d44), the complete set of (a) multi-institutional expert annotations for the tumor sub-compartments, (b) 11 700 radiomic features, and (c) the associated reproducibility meta-analysis.
POTENTIAL APPLICATIONS: The annotations and the associated meta-data for Ivy GAP are released with the purpose of enabling researchers toward developing image-based biomarkers for prognostic/predictive applications in GBM.
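The robustness screen amounts to a rank correlation per feature across the paired annotations; a sketch for one feature over the 31 paired cases, with toy values and an assumed cutoff:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
upenn = rng.uniform(0., 1., 31)               # feature values, UPenn annotations (toy)
cwru = upenn + rng.normal(0., 0.05, 31)       # same feature, CWRU annotations (toy)
rho, _ = spearmanr(upenn, cwru)
robust = rho >= 0.9                           # assumed robustness cutoff
```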
Low‐dose CT image and projection dataset
Moen, Taylor R.
Chen, Baiyu
Holmes, David R.
Duan, Xinhui
Yu, Zhicong
Yu, Lifeng
Leng, Shuai
Fletcher, Joel G.
McCollough, Cynthia H.
Medical Physics2020Journal Article, cited 0 times
LDCT-and-Projection-data
PURPOSE: To describe a large, publicly available dataset comprising computed tomography (CT) projection data from patient exams, both at routine clinical doses and simulated lower doses.
ACQUISITION AND VALIDATION METHODS: The library was developed under local ethics committee approval. Projection and image data from 299 clinically performed patient CT exams were archived for three types of clinical exams: noncontrast head CT scans acquired for acute cognitive or motor deficit, low-dose noncontrast chest scans acquired to screen high-risk patients for pulmonary nodules, and contrast-enhanced CT scans of the abdomen acquired to look for metastatic liver lesions. Scans were performed on CT systems from two different CT manufacturers using routine clinical protocols. Projection data were validated by reconstructing the data using several different reconstruction algorithms and through use of the data in the 2016 Low Dose CT Grand Challenge. Reduced dose projection data were simulated for each scan using a validated noise-insertion method. Radiologists marked location and diagnosis for detected pathologies. Reference truth was obtained from the patient medical record, either from histology or subsequent imaging.
DATA FORMAT AND USAGE NOTES: Projection datasets were converted into the previously developed DICOM-CT-PD format, which is an extended DICOM format created to store CT projections and acquisition geometry in a nonproprietary format. Image data are stored in the standard DICOM image format and clinical data in a spreadsheet. Materials are provided to help investigators use the DICOM-CT-PD files, including a dictionary file, data reader, and user manual. The library is publicly available from The Cancer Imaging Archive (https://doi.org/10.7937/9npb-2637).
POTENTIAL APPLICATIONS: This CT data library will facilitate the development and validation of new CT reconstruction and/or denoising algorithms, including those associated with machine learning or artificial intelligence. The provided clinical information allows evaluation of task-based diagnostic performance.
MAD‐UNet: A deep U‐shaped network combined with an attention mechanism for pancreas segmentation in CT images
Li, Weisheng
Qin, Sheng
Li, Feiyan
Wang, Linhong
Medical Physics2020Journal Article, cited 0 times
Pancreas-CT
PURPOSE: Pancreas segmentation is a difficult task because of the high interpatient variability in the shape, size, and location of the organ, as well as the organ's low contrast and small footprint in CT scans. At present, the U-Net model is prone to the problems of intraclass inconsistency and interclass indistinction in pancreas segmentation. To solve these problems, we improved the contextual and semantic feature information acquisition method of the biomedical image segmentation model (U-Net) based on a convolutional network and proposed an improved segmentation model called the multiscale attention dense residual U-shaped network (MAD-UNet).
METHODS: Two aspects are considered in this method. First, we adopted dense residual blocks and weighted binary cross-entropy to enhance the semantic features and learn the details of the pancreas. This reduces the effects of intraclass inconsistency. Second, we used an attention mechanism and multiscale convolution to enrich the contextual information and suppress learning in unrelated areas. This makes the model more sensitive to pancreatic marginal information and reduces the impact of interclass indistinction.
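As a concrete illustration of the weighted binary cross-entropy mentioned above, a minimal PyTorch sketch follows; the inverse-frequency weighting of the foreground is an assumption made for illustration, not necessarily the exact scheme used in MAD-UNet.

```python
import torch

def weighted_bce(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred: sigmoid probabilities; target: {0,1} float mask, same shape."""
    eps = 1e-7
    pos_frac = target.mean().clamp(eps, 1 - eps)
    # Inverse-frequency weights emphasize the small pancreas foreground.
    w_pos, w_neg = 1.0 / pos_frac, 1.0 / (1.0 - pos_frac)
    loss = -(w_pos * target * torch.log(pred.clamp_min(eps))
             + w_neg * (1 - target) * torch.log((1 - pred).clamp_min(eps)))
    return loss.mean()
```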
RESULTS: We evaluated our model using fourfold cross-validation on 82 abdominal enhanced three-dimensional (3D) CT scans from the National Institutes of Health (NIH-82) and 281 3D CT scans from the 2018 MICCAI segmentation decathlon challenge (MSD). The experimental results showed that our method achieved state-of-the-art performance on the two pancreatic datasets. The mean Dice coefficients were 86.10% ± 3.52% and 88.50% ± 3.70%.
CONCLUSIONS: Our model can effectively solve the problems of intraclass inconsistency and interclass indistinction in the segmentation of the pancreas, and it has value in clinical application. Code is available at https://github.com/Mrqins/pancreas-segmentation.
Two‐stage deep learning model for fully automated pancreas segmentation on computed tomography: Comparison with intra‐reader and inter‐reader reliability at full and reduced radiation dose on an external dataset
Panda, Ananya
Korfiatis, Panagiotis
Suman, Garima
Garg, Sushil K.
Polley, Eric C.
Singh, Dhruv P.
Chari, Suresh T.
Goenka, Ajit H.
Medical Physics2021Journal Article, cited 0 times
LDCT-and-Projection-data
Pancreas-CT
PURPOSE: To develop a two-stage three-dimensional (3D) convolutional neural networks (CNNs) for fully automated volumetric segmentation of pancreas on computed tomography (CT) and to further evaluate its performance in the context of intra-reader and inter-reader reliability at full dose and reduced radiation dose CTs on a public dataset.
METHODS: A dataset of 1994 abdomen CT scans (portal venous phase, slice thickness ≤ 3.75 mm, multiple CT vendors) was curated by two radiologists (R1 and R2) to exclude cases with pancreatic pathology, suboptimal image quality, and image artifacts (n = 77). The remaining 1917 CTs were equally allocated between R1 and R2 for volumetric pancreas segmentation [ground truth (GT)]. This internal dataset was randomly divided into training (n = 1380), validation (n = 248), and test (n = 289) sets for the development of a two-stage 3D CNN model based on a modified U-net architecture for automated volumetric pancreas segmentation. The model's performance for pancreas segmentation and the differences between model-predicted and GT pancreatic volumes were compared on the test set. Subsequently, an external dataset from The Cancer Imaging Archive (TCIA), containing CT scans acquired at standard radiation dose and the same scans reconstructed at a simulated 25% radiation dose, was curated (n = 41). Volumetric pancreas segmentation was done on this TCIA dataset by R1 and R2 independently on the full dose and then on the reduced radiation dose CT images. Intra-reader and inter-reader reliability, the model's segmentation performance, and the reliability between model-predicted pancreatic volumes at full vs reduced dose were measured. Finally, the model's performance was tested on the benchmark National Institutes of Health (NIH) Pancreas-CT (PCT) dataset.
RESULTS: The three-dimensional CNN had a mean (SD) Dice similarity coefficient (DSC) of 0.91 (0.03) and an average Hausdorff distance of 0.15 (0.09) mm on the test set. The model's performance was equivalent between males and females (P = 0.08) and across different CT slice thicknesses (P > 0.05) based on noninferiority statistical testing. There was no difference between model-predicted and GT pancreatic volumes [mean predicted volume 99 cc (31 cc); GT volume 101 cc (33 cc), P = 0.33]. The mean pancreatic volume difference was -2.7 cc (percent difference: -2.4% of GT volume), with excellent correlation between model-predicted and GT volumes [concordance correlation coefficient (CCC) = 0.97]. In the external TCIA dataset, the model had higher reliability than R1 and R2 on full vs reduced dose CT scans [model mean (SD) DSC: 0.96 (0.02), CCC = 0.995 vs R1 DSC: 0.83 (0.07), CCC = 0.89, and R2 DSC: 0.87 (0.04), CCC = 0.97]. The DSC and volume concordance correlations for R1 vs R2 (inter-reader reliability) were 0.85 (0.07), CCC = 0.90 on the full dose and 0.83 (0.07), CCC = 0.96 on the reduced dose datasets. There was good reliability between the model and R1 at both full and reduced dose CT [full dose: DSC: 0.81 (0.07), CCC = 0.83; reduced dose: DSC: 0.81 (0.08), CCC = 0.87]. Likewise, there was good reliability between the model and R2 at both full and reduced dose CT [full dose: DSC: 0.84 (0.05), CCC = 0.89; reduced dose: DSC: 0.83 (0.06), CCC = 0.89]. There was no difference between model-predicted and GT pancreatic volumes in the TCIA dataset [mean predicted volume 96 cc (33 cc); GT pancreatic volume 89 cc (30 cc), p = 0.31]. The model had a mean (SD) DSC of 0.89 (0.04) (minimum-maximum DSC: 0.79-0.96) on the NIH-PCT dataset.
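The concordance correlation coefficient (CCC) reported throughout these results is Lin's standard measure of agreement between two volume series; a minimal NumPy sketch follows.

```python
import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two series."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```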
CONCLUSION: A 3D CNN developed on the largest dataset of CTs is accurate for fully automated volumetric pancreas segmentation and is generalizable across a wide range of CT slice thicknesses, radiation doses, and patient genders. This 3D CNN offers a scalable tool to leverage biomarkers from pancreas morphometrics and radiomics for pancreatic diseases, including early pancreatic cancer detection.
OpenKBP: The open‐access knowledge‐based planning grand challenge and dataset
Babier, A.
Zhang, B.
Mahmood, R.
Moore, K. L.
Purdie, T. G.
McNiven, A. L.
Chan, T. C. Y.
Medical Physics2021Journal Article, cited 0 times
Website
Head-Neck Cetuximab
Head-Neck-PET-CT
Head-Neck-CT-Atlas
TCGA-HNSC
Radiation Therapy
Machine Learning
Contouring
Computed Tomography (CT)
PURPOSE: To advance fair and consistent comparisons of dose prediction methods for knowledge-based planning (KBP) in radiation therapy research. METHODS: We hosted OpenKBP, a 2020 AAPM Grand Challenge, and challenged participants to develop the best method for predicting the dose of contoured computed tomography (CT) images. The models were evaluated according to two separate scores: (a) the dose score, which evaluates the full three-dimensional (3D) dose distributions, and (b) the dose-volume histogram (DVH) score, which evaluates a set of DVH metrics. We used these scores to quantify the quality of the models based on their out-of-sample predictions. To develop and test their models, participants were given the data of 340 patients who were treated for head-and-neck cancer with radiation therapy. The data were partitioned into training (n = 200), validation (n = 40), and testing (n = 100) datasets. All participants performed training and validation with the corresponding datasets during the first (validation) phase of the Challenge. In the second (testing) phase, the participants used their model on the testing data to quantify the out-of-sample performance, which was hidden from participants and used to determine the final competition ranking. Participants also responded to a survey to summarize their models. RESULTS: The Challenge attracted 195 participants from 28 countries, and 73 of those participants formed 44 teams in the validation phase, which received a total of 1750 submissions. The testing phase garnered submissions from 28 of those teams, representing 28 unique prediction methods. On average, over the course of the validation phase, participants improved the dose and DVH scores of their models by factors of 2.7 and 5.7, respectively. In the testing phase, one model achieved the best dose score (2.429) and DVH score (1.478), both of which were significantly better than the dose score (2.564) and DVH score (1.529) achieved by the runner-up models. Lastly, many of the top performing teams reported that they used generalizable techniques (e.g., ensembles) to achieve higher performance than their competitors. CONCLUSION: OpenKBP is the first competition for knowledge-based planning research. The Challenge helped launch the first platform that enables researchers to compare KBP prediction methods fairly and consistently using a large open-source dataset and standardized metrics. OpenKBP has also democratized KBP research by making it accessible to everyone, which should help accelerate the progress of KBP research. The OpenKBP datasets are available publicly to help benchmark future KBP research.
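A simplified sketch of the dose-score idea follows: the mean absolute voxel-wise difference between predicted and reference dose inside the patient mask. The masking convention here is an assumption; the official challenge code defines the definitive metric.

```python
import numpy as np

def dose_score(pred_dose: np.ndarray, ref_dose: np.ndarray,
               body_mask: np.ndarray) -> float:
    """Mean absolute dose error over voxels inside the body mask."""
    m = body_mask.astype(bool)
    return float(np.abs(pred_dose[m] - ref_dose[m]).mean())
```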
Interactive contouring through contextual deep learning
Trimpl, Michael J.
Boukerroui, Djamal
Stride, Eleanor P. J.
Vallis, Katherine A.
Gooding, Mark J.
Medical Physics2021Journal Article, cited 0 times
NSCLC-Radiomics
PURPOSE: To investigate a deep learning approach that enables three-dimensional (3D) segmentation of an arbitrary structure of interest given a user provided two-dimensional (2D) contour for context. Such an approach could decrease delineation times and improve contouring consistency, particularly for anatomical structures for which no automatic segmentation tools exist.
METHODS: A series of deep learning segmentation models using a Recurrent Residual U-Net with attention gates was trained with a successively expanding training set. Contextual information was provided to the models by using a previously contoured slice as an input, in addition to the slice to be contoured. In total, six models were developed, and 19 different anatomical structures were used for training and testing. Each of the models was evaluated for all 19 structures, even if they were excluded from the training set, in order to assess the model's ability to segment unseen structures of interest. Each model's performance was evaluated using the Dice similarity coefficient (DSC), the Hausdorff distance (HD), and the relative added path length (APL).
RESULTS: The segmentation performance for seen and unseen structures improved when the training set was expanded by the addition of structures previously excluded from the training set. A model trained exclusively on heart structures achieved a DSC of 0.33, an HD of 44 mm, and a relative APL of 0.85 when segmenting the spleen, whereas a model trained on a diverse set of structures, but still excluding the spleen, achieved a DSC of 0.80, an HD of 13 mm, and a relative APL of 0.35. Iterative prediction performed better than direct prediction when considering unseen structures.
CONCLUSIONS: Training a contextual deep learning model on a diverse set of structures increases the segmentation performance for the structures in the training set, but importantly enables the model to generalize and make predictions even for unseen structures that were not represented in the training set. This shows that user-provided context can be incorporated into deep learning contouring to facilitate semi-automatic segmentation of CT images for any given structure. Such an approach can enable faster de-novo contouring in clinical practice.
Multilayer residual sparsifying transform (MARS) model for low‐dose CT image reconstruction
Yang, Xikai
Long, Yong
Ravishankar, Saiprasad
Medical Physics2021Journal Article, cited 0 times
LDCT-and-Projection-data
PURPOSE: Signal models based on sparse representations have received considerable attention in recent years. On the other hand, deep models consisting of a cascade of functional layers, commonly known as deep neural networks, have been highly successful for the task of object classification and have been recently introduced to image reconstruction. In this work, we develop a new image reconstruction approach based on a novel multilayer model learned in an unsupervised manner by combining both sparse representations and deep models. The proposed framework extends the classical sparsifying transform model for images to a Multilayer residual sparsifying transform (MARS) model, wherein the transform domain data are jointly sparsified over layers. We investigate the application of MARS models learned from limited regular-dose images for low-dose CT reconstruction using penalized weighted least squares (PWLS) optimization.
METHODS: We propose new formulations for multilayer transform learning and image reconstruction. We derive an efficient block coordinate descent algorithm to learn the transforms across layers, in an unsupervised manner from limited regular-dose images. The learned model is then incorporated into the low-dose image reconstruction phase.
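Conceptually, each layer of a residual sparsifying transform model sparse-codes the transform-domain data and passes the residual to the next layer. A minimal single-layer sketch follows, with the transform matrix W and the hard threshold as illustrative placeholders rather than the learned quantities from the paper.

```python
import numpy as np

def st_layer(residual: np.ndarray, W: np.ndarray, thresh: float):
    """One sparsifying-transform layer: return sparse codes Z and the
    transform-domain residual that is passed to the next layer."""
    coeffs = W @ residual                    # transform-domain data
    Z = coeffs * (np.abs(coeffs) >= thresh)  # hard thresholding
    return Z, coeffs - Z                     # residual for next layer
```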
RESULTS: Low-dose CT experimental results with both the XCAT phantom and Mayo Clinic data show that the MARS model outperforms conventional methods such as filtered back-projection and PWLS methods based on the edge-preserving (EP) regularizer in terms of two numerical metrics (RMSE and SSIM) and noise suppression. Compared with the single-layer learned transform (ST) model, the MARS model performs better in maintaining some subtle details.
CONCLUSIONS: This work presents a novel data-driven regularization framework for CT image reconstruction that exploits learned multilayer or cascaded residual sparsifying transforms. The image model is learned in an unsupervised manner from limited images. Our experimental results demonstrate the promising performance of the proposed multilayer scheme over single-layer learned sparsifying transforms. Learned MARS models also offer better image quality than typical nonadaptive PWLS methods.
A hybrid feature selection‐based approach for brain tumor detection and automatic segmentation on multiparametric magnetic resonance images
Chen, Hao
Ban, Duo
Qi, X. Sharon
Pan, Xiaoying
Qiang, Yongqian
Yang, Qing
Medical Physics2021Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
PURPOSE: To develop a novel method based on feature selection, combining a convolutional neural network (CNN) with ensemble learning (EL), to achieve high accuracy and efficiency in glioma detection and segmentation using multiparametric MRIs.
METHODS: We proposed an evolutionary feature selection-based hybrid approach for glioma detection and segmentation on four MR sequences (T2-FLAIR, T1, T1Gd, and T2). First, we trained a lightweight CNN to detect glioma and mask the suspected region to process large batches of MRI images. Second, we employed a differential evolution algorithm to search a feature space, composed of 416-dimensional radiomic features extracted from the four MRI sequences and 128-dimensional high-order features extracted by the CNN, to generate an optimal feature combination for pixel classification. Finally, we trained an EL classifier using the optimal feature combination to segment the whole tumor (WT) and its subregions, including nonenhancing tumor (NET), peritumoral edema (ED), and enhancing tumor (ET), in the suspected region. Experiments were carried out on 300 glioma patients from the BraTS2019 dataset using fivefold cross-validation; the model was independently validated on the remaining 35 patients from the same database.
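A hedged sketch of this kind of evolutionary feature selection follows, using SciPy's differential evolution over a continuous mask that is thresholded to a feature subset and scored by classifier cross-validation. The classifier, threshold, and scoring here are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def select_features(X: np.ndarray, y: np.ndarray, seed: int = 0):
    """X: (samples, features) matrix, e.g., 416 radiomic + 128 CNN
    features; returns a boolean mask over the feature columns."""
    bounds = [(0.0, 1.0)] * X.shape[1]

    def neg_cv_accuracy(mask_cont):
        keep = mask_cont > 0.5
        if not keep.any():
            return 1.0  # penalize empty subsets
        clf = RandomForestClassifier(n_estimators=50, random_state=seed)
        return -cross_val_score(clf, X[:, keep], y, cv=3).mean()

    result = differential_evolution(neg_cv_accuracy, bounds,
                                    maxiter=20, popsize=10, seed=seed)
    return result.x > 0.5
```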
RESULTS: The approach achieved a detection accuracy of 98.8% using the four MRI sequences. The Dice coefficients (and standard deviations) were 0.852 ± 0.057, 0.844 ± 0.046, and 0.799 ± 0.053 for segmentation of the WT (NET+ET+ED), tumor core (NET+ET), and ET, respectively. The sensitivities were 0.873 ± 0.074, 0.863 ± 0.072, and 0.852 ± 0.082, and the specificities were 0.994 ± 0.005, 0.994 ± 0.005, and 0.995 ± 0.004 for the WT, tumor core, and ET, respectively. The performance and calculation times were compared with state-of-the-art approaches; our approach yielded a better overall performance, with an average processing time of 139.5 s per set of four-sequence MRIs.
CONCLUSIONS: We demonstrated a robust and computational cost-effective hybrid segmentation approach for glioma and its subregions on multi-sequence MR images. The proposed approach can be used for automated target delineation for glioma patients.
Fully automated segmentation of brain tumor from multiparametric MRI using 3D context deep supervised U‐Net
Lin, Mingquan
Momin, Shadab
Lei, Yang
Wang, Hesheng
Curran, Walter J.
Liu, Tian
Yang, Xiaofeng
Medical Physics2021Journal Article, cited 0 times
BraTS-TCGA-GBM
PURPOSE: Owing to the histologic complexity of brain tumors, their diagnosis requires the use of multiple modalities to obtain valuable structural information so that brain tumor subregions can be properly delineated. In the current clinical workflow, physicians typically perform slice-by-slice delineation of brain tumor subregions, which is a time-consuming process that is also susceptible to intra- and inter-rater variability, possibly leading to misclassification. To deal with this issue, this study aims to develop an automatic segmentation of brain tumors in MR images using deep learning.
METHODS: In this study, we develop a context deep-supervised U-Net to segment brain tumor subregions. A context block that aggregates multiscale contextual information for dense segmentation is proposed. This approach enlarges the effective receptive field of convolutional neural networks, which, in turn, improves the segmentation accuracy of brain tumor subregions. We performed fivefold cross-validation on the Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. The BraTS 2020 testing datasets were obtained via the BraTS online website as a hold-out test. For BraTS, the evaluation system divides the tumor into three regions: whole tumor (WT), tumor core (TC), and enhancing tumor (ET). The performance of our proposed method was compared against two state-of-the-art CNNs in terms of segmentation accuracy via the Dice similarity coefficient (DSC) and Hausdorff distance (HD). The tumor volumes generated by our proposed method were compared with manually contoured volumes via Bland-Altman plots and Pearson analysis.
RESULTS: The proposed method achieved segmentation results with a DSC of 0.923 ± 0.047, 0.893 ± 0.176, and 0.846 ± 0.165 and a 95% Hausdorff distance (HD95) of 3.946 ± 7.041, 3.981 ± 6.670, and 10.128 ± 51.136 mm on WT, TC, and ET, respectively. Experimental results demonstrate that our method achieved comparable or significantly (p < 0.05) better segmentation accuracy than the two state-of-the-art CNNs. Pearson correlation analysis showed a high positive correlation between the tumor volumes generated by the proposed method and the manual contours.
CONCLUSION: Overall, the qualitative and quantitative results of this work demonstrate the potential of translating the proposed technique into clinical practice for segmenting brain tumor subregions and further facilitating the brain tumor radiotherapy workflow.
Using neural networks to extend cropped medical images for deformable registration among images with differing scan extents
McKenzie, E. M.
Tong, N.
Ruan, D.
Cao, M.
Chin, R. K.
Sheng, K.
Med Phys2021Journal Article, cited 1 time
Website
Algorithm Development
HEAD
*Image Processing, Computer-Assisted
HEADNECK
*Neural Networks, Computer
Deep learning
Image Registration
PURPOSE: Missing or discrepant imaging volume is a common challenge in deformable image registration (DIR). To minimize the adverse impact, we train a neural network to synthesize cropped portions of head and neck CTs and then test its use in DIR. METHODS: Using a training dataset of 409 head and neck CTs, we trained a generative adversarial network to take in a cropped 3D image and output an image with synthesized anatomy in the cropped region. The network used a 3D U-Net generator along with Visual Geometry Group (VGG) deep feature losses. To test our technique, for each of the 53 test volumes, we used Elastix to deformably register combinations of a randomly cropped, full, and synthetically full volume to a single cropped, full, and synthetically full target volume. We additionally tested our method's robustness to crop extent by progressively increasing the amount of cropping, synthesizing the missing anatomy using our network, and then performing the same registration combinations. Registration performance was measured using the 95% Hausdorff distance across 16 contours. RESULTS: We successfully trained a network to synthesize missing anatomy in superiorly and inferiorly cropped images. The network can estimate large regions in an incomplete image, far from the cropping boundary. Registration using our estimated full images was not significantly different from registration using the original full images. The average contour matching error for full image registration was 9.9 mm, whereas our method was 11.6, 12.1, and 13.6 mm for synthesized-to-full, full-to-synthesized, and synthesized-to-synthesized registrations, respectively. In comparison, registration using the cropped images had errors of 31.7 mm and higher. Plotting the registered image contour error as a function of initial preregistered error shows that our method is robust to registration difficulty. Synthesized-to-full registration was statistically independent of cropping extent up to 18.7 cm of superior cropping. Synthesized-to-synthesized registration was nearly independent, with a change of -0.04 mm in average contour error for every additional millimeter of cropping. CONCLUSIONS: Differences or inadequacies in scan extent are a major cause of DIR inaccuracies. We address this challenge by training a neural network to complete cropped 3D images. We show that with image completion, this source of DIR inaccuracy is eliminated, and the method is robust to varying crop extent.
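The 95% Hausdorff distance used here to score contour agreement can be computed as the 95th percentile of symmetric nearest-neighbor surface distances; a minimal sketch follows, assuming the contours are available as (N, 3) point arrays.

```python
import numpy as np
from scipy.spatial import cKDTree

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """95th-percentile symmetric surface distance between point sets."""
    d_ab = cKDTree(points_b).query(points_a)[0]  # A -> B distances
    d_ba = cKDTree(points_a).query(points_b)[0]  # B -> A distances
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))
```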
An effective deep network for automatic segmentation of complex lung tumors in CT images
Wang, B.
Chen, K.
Tian, X.
Yang, Y.
Zhang, X.
Med Phys2021Journal Article, cited 0 times
Website
RIDER NEURO MRI
LIDC-IDRI
Computed Tomography (CT)
Segmentation
Semantic features
Deep Learning
Algorithm Development
PURPOSE: Accurate segmentation of complex tumors in lung computed tomography (CT) images is essential to improve the effectiveness and safety of lung cancer treatment. However, the characteristics of heterogeneity, blurred boundaries, and large-area adhesion to tissues with similar gray-scale features always make the segmentation of complex tumors difficult. METHODS: This study proposes an effective deep network for the automatic segmentation of complex lung tumors (CLT-Net). The network architecture uses an encoder-decoder model that combines long and short skip connections and a global attention unit to identify target regions using multiscale semantic information. A boundary-aware loss function integrating Tversky loss and boundary loss based on the level-set calculation is designed to improve the network's ability to perceive boundary positions of difficult-to-segment (DTS) tumors. We use a dynamic weighting strategy to balance the contributions of the two parts of the loss function. RESULTS: The proposed method was verified on a dataset consisting of 502 lung CT images containing DTS tumors. The experiments show that the Dice similarity coefficient and Hausdorff distance metric of the proposed method are improved by 13.2% and 8.5% on average, respectively, compared with state-of-the-art segmentation models. Furthermore, we selected three additional medical image datasets with different modalities to evaluate the proposed model. Compared with mainstream architectures, the Dice similarity coefficient is also improved to a certain extent, which demonstrates the effectiveness of our method for segmenting medical images. CONCLUSIONS: Quantitative and qualitative results show that our method outperforms current mainstream lung tumor segmentation networks in terms of Dice similarity coefficient and Hausdorff distance. Note that the proposed method is not limited to the segmentation of complex lung tumors but also performs well on medical images of other modalities.
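One half of the boundary-aware loss named above is the Tversky loss; a minimal PyTorch sketch follows (the level-set boundary loss and the dynamic weighting are omitted, and the alpha/beta values are assumptions).

```python
import torch

def tversky_loss(pred: torch.Tensor, target: torch.Tensor,
                 alpha: float = 0.3, beta: float = 0.7) -> torch.Tensor:
    """pred: probabilities in [0,1]; target: binary mask; same shape.
    alpha penalizes false positives, beta penalizes false negatives."""
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    return 1.0 - tp / (tp + alpha * fp + beta * fn + 1e-7)
```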
Deformation driven Seq2Seq longitudinal tumor and organs‐at‐risk prediction for radiotherapy
Lee, Donghoon
Alam, Sadegh R.
Jiang, Jue
Zhang, Pengpeng
Nadeem, Saad
Hu, Yu‐chi
Medical Physics2021Journal Article, cited 0 times
HNSCC-3DCT-RT
PURPOSE: Radiotherapy presents unique challenges and clinical requirements for longitudinal tumor and organ-at-risk (OAR) prediction during treatment. The challenges include tumor inflammation/edema and radiation-induced changes in organ geometry, whereas the clinical requirements demand flexibility in input/output sequence timepoints to update the predictions on a rolling basis and the grounding of all predictions in relationship to the pre-treatment imaging information for response and toxicity assessment in adaptive radiotherapy.
METHODS: To deal with the aforementioned challenges and to comply with the clinical requirements, we present a novel 3D sequence-to-sequence model based on Convolution Long Short-Term Memory (ConvLSTM) that makes use of series of deformation vector fields (DVFs) between individual timepoints and reference pre-treatment/planning CTs to predict future anatomical deformations and changes in gross tumor volume as well as critical OARs. High-quality DVF training data were created by employing hyperparameter optimization on a subset of the training data, guided by the Dice coefficient and a mutual information metric. We validated our model on two radiotherapy datasets: a publicly available head-and-neck dataset (28 patients with manually contoured pre-, mid-, and post-treatment CTs), and an internal non-small cell lung cancer dataset (63 patients with manually contoured planning CT and six weekly CBCTs).
RESULTS: The use of the DVF representation and skip connections overcomes the blurring issue of ConvLSTM prediction with the traditional image representation. The mean and standard deviation of Dice for predictions of the lung GTV at weeks 4, 5, and 6 were 0.83 ± 0.09, 0.82 ± 0.08, and 0.81 ± 0.10, respectively, and for the post-treatment ipsilateral and contralateral parotids were 0.81 ± 0.06 and 0.85 ± 0.02.
CONCLUSION: We presented a novel DVF-based Seq2Seq model for medical images, leveraging the complete 3D imaging information of a relatively large longitudinal clinical dataset, to carry out longitudinal GTV/OAR predictions for anatomical changes in HN and lung radiotherapy patients, which has potential to improve RT outcomes.
Low‐dose CT reconstruction with Noise2Noise network and testing‐time fine‐tuning
Wu, Dufan
Kim, Kyungsang
Li, Quanzheng
Medical Physics2021Journal Article, cited 0 times
LDCT-and-Projection-data
PURPOSE: Deep learning-based image denoising and reconstruction methods have demonstrated promising performance on low-dose CT imaging in recent years. However, most existing deep learning-based low-dose CT reconstruction methods require normal-dose images for training. Sometimes such clean images do not exist, for example, in dynamic CT imaging or for very large patients. The purpose of this work is to develop a deep learning-based low-dose CT image reconstruction algorithm that does not need clean images for training.
METHODS: In this paper, we proposed a novel reconstruction algorithm where the image prior was expressed via the Noise2Noise network, whose weights were fine-tuned along with the image during the iterative reconstruction. The Noise2Noise network builds a self-consistent loss by splitting the projection data and mapping the corresponding filtered backprojection (FBP) results to each other with a deep neural network. In addition, the network weights are optimized along with the image to be reconstructed under an alternating optimization scheme. In the proposed method, no clean image is needed for network training, and the testing-time fine-tuning leads to an optimization for each reconstruction.
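The projection-splitting idea reduces to a simple training objective: reconstruct two FBP images from disjoint projection subsets and train the network to map one to the other. A conceptual sketch follows, with the two reconstructions assumed to be precomputed tensors and the full alternating image/weight optimization omitted.

```python
import torch

def noise2noise_loss(net: torch.nn.Module,
                     recon_half1: torch.Tensor,
                     recon_half2: torch.Tensor) -> torch.Tensor:
    """recon_half{1,2}: FBP images from disjoint projection subsets.
    The network learns to map one noisy realization to the other."""
    return torch.mean((net(recon_half1) - recon_half2) ** 2)
```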
RESULTS: We used the 2016 Low-dose CT Challenge dataset to validate the feasibility of the proposed method. We compared its performance to several existing iterative reconstruction algorithms that do not need clean training data, including total variation, non-local mean, convolutional sparse coding, and Noise2Noise denoising. It was demonstrated that the proposed Noise2Noise reconstruction achieved better RMSE, SSIM and texture preservation compared to the other methods. The performance is also robust against the different noise levels, hyperparameters, and network structures used in the reconstruction. Furthermore, we also demonstrated that the proposed methods achieved competitive results without any pre-training of the network at all, that is, using randomly initialized network weights during testing. The proposed iterative reconstruction algorithm also has empirical convergence with and without network pre-training.
CONCLUSIONS: The proposed Noise2Noise reconstruction method can achieve promising image quality in low-dose CT image reconstruction. The method works both with and without pre-training, and only noisy data are required for pre-training.
Assessment of the global noise algorithm for automatic noise measurement in head CT examinations
Ahmad, M.
Tan, D.
Marisetty, S.
Med Phys2021Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Computed Tomography (CT)
Image processing
quality control
PURPOSE: The global noise (GN) algorithm has been previously introduced as a method for automatic noise measurement in clinical CT images. The accuracy of the GN algorithm has been assessed in abdomen CT examinations, but not in any other body part until now. This work assesses the GN algorithm's accuracy for automatic noise measurement in head CT examinations. METHODS: A publicly available image dataset of 99 head CT examinations was used to evaluate the accuracy of the GN algorithm in comparison to reference noise values. Reference noise values were acquired using a manual noise measurement procedure. The procedure used a consistent instruction protocol and multiple observers to mitigate the influence of intra- and interobserver variation, resulting in precise reference values. Optimal GN algorithm parameter values were determined. The GN algorithm accuracy and the corresponding statistical confidence interval were determined. The GN measurements were compared across the six different scan protocols used in this dataset. The correlation of GN to patient head size was also assessed using a linear regression model, and the CT scanner's X-ray beam quality was inferred from the model fit parameters. RESULTS: Across all head CT examinations in the dataset, the range of reference noise was 2.9-10.2 HU. A precision of ±0.33 HU was achieved in the reference noise measurements. After optimization, the GN algorithm had an RMS error of 0.34 HU, corresponding to a percent RMS error of 6.6%. The GN algorithm had a bias of +3.9%. Statistically significant differences in GN were detected in 11 out of the 15 different pairs of scan protocols. The GN measurements were correlated with head size, with a statistically significant regression slope parameter (p < 10^-7). The CT scanner X-ray beam quality estimated from the slope parameter was 3.5 cm water HVL (95% CI: 2.8-4.8 cm). CONCLUSION: The GN algorithm was validated for application in head CT examinations. The GN algorithm was accurate in comparison to reference manual measurement, with errors comparable to the interobserver variation in manual measurement. The GN algorithm can detect noise differences in examinations performed on different scanner models or using different scan protocols. The trend in GN across patients of different head sizes closely follows that predicted by a physical model of X-ray attenuation.
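A hedged sketch of a global-noise-style measurement follows: the mode of a local standard-deviation map restricted to a soft-tissue HU window. The kernel size, HU window, and histogram binning stand in for the "optimal GN algorithm parameter values" determined in the study and are not the validated settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def global_noise(hu_slice: np.ndarray, lo: float = 0, hi: float = 100,
                 kernel: int = 5) -> float:
    """Mode of the local-SD map within a soft-tissue HU window."""
    hu = hu_slice.astype(np.float64)
    mean = uniform_filter(hu, kernel)
    sq_mean = uniform_filter(hu ** 2, kernel)
    local_sd = np.sqrt(np.clip(sq_mean - mean ** 2, 0.0, None))
    tissue = (hu > lo) & (hu < hi)            # soft-tissue mask
    hist, edges = np.histogram(local_sd[tissue], bins=100)
    return float(edges[np.argmax(hist)])      # mode of the noise map
```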
Low‐dose CT denoising via convolutional neural network with an observer loss function
Han, Minah
Shim, Hyunjung
Baek, Jongduk
Medical Physics2021Journal Article, cited 0 times
LDCT-and-Projection-data
PURPOSE: Convolutional neural network (CNN)-based denoising is an effective method for reducing complex computed tomography (CT) noise. However, the image blur induced by denoising processes is a major concern. The main source of image blur is the pixel-level loss (e.g., mean squared error [MSE] and mean absolute error [MAE]) used to train a CNN denoiser. To reduce the image blur, feature-level loss is utilized to train a CNN denoiser. A CNN denoiser trained using visual geometry group (VGG) loss can preserve the small structures, edges, and texture of the image. However, VGG loss, derived from an ImageNet-pretrained image classifier, is not optimal for training a CNN denoiser for CT images. ImageNet contains natural RGB images, so the features extracted by the ImageNet-pretrained model cannot represent the characteristics of CT images that are highly correlated with diagnosis. Furthermore, a CNN denoiser trained with VGG loss causes bias in CT number. Therefore, we propose to use a binary classification network trained using CT images as a feature extractor and newly define the feature-level loss as observer loss.
METHODS: As obtaining labeled CT images for training classification network is difficult, we create labels by inserting simulated lesions. We conduct two separate classification tasks, signal-known-exactly (SKE) and signal-known-statistically (SKS), and define the corresponding feature-level losses as SKE loss and SKS loss, respectively. We use SKE loss and SKS loss to train CNN denoiser.
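The observer loss is a feature-level loss computed from a task-based classifier rather than from VGG. A minimal sketch of such a feature-level distance follows; `observer_features`, the layer choice, and the L2 distance are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def feature_level_loss(observer_features, denoised: torch.Tensor,
                       reference: torch.Tensor) -> torch.Tensor:
    """observer_features: a module returning a list of intermediate
    feature maps from the CT-trained classification network."""
    f_d = observer_features(denoised)
    f_r = observer_features(reference)
    return sum(torch.mean((a - b) ** 2) for a, b in zip(f_d, f_r))
```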
RESULTS: Compared to pixel-level losses, a CNN denoiser trained using observer loss (i.e., SKE loss and SKS loss) is effective in preserving structure, edges, and texture. Observer loss also resolves the bias in CT number, which is a problem of VGG loss. Comparing the observer losses from the SKE and SKS tasks, SKS loss yields images whose noise structure is more similar to that of the reference images.
CONCLUSIONS: Using observer loss for training CNN denoiser is effective to preserve structure, edge, and texture in denoised images and prevent the CT number bias. In particular, when using SKS loss, denoised images having a similar noise structure to reference images are generated.
CARes‐UNet: Content‐Aware residual UNet for lesion segmentation of COVID‐19 from chest CT images
Xu, Xinhua
Wen, Yuhang
Zhao, Lu
Zhang, Yi
Zhao, Youjun
Tang, Zixuan
Yang, Ziduo
Chen, Calvin Yu‐Chian
Medical Physics2021Journal Article, cited 0 times
Website
CT Images in COVID-19
U-Net
Machine Learning
COVID-19
Image‐based shading correction for narrow‐FOV truncated pelvic CBCT with deep convolutional neural networks and transfer learning
Rossi, Matteo
Belotti, Gabriele
Paganelli, Chiara
Pella, Andrea
Barcellini, Amelia
Cerveri, Pietro
Baroni, Guido
Medical Physics2021Journal Article, cited 0 times
Pelvic-Reference-Data
PURPOSE: Cone beam computed tomography (CBCT) is a standard solution for in-room image guidance for radiation therapy. It is used to evaluate and compensate for anatomopathological changes between the dose delivery plan and the fraction delivery day. CBCT is a fast and versatile solution, but it suffers from drawbacks like low contrast and requires proper calibration to derive density values. Although these limitations are even more prominent with in-room customized CBCT systems, strategies based on deep learning have shown potential in improving image quality. As such, this article presents a method based on a convolutional neural network and a novel two-step supervised training based on the transfer learning paradigm for shading correction in CBCT volumes with narrow field of view (FOV) acquired with an ad hoc in-room system.
METHODS: We designed a U-Net convolutional neural network, trained on axial slices of corresponding CT/CBCT couples. To improve the generalization capability of the network, we exploited two-stage learning using two distinct data sets. At first, the network weights were trained using synthetic CBCT scans generated from a public data set, and then only the deepest layers of the network were trained again with real-world clinical data to fine-tune the weights. Synthetic data were generated according to real data acquisition parameters. The network takes a single grayscale volume as input and outputs the same volume with corrected shading and improved HU values.
RESULTS: Evaluation was carried out with leave-one-out cross-validation, computed on 18 unique CT/CBCT pairs from six different patients from a real-world dataset. Comparing original CBCT to CT and improved CBCT to CT, we obtained an average improvement of 6 dB in peak signal-to-noise ratio (PSNR) and +2% in structural similarity index measure (SSIM). The median (interquartile range, IQR) Hounsfield unit (HU) difference between CBCT and CT improved from 161.37 (162.54) HU to 49.41 (66.70) HU. The region of interest (ROI)-based HU difference was narrowed by 75% in the spongy bone (femoral head), 89% in the bladder, 85% for fat, and 83% for muscle. The improvement in contrast-to-noise ratio for these ROIs was about 67%.
CONCLUSIONS: We demonstrated that shading correction yielding CT-compatible data from narrow-FOV CBCTs acquired with a customized in-room system is possible. Moreover, the transfer learning approach proved particularly beneficial for such a shading correction approach.
Lung-CRNet: A convolutional recurrent neural network for lung 4DCT image registration
Lu, J.
Jin, R.
Song, E.
Ma, G.
Wang, M.
Med Phys2021Journal Article, cited 0 times
Website
4D-Lung
Computed Tomography (CT)
Deep Learning
Image Registration
recurrent neural network
PURPOSE: Deformable image registration (DIR) of lung four-dimensional computed tomography (4DCT) plays a vital role in a wide range of clinical applications. Most of the existing deep learning-based lung 4DCT DIR methods focus on pairwise registration, which aims to register two images with large deformation. However, the temporal continuities of deformation fields between phases are ignored. This paper proposes a fast and accurate deep learning-based lung 4DCT DIR approach that leverages the temporal component of 4DCT images. METHODS: We present Lung-CRNet, an end-to-end convolutional recurrent registration neural network for lung 4DCT images, and reformulate 4DCT DIR as a spatiotemporal sequence prediction problem in which the input is a sequence of three-dimensional computed tomography images from the inspiratory phase to the expiratory phase in a respiratory cycle. The first phase in the sequence is selected as the only reference image and the rest as moving images. Multiple convolutional gated recurrent units (ConvGRUs) are stacked to capture the temporal clues between images. The proposed network is trained in an unsupervised way using a spatial transformer layer. During inference, Lung-CRNet is able to yield the respective displacement field for each reference-moving image pair in the input sequence. RESULTS: We have trained the proposed network using a publicly available lung 4DCT dataset and evaluated performance on the widely used DIR-Lab dataset. The mean and standard deviation of the target registration error are 1.56 ± 1.05 mm on the DIR-Lab dataset. The computation time for each forward prediction is less than 1 s on average. CONCLUSIONS: The proposed Lung-CRNet is comparable to the existing state-of-the-art deep learning-based 4DCT DIR methods in both accuracy and speed. Additionally, the architecture of Lung-CRNet can be generalized to suit other groupwise registration tasks that align multiple images simultaneously.
Progressive attention module for segmentation of volumetric medical images
Zhang, Minghui
Pan, Hong
Zhu, Yaping
Gu, Yun
Medical Physics2021Journal Article, cited 0 times
BraTS-TCGA-GBM
BraTS-TCGA-LGG
PURPOSE: Medical image segmentation is critical for many medical image analysis applications. 3D convolutional neural networks (CNNs) have been widely adopted in the segmentation of volumetric medical images. The recent development of channelwise and spatialwise attentions achieves the state-of-the-art feature representation performance. However, these attention strategies have not explicitly modeled interdependencies among slices in 3D medical volumes. In this work, we propose a novel attention module called progressive attention module (PAM) to explicitly model the slicewise importance for 3D medical image analysis.
METHODS: The proposed method is composed of three parts: the Slice Attention (SA) block, the Key-Slice-Selection (KSS) block, and the Channel Attention (CA) block. First, SA is a novel attention block that explores the correlation among slices for 3D medical image segmentation. SA is designed to explicitly reweight the importance of each slice in the 3D medical image scan. Second, the KSS block, cooperating with the SA block, is designed to adaptively emphasize critical slice features while suppressing irrelevant slice features, which helps the model focus on the slices with rich structural and contextual information. Finally, the CA block receives the output of KSS as input for further feature recalibration. Our proposed PAM organically combines SA, KSS, and CA, progressively highlighting the key slices that contain rich information for the relevant tasks while suppressing irrelevant slices.
RESULTS: To demonstrate the effectiveness of PAM, we embedded it into 3D CNN architectures and evaluated the segmentation performance on three public challenging data sets: the BraTS 2018 data set, the MALC data set, and the HVSMR data set. We achieved Dice similarity coefficients of 80.34%, 88.98%, and 84.43% on these three data sets, respectively. Experimental results show that the proposed PAM not only boosts the segmentation accuracy of standard 3D CNN methods consistently, but also outperforms other attention mechanisms at slight extra cost.
CONCLUSIONS: We propose a new PAM to identify the most informative slices and recalibrate channelwise feature responses for volumetric medical image segmentation. The proposed method was evaluated on three public data sets, and the results show improvements over other methods. This technique can effectively assist physicians in many medical image analysis tasks and is anticipated to be generalizable and transferable to a wider range of medical imaging applications, producing greater value and impact for health care.
Dynamic boundary‐insensitive loss for magnetic resonance medical image segmentation
Qiu, Mingyan
Zhang, Chenxi
Song, Zhijian
Medical Physics2021Journal Article, cited 0 times
ISBI-MR-Prostate-2013
PURPOSE: Deep learning methods have achieved great success in MR medical image segmentation. One challenge in applying deep learning segmentation models to clinical practice is their poor generalization, mainly due to limited labeled training samples, inter-site heterogeneity across datasets, and ambiguous boundary definitions. The objective of this work is to develop a dynamic boundary-insensitive (DBI) loss to address the poor generalization caused by uncertain boundaries.
METHODS: The DBI loss is designed to assign higher penalties to misclassified voxels farther from the boundaries in each training iteration, reducing the sensitivity of the segmentation model to uncertain boundaries. The weighting factor of the DBI loss can be adjusted adaptively without any manual setting or adjustment. Extensive experiments were conducted to verify the performance of our DBI loss and its variant, DiceDBI, on four heterogeneous prostate MRI datasets for prostate zonal segmentation and whole-prostate segmentation.
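A sketch in the spirit of the DBI weighting follows: each misclassified voxel is penalized in proportion to its Euclidean distance from the ground-truth boundary, so ambiguous near-boundary voxels contribute little. The linear weighting and the NumPy formulation are assumptions, not the paper's exact loss.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dbi_style_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """pred: probabilities; target: binary mask; same 3D shape."""
    # Approximate distance of every voxel to the target-mask boundary:
    # inside voxels via the EDT of the mask, outside via its complement.
    dist = distance_transform_edt(target) + distance_transform_edt(1 - target)
    err = np.abs(pred - target)
    return float((dist * err).sum() / (dist.sum() + 1e-7))
```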
RESULTS: Experimental results show that our DBI loss, when combined with Dice loss, outperforms all competing loss functions in Dice similarity coefficient (DSC) and improves the segmentation performance across all datasets consistently, especially on unseen datasets and when segmenting small or narrow targets.
CONCLUSIONS: The proposed DiceDBI loss will be valuable for enhancing the generalization performance of segmentation models.
Pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert organ contours
Jordan, P.
Adamson, P. M.
Bhattbhatt, V.
Beriwal, S.
Shen, S.
Radermecker, O.
Bose, S.
Strain, L. S.
Offe, M.
Fraley, D.
Principi, S.
Ye, D. H.
Wang, A. S.
Van Heteren, J.
Vo, N. J.
Schmidt, T. G.
Med Phys2022Journal Article, cited 0 times
Website
Pediatric-CT-SEG
PURPOSE: Organ autosegmentation efforts to date have largely been focused on adult populations, due to limited availability of pediatric training data. Pediatric patients may present additional challenges for organ segmentation. This paper describes a dataset of 359 pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert contours of up to 29 anatomical organ structures to aid in the evaluation and development of autosegmentation algorithms for pediatric CT imaging. ACQUISITION AND VALIDATION METHODS: The dataset collection consists of axial CT images in DICOM format of 180 male and 179 female pediatric chest-abdomen-pelvis or abdomen-pelvis exams acquired from one of three CT scanners at Children's Wisconsin. The datasets represent random pediatric cases based upon routine clinical indications. Subjects ranged in age from 5 days to 16 years, with a mean age of seven years. The CT acquisition, contrast, and reconstruction protocols varied across the scanner models and patients, with specifications available in the DICOM headers. Expert contours were manually labeled for up to 29 organ structures per subject. Not all contours are available for all subjects, due to limited field of view or unreliable contouring caused by high noise. DATA FORMAT AND USAGE NOTES: The data are available on TCIA (https://www.cancerimagingarchive.net/) under the collection Pediatric-CT-SEG. The axial CT image slices for each subject are available in DICOM format. The expert contours are stored in a single DICOM RTSTRUCT file for each subject. The contours are named as listed in Table 2. POTENTIAL APPLICATIONS: This dataset will enable the evaluation and development of organ autosegmentation algorithms for pediatric populations, which exhibit variations in organ shape and size across age. Automated organ segmentation from CT images has numerous applications including radiation therapy, diagnostic tasks, surgical planning, and patient-specific organ dose estimation.
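The per-subject contours can be listed from the RTSTRUCT file with pydicom along these lines; the file path below is a placeholder, not the collection's actual directory layout.

```python
import pydicom

# Placeholder path to one subject's structure-set file.
ds = pydicom.dcmread("Pediatric-CT-SEG/subject_001/RTSTRUCT.dcm")
for roi in ds.StructureSetROISequence:
    # Prints the ROI number and organ name for each expert contour.
    print(roi.ROINumber, roi.ROIName)
```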
Technical note: Evaluation of a V‐Net autosegmentation algorithm for pediatric CT scans: Performance, generalizability, and application to patient‐specific CT dosimetry
Adamson, Philip M.
Bhattbhatt, Vrunda
Principi, Sara
Beriwal, Surabhi
Strain, Linda S.
Offe, Michael
Wang, Adam S.
Vo, Nghia‐Jack
Schmidt, Taly Gilat
Jordan, Petr
Medical Physics2022Journal Article, cited 0 times
Pediatric-CT-SEG
PURPOSE: This study developed and evaluated a fully convolutional network (FCN) for pediatric CT organ segmentation and investigated the generalizability of the FCN across image heterogeneities such as CT scanner model protocols and patient age. We also evaluated the autosegmentation models as part of a software tool for patient-specific CT dose estimation.
METHODS: A collection of 359 pediatric CT datasets with expert organ contours was used for model development and evaluation. Autosegmentation models were trained for each organ using a modified FCN 3D V-Net. An independent test set of 60 patients was withheld for testing. To evaluate the impact of CT scanner model protocol and patient age heterogeneities, separate models were trained using a subset of scanner model protocols and pediatric age groups. Training and test sets were split to answer questions about the generalizability of pediatric FCN autosegmentation models to unseen age groups and scanner model protocols, as well as the merit of scanner-model-protocol- or age-group-specific models. Finally, the organ contours resulting from the autosegmentation models were applied to patient-specific dose maps to evaluate the impact of segmentation errors on organ dose estimation.
RESULTS: Results demonstrate that the autosegmentation models generalize to CT scanner acquisition and reconstruction methods which were not present in the training dataset. While models are not equally generalizable across age groups, age-group-specific models do not hold any advantage over combining heterogeneous age groups into a single training set. Dice similarity coefficient (DSC) and mean surface distance results are presented for 19 organ structures, for example, median DSC of 0.52 (duodenum), 0.74 (pancreas), 0.92 (stomach), and 0.96 (heart). The FCN models achieve a mean dose error within 5% of expert segmentations for all 19 organs except for the spinal canal, where the mean error was 6.31%.
CONCLUSIONS: Overall, these results are promising for the adoption of FCN autosegmentation models for pediatric CT, including applications for patient-specific CT dose estimation.
Anatomically and physiologically informed computational model of hepatic contrast perfusion for virtual imaging trials
Sauer, Thomas J.
Abadi, Ehsan
Segars, Paul
Samei, Ehsan
Medical Physics2022Journal Article, cited 0 times
TCGA-LIHC
PURPOSE: Virtual (in silico) imaging trials (VITs), involving computerized phantoms and models of the imaging process, provide a modern alternative to clinical imaging trials. VITs are faster, safer, and enable otherwise-impossible investigations. Current phantoms used in VITs are limited in their ability to model functional behavior such as contrast perfusion, which is an important determinant of dose and image quality in CT imaging. In our prior work with the XCAT computational phantoms, we determined and modeled inter-organ (organ-to-organ) intravenous contrast concentration as a function of time from injection. However, intra-organ concentration (the heterogeneous distribution within a given organ) was not pursued. We extend our methods in this work to model intra-organ concentration within the XCAT phantom, with a specific focus on the liver.
METHODS: Intra-organ contrast perfusion depends on the organ's vessel network. We modeled the intricate vascular structures of the liver, informed by empirical and theoretical observations of anatomy and physiology. The developed vessel generation algorithm modeled a dual-input-single-output vascular network as a series of bifurcating surfaces to optimally deliver flow within the bounding surface of a given XCAT liver. Using this network, contrast perfusion was simulated within voxelized versions of the phantom by using knowledge of the blood velocities in each vascular structure, the vessel diameters and lengths, and the time since the contrast entered the hepatic artery. The utility of the enhanced phantom was demonstrated through a simulation study in which the phantom was voxelized prior to CT simulation, with the relevant liver vasculature prepared to represent blood and iodinated contrast media. The spatial extent of the blood-contrast mixture was compared to clinical data.
RESULTS: The vascular structures of the liver were generated with sizes and orientations that minimized the energy expenditure required to maintain blood flow. Intravenous contrast was simulated as having a known concentration and known total volume in the liver, as calibrated from time-concentration curves. Measurements of simulated CT ROIs were found to agree with clinically observed values of early arterial phase contrast enhancement of the parenchyma (∼5 HU). Similarly, early enhancement in the hepatic artery was found to agree with average clinical enhancement (180 HU).
CONCLUSIONS: The computational methods presented here furthered the development of the XCAT phantoms allowing for multi-timepoint contrast perfusion simulations, enabling more anthropomorphic virtual clinical trials intended for optimization of current clinical imaging technologies and applications.
Feature fusion Siamese network for breast cancer detection comparing current and prior mammograms
Bai, J.
Jin, A.
Wang, T.
Yang, C.
Nabavi, S.
Med Phys2022Journal Article, cited 0 times
CBIS-DDSM
CMMD
BCS-DBT
BREAST
Automatic detection
Artificial Intelligence
*Breast Neoplasms/diagnostic imaging
Female
Humans
Machine Learning
Mammography/methods
Neural Networks, Computer
Siamese
deep learning
prior mammogram
PURPOSE: Automatic detection of very small and nonmass abnormalities from mammogram images has remained challenging. In clinical practice, radiologists commonly not only screen the mammogram images obtained during the examination but also compare them with each patient's previous mammogram images to make a clinical decision. To design an artificial intelligence (AI) system that mimics radiologists for better cancer detection, in this work we proposed an end-to-end enhanced Siamese convolutional neural network to detect breast cancer using previous-year and current-year mammogram images. METHODS: The proposed Siamese-based network uses high-resolution mammogram images and fuses features of pairs of previous-year and current-year mammogram images to predict cancer probabilities. The proposed approach is developed based on the concept of one-shot learning, which learns the abnormal differences between current and prior images instead of abnormal objects, and as a result can perform better with small-sample-size data sets. We developed two variants of the proposed network. In the first model, to fuse the features of current and previous images, we designed an enhanced distance learning network that considers not only the overall distance, but also the pixel-wise distances between the features. In the other model, we concatenated the features of current and previous images to fuse them. RESULTS: We compared the performance of the proposed models with those of some baseline models that use current images only (ResNet and VGG) and that use both current and prior images (long short-term memory [LSTM] and vanilla Siamese) in terms of accuracy, sensitivity, precision, F1 score, and area under the curve (AUC). Results show that the proposed models outperform the baseline models, and the proposed model with the distance learning network performs best (accuracy: 0.92, sensitivity: 0.93, precision: 0.91, specificity: 0.91, F1: 0.92, and AUC: 0.95). CONCLUSIONS: Integrating prior mammogram images improves automatic cancer classification, especially for very small and nonmass abnormalities. For classification models that integrate current and prior mammogram images, using an enhanced and effective distance learning network can advance the performance of the models.
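A minimal PyTorch sketch of the current/prior fusion idea follows: a shared-weight encoder processes both mammograms, and the element-wise absolute difference of their feature maps (pixel-wise distances rather than a single scalar) feeds the classifier head. The encoder and head modules are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SiameseDistanceFusion(nn.Module):
    def __init__(self, encoder: nn.Module, head: nn.Module):
        super().__init__()
        self.encoder = encoder  # shared weights for both inputs
        self.head = head        # classifier over the difference map

    def forward(self, current: torch.Tensor, prior: torch.Tensor):
        f_cur = self.encoder(current)
        f_pri = self.encoder(prior)
        # Pixel-wise feature distances, preserving spatial layout.
        return self.head(torch.abs(f_cur - f_pri))
```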
HFCF‐Net: A hybrid‐feature cross fusion network for COVID‐19 lesion segmentation from CT volumetric images
Wang, Yanting
Yang, Qingyu
Tian, Lixia
Zhou, Xuezhong
Rekik, Islem
Huang, Huifang
Medical Physics2022Journal Article, cited 0 times
CT Images in COVID-19
BACKGROUND: The coronavirus disease 2019 (COVID-19) has spread rapidly across the globe, seriously threatening people's health worldwide. To reduce the diagnostic pressure on front-line doctors, an accurate and automatic lesion segmentation method is highly desirable in clinical practice.
PURPOSE: Many proposed two-dimensional (2D) methods for slice-based lesion segmentation cannot take full advantage of the spatial information in three-dimensional (3D) volume data, resulting in limited segmentation performance. Three-dimensional methods can utilize the spatial information but suffer from long training times and slow convergence. To solve these problems, we propose an end-to-end hybrid-feature cross fusion network (HFCF-Net) to fuse 2D and 3D features at three scales for the accurate segmentation of COVID-19 lesions.
METHODS: The proposed HFCF-Net incorporates 2D and 3D subnets to extract features within and between slices effectively. Then the cross fusion module is designed to bridge 2D and 3D decoders at the same scale to fuse both types of features. The module consists of three cross fusion blocks, each of which contains a prior fusion path and a context fusion path to jointly learn better lesion representations. The former aims to explicitly provide the 3D subnet with lesion-related prior knowledge, and the latter utilizes the 3D context information as the attention guidance of the 2D subnet, which promotes the precise segmentation of the lesion regions. Furthermore, we explore an imbalance-robust adaptive learning loss function that includes image-level loss and pixel-level loss to tackle the problems caused by the apparent imbalance between the proportions of the lesion and non-lesion voxels, providing a learning strategy to dynamically adjust the learning focus between 2D and 3D branches during the training process for effective supervision.
RESULTS: Extensive experiments conducted on a publicly available dataset demonstrate that the proposed segmentation network significantly outperforms some state-of-the-art methods for COVID-19 lesion segmentation, yielding a Dice similarity coefficient of 74.85%. The visual comparison of segmentation performance also proves the superiority of the proposed network in segmenting different-sized lesions.
CONCLUSIONS: In this paper, we propose a novel HFCF-Net for rapid and accurate COVID-19 lesion segmentation from chest computed tomography volume data. It innovatively fuses hybrid features in a cross manner for lesion segmentation, aiming to utilize the advantages of 2D and 3D subnets to complement each other for enhancing the segmentation performance. Benefitting from the cross fusion mechanism, the proposed HFCF-Net can segment the lesions more accurately with the knowledge acquired from both subnets.
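As a rough illustration of the cross fusion module's two paths (context: 3D guides 2D; prior: 2D informs 3D), here is a hedged PyTorch sketch; channel sizes, the sigmoid gating, and the 1×1 convolutions are assumptions, not the published HFCF-Net:

```python
# Sketch of one cross fusion block bridging 2D and 3D decoder features at
# the same scale. Illustrative only; module choices are assumptions.
import torch
import torch.nn as nn

class CrossFusionBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.prior = nn.Conv3d(ch, ch, 1)

    def forward(self, feat2d, feat3d):
        # feat2d: (B*D, C, H, W) per-slice features; feat3d: (B, C, D, H, W)
        b, c, d, h, w = feat3d.shape
        ctx = feat3d.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        feat2d = feat2d * self.gate(ctx)        # context path: 3D attends the 2D subnet
        prior2d = feat2d.reshape(b, d, c, h, w).permute(0, 2, 1, 3, 4)
        feat3d = feat3d + self.prior(prior2d)   # prior path: 2D lesion cues inform 3D
        return feat2d, feat3d

blk = CrossFusionBlock(16)
f2d = torch.randn(2 * 8, 16, 32, 32)
f3d = torch.randn(2, 16, 8, 32, 32)
out2d, out3d = blk(f2d, f3d)
print(out2d.shape, out3d.shape)
```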
Limited parameter denoising for low-dose X-ray computed tomography using deep reinforcement learning
Patwari, M.
Gutjahr, R.
Raupach, R.
Maier, A.
Med Phys2022Journal Article, cited 0 times
Website
LDCT-and-Projection-data
Computed Tomography (CT)
Image denoising
Convolutional Neural Network (CNN)
BACKGROUND: The use of deep learning has successfully solved several problems in the field of medical imaging. Deep learning has been applied to the CT denoising problem successfully. However, the use of deep learning requires large amounts of data to train deep convolutional neural networks (CNNs). Moreover, due to the large parameter count, such deep CNNs may cause unexpected results. PURPOSE: In this study, we introduce a novel CT denoising framework, which has interpretable behavior and provides useful results with limited data. METHODS: We employ bilateral filtering in both the projection and volume domains to remove noise. To account for nonstationary noise, we tune the sigma parameters of the filter for every projection view and every volume pixel. The tuning is carried out by two deep CNNs. Due to the impracticality of labeling, the two deep CNNs are trained via a deep-Q reinforcement learning task. The reward for the task is generated by a custom reward function represented by a neural network. Our experiments were carried out on abdominal scans from the Mayo Clinic dataset in The Cancer Imaging Archive (TCIA) and the American Association of Physicists in Medicine (AAPM) Low Dose CT Grand Challenge. RESULTS: Our denoising framework has excellent denoising performance, increasing the peak signal-to-noise ratio (PSNR) from 28.53 to 28.93 dB and the structural similarity index (SSIM) from 0.8952 to 0.9204. We outperform several state-of-the-art deep CNNs, which have orders of magnitude more parameters (p-value [PSNR] = 0.000, p-value [SSIM] = 0.000). Our method does not introduce the blurring introduced by mean squared error (MSE) loss-based methods, or the deep learning artifacts introduced by Wasserstein generative adversarial network (WGAN)-based models. Our ablation studies show that parameter tuning and using our reward network yield the best results. CONCLUSIONS: We present a novel CT denoising framework, which focuses on interpretability to deliver good denoising performance, especially with limited data. Our method outperforms state-of-the-art deep neural networks. Future work will be focused on accelerating our method and generalizing it to different geometries and body parts.
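The action space here is the per-pixel filtering strength. Below is a toy NumPy sketch of a bilateral filter with a spatially varying range sigma, the quantity the paper's CNNs tune; the Deep-Q training loop and the reward network are omitted:

```python
# Illustrative sketch (not the authors' implementation): bilateral filtering
# of a 2D image where the range sigma varies per pixel, as an RL-tunable action.
import numpy as np

def bilateral_per_pixel(img, sigma_map, sigma_spatial=1.5, radius=2):
    H, W = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial_w = np.exp(-(xs**2 + ys**2) / (2 * sigma_spatial**2))
    pad = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weight uses the pixel-specific sigma predicted by a CNN
            range_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_map[i, j]**2))
            w = spatial_w * range_w
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

noisy = np.random.default_rng(0).normal(0, 0.1, (64, 64)) + 1.0
sigmas = np.full(noisy.shape, 0.15)  # in the paper, a CNN predicts these
print(bilateral_per_pixel(noisy, sigmas).shape)  # (64, 64)
```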
Distributed and scalable optimization for robust proton treatment planning
Fu, Anqi
Taasti, Vicki T.
Zarepisheh, Masoud
Medical Physics2022Journal Article, cited 0 times
HNSCC-3DCT-RT
BACKGROUND: The importance of robust proton treatment planning to mitigate the impact of uncertainty is well understood. However, its computational cost grows with the number of uncertainty scenarios, prolonging the treatment planning process.
PURPOSE: We developed a fast and scalable distributed optimization platform that parallelizes the robust proton treatment plan computation over the uncertainty scenarios.
METHODS: We modeled the robust proton treatment planning problem as a weighted least-squares problem. To solve it, we employed an optimization technique called the alternating direction method of multipliers with Barzilai-Borwein step size (ADMM-BB). We reformulated the problem in such a way as to split the main problem into smaller subproblems, one for each proton therapy uncertainty scenario. The subproblems can be solved in parallel, allowing the computational load to be distributed across multiple processors (e.g., CPU threads/cores). We evaluated ADMM-BB on four head-and-neck proton therapy patients, each with 13 scenarios accounting for 3 mm setup and 3.5% range uncertainties. We then compared the performance of ADMM-BB with projected gradient descent (PGD) applied to the same problem.
RESULTS: For each patient, ADMM-BB generated a robust proton treatment plan that satisfied all clinical criteria with comparable or better dosimetric quality than the plan generated by PGD. However, ADMM-BB ran about 6 to 7 times faster on average, and this speedup increased with the number of scenarios.
CONCLUSIONS: ADMM-BB is a powerful distributed optimization method that leverages parallel processing platforms, such as multicore CPUs, GPUs, and cloud servers, to accelerate the computationally intensive work of robust proton treatment planning. This results in (1) a shorter treatment planning process and (2) the ability to consider more uncertainty scenarios, which improves plan quality.
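A toy sketch of the scenario-splitting idea: consensus ADMM on a synthetic weighted least-squares problem, with one subproblem per uncertainty scenario that could be solved in parallel. The Barzilai-Borwein step-size rule and all clinical dose constraints are omitted, and matrix sizes are illustrative:

```python
# Consensus ADMM sketch: minimize sum_s ||A_s x - b_s||^2 by splitting into
# per-scenario copies x_s with a shared consensus plan x. Toy data only.
import numpy as np

rng = np.random.default_rng(0)
S, m, n, rho = 5, 40, 10, 1.0
A = [rng.normal(size=(m, n)) for _ in range(S)]   # per-scenario dose matrices
b = [rng.normal(size=m) for _ in range(S)]        # per-scenario targets

x = np.zeros(n)                                   # consensus plan variables
xs = [np.zeros(n) for _ in range(S)]
us = [np.zeros(n) for _ in range(S)]              # scaled dual variables

for _ in range(100):
    for s in range(S):                            # parallelizable across cores
        lhs = A[s].T @ A[s] + rho * np.eye(n)
        rhs = A[s].T @ b[s] + rho * (x - us[s])
        xs[s] = np.linalg.solve(lhs, rhs)
    x = np.mean([xs[s] + us[s] for s in range(S)], axis=0)  # consensus update
    for s in range(S):
        us[s] += xs[s] - x                        # dual ascent
print(np.round(x[:3], 3))
```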
Development and verification of radiomics framework for computed tomography image segmentation
Gu, Jiabing
Li, Baosheng
Shu, Huazhong
Zhu, Jian
Qiu, Qingtao
Bai, Tong
Medical Physics2022Journal Article, cited 0 times
Website
Credence Cartridge Radiomics Phantom CT Scans
PHANTOM
radiomics
Computed Tomography (CT)
Automated segmentation of five different body tissues on computed tomography using deep learning
Pu, L.
Gezer, N. S.
Ashraf, S. F.
Ocak, I.
Dresser, D. E.
Dhupar, R.
Med Phys2022Journal Article, cited 0 times
Website
NSCLC Radiogenomics
ACRIN-NSCLC-FDG-PET
NLST
C4KC-KiTS
Computed Tomography (CT)
Convolutional Neural Network (CNN)
Segmentation
PET/CT
PURPOSE: To develop and validate a computer tool for automatic and simultaneous segmentation of five body tissues depicted on computed tomography (CT) scans: visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT), intermuscular adipose tissue (IMAT), skeletal muscle (SM), and bone. METHODS: A cohort of 100 CT scans acquired on different subjects was collected from The Cancer Imaging Archive: 50 whole-body positron emission tomography (PET)-CTs, 25 chest CTs, and 25 abdominal CTs. Five different body tissues (i.e., VAT, SAT, IMAT, SM, and bone) were manually annotated. A training-while-annotating strategy was used to improve the annotation efficiency. The 10-fold cross-validation method was used to develop and validate the performance of several convolutional neural networks (CNNs), including UNet, Recurrent Residual UNet (R2Unet), and UNet++. A grid-based three-dimensional patch sampling operation was used to train the CNN models. The CNN models were also trained and tested separately for each body tissue to see if they could achieve a better performance than segmenting them jointly. The paired-sample t-test was used to statistically assess the performance differences among the involved CNN models. RESULTS: When segmenting the five body tissues simultaneously, the Dice coefficients ranged from 0.826 to 0.840 for VAT, from 0.901 to 0.908 for SAT, from 0.574 to 0.611 for IMAT, from 0.874 to 0.889 for SM, and from 0.870 to 0.884 for bone, which were significantly higher than the Dice coefficients when segmenting the body tissues separately (p < 0.05), namely, from 0.744 to 0.819 for VAT, from 0.856 to 0.896 for SAT, from 0.433 to 0.590 for IMAT, from 0.838 to 0.871 for SM, and from 0.803 to 0.870 for bone. CONCLUSION: There were no significant differences among the CNN models in segmenting body tissues, but jointly segmenting body tissues achieved better performance than segmenting them separately.
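The grid-based 3D patch sampling mentioned in the methods can be illustrated with a short NumPy sketch; the patch and stride sizes are assumptions, and the volume is assumed to be at least one patch large in each dimension:

```python
# Sketch of grid-based 3D patch sampling for CNN training: tile a CT volume
# into overlapping fixed-size patches on a regular grid. Sizes illustrative.
import numpy as np

def grid_patches(volume, patch=(64, 64, 64), stride=(32, 32, 32)):
    D, H, W = volume.shape
    out = []
    for z in range(0, D - patch[0] + 1, stride[0]):
        for y in range(0, H - patch[1] + 1, stride[1]):
            for x in range(0, W - patch[2] + 1, stride[2]):
                out.append(volume[z:z + patch[0],
                                  y:y + patch[1],
                                  x:x + patch[2]])
    return np.stack(out)

ct = np.zeros((128, 128, 128), dtype=np.float32)
print(grid_patches(ct).shape)  # (27, 64, 64, 64) with these sizes
```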
Automated lung tumor delineation on positron emission tomography/computed tomography via a hybrid regional network
Lei, Y.
Wang, T.
Jeong, J. J.
Janopaul-Naylor, J.
Kesarwala, A. H.
Roper, J.
Tian, S.
Bradley, J. D.
Liu, T.
Higgins, K.
Yang, X.
Med Phys2022Journal Article, cited 0 times
Website
Lung-PET-CT-Dx
Positron Emission Tomography (PET)
Computed Tomography (CT)
PET-CT
Deep learning
LUNG
Radiotherapy
Segmentation
BACKGROUND: Multimodality positron emission tomography/computed tomography (PET/CT) imaging combines the anatomical information of CT with the functional information of PET. In the diagnosis and treatment of many cancers, such as non-small cell lung cancer (NSCLC), PET/CT imaging allows more accurate delineation of tumor or involved lymph nodes for radiation planning. PURPOSE: In this paper, we propose a hybrid regional network method of automatically segmenting lung tumors from PET/CT images. METHODS: The hybrid regional network architecture synthesizes the functional and anatomical information from the two image modalities, whereas the mask regional convolutional neural network (R-CNN) and scoring fine-tune the regional location and quality of the output segmentation. This model consists of five major subnetworks, that is, a dual feature representation network (DFRN), a regional proposal network (RPN), a specific tumor-wise R-CNN, a mask-Net, and a score head. Given a PET/CT image as inputs, the DFRN extracts feature maps from the PET and CT images. Then, the RPN and R-CNN work together to localize lung tumors and reduce the image size and feature map size by removing irrelevant regions. The mask-Net is used to segment tumor within a volume-of-interest (VOI) with a score head evaluating the segmentation performed by the mask-Net. Finally, the segmented tumor within the VOI was mapped back to the volumetric coordinate system based on the location information derived via the RPN and R-CNN. We trained, validated, and tested the proposed neural network using 100 PET/CT images of patients with NSCLC. A fivefold cross-validation study was performed. The segmentation was evaluated with two indicators: (1) multiple metrics, including the Dice similarity coefficient, Jacard, 95th percentile Hausdorff distance, mean surface distance (MSD), residual mean square distance, and center-of-mass distance; (2) Bland-Altman analysis and volumetric Pearson correlation analysis. RESULTS: In fivefold cross-validation, this method achieved Dice and MSD of 0.84 +/- 0.15 and 1.38 +/- 2.2 mm, respectively. A new PET/CT can be segmented in 1 s by this model. External validation on The Cancer Imaging Archive dataset (63 PET/CT images) indicates that the proposed model has superior performance compared to other methods. CONCLUSION: The proposed method shows great promise to automatically delineate NSCLC tumors on PET/CT images, thereby allowing for a more streamlined clinical workflow that is faster and reduces physician effort.
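A loose sketch of the dual feature representation idea (DFRN): separate PET and CT encoders whose feature maps are fused before region proposal. The RPN, tumor-wise R-CNN, mask-Net, and score head are omitted, and all module sizes are illustrative assumptions:

```python
# Sketch of dual-modality feature extraction for PET/CT. Not the published
# architecture; fused maps would feed the region proposal network next.
import torch
import torch.nn as nn

class DualFeatureNet(nn.Module):
    def __init__(self):
        super().__init__()
        def enc():  # one small 3D encoder per modality
            return nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(8, 16, 3, padding=1), nn.ReLU())
        self.pet_enc, self.ct_enc = enc(), enc()
        self.fuse = nn.Conv3d(32, 16, 1)   # channel fusion of both streams

    def forward(self, pet, ct):
        f = torch.cat([self.pet_enc(pet), self.ct_enc(ct)], dim=1)
        return self.fuse(f)

net = DualFeatureNet()
pet = torch.randn(1, 1, 32, 64, 64)
ct = torch.randn(1, 1, 32, 64, 64)
print(net(pet, ct).shape)  # (1, 16, 32, 64, 64)
```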
Prognostic generalization of multi-level CT-dose fusion dosiomics from primary tumor and lymph node in nasopharyngeal carcinoma
Cai, C.
Lv, W.
Chi, F.
Zhang, B.
Zhu, L.
Yang, G.
Zhao, S.
Zhu, Y.
Han, X.
Dai, Z.
Wang, X.
Lu, L.
Med Phys2022Journal Article, cited 0 times
Website
Head-Neck-PET-CT
HNSCC
Radiomics
Computed Tomography (CT)
dosiomics
multi-level fusion
Segmentation
Algorithm Development
OBJECTIVES: To investigate the prognostic performance of multi-level CT-dose fusion dosiomics at the image, matrix, and feature levels from the gross tumor volume at the nasopharynx and the involved lymph node for nasopharyngeal carcinoma (NPC) patients. MATERIALS AND METHODS: Two hundred and nineteen NPC patients (175 vs. 44 for training vs. internal validation) were used to train the prediction model, and thirty-two NPC patients were used for external validation. We first extracted CT and dose information from the intratumoral nasopharynx (GTV_nx) and lymph node (GTV_nd) regions. The corresponding peritumoral regions (RING_3mm and RING_5mm) were also considered. Thus, the individual and combined intra- and peri-tumoral regions were as follows: GTV_nx, GTV_nd, RING_3mm_nx, RING_3mm_nd, RING_5mm_nx, RING_5mm_nd, GTV_nxnd, RING_3mm_nxnd, RING_5mm_nxnd, GTV+RING_3mm_nxnd, and GTV+RING_5mm_nxnd. For each region, eleven models were built by combining 5 clinical parameters and 127 features from (1) dose images alone; (2-7) fused dose and CT images via wavelet-based fusion (WF) using CT weights of 0.2, 0.4, 0.6, and 0.8, gradient transfer fusion (GTF), and guided filtering-based fusion (GFF); (8) fused matrices (sumMat); (9-10) fused features derived via feature averaging (avgFea) and feature concatenation (conFea); and finally, (11) CT images alone. The C-index and Kaplan-Meier curves with log-rank test were used to assess model performance. RESULTS: The fusion models' performance was better than that of the single CT/dose models on both internal and external validation. Models combining the information from both GTV_nx and GTV_nd regions outperformed single-region models. For internal validation, the GTV+RING_3mm_nxnd GFF model achieved the highest C-index in both recurrence-free survival (RFS) and metastasis-free survival (MFS) predictions (RFS: 0.822; MFS: 0.786). The highest C-index in the external validation set was achieved by the RING_3mm_nxnd model (RFS: 0.762; MFS: 0.719). The GTV+RING_3mm_nxnd GFF model is able to significantly separate patients into high-risk and low-risk groups, unlike the dose-only or CT-only models. CONCLUSION: The fusion dosiomics model combining the primary tumor, the involved lymph node, and 3-mm peritumoral information outperformed single-modality models for different outcome predictions, which is helpful for clinical decision-making and the development of personalized treatment.
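The image-level wavelet-based fusion (WF) with a CT weight can be sketched with PyWavelets; the wavelet family and single-level decomposition here are assumptions, not the study's exact settings:

```python
# Hedged sketch of wavelet-based fusion of a CT slice and a dose slice with a
# CT weight w: fuse each subband as w*CT + (1-w)*dose. Requires PyWavelets.
import numpy as np
import pywt

def wavelet_fuse(ct, dose, ct_weight=0.6, wavelet="db1"):
    cA_ct, (cH_ct, cV_ct, cD_ct) = pywt.dwt2(ct, wavelet)
    cA_do, (cH_do, cV_do, cD_do) = pywt.dwt2(dose, wavelet)
    w = ct_weight
    fused = (w * cA_ct + (1 - w) * cA_do,
             (w * cH_ct + (1 - w) * cH_do,
              w * cV_ct + (1 - w) * cV_do,
              w * cD_ct + (1 - w) * cD_do))
    return pywt.idwt2(fused, wavelet)

ct = np.random.rand(64, 64)     # stand-ins for registered CT and dose slices
dose = np.random.rand(64, 64)
print(wavelet_fuse(ct, dose, ct_weight=0.4).shape)  # (64, 64) fused image
```

Radiomic features would then be extracted from the fused image, one model per CT weight, as the eleven-model design describes.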
VTDCE‐Net: A time invariant deep neural network for direct estimation of pharmacokinetic parameters from undersampled DCE MRI data
Rastogi, Aditya
Dutta, Arindam
Yalavarthy, Phaneendra Kumar
Medical Physics2022Journal Article, cited 0 times
QIN Breast DCE-MRI
PURPOSE: To propose a robust time and space invariant deep learning (DL) method to directly estimate the pharmacokinetic/tracer kinetic (PK/TK) parameters from undersampled dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data.
METHODS: DCE-MRI consists of 4D (3D-spatial + temporal) data and has been utilized to estimate 3D (spatial) tracer kinetic maps. Existing DL architectures for this task need retraining for variation in temporal and/or spatial dimensions. This work proposes a DL algorithm that is invariant between training and testing in both temporal and spatial dimensions. The proposed network was based on a 2.5-dimensional Unet architecture, where the encoder consists of a 3D convolutional layer and the decoder consists of a 2D convolutional layer. The proposed VTDCE-Net was evaluated for solving the ill-posed inverse problem of directly estimating TK parameters from undersampled k-t space data of breast cancer patients, and the results were systematically compared with a total variation (TV) regularization based direct parameter estimation scheme. In the breast dataset, training was performed on patients with 32 time samples, and testing was carried out on patients with 26 and 32 time samples. Translation of the proposed VTDCE-Net to a brain dataset was also carried out to show its generalizability. Undersampling rates (R) of 8×, 12×, and 20× were utilized, with PSNR and SSIM as the figures of merit in this evaluation. TK parameter maps estimated from fully sampled data were utilized as ground truth.
RESULTS: Experiments carried out in this work demonstrate that the proposed VTDCE-Net outperforms the TV scheme on both breast and brain datasets across all undersampling rates. For $K_{trans}$ and $V_p$ maps, the improvement over TV is as high as 2 and 5 dB, respectively, using the proposed VTDCE-Net.
CONCLUSION: The time-samples-invariant DL network proposed in this work to estimate TK parameters from DCE-MRI data provided state-of-the-art performance compared to standard image reconstruction methods and is shown to work across all undersampling rates.
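A minimal sketch of the 2.5D design described above: a 3D-convolutional encoder whose temporal axis is collapsed adaptively (which is what makes the network invariant to the number of time samples), feeding a 2D-convolutional decoder. Layer counts and channels are illustrative, not the published VTDCE-Net:

```python
# Sketch of a time-invariant 2.5D encoder-decoder: 3D conv over (T, H, W),
# adaptive pooling over T, then 2D convs to parameter maps (e.g., Ktrans, Vp).
import torch
import torch.nn as nn

class Encoder3DDecoder2D(nn.Module):
    def __init__(self, n_params=3):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((1, None, None)),  # collapse T: invariant to length
        )
        self.dec = nn.Sequential(
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_params, 1),             # per-pixel TK parameter maps
        )

    def forward(self, x):            # x: (B, 1, T, H, W) dynamic frames
        f = self.enc(x).squeeze(2)   # (B, 16, H, W)
        return self.dec(f)

net = Encoder3DDecoder2D()
print(net(torch.randn(1, 1, 26, 64, 64)).shape)  # works for T=26 or T=32
```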
Technical note: Performance evaluation of volumetric imaging based on motion modeling by principal component analysis
Asano, Suzuka
Oseki, Keishi
Takao, Seishin
Miyazaki, Koichi
Yokokawa, Kohei
Matsuura, Taeko
Taguchi, Hiroshi
Katoh, Norio
Aoyama, Hidefumi
Umegaki, Kikuo
Miyamoto, Naoki
Medical Physics2022Journal Article, cited 0 times
4D-Lung
PURPOSE: To quantitatively evaluate the achievable performance of volumetric imaging based on lung motion modeling by principal component analysis (PCA).
METHODS: In volumetric imaging based on PCA, internal deformation was represented as a linear combination of the eigenvectors derived by PCA of the deformation vector fields evaluated from patient-specific four-dimensional computed tomography (4DCT) datasets. The volumetric image was synthesized by warping the reference CT image with a deformation vector field evaluated using optimal principal component coefficients (PCs). Larger PCs were hypothesized to reproduce deformations larger than those included in the original 4DCT dataset. To evaluate the reproducibility of PCA-reconstructed volumetric images synthesized to be as close to the ground truth as possible, the mean absolute error (MAE), structural similarity index measure (SSIM), and discrepancy of diaphragm position were evaluated using 22 4DCT datasets from nine patients.
RESULTS: Mean MAE and SSIM values for the PCA-reconstructed volumetric images were approximately 80 HU and 0.88, respectively, regardless of the respiratory phase. In most test cases, including those whose motion range exceeded that of the modeling data, the positional error of the diaphragm was less than 5 mm. The results suggested that large deformations not included in the modeling 4DCT dataset could be reproduced. Furthermore, since the first PC correlated with the displacement of the diaphragm position, the first eigenvector was the dominant factor representing the respiration-associated deformations. However, the other PCs did not necessarily change with the same trend as the first PC, and no correlation was observed between the coefficients. Hence, randomly allocating or sampling these PCs in expanded ranges may be applicable for reasonably generating an augmented dataset with various deformations.
CONCLUSIONS: Image synthesis accuracy comparable to that of previous research was demonstrated using clinical data. These results indicate the potential of PCA-based volumetric imaging for clinical applications.
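The PCA motion model itself is compact enough for a worked sketch: flatten the deformation vector fields, extract eigenvectors by SVD, and synthesize a new DVF from chosen principal component coefficients. Toy sizes only, not the clinical pipeline:

```python
# PCA motion-model sketch: a new DVF is mean + sum_k (PC_k * eigenvector_k),
# with PCs allowed to exceed the range seen in the modeling 4DCT phases.
import numpy as np

rng = np.random.default_rng(1)
n_phases, n_voxels = 10, 5000
dvfs = rng.normal(size=(n_phases, n_voxels * 3))   # one flattened DVF per phase

mean = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
eigvecs = Vt[:3]                                   # leading eigenvectors

pcs = np.array([2.5, -0.3, 0.1])                   # coefficients; first PC dominant
new_dvf = (mean + pcs @ eigvecs).reshape(n_voxels, 3)
print(new_dvf.shape)   # per-voxel displacement; warp the reference CT with it
```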
TrEnD: A transformer‐based encoder‐decoder model with adaptive patch embedding for mass segmentation in mammograms
Liu, Dongdong
Wu, Bo
Li, Changbo
Sun, Zheng
Zhang, Nan
Medical Physics2023Journal Article, cited 0 times
CBIS-DDSM
BACKGROUND: Breast cancer is one of the most prevalent malignancies diagnosed in women. Mammogram inspection in the search and delineation of breast tumors is an essential prerequisite for a reliable diagnosis. However, analyzing mammograms by radiologists is time-consuming and prone to errors. Therefore, the development of computer-aided diagnostic (CAD) systems to automate the mass segmentation procedure is greatly expected.
PURPOSE: Accurate breast mass segmentation in mammograms remains challenging in CAD systems due to the low contrast, various shapes, and fuzzy boundaries of masses. In this paper, we propose a fully automatic and effective mass segmentation model based on deep learning for improving segmentation performance.
METHODS: We propose an effective transformer-based encoder-decoder model (TrEnD). Firstly, we introduce a lightweight method for adaptive patch embedding (APE) of the transformer, which utilizes superpixels to adaptively adjust the size and position of each patch. Secondly, we introduce a hierarchical transformer-encoder and attention-gated-decoder structure, which is beneficial for progressively suppressing interference feature activations in irrelevant background areas. Thirdly, a dual-branch design is employed to extract and fuse globally coarse and locally fine features in parallel, which could capture the global contextual information and ensure the relevance and integrity of local information. The model is evaluated on two public datasets CBIS-DDSM and INbreast. To further demonstrate the robustness of TrEnD, different cropping strategies are applied to these datasets, termed tight, loose, maximal, and mix-frame. Finally, ablation analysis is performed to assess the individual contribution of each module to the model performance.
RESULTS: The proposed segmentation model provides a high Dice coefficient and Intersection over Union (IoU) of 92.20% and 85.81% on the mix-frame CBIS-DDSM, while 91.83% and 85.29% for the mix-frame INbreast, respectively. The segmentation performance outperforms the current state-of-the-art approaches. By adding the APE and attention-gated module, the Dice and IoU have improved by 6.54% and 10.07%.
CONCLUSION: According to extensive qualitative and quantitative assessments, the proposed network is effective for automatic breast mass segmentation, and has adequate potential to offer technical assistance for subsequent clinical diagnoses.
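A simplified sketch of the superpixel-driven adaptive patch embedding (APE) idea using scikit-image's SLIC; the hand-built token features below are stand-ins for the learned embedding, and the whole construction is an interpretation of the description above:

```python
# Sketch: derive transformer tokens from SLIC superpixels instead of a fixed
# grid, so patch size/position adapt to image content. Toy token features.
import numpy as np
from skimage.segmentation import slic

def adaptive_patch_tokens(image, n_patches=64):
    labels = slic(image, n_segments=n_patches, channel_axis=None, start_label=0)
    tokens = []
    for k in range(labels.max() + 1):
        mask = labels == k
        if not mask.any():
            continue
        ys, xs = np.nonzero(mask)
        # token = (mean intensity, normalized center, relative size)
        tokens.append([image[mask].mean(),
                       ys.mean() / image.shape[0],
                       xs.mean() / image.shape[1],
                       mask.sum() / mask.size])
    return np.array(tokens)   # (n_superpixels, 4) tokens for a transformer

img = np.random.rand(128, 128)
print(adaptive_patch_tokens(img).shape)
```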
Likelihood‐based bilateral filters for pre‐estimated basis sinograms using photon‐counting CT
Lee, Okkyun
Medical Physics2023Journal Article, cited 0 times
Pancreas-CT
BACKGROUND: Noise amplification in material decomposition is an issue for exploiting photon-counting computed tomography (PCCT). Regularization techniques and neighborhood filters have been widely used, but degraded spatial resolution and bias are concerns.
PURPOSE: This paper proposes likelihood-based bilateral filters that can be applied to pre-estimated basis sinograms to reduce the noise while minimally affecting spatial resolution and accuracy.
METHODS: The proposed method needs system models (e.g., incident spectrum, detector response) to calculate the likelihood. First, it performs maximum likelihood (ML)-based estimation in the projection domain to obtain basis sinograms. The estimated basis sinograms suffer from severe noise but are asymptotically unbiased without degrading spatial resolution. Then it calculates the neighborhood likelihoods for a given measurement at the center pixel using the neighborhood estimates and designs the weights based on the distance of likelihoods. The filter is also analyzed in terms of statistical inference, and two variations are introduced: one requires a significance level instead of the empirical hyperparameter; the other is a measurement-based filter, which can be applied when accurate estimates are given without the system models. The proposed methods were validated by analyzing the local properties of noise and spatial resolution and the global trends of noise and bias using numerical thorax and abdominal phantoms for a two-material decomposition (water and bone). They were compared to conventional neighborhood filters and to model-based iterative reconstruction with an edge-preserving penalty applied in the basis images.
RESULTS: The proposed method showed comparable or superior performance to conventional methods for the local and global properties in many cases. For the thorax phantom, the full width at half maximum (FWHM) decreased by -2% to 31% (-2 indicates that it increased by 2% compared to the best performance from conventional methods), and the global bias was reduced by 2% to 19% compared to other methods for similar noise levels (local: 51% of the ML, global: 49%) in the water basis image. The FWHM decreased by 8% to 31%, and the global bias was reduced by 9% to 44% for similar noise levels (local: 44% of the ML, global: 36%) in the CT image at 65 keV. For the abdominal phantom, the FWHM decreased by 10% to 32%, and the global bias was reduced by 3% to 35% compared to other methods for similar noise levels (local: 66% of the ML, global: 67%) in the water basis image. The FWHM decreased by -11% to 47%, and the global bias was reduced by 13% to 35% for similar noise levels (local: 71% of the ML, global: 70%) in the CT image at 60 keV.
CONCLUSIONS: This paper introduced the likelihood-based bilateral filters as a post-processing method applied to the ML-based estimates of basis sinograms. The proposed filters effectively reduced the noise in the basis images and the synthesized monochromatic CT images. It showed the potential of using likelihood-based filters in the projection domain as a substitute for conventional regularization or filtering methods.
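A toy 1D sketch of the likelihood-distance weighting (Poisson counts, one energy bin): each neighbor's pre-estimated value is weighted by how well it explains the center pixel's measurement relative to the center's own estimate. This is an interpretation of the description above, not the paper's implementation:

```python
# Likelihood-based neighborhood weighting sketch. In this single-bin toy the
# ML estimate equals the measured counts; real use has multi-bin PCCT models.
import numpy as np

def poisson_loglik(count, expected):
    return count * np.log(expected) - expected   # up to a count-only constant

def likelihood_bilateral_1d(counts, estimates, sigma_l=2.0, radius=3):
    n = counts.size
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        # likelihood distance between each neighbor's estimate and the
        # center's own estimate, both scored on the center measurement
        d = (poisson_loglik(counts[i], estimates[lo:hi])
             - poisson_loglik(counts[i], estimates[i]))
        w = np.exp(-d**2 / (2 * sigma_l**2))
        out[i] = np.sum(w * estimates[lo:hi]) / np.sum(w)
    return out

rng = np.random.default_rng(0)
counts = rng.poisson(50.0, size=100).astype(float)
print(likelihood_bilateral_1d(counts, counts.copy())[:5].round(1))
```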
Utilization of an attentive map to preserve anatomical features for training convolutional neural‐network‐based low‐dose CT denoiser
Han, Minah
Shim, Hyunjung
Baek, Jongduk
Medical Physics2023Journal Article, cited 0 times
LDCT-and-Projection-data
BACKGROUND: The purpose of a convolutional neural network (CNN)-based denoiser is to increase the diagnostic accuracy of low-dose computed tomography (LDCT) imaging. To increase diagnostic accuracy, there is a need for a method that reflects the features related to diagnosis during the denoising process.
PURPOSE: To provide a training strategy for LDCT denoisers that relies more on diagnostic task-related features to improve diagnostic accuracy.
METHODS: An attentive map derived from a lesion classifier (i.e., determining lesion-present or not) is created to represent the extent to which each pixel influences the decision by the lesion classifier. This is used as a weight to emphasize important parts of the image. The proposed training method consists of two steps. In the first one, the initial parameters of the CNN denoiser are trained using LDCT and normal-dose CT image pairs via supervised learning. In the second one, the learned parameters are readjusted using the attentive map to restore the fine details of the image.
RESULTS: Structural details and the contrast are better preserved in images generated by using the denoiser trained via the proposed method than in those generated by conventional denoisers. The proposed denoiser also yields higher lesion detectability and localization accuracy than conventional denoisers.
CONCLUSIONS: A denoiser trained using the proposed method preserves the small structures and the contrast in the denoised images better than without it. Specifically, using the attentive map improves the lesion detectability and localization accuracy of the denoiser.
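The attentive-map weighting can be sketched in PyTorch, with input-gradient saliency standing in for the paper's classifier-derived attentive map; the denoiser and classifier below are dummies, and only the loss reweighting of the second training step is shown:

```python
# Sketch of step 2 of the proposed training: per-pixel loss weighted by an
# attentive map from a lesion classifier. Modules are illustrative stand-ins.
import torch
import torch.nn as nn

denoiser = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))
classifier = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

ldct = torch.randn(4, 1, 64, 64)   # low-dose inputs
ndct = torch.randn(4, 1, 64, 64)   # normal-dose targets

# attentive map: |d classifier / d input|, normalized to [0, 1] per image
x = ndct.clone().requires_grad_(True)
classifier(x).sum().backward()
attn = x.grad.abs()
attn = attn / (attn.amax(dim=(2, 3), keepdim=True) + 1e-8)

# readjust the (pretrained) denoiser with the attention-weighted pixel loss
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
loss = (attn.detach() * (denoiser(ldct) - ndct) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```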
Deep learning‐based dominant index lesion segmentation for MR‐guided radiation therapy of prostate cancer
Simeth, Josiah
Jiang, Jue
Nosov, Anton
Wibmer, Andreas
Zelefsky, Michael
Tyagi, Neelam
Veeraraghavan, Harini
Medical Physics2023Journal Article, cited 0 times
PROSTATEx
BACKGROUND: Dose escalation radiotherapy enables increased control of prostate cancer (PCa) but requires segmentation of dominant index lesions (DIL). This motivates the development of automated methods for fast, accurate, and consistent segmentation of PCa DIL.
PURPOSE: To construct and validate a model for deep-learning-based automatic segmentation of PCa DIL, defined by Gleason score (GS) ≥3+4, from MR images applied to MR-guided radiation therapy, and to validate the generalizability of the constructed models across scanner and acquisition differences.
METHODS: Five deep-learning networks were evaluated on apparent diffusion coefficient (ADC) MRI from 500 lesions in 365 patients arising from internal training Dataset 1 (156 lesions in 125 patients, 1.5 Tesla GE MR with endorectal coil), testing using Dataset 1 (35 lesions in 26 patients), external ProstateX Dataset 2 (299 lesions in 204 patients, 3 Tesla Siemens MR), and internal inter-rater Dataset 3 (10 lesions in 10 patients, 3 Tesla Philips MR). The five networks include: multiple resolution residually connected network (MRRN) and MRRN regularized in training with deep supervision implemented into the last convolutional block (MRRN-DS), Unet, Unet++, ResUnet, and fast panoptic segmentation (FPSnet), as well as fast panoptic segmentation with smoothed labels (FPSnet-SL). Models were evaluated by volumetric DIL segmentation accuracy using the Dice similarity coefficient (DSC) and the balanced F1 measure of detection accuracy, as a function of lesion aggressiveness and size (Datasets 1 and 2), and by accuracy with respect to two raters (on Dataset 3). Upon acceptance for publication, segmentation models will be made available in an open-source GitHub repository.
RESULTS: In general, MRRN-DS more accurately segmented tumors than the other methods on the testing datasets. MRRN-DS significantly outperformed ResUnet in Dataset 2 (DSC of 0.54 vs. 0.44, p < 0.001) and Unet++ in Dataset 3 (DSC of 0.45, p = 0.04). FPSnet-SL was similarly accurate to MRRN-DS in Dataset 2 (p = 0.30), but MRRN-DS significantly outperformed FPSnet and FPSnet-SL in both Dataset 1 (0.60 vs. 0.51 [p = 0.01] and 0.54 [p = 0.049], respectively) and Dataset 3 (0.45 vs. 0.06 [p = 0.002] and 0.24 [p = 0.004], respectively). Finally, MRRN-DS produced slightly higher agreement with an experienced radiologist than that between two radiologists in Dataset 3 (DSC of 0.45 vs. 0.41).
CONCLUSIONS: MRRN-DS was generalizable to different MR testing datasets acquired using different scanners. It produced slightly higher agreement with an experienced radiologist than that between two radiologists. Finally, MRRN-DS more accurately segmented aggressive lesions, which are generally candidates for radiative dose ablation.
Clinical capability of modern brain tumor segmentation models
Berkley, Adam
Saueressig, Camillo
Shukla, Utkarsh
Chowdhury, Imran
Munoz‐Gauna, Anthony
Shehu, Olalekan
Singh, Ritambhara
Munbodh, Reshma
Medical Physics2023Journal Article, cited 0 times
QIN-BRAIN-DSC-MRI
Glioma
PURPOSE: State-of-the-art automated segmentation methods achieve exceptionally high performance on the Brain Tumor Segmentation (BraTS) challenge, a dataset of uniformly processed and standardized magnetic resonance images (MRIs) of gliomas. However, a reasonable concern is that these models may not fare well on clinical MRIs that do not belong to the specially curated BraTS dataset. Research using the previous generation of deep learning models indicates significant performance loss on cross-institutional predictions. Here, we evaluate the cross-institutional applicability and generalizability of state-of-the-art deep learning models on new clinical data.
METHODS: We train a state-of-the-art 3D U-Net model on the conventional BraTS dataset comprising low- and high-grade gliomas. We then evaluate the performance of this model for automatic tumor segmentation of brain tumors on in-house clinical data. This dataset contains MRIs with tumor types, resolutions, and standardization different from those found in the BraTS dataset. Ground truth segmentations to validate the automated segmentation for in-house clinical data were obtained from expert radiation oncologists.
RESULTS: We report average Dice scores of 0.764, 0.648, and 0.61 for the whole tumor, tumor core, and enhancing tumor, respectively, in the clinical MRIs. These means are higher than numbers previously reported on same-institution and cross-institution datasets of different origin using different methods. There is no statistically significant difference when comparing the Dice scores to the inter-annotation variability between two expert clinical radiation oncologists. Although performance on the clinical data is lower than on the BraTS data, these numbers indicate that models trained on the BraTS dataset have impressive segmentation performance on previously unseen images obtained at a separate clinical institution. These images differ in imaging resolution, standardization pipeline, and tumor type from the BraTS data.
CONCLUSIONS: State-of-the-art deep learning models demonstrate promising performance on cross-institutional predictions. They considerably improve on previous models and can transfer knowledge to new types of brain tumors without additional modeling.
Using 3D deep features from CT scans for cancer prognosis based on a video classification model: A multi-dataset feasibility study
Chen, J.
Wee, L.
Dekker, A.
Bermejo, I.
Med Phys2023Journal Article, cited 0 times
Website
NSCLC-Radiomics
OPC-Radiomics
Head-Neck-Radiomics-HN1
RIDER LUNG CT
3D deep neural network
cancer prognosis
deep features
Radiomics
Transfer learning
BACKGROUND: Cancer prognosis before and after treatment is key for patient management and decision making. Handcrafted imaging biomarkers (radiomics) have shown potential in predicting prognosis. PURPOSE: However, given the recent progress in deep learning, it is timely and relevant to pose the question: could deep-learning-based 3D imaging features be used as imaging biomarkers and outperform radiomics? METHODS: Effectiveness, test/retest and cross-modality reproducibility, and correlation of deep features with clinical features such as tumor volume and TNM staging were tested in this study. Radiomics was introduced as the reference image biomarker. For deep feature extraction, we transformed the CT scans into videos, and we adopted the pre-trained Inflated 3D ConvNet (I3D) video classification network as the architecture. We used four datasets, LUNG 1 (n = 422), LUNG 4 (n = 106), OPC (n = 605), and H&N 1 (n = 89), with 1270 samples from different centers and cancer types (lung and head and neck cancer), to test deep features' predictiveness, and two additional datasets to assess the reproducibility of deep features. RESULTS: The Support Vector Machine-Recursive Feature Elimination (SVM-RFE) selected top 100 deep features achieved a concordance index (CI) of 0.67 in survival prediction in LUNG 1, 0.87 in LUNG 4, 0.76 in OPC, and 0.87 in H&N 1, while the SVM-RFE selected top 100 radiomic features achieved CIs of 0.64, 0.77, 0.73, and 0.74, respectively, all statistically significant differences (p < 0.01, Wilcoxon's test). Most selected deep features are not correlated with tumor volume and TNM staging. However, full radiomics features show higher reproducibility than full deep features in a test/retest setting (0.89 vs. 0.62, concordance correlation coefficient). CONCLUSION: The results show that deep features can outperform radiomics while providing different views of tumor prognosis compared to tumor volume and TNM staging. However, deep features suffer from lower reproducibility than radiomic features and lack the interpretability of the latter.
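The SVM-RFE selection step is standard enough to sketch with scikit-learn; the synthetic features and binarized labels below are stand-ins (the study ranks features for survival models scored by concordance index):

```python
# Sketch of SVM-RFE feature selection: rank deep (or radiomic) features with
# a linear SVM and keep the top 100 for downstream prognosis modeling.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))     # e.g., I3D deep features per patient
y = rng.integers(0, 2, size=200)     # e.g., a binarized survival endpoint

selector = RFE(SVC(kernel="linear"), n_features_to_select=100, step=50)
selector.fit(X, y)
top_idx = np.flatnonzero(selector.support_)
print(top_idx.shape)                 # (100,) indices of the selected features
```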
A repository of grade 1 and 2 meningioma MRIs in a public dataset for radiomics reproducibility tests
Vassantachart, April
Cao, Yufeng
Shen, Zhilei
Cheng, Karen
Gribble, Michael
Ye, Jason C.
Zada, Gabriel
Hurth, Kyle
Mathew, Anna
Guzman, Samuel
Yang, Wensha
Medical Physics2023Journal Article, cited 0 times
Meningioma-SEG-CLASS
Radiomics
Magnetic Resonance Imaging (MRI)
Manual classification
Purpose: Meningiomas are the most common primary brain tumors in adults, with management varying widely based on World Health Organization (WHO) grade. However, there are limited datasets available for researchers to develop and validate radiomic models. The purpose of our manuscript is to report on the first dataset of meningiomas in The Cancer Imaging Archive (TCIA). Acquisition and validation methods: The dataset consists of pre-operative MRIs from 96 patients with meningiomas who underwent resection from 2010–2019 and includes axial T1post and T2-FLAIR sequences: 55 grade 1 and 41 grade 2. Meningioma grade was confirmed based on the 2016 WHO Bluebook classification guideline by two neuropathologists and one neuropathology fellow. The hyperintense T1post tumor and hyperintense T2-FLAIR regions were manually contoured on both sequences and resampled to an isotropic resolution of 1 × 1 × 1 mm3. The entire dataset was reviewed by a certified medical physicist. Data format and usage notes: The data was imported into TCIA for storage and can be accessed at https://doi.org/10.7937/0TKV-1A36. The total size of the dataset is 8.8 GB, with 47,519 individual Digital Imaging and Communications in Medicine (DICOM) files consisting of 384 image series and 192 structures. Potential applications: Grade 1 and 2 meningiomas have different treatment paradigms and are often treated based on radiologic diagnosis alone. Therefore, predicting grade prior to treatment is essential in clinical decision-making. This dataset will allow researchers to create models to auto-differentiate grade 1 and 2 meningiomas as well as evaluate other pathologic features including mitotic index, brain invasion, and atypical features. Limitations of this study are the small sample size and inclusion of only two MRI sequences. However, there are no other meningioma datasets on TCIA and limited datasets elsewhere, although meningiomas are the most common intracranial tumor in adults.
Transfer learning for auto-segmentation of 17 organs-at-risk in the head and neck: Bridging the gap between institutional and public datasets
Clark, B.
Hardcastle, N.
Johnston, L. A.
Korte, J.
Med Phys2024Journal Article, cited 0 times
Website
HEAD-NECK-RADIOMICS-HN1
Head-Neck-PET-CT
Head-Neck-CT-Atlas
OPC-Radiomics
Algorithm Development
Deep Learning
Image Segmentation
Transfer learning
BACKGROUND: Auto-segmentation of organs-at-risk (OARs) in the head and neck (HN) on computed tomography (CT) images is a time-consuming component of the radiation therapy pipeline that suffers from inter-observer variability. Deep learning (DL) has shown state-of-the-art results in CT auto-segmentation, with larger and more diverse datasets showing better segmentation performance. Institutional CT auto-segmentation datasets have been small historically (n < 50) due to the time required for manual curation of images and anatomical labels. Recently, large public CT auto-segmentation datasets (n > 1000 aggregated) have become available through online repositories such as The Cancer Imaging Archive. Transfer learning is a technique applied when training samples are scarce, but a large dataset from a closely related domain is available. PURPOSE: The purpose of this study was to investigate whether a large public dataset could be used in place of an institutional dataset (n > 500), or to augment performance via transfer learning, when building HN OAR auto-segmentation models for institutional use. METHODS: Auto-segmentation models were trained on a large public dataset (public models) and a smaller institutional dataset (institutional models). The public models were fine-tuned on the institutional dataset using transfer learning (transfer models). We assessed both public model generalizability and transfer model performance by comparison with institutional models. Additionally, the effect of institutional dataset size on both transfer and institutional models was investigated. All DL models used a high-resolution, two-stage architecture based on the popular 3D U-Net. Model performance was evaluated using five geometric measures: the dice similarity coefficient (DSC), surface DSC, 95th percentile Hausdorff distance, mean surface distance (MSD), and added path length. RESULTS: For a small subset of OARs (left/right optic nerve, spinal cord, left submandibular), the public models performed significantly better (p < 0.05) than, or showed no significant difference to, the institutional models under most of the metrics examined. For the remaining OARs, the public models were inferior to the institutional models, although performance differences were small (DSC ≤ 0.03, MSD < 0.5 mm) for seven OARs (brainstem, left/right lens, left/right parotid, mandible, right submandibular). The transfer models performed significantly better than the institutional models for seven OARs (brainstem, right lens, left/right optic nerve, left/right parotid, spinal cord) with a small margin of improvement (DSC ≤ 0.02, MSD < 0.4 mm). When numbers of institutional training samples were limited, public and transfer models outperformed the institutional models for most OARs (brainstem, left/right lens, left/right optic nerve, left/right parotid, spinal cord, and left/right submandibular). CONCLUSION: Training auto-segmentation models with public data alone was suitable for a small number of OARs. Using only public data incurred a small performance deficit for most other OARs, when compared with institutional data alone, but may be preferable over time-consuming curation of a large institutional dataset. When a large institutional dataset was available, transfer learning with models pretrained on a large public dataset provided a modest performance improvement for several OARs. When numbers of institutional samples were limited, using the public dataset alone, or as a pretrained model, was beneficial for most OARs.
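The pretrain-then-fine-tune recipe can be sketched in a few lines of PyTorch; the tiny network below stands in for the study's two-stage 3D U-Net, and the learning rates and single optimization steps are illustrative assumptions:

```python
# Transfer-learning sketch: pretrain a segmentation net on public data, then
# fine-tune all weights at a reduced learning rate on institutional scans.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(8, 18, 1))   # e.g., 17 OARs + background

# 1) pretraining on the large public dataset (one illustrative step)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
pub_ct = torch.randn(1, 1, 32, 64, 64)
pub_lab = torch.randint(0, 18, (1, 32, 64, 64))
loss = nn.functional.cross_entropy(net(pub_ct), pub_lab)
opt.zero_grad(); loss.backward(); opt.step()

# 2) fine-tuning on institutional data at a lower learning rate
opt_ft = torch.optim.Adam(net.parameters(), lr=1e-5)
inst_ct = torch.randn(1, 1, 32, 64, 64)
inst_lab = torch.randint(0, 18, (1, 32, 64, 64))
loss_ft = nn.functional.cross_entropy(net(inst_ct), inst_lab)
opt_ft.zero_grad(); loss_ft.backward(); opt_ft.step()
print(float(loss_ft))
```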
A comprehensive lung CT landmark pair dataset for evaluating deformable image registration algorithms