Detection of Lung Nodules on Medical Images by the Use of Fractal Segmentation

  • Abdollahzadeh Rezaie, Afsaneh
  • Habiboghli, Ali
International Journal of Interactive Multimedia and Artificial Intelligence 2017 Journal Article, cited 0 times
Website

A generalized framework for medical image classification and recognition

  • Abedini, M
  • Codella, NCF
  • Connell, JH
  • Garnavi, R
  • Merler, M
  • Pankanti, S
  • Smith, JR
  • Syeda-Mahmood, T
IBM Journal of Research and Development 2015 Journal Article, cited 19 times
Website
In this work, we study the performance of a two-stage ensemble visual machine learning framework for classification of medical images. In the first stage, models are built for subsets of features and data, and in the second stage, models are combined. We demonstrate the performance of this framework in four contexts: 1) The public ImageCLEF (Cross Language Evaluation Forum) 2013 medical modality recognition benchmark, 2) echocardiography view and mode recognition, 3) dermatology disease recognition across two datasets, and 4) a broad medical image dataset, merged from multiple data sources into a collection of 158 categories covering both general and specific medical concepts, including modalities, body regions, views, and disease states. In the first context, the presented system achieves state-of-the-art performance of 82.2% multiclass accuracy. In the second context, the system attains 90.48% multiclass accuracy. In the third, state-of-the-art performance of 90% specificity and 90% sensitivity is obtained on a small standardized dataset of 200 images using a leave-one-out strategy. For a larger dataset of 2,761 images, 95% specificity and 98% sensitivity are obtained on a 20% held-out test set. Finally, in the fourth context, the system achieves sensitivity and specificity of 94.7% and 98.4%, respectively, demonstrating the ability to generalize over domains.
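
The two-stage idea described above can be illustrated with a small stacking-style sketch: stage-one models are trained on separate feature subsets, and a stage-two combiner is trained on their outputs. This is a generic illustration using scikit-learn on synthetic data, not the authors' IBM system; the feature subsets and classifiers are placeholders.

```python
# Generic two-stage ensemble sketch (stage 1: per-feature-subset models;
# stage 2: a combiner trained on their class probabilities). Synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: one model per (hypothetical) feature subset.
subsets = [slice(0, 20), slice(20, 40)]
stage1 = [SVC(probability=True, random_state=0).fit(X_tr[:, s], y_tr) for s in subsets]

# Stage 2: combine stage-1 class probabilities with a simple meta-classifier.
meta_tr = np.hstack([m.predict_proba(X_tr[:, s]) for m, s in zip(stage1, subsets)])
meta_te = np.hstack([m.predict_proba(X_te[:, s]) for m, s in zip(stage1, subsets)])
combiner = LogisticRegression(max_iter=1000).fit(meta_tr, y_tr)
print("two-stage accuracy:", combiner.score(meta_te, y_te))
```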

Computer-aided diagnosis of clinically significant prostate cancer from MRI images using sparse autoencoder and random forest classifier

  • Abraham, Bejoy
  • Nair, Madhu S
Biocybernetics and Biomedical Engineering 2018 Journal Article, cited 0 times
Website

Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach

  • Aerts, H. J.
  • Velazquez, E. R.
  • Leijenaar, R. T.
  • Parmar, C.
  • Grossmann, P.
  • Cavalho, S.
  • Bussink, J.
  • Monshouwer, R.
  • Haibe-Kains, B.
  • Rietveld, D.
  • Hoebers, F.
  • Rietbergen, M. M.
  • Leemans, C. R.
  • Dekker, A.
  • Quackenbush, J.
  • Gillies, R. J.
  • Lambin, P.
Nat Commun 2014 Journal Article, cited 1029 times
Website
Human cancers exhibit strong phenotypic differences that can be visualized noninvasively by medical imaging. Radiomics refers to the comprehensive quantification of tumour phenotypes by applying a large number of quantitative image features. Here we present a radiomic analysis of 440 features quantifying tumour image intensity, shape and texture, which are extracted from computed tomography data of 1,019 patients with lung or head-and-neck cancer. We find that a large number of radiomic features have prognostic power in independent data sets of lung and head-and-neck cancer patients, many of which were not identified as significant before. Radiogenomics analysis reveals that a prognostic radiomic signature, capturing intratumour heterogeneity, is associated with underlying gene-expression patterns. These data suggest that radiomics identifies a general prognostic phenotype existing in both lung and head-and-neck cancer. This may have a clinical impact as imaging is routinely used in clinical practice, providing an unprecedented opportunity to improve decision-support in cancer treatment at low cost.

Adaptive Multi-Column Deep Neural Networks with Application to Robust Image Denoising

  • Agostinelli, Forest
  • Anderson, Michael R
  • Lee, Honglak
2013 Conference Proceedings, cited 118 times
Website
Stacked sparse denoising auto-encoders (SSDAs) have recently been shown to be successful at removing noise from corrupted images. However, like most denoising techniques, the SSDA is not robust to variation in noise types beyond what it has seen during training. We present the multi-column stacked sparse denoising autoencoder, a novel technique that forms a multi-column SSDA (MC-SSDA) by combining the outputs of multiple SSDAs. We eliminate the need to determine the type of noise, let alone its statistics, at test time. We show that good denoising performance can be achieved with a single system on a variety of different noise types, including ones not seen in the training set. Additionally, we experimentally demonstrate the efficacy of MC-SSDA denoising by achieving MNIST digit error rates on denoised images close to those of the uncorrupted images.

Tumor Lesion Segmentation from 3D PET Using a Machine Learning Driven Active Surface

  • Ahmadvand, Payam
  • Duggan, Nóirín
  • Bénard, François
  • Hamarneh, Ghassan
2016 Conference Proceedings, cited 4 times
Website

Increased robustness in reference region model analysis of DCE MRI using two‐step constrained approaches

  • Ahmed, Zaki
  • Levesque, Ives R
Magnetic Resonance in Medicine 2016 Journal Article, cited 1 times
Website

Automated and Manual Quantification of Tumour Cellularity in Digital Slides for Tumour Burden Assessment

  • Akbar, S.
  • Peikari, M.
  • Salama, S.
  • Panah, A. Y.
  • Nofech-Mozes, S.
  • Martel, A. L.
Scientific Reports 2019 Journal Article, cited 3 times
Website
The residual cancer burden index is an important quantitative measure used for assessing treatment response following neoadjuvant therapy for breast cancer. It has been shown to be predictive of overall survival and is composed of two key metrics: qualitative assessment of lymph nodes and the percentage of invasive or in situ tumour cellularity (TC) in the tumour bed (TB). Currently, TC is assessed through eye-balling of routine histopathology slides, estimating the proportion of tumour cells within the TB. With the advances in production of digitized slides and increasing availability of slide scanners in pathology laboratories, there is potential to measure TC using automated algorithms with greater precision and accuracy. We describe two methods for automated TC scoring: 1) a traditional approach to image analysis development whereby we mimic the pathologists' workflow, and 2) a recent development in artificial intelligence in which features are learned automatically in deep neural networks using image data alone. We show strong agreement between automated and manual analysis of digital slides. Agreements between our trained deep neural networks and experts in this study (0.82) approach the inter-rater agreements between pathologists (0.89). We also reveal properties that are captured when we apply deep neural networks to whole slide images, and discuss the potential of using such visualisations to improve upon TC assessment in the future.

Self-organizing Approach to Learn a Level-set Function for Object Segmentation in Complex Background Environments

  • Albalooshi, Fatema A
2015 Thesis, cited 0 times
Website
Boundary extraction for object region segmentation is one of the most challenging tasks in image processing and computer vision. The complexity of large variations in the appearance of the object and the background in a typical image causes the performance degradation of existing segmentation algorithms. One of the goals of computer vision studies is to produce algorithms that segment object regions and produce accurate object boundaries that can be utilized in feature extraction and classification. This dissertation research considers the incorporation of prior knowledge of the intensity/color of objects of interest within a segmentation framework to enhance the performance of object region and boundary extraction of targets in unconstrained environments. The information about the intensity/color of the object of interest is taken from small patches used as seeds to train a neural network. The main challenge is accounting for the projection transformation between the limited amount of prior information and the appearance of the real object of interest in the testing data. We address this problem by the use of a Self-organizing Map (SOM), which is an unsupervised learning neural network. The segmentation process is achieved by the construction of a local fitted image level-set cost function, in which the dynamic variable is a Best Matching Unit (BMU) coming from the SOM map. The proposed method is demonstrated on the challenging PASCAL 2011 dataset, in which images contain objects with variations of illumination, shadows, occlusions and clutter. In addition, our method is tested on different types of imagery including thermal, hyperspectral, and medical imagery. Metrics illustrate the effectiveness and accuracy of the proposed algorithm in improving the efficiency of boundary extraction and object region detection. In order to reduce computational time, a lattice Boltzmann Method (LBM) convergence criterion is used along with the proposed self-organized active contour model to produce faster and effective segmentation. The lattice Boltzmann method is utilized to evolve the level-set function rapidly and terminate the evolution of the curve at the most optimum region. Experiments performed on our test datasets show promising results in terms of time and quality of the segmentation when compared to other state-of-the-art learning-based active contour model approaches. Our method is more than 53% faster than other state-of-the-art methods. Research is in progress to employ the Time Adaptive Self-Organizing Map (TASOM) for improved segmentation and to utilize the parallelization property of the LBM to achieve real-time segmentation.

Multi-modal Multi-temporal Brain Tumor Segmentation, Growth Analysis and Texture-based Classification

  • Alberts, Esther
2019 Thesis, cited 0 times
Website
Brain tumor analysis is an active field of research, which has received a lot of attention from both the medical and the technical communities in the past decades. The purpose of this thesis is to investigate brain tumor segmentation, growth analysis and tumor classification based on multi-modal magnetic resonance (MR) image datasets of low- and high-grade glioma making use of computer vision and machine learning methodologies. Brain tumor segmentation involves the delineation of tumorous structures, such as edema, active tumor and necrotic tumor core, and healthy brain tissues, often categorized in gray matter, white matter and cerebro-spinal fluid. Deep learning frameworks have proven to be among the most accurate brain tumor segmentation techniques, performing particularly well when large accurately annotated image datasets are available. A first project is designed to build a more flexible model, which allows for intuitive semi-automated user-interaction, is less dependent on training data, and can handle missing MR modalities. The framework is based on a Bayesian network with hidden variables optimized by the expectation-maximization algorithm, and is tailored to handle non-Gaussian multivariate distributions using the concept of Gaussian copulas. To generate reliable priors for the generative probabilistic model and to spatially regularize the segmentation results, it is extended with an initialization and a post-processing module, both based on supervoxels classified by random forests. Brain tumor segmentation allows to assess tumor volumetry over time, which is important to identify disease progression (tumor regrowth) after therapy. In a second project, a dataset of temporal MR sequences is analyzed. To that end, brain tumor segmentation and brain tumor growth assessment are unified within a single framework using a conditional random field (CRF). The CRF extends over the temporal patient datasets and includes directed links with infinite weight in order to incorporate growth or shrinkage constraints. The model is shown to obtain temporally coherent tumor segmentation and aids in estimating the likelihood of disease progression after therapy. Recent studies classify brain tumors based on their genotypic parameters, which are reported to have an important impact on the prognosis and the therapy of patients. A third project is aimed to investigate whether the genetic profile of glioma can be predicted based on the MR images only, which would eliminate the need to take biopsies. A multi-modal medical image classification framework is built, classifying glioma in three genetic classes based on DNA methylation status. The framework makes use of short local image descriptors as well as deep-learned features acquired by denoising auto-encoders to generate meaningful image features. The framework is successfully validated and shown to obtain high accuracies even though the same image-based classification task is hardly possible for medical experts.

Automatic intensity windowing of mammographic images based on a perceptual metric

  • Albiol, Alberto
  • Corbi, Alberto
  • Albiol, Francisco
Medical physics 2017 Journal Article, cited 0 times
Website
PURPOSE: Initial auto-adjustment of the window level WL and width WW applied to mammographic images. The proposed intensity windowing (IW) method is based on the maximization of the mutual information (MI) between a perceptual decomposition of the original 12-bit sources and their screen displayed 8-bit version. Besides zoom, color inversion and panning operations, IW is the most commonly performed task in daily screening and has a direct impact on diagnosis and the time involved in the process. METHODS: The authors present a human visual system and perception-based algorithm named GRAIL (Gabor-relying adjustment of image levels). GRAIL initially measures a mammogram's quality based on the MI between the original instance and its Gabor-filtered derivations. From this point on, the algorithm performs an automatic intensity windowing process that outputs the WL/WW that best displays each mammogram for screening. GRAIL starts with the default, high contrast, wide dynamic range 12-bit data, and then maximizes the graphical information presented in ordinary 8-bit displays. Tests have been carried out with several mammogram databases. They comprise correlations and an ANOVA analysis with the manual IW levels established by a group of radiologists. A complete MATLAB implementation of GRAIL is available at https://github.com/TheAnswerIsFortyTwo/GRAIL. RESULTS: Auto-leveled images show superior quality both perceptually and objectively compared to their full intensity range and compared to the application of other common methods like global contrast stretching (GCS). The correlations between the human-determined intensity values and the ones estimated by our method surpass those of GCS. The ANOVA analysis with the upper intensity thresholds also reveals a similar outcome. GRAIL has also proven to perform especially well with images that contain micro-calcifications and/or foreign X-ray-opaque elements and with healthy BI-RADS A-type mammograms. It can also speed up the initial screening time by a mean of 4.5 s per image. CONCLUSIONS: A novel methodology is introduced that enables a quality-driven balancing of the WL/WW of mammographic images. This correction seeks the representation that maximizes the amount of graphical information contained in each image. The presented technique can contribute to the diagnosis and the overall efficiency of the breast screening session by suggesting, at the beginning, an optimal and customized windowing setting for each mammogram.
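
For orientation, the basic windowing operation that GRAIL searches over can be sketched as a linear mapping of 12-bit pixel values to an 8-bit display range given a window level (WL) and width (WW). The sketch below assumes NumPy and a synthetic image; GRAIL's Gabor/mutual-information criterion for choosing WL/WW is not reproduced here.

```python
# Linear window: values in [WL - WW/2, WL + WW/2] map to [0, 255].
import numpy as np

def apply_window(img12, wl, ww):
    lo, hi = wl - ww / 2.0, wl + ww / 2.0
    out = (img12.astype(np.float64) - lo) / max(hi - lo, 1e-6)
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)

rng = np.random.default_rng(0)
mammogram = rng.integers(0, 4096, size=(256, 256))   # stand-in 12-bit image
display8 = apply_window(mammogram, wl=2048, ww=1024)
print(display8.min(), display8.max())
```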

Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network

  • Aldoj, Nader
  • Lukas, Steffen
  • Dewey, Marc
  • Penzkofer, Tobias
European Radiology 2020 Journal Article, cited 1 times
Website

Radiogenomics in renal cell carcinoma

  • Alessandrino, Francesco
  • Shinagare, Atul B
  • Bossé, Dominick
  • Choueiri, Toni K
  • Krajewski, Katherine M
Abdominal Radiology 2018 Journal Article, cited 0 times
Website

Medical Image Classification Algorithm Based on Weight Initialization-Sliding Window Fusion Convolutional Neural Network

  • An, Feng-Ping
Complexity 2019 Journal Article, cited 0 times
Website
Due to the complexity of medical images, traditional medical image classification methods have been unable to meet actual application needs. In recent years, the rapid development of deep learning theory has provided a technical approach for solving medical image classification tasks. However, deep learning has the following problems in medical image classification. First, it is impossible to construct a deep learning model hierarchy for medical image properties; second, the network initialization weights of deep learning models are not well optimized. Therefore, this paper starts from the perspective of network optimization and improves the nonlinear modeling ability of the network through optimization methods. A new network weight initialization method is proposed, which alleviates the problem that existing deep learning model initialization is limited by the type of the nonlinear unit adopted and increases the potential of the neural network to handle different visual tasks. Moreover, through an in-depth study of the multicolumn convolutional neural network framework, this paper finds that the number of features and the convolution kernel size at different levels of the convolutional neural network are different. In contrast, the proposed method can construct different convolutional neural network models that adapt better to the characteristics of the medical images of interest and thus can better train the resulting heterogeneous multicolumn convolutional neural networks. Finally, using the adaptive sliding window fusion mechanism proposed in this paper, both methods jointly complete the classification task of medical images. Based on the above ideas, this paper proposes a medical classification algorithm based on a weight initialization/sliding window fusion for multilevel convolutional neural networks. The methods proposed in this study were applied to breast mass, brain tumor tissue, and medical image database classification experiments. The results show that the proposed method not only achieves a higher average accuracy than that of traditional machine learning and other deep learning methods but also is more stable and more robust.

Application of Fuzzy c-means and Neural networks to categorize tumor affected breast MR Images

  • Anand, Shruthi
  • Vinod, Viji
  • Rampure, Anand
International Journal of Applied Engineering Research 2015 Journal Article, cited 4 times
Website

Imaging Genomics in Glioblastoma Multiforme: A Predictive Tool for Patients Prognosis, Survival, and Outcome

  • Anil, Rahul
  • Colen, Rivka R
Magnetic Resonance Imaging Clinics of North America 2016 Journal Article, cited 3 times
Website
The integration of imaging characteristics and genomic data has started a new trend in the approach to management of glioblastoma (GBM). Many ongoing studies are investigating imaging phenotypical signatures that could explain more about the behavior of GBM and its outcome. The discovery of biomarkers has played an adjuvant role in treating and predicting the outcome of patients with GBM. Discovering these imaging phenotypic signatures and dysregulated pathways/genes is required to engineer treatment based on specific GBM manifestations. Characterizing these parameters will establish well-defined criteria so researchers can build on the treatment of GBM through personalized medicine.

Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data

  • Anirudh, Rushil
  • Thiagarajan, Jayaraman J
  • Bremer, Timo
  • Kim, Hyojin
2016 Conference Proceedings, cited 33 times
Website

Brain tumour classification using two-tier classifier with adaptive segmentation technique

  • Anitha, V
  • Murugavalli, S
IET Computer Vision 2016 Journal Article, cited 46 times
Website
A brain tumour is a mass of tissue formed by a gradual accumulation of anomalous cells, and it is important to classify brain tumours from magnetic resonance imaging (MRI) for treatment. Human investigation is the routine technique for brain MRI tumour detection and classification. Interpretation of images is based on organised and explicit classification of brain MRI, and various techniques have been proposed. Brain tumour segmentation on MRI provides information on anatomical structures and potentially abnormal tissues that are relevant to treatment. The proposed system uses the adaptive pillar K-means algorithm for segmentation, and classification is performed with a two-tier approach. In the proposed system, a self-organising map neural network is first trained on the features extracted from the discrete wavelet transform blend wavelets, and the resulting filter factors are subsequently trained by a K-nearest neighbour classifier; the testing process is likewise accomplished in two stages. The proposed two-tier classification system classifies brain tumours in a double training process, which gives preferable performance over the traditional classification method. The proposed system has been validated on real data sets, and the experimental results showed enhanced performance.

Fast wavelet based image characterization for content based medical image retrieval

  • Anwar, Syed Muhammad
  • Arshad, Fozia
  • Majid, Muhammad
2017 Conference Proceedings, cited 4 times
Website
Large collections of medical images accumulate at health care centers and hospitals. Medical images produced by different modalities like magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and X-rays have increased incredibly with the advent of the latest technologies for image acquisition. Retrieving clinical images of interest from these large data sets is a challenging and demanding task. In this paper, a fast wavelet based medical image retrieval system is proposed that can aid physicians in the identification or analysis of medical images. The image signature is calculated using kurtosis and standard deviation as features. A possible use case is when the radiologist has some suspicion about a diagnosis and wants further case histories: the acquired clinical images (e.g. MRI images of the brain) are sent as a query to the content based medical image retrieval system. The system is tuned to retrieve the most relevant images for the query. The proposed system is computationally efficient and more accurate in terms of the quality of the retrieved images.
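
A minimal sketch of a wavelet-domain signature built from subband standard deviation and kurtosis, in the spirit of the abstract, is shown below. It assumes PyWavelets and SciPy; the wavelet, decomposition level, and distance measure are placeholders rather than the authors' exact choices.

```python
# Wavelet-subband signature (std + kurtosis per band) and a simple
# nearest-neighbour retrieval by Euclidean distance between signatures.
import numpy as np
import pywt
from scipy.stats import kurtosis

def wavelet_signature(image, wavelet="db2", level=2):
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    feats = []
    for b in bands:
        feats.extend([np.std(b), kurtosis(b, axis=None)])
    return np.asarray(feats)

def retrieve(query, database):
    q = wavelet_signature(query)
    dists = [np.linalg.norm(q - wavelet_signature(img)) for img in database]
    return np.argsort(dists)            # indices ranked most to least similar

rng = np.random.default_rng(1)
db = [rng.random((128, 128)) for _ in range(5)]
print(retrieve(db[2], db))              # index 2 should rank first
```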

End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography

  • Ardila, D.
  • Kiraly, A. P.
  • Bharadwaj, S.
  • Choi, B.
  • Reicher, J. J.
  • Peng, L.
  • Tse, D.
  • Etemadi, M.
  • Ye, W.
  • Corrado, G.
  • Naidich, D. P.
  • Shetty, S.
Nat Med 2019 Journal Article, cited 1 times
Website
With an estimated 160,000 deaths in 2018, lung cancer is the most common cause of cancer death in the United States(1). Lung cancer screening using low-dose computed tomography has been shown to reduce mortality by 20-43% and is now included in US screening guidelines(1-6). Existing challenges include inter-grader variability and high false-positive and false-negative rates(7-10). We propose a deep learning algorithm that uses a patient's current and prior computed tomography volumes to predict the risk of lung cancer. Our model achieves a state-of-the-art performance (94.4% area under the curve) on 6,716 National Lung Cancer Screening Trial cases, and performs similarly on an independent clinical validation set of 1,139 cases. We conducted two reader studies. When prior computed tomography imaging was not available, our model outperformed all six radiologists with absolute reductions of 11% in false positives and 5% in false negatives. Where prior computed tomography imaging was available, the model performance was on-par with the same radiologists. This creates an opportunity to optimize the screening process via computer assistance and automation. While the vast majority of patients remain unscreened, we show the potential for deep learning models to increase the accuracy, consistency and adoption of lung cancer screening worldwide.

Discovery of pre-therapy 2-deoxy-2-[18F]fluoro-D-glucose positron emission tomography-based radiomics classifiers of survival outcome in non-small-cell lung cancer patients

  • Arshad, Mubarik A
  • Thornton, Andrew
  • Lu, Haonan
  • Tam, Henry
  • Wallitt, Kathryn
  • Rodgers, Nicola
  • Scarsbrook, Andrew
  • McDermott, Garry
  • Cook, Gary J
  • Landau, David
European journal of nuclear medicine and molecular imaging 2018 Journal Article, cited 0 times
Website

Morphological and multi-level geometrical descriptor analysis in CT and MRI volumes for automatic pancreas segmentation

  • Asaturyan, Hykoush
  • Gligorievski, Antonio
  • Villarini, Barbara
Computerized Medical Imaging and Graphics 2019 Journal Article, cited 3 times
Website
Automatic pancreas segmentation in 3D radiological scans is a critical, yet challenging task. As a prerequisite for computer-aided diagnosis (CADx) systems, accurate pancreas segmentation could generate both quantitative and qualitative information towards establishing the severity of a condition, and thus provide additional guidance for therapy planning. Since the pancreas is an organ of high inter-patient anatomical variability, previous segmentation approaches report lower quantitative accuracy scores in comparison to abdominal organs such as the liver or kidneys. This paper presents a novel approach for automatic pancreas segmentation in magnetic resonance imaging (MRI) and computed tomography (CT) scans. This method exploits 3D segmentation that, when coupled with geometrical and morphological characteristics of abdominal tissue, classifies distinct contours in tight pixel-range proximity as “pancreas” or “non-pancreas”. There are three main stages to this approach: (1) identify a major pancreas region and apply contrast enhancement to differentiate between pancreatic and surrounding tissue; (2) perform 3D segmentation via a continuous max-flow and min-cut approach, structured forest edge detection, and a training dataset of annotated pancreata; (3) eliminate non-pancreatic contours from the resultant segmentation via morphological operations on area, structure and connectivity between distinct contours. The proposed method is evaluated on a dataset containing 82 CT image volumes, achieving a mean Dice similarity coefficient (DSC) of 79.3 ± 4.4%. Two MRI datasets containing 216 and 132 image volumes are evaluated, achieving mean DSC of 79.6 ± 5.7% and 81.6 ± 5.1%, respectively. This approach is statistically stable, reflected by lower standard deviations in the metrics in comparison to state-of-the-art approaches.

Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method.

  • Astaraki, Mehdi
  • Wang, Chunliang
  • Buizza, Giulia
  • Toma-Dasu, Iuliana
  • Lazzeroni, Marta
  • Smedby, Orjan
Physica Medica 2019 Journal Article, cited 0 times
Website
PURPOSE: To explore prognostic and predictive values of a novel quantitative feature set describing intra-tumor heterogeneity in patients with lung cancer treated with concurrent and sequential chemoradiotherapy. METHODS: Longitudinal PET-CT images of 30 patients with non-small cell lung cancer were analysed. To describe tumor cell heterogeneity, the tumors were partitioned into one to ten concentric regions depending on their sizes, and, for each region, the change in average intensity between the two scans was calculated for PET and CT images separately to form the proposed feature set. To validate the prognostic value of the proposed method, radiomics analysis was performed and a combination of the proposed novel feature set and the classic radiomic features was evaluated. A feature selection algorithm was utilized to identify the optimal features, and a linear support vector machine was trained for the task of overall survival prediction in terms of area under the receiver operating characteristic curve (AUROC). RESULTS: The proposed novel feature set was found to be prognostic and even outperformed the radiomics approach with a significant difference (AUROC(SALoP) = 0.90 vs. AUROC(radiomic) = 0.71) when feature selection was not employed, whereas with feature selection, a combination of the novel feature set and radiomics led to the highest prognostic values. CONCLUSION: A novel feature set designed for capturing intra-tumor heterogeneity was introduced. Judging by their prognostic power, the proposed features have a promising potential for early survival prediction.
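
The intra-tumor partitioning idea can be sketched by splitting a tumor mask into concentric shells via a distance transform and recording the change in mean intensity per shell between two time points. The code below is a loose illustration assuming NumPy/SciPy and synthetic volumes; the paper's size-dependent number of regions and PET/CT specifics are not reproduced.

```python
# Concentric-shell features: per-shell change in mean intensity between two
# co-registered volumes, with shells defined by normalized boundary distance.
import numpy as np
from scipy import ndimage

def shell_features(img_t0, img_t1, mask, n_shells=4):
    dist = ndimage.distance_transform_edt(mask)
    if dist.max() > 0:
        dist = dist / dist.max()            # ~0 near boundary, 1 at the core
    edges = np.linspace(0.0, 1.0, n_shells + 1)
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = mask & (dist <= hi) if lo == 0.0 else mask & (dist > lo) & (dist <= hi)
        feats.append(float(img_t1[shell].mean() - img_t0[shell].mean()) if shell.any() else 0.0)
    return np.asarray(feats)

rng = np.random.default_rng(0)
vol_t0, vol_t1 = rng.random((64, 64, 64)), rng.random((64, 64, 64))
mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:44, 20:44, 20:44] = True            # synthetic "tumor" mask
print(shell_features(vol_t0, vol_t1, mask))
```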

Computer Aided Detection Scheme To Improve The Prognosis Assessment Of Early Stage Lung Cancer Patients

  • Athira, KV
  • Nithin, SS
Computer 2018 Journal Article, cited 0 times
Website
The aim is to develop a computer-aided detection scheme to predict the recurrence risk of stage 1 non-small cell lung cancer in patients after surgery. Using chest computed tomography images taken before surgery, the system automatically segments the tumor seen on the CT images and extracts tumor-related morphological and texture-based image features. We trained a Naïve Bayesian network classifier using six image features and an ANN classifier using two genomic biomarkers, namely protein expression of the excision repair cross-complementing 1 gene (ERCC1) and of a regulatory subunit of ribonucleotide reductase (RRM1), to predict the cancer recurrence risk. The new approach has a high potential to assist doctors in more effectively managing stage 1 NSCLC patients to reduce the cancer recurrence risk.

Biomedical Image Retrieval Using LBWP

  • Babu, Joyce Sarah
  • Mathew, Soumya
  • Simon, Rini
International Research Journal of Engineering and Technology 2017 Journal Article, cited 0 times
Website

Technical and Clinical Factors Affecting Success Rate of a Deep Learning Method for Pancreas Segmentation on CT

  • Bagheri, Mohammad Hadi
  • Roth, Holger
  • Kovacs, William
  • Yao, Jianhua
  • Farhadi, Faraz
  • Li, Xiaobai
  • Summers, Ronald M
Acad Radiol 2019 Journal Article, cited 0 times
Website
PURPOSE: Accurate pancreas segmentation has application in surgical planning, assessment of diabetes, and detection and analysis of pancreatic tumors. Factors that affect pancreas segmentation accuracy have not been previously reported. The purpose of this study is to identify technical and clinical factors that adversely affect the accuracy of pancreas segmentation on CT. METHOD AND MATERIALS: In this IRB and HIPAA compliant study, a deep convolutional neural network was used for pancreas segmentation in a publicly available archive of 82 portal-venous phase abdominal CT scans of 53 men and 29 women. The accuracies of the segmentations were evaluated by the Dice similarity coefficient (DSC). The DSC was then correlated with demographic and clinical data (age, gender, height, weight, body mass index), CT technical factors (image pixel size, slice thickness, presence or absence of oral contrast), and CT imaging findings (volume and attenuation of pancreas, visceral abdominal fat, and CT attenuation of the structures within a 5 mm neighborhood of the pancreas). RESULTS: The average DSC was 78% +/- 8%. Factors that were statistically significantly correlated with DSC included body mass index (r=0.34, p < 0.01), visceral abdominal fat (r=0.51, p < 0.0001), volume of the pancreas (r=0.41, p=0.001), standard deviation of CT attenuation within the pancreas (r=0.30, p=0.01), and median and average CT attenuation in the immediate neighborhood of the pancreas (r = -0.53, p < 0.0001 and r=-0.52, p < 0.0001). There were no significant correlations between the DSC and the height, gender, or mean CT attenuation of the pancreas. CONCLUSION: Increased visceral abdominal fat and accumulation of fat within or around the pancreas are major factors associated with more accurate segmentation of the pancreas. Potential applications of our findings include assessment of pancreas segmentation difficulty of a particular scan or dataset and identification of methods that work better for more challenging pancreas segmentations.
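
A minimal sketch of the Dice similarity coefficient (DSC) used as the accuracy measure in this study, together with a Pearson correlation of DSC against a covariate such as BMI, is given below; the arrays are synthetic stand-ins, not study data.

```python
# DSC between two binary segmentations, and correlation of DSC with a covariate.
import numpy as np
from scipy.stats import pearsonr

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
auto = rng.random((32, 32, 32)) > 0.5       # stand-in automated segmentation
manual = rng.random((32, 32, 32)) > 0.5     # stand-in manual reference
print("DSC:", dice(auto, manual))

dsc = rng.random(82)                        # hypothetical per-case DSC values
bmi = 20 + 10 * rng.random(82)              # hypothetical per-case BMI values
r, p = pearsonr(dsc, bmi)
print(f"r={r:.2f}, p={p:.3f}")
```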

Imaging genomics in cancer research: limitations and promises

  • Bai, Harrison X
  • Lee, Ashley M
  • Yang, Li
  • Zhang, Paul
  • Davatzikos, Christos
  • Maris, John M
  • Diskin, Sharon J
The British journal of radiology 2016 Journal Article, cited 28 times
Website

BraTS Multimodal Brain Tumor Segmentation Challenge

  • Bakas, Spyridon
2017 Conference Proceedings, cited 2030 times
Website

GLISTRboost: Combining Multimodal MRI Segmentation, Registration, and Biophysical Tumor Growth Modeling with Gradient Boosting Machines for Glioma Segmentation.

  • Bakas, S.
  • Zeng, K.
  • Sotiras, A.
  • Rathore, S.
  • Akbari, H.
  • Gaonkar, B.
  • Rozycki, M.
  • Pati, S.
  • Davatzikos, C.
Brainlesion 2016 Journal Article, cited 49 times
Website
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.

A radiogenomic dataset of non-small cell lung cancer

  • Bakr, Shaimaa
  • Gevaert, Olivier
  • Echegaray, Sebastian
  • Ayers, Kelsey
  • Zhou, Mu
  • Shafiq, Majid
  • Zheng, Hong
  • Benson, Jalen Anthony
  • Zhang, Weiruo
  • Leung, Ann NC
Scientific data 2018 Journal Article, cited 1 times
Website

Secure telemedicine using RONI halftoned visual cryptography without pixel expansion

  • Bakshi, Arvind
  • Patel, Anoop Kumar
Journal of Information Security and Applications 2019 Journal Article, cited 0 times
Website
Telemedicine is a well-known technique for delivering quality healthcare services remotely worldwide. For the diagnosis of disease and prescription by the doctor, a lot of information needs to be shared over public and private channels. Medical information like MRI, X-ray and CT-scan images contains very personal information and needs to be secured. Security of medical data, including confidentiality, privacy, and integrity, is still a challenge. It is observed that existing security techniques like digital watermarking and encryption are not efficient for real-time use. This paper investigates the problem and provides a security solution considering the major aspects, using Visual Cryptography (VC). The proposed algorithm creates shares for the parts of the image that do not contain relevant information. All the information related to the disease is considered relevant and is marked as the region of interest (ROI). The integrity of the image is maintained by inserting some information in the region of non-interest (RONI). All the generated shares are transmitted over different channels, and the embedded information is decrypted by overlapping (in XOR fashion) the shares in Θ(1) time. Visual perception of all the results discussed in this article is very clear. The proposed algorithm achieves performance metrics of PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and accuracy of 22.9452, 0.9701, and 99.8740, respectively.
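
The underlying (2, 2) XOR share scheme can be sketched in a few lines: a halftoned binary image is split into one random share and a second share equal to the image XOR-ed with the random share, so XOR-ing the two shares recovers the image exactly and without pixel expansion. The sketch assumes NumPy and a synthetic binary image; the paper's halftoning and ROI/RONI embedding steps are not reproduced.

```python
# (2, 2) XOR visual-cryptography shares with no pixel expansion.
import numpy as np

rng = np.random.default_rng(0)
halftone = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)   # stand-in binary image

share1 = rng.integers(0, 2, size=halftone.shape, dtype=np.uint8)  # random share
share2 = np.bitwise_xor(halftone, share1)                         # image XOR random

recovered = np.bitwise_xor(share1, share2)   # constant work per pixel
assert np.array_equal(recovered, halftone)
```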

Quantitative Imaging Features Improve Discrimination of Malignancy in Pulmonary Nodules

  • Balagurunathan, Yoganand
  • Schabath, Matthew B.
  • Wang, Hua
  • Liu, Ying
  • Gillies, Robert J.
Scientific Reports 2019 Journal Article, cited 0 times
Website
Pulmonary nodules are frequently detected radiological abnormalities in lung cancer screening. While nodules of the highest and lowest risk for cancer are often easily diagnosed by a trained radiologist, there is still a high rate of indeterminate pulmonary nodules (IPN) of unknown risk. Here, we test the hypothesis that computer-extracted quantitative features ("radiomics") can provide improved risk assessment in the diagnostic setting. Nodules were segmented in 3D, and 219 quantitative features were extracted from these volumes. Using these features, novel malignancy risk predictors were formed with various stratifications based on size, shape and texture feature categories. We used images and data from the National Lung Screening Trial (NLST) and curated a subset of 479 participants (244 for training and 235 for testing) that included incident lung cancers and nodule-positive controls. After removing redundant and non-reproducible features, optimal linear classifiers scored by the area under the receiver operating characteristic (AUROC) curve were used with an exhaustive search approach to find a discriminant set of image features, which were validated in an independent test dataset. We identified several strong predictive models: using size and shape features, the highest AUROC was 0.80; using non-size-based features, the highest AUROC was 0.85; combining features from all the categories, the highest AUROC was 0.83.
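
The evaluation style described (a linear classifier over a selected feature subset, scored by AUROC on a held-out set) can be sketched with scikit-learn on synthetic features, as below; the 219 radiomic features and the exhaustive search are not reproduced.

```python
# Linear classifier + held-out AUROC on a synthetic stand-in feature matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(479, 10))                          # stand-in features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=479) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=244, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```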

Bone-Cancer Assessment and Destruction Pattern Analysis in Long-Bone X-ray Image

  • Bandyopadhyay, Oishila
  • Biswas, Arindam
  • Bhattacharya, Bhargab B
J Digit Imaging 2018 Journal Article, cited 0 times
Website
Bone cancer originates from bone and rapidly spreads to the rest of the body affecting the patient. A quick and preliminary diagnosis of bone cancer begins with the analysis of bone X-ray or MRI image. Compared to MRI, an X-ray image provides a low-cost diagnostic tool for diagnosis and visualization of bone cancer. In this paper, a novel technique for the assessment of cancer stage and grade in long bones based on X-ray image analysis has been proposed. Cancer-affected bone images usually appear with a variation in bone texture in the affected region. A fusion of different methodologies is used for the purpose of our analysis. In the proposed approach, we extract certain features from bone X-ray images and use support vector machine (SVM) to discriminate healthy and cancerous bones. A technique based on digital geometry is deployed for localizing cancer-affected regions. Characterization of the present stage and grade of the disease and identification of the underlying bone-destruction pattern are performed using a decision tree classifier. Furthermore, the method leads to the development of a computer-aided diagnostic tool that can readily be used by paramedics and doctors. Experimental results on a number of test cases reveal satisfactory diagnostic inferences when compared with ground truth known from clinical findings.

A novel fully automated MRI-based deep-learning method for classification of IDH mutation status in brain gliomas

  • Bangalore Yogananda, Chandan Ganesh
  • Shah, Bhavya R
  • Vejdani-Jahromi, Maryam
  • Nalawade, Sahil S
  • Murugesan, Gowtham K
  • Yu, Frank F
  • Pinho, Marco C
  • Wagner, Benjamin C
  • Mickey, Bruce
  • Patel, Toral R
Neuro-oncology 2020 Journal Article, cited 4 times
Website

Interreader Variability of Dynamic Contrast-enhanced MRI of Recurrent Glioblastoma: The Multicenter ACRIN 6677/RTOG 0625 Study

  • Barboriak, Daniel P
  • Zhang, Zheng
  • Desai, Pratikkumar
  • Snyder, Bradley S
  • Safriel, Yair
  • McKinstry, Robert C
  • Bokstein, Felix
  • Sorensen, Gregory
  • Gilbert, Mark R
  • Boxerman, Jerrold L
Radiology 2019 Journal Article, cited 2 times
Website
Purpose To evaluate factors contributing to interreader variation (IRV) in parameters measured at dynamic contrast material-enhanced (DCE) MRI in patients with glioblastoma who were participating in a multicenter trial. Materials and Methods A total of 18 patients (mean age, 57 years +/- 13 [standard deviation]; 10 men) who volunteered for the advanced imaging arm of ACRIN 6677, a substudy of the RTOG 0625 clinical trial for recurrent glioblastoma treatment, underwent analyzable DCE MRI at one of four centers. The 78 imaging studies were analyzed centrally to derive the volume transfer constant (K(trans)) for gadolinium between blood plasma and tissue extravascular extracellular space, fractional volume of the extracellular extravascular space (ve), and initial area under the gadolinium concentration curve (IAUGC). Two independently trained teams consisting of a neuroradiologist and a technologist segmented the enhancing tumor on three-dimensional spoiled gradient-recalled acquisition in the steady-state images. Mean and median parameter values in the enhancing tumor were extracted after registering segmentations to parameter maps. The effect of imaging time relative to treatment, map quality, imager magnet and sequence, average tumor volume, and reader variability in tumor volume on IRV was studied by using intraclass correlation coefficients (ICCs) and linear mixed models. Results Mean interreader variations (+/- standard deviation) (difference as a percentage of the mean) for mean and median IAUGC, mean and median K(trans), and median ve were 18% +/- 24, 17% +/- 23, 27% +/- 34, 16% +/- 27, and 27% +/- 34, respectively. ICCs for these metrics ranged from 0.90 to 1.0 for baseline and from 0.48 to 0.76 for posttreatment examinations. Variability in reader-derived tumor volume was significantly related to IRV for all parameters. Conclusion Differences in reader tumor segmentations are a significant source of interreader variation for all dynamic contrast-enhanced MRI parameters. (c) RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Wolf in this issue.

Pathologically-Validated Tumor Prediction Maps in MRI

  • Barrington, Alex
2019 Thesis, cited 0 times
Website
Glioblastoma (GBM) is an aggressive cancer with an average 5-year survival rate of about 5%. Following treatment with surgery, radiation, and chemotherapy, diagnosing tumor recurrence requires serial magnetic resonance imaging (MRI) scans. Infiltrative tumor cells beyond gadolinium enhancement on T1-weighted MRI are difficult to detect. This study therefore aims to improve tumor detection beyond traditional tumor margins. To accomplish this, a neural network model was trained to classify tissue samples as ‘tumor’ or ‘not tumor’. This model was then used to classify thousands of tiles from histology samples acquired at autopsy with known MRI locations on the patient’s final clinical MRI scan. This combined radiological-pathological (rad-path) dataset was then treated as a ground truth to train a second model for predicting tumor presence from MRI alone. Predictive maps were created for seven patients left out of the training steps, and tissue samples were tested to determine the model’s accuracy. The final model produced a receiver operator characteristic (ROC) area under the curve (AUC) of 0.70. This study demonstrates a new method for detecting infiltrative tumor beyond conventional radiologist defined margins based on neural networks applied to rad-path datasets in glioblastoma.

Equating quantitative emphysema measurements on different CT image reconstructions

  • Bartel, Seth T
  • Bierhals, Andrew J
  • Pilgram, Thomas K
  • Hong, Cheng
  • Schechtman, Kenneth B
  • Conradi, Susan H
  • Gierada, David S
Medical physics 2011 Journal Article, cited 15 times
Website
PURPOSE: To mathematically model the relationship between CT measurements of emphysema obtained from images reconstructed using different section thicknesses and kernels and to evaluate the accuracy of the models for converting measurements to those of a reference reconstruction. METHODS: CT raw data from the lung cancer screening examinations of 138 heavy smokers were reconstructed at 15 different combinations of section thickness and kernel. An emphysema index was quantified as the percentage of the lung with attenuation below -950 HU (EI950). Linear, quadratic, and power functions were used to model the relationship between EI950 values obtained with a reference 1 mm, medium smooth kernel reconstruction and values from each of the other 14 reconstructions. Preferred models were selected using the corrected Akaike information criterion (AICc), coefficients of determination (R2), and residuals (conversion errors), and cross-validated by a jackknife approach using the leave-one-out method. RESULTS: The preferred models were power functions, with model R2 values ranging from 0.949 to 0.998. The errors in converting EI950 measurements from other reconstructions to the 1 mm, medium smooth kernel reconstruction in leave-one-out testing were less than 3.0 index percentage points for all reconstructions, and less than 1.0 index percentage point for five reconstructions. Conversion errors were related in part to image noise, emphysema distribution, and attenuation histogram parameters. Conversion inaccuracy related to increased kernel sharpness tended to be reduced by increased section thickness. CONCLUSIONS: Image reconstruction-related differences in quantitative emphysema measurements were successfully modeled using power functions.
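
The two quantities at the core of this study can be sketched directly: the emphysema index EI950 as the percentage of lung voxels below -950 HU, and a power-function fit y = a·x^b mapping the index from one reconstruction to the reference reconstruction. The code assumes NumPy/SciPy and synthetic values.

```python
# EI950 from Hounsfield-unit values, and a power-function conversion fit.
import numpy as np
from scipy.optimize import curve_fit

def ei950(lung_hu):
    """Percent of lung voxels with attenuation below -950 HU."""
    return 100.0 * np.mean(lung_hu < -950)

rng = np.random.default_rng(0)
print("EI950 =", ei950(rng.normal(-870, 60, size=100000)), "%")

power = lambda x, a, b: a * np.power(x, b)
x = np.linspace(1, 40, 50)                      # index from another reconstruction
y = 0.8 * x**1.1 + rng.normal(0, 0.5, 50)       # index on the reference reconstruction
(a, b), _ = curve_fit(power, x, y, p0=(1.0, 1.0))
print(f"fit: y = {a:.2f} * x^{b:.2f}")
```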

Call for Data Standardization: Lessons Learned and Recommendations in an Imaging Study

  • Basu, Amrita
  • Warzel, Denise
  • Eftekhari, Aras
  • Kirby, Justin S
  • Freymann, John
  • Knable, Janice
  • Sharma, Ashish
  • Jacobs, Paula
JCO Clin Cancer Inform 2019 Journal Article, cited 0 times
Website
PURPOSE: Data sharing creates potential cost savings, supports data aggregation, and facilitates reproducibility to ensure quality research; however, data from heterogeneous systems require retrospective harmonization. This is a major hurdle for researchers who seek to leverage existing data. Efforts focused on strategies for data interoperability largely center around the use of standards but ignore the problems of competing standards and the value of existing data. Interoperability remains reliant on retrospective harmonization. Approaches to reduce this burden are needed. METHODS: The Cancer Imaging Archive (TCIA) is an example of an imaging repository that accepts data from a diversity of sources. It contains medical images from investigators worldwide and substantial nonimage data. Digital Imaging and Communications in Medicine (DICOM) standards enable querying across images, but TCIA does not enforce other standards for describing nonimage supporting data, such as treatment details and patient outcomes. In this study, we used 9 TCIA lung and brain nonimage files containing 659 fields to explore retrospective harmonization for cross-study query and aggregation. It took 329.5 hours, or 2.3 months, extended over 6 months to identify 41 overlapping fields in 3 or more files and transform 31 of them. We used the Genomic Data Commons (GDC) data elements as the target standards for harmonization. RESULTS: We characterized the issues and have developed recommendations for reducing the burden of retrospective harmonization. Once we harmonized the data, we also developed a Web tool to easily explore harmonized collections. CONCLUSION: While prospective use of standards can support interoperability, there are issues that complicate this goal. Our work recognizes and reveals retrospective harmonization issues when trying to reuse existing data and recommends national infrastructure to address these issues.

Variability of manual segmentation of the prostate in axial T2-weighted MRI: A multi-reader study

  • Becker, A. S.
  • Chaitanya, K.
  • Schawkat, K.
  • Muehlematter, U. J.
  • Hotker, A. M.
  • Konukoglu, E.
  • Donati, O. F.
Eur J Radiol 2019 Journal Article, cited 3 times
Website
PURPOSE: To evaluate the interreader variability in prostate and seminal vesicle (SV) segmentation on T2w MRI. METHODS: Six readers segmented the peripheral zone (PZ), transitional zone (TZ) and SV slice-wise on axial T2w prostate MRI examinations of n=80 patients. Twenty different similarity scores, including dice score (DS), Hausdorff distance (HD) and volumetric similarity coefficient (VS), were computed with the VISCERAL EvaluateSegmentation software for all structures combined and separately for the whole gland (WG=PZ+TZ), TZ and SV. Differences between base, midgland and apex were evaluated with DS slice-wise. Descriptive statistics for similarity scores were computed. Wilcoxon testing to evaluate differences of DS, HD and VS was performed. RESULTS: Overall segmentation variability was good with a mean DS of 0.859 (+/-SD=0.0542), HD of 36.6 (+/-34.9 voxels) and VS of 0.926 (+/-0.065). The WG showed a DS, HD and VS of 0.738 (+/-0.144), 36.2 (+/-35.6 vx) and 0.853 (+/-0.143), respectively. The TZ showed generally lower variability with a DS of 0.738 (+/-0.144), HD of 24.8 (+/-16 vx) and VS of 0.908 (+/-0.126). The lowest variability was found for the SV with DS of 0.884 (+/-0.0407), HD of 17 (+/-10.9 vx) and VS of 0.936 (+/-0.0509). We found a markedly lower DS of the segmentations in the apex (0.85+/-0.12) compared to the base (0.87+/-0.10, p<0.01) and the midgland (0.89+/-0.10, p<0.001). CONCLUSIONS: We report baseline values for interreader variability of prostate and SV segmentation on T2w MRI. Variability was highest in the apex, lower in the base, and lowest in the midgland.
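
Two of the agreement metrics used in this study, volumetric similarity and the symmetric Hausdorff distance (in voxels), can be sketched for a pair of binary segmentations as below; the study itself used the VISCERAL EvaluateSegmentation tool, and the masks here are synthetic.

```python
# Volumetric similarity and symmetric Hausdorff distance for two binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def volumetric_similarity(a, b):
    va, vb = a.sum(), b.sum()
    return 1.0 - abs(va - vb) / (va + vb) if (va + vb) else 1.0

def hausdorff_voxels(a, b):
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

rng = np.random.default_rng(0)
reader1 = np.zeros((40, 40, 40), dtype=bool); reader1[10:30, 10:30, 10:30] = True
reader2 = np.zeros((40, 40, 40), dtype=bool); reader2[12:31, 11:29, 10:30] = True
print("VS:", volumetric_similarity(reader1, reader2))
print("HD:", hausdorff_voxels(reader1, reader2))
```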

Multi‐site quality and variability analysis of 3D FDG PET segmentations based on phantom and clinical image data

  • Beichel, Reinhard R
  • Smith, Brian J
  • Bauer, Christian
  • Ulrich, Ethan J
  • Ahmadvand, Payam
  • Budzevich, Mikalai M
  • Gillies, Robert J
  • Goldgof, Dmitry
  • Grkovski, Milan
  • Hamarneh, Ghassan
Medical physics 2017 Journal Article, cited 7 times
Website
PURPOSE: Radiomics utilizes a large number of image-derived features for quantifying tumor characteristics that can in turn be correlated with response and prognosis. Unfortunately, extraction and analysis of such image-based features is subject to measurement variability and bias. The challenge for radiomics is particularly acute in Positron Emission Tomography (PET) where limited resolution, a high noise component related to the limited stochastic nature of the raw data, and the wide variety of reconstruction options confound quantitative feature metrics. Extracted feature quality is also affected by tumor segmentation methods used to define regions over which to calculate features, making it challenging to produce consistent radiomics analysis results across multiple institutions that use different segmentation algorithms in their PET image analysis. Understanding each element contributing to these inconsistencies in quantitative image feature and metric generation is paramount for ultimate utilization of these methods in multi-institutional trials and clinical oncology decision making. METHODS: To assess segmentation quality and consistency at the multi-institutional level, we conducted a study of seven institutional members of the National Cancer Institute Quantitative Imaging Network. For the study, members were asked to segment a common set of phantom PET scans acquired over a range of imaging conditions as well as a second set of head and neck cancer (HNC) PET scans. Segmentations were generated at each institution using their preferred approach. In addition, participants were asked to repeat segmentations with a time interval between initial and repeat segmentation. This procedure resulted in overall 806 phantom insert and 641 lesion segmentations. Subsequently, the volume was computed from the segmentations and compared to the corresponding reference volume by means of statistical analysis. RESULTS: On the two test sets (phantom and HNC PET scans), the performance of the seven segmentation approaches was as follows. On the phantom test set, the mean relative volume errors ranged from 29.9 to 87.8% of the ground truth reference volumes, and the repeat difference for each institution ranged between -36.4 to 39.9%. On the HNC test set, the mean relative volume error ranged between -50.5 to 701.5%, and the repeat difference for each institution ranged between -37.7 to 31.5%. In addition, performance measures per phantom insert/lesion size categories are given in the paper. On phantom data, regression analysis resulted in coefficient of variation (CV) components of 42.5% for scanners, 26.8% for institutional approaches, 21.1% for repeated segmentations, 14.3% for relative contrasts, 5.3% for count statistics (acquisition times), and 0.0% for repeated scans. Analysis showed that the CV components for approaches and repeated segmentations were significantly larger on the HNC test set with increases by 112.7% and 102.4%, respectively. CONCLUSION: Analysis results underline the importance of PET scanner reconstruction harmonization and imaging protocol standardization for quantification of lesion volumes. In addition, to enable a distributed multi-site analysis of FDG PET images, harmonization of analysis approaches and operator training in combination with highly automated segmentation methods seems to be advisable. Future work will focus on quantifying the impact of segmentation variation on radiomics system performance.

Radiogenomic analysis of hypoxia pathway reveals computerized MRI descriptors predictive of overall survival in Glioblastoma

  • Beig, Niha
  • Patel, Jay
  • Prasanna, Prateek
  • Partovi, Sasan
  • Varadhan, Vinay
  • Madabhushi, Anant
  • Tiwari, Pallavi
2017 Conference Proceedings, cited 3 times
Website

Analysis of postprocessing steps for residue function dependent dynamic susceptibility contrast (DSC)‐MRI biomarkers and their clinical impact on glioma grading for both 1.5 and 3T

  • Bell, Laura C
  • Stokes, Ashley M
  • Quarles, C. Chad
Journal of Magnetic Resonance Imaging 2019 Journal Article, cited 0 times
Website
BACKGROUND: Dynamic susceptibility contrast (DSC)-MRI analysis pipelines differ across studies and sites, potentially confounding the clinical value and use of the derived biomarkers. PURPOSE/HYPOTHESIS: To investigate how postprocessing steps for computation of cerebral blood volume (CBV) and residue function dependent parameters (cerebral blood flow [CBF], mean transit time [MTT], capillary transit heterogeneity [CTH]) impact glioma grading. STUDY TYPE: Retrospective study from The Cancer Imaging Archive (TCIA). POPULATION: Forty-nine subjects with low- and high-grade gliomas. FIELD STRENGTH/SEQUENCE: 1.5 and 3.0T clinical systems using a single-echo echo planar imaging (EPI) acquisition. ASSESSMENT: Manual regions of interest (ROIs) were provided by TCIA and automatically segmented ROIs were generated by k-means clustering. CBV was calculated based on conventional equations. Residue function dependent biomarkers (CBF, MTT, CTH) were found by two deconvolution methods: circular discretization followed by a signal-to-noise ratio (SNR)-adapted eigenvalue thresholding (Method 1) and Volterra discretization with L-curve-based Tikhonov regularization (Method 2). STATISTICAL TESTS: Analysis of variance, receiver operating characteristics (ROC), and logistic regression tests. RESULTS: MTT alone was unable to statistically differentiate glioma grade (P > 0.139). When normalized, tumor CBF, CTH, and CBV did not differ across field strengths (P > 0.141). Biomarkers normalized to automatically segmented regions performed equally (rCTH AUROC is 0.73 compared with 0.74) or better (rCBF AUROC increases from 0.74-0.84; rCBV AUROC increases 0.78-0.86) than manually drawn ROIs. By updating the current deconvolution steps (Method 2), rCTH can act as a classifier for glioma grade (P < 0.007), but not if processed by current conventional DSC methods (Method 1) (P > 0.577). Lastly, higher-order biomarkers (eg, rCBF and rCTH) along with rCBV increases AUROC to 0.92 for differentiating tumor grade as compared with 0.78 and 0.86 (manual and automatic reference regions, respectively) for rCBV alone. DATA CONCLUSION: With optimized analysis pipelines, higher-order perfusion biomarkers (rCBF and rCTH) improve glioma grading as compared with CBV alone. Additionally, postprocessing steps impact thresholds needed for glioma grading. LEVEL OF EVIDENCE: 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2019.

Automatic Design of Window Operators for the Segmentation of the Prostate Gland in Magnetic Resonance Images

  • Benalcázar, Marco E
  • Brun, Marcel
  • Ballarin, Virginia
2015 Conference Proceedings, cited 0 times
Website

Overview of the American Society for Radiation Oncology–National Institutes of Health–American Association of Physicists in Medicine Workshop 2015: Exploring Opportunities for Radiation Oncology in the Era of Big Data

  • Benedict, Stanley H
  • Hoffman, Karen
  • Martel, Mary K
  • Abernethy, Amy P
  • Asher, Anthony L
  • Capala, Jacek
  • Chen, Ronald C
  • Chera, Bhisham
  • Couch, Jennifer
  • Deye, James
International Journal of Radiation Oncology • Biology • Physics 2016 Journal Article, cited 0 times

Segmentation of three-dimensional images with parametric active surfaces and topology changes

  • Benninghoff, Heike
  • Garcke, Harald
Journal of Scientific Computing 2017 Journal Article, cited 1 times
Website
In this paper, we introduce a novel parametric finite element method for segmentation of three-dimensional images. We consider a piecewise constant version of the Mumford-Shah and the Chan-Vese functionals and perform a region-based segmentation of 3D image data. An evolution law is derived from energy minimization problems which push the surfaces to the boundaries of 3D objects in the image. We propose a parametric scheme which describes the evolution of parametric surfaces. An efficient finite element scheme is proposed for a numerical approximation of the evolution equations. Since standard parametric methods cannot handle topology changes automatically, an efficient method is presented to detect, identify and perform changes in the topology of the surfaces. One main focus of this paper are the algorithmic details to handle topology changes like splitting and merging of surfaces and change of the genus of a surface. Different artificial images are studied to demonstrate the ability to detect the different types of topology changes. Finally, the parametric method is applied to segmentation of medical 3D images.

Isolation of Prostate Gland in T1-Weighted Magnetic Resonance Images using Computer Vision

  • Bhattacharya, Sayantan
  • Sharma, Apoorv
  • Gupta, Rinki
  • Bhan, Anupama
2020 Conference Proceedings, cited 0 times
Website

G-DOC Plus–an integrative bioinformatics platform for precision medicine

  • Bhuvaneshwar, Krithika
  • Belouali, Anas
  • Singh, Varun
  • Johnson, Robert M
  • Song, Lei
  • Alaoui, Adil
  • Harris, Michael A
  • Clarke, Robert
  • Weiner, Louis M
  • Gusev, Yuriy
BMC bioinformatics 2016 Journal Article, cited 14 times
Website

Artificial intelligence in cancer imaging: Clinical challenges and applications

  • Bi, Wenya Linda
  • Hosny, Ahmed
  • Schabath, Matthew B
  • Giger, Maryellen L
  • Birkbak, Nicolai J
  • Mehrtash, Alireza
  • Allison, Tavis
  • Arnaout, Omar
  • Abbosh, Christopher
  • Dunn, Ian F
CA: a cancer journal for clinicians 2019 Journal Article, cited 0 times
Website

Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views

  • Bier, B.
  • Goldmann, F.
  • Zaech, J. N.
  • Fotouhi, J.
  • Hegeman, R.
  • Grupp, R.
  • Armand, M.
  • Osgood, G.
  • Navab, N.
  • Maier, A.
  • Unberath, M.
Int J Comput Assist Radiol Surg 2019 Journal Article, cited 0 times
Website
Purpose Minimally invasive alternatives are now available for many complex surgeries. These approaches are enabled by the increasing availability of intra-operative image guidance. Yet, fluoroscopic X-rays suffer from projective transformation and thus cannot provide direct views onto anatomy. Surgeons could highly benefit from additional information, such as the anatomical landmark locations in the projections, to support intra-operative decision making. However, detecting landmarks is challenging since the viewing direction changes substantially between views leading to varying appearance of the same landmark. Therefore, and to the best of our knowledge, view-independent anatomical landmark detection has not been investigated yet. Methods In this work, we propose a novel approach to detect multiple anatomical landmarks in X-ray images from arbitrary viewing directions. To this end, a sequential prediction framework based on convolutional neural networks is employed to simultaneously regress all landmark locations. For training, synthetic X-rays are generated with a physically accurate forward model that allows direct application of the trained model to real X-ray images of the pelvis. View invariance is achieved via data augmentation by sampling viewing angles on a spherical segment of 120∘×90∘ . Results On synthetic data, a mean prediction error of 5.6 ± 4.5 mm is achieved. Further, we demonstrate that the trained model can be directly applied to real X-rays and show that these detections define correspondences to a respective CT volume, which allows for analytic estimation of the 11 degree of freedom projective mapping. Conclusion We present the first tool to detect anatomical landmarks in X-ray images independent of their viewing direction. Access to this information during surgery may benefit decision making and constitutes a first step toward global initialization of 2D/3D registration without the need of calibration. As such, the proposed concept has a strong prospect to facilitate and enhance applications and methods in the realm of image-guided surgery.

Performance analysis of unsupervised optimal fuzzy clustering algorithm for MRI brain tumor segmentation

  • Blessy, SA Praylin Selva
  • Sulochana, C Helen
Technology and Health Care 2015 Journal Article, cited 0 times
Website
BACKGROUND: Segmentation of brain tumor from Magnetic Resonance Imaging (MRI) becomes very complicated due to the structural complexities of human brain and the presence of intensity inhomogeneities. OBJECTIVE: To propose a method that effectively segments brain tumor from MR images and to evaluate the performance of unsupervised optimal fuzzy clustering (UOFC) algorithm for segmentation of brain tumor from MR images. METHODS: Segmentation is done by preprocessing the MR image to standardize intensity inhomogeneities followed by feature extraction, feature fusion and clustering. RESULTS: Different validation measures are used to evaluate the performance of the proposed method using different clustering algorithms. The proposed method using UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. CONCLUSIONS: Validation results clearly show that the proposed method with UOFC algorithm effectively segments brain tumor from MR images.

Solid Indeterminate Nodules with a Radiological Stability Suggesting Benignity: A Texture Analysis of Computed Tomography Images Based on the Kurtosis and Skewness of the Nodule Volume Density Histogram

  • Borguezan, Bruno Max
  • Lopes, Agnaldo José
  • Saito, Eduardo Haruo
  • Higa, Claudio
  • Silva, Aristófanes Corrêa
  • Nunes, Rodolfo Acatauassú
Pulmonary Medicine 2019 Journal Article, cited 0 times
Website
BACKGROUND: The number of incidental findings of pulmonary nodules using imaging methods to diagnose other thoracic or extrathoracic conditions has increased, suggesting the need for in-depth radiological image analyses to identify nodule type and avoid unnecessary invasive procedures. OBJECTIVES: The present study evaluated solid indeterminate nodules with a radiological stability suggesting benignity (SINRSBs) through a texture analysis of computed tomography (CT) images. METHODS: A total of 100 chest CT scans were evaluated, including 50 cases of SINRSBs and 50 cases of malignant nodules. SINRSB CT scans were performed using the same noncontrast enhanced CT protocol and equipment; the malignant nodule data were acquired from several databases. The kurtosis (KUR) and skewness (SKW) values of these tests were determined for the whole volume of each nodule, and the histograms were classified into two basic patterns: peaks or plateaus. RESULTS: The mean (MEN) KUR values of the SINRSBs and malignant nodules were 3.37 ± 3.88 and 5.88 ± 5.11, respectively. The receiver operating characteristic (ROC) curve showed that the sensitivity and specificity for distinguishing SINRSBs from malignant nodules were 65% and 66% for KUR values >6, respectively, with an area under the curve (AUC) of 0.709 (p < 0.0001). The MEN SKW values of the SINRSBs and malignant nodules were 1.73 ± 0.94 and 2.07 ± 1.01, respectively. The ROC curve showed that the sensitivity and specificity for distinguishing malignant nodules from SINRSBs were 65% and 66% for SKW values >3.1, respectively, with an AUC of 0.709 (p < 0.0001). An analysis of the peak and plateau histograms revealed sensitivity, specificity, and accuracy values of 84%, 74%, and 79%, respectively. CONCLUSION: KUR, SKW, and histogram shape can help to noninvasively diagnose SINRSBs but should not be used alone or without considering clinical data.
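
The KUR and SKW parameters described above are first-order statistics of the voxel values inside the nodule; a minimal sketch using SciPy (hypothetical CT volume and nodule mask; note that scipy.stats.kurtosis returns Pearson's kurtosis when fisher=False):

    import numpy as np
    from scipy.stats import kurtosis, skew

    def nodule_histogram_stats(ct_volume, nodule_mask):
        """Kurtosis and skewness of the voxel densities inside a segmented nodule."""
        values = ct_volume[nodule_mask > 0].astype(float)
        return kurtosis(values, fisher=False), skew(values)

    # Hypothetical CT volume (HU) and binary nodule mask of the same shape
    ct_volume = np.random.normal(-650, 120, size=(64, 64, 64))
    nodule_mask = np.zeros(ct_volume.shape, dtype=np.uint8)
    nodule_mask[28:36, 28:36, 28:36] = 1

    kur, skw = nodule_histogram_stats(ct_volume, nodule_mask)
    print("KUR = %.2f, SKW = %.2f" % (kur, skw))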

Radiogenomics of Clear Cell Renal Cell Carcinoma: Associations Between mRNA-Based Subtyping and CT Imaging Features

  • Bowen, Lan
  • Xiaojing, Li
Academic radiology 2018 Journal Article, cited 0 times
Website

Singular value decomposition using block least mean square method for image denoising and compression

  • Boyat, Ajay Kumar
  • Khare, Parth
2015 Conference Proceedings, cited 1 times
Website

A volumetric technique for fossil body mass estimation applied to Australopithecus afarensis

  • Brassey, Charlotte A
  • O'Mahoney, Thomas G
  • Chamberlain, Andrew T
  • Sellers, William I
Journal of human evolution 2018 Journal Article, cited 3 times
Website

Constructing 3D-Printable CAD Models of Prostates from MR Images

  • Breseman, Kelsey
  • Lee, Christopher
  • Bloch, B Nicholas
  • Jaffe, Carl
2013 Conference Proceedings, cited 1 times
Website
This paper describes the development of a procedure to generate patient-specific, three-dimensional (3D) solid models of prostates (and related anatomy) from magnetic resonance (MR) images. The 3D models are rendered in STL file format which can be physically printed or visualized on a holographic display system. An example is presented in which a 3D model is printed following this procedure.

Quantitative variations in texture analysis features dependent on MRI scanning parameters: A phantom model

  • Buch, Karen
  • Kuno, Hirofumi
  • Qureshi, Muhammad M
  • Li, Baojun
  • Sakai, Osamu
Journal of applied clinical medical physics 2018 Journal Article, cited 0 times
Website

Quantitative Imaging Biomarker Ontology (QIBO) for Knowledge Representation of Biomedical Imaging Biomarkers

  • Buckler, AndrewJ
  • Ouellette, M.
  • Danagoulian, J.
  • Wernsing, G.
  • Liu, TiffanyTing
  • Savig, Erica
  • Suzek, BarisE
  • Rubin, DanielL
  • Paik, David
Journal of Digital Imaging 2013 Journal Article, cited 17 times
Website

Comparing nonrigid registration techniques for motion corrected MR prostate diffusion imaging

  • Buerger, C
  • Sénégas, J
  • Kabus, S
  • Carolus, H
  • Schulz, H
  • Agarwal, H
  • Turkbey, B
  • Choyke, PL
  • Renisch, S
Medical physics 2015 Journal Article, cited 4 times
Website
PURPOSE: T2-weighted magnetic resonance imaging (MRI) is commonly used for anatomical visualization in the pelvis area, such as the prostate, with high soft-tissue contrast. MRI can also provide functional information such as diffusion-weighted imaging (DWI) which depicts the molecular diffusion processes in biological tissues. The combination of anatomical and functional imaging techniques is widely used in oncology, e.g., for prostate cancer diagnosis and staging. However, acquisition-specific distortions as well as physiological motion lead to misalignments between T2 and DWI and consequently to a reduced diagnostic value. Image registration algorithms are commonly employed to correct for such misalignment. METHODS: The authors compare the performance of five state-of-the-art nonrigid image registration techniques for accurate image fusion of DWI with T2. RESULTS: Image data of 20 prostate patients with cancerous lesions or cysts were acquired. All registration algorithms were validated using intensity-based as well as landmark-based techniques. CONCLUSIONS: The authors' results show that the "fast elastic image registration" provides most accurate results with a target registration error of 1.07 +/- 0.41 mm at minimum execution times of 11 +/- 1 s.
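
The target registration error used for validation above is the distance between corresponding landmarks after registration; a minimal sketch with hypothetical landmark coordinates in millimetres:

    import numpy as np

    def target_registration_error(warped_landmarks_mm, reference_landmarks_mm):
        """Mean and standard deviation of landmark distances after registration."""
        d = np.linalg.norm(warped_landmarks_mm - reference_landmarks_mm, axis=1)
        return d.mean(), d.std()

    # Hypothetical corresponding landmarks (N x 3, mm) on the registered DWI and the T2 reference
    warped_dwi = np.array([[10.2, 33.1, 8.0], [25.4, 40.2, 12.1], [31.0, 28.7, 15.3]])
    t2_reference = np.array([[10.9, 32.5, 8.4], [24.8, 41.0, 11.6], [31.5, 29.3, 14.9]])
    mean_tre, std_tre = target_registration_error(warped_dwi, t2_reference)
    print("TRE = %.2f +/- %.2f mm" % (mean_tre, std_tre))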

Using computer‐extracted image phenotypes from tumors on breast magnetic resonance imaging to predict breast cancer pathologic stage

  • Burnside, Elizabeth S
  • Drukker, Karen
  • Li, Hui
  • Bonaccio, Ermelinda
  • Zuley, Margarita
  • Ganott, Marie
  • Net, Jose M
  • Sutton, Elizabeth J
  • Brandt, Kathleen R
  • Whitman, Gary J
Cancer 2016 Journal Article, cited 28 times
Website

Medical Image Retrieval Based on Convolutional Neural Network and Supervised Hashing

  • Cai, Yiheng
  • Li, Yuanyuan
  • Qiu, Changyan
  • Ma, Jie
  • Gao, Xurong
IEEE Access 2019 Journal Article, cited 0 times
Website
In recent years, with extensive application in image retrieval and other tasks, a convolutional neural network (CNN) has achieved outstanding performance. In this paper, a new content-based medical image retrieval (CBMIR) framework using CNN and hash coding is proposed. The new framework adopts a Siamese network in which pairs of images are used as inputs, and a model is learned to make images belonging to the same class have similar features by using weight sharing and a contrastive loss function. In each branch of the network, CNN is adapted to extract features, followed by hash mapping, which is used to reduce the dimensionality of feature vectors. In the training process, a new loss function is designed to make the feature vectors more distinguishable, and a regularization term is added to encourage the real value outputs to approximate the desired binary values. In the retrieval phase, the compact binary hash code of the query image is achieved from the trained network and is subsequently compared with the hash codes of the database images. We experimented on two medical image datasets: the cancer imaging archive-computed tomography (TCIA-CT) and the vision and image analysis group/international early lung cancer action program (VIA/I-ELCAP). The results indicate that our method is superior to existing hash methods and CNN methods. Compared with the traditional hashing method, feature extraction based on CNN has advantages. The proposed algorithm combining a Siamese network with the hash method is superior to the classical CNN-based methods. The application of a new loss function can effectively improve retrieval accuracy.
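
The contrastive loss and hash-code comparison described above follow a standard pattern; the sketch below (plain NumPy, not the authors' network) shows how one pair of embeddings contributes to such a loss and how real-valued outputs are binarized into hash codes for Hamming-distance retrieval. The margin value and embedding length are assumptions.

    import numpy as np

    def contrastive_loss(f1, f2, same_class, margin=1.0):
        """Pull same-class pairs together; push different-class pairs at least `margin` apart."""
        d = np.linalg.norm(f1 - f2)
        if same_class:
            return 0.5 * d ** 2
        return 0.5 * max(margin - d, 0.0) ** 2

    def to_hash_code(embedding):
        """Binarize real-valued outputs into a compact 0/1 hash code."""
        return (embedding > 0).astype(np.uint8)

    # Hypothetical 8-bit embeddings for a query image and a database image
    query = np.array([0.7, -0.2, 0.1, -0.9, 0.4, 0.3, -0.1, 0.8])
    match = np.array([0.6, -0.3, 0.2, -0.8, 0.5, 0.1, -0.2, 0.9])
    print("loss (same class):", contrastive_loss(query, match, same_class=True))
    print("Hamming distance:", int(np.sum(to_hash_code(query) != to_hash_code(match))))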

Head and neck cancer patient images for determining auto-segmentation accuracy in T2-weighted magnetic resonance imaging through expert manual segmentations

  • Cardenas, Carlos E
  • Mohamed, Abdallah S R
  • Yang, Jinzhong
  • Gooding, Mark
  • Veeraraghavan, Harini
  • Kalpathy-Cramer, Jayashree
  • Ng, Sweet Ping
  • Ding, Yao
  • Wang, Jihong
  • Lai, Stephen Y
  • Fuller, Clifton D
  • Sharp, Greg
Med Phys 2020 Dataset, cited 0 times
Website
PURPOSE: The use of magnetic resonance imaging (MRI) in radiotherapy treatment planning has rapidly increased due to its ability to evaluate patient's anatomy without the use of ionizing radiation and due to its high soft tissue contrast. For these reasons, MRI has become the modality of choice for longitudinal and adaptive treatment studies. Automatic segmentation could offer many benefits for these studies. In this work, we describe a T2-weighted MRI dataset of head and neck cancer patients that can be used to evaluate the accuracy of head and neck normal tissue auto-segmentation systems through comparisons to available expert manual segmentations. ACQUISITION AND VALIDATION METHODS: T2-weighted MRI images were acquired for 55 head and neck cancer patients. These scans were collected after radiotherapy computed tomography (CT) simulation scans using a thermoplastic mask to replicate patient treatment position. All scans were acquired on a single 1.5 T Siemens MAGNETOM Aera MRI with two large four-channel flex phased-array coils. The scans covered the region encompassing the nasopharynx region cranially and supraclavicular lymph node region caudally, when possible, in the superior-inferior direction. Manual contours were created for the left/right submandibular gland, left/right parotids, left/right lymph node level II, and left/right lymph node level III. These contours underwent quality assurance to ensure adherence to predefined guidelines, and were corrected if edits were necessary. DATA FORMAT AND USAGE NOTES: The T2-weighted images and RTSTRUCT files are available in DICOM format. The regions of interest are named based on AAPM's Task Group 263 nomenclature recommendations (Glnd_Submand_L, Glnd_Submand_R, LN_Neck_II_L, Parotid_L, Parotid_R, LN_Neck_II_R, LN_Neck_III_L, LN_Neck_III_R). This dataset is available on The Cancer Imaging Archive (TCIA) by the National Cancer Institute under the collection "AAPM RT-MAC Grand Challenge 2019" (https://doi.org/10.7937/tcia.2019.bcfjqfqb). POTENTIAL APPLICATIONS: This dataset provides head and neck patient MRI scans to evaluate auto-segmentation systems on T2-weighted images. Additional anatomies could be provided at a later time to enhance the existing library of contours.

PARaDIM - A PHITS-based Monte Carlo tool for internal dosimetry with tetrahedral mesh computational phantoms

  • Carter, L. M.
  • Crawford, T. M.
  • Sato, T.
  • Furuta, T.
  • Choi, C.
  • Kim, C. H.
  • Brown, J. L.
  • Bolch, W. E.
  • Zanzonico, P. B.
  • Lewis, J. S.
J Nucl Med 2019 Journal Article, cited 0 times
Website
Mesh-type and voxel-based computational phantoms comprise the current state-of-the-art for internal dose assessment via Monte Carlo simulations, but excel in different aspects, with mesh-type phantoms offering advantages over their voxel counterparts in terms of their flexibility and realistic representation of detailed patient- or subject-specific anatomy. We have developed PARaDIM, a freeware application for implementing tetrahedral mesh-type phantoms in absorbed dose calculations via the Particle and Heavy Ion Transport code System (PHITS). It considers all medically relevant radionuclides including alpha, beta, gamma, positron, and Auger/conversion electron emitters, and handles calculation of mean dose to individual regions, as well as 3D dose distributions for visualization and analysis in a variety of medical imaging softwares. This work describes the development of PARaDIM, documents the measures taken to test and validate its performance, and presents examples to illustrate its uses. Methods: Human, small animal, and cell-level dose calculations were performed with PARaDIM and the results compared with those of widely accepted dosimetry programs and literature data. Several tetrahedral phantoms were developed or adapted using computer-aided modeling techniques for these comparisons. Results: For human dose calculations, agreement of PARaDIM with OLINDA 2.0 was good - within 10-20% for most organs - despite geometric differences among the phantoms tested. Agreement with MIRDcell for cell-level S-value calculations was within 5% in most cases. Conclusion: PARaDIM extends the use of Monte Carlo dose calculations to the broader community in nuclear medicine by providing a user-friendly graphical user interface for calculation setup and execution. PARaDIM leverages the enhanced anatomical realism provided by advanced computational reference phantoms or bespoke image-derived phantoms to enable improved assessments of radiation doses in a variety of radiopharmaceutical use cases, research, and preclinical development.

MRI volume changes of axillary lymph nodes as predictor of pathological complete responses to neoadjuvant chemotherapy in breast cancer

  • Cattell, Renee F.
  • Kang, James J.
  • Ren, Thomas
  • Huang, Pauline B.
  • Muttreja, Ashima
  • Dacosta, Sarah
  • Li, Haifang
  • Baer, Lea
  • Clouston, Sean
  • Palermo, Roxanne
  • Fisher, Paul
  • Bernstein, Cliff
  • Cohen, Jules A.
  • Duong, Tim Q.
Clinical Breast Cancer 2019 Journal Article, cited 0 times
Website
Introduction: Longitudinal monitoring of breast tumor volume over the course of chemotherapy is informative of pathological response. This study aims to determine whether axillary lymph node (aLN) volume by MRI could augment the prediction accuracy of treatment response to neoadjuvant chemotherapy (NAC). Materials and Methods: Level-2a curated data from I-SPY-1 TRIAL (2002-2006) were used. Patients had stage 2 or 3 breast cancer. MRI was acquired pre-, during and post-NAC. A subset with visible aLN on MRI was identified (N=132). Prediction of pathological complete response (PCR) was made using breast tumor volume changes, nodal volume changes, and combined breast tumor and nodal volume changes with sub-stratification with and without large lymph nodes (3 mL or ∼1.79 cm diameter cutoff). Receiver-operator-curve analysis was used to quantify prediction performance. Results: Rate of change of aLN and breast tumor volume were informative of pathological response, with prediction being most informative early in treatment (AUC: 0.63-0.82) compared to later in treatment (AUC: 0.50-0.73). Larger aLN volume was associated with hormone receptor negativity, with the largest nodal volume for triple negative subtypes. Sub-stratification by node size improved predictive performance, with the best predictive model for large nodes having AUC of 0.82. Conclusion: Axillary lymph node MRI offers clinically relevant information and has the potential to predict treatment response to neoadjuvant chemotherapy in breast cancer patients.

Segmentation, tracking, and kinematics of lung parenchyma and lung tumors from 4D CT with application to radiation treatment planning

  • Cha, Jungwon
2018 Thesis, cited 0 times
Website
This thesis is concerned with development of techniques for efficient computerized analysis of 4-D CT data. The goal is to have a highly automated approach to segmentation of the lung boundary and lung nodules inside the lung. The determination of exact lung tumor location over space and time by image segmentation is an essential step to track thoracic malignancies. Accurate image segmentation helps clinical experts examine the anatomy and structure and determine the disease progress. Since 4-D CT provides structural and anatomical information during tidal breathing, we use the same data to also measure mechanical properties related to deformation of the lung tissue including Jacobian and strain at high resolutions and as a function of time. Radiation Treatment of patients with lung cancer can benefit from knowledge of these measures of regional ventilation. Graph-cuts techniques have been popular for image segmentation since they are able to treat highly textured data via robust global optimization, avoiding local minima in graph based optimization. The graph-cuts methods have been used to extract globally optimal boundaries from images by s/t cut, with energy function based on model-specific visual cues, and useful topological constraints. The method makes N-dimensional globally optimal segmentation possible with good computational efficiency. Even though the graph-cuts method can extract objects where there is a clear intensity difference, segmentation of organs or tumors pose a challenge. For organ segmentation, many segmentation methods using a shape prior have been proposed. However, in the case of lung tumors, the shape varies from patient to patient, and with location. In this thesis, we use a shape prior for tumors through a training step and PCA analysis based on the Active Shape Model (ASM). The method has been tested on real patient data from the Brown Cancer Center at the University of Louisville. We performed temporal B-spline deformable registration of the 4-D CT data - this yielded 3-D deformation fields between successive respiratory phases from which measures of regional lung function were determined. During the respiratory cycle, the lung volume changes and five different lobes of the lung (two in the left and three in the right lung) show different deformation yielding different strain and Jacobian maps. In this thesis, we determine the regional lung mechanics in the Lagrangian frame of reference through different respiratory phases, for example, Phase10 to 20, Phase10 to 30, Phase10 to 40, and Phase10 to 50. Single photon emission computed tomography (SPECT) lung imaging using radioactive tracers with SPECT ventilation and SPECT perfusion imaging also provides functional information. As part of an IRB-approved study therefore, we registered the max-inhale CT volume to both VSPECT and QSPECT data sets using the Demon's non-rigid registration algorithm in patient subjects. Subsequently, statistical correlation between CT ventilation images (Jacobian and strain values), with both VSPECT and QSPECT was undertaken. Through statistical analysis with the Spearman's rank correlation coefficient, we found that Jacobian values have the highest correlation with both VSPECT and QSPECT.
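
The Jacobian maps of regional lung deformation mentioned above can be illustrated with a short NumPy sketch: given a displacement field between two respiratory phases, the Jacobian determinant of the deformation x + u(x) is computed voxel-wise (unit voxel spacing and the array layout are assumptions).

    import numpy as np

    def jacobian_determinant(displacement):
        """Voxel-wise Jacobian determinant of the deformation x + u(x).

        displacement: array of shape (3, Z, Y, X) with the u_z, u_y, u_x components.
        Values > 1 indicate local expansion, values < 1 local contraction.
        """
        grads = [np.gradient(displacement[c]) for c in range(3)]  # d u_c / d(z, y, x)
        J = np.zeros(displacement.shape[1:] + (3, 3))
        for c in range(3):
            for d in range(3):
                J[..., c, d] = grads[c][d] + (1.0 if c == d else 0.0)  # identity + grad(u)
        return np.linalg.det(J)

    # Hypothetical smooth displacement field on a small grid (expands along z and y)
    z, y, x = np.meshgrid(np.arange(32), np.arange(32), np.arange(32), indexing="ij")
    u = np.stack([0.05 * z, 0.02 * y, np.zeros_like(x, dtype=float)])
    print("mean Jacobian: %.3f" % jacobian_determinant(u).mean())  # ~1.071 for this field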

Evaluation of data augmentation via synthetic images for improved breast mass detection on mammograms using deep learning

  • Cha, K. H.
  • Petrick, N.
  • Pezeshk, A.
  • Graff, C. G.
  • Sharma, D.
  • Badal, A.
  • Sahiner, B.
J Med Imaging (Bellingham) 2020 Journal Article, cited 1 times
Website
We evaluated whether using synthetic mammograms for training data augmentation may reduce the effects of overfitting and increase the performance of a deep learning algorithm for breast mass detection. Synthetic mammograms were generated using in silico procedural analytic breast and breast mass modeling algorithms followed by simulated x-ray projections of the breast models into mammographic images. In silico breast phantoms containing masses were modeled across the four BI-RADS breast density categories, and the masses were modeled with different sizes, shapes, and margins. A Monte Carlo-based x-ray transport simulation code, MC-GPU, was used to project the three-dimensional phantoms into realistic synthetic mammograms. 2000 mammograms with 2522 masses were generated to augment a real data set during training. From the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set, we used 1111 mammograms (1198 masses) for training, 120 mammograms (120 masses) for validation, and 361 mammograms (378 masses) for testing. We used faster R-CNN for our deep learning network with pretraining from ImageNet using the Resnet-101 architecture. We compared the detection performance when the network was trained using different percentages of the real CBIS-DDSM training set (100%, 50%, and 25%), and when these subsets of the training set were augmented with 250, 500, 1000, and 2000 synthetic mammograms. Free-response receiver operating characteristic (FROC) analysis was performed to compare performance with and without the synthetic mammograms. We generally observed an improved test FROC curve when training with the synthetic images compared to training without them, and the amount of improvement depended on the number of real and synthetic images used in training. Our study shows that enlarging the training data with synthetic samples can increase the performance of deep learning systems.

Automated feature extraction in brain tumor by magnetic resonance imaging using gaussian mixture models

  • Chaddad, Ahmad
Journal of Biomedical Imaging 2015 Journal Article, cited 29 times
Website

GBM heterogeneity characterization by radiomic analysis of phenotype anatomical planes

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
2016 Conference Proceedings, cited 4 times
Website

Radiomic analysis of multi-contrast brain MRI for the prediction of survival in patients with glioblastoma multiforme

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
2016 Conference Proceedings, cited 11 times
Website
Image texture features are effective at characterizing the microstructure of cancerous tissues. This paper proposes predicting the survival times of glioblastoma multiforme (GBM) patients using texture features extracted in multi-contrast brain MRI images. Texture features are derived locally from contrast enhancement, necrosis and edema regions in T1-weighted post-contrast and fluid-attenuated inversion-recovery (FLAIR) MRIs, based on the gray-level co-occurrence matrix representation. A statistical analysis based on the Kaplan-Meier method and log-rank test is used to identify the texture features related with the overall survival of GBM patients. Results are presented on a dataset of 39 GBM patients. For FLAIR images, four features (Energy, Correlation, Variance and Inverse of Variance) from contrast enhancement regions and a feature (Homogeneity) from edema regions were shown to be associated with survival times (p-value < 0.01). Likewise, in T1-weighted images, three features (Energy, Correlation, and Variance) from contrast enhancement regions were found to be useful for predicting the overall survival of GBM patients. These preliminary results show the advantages of texture analysis in predicting the prognosis of GBM patients from multi-contrast brain MRI.
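
Gray-level co-occurrence features of the kind listed above (Energy, Correlation, Homogeneity) can be extracted with scikit-image; a sketch with a hypothetical 2-D region of interest, using the graycomatrix/graycoprops names of recent scikit-image releases (older releases spell them greycomatrix/greycoprops):

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(roi_uint8):
        """Energy, correlation and homogeneity from a symmetric, normalized GLCM (distance 1, four angles)."""
        glcm = graycomatrix(roi_uint8, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)
        return {prop: float(graycoprops(glcm, prop).mean())
                for prop in ("energy", "correlation", "homogeneity")}

    # Hypothetical contrast-enhancement ROI rescaled to 8-bit gray levels
    roi = (np.random.rand(48, 48) * 255).astype(np.uint8)
    print(glcm_features(roi))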

Predicting survival time of lung cancer patients using radiomic analysis

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
  • Abdulkarim, Bassam
Oncotarget 2017 Journal Article, cited 4 times
Website
Objectives: This study investigates the prediction of Non-small cell lung cancer (NSCLC) patient survival outcomes based on radiomic texture and shape features automatically extracted from tumor image data. Materials and Methods: Retrospective analysis involves CT scans of 315 NSCLC patients from The Cancer Imaging Archive (TCIA). A total of 24 image features are computed from labeled tumor volumes of patients within groups defined using NSCLC subtype and TNM staging information. Spearman's rank correlation, Kaplan-Meier estimation and log-rank tests were used to identify features related to long/short NSCLC patient survival groups. Automatic random forest classification was used to predict patient survival group from multivariate feature data. Significance is assessed at P < 0.05 following Holm-Bonferroni correction for multiple comparisons. Results: Significant correlations between radiomic features and survival were observed for four clinical groups: (group, [absolute correlation range]): (large cell carcinoma (LCC) [0.35, 0.43]), (tumor size T2, [0.31, 0.39]), (non lymph node metastasis N0, [0.3, 0.33]), (TNM stage I, [0.39, 0.48]). Significant log-rank relationships between features and survival time were observed for three clinical groups: (group, hazard ratio): (LCC, 3.0), (LCC, 3.9), (T2, 2.5) and (stage I, 2.9). Automatic survival prediction performance (i.e. below/above median) is superior for combined radiomic features with age-TNM in comparison to standard TNM clinical staging information (clinical group, mean area-under-the-ROC-curve (AUC)): (LCC, 75.73%), (N0, 70.33%), (T2, 70.28%) and (TNM-I, 76.17%). Conclusion: Quantitative lung CT imaging features can be used as indicators of survival, in particular for patients with large-cell-carcinoma (LCC), primary-tumor-sizes (T2) and no lymph-node-metastasis (N0).
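
A minimal sketch of the two analysis steps described above (Spearman correlation of a radiomic feature with survival time, and random-forest classification of below/above-median survival), using SciPy and scikit-learn on hypothetical feature data:

    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    features = rng.normal(size=(315, 24))             # 24 radiomic features per patient (hypothetical)
    survival_days = rng.gamma(2.0, 300.0, size=315)   # hypothetical survival times

    # Univariate association of each feature with survival time (first three shown)
    for j in range(3):
        rho, p = spearmanr(features[:, j], survival_days)
        print("feature %d: rho=%+.2f, p=%.3f" % (j, rho, p))

    # Below/above-median survival prediction with a random forest
    labels = (survival_days > np.median(survival_days)).astype(int)
    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    auc = cross_val_score(forest, features, labels, cv=5, scoring="roc_auc")
    print("cross-validated AUC: %.2f +/- %.2f" % (auc.mean(), auc.std()))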

Multimodal Radiomic Features for the Predicting Gleason Score of Prostate Cancer

  • Chaddad, Ahmad
  • Kucharczyk, Michael
  • Niazi, Tamim
Cancers 2018 Journal Article, cited 1 times
Website

Prediction of survival with multi-scale radiomic analysis in glioblastoma patients

  • Chaddad, Ahmad
  • Sabri, Siham
  • Niazi, Tamim
  • Abdulkarim, Bassam
Medical & biological engineering & computing 2018 Journal Article, cited 1 times
Website
We propose multiscale texture features based on a Laplacian-of-Gaussian (LoG) filter to predict progression-free survival (PFS) and overall survival (OS) in patients newly diagnosed with glioblastoma (GBM). Experiments use features extracted from 40 GBM patients with T1-weighted imaging (T1-WI) and fluid-attenuated inversion recovery (FLAIR) images that were segmented manually into areas of active tumor, necrosis, and edema. Multiscale texture features were extracted locally from each of these areas of interest using a LoG filter, and the relation of features to OS and PFS was investigated using univariate (i.e., Spearman's rank correlation coefficient, log-rank test and Kaplan-Meier estimator) and multivariate analyses (i.e., Random Forest classifier). Three and seven features were statistically correlated with PFS and OS, respectively, with absolute correlation values between 0.32 and 0.36 and p < 0.05. Three features derived from active tumor regions only were associated with OS (p < 0.05) with hazard ratios (HR) of 2.9, 3, and 3.24, respectively. Combined features showed AUC values of 85.37 and 85.54% for predicting the PFS and OS of GBM patients, respectively, using the random forest (RF) classifier. We presented multiscale texture features to characterize the GBM regions and predict PFS and OS. The achievable performance suggests that this technique can be developed into a GBM MR analysis system suitable for clinical use after a thorough validation involving more patients.
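
Multiscale Laplacian-of-Gaussian responses of the kind used for these texture features can be generated directly with SciPy; a sketch with a hypothetical image and assumed sigma values:

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def multiscale_log(image, sigmas=(1.0, 2.0, 4.0)):
        """Stack of Laplacian-of-Gaussian responses, one per scale (sigma in voxels)."""
        return np.stack([gaussian_laplace(image.astype(float), sigma=s) for s in sigmas])

    # Hypothetical FLAIR slice; first-order statistics of each response become texture features
    flair_slice = np.random.rand(128, 128)
    for sigma, response in zip((1.0, 2.0, 4.0), multiscale_log(flair_slice)):
        print("sigma=%.1f: mean=%+.4f, std=%.4f" % (sigma, response.mean(), response.std()))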

High-Throughput Quantification of Phenotype Heterogeneity Using Statistical Features

  • Chaddad, Ahmad
  • Tanougast, Camel
Advances in Bioinformatics 2015 Journal Article, cited 5 times
Website
Statistical features are widely used in radiology for tumor heterogeneity assessment using the magnetic resonance (MR) imaging technique. In this paper, feature selection based on a decision tree is examined to determine the relevant subset of glioblastoma (GBM) phenotypes in the statistical domain. To discriminate between active tumor (vAT) and edema/invasion (vE) phenotypes, we selected the significant features using analysis of variance (ANOVA) with p value < 0.01. Then, we implemented the decision tree to define the optimal feature subset for the phenotype classifier. Naive Bayes (NB), support vector machine (SVM), and decision tree (DT) classifiers were considered to evaluate the performance of the feature-based scheme in terms of its capability to discriminate vAT from vE. All nine features were statistically significant for classifying vAT from vE with p value < 0.01. Feature selection based on the decision tree showed the best performance in the comparative study using the full feature set. The selected features, Kurtosis and Skewness, achieved the highest classifier accuracy range of 58.33-75.00% and an AUC range of 73.88-92.50%. This study demonstrated the ability of statistical features to provide a quantitative, individualized measurement for glioblastoma patients and to assess phenotype progression.

Quantitative evaluation of robust skull stripping and tumor detection applied to axial MR images

  • Chaddad, Ahmad
  • Tanougast, Camel
Brain Informatics 2016 Journal Article, cited 28 times
Website

Extracted magnetic resonance texture features discriminate between phenotypes and are associated with overall survival in glioblastoma multiforme patients

  • Chaddad, Ahmad
  • Tanougast, Camel
Medical & biological engineering & computing 2016 Journal Article, cited 16 times
Website
GBM is a markedly heterogeneous brain tumor consisting of three main volumetric phenotypes identifiable on magnetic resonance imaging: necrosis (vN), active tumor (vAT), and edema/invasion (vE). The goal of this study is to identify the three glioblastoma multiforme (GBM) phenotypes using a texture-based gray-level co-occurrence matrix (GLCM) approach and determine whether the texture features of phenotypes are related to patient survival. MR imaging data in 40 GBM patients were analyzed. Phenotypes vN, vAT, and vE were segmented in a preprocessing step using 3D Slicer for rigid registration by T1-weighted imaging and corresponding fluid attenuation inversion recovery images. The GBM phenotypes were segmented using 3D Slicer tools. Texture features were extracted from GLCM of GBM phenotypes. Thereafter, Kruskal-Wallis test was employed to select the significant features. Robust predictive GBM features were identified and underwent numerous classifier analyses to distinguish phenotypes. Kaplan-Meier analysis was also performed to determine the relationship, if any, between phenotype texture features and survival rate. The simulation results showed that the 22 texture features were significant with p value < 0.05. GBM phenotype discrimination based on texture features showed the best accuracy, sensitivity, and specificity of 79.31, 91.67, and 98.75 %, respectively. Three texture features derived from active tumor parts: difference entropy, information measure of correlation, and inverse difference were statistically significant in the prediction of survival, with log-rank p values of 0.001, 0.001, and 0.008, respectively. Among 22 features examined, three texture features have the ability to predict overall survival for GBM patients demonstrating the utility of GLCM analyses in both the diagnosis and prognosis of this patient population.

Automated lung field segmentation in CT images using mean shift clustering and geometrical features

  • Chama, Chanukya Krishna
  • Mukhopadhyay, Sudipta
  • Biswas, Prabir Kumar
  • Dhara, Ashis Kumar
  • Madaiah, Mahendra Kasuvinahally
  • Khandelwal, Niranjan
2013 Conference Proceedings, cited 8 times
Website

Using Docker to support reproducible research

  • Chamberlain, Ryan
  • Invenshure, L
  • Schommer, Jennifer
2014 Report, cited 30 times
Website
Reproducible research is a growing movement among scientists, but the tools for creating sustainable software to support the computational side of research are still in their infancy and are typically only being used by scientists with expertise in computer programming and system administration. Docker is a new platform developed for the DevOps community that enables the easy creation and management of consistent computational environments. This article describes how we have applied it to computational science and suggests that it could be a powerful tool for reproducible research.

Primer for Image Informatics in Personalized Medicine

  • Chang, Young Hwan
  • Foley, Patrick
  • Azimi, Vahid
  • Borkar, Rohan
  • Lefman, Jonathan
Procedia Engineering 2016 Journal Article, cited 0 times
Website

“Big data” and “open data”: What kind of access should researchers enjoy?

  • Chatellier, Gilles
  • Varlet, Vincent
  • Blachier-Poisson, Corinne
Thérapie 2016 Journal Article, cited 0 times

MRI prostate cancer radiomics: Assessment of effectiveness and perspectives

  • Chatzoudis, Pavlos
2018 Thesis, cited 0 times
Website

A Fast Semi-Automatic Segmentation Tool for Processing Brain Tumor Images

  • Chen, Andrew X
  • Rabadán, Raúl
2017 Book Section, cited 0 times
Website

Low-dose CT via convolutional neural network

  • Chen, Hu
  • Zhang, Yi
  • Zhang, Weihua
  • Liao, Peixi
  • Li, Ke
  • Zhou, Jiliu
  • Wang, Ge
Biomedical Optics Express 2017 Journal Article, cited 342 times
Website
In order to reduce the potential radiation risk, low-dose CT has attracted an increasing attention. However, simply lowering the radiation dose will significantly degrade the image quality. In this paper, we propose a new noise reduction method for low-dose CT via deep learning without accessing original projection data. A deep convolutional neural network is here used to map low-dose CT images towards its corresponding normal-dose counterparts in a patch-by-patch fashion. Qualitative results demonstrate a great potential of the proposed method on artifact reduction and structure preservation. In terms of the quantitative metrics, the proposed method has showed a substantial improvement on PSNR, RMSE and SSIM than the competing state-of-art methods. Furthermore, the speed of our method is one order of magnitude faster than the iterative reconstruction and patch-based image denoising methods.
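
The image-quality metrics reported above (PSNR and RMSE) are straightforward to reproduce; a minimal sketch comparing a hypothetical denoised image with its normal-dose reference, assuming intensities scaled to [0, 1]:

    import numpy as np

    def rmse(reference, test):
        return float(np.sqrt(np.mean((reference.astype(float) - test.astype(float)) ** 2)))

    def psnr(reference, test, data_range=1.0):
        """Peak signal-to-noise ratio in dB for images with the given intensity range."""
        return float(20.0 * np.log10(data_range / rmse(reference, test)))

    normal_dose = np.clip(np.random.rand(256, 256), 0.0, 1.0)                     # hypothetical reference
    denoised = np.clip(normal_dose + 0.01 * np.random.randn(256, 256), 0.0, 1.0)  # hypothetical output
    print("RMSE = %.4f, PSNR = %.1f dB" % (rmse(normal_dose, denoised), psnr(normal_dose, denoised)))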

Revealing Tumor Habitats from Texture Heterogeneity Analysis for Classification of Lung Cancer Malignancy and Aggressiveness

  • Cherezov, Dmitry
  • Goldgof, Dmitry
  • Hall, Lawrence
  • Gillies, Robert
  • Schabath, Matthew
  • Müller, Henning
  • Depeursinge, Adrien
Scientific Reports 2019 Journal Article, cited 0 times
Website
We propose an approach for characterizing structural heterogeneity of lung cancer nodules using Computed Tomography Texture Analysis (CTTA). Measures of heterogeneity were used to test the hypothesis that heterogeneity can be used as predictor of nodule malignancy and patient survival. To do this, we use the National Lung Screening Trial (NLST) dataset to determine if heterogeneity can represent differences between nodules in lung cancer and nodules in non-lung cancer patients. 253 participants are in the training set and 207 participants in the test set. To discriminate cancerous from non-cancerous nodules at the time of diagnosis, a combination of heterogeneity and radiomic features were evaluated to produce the best area under receiver operating characteristic curve (AUROC) of 0.85 and accuracy 81.64%. Second, we tested the hypothesis that heterogeneity can predict patient survival. We analyzed 40 patients diagnosed with lung adenocarcinoma (20 short-term and 20 long-term survival patients) using a leave-one-out cross validation approach for performance evaluation. A combination of heterogeneity features and radiomic features produce an AUROC of 0.9 and an accuracy of 85% to discriminate long- and short-term survivors.

Computed Tomography (CT) Image Quality Enhancement via a Uniform Framework Integrating Noise Estimation and Super-Resolution Networks

  • Chi, Jianning
  • Zhang, Yifei
  • Yu, Xiaosheng
  • Wang, Ying
  • Wu, Chengdong
Sensors (Basel) 2019 Journal Article, cited 2 times
Website
Computed tomography (CT) imaging technology has been widely used to assist medical diagnosis in recent years. However, noise during the process of imaging, and data compression during the process of storage and transmission always interrupt the image quality, resulting in unreliable performance of the post-processing steps in the computer assisted diagnosis system (CADs), such as medical image segmentation, feature extraction, and medical image classification. Since the degradation of medical images typically appears as noise and low-resolution blurring, in this paper, we propose a uniform deep convolutional neural network (DCNN) framework to handle the de-noising and super-resolution of the CT image at the same time. The framework consists of two steps: Firstly, a dense-inception network integrating an inception structure and dense skip connection is proposed to estimate the noise level. The inception structure is used to extract the noise and blurring features with respect to multiple receptive fields, while the dense skip connection can reuse those extracted features and transfer them across the network. Secondly, a modified residual-dense network combined with joint loss is proposed to reconstruct the high-resolution image with low noise. The inception block is applied on each skip connection of the dense-residual network so that the structure features of the image are transferred through the network more than the noise and blurring features. Moreover, both the perceptual loss and the mean square error (MSE) loss are used to restrain the network, leading to better performance in the reconstruction of image edges and details. Our proposed network integrates the degradation estimation, noise removal, and image super-resolution in one uniform framework to enhance medical image quality. We apply our method to the Cancer Imaging Archive (TCIA) public dataset to evaluate its ability in medical image quality enhancement. The experimental results demonstrate that the proposed method outperforms the state-of-the-art methods on de-noising and super-resolution by providing higher peak signal to noise ratio (PSNR) and structure similarity index (SSIM) values.

SVM-PUK Kernel Based MRI-brain Tumor Identification Using Texture and Gabor Wavelets

  • Chinnam, Siva
  • Sistla, Venkatramaphanikumar
  • Kolli, Venkata
Traitement du Signal 2019 Journal Article, cited 0 times
Website

Imaging phenotypes of breast cancer heterogeneity in pre-operative breast Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) scans predict 10-year recurrence

  • Chitalia, Rhea
  • Rowland, Jennifer
  • McDonald, Elizabeth S
  • Pantalone, Lauren
  • Cohen, Eric A
  • Gastounioti, Aimilia
  • Feldman, Michael
  • Schnall, Mitchell
  • Conant, Emily
  • Kontos, Despina
Clinical Cancer Research 2019 Journal Article, cited 0 times
Website

Classification of the glioma grading using radiomics analysis

  • Cho, Hwan-ho
  • Lee, Seung-hak
  • Kim, Jonghoon
  • Park, Hyunjin
PeerJ 2018 Journal Article, cited 0 times
Website

Incremental Prognostic Value of ADC Histogram Analysis over MGMT Promoter Methylation Status in Patients with Glioblastoma

  • Choi, Yoon Seong
  • Ahn, Sung Soo
  • Kim, Dong Wook
  • Chang, Jong Hee
  • Kang, Seok-Gu
  • Kim, Eui Hyun
  • Kim, Se Hoon
  • Rim, Tyler Hyungtaek
  • Lee, Seung-Koo
Radiology 2016 Journal Article, cited 18 times
Website
Purpose: To investigate the incremental prognostic value of apparent diffusion coefficient (ADC) histogram analysis over O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status in patients with glioblastoma and the correlation between ADC parameters and MGMT status. Materials and Methods: This retrospective study was approved by the institutional review board, and informed consent was waived. A total of 112 patients with glioblastoma were divided into training (74 patients) and test (38 patients) sets. Overall survival (OS) and progression-free survival (PFS) were analyzed with ADC parameters, MGMT status, and other clinical factors. Multivariate Cox regression models with and without ADC parameters were constructed. Model performance was assessed with c index and receiver operating characteristic curve analyses for 12- and 16-month OS and 12-month PFS in the training set and validated in the test set. ADC parameters were compared according to MGMT status for the entire cohort. Results: By using ADC parameters, the c indices and diagnostic accuracies for 12- and 16-month OS and 12-month PFS in the models showed significant improvement, with the exception of c indices in the models for PFS (P < .05 for all) in the training set. In the test set, the diagnostic accuracy was improved by using ADC parameters and was significant, with the 25th and 50th percentiles of ADC for 16-month OS (P = .040 and P = .047) and the 25th percentile of ADC for 12-month PFS (P = .026). No significant correlation was found between ADC parameters and MGMT status. Conclusion: ADC histogram analysis had incremental prognostic value over MGMT promoter methylation status in patients with glioblastoma.
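
The ADC histogram parameters used above (e.g., the 25th and 50th percentiles over the tumour volume) reduce to simple percentile computations once the tumour is segmented; a sketch with a hypothetical ADC map and mask (the subsequent Cox modelling would use a dedicated survival-analysis package):

    import numpy as np

    def adc_histogram_parameters(adc_map, tumor_mask, percentiles=(25, 50, 75)):
        """Selected percentiles of the ADC values inside the segmented tumour."""
        values = adc_map[tumor_mask > 0]
        return dict(zip(percentiles, np.percentile(values, percentiles)))

    # Hypothetical ADC map (x10^-6 mm^2/s) and binary tumour mask
    adc_map = np.random.normal(1100.0, 250.0, size=(20, 128, 128))
    tumor_mask = np.zeros(adc_map.shape, dtype=np.uint8)
    tumor_mask[8:14, 50:80, 40:70] = 1
    print(adc_histogram_parameters(adc_map, tumor_mask))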

Application of Artificial Neural Networks for Prognostic Modeling in Lung Cancer after Combining Radiomic and Clinical Features

  • Chufal, Kundan S.
  • Ahmad, Irfan
  • Pahuja, Anjali K.
  • Miller, Alexis A.
  • Singh, Rajpal
  • Chowdhary, Rahul L.
Asian Journal of Oncology 2019 Journal Article, cited 0 times
Website
Objective: This study aimed to investigate machine learning (ML) and artificial neural networks (ANNs) in the prognostic modeling of lung cancer, utilizing high-dimensional data. Materials and Methods: A computed tomography (CT) dataset of inoperable non-small cell lung carcinoma (NSCLC) patients with embedded tumor segmentation and survival status, comprising 422 patients, was selected. Radiomic data extraction was performed in the Computational Environment for Radiotherapy Research (CERR). The survival probability was first determined based on clinical features only and then using unsupervised ML methods. Supervised ANN modeling was performed by direct and hybrid modeling, which were subsequently compared. Statistical significance was set at p < 0.05. Results: Survival analyses based on clinical features alone were not significant, except for gender. ML clustering performed on unselected radiomic and clinical data demonstrated a significant difference in survival (two-step cluster, median overall survival [mOS]: 30.3 vs. 17.2 m; p = 0.03; K-means cluster, mOS: 21.1 vs. 7.3 m; p < 0.001). Direct ANN modeling yielded a better overall model accuracy utilizing multilayer perceptron (MLP) than radial basis function (RBF; 79.2 vs. 61.4%, respectively). Hybrid modeling with MLP (after feature selection with ML) resulted in an overall model accuracy of 80%. There was no difference in model accuracy after direct and hybrid modeling (p = 0.164). Conclusion: Our preliminary study supports the application of ANNs in predicting outcomes based on radiomic and clinical data.
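
Both modeling stages described above (unsupervised clustering of radiomic-plus-clinical data and a supervised multilayer perceptron) have direct scikit-learn counterparts; the sketch below is illustrative only, with hypothetical feature and outcome data.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = StandardScaler().fit_transform(rng.normal(size=(422, 40)))  # hypothetical radiomic + clinical features
    outcome = rng.integers(0, 2, size=422)                          # hypothetical survival-group labels

    # Unsupervised: two-cluster K-means; clusters can then be compared with Kaplan-Meier curves
    clusters = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)
    print("cluster sizes:", np.bincount(clusters))

    # Supervised: multilayer perceptron predicting the outcome label
    mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1)
    print("MLP accuracy: %.2f" % cross_val_score(mlp, X, outcome, cv=5).mean())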

Results of initial low-dose computed tomographic screening for lung cancer

  • Church, T. R.
  • Black, W. C.
  • Aberle, D. R.
  • Berg, C. D.
  • Clingan, K. L.
  • Duan, F.
  • Fagerstrom, R. M.
  • Gareen, I. F.
  • Gierada, D. S.
  • Jones, G. C.
  • Mahon, I.
  • Marcus, P. M.
  • Sicks, J. D.
  • Jain, A.
  • Baum, S.
The New England Journal of Medicine 2013 Journal Article, cited 529 times
Website
BACKGROUND: Lung cancer is the largest contributor to mortality from cancer. The National Lung Screening Trial (NLST) showed that screening with low-dose helical computed tomography (CT) rather than with chest radiography reduced mortality from lung cancer. We describe the screening, diagnosis, and limited treatment results from the initial round of screening in the NLST to inform and improve lung-cancer-screening programs. METHODS: At 33 U.S. centers, from August 2002 through April 2004, we enrolled asymptomatic participants, 55 to 74 years of age, with a history of at least 30 pack-years of smoking. The participants were randomly assigned to undergo annual screening, with the use of either low-dose CT or chest radiography, for 3 years. Nodules or other suspicious findings were classified as positive results. This article reports findings from the initial screening examination. RESULTS: A total of 53,439 eligible participants were randomly assigned to a study group (26,715 to low-dose CT and 26,724 to chest radiography); 26,309 participants (98.5%) and 26,035 (97.4%), respectively, underwent screening. A total of 7191 participants (27.3%) in the low-dose CT group and 2387 (9.2%) in the radiography group had a positive screening result; in the respective groups, 6369 participants (90.4%) and 2176 (92.7%) had at least one follow-up diagnostic procedure, including imaging in 5717 (81.1%) and 2010 (85.6%) and surgery in 297 (4.2%) and 121 (5.2%). Lung cancer was diagnosed in 292 participants (1.1%) in the low-dose CT group versus 190 (0.7%) in the radiography group (stage 1 in 158 vs. 70 participants and stage IIB to IV in 120 vs. 112). Sensitivity and specificity were 93.8% and 73.4% for low-dose CT and 73.5% and 91.3% for chest radiography, respectively. CONCLUSIONS: The NLST initial screening results are consistent with the existing literature on screening by means of low-dose CT and chest radiography, suggesting that a reduction in mortality from lung cancer is achievable at U.S. screening centers that have staff experienced in chest CT. (Funded by the National Cancer Institute; NLST ClinicalTrials.gov number, NCT00047385.).

Automatic detection of spiculation of pulmonary nodules in computed tomography images

  • Ciompi, F
  • Jacobs, C
  • Scholten, ET
  • van Riel, SJ
  • Wille, MMW
  • Prokop, M
  • van Ginneken, B
2015 Conference Proceedings, cited 5 times
Website

Reproducing 2D breast mammography images with 3D printed phantoms

  • Clark, Matthew
  • Ghammraoui, Bahaa
  • Badal, Andreu
2016 Conference Proceedings, cited 2 times
Website

The Quantitative Imaging Network: NCI's Historical Perspective and Planned Goals

  • Clarke, Laurence P.
  • Nordstrom, Robert J.
  • Zhang, Huiming
  • Tandon, Pushpa
  • Zhang, Yantian
  • Redmond, George
  • Farahani, Keyvan
  • Kelloff, Gary
  • Henderson, Lori
  • Shankar, Lalitha
  • Deye, James
  • Capala, Jacek
  • Jacobs, Paula
Translational oncology 2014 Journal Article, cited 0 times
Website

Using Machine Learning Applied to Radiomic Image Features for Segmenting Tumour Structures

  • Clifton, Henry
  • Vial, Alanna
  • Miller, Andrew
  • Ritz, Christian
  • Field, Matthew
  • Holloway, Lois
  • Ros, Montserrat
  • Carolan, Martin
  • Stirling, David
2019 Conference Paper, cited 0 times
Website
Lung cancer (LC) was the predicted leading cause of Australian cancer fatalities in 2018 (around 9,200 deaths). Non-Small Cell Lung Cancer (NSCLC) tumours with larger amounts of heterogeneity have been linked to a worse outcome. Medical imaging is widely used in oncology and non-invasively collects data about the whole tumour. The field of radiomics uses these medical images to extract quantitative image features and promises further understanding of the disease at the time of diagnosis, during treatment and in follow up. It is well known that manual and semi-automatic tumour segmentation methods are subject to inter-observer variability which reduces confidence in the treatment region and extent of disease. This leads to tumour under- and over-estimation which can impact on treatment outcome and treatment-induced morbidity. This research aims to use radiomic features centred at each pixel to segment the location of the lung tumour on Computed Tomography (CT) scans. To achieve this objective, a Decision Tree (DT) model was trained using sampled CT data from eight patients. The data consisted of 25 pixel-based texture features calculated from four Gray Level Matrices (GLMs) describing the region around each pixel. The model was assessed using an unseen patient through both a confusion matrix and interpretation of the segment. The findings showed that the model accurately (AUROC = 83.9%) predicts tumour location within the test data, concluding that pixel-based textural features likely contribute to segmenting the lung tumour. The prediction displayed a strong representation of the manually segmented Region of Interest (ROI), which is considered the ground truth for the purpose of this research.
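
A pixel-wise decision-tree segmenter of this kind can be prototyped with scikit-learn; the sketch below (hypothetical per-pixel texture-feature matrix and ROI labels) trains a tree and reports the confusion matrix and AUROC on held-out pixels.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix, roc_auc_score

    rng = np.random.default_rng(2)
    pixel_features = rng.normal(size=(20000, 25))  # 25 texture features per sampled pixel (hypothetical)
    in_tumour = (pixel_features[:, 0] + 0.5 * rng.normal(size=20000)) > 1.0  # hypothetical labels

    X_train, X_test, y_train, y_test = train_test_split(
        pixel_features, in_tumour, test_size=0.3, random_state=2)
    tree = DecisionTreeClassifier(max_depth=8, random_state=2).fit(X_train, y_train)

    scores = tree.predict_proba(X_test)[:, 1]
    print(confusion_matrix(y_test, scores > 0.5))
    print("AUROC: %.3f" % roc_auc_score(y_test, scores))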

Automated Medical Image Modality Recognition by Fusion of Visual and Text Information

  • Codella, Noel
  • Connell, Jonathan
  • Pankanti, Sharath
  • Merler, Michele
  • Smith, John R
2014 Book Section, cited 10 times
Website

Semantic Model Vector for ImageCLEF2013

  • Codella, Noel
  • Merler, Michele
2014 Report, cited 0 times
Website

The exceptional responders initiative: feasibility of a National Cancer Institute pilot study

  • Conley, Barbara A
  • Staudt, Lou
  • Takebe, Naoko
  • Wheeler, David A
  • Wang, Linghua
  • Cardenas, Maria F
  • Korchina, Viktoriya
  • Zenklusen, Jean Claude
  • McShane, Lisa M
  • Tricoli, James V
JNCI: Journal of the National Cancer Institute 2021 Journal Article, cited 5 times
Website

Extended Modality Propagation: Image Synthesis of Pathological Cases

  • Cordier, N.
  • Delingette, H.
  • Le, M.
  • Ayache, N.
IEEE Transactions on Medical Imaging 2016 Journal Article, cited 18 times
Website

Bayesian Kernel Models for Statistical Genetics and Cancer Genomics

  • Crawford, Lorin
2017 Thesis, cited 0 times

Volume of high-risk intratumoral subregions at multi-parametric MR imaging predicts overall survival and complements molecular analysis of glioblastoma

  • Cui, Yi
  • Ren, Shangjie
  • Tha, Khin Khin
  • Wu, Jia
  • Shirato, Hiroki
  • Li, Ruijiang
European Radiology 2017 Journal Article, cited 10 times
Website

Tumor Transcriptome Reveals High Expression of IL-8 in Non-Small Cell Lung Cancer Patients with Low Pectoralis Muscle Area and Reduced Survival

  • Cury, Sarah Santiloni
  • de Moraes, Diogo
  • Freire, Paula Paccielli
  • de Oliveira, Grasieli
  • Marques, Douglas Venancio Pereira
  • Fernandez, Geysson Javier
  • Dal-Pai-Silva, Maeli
  • Hasimoto, Erica Nishida
  • Dos Reis, Patricia Pintor
  • Rogatto, Silvia Regina
  • Carvalho, Robson Francisco
Cancers (Basel) 2019 Journal Article, cited 1 times
Website
Cachexia is a syndrome characterized by an ongoing loss of skeletal muscle mass associated with poor patient prognosis in non-small cell lung cancer (NSCLC). However, prognostic cachexia biomarkers in NSCLC are unknown. Here, we analyzed computed tomography (CT) images and tumor transcriptome data to identify potentially secreted cachexia biomarkers (PSCB) in NSCLC patients with low-muscularity. We integrated radiomics features (pectoralis muscle, sternum, and tenth thoracic (T10) vertebra) from CT of 89 NSCLC patients, which allowed us to identify an index for screening muscularity. Next, a tumor transcriptomic-based secretome analysis from these patients (discovery set) was evaluated to identify potential cachexia biomarkers in patients with low-muscularity. The prognostic value of these biomarkers for predicting recurrence and survival outcome was confirmed using expression data from eight lung cancer datasets (validation set). Finally, C2C12 myoblasts differentiated into myotubes were used to evaluate the ability of the selected biomarker, interleukin (IL)-8, in inducing muscle cell atrophy. We identified 75 over-expressed transcripts in patients with low-muscularity, which included IL-6, CSF3, and IL-8. Also, we identified NCAM1, CNTN1, SCG2, CADM1, IL-8, NPTX1, and APOD as PSCB in the tumor secretome. These PSCB were capable of distinguishing worse and better prognosis (recurrence and survival) in NSCLC patients. IL-8 was confirmed as a predictor of worse prognosis in all validation sets. In vitro assays revealed that IL-8 promoted C2C12 myotube atrophy. Tumors from low-muscularity patients presented a set of upregulated genes encoding for secreted proteins, including pro-inflammatory cytokines that predict worse overall survival in NSCLC. Among these upregulated genes, IL-8 expression in NSCLC tissues was associated with worse prognosis, and the recombinant IL-8 was capable of triggering atrophy in C2C12 myotubes.

Immunotherapy in Metastatic Colorectal Cancer: Could the Latest Developments Hold the Key to Improving Patient Survival?

  • Damilakis, E.
  • Mavroudis, D.
  • Sfakianaki, M.
  • Souglakos, J.
Cancers (Basel) 2020 Journal Article, cited 0 times
Website
Immunotherapy has considerably increased the number of anticancer agents in many tumor types including metastatic colorectal cancer (mCRC). Anti-PD-1 (programmed death 1) and cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) immune checkpoint inhibitors (ICI) have been shown to benefit the mCRC patients with mismatch repair deficiency (dMMR) or high microsatellite instability (MSI-H). However, ICI is not effective in mismatch repair proficient (pMMR) colorectal tumors, which constitute a large population of patients. Several clinical trials evaluating the efficacy of immunotherapy combined with chemotherapy, radiation therapy, or other agents are currently ongoing to extend the benefit of immunotherapy to pMMR mCRC cases. In dMMR patients, MSI testing through immunohistochemistry and/or polymerase chain reaction can be used to identify patients that will benefit from immunotherapy. Next-generation sequencing has the ability to detect MSI-H using a low amount of nucleic acids and its application in clinical practice is currently being explored. Preliminary data suggest that radiomics is capable of discriminating MSI from microsatellite stable mCRC and may play a role as an imaging biomarker in the future. Tumor mutational burden, neoantigen burden, tumor-infiltrating lymphocytes, immunoscore, and gastrointestinal microbiome are promising biomarkers that require further investigation and validation.

AI-based Prognostic Imaging Biomarkers for Precision Neurooncology: the ReSPOND Consortium

  • Davatzikos, C.
  • Barnholtz-Sloan, J. S.
  • Bakas, S.
  • Colen, R.
  • Mahajan, A.
  • Quintero, C. B.
  • Font, J. C.
  • Puig, J.
  • Jain, R.
  • Sloan, A. E.
  • Badve, C.
  • Marcus, D. S.
  • Choi, Y. S.
  • Lee, S. K.
  • Chang, J. H.
  • Poisson, L. M.
  • Griffith, B.
  • Dicker, A. P.
  • Flanders, A. E.
  • Booth, T. C.
  • Rathore, S.
  • Akbari, H.
  • Sako, C.
  • Bilello, M.
  • Shukla, G.
  • Kazerooni, A. F.
  • Brem, S.
  • Lustig, R.
  • Mohan, S.
  • Bagley, S.
  • Nasrallah, M.
  • O'Rourke, D. M.
Neuro-oncology 2020 Journal Article, cited 0 times
Website

Local mesh ternary patterns: a new descriptor for MRI and CT biomedical image indexing and retrieval

  • Deep, G
  • Kaur, L
  • Gupta, S
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2016 Journal Article, cited 3 times
Website

Predicting response before initiation of neoadjuvant chemotherapy in breast cancer using new methods for the analysis of dynamic contrast enhanced MRI (DCE MRI) data

  • DeGrandchamp, Joseph B
  • Whisenant, Jennifer G
  • Arlinghaus, Lori R
  • Abramson, VG
  • Yankeelov, Thomas E
  • Cárdenas-Rodríguez, Julio
2016 Conference Proceedings, cited 5 times
Website

Deep learning in head & neck cancer outcome prediction

  • Diamant, André
  • Chatterjee, Avishek
  • Vallières, Martin
  • Shenouda, George
  • Seuntjens, Jan
Scientific Reports 2019 Journal Article, cited 0 times
Website
Traditional radiomics involves the extraction of quantitative texture features from medical images in an attempt to determine correlations with clinical endpoints. We hypothesize that convolutional neural networks (CNNs) could enhance the performance of traditional radiomics, by detecting image patterns that may not be covered by a traditional radiomic framework. We test this hypothesis by training a CNN to predict treatment outcomes of patients with head and neck squamous cell carcinoma, based solely on their pre-treatment computed tomography image. The training (194 patients) and validation sets (106 patients), which are mutually independent and include 4 institutions, come from The Cancer Imaging Archive. When compared to a traditional radiomic framework applied to the same patient cohort, our method results in an AUC of 0.88 in predicting distant metastasis. When combining our model with the previous model, the AUC improves to 0.92. Our framework yields models that are shown to explicitly recognize traditional radiomic features, be directly visualized and perform accurate outcome prediction.

Theoretical tumor edge detection technique using multiple Bragg peak decomposition in carbon ion therapy

  • Dias, Marta Filipa Ferraz
  • Collins-Fekete, Charles-Antoine
  • Baroni, Guido
  • Riboldi, Marco
  • Seco, Joao
Biomedical Physics & Engineering Express 2019 Journal Article, cited 0 times
Website

Learning Multi-Class Segmentations From Single-Class Datasets

  • Dmitriev, Konstantin
  • Kaufman, Arie
2019 Conference Paper, cited 1 times
Website
Multi-class segmentation has recently achieved significant performance in natural images and videos. This achievement is due primarily to the public availability of large multi-class datasets. However, there are certain domains, such as biomedical images, where obtaining sufficient multi-class annotations is a laborious and often impossible task and only single-class datasets are available. While existing segmentation research in such domains use private multi-class datasets or focus on single-class segmentations, we propose a unified highly efficient framework for robust simultaneous learning of multi-class segmentations by combining single-class datasets and utilizing a novel way of conditioning a convolutional network for the purpose of segmentation. We demonstrate various ways of incorporating the conditional information, perform an extensive evaluation, and show compelling multi-class segmentation performance on biomedical images, which outperforms current state-of-the-art solutions (up to 2.7%). Unlike current solutions, which are meticulously tailored for particular single-class datasets, we utilize datasets from a variety of sources. Furthermore, we show the applicability of our method also to natural images and evaluate it on the Cityscapes dataset. We further discuss other possible applications of our proposed framework.

Long short-term memory networks predict breast cancer recurrence in analysis of consecutive MRIs acquired during the course of neoadjuvant chemotherapy

  • Drukker, Karen
  • Edwards, Alexandra
  • Papaioannou, John
  • Giger, Maryellen
  • Hahn, Horst K.
  • Mazurowski, Maciej A.
2020 Conference Paper, cited 0 times
Website
The purpose of this study was to assess long short-term memory networks in the prediction of recurrence-free survival in breast cancer patients using features extracted from MRIs acquired during the course of neoadjuvant chemotherapy. In the I-SPY1 dataset, up to 4 MRI exams were available per patient acquired at pre-treatment, early-treatment, inter-regimen, and pre-surgery time points. Breast cancers were automatically segmented and 8 features describing kinetic curve characteristics were extracted. We assessed performance of long short-term memory networks in the prediction of recurrence-free survival status at 2 years and at 5 years post-surgery. For these predictions, we analyzed MRIs from women who had at least 2 (or 5) years of recurrence-free follow-up or experienced recurrence or death within that timeframe: 157 women and 73 women, respectively. One approach used features extracted from all available exams and the other approach used features extracted from only exams prior to the second cycle of neoadjuvant chemotherapy. The areas under the ROC curve in the prediction of recurrence-free survival status at 2 years post-surgery were 0.80, 95% confidence interval [0.68; 0.88] and 0.75 [0.62; 0.83] for networks trained with all 4 available exams and only the ‘early’ exams, respectively. Hazard ratios at the lowest, median, and highest quartile cut-points were 6.29 [2.91; 13.62], 3.27 [1.77; 6.03], 1.65 [0.83; 3.27] and 2.56 [1.20; 5.48], 3.01 [1.61; 5.66], 2.30 [1.14; 4.67]. Long short-term memory networks were able to predict recurrence-free survival in breast cancer patients, also when analyzing only MRIs acquired ‘early on’ during neoadjuvant treatment.
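For orientation only, here is a minimal PyTorch sketch of the kind of long short-term memory model the abstract describes: a sequence of per-exam kinetic feature vectors mapped to a recurrence probability. The layer sizes and input shapes are illustrative assumptions, not the study's architecture.

```python
# A small illustrative sketch (assumptions, not the study's code): an LSTM that
# reads the 8 kinetic-curve features from up to 4 serial MRI exams and outputs
# the probability of recurrence within the follow-up window.
import torch
import torch.nn as nn

class RecurrenceLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, n_exams, n_features)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden)
        return torch.sigmoid(self.head(h_n[-1]))  # recurrence probability

# Example: a batch of 16 patients, 4 exams each, 8 features per exam
model = RecurrenceLSTM()
probs = model(torch.randn(16, 4, 8))      # shape (16, 1)
```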

Most-enhancing tumor volume by MRI radiomics predicts recurrence-free survival “early on” in neoadjuvant treatment of breast cancer

  • Drukker, Karen
  • Li, Hui
  • Antropova, Natalia
  • Edwards, Alexandra
  • Papaioannou, John
  • Giger, Maryellen L
Cancer Imaging 2018 Journal Article, cited 0 times
Website
BACKGROUND: The hypothesis of this study was that MRI-based radiomics has the ability to predict recurrence-free survival "early on" in breast cancer neoadjuvant chemotherapy. METHODS: A subset, based on availability, of the ACRIN 6657 dynamic contrast-enhanced MR images was used in which we analyzed images of all women imaged at pre-treatment baseline (141 women: 40 with a recurrence, 101 without) and all those imaged after completion of the first cycle of chemotherapy, i.e., at early treatment (143 women: 37 with a recurrence vs. 105 without). Our method was completely automated apart from manual localization of the approximate tumor center. The most enhancing tumor volume (METV) was automatically calculated for the pre-treatment and early treatment exams. Performance of METV in the task of predicting a recurrence was evaluated using ROC analysis. The association of recurrence-free survival with METV was assessed using a Cox regression model controlling for patient age, race, and hormone receptor status and evaluated by C-statistics. Kaplan-Meier analysis was used to estimate survival functions. RESULTS: The C-statistics for the association of METV with recurrence-free survival were 0.69 with 95% confidence interval of [0.58; 0.80] at pre-treatment and 0.72 [0.60; 0.84] at early treatment. The hazard ratios calculated from Kaplan-Meier curves were 2.28 [1.08; 4.61], 3.43 [1.83; 6.75], and 4.81 [2.16; 10.72] for the lowest quartile, median quartile, and upper quartile cut-points for METV at early treatment, respectively. CONCLUSION: The performance of the automatically-calculated METV rivaled that of a semi-manual model described for the ACRIN 6657 study (published C-statistic 0.72 [0.60; 0.84]), which involved the same dataset but required semi-manual delineation of the functional tumor volume (FTV) and knowledge of the pre-surgical residual cancer burden.

Local Wavelet Pattern: A New Feature Descriptor for Image Retrieval in Medical CT Databases

  • Dubey, Shiv Ram
  • Singh, Satish Kumar
  • Singh, Rajat Kumar
IEEE Trans Image Process 2015 Journal Article, cited 52 times
Website
A new image feature description based on the local wavelet pattern (LWP) is proposed in this paper to characterize the medical computer tomography (CT) images for content-based CT image retrieval. In the proposed work, the LWP is derived for each pixel of the CT image by utilizing the relationship of center pixel with the local neighboring information. In contrast to the local binary pattern that only considers the relationship between a center pixel and its neighboring pixels, the presented approach first utilizes the relationship among the neighboring pixels using local wavelet decomposition, and finally considers its relationship with the center pixel. A center pixel transformation scheme is introduced to match the range of center value with the range of local wavelet decomposed values. Moreover, the introduced local wavelet decomposition scheme is centrally symmetric and suitable for CT images. The novelty of this paper lies in the following two ways: 1) encoding local neighboring information with local wavelet decomposition and 2) computing LWP using local wavelet decomposed values and transformed center pixel values. We tested the performance of our method over three CT image databases in terms of the precision and recall. We also compared the proposed LWP descriptor with the other state-of-the-art local image descriptors, and the experimental results suggest that the proposed method outperforms other methods for CT image retrieval.

An Ad Hoc Random Initialization Deep Neural Network Architecture for Discriminating Malignant Breast Cancer Lesions in Mammographic Images

  • Duggento, Andrea
  • Aiello, Marco
  • Cavaliere, Carlo
  • Cascella, Giuseppe L
  • Cascella, Davide
  • Conte, Giovanni
  • Guerrisi, Maria
  • Toschi, Nicola
Contrast Media Mol Imaging 2019 Journal Article, cited 1 times
Website
Breast cancer is one of the most common cancers in women, with more than 1,300,000 cases and 450,000 deaths each year worldwide. In this context, recent studies showed that early breast cancer detection, along with suitable treatment, could significantly reduce breast cancer death rates in the long term. X-ray mammography is still the instrument of choice in breast cancer screening. In this context, the false-positive and false-negative rates commonly achieved by radiologists are extremely arduous to estimate and control although some authors have estimated figures of up to 20% of total diagnoses or more. The introduction of novel artificial intelligence (AI) technologies applied to the diagnosis and, possibly, prognosis of breast cancer could revolutionize the current status of the management of the breast cancer patient by assisting the radiologist in clinical image interpretation. Lately, a breakthrough in the AI field has been brought about by the introduction of deep learning techniques in general and of convolutional neural networks in particular. Such techniques require no a priori feature space definition from the operator and are able to achieve classification performances which can even surpass human experts. In this paper, we design and validate an ad hoc CNN architecture specialized in breast lesion classification from imaging data only. We explore a total of 260 model architectures in a train-validation-test split in order to propose a model selection criterion which can pose the emphasis on reducing false negatives while still retaining acceptable accuracy. We achieve an area under the receiver operating characteristic curve of 0.785 (accuracy 71.19%) on the test set, demonstrating how an ad hoc random initialization architecture can and should be fine-tuned to a specific problem, especially in biomedical applications.

Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma

  • Dunn, William D Jr
  • Aerts, Hugo J W L
  • Cooper, Lee A
  • Holder, Chad A
  • Hwang, Scott N
  • Jaffe, Carle C
  • Brat, Daniel J
  • Jain, Rajan
  • Flanders, Adam E
  • Zinn, Pascal O
  • Colen, Rivka R
  • Gutman, David A
J Neuroimaging Psychiatry Neurol 2016 Journal Article, cited 0 times
Website
Background: Radiological assessments of biologically relevant regions in glioblastoma have been associated with genotypic characteristics, implying a potential role in personalized medicine. Here, we assess the reproducibility and association with survival of two volumetric segmentation platforms and explore how methodology could impact subsequent interpretation and analysis. Methods: Post-contrast T1- and T2-weighted FLAIR MR images of 67 TCGA patients were segmented into five distinct compartments (necrosis, contrast-enhancement, FLAIR, post contrast abnormal, and total abnormal tumor volumes) by two quantitative image segmentation platforms - 3D Slicer and a method based on Velocity AI and FSL. We investigated the internal consistency of each platform by correlation statistics, association with survival, and concordance with consensus neuroradiologist ratings using ordinal logistic regression. Results: We found high correlations between the two platforms for FLAIR, post contrast abnormal, and total abnormal tumor volumes (Spearman's r(67) = 0.952, 0.959, and 0.969 respectively). Only modest agreement was observed for necrosis and contrast-enhancement volumes (r(67) = 0.693 and 0.773 respectively), likely arising from differences in manual and automated segmentation methods of these regions by 3D Slicer and Velocity AI/FSL, respectively. Survival analysis based on AUC revealed significant predictive power of both platforms for the following volumes: contrast-enhancement, post contrast abnormal, and total abnormal tumor volumes. Finally, ordinal logistic regression demonstrated correspondence to manual ratings for several features. Conclusion: Tumor volume measurements from both volumetric platforms produced highly concordant and reproducible estimates across platforms for general features. As automated or semi-automated volumetric measurements replace manual linear or area measurements, it will become increasingly important to keep in mind that measurement differences between segmentation platforms for more detailed features could influence downstream survival or radiogenomic analyses.

Improving Brain Tumor Diagnosis Using MRI Segmentation Based on Collaboration of Beta Mixture Model and Learning Automata

  • Edalati-rad, Akram
  • Mosleh, Mohammad
Arabian Journal for Science and Engineering 2018 Journal Article, cited 0 times
Website

Performance Analysis of Prediction Methods for Lossless Image Compression

  • Egorov, Nickolay
  • Novikov, Dmitriy
  • Gilmutdinov, Marat
2015 Book Section, cited 4 times
Website

Decision forests for learning prostate cancer probability maps from multiparametric MRI

  • Ehrenberg, Henry R
  • Cornfeld, Daniel
  • Nawaf, Cayce B
  • Sprenkle, Preston C
  • Duncan, James S
2016 Conference Proceedings, cited 2 times
Website

A Content-Based-Image-Retrieval Approach for Medical Image Repositories

  • el Rifai, Diaa
  • Maeder, Anthony
  • Liyanage, Liwan
2015 Conference Paper, cited 2 times
Website

Imaging genomics of glioblastoma: state of the art bridge between genomics and neuroradiology

  • ElBanan, Mohamed G
  • Amer, Ahmed M
  • Zinn, Pascal O
  • Colen, Rivka R
Neuroimaging Clinics of North America 2015 Journal Article, cited 29 times
Website
Glioblastoma (GBM) is the most common and most aggressive primary malignant tumor of the central nervous system. Recently, researchers concluded that the "one-size-fits-all" approach for treatment of GBM is no longer valid and research should be directed toward more personalized and patient-tailored treatment protocols. Identification of the molecular and genomic pathways underlying GBM is essential for achieving this personalized and targeted therapeutic approach. Imaging genomics represents a new era as a noninvasive surrogate for genomic and molecular profile identification. This article discusses the basics of imaging genomics of GBM, its role in treatment decision-making, and its future potential in noninvasive genomic identification.

The Veterans Affairs Precision Oncology Data Repository, a Clinical, Genomic, and Imaging Research Database

  • Elbers, Danne C.
  • Fillmore, Nathanael R.
  • Sung, Feng-Chi
  • Ganas, Spyridon S.
  • Prokhorenkov, Andrew
  • Meyer, Christopher
  • Hall, Robert B.
  • Ajjarapu, Samuel J.
  • Chen, Daniel C.
  • Meng, Frank
  • Grossman, Robert L.
  • Brophy, Mary T.
  • Do, Nhan V.
Patterns 2020 Journal Article, cited 0 times
Website
The Veterans Affairs Precision Oncology Data Repository (VA-PODR) is a large, nationwide repository of de-identified data on patients diagnosed with cancer at the Department of Veterans Affairs (VA). Data include longitudinal clinical data from the VA's nationwide electronic health record system and the VA Central Cancer Registry, targeted tumor sequencing data, and medical imaging data including computed tomography (CT) scans and pathology slides. A subset of the repository is available at the Genomic Data Commons (GDC) and The Cancer Imaging Archive (TCIA), and the full repository is available through the Veterans Precision Oncology Data Commons (VPODC). By releasing this de-identified dataset, we aim to advance Veterans' health care through enabling translational research on the Veteran population by a wide variety of researchers.

Diffusion MRI quality control and functional diffusion map results in ACRIN 6677/RTOG 0625: a multicenter, randomized, phase II trial of bevacizumab and chemotherapy in recurrent glioblastoma

  • Ellingson, Benjamin M
  • Kim, Eunhee
  • Woodworth, Davis C
  • Marques, Helga
  • Boxerman, Jerrold L
  • Safriel, Yair
  • McKinstry, Robert C
  • Bokstein, Felix
  • Jain, Rajan
  • Chi, T Linda
  • Sorensen, A Gregory
  • Gilbert, Mark R
  • Barboriak, Daniel P
Int J Oncol 2015 Journal Article, cited 27 times
Website
Functional diffusion mapping (fDM) is a cancer imaging technique that quantifies voxelwise changes in apparent diffusion coefficient (ADC). Previous studies have shown value of fDMs in bevacizumab therapy for recurrent glioblastoma multiforme (GBM). The aim of the present study was to implement explicit criteria for diffusion MRI quality control and independently evaluate fDM performance in a multicenter clinical trial (RTOG 0625/ACRIN 6677). A total of 123 patients were enrolled in the current multicenter trial and signed institutional review board-approved informed consent at their respective institutions. MRI was acquired prior to and 8 weeks following therapy. A 5-point QC scoring system was used to evaluate DWI quality. fDM performance was evaluated according to the correlation of these metrics with PFS and OS at the first follow-up time-point. Results showed ADC variability of 7.3% in NAWM and 10.5% in CSF. A total of 68% of patients had usable DWI data and 47% of patients had high quality DWI data when also excluding patients that progressed before the first follow-up. fDM performance was improved by using only the highest quality DWI. High pre-treatment contrast enhancing tumor volume was associated with shorter PFS and OS. A high volume fraction of increasing ADC after therapy was associated with shorter PFS, while a high volume fraction of decreasing ADC was associated with shorter OS. In summary, DWI in multicenter trials are currently of limited value due to image quality. Improvements in consistency of image quality in multicenter trials are necessary for further advancement of DWI biomarkers.

A Novel Hybrid Perceptron Neural Network Algorithm for Classifying Breast MRI Tumors

  • ElNawasany, Amal M
  • Ali, Ahmed Fouad
  • Waheed, Mohamed E
2014 Book Section, cited 3 times
Website
Breast cancer today is the leading cause of death amongst cancer patients afflicting women around the world. Breast cancer is the most common cancer in women worldwide. It is also the principal cause of death from cancer among women globally. Early detection of this disease can greatly enhance the chances of long-term survival of breast cancer victims. Classification of cancer data helps widely in detection of the disease and it can be achieved using many techniques such as Perceptron, which is an Artificial Neural Network (ANN) classification technique. In this paper, we proposed a new hybrid algorithm by combining the perceptron algorithm and the feature extraction algorithm after applying the Scale Invariant Feature Transform (SIFT) algorithm in order to classify magnetic resonance imaging (MRI) breast cancer images. The proposed algorithm is called breast MRI cancer classifier (BMRICC) and it has been tested on 281 MRI breast images (138 abnormal and 143 normal). The numerical results of the general performance of the BMRICC algorithm and the comparison results between it and 5 other benchmark classifiers show that the BMRICC algorithm is a promising algorithm and its performance is better than the other algorithms.

Attention P-Net for Segmentation of Post-operative Glioblastoma in MRI

  • Enlund Åström, Isabelle
2019 Thesis, cited 0 times
Website
Segmentation of post-operative glioblastoma is important for follow-up treatment. In this thesis, Fully Convolutional Networks (FCN) are utilised together with attention modules for segmentation of post-operative glioblastoma in MR images. Attention-based modules help the FCN to focus on relevant features to improve segmentation results. Channel and spatial attention combines both the spatial context as well as the semantic information in MR images. P-Net is used as a backbone for creating an architecture with existing bottleneck attention modules and was named attention P-Net. The proposed network and competing techniques were evaluated on an Uppsala University database containing T1-weighted MR images of the brain from 12 subjects. The proposed framework shows substantial improvement over the existing techniques.

Radiology and Enterprise Medical Imaging Extensions (REMIX)

  • Erdal, Barbaros S
  • Prevedello, Luciano M
  • Qian, Songyue
  • Demirer, Mutlu
  • Little, Kevin
  • Ryu, John
  • O’Donnell, Thomas
  • White, Richard D
Journal of Digital Imaging 2017 Journal Article, cited 1 times
Website

Multisite Image Data Collection and Management Using the RSNA Image Sharing Network

  • Erickson, Bradley J
  • Fajnwaks, Patricio
  • Langer, Steve G
  • Perry, John
Translational oncology 2014 Journal Article, cited 3 times
Website
The execution of a multisite trial frequently includes image collection. The Clinical Trials Processor (CTP) makes removal of protected health information highly reliable. It also provides reliable transfer of images to a central review site. Trials using central review of imaging should consider using CTP for handling image data when a multisite trial is being designed.

Tumour heterogeneity revealed by unsupervised decomposition of dynamic contrast-enhanced magnetic resonance imaging is associated with underlying gene expression patterns and poor survival in breast cancer patients

  • Fan, M.
  • Xia, P.
  • Liu, B.
  • Zhang, L.
  • Wang, Y.
  • Gao, X.
  • Li, L.
Breast Cancer Res 2019 Journal Article, cited 3 times
Website
BACKGROUND: Heterogeneity is a common finding within tumours. We evaluated the imaging features of tumours based on the decomposition of tumoural dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data to identify their prognostic value for breast cancer survival and to explore their biological importance. METHODS: Imaging features (n = 14), such as texture, histogram distribution and morphological features, were extracted to determine their associations with recurrence-free survival (RFS) in patients in the training cohort (n = 61) from The Cancer Imaging Archive (TCIA). The prognostic value of the features was evaluated in an independent dataset of 173 patients (i.e. the reproducibility cohort) from the TCIA I-SPY 1 TRIAL dataset. Radiogenomic analysis was performed in an additional cohort, the radiogenomic cohort (n = 87), using DCE-MRI from TCGA-BRCA and corresponding gene expression data from The Cancer Genome Atlas (TCGA). The MRI tumour area was decomposed by convex analysis of mixtures (CAM), resulting in 3 components that represent plasma input, fast-flow kinetics and slow-flow kinetics. The prognostic MRI features were associated with the gene expression module in which the pathway was analysed. Furthermore, a multigene signature for each prognostic imaging feature was built, and the prognostic value for RFS and overall survival (OS) was confirmed in an additional cohort from TCGA. RESULTS: Three image features (i.e. the maximum probability from the precontrast MR series, the median value from the second postcontrast series and the overall tumour volume) were independently correlated with RFS (p values of 0.0018, 0.0036 and 0.0032, respectively). The maximum probability feature from the fast-flow kinetics subregion was also significantly associated with RFS and OS in the reproducibility cohort. Additionally, this feature had a high correlation with the gene expression module (r = 0.59), and the pathway analysis showed that Ras signalling, a breast cancer-related pathway, was significantly enriched (corrected p value = 0.0044). Gene signatures (n = 43) associated with the maximum probability feature were assessed for associations with RFS (p = 0.035) and OS (p = 0.027) in an independent dataset containing 1010 gene expression samples. Among the 43 gene signatures, Ras signalling was also significantly enriched. CONCLUSIONS: Dynamic pattern deconvolution revealed that tumour heterogeneity was associated with poor survival and cancer-related pathways in breast cancer.

Computational Challenges and Collaborative Projects in the NCI Quantitative Imaging Network

  • Farahani, Keyvan
  • Kalpathy-Cramer, Jayashree
  • Chenevert, Thomas L
  • Rubin, Daniel L
  • Sunderland, John J
  • Nordstrom, Robert J
  • Buatti, John
  • Hylton, Nola
Tomography 2016 Journal Article, cited 2 times
Website
The Quantitative Imaging Network (QIN) of the National Cancer Institute (NCI) conducts research in development and validation of imaging tools and methods for predicting and evaluating clinical response to cancer therapy. Members of the network are involved in examining various imaging and image assessment parameters through network-wide cooperative projects. To more effectively use the cooperative power of the network in conducting computational challenges in benchmarking of tools and methods and collaborative projects in analytical assessment of imaging technologies, the QIN Challenge Task Force has developed policies and procedures to enhance the value of these activities by developing guidelines and leveraging NCI resources to help their administration and manage dissemination of results. Challenges and Collaborative Projects (CCPs) are further divided into technical and clinical CCPs. As the first NCI network to engage in CCPs, we anticipate a variety of CCPs to be conducted by QIN teams in the coming years. These will be aimed to benchmark advanced software tools for clinical decision support, explore new imaging biomarkers for therapeutic assessment, and establish consensus on a range of methods and protocols in support of the use of quantitative imaging to predict and assess response to cancer therapy.

A study of machine learning and deep learning models for solving medical imaging problems

  • Farhat, Fadi G.
2019 Thesis, cited 0 times
Website
Application of machine learning and deep learning methods on medical imaging aims to create systems that can help in the diagnosis of disease and the automation of analyzing medical images in order to facilitate treatment planning. Deep learning methods do well in image recognition, but medical images present unique challenges. The lack of large amounts of data, the image size, and the high class-imbalance in most datasets, makes training a machine learning model to recognize a particular pattern that is typically present only in case images a formidable task. Experiments are conducted to classify breast cancer images as healthy or nonhealthy, and to detect lesions in damaged brain MRI (Magnetic Resonance Imaging) scans. Random Forest, Logistic Regression and Support Vector Machine perform competitively in the classification experiments, but in general, deep neural networks beat all conventional methods. Gaussian Naïve Bayes (GNB) and the Lesion Identification with Neighborhood Data Analysis (LINDA) methods produce better lesion detection results than single path neural networks, but a multi-modal, multi-path deep neural network beats all other methods. The importance of pre-processing training data is also highlighted and demonstrated, especially for medical images, which require extensive preparation to improve classifier and detector performance. Only a more complex and deeper neural network combined with properly pre-processed data can produce the desired accuracy levels that can rival and maybe exceed those of human experts.

DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research

  • Fedorov, Andriy
  • Clunie, David
  • Ulrich, Ethan
  • Bauer, Christian
  • Wahle, Andreas
  • Brown, Bartley
  • Onken, Michael
  • Riesmeier, Jörg
  • Pieper, Steve
  • Kikinis, Ron
PeerJ 2016 Journal Article, cited 20 times
Website

DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research

  • Fedorov, Andriy
  • Clunie, David
  • Ulrich, Ethan
  • Bauer, Christian
  • Wahle, Andreas
  • Brown, Bartley
  • Onken, Michael
  • Riesmeier, Jörg
  • Pieper, Steve
  • Kikinis, Ron
  • Buatti, John
  • Beichel, Reinhard R
PeerJ 2016 Journal Article, cited 20 times
Website
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.

A comparison of two methods for estimating DCE-MRI parameters via individual and cohort based AIFs in prostate cancer: A step towards practical implementation

  • Fedorov, Andriy
  • Fluckiger, Jacob
  • Ayers, Gregory D
  • Li, Xia
  • Gupta, Sandeep N
  • Tempany, Clare
  • Mulkern, Robert
  • Yankeelov, Thomas E
  • Fennessy, Fiona M
Magnetic Resonance Imaging 2014 Journal Article, cited 30 times
Website
Multi-parametric Magnetic Resonance Imaging, and specifically Dynamic Contrast Enhanced (DCE) MRI, play increasingly important roles in detection and staging of prostate cancer (PCa). One of the actively investigated approaches to DCE MRI analysis involves pharmacokinetic (PK) modeling to extract quantitative parameters that may be related to microvascular properties of the tissue. It is well-known that the prescribed arterial blood plasma concentration (or Arterial Input Function, AIF) input can have significant effects on the parameters estimated by PK modeling. The purpose of our study was to investigate such effects in DCE MRI data acquired in a typical clinical PCa setting. First, we investigated how the choice of a semi-automated or fully automated image-based individualized AIF (iAIF) estimation method affects the PK parameter values; and second, we examined the use of method-specific averaged AIF (cohort-based, or cAIF) as a means to attenuate the differences between the two AIF estimation methods. Two methods for automated image-based estimation of individualized (patient-specific) AIFs, one of which was previously validated for brain and the other for breast MRI, were compared. cAIFs were constructed by averaging the iAIF curves over the individual patients for each of the two methods. Pharmacokinetic analysis using the Generalized kinetic model and each of the four AIF choices (iAIF and cAIF for each of the two image-based AIF estimation approaches) was applied to derive the volume transfer rate (K(trans)) and extravascular extracellular volume fraction (ve) in the areas of prostate tumor. Differences between the parameters obtained using iAIF and cAIF for a given method (intra-method comparison) as well as inter-method differences were quantified. The study utilized DCE MRI data collected in 17 patients with histologically confirmed PCa. Comparison at the level of the tumor region of interest (ROI) showed that the two automated methods resulted in significantly different (p<0.05) mean estimates of ve, but not of K(trans). Comparing cAIF, different estimates for both ve, and K(trans) were obtained. Intra-method comparison between the iAIF- and cAIF-driven analyses showed the lack of effect on ve, while K(trans) values were significantly different for one of the methods. Our results indicate that the choice of the algorithm used for automated image-based AIF determination can lead to significant differences in the values of the estimated PK parameters. K(trans) estimates are more sensitive to the choice between cAIF/iAIF as compared to ve, leading to potentially significant differences depending on the AIF method. These observations may have practical consequences in evaluating the PK analysis results obtained in a multi-site setting.
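For context, the generalized kinetic (standard Tofts) model referred to in this abstract can be sketched as below; the sampling grid, starting values, and bounds are illustrative assumptions rather than the paper's implementation.

```python
# A hedged sketch of the standard Tofts / generalized kinetic model: given a
# sampled arterial input function Cp(t) and a tissue concentration curve Ct(t)
# on a uniform time grid (minutes), estimate Ktrans and ve by least squares.
import numpy as np
from scipy.optimize import curve_fit

def tofts_model(t, ktrans, ve, cp):
    """Ct(t) = Ktrans * integral of Cp(tau) * exp(-Ktrans*(t - tau)/ve) dtau."""
    dt = t[1] - t[0]
    kernel = np.exp(-ktrans * t / ve)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

def fit_tofts(t, ct, cp):
    """Fit Ktrans and ve for one voxel or ROI given an arterial input function."""
    f = lambda tt, ktrans, ve: tofts_model(tt, ktrans, ve, cp)
    (ktrans, ve), _ = curve_fit(f, t, ct, p0=[0.2, 0.3],
                                bounds=([1e-4, 1e-3], [5.0, 1.0]))
    return ktrans, ve
```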

An annotated test-retest collection of prostate multiparametric MRI

  • Fedorov, Andriy
  • Schwier, Michael
  • Clunie, David
  • Herz, Christian
  • Pieper, Steve
  • Kikinis, Ron
  • Tempany, Clare
  • Fennessy, Fiona
Scientific data 2018 Journal Article, cited 0 times
Website

HEVC optimizations for medical environments

  • Fernández, DG
  • Del Barrio, AA
  • Botella, Guillermo
  • García, Carlos
  • Meyer-Baese, Uwe
  • Meyer-Baese, Anke
2016 Conference Proceedings, cited 5 times
Website

On the Evaluation of the Suitability of the Materials Used to 3D Print Holographic Acoustic Lenses to Correct Transcranial Focused Ultrasound Aberrations

  • Ferri, Marcelino
  • Bravo, Jose Maria
  • Redondo, Javier
  • Jimenez-Gambin, Sergio
  • Jimenez, Noe
  • Camarena, Francisco
  • Sanchez-Perez, Juan Vicente
Polymers (Basel) 2019 Journal Article, cited 2 times
Website
The correction of transcranial focused ultrasound aberrations is a relevant topic for enhancing various non-invasive medical treatments. Presently, the most widely accepted method to improve focusing is the emission through multi-element phased arrays; however, a new disruptive technology, based on 3D printed holographic acoustic lenses, has recently been proposed, overcoming the spatial limitations of phased arrays due to the submillimetric precision of the latest generation of 3D printers. This work aims to optimize this recent solution. Particularly, the preferred acoustic properties of the polymers used for printing the lenses are systematically analyzed, paying special attention to the effect of p-wave speed and its relationship to the achievable voxel size of 3D printers. Results from simulations and experiments clearly show that, given a particular voxel size, there are optimal ranges for lens thickness and p-wave speed, fairly independent of the emitted frequency, the transducer aperture, or the transducer-target distance.

Enhanced Numerical Method for the Design of 3-D-Printed Holographic Acoustic Lenses for Aberration Correction of Single-Element Transcranial Focused Ultrasound

  • Marcelino Ferri
  • José M. Bravo
  • Javier Redondo
  • Juan V. Sánchez-Pérez
Ultrasound in Medicine & Biology 2018 Journal Article, cited 0 times
Website
The correction of transcranial focused ultrasound aberrations is a relevant issue for enhancing various non-invasive medical treatments. The emission through multi-element phased arrays has been the most widely accepted method to improve focusing in recent years; however, the number and size of transducers represent a bottleneck that limits the focusing accuracy of the technique. To overcome this limitation, a new disruptive technology, based on 3-D-printed acoustic lenses, has recently been proposed. As the submillimeter precision of the latest generation of 3-D printers has been proven to overcome the spatial limitations of phased arrays, a new challenge is to improve the accuracy of the numerical simulations required to design this type of ultrasound lens. In the study described here, we evaluated two improvements in the numerical model applied in previous works for the design of 3-D-printed lenses: (i) allowing the propagation of shear waves in the skull by means of its simulation as an isotropic solid and (ii) introduction of absorption into the set of equations that describes the dynamics of the wave in both fluid and solid media. The results obtained in the numerical simulations are evidence that the inclusion of both s-waves and absorption significantly improves focusing.

LCD-OpenPACS: sistema integrado de telerradiologia com auxílio ao diagnóstico de nódulos pulmonares em exames de tomografia computadorizada [LCD-OpenPACS: an integrated teleradiology system with computer-aided diagnosis of pulmonary nodules in computed tomography examinations]

  • Firmino Filho, José Macêdo
2015 Thesis, cited 1 times
Website

A Radiogenomic Approach for Decoding Molecular Mechanisms Underlying Tumor Progression in Prostate Cancer

  • Fischer, Sarah
  • Tahoun, Mohamed
  • Klaan, Bastian
  • Thierfelder, Kolja M
  • Weber, Marc-Andre
  • Krause, Bernd J
  • Hakenberg, Oliver
  • Fuellen, Georg
  • Hamed, Mohamed
Cancers (Basel) 2019 Journal Article, cited 0 times
Website
Prostate cancer (PCa) is a genetically heterogeneous cancer entity that causes challenges in pre-treatment clinical evaluation, such as the correct identification of the tumor stage. Conventional clinical tests based on digital rectal examination, Prostate-Specific Antigen (PSA) levels, and Gleason score still lack accuracy for stage prediction. We hypothesize that unraveling the molecular mechanisms underlying PCa staging via integrative analysis of multi-OMICs data could significantly improve the prediction accuracy for PCa pathological stages. We present a radiogenomic approach comprising clinical, imaging, and two genomic (gene and miRNA expression) datasets for 298 PCa patients. Comprehensive analysis of gene and miRNA expression profiles for two frequent PCa stages (T2c and T3b) unraveled the molecular characteristics for each stage and the corresponding gene regulatory interaction network that may drive tumor upstaging from T2c to T3b. Furthermore, four biomarkers (ANPEP, mir-217, mir-592, mir-6715b) were found to distinguish between the two PCa stages and were highly correlated (average r = +/- 0.75) with corresponding aggressiveness-related imaging features in both tumor stages. When combined with related clinical features, these biomarkers markedly improved the prediction accuracy for the pathological stage. Our prediction model exhibits high potential to yield clinically relevant results for characterizing PCa aggressiveness.

Computer-aided nodule assessment and risk yield risk management of adenocarcinoma: the future of imaging?

  • Foley, Finbar
  • Rajagopalan, Srinivasan
  • Raghunath, Sushravya M
  • Boland, Jennifer M
  • Karwoski, Ronald A
  • Maldonado, Fabien
  • Bartholmai, Brian J
  • Peikert, Tobias
2016 Conference Proceedings, cited 7 times
Website

Breast Lesion Segmentation in DCE- MRI Imaging

  • Frackiewicz, Mariusz
  • Koper, Zuzanna
  • Palus, Henryk
  • Borys, Damian
  • Psiuk-Maksymowicz, Krzysztof
2018 Conference Proceedings, cited 0 times
Website
Breast cancer is one of the most common cancers in women. Typically, the course of the disease is asymptomatic in the early stages of breast cancer. Imaging breast examinations allow early detection of the cancer, which is associated with increased chances of a complete cure. There are many breast imaging techniques such as: mammography (MM), ultrasound imaging (US), positron-emission tomography (PET), computed tomography (CT), and magnetic resonance imaging (MRI). These imaging techniques differ in terms of effectiveness, price, type of physical phenomenon, the impact on the patient and its availability. In this paper, we focus on MRI imaging and we compare three breast lesion segmentation algorithms that have been tested on QIN Breast DCE-MRI database, which is publicly available. The obtained values of Dice and Jaccard indices indicate the segmentation using k-means algorithm.
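As a rough illustration of the k-means variant mentioned in this abstract, the sketch below clusters voxel-wise relative enhancement computed from a DCE-MRI series; the input layout `dce` (time, z, y, x), the number of clusters, and the lesion-selection rule are assumptions made for the example, not the paper's implementation.

```python
# Minimal sketch (assumed inputs): k-means clustering of voxel-wise enhancement
# in a DCE-MRI series to separate the enhancing lesion from background.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_lesion_mask(dce, n_clusters=3, seed=0):
    pre = dce[0].astype(float)                       # pre-contrast volume
    peak = dce.max(axis=0).astype(float)             # peak post-contrast signal
    rel_enh = (peak - pre) / (pre + 1e-6)            # relative enhancement
    X = rel_enh.reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X)
    # keep the cluster with the highest mean enhancement as the lesion candidate
    lesion_cluster = np.argmax([X[labels == k].mean() for k in range(n_clusters)])
    return (labels == lesion_cluster).reshape(pre.shape)
```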

Supervised Machine-Learning Framework and Classifier Evaluation for Automated Three-dimensional Medical Image Segmentation based on Body MRI

  • Frischmann, Patrick
2013 Thesis, cited 0 times
Website

Extraction of pulmonary vessels and tumour from plain computed tomography sequence

  • Ganapathy, Sridevi
  • Ashar, Kinnari
  • Kathirvelu, D
2018 Conference Proceedings, cited 0 times
Website

Performance analysis for nonlinear tomographic data processing

  • Gang, Grace J
  • Guo, Xueqi
  • Stayman IV, J Webster
2019 Conference Proceedings, cited 0 times
Website

An Improved Mammogram Classification Approach Using Back Propagation Neural Network

  • Gautam, Aman
  • Bhateja, Vikrant
  • Tiwari, Ananya
  • Satapathy, Suresh Chandra
2017 Book Section, cited 16 times
Website
Mammograms are generally contaminated by quantum noise, degrading their visual quality and thereby the performance of the classifier in Computer-Aided Diagnosis (CAD). Hence, enhancement of mammograms is necessary to improve the visual quality and detectability of the anomalies present in the breasts. In this paper, a sigmoid based non-linear function has been applied for contrast enhancement of mammograms. The enhanced mammograms are used to define the texture of the detected anomaly using Gray Level Co-occurrence Matrix (GLCM) features. Later, a Back Propagation Artificial Neural Network (BP-ANN) is used as a classification tool for segregating the mammogram into abnormal or normal. The proposed classifier approach has reported to be the one with considerably better accuracy in comparison to other existing approaches.
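The sigmoid-based contrast stretching mentioned in this abstract can take the generic form sketched below; the gain and midpoint parameters are illustrative assumptions, not the authors' exact function.

```python
# A hedged sketch of sigmoid contrast enhancement: normalised intensities are
# mapped through a logistic curve centred on `midpoint` with steepness `gain`.
import numpy as np

def sigmoid_enhance(img, gain=10.0, midpoint=0.5):
    x = (img - img.min()) / (np.ptp(img) + 1e-9)   # normalise to [0, 1]
    return 1.0 / (1.0 + np.exp(gain * (midpoint - x)))
```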

A resource for the assessment of lung nodule size estimation methods: database of thoracic CT scans of an anthropomorphic phantom

  • Gavrielides, Marios A
  • Kinnard, Lisa M
  • Myers, Kyle J
  • Peregoy, Jennifer
  • Pritchard, William F
  • Zeng, Rongping
  • Esparza, Juan
  • Karanian, John
  • Petrick, Nicholas
Optics express 2010 Journal Article, cited 50 times
Website
A number of interrelated factors can affect the precision and accuracy of lung nodule size estimation. To quantify the effect of these factors, we have been conducting phantom CT studies using an anthropomorphic thoracic phantom containing a vasculature insert to which synthetic nodules were inserted or attached. Ten repeat scans were acquired on different multi-detector scanners, using several sets of acquisition and reconstruction protocols and various nodule characteristics (size, shape, density, location). This study design enables both bias and variance analysis for the nodule size estimation task. The resulting database is in the process of becoming publicly available as a resource to facilitate the assessment of lung nodule size estimation methodologies and to enable comparisons between different methods regarding measurement error. This resource complements public databases of clinical data and will contribute towards the development of procedures that will maximize the utility of CT imaging for lung cancer screening and tumor therapy evaluation.

Benefit of overlapping reconstruction for improving the quantitative assessment of CT lung nodule volume

  • Gavrielides, Marios A
  • Zeng, Rongping
  • Myers, Kyle J
  • Sahiner, Berkman
  • Petrick, Nicholas
Academic radiology 2013 Journal Article, cited 23 times
Website
RATIONALE AND OBJECTIVES: The aim of this study was to quantify the effect of overlapping reconstruction on the precision and accuracy of lung nodule volume estimates in a phantom computed tomographic (CT) study. MATERIALS AND METHODS: An anthropomorphic phantom was used with a vasculature insert on which synthetic lung nodules were attached. Repeated scans of the phantom were acquired using a 64-slice CT scanner. Overlapping and contiguous reconstructions were performed for a range of CT imaging parameters (exposure, slice thickness, pitch, reconstruction kernel) and a range of nodule characteristics (size, density). Nodule volume was estimated with a previously developed matched-filter algorithm. RESULTS: Absolute percentage bias across all nodule sizes (n = 2880) was significantly lower when overlapping reconstruction was used, with an absolute percentage bias of 6.6% (95% confidence interval [CI], 6.4-6.9), compared to 13.2% (95% CI, 12.7-13.8) for contiguous reconstruction. Overlapping reconstruction also showed a precision benefit, with a lower standard percentage error of 7.1% (95% CI, 6.9-7.2) compared with 15.3% (95% CI, 14.9-15.7) for contiguous reconstructions across all nodules. Both effects were more pronounced for the smaller, subcentimeter nodules. CONCLUSIONS: These results support the use of overlapping reconstruction to improve the quantitative assessment of nodule size with CT imaging.
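For readers who want the reported error metrics made concrete, the toy sketch below computes an absolute percentage bias and a percentage standard error of volume estimates against known phantom volumes; the exact definitions used in the study may differ.

```python
# Toy illustration (assumed definitions) of the two error metrics reported above,
# comparing estimated nodule volumes with the known phantom ground truth.
import numpy as np

def percent_errors(estimated, truth):
    e = 100.0 * (np.asarray(estimated) - np.asarray(truth)) / np.asarray(truth)
    return abs(e.mean()), e.std(ddof=1)   # (absolute % bias, % standard error)

bias, spread = percent_errors([510, 480, 530], [500, 500, 500])
```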

Synthetic Head and Neck and Phantom Images for Determining Deformable Image Registration Accuracy in Magnetic Resonance Imaging

  • Ger, Rachel B
  • Yang, Jinzhong
  • Ding, Yao
  • Jacobsen, Megan C
  • Cardenas, Carlos E
  • Fuller, Clifton D
  • Howell, Rebecca M
  • Li, Heng
  • Stafford, R Jason
  • Zhou, Shouhao
Medical physics 2018 Journal Article, cited 0 times
Website

Radiomics features of the primary tumor fail to improve prediction of overall survival in large cohorts of CT- and PET-imaged head and neck cancer patients

  • Ger, Rachel B
  • Zhou, Shouhao
  • Elgohari, Baher
  • Elhalawani, Hesham
  • Mackin, Dennis M
  • Meier, Joseph G
  • Nguyen, Callistus M
  • Anderson, Brian M
  • Gay, Casey
  • Ning, Jing
  • Fuller, Clifton D
  • Li, Heng
  • Howell, Rebecca M
  • Layman, Rick R
  • Mawlawi, Osama
  • Stafford, R Jason
  • Aerts, Hugo JWL
  • Court, Laurence E.
PLoS One 2019 Journal Article, cited 0 times
Website
Radiomics studies require many patients in order to power them, thus patients are often combined from different institutions and using different imaging protocols. Various studies have shown that imaging protocols affect radiomics feature values. We examined whether using data from cohorts with controlled imaging protocols improved patient outcome models. We retrospectively reviewed 726 CT and 686 PET images from head and neck cancer patients, who were divided into training or independent testing cohorts. For each patient, radiomics features with different preprocessing were calculated and two clinical variables-HPV status and tumor volume-were also included. A Cox proportional hazards model was built on the training data by using bootstrapped Lasso regression to predict overall survival. The effect of controlled imaging protocols on model performance was evaluated by subsetting the original training and independent testing cohorts to include only patients whose images were obtained using the same imaging protocol and vendor. Tumor volume, HPV status, and two radiomics covariates were selected for the CT model, resulting in an AUC of 0.72. However, volume alone produced a higher AUC, whereas adding radiomics features reduced the AUC. HPV status and one radiomics feature were selected as covariates for the PET model, resulting in an AUC of 0.59, but neither covariate was significantly associated with survival. Limiting the training and independent testing to patients with the same imaging protocol reduced the AUC for CT patients to 0.55, and no covariates were selected for PET patients. Radiomics features were not consistently associated with survival in CT or PET images of head and neck patients, even within patients with the same imaging protocol.
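To make the modelling step concrete, the sketch below shows one plausible form of a Cox model built with bootstrapped Lasso (L1-penalised) covariate selection using lifelines; the data-frame layout with 'time' and 'event' columns, the penalty strength, and the selection threshold are assumptions for illustration, not the authors' pipeline.

```python
# A hedged sketch: L1-penalised Cox regression fit on bootstrap resamples; features
# whose coefficients are repeatedly non-zero are kept as covariates for a final
# Cox proportional hazards model. `df` holds radiomics features plus 'time'/'event'.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def bootstrap_lasso_cox(df, n_boot=100, keep_frac=0.5, penalizer=0.1, seed=0):
    rng = np.random.default_rng(seed)
    features = [c for c in df.columns if c not in ("time", "event")]
    counts = pd.Series(0, index=features)
    for _ in range(n_boot):
        boot = df.sample(len(df), replace=True, random_state=int(rng.integers(1 << 31)))
        cph = CoxPHFitter(penalizer=penalizer, l1_ratio=1.0)
        cph.fit(boot, duration_col="time", event_col="event")
        counts += (cph.params_.abs() > 1e-6).astype(int)
    selected = counts[counts >= keep_frac * n_boot].index.tolist()
    final = CoxPHFitter().fit(df[selected + ["time", "event"]],
                              duration_col="time", event_col="event")
    return selected, final
```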

Non-small cell lung cancer: identifying prognostic imaging biomarkers by leveraging public gene expression microarray data--methods and preliminary results

  • Gevaert, Olivier
  • Xu, Jiajing
  • Hoang, Chuong D
  • Leung, Ann N
  • Xu, Yue
  • Quon, Andrew
  • Rubin, Daniel L
  • Napel, Sandy
  • Plevritis, Sylvia K
Radiology 2012 Journal Article, cited 187 times
Website
PURPOSE: To identify prognostic imaging biomarkers in non-small cell lung cancer (NSCLC) by means of a radiogenomics strategy that integrates gene expression and medical images in patients for whom survival outcomes are not available by leveraging survival data in public gene expression data sets. MATERIALS AND METHODS: A radiogenomics strategy for associating image features with clusters of coexpressed genes (metagenes) was defined. First, a radiogenomics correlation map is created for a pairwise association between image features and metagenes. Next, predictive models of metagenes are built in terms of image features by using sparse linear regression. Similarly, predictive models of image features are built in terms of metagenes. Finally, the prognostic significance of the predicted image features are evaluated in a public gene expression data set with survival outcomes. This radiogenomics strategy was applied to a cohort of 26 patients with NSCLC for whom gene expression and 180 image features from computed tomography (CT) and positron emission tomography (PET)/CT were available. RESULTS: There were 243 statistically significant pairwise correlations between image features and metagenes of NSCLC. Metagenes were predicted in terms of image features with an accuracy of 59%-83%. One hundred fourteen of 180 CT image features and the PET standardized uptake value were predicted in terms of metagenes with an accuracy of 65%-86%. When the predicted image features were mapped to a public gene expression data set with survival outcomes, tumor size, edge shape, and sharpness ranked highest for prognostic significance. CONCLUSION: This radiogenomics strategy for identifying imaging biomarkers may enable a more rapid evaluation of novel imaging modalities, thereby accelerating their translation to personalized medicine.
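The sparse linear regression step described in this abstract, predicting a metagene score from image features, could look roughly like the following scikit-learn sketch; the variable names and the cross-validated L1 penalty are assumptions for illustration, not the study's code.

```python
# Illustrative sketch: predict a metagene expression score from CT/PET image
# features with an L1-penalised (Lasso) linear model; non-zero coefficients
# identify the image features most predictive of that coexpressed-gene cluster.
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: (n_patients, n_image_features) image-feature matrix
# y: (n_patients,) metagene score for one coexpressed-gene cluster
def fit_metagene_model(X, y):
    model = make_pipeline(StandardScaler(), LassoCV(cv=5))
    model.fit(X, y)
    return model
```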

Role of Imaging in the Era of Precision Medicine

  • Giardino, Angela
  • Gupta, Supriya
  • Olson, Emmi
  • Sepulveda, Karla
  • Lenchik, Leon
  • Ivanidze, Jana
  • Rakow-Penner, Rebecca
  • Patel, Midhir J
  • Subramaniam, Rathan M
  • Ganeshan, Dhakshinamoorthy
Academic radiology 2017 Journal Article, cited 12 times
Website
Precision medicine is an emerging approach for treating medical disorders, which takes into account individual variability in genetic and environmental factors. Preventive or therapeutic interventions can then be directed to those who will benefit most from targeted interventions, thereby maximizing benefits and minimizing costs and complications. Precision medicine is gaining increasing recognition by clinicians, healthcare systems, pharmaceutical companies, patients, and the government. Imaging plays a critical role in precision medicine including screening, early diagnosis, guiding treatment, evaluating response to therapy, and assessing likelihood of disease recurrence. The Association of University Radiologists Radiology Research Alliance Precision Imaging Task Force convened to explore the current and future role of imaging in the era of precision medicine and summarized its finding in this article. We review the increasingly important role of imaging in various oncological and non-oncological disorders. We also highlight the challenges for radiology in the era of precision medicine.

Automatic Multi-organ Segmentation on Abdominal CT with Dense V-networks

  • E. Gibson
  • F. Giganti
  • Y. Hu
  • E. Bonmati
  • S. Bandula
  • K. Gurusamy
  • B. Davidson
  • S. P. Pereira
  • M. J. Clarkson
  • D. C. Barratt
IEEE Transactions on Medical Imaging 2018 Journal Article, cited 14 times
Website

Quantitative CT assessment of emphysema and airways in relation to lung cancer risk

  • Gierada, David S
  • Guniganti, Preethi
  • Newman, Blake J
  • Dransfield, Mark T
  • Kvale, Paul A
  • Lynch, David A
  • Pilgram, Thomas K
Radiology 2011 Journal Article, cited 41 times
Website

Projected outcomes using different nodule sizes to define a positive CT lung cancer screening examination

  • Gierada, David S
  • Pinsky, Paul
  • Nath, Hrudaya
  • Chiles, Caroline
  • Duan, Fenghai
  • Aberle, Denise R
Journal of the National Cancer Institute 2014 Journal Article, cited 74 times
Website
Background: Computed tomography (CT) screening for lung cancer has been associated with a high frequency of false positive results because of the high prevalence of indeterminate but usually benign small pulmonary nodules. The acceptability of reducing false-positive rates and diagnostic evaluations by increasing the nodule size threshold for a positive screen depends on the projected balance between benefits and risks. Methods: We examined data from the National Lung Screening Trial (NLST) to estimate screening CT performance and outcomes for scans with nodules above the 4 mm NLST threshold used to classify a CT screen as positive. Outcomes assessed included screening results, subsequent diagnostic tests performed, lung cancer histology and stage distribution, and lung cancer mortality. Sensitivity, specificity, positive predictive value, and negative predictive value were calculated for the different nodule size thresholds. All statistical tests were two-sided. Results: In 64% of positive screens (11 598/18 141), the largest nodule was 7 mm or less in greatest transverse diameter. By increasing the threshold, the percentages of lung cancer diagnoses that would have been missed or delayed and false positives that would have been avoided progressively increased, for example from 1.0% and 15.8% at a 5 mm threshold to 10.5% and 65.8% at an 8 mm threshold, respectively. The projected reductions in postscreening follow-up CT scans and invasive procedures also increased as the threshold was raised. Differences across nodule sizes for lung cancer histology and stage distribution were small but statistically significant. There were no differences across nodule sizes in survival or mortality. Conclusion: Raising the nodule size threshold for a positive screen would substantially reduce false-positive CT screenings and medical resource utilization with a variable impact on screening outcomes.
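
The threshold analysis summarized above reduces to counting true and false positives and negatives at each candidate nodule-size cutoff; a minimal sketch with entirely synthetic placeholder data (not NLST values) follows.

    # Sketch: trade-off between sensitivity, specificity, and PPV as the
    # nodule-size threshold for a "positive" screen is raised. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    diam = rng.gamma(shape=2.0, scale=3.0, size=10000)        # largest nodule per participant, mm
    cancer = rng.random(10000) < np.clip(diam / 60.0, 0, 1)   # toy prevalence rising with size

    for thr in (4, 5, 6, 7, 8):
        positive = diam > thr
        tp = np.sum(positive & cancer)
        fp = np.sum(positive & ~cancer)
        fn = np.sum(~positive & cancer)
        tn = np.sum(~positive & ~cancer)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        ppv = tp / (tp + fp)
        print(f"threshold {thr} mm: sensitivity {sens:.2f}, specificity {spec:.2f}, PPV {ppv:.2f}")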

Machine Learning in Medical Imaging

  • Giger, M. L.
J Am Coll Radiol 2018 Journal Article, cited 157 times
Website
Advances in both imaging and computers have synergistically led to a rapid rise in the potential use of artificial intelligence in various radiological imaging tasks, such as risk assessment, detection, diagnosis, prognosis, and therapy response, as well as in multi-omics disease discovery. A brief overview of the field is given here, allowing the reader to recognize the terminology, the various subfields, and components of machine learning, as well as the clinical potential. Radiomics, an expansion of computer-aided diagnosis, has been defined as the conversion of images to minable data. The ultimate benefit of quantitative radiomics is to (1) yield predictive image-based phenotypes of disease for precision medicine or (2) yield quantitative image-based phenotypes for data mining with other -omics for discovery (ie, imaging genomics). For deep learning in radiology to succeed, note that well-annotated large data sets are needed since deep networks are complex, computer software and hardware are evolving constantly, and subtle differences in disease states are more difficult to perceive than differences in everyday objects. In the future, machine learning in radiology is expected to have a substantial clinical impact with imaging examinations being routinely obtained in clinical practice, providing an opportunity to improve decision support in medical image interpretation. The term of note is decision support, indicating that computers will augment human decision making, making it more effective and efficient. The clinical impact of having computers in the routine clinical practice may allow radiologists to further integrate their knowledge with their clinical colleagues in other medical specialties and allow for precision medicine.

Radiomics: Images are more than pictures, they are data

  • Gillies, Robert J
  • Kinahan, Paul E
  • Hricak, Hedvig
Radiology 2015 Journal Article, cited 694 times
Website

Intuitive Error Space Exploration of Medical Image Data in Clinical Daily Routine

  • Gillmann, Christina
  • Arbeláez, Pablo
  • Peñaloza, José Tiberio Hernández
  • Hagen, Hans
  • Wischgoll, Thomas
2017 Conference Paper, cited 3 times
Website

Planning, guidance, and quality assurance of pelvic screw placement using deformable image registration

  • Goerres, J.
  • Uneri, A.
  • Jacobson, M.
  • Ramsay, B.
  • De Silva, T.
  • Ketcha, M.
  • Han, R.
  • Manbachi, A.
  • Vogt, S.
  • Kleinszig, G.
  • Wolinsky, J. P.
  • Osgood, G.
  • Siewerdsen, J. H.
Phys Med Biol 2017 Journal Article, cited 4 times
Website
Percutaneous pelvic screw placement is challenging due to narrow bone corridors surrounded by vulnerable structures and difficult visual interpretation of complex anatomical shapes in 2D x-ray projection images. To address these challenges, a system for planning, guidance, and quality assurance (QA) is presented, providing functionality analogous to surgical navigation, but based on robust 3D-2D image registration techniques using fluoroscopy images already acquired in routine workflow. Two novel aspects of the system are investigated: automatic planning of pelvic screw trajectories and the ability to account for deformation of surgical devices (K-wire deflection). Atlas-based registration is used to calculate a patient-specific plan of screw trajectories in preoperative CT. 3D-2D registration aligns the patient to CT within the projective geometry of intraoperative fluoroscopy. Deformable known-component registration (dKC-Reg) localizes the surgical device, and the combination of plan and device location is used to provide guidance and QA. A leave-one-out analysis evaluated the accuracy of automatic planning, and a cadaver experiment compared the accuracy of dKC-Reg to rigid approaches (e.g. optical tracking). Surgical plans conformed within the bone cortex by 3-4 mm for the narrowest corridor (superior pubic ramus) and >5 mm for the widest corridor (tear drop). The dKC-Reg algorithm localized the K-wire tip within 1.1 mm and 1.4 degrees and was consistently more accurate than rigid-body tracking (errors up to 9 mm). The system was shown to automatically compute reliable screw trajectories and accurately localize deformed surgical devices (K-wires). Such capability could improve guidance and QA in orthopaedic surgery, where workflow is impeded by manual planning, conventional tool trackers add complexity and cost, rigid tool assumptions are often inaccurate, and qualitative interpretation of complex anatomy from 2D projections is prone to trial-and-error with extended fluoroscopy time.

Automatic detection of pulmonary nodules in CT images by incorporating 3D tensor filtering with local image feature analysis

  • Gong, J.
  • Liu, J. Y.
  • Wang, L. J.
  • Sun, X. W.
  • Zheng, B.
  • Nie, S. D.
Physica Medica 2018 Journal Article, cited 4 times
Website

Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique

  • Greenspan, Hayit
  • van Ginneken, Bram
  • Summers, Ronald M
IEEE Transactions on Medical Imaging 2016 Journal Article, cited 395 times
Website

Imaging and clinical data archive for head and neck squamous cell carcinoma patients treated with radiotherapy

  • Grossberg, Aaron J
  • Mohamed, Abdallah SR
  • El Halawani, Hesham
  • Bennett, William C
  • Smith, Kirk E
  • Nolan, Tracy S
  • Williams, Bowman
  • Chamchod, Sasikarn
  • Heukelom, Jolien
  • Kantor, Michael E
Scientific data 2018 Journal Article, cited 0 times
Website

Imaging-genomics reveals driving pathways of MRI derived volumetric tumor phenotype features in Glioblastoma

  • Grossmann, Patrick
  • Gutman, David A
  • Dunn, William D
  • Holder, Chad A
  • Aerts, Hugo JWL
BMC cancer 2016 Journal Article, cited 21 times
Website

Defining the biological and clinical basis of radiomics: towards clinical imaging biomarkers

  • Großmann, P. B. H. J.
  • Grossmann, Patrick Benedict Hans Juan
2018 Thesis, cited 0 times
Website

Smooth extrapolation of unknown anatomy via statistical shape models

  • Grupp, RB
  • Chiang, H
  • Otake, Y
  • Murphy, RJ
  • Gordon, CR
  • Armand, M
  • Taylor, RH
2015 Conference Proceedings, cited 2 times
Website

Generative Models and Feature Extraction on Patient Images and Structure Data in Radiation Therapy

  • Gruselius, Hanna
Mathematics 2018 Thesis, cited 0 times
Website

Exploit fully automatic low-level segmented PET data for training high-level deep learning algorithms for the corresponding CT data

  • Gsaxner, Christina
  • Roth, Peter M
  • Wallner, Jurgen
  • Egger, Jan
PLoS One 2019 Journal Article, cited 0 times
Website
We present an approach for fully automatic urinary bladder segmentation in CT images with artificial neural networks in this study. Automatic medical image analysis has become an invaluable tool in the different treatment stages of diseases. Especially medical image segmentation plays a vital role, since segmentation is often the initial step in an image analysis pipeline. Since deep neural networks have made a large impact on the field of image processing in the past years, we use two different deep learning architectures to segment the urinary bladder. Both of these architectures are based on pre-trained classification networks that are adapted to perform semantic segmentation. Since deep neural networks require a large amount of training data, specifically images and corresponding ground truth labels, we furthermore propose a method to generate such a suitable training data set from Positron Emission Tomography/Computed Tomography image data. This is done by applying thresholding to the Positron Emission Tomography data for obtaining a ground truth and by utilizing data augmentation to enlarge the dataset. In this study, we discuss the influence of data augmentation on the segmentation results, and compare and evaluate the proposed architectures in terms of qualitative and quantitative segmentation performance. The results presented in this study allow concluding that deep neural networks can be considered a promising approach to segment the urinary bladder in CT images.
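
A minimal sketch of the general idea (thresholding a co-registered PET slice to obtain a rough ground-truth mask for the CT slice, then applying paired augmentation) is given below; the threshold, largest-component cleanup, and augmentation choices are illustrative assumptions, not the paper's exact procedure.

    import numpy as np
    from scipy import ndimage

    def pet_to_mask(pet_slice, threshold=0.4):
        """Binary mask from a normalized PET slice; keep the largest connected blob."""
        mask = pet_slice > threshold * pet_slice.max()
        labels, n = ndimage.label(mask)
        if n == 0:
            return mask
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        return labels == (1 + int(np.argmax(sizes)))

    def augment(ct_slice, mask, rng):
        """Simple paired augmentation: random flip and small rotation."""
        if rng.random() < 0.5:
            ct_slice, mask = np.fliplr(ct_slice), np.fliplr(mask)
        angle = rng.uniform(-10, 10)
        ct_slice = ndimage.rotate(ct_slice, angle, reshape=False, order=1)
        mask = ndimage.rotate(mask.astype(float), angle, reshape=False, order=0) > 0.5
        return ct_slice, mask

    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:128, 0:128]
    pet = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 15.0 ** 2))  # synthetic hot lesion
    pet += 0.05 * rng.random(pet.shape)                                  # background noise
    ct = rng.normal(size=(128, 128))                                     # placeholder co-registered CT slice

    mask = pet_to_mask(pet)
    ct_aug, mask_aug = augment(ct, mask, rng)
    print(int(mask.sum()), int(mask_aug.sum()))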

Automatic Colorectal Segmentation with Convolutional Neural Network

  • Guachi, Lorena
  • Guachi, Robinson
  • Bini, Fabiano
  • Marinozzi, Franco
Computer-Aided Design and Applications 2019 Journal Article, cited 3 times
Website
This paper presents a new method for colon tissue segmentation on computed tomography images which takes advantage of deep, hierarchical learning of colon features through convolutional neural networks (CNN). The proposed method works robustly, reducing misclassified colon tissue pixels introduced by the presence of noise, artifacts, unclear edges, and other organs or areas characterized by the same intensity value as the colon. Patch analysis allows each center pixel to be classified as a colon tissue or background pixel. Experimental results demonstrate that the proposed method achieves higher sensitivity and specificity than three state-of-the-art methods.

User-centered design and evaluation of interactive segmentation methods for medical images

  • Gueziri, Houssem-Eddine
2017 Thesis, cited 1 times
Website
Segmentation of medical images is a challenging task that aims to identify a particular structure present on the image. Among the existing methods involving the user at different levels, from a fully-manual to a fully-automated task, interactive segmentation methods provide assistance to the user during the task to reduce the variability in the results and allow occasional corrections of segmentation failures. Therefore, they offer a compromise between the segmentation efficiency and the accuracy of the results. It is the user who judges whether the results are satisfactory and how to correct them during the segmentation, making the process subject to human factors. Despite the strong influence of the user on the outcomes of a segmentation task, the impact of such factors has received little attention, with the literature focusing the assessment of segmentation processes on computational performance. Yet, involving the user performance in the analysis is more representative of a realistic scenario. Our goal is to explore the user behaviour in order to improve the efficiency of interactive image segmentation processes. This is achieved through three contributions. First, we developed a method which is based on a new user interaction mechanism to provide hints as to where to concentrate the computations. This significantly improves the computation efficiency without sacrificing the quality of the segmentation. The benefits of using such hints are twofold: (i) because our contribution is based on user interaction, it generalizes to a wide range of segmentation methods, and (ii) it gives comprehensive indications about where to focus the segmentation search. The latter advantage is used to achieve the second contribution. We developed an automated method based on a multi-scale strategy to: (i) reduce the user’s workload and, (ii) improve the computational time up to tenfold, allowing real-time segmentation feedback. Third, we have investigated the effects of such improvements in computations on the user’s performance. We report an experiment that manipulates the delay induced by the computation time while performing an interactive segmentation task. Results reveal that the influence of this delay can be significantly reduced with an appropriate interaction mechanism design. In conclusion, this project provides an effective image segmentation solution that has been developed in compliance with user performance requirements. We validated our approach through multiple user studies that provided a step forward into understanding the user behaviour during interactive image segmentation.

User-guided graph reduction for fast image segmentation

  • Gueziri, Houssem-Eddine
  • McGuffin, Michael J
  • Laporte, Catherine
2015 Conference Proceedings, cited 2 times
Website
Graph-based segmentation methods such as the random walker (RW) are known to be computationally expensive. For high resolution images, user interaction with the algorithm is significantly affected. This paper introduces a novel seeding approach for graph-based segmentation that reduces computation time. Instead of marking foreground and background pixels, the user roughly marks the object boundary forming separate regions. The image pixels are then grouped into a hierarchy of increasingly large layers based on their distance from these markings. Next, foreground and background seeds are automatically generated according to the hierarchical layers of each region. The highest layers of the hierarchy are ignored leading to a significant graph reduction. Finally, validation experiments based on multiple automatically generated input seeds were carried out on a variety of medical images. Results show a significant gain in time for high resolution images using the new approach.
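
A minimal sketch of this style of boundary-based seeding (grouping pixels into distance layers around the user's rough strokes and generating foreground and background seeds automatically) follows; the layer thresholds are illustrative assumptions rather than the paper's hierarchy.

    import numpy as np
    from scipy import ndimage

    def layered_seeds(strokes, inner_layer=5, outer_layer=15):
        """strokes: boolean image, True on the user's rough boundary strokes.
        Pixels are binned by Euclidean distance to the strokes; the deep interior
        becomes foreground seeds, pixels far outside become background seeds, and
        the band around the strokes is left for the graph-based solver."""
        dist = ndimage.distance_transform_edt(~strokes)   # distance to nearest stroke pixel
        filled = ndimage.binary_fill_holes(strokes)
        inside = filled & ~strokes
        fg = inside & (dist >= inner_layer)
        bg = ~filled & (dist >= outer_layer)
        return fg, bg

    # Toy example: a roughly circular stroke around an object of radius ~50 px.
    yy, xx = np.mgrid[0:200, 0:200]
    r = np.hypot(yy - 100, xx - 100)
    strokes = np.abs(r - 50) < 1.5
    fg, bg = layered_seeds(strokes)
    print("foreground seeds:", int(fg.sum()), "background seeds:", int(bg.sum()))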

A generalized graph reduction framework for interactive segmentation of large images

  • Gueziri, Houssem-Eddine
  • McGuffin, Michael J
  • Laporte, Catherine
Computer Vision and Image Understanding 2016 Journal Article, cited 5 times
Website
The speed of graph-based segmentation approaches, such as random walker (RW) and graph cut (GC), depends strongly on image size. For high-resolution images, the time required to compute a segmentation based on user input renders interaction tedious. We propose a novel method, using an approximate contour sketched by the user, to reduce the graph before passing it on to a segmentation algorithm such as RW or GC. This enables a significantly faster feedback loop. The user first draws a rough contour of the object to segment. Then, the pixels of the image are partitioned into "layers" (corresponding to different scales) based on their distance from the contour. The thickness of these layers increases with distance to the contour according to a Fibonacci sequence. An initial segmentation result is rapidly obtained after automatically generating foreground and background labels according to a specifically selected layer; all vertices beyond this layer are eliminated, restricting the segmentation to regions near the drawn contour. Further foreground background labels can then be added by the user to refine the segmentation. All iterations of the graph-based segmentation benefit from a reduced input graph, while maintaining full resolution near the object boundary. A user study with 16 participants was carried out for RW segmentation of a multi-modal dataset of 22 medical images, using either a standard mouse or a stylus pen to draw the contour. Results reveal that our approach significantly reduces the overall segmentation time compared with the status quo approach (p < 0.01). The study also shows that our approach works well with both input devices. Compared to super-pixel graph reduction, our approach provides full resolution accuracy at similar speed on a high-resolution benchmark image with both RW and GC segmentation methods. However, graph reduction based on super-pixels does not allow interactive correction of clustering errors. Finally, our approach can be combined with super-pixel clustering methods for further graph reduction, resulting in even faster segmentation.

Feature selection and patch-based segmentation in MRI for prostate radiotherapy

  • Guinin, M
  • Ruan, S
  • Dubray, B
  • Massoptier, L
  • Gardin, I
2016 Conference Proceedings, cited 0 times
Website

A tool for lung nodules analysis based on segmentation and morphological operation

  • Gupta, Anindya
  • Martens, Olev
  • Le Moullec, Yannick
  • Saar, Tonis
2015 Conference Proceedings, cited 4 times
Website

Brain Tumor Detection using Curvelet Transform and Support Vector Machine

  • Gupta, Bhawna
  • Tiwari, Shamik
International Journal of Computer Science and Mobile Computing 2014 Journal Article, cited 8 times
Website

Appropriate Contrast Enhancement Measures for Brain and Breast Cancer Images

  • Gupta, Suneet
  • Porwal, Rabins
International Journal of Biomedical Imaging 2016 Journal Article, cited 10 times
Website

Cancer Digital Slide Archive: an informatics resource to support integrated in silico analysis of TCGA pathology data

  • Gutman, David A
  • Cobb, Jake
  • Somanna, Dhananjaya
  • Park, Yuna
  • Wang, Fusheng
  • Kurc, Tahsin
  • Saltz, Joel H
  • Brat, Daniel J
  • Cooper, Lee AD
  • Kong, Jun
Journal of the American Medical Informatics Association 2013 Journal Article, cited 70 times
Website
BACKGROUND: The integration and visualization of multimodal datasets is a common challenge in biomedical informatics. Several recent studies of The Cancer Genome Atlas (TCGA) data have illustrated important relationships between morphology observed in whole-slide images, outcome, and genetic events. The pairing of genomics and rich clinical descriptions with whole-slide imaging provided by TCGA presents a unique opportunity to perform these correlative studies. However, better tools are needed to integrate the vast and disparate data types. OBJECTIVE: To build an integrated web-based platform supporting whole-slide pathology image visualization and data integration. MATERIALS AND METHODS: All images and genomic data were directly obtained from the TCGA and National Cancer Institute (NCI) websites. RESULTS: The Cancer Digital Slide Archive (CDSA) produced is accessible to the public (http://cancer.digitalslidearchive.net) and currently hosts more than 20,000 whole-slide images from 22 cancer types. DISCUSSION: The capabilities of CDSA are demonstrated using TCGA datasets to integrate pathology imaging with associated clinical, genomic and MRI measurements in glioblastomas and can be extended to other tumor types. CDSA also allows URL-based sharing of whole-slide images, and has preliminary support for directly sharing regions of interest and other annotations. Images can also be selected on the basis of other metadata, such as mutational profile, patient age, and other relevant characteristics. CONCLUSIONS: With the increasing availability of whole-slide scanners, analysis of digitized pathology images will become increasingly important in linking morphologic observations with genomic and clinical endpoints.

Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer

  • Gutman, David A
  • Dunn Jr, William D
  • Cobb, Jake
  • Stoner, Richard M
  • Kalpathy-Cramer, Jayashree
  • Erickson, Bradley
Frontiers in Neuroinformatics 2014 Journal Article, cited 12 times
Website
Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a light framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework to wrap around the REST application programming interface (API) and query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance.

Somatic mutations associated with MRI-derived volumetric features in glioblastoma

  • Gutman, David A
  • Dunn Jr, William D
  • Grossmann, Patrick
  • Cooper, Lee AD
  • Holder, Chad A
  • Ligon, Keith L
  • Alexander, Brian M
  • Aerts, Hugo JWL
Neuroradiology 2015 Journal Article, cited 45 times
Website
INTRODUCTION: MR imaging can noninvasively visualize tumor phenotype characteristics at the macroscopic level. Here, we investigated whether somatic mutations are associated with and can be predicted by MRI-derived tumor imaging features of glioblastoma (GBM). METHODS: Seventy-six GBM patients were identified from The Cancer Imaging Archive for whom preoperative T1-contrast (T1C) and T2-FLAIR MR images were available. For each tumor, a set of volumetric imaging features and their ratios were measured, including necrosis, contrast enhancing, and edema volumes. Imaging genomics analysis assessed the association of these features with mutation status of nine genes frequently altered in adult GBM. Finally, area under the curve (AUC) analysis was conducted to evaluate the predictive performance of imaging features for mutational status. RESULTS: Our results demonstrate that MR imaging features are strongly associated with mutation status. For example, TP53-mutated tumors had significantly smaller contrast enhancing and necrosis volumes (p = 0.012 and 0.017, respectively) and RB1-mutated tumors had significantly smaller edema volumes (p = 0.015) compared to wild-type tumors. MRI volumetric features were also found to significantly predict mutational status. For example, AUC analysis results indicated that TP53, RB1, NF1, EGFR, and PDGFRA mutations could each be significantly predicted by at least one imaging feature. CONCLUSION: MRI-derived volumetric features are significantly associated with and predictive of several cancer-relevant, drug-targetable DNA mutations in glioblastoma. These results may shed insight into unique growth characteristics of individual tumors at the macroscopic level resulting from molecular events as well as increase the use of noninvasive imaging in personalized medicine.

Optimising delineation accuracy of tumours in PET for radiotherapy planning using blind deconvolution

  • Guvenis, A
  • Koc, A
Radiation Protection Dosimetry 2015 Journal Article, cited 3 times
Website
Positron emission tomography (PET) imaging has been proven to be useful in radiotherapy planning for the determination of the metabolically active regions of tumours. Delineation of tumours, however, is a difficult task in part due to high noise levels and the partial volume effects originating mainly from the low camera resolution. The goal of this work is to study the effect of blind deconvolution on tumour volume estimation accuracy for different computer-aided contouring methods. The blind deconvolution estimates the point spread function (PSF) of the imaging system in an iterative manner in a way that the likelihood of the given image being the convolution output is maximised. In this way, the PSF of the imaging system does not need to be known. Data were obtained from a NEMA NU-2 IQ-based phantom with a GE DSTE-16 PET/CT scanner. The artificial tumour diameters were 13, 17, 22, 28 and 37 mm with a target/background ratio of 4:1. The tumours were delineated before and after blind deconvolution. Student's two-tailed paired t-test showed a significant decrease in volume estimation error (p < 0.001) when blind deconvolution was used in conjunction with computer-aided delineation methods. A manual delineation confirmation demonstrated an improvement from 26 to 16 % for the artificial tumour of size 37 mm while an improvement from 57 to 15 % was noted for the small tumour of 13 mm. Therefore, it can be concluded that blind deconvolution of reconstructed PET images may be used to increase tumour delineation accuracy.

Vector quantization-based automatic detection of pulmonary nodules in thoracic CT images

  • Han, Hao
  • Li, Lihong
  • Han, Fangfang
  • Zhang, Hao
  • Moore, William
  • Liang, Zhengrong
2013 Conference Proceedings, cited 8 times
Website

A novel computer-aided detection system for pulmonary nodule identification in CT images

  • Han, Hao
  • Li, Lihong
  • Wang, Huafeng
  • Zhang, Hao
  • Moore, William
  • Liang, Zhengrong
2014 Conference Proceedings, cited 5 times
Website

Descriptions and evaluations of methods for determining surface curvature in volumetric data

  • Hauenstein, Jacob D.
  • Newman, Timothy S.
Computers & Graphics 2020 Journal Article, cited 0 times
Website
Highlights:
  • Methods using convolution or fitting are often the most accurate.
  • The existing TE method is fast and accurate on noise-free data.
  • The OP method is faster than existing, similarly accurate methods on real data.
  • Even modest errors in curvature notably impact curvature-based renderings.
  • On real data, GSTH, GSTI, and OP produce the best curvature-based renderings.
Three methods developed for determining surface curvature in volumetric data are described, including one convolution-based method, one fitting-based method, and one method that uses normal estimates to directly determine curvature. Additionally, a study of the accuracy and computational performance of these methods and prior methods is presented. The study considers synthetic data, noise-added synthetic data, and real data. Sample volume renderings using curvature-based transfer functions, where curvatures were determined with the methods, are also exhibited.

A biomarker basing on radiomics for the prediction of overall survival in non–small cell lung cancer patients

  • He, Bo
  • Zhao, Wei
  • Pi, Jiang-Yuan
  • Han, Dan
  • Jiang, Yuan-Ming
  • Zhang, Zhen-Guang
Respiratory research 2018 Journal Article, cited 0 times
Website

Feasibility study of a multi-criteria decision-making based hierarchical model for multi-modality feature and multi-classifier fusion: Applications in medical prognosis prediction

  • He, Qiang
  • Li, Xin
  • Kim, DW Nathan
  • Jia, Xun
  • Gu, Xuejun
  • Zhen, Xin
  • Zhou, Linghong
Information Fusion 2020 Journal Article, cited 0 times
Website

Multiparametric MRI of prostate cancer: An update on state‐of‐the‐art techniques and their performance in detecting and localizing prostate cancer

  • Hegde, John V
  • Mulkern, Robert V
  • Panych, Lawrence P
  • Fennessy, Fiona M
  • Fedorov, Andriy
  • Maier, Stephan E
  • Tempany, Clare
Journal of Magnetic Resonance Imaging 2013 Journal Article, cited 164 times
Website
Magnetic resonance (MR) examinations of men with prostate cancer are most commonly performed for detecting, characterizing, and staging the extent of disease to best determine diagnostic or treatment strategies, which range from biopsy guidance to active surveillance to radical prostatectomy. Given both the exam's importance to individual treatment plans and the time constraints present for its operation at most institutions, it is essential to perform the study effectively and efficiently. This article reviews the most commonly employed modern techniques for prostate cancer MR examinations, exploring the relevant signal characteristics from the different methods discussed and relating them to intrinsic prostate tissue properties. Also, a review of recent articles using these methods to enhance clinical interpretation and assess clinical performance is provided.

Transfer learning with multiple convolutional neural networks for soft tissue sarcoma MRI classification

  • Hermessi, Haithem
  • Mourali, Olfa
  • Zagrouba, Ezzeddine
2019 Conference Proceedings, cited 1 times
Website

Design of a Patient-Specific Radiotherapy Treatment Target

  • Heyns, Michael
  • Breseman, Kelsey
  • Lee, Christopher
  • Bloch, B Nicholas
  • Jaffe, Carl
  • Xiang, Hong
2013 Conference Proceedings, cited 3 times
Website

Automated Muscle Segmentation from Clinical CT using Bayesian U-Net for Personalized Musculoskeletal Modeling

  • Hiasa, Yuta
  • Otake, Yoshito
  • Takao, Masaki
  • Ogawa, Takeshi
  • Sugano, Nobuhiko
  • Sato, Yoshinobu
IEEE Trans Med Imaging 2019 Journal Article, cited 2 times
Website
We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891+/-0.016 (mean+/-std) and an average symmetric surface distance (ASD) of 0.994+/-0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method which resulted in 0.845+/-0.031 DC and 1.556+/-0.444 mm ASD. We evaluated the validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in active learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.

Approaches to uncovering cancer diagnostic and prognostic molecular signatures

  • Hong, Shengjun
  • Huang, Yi
  • Cao, Yaqiang
  • Chen, Xingwei
  • Han, Jing-Dong J
Molecular & Cellular Oncology 2014 Journal Article, cited 2 times
Website
The recent rapid development of high-throughput technology enables the study of molecular signatures for cancer diagnosis and prognosis at multiple levels, from genomic and epigenomic to transcriptomic. These unbiased large-scale scans provide important insights into the detection of cancer-related signatures. In addition to single-layer signatures, such as gene expression and somatic mutations, integrating data from multiple heterogeneous platforms using a systematic approach has been proven to be particularly effective for the identification of classification markers. This approach not only helps to uncover essential driver genes and pathways in the cancer network that are responsible for the mechanisms of cancer development, but will also lead us closer to the ultimate goal of personalized cancer therapy.

A Pipeline for Lung Tumor Detection and Segmentation from CT Scans Using Dilated Convolutional Neural Networks

  • Hossain, S
  • Najeeb, S
  • Shahriyar, A
  • Abdullah, ZR
  • Haque, MA
2019 Conference Proceedings, cited 0 times
Website
Lung cancer is the most prevalent cancer worldwide with about 230,000 new cases every year. Most cases go undiagnosed until it’s too late, especially in developing countries and remote areas. Early detection is key to beating cancer. Towards this end, the work presented here proposes an automated pipeline for lung tumor detection and segmentation from 3D lung CT scans from the NSCLC-Radiomics Dataset. It also presents a new dilated hybrid-3D convolutional neural network architecture for tumor segmentation. First, a binary classifier chooses CT scan slices that may contain parts of a tumor. To segment the tumors, the selected slices are passed to the segmentation model which extracts feature maps from each 2D slice using dilated convolutions and then fuses the stacked maps through 3D convolutions - incorporating the 3D structural information present in the CT scan volume into the output. Lastly, the segmentation masks are passed through a post-processing block which cleans them up through morphological operations. The proposed segmentation model outperformed other contemporary models like LungNet and U-Net. The average and median dice coefficient on the test set for the proposed model were 65.7% and 70.39% respectively. The next best model, LungNet had dice scores of 62.67% and 66.78%.
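
A minimal sketch of the general architecture pattern described (per-slice 2D dilated convolutions whose stacked feature maps are fused by 3D convolutions) is shown below; it is not the authors' network, and the channel sizes, depths, and dilation rates are illustrative.

    import torch
    import torch.nn as nn

    class DilatedHybrid3D(nn.Module):
        def __init__(self, in_ch=1, feat=16):
            super().__init__()
            # Per-slice 2D path with increasing dilation to enlarge the receptive field.
            self.slice_net = nn.Sequential(
                nn.Conv2d(in_ch, feat, 3, padding=1, dilation=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
            )
            # 3D fusion across neighbouring slices.
            self.fuse = nn.Sequential(
                nn.Conv3d(feat, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv3d(feat, 1, kernel_size=1),
            )

        def forward(self, volume):                        # volume: (B, 1, D, H, W)
            b, c, d, h, w = volume.shape
            slices = volume.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
            feats = self.slice_net(slices)                # (B*D, F, H, W)
            feats = feats.reshape(b, d, -1, h, w).permute(0, 2, 1, 3, 4)  # (B, F, D, H, W)
            return torch.sigmoid(self.fuse(feats))        # per-voxel tumour probability

    model = DilatedHybrid3D()
    out = model(torch.randn(1, 1, 8, 64, 64))
    print(out.shape)   # torch.Size([1, 1, 8, 64, 64])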

Publishing descriptions of non-public clinical datasets: proposed guidance for researchers, repositories, editors and funding organisations

  • Hrynaszkiewicz, Iain
  • Khodiyar, Varsha
  • Hufton, Andrew L
  • Sansone, Susanna-Assunta
Research Integrity and Peer Review 2016 Journal Article, cited 8 times
Website
Sharing of experimental clinical research data usually happens between individuals or research groups rather than via public repositories, in part due to the need to protect research participant privacy. This approach to data sharing makes it difficult to connect journal articles with their underlying datasets and is often insufficient for ensuring access to data in the long term. Voluntary data sharing services such as the Yale Open Data Access (YODA) and Clinical Study Data Request (CSDR) projects have increased accessibility to clinical datasets for secondary uses while protecting patient privacy and the legitimacy of secondary analyses but these resources are generally disconnected from journal articles-where researchers typically search for reliable information to inform future research. New scholarly journal and article types dedicated to increasing accessibility of research data have emerged in recent years and, in general, journals are developing stronger links with data repositories. There is a need for increased collaboration between journals, data repositories, researchers, funders, and voluntary data sharing services to increase the visibility and reliability of clinical research. Using the journal Scientific Data as a case study, we propose and show examples of changes to the format and peer-review process for journal articles to more robustly link them to data that are only available on request. We also propose additional features for data repositories to better accommodate non-public clinical datasets, including Data Use Agreements (DUAs).

Performance of sparse-view CT reconstruction with multi-directional gradient operators

  • Hsieh, C. J.
  • Jin, S. C.
  • Chen, J. C.
  • Kuo, C. W.
  • Wang, R. T.
  • Chu, W. C.
PLoS One 2019 Journal Article, cited 0 times
Website
To further reduce the noise and artifacts in the reconstructed image of sparse-view CT, we have modified the traditional total variation (TV) methods, which only calculate the gradient variations in x and y directions, and have proposed 8- and 26-directional (the multi-directional) gradient operators for TV calculation to improve the quality of reconstructed images. Different from traditional TV methods, the proposed 8- and 26-directional gradient operators additionally consider the diagonal directions in TV calculation. The proposed method preserves more information from original tomographic data in the step of gradient transform to obtain better reconstruction image qualities. Our algorithms were tested using two-dimensional Shepp-Logan phantom and three-dimensional clinical CT images. Results were evaluated using the root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and universal quality index (UQI). All the experiment results show that the sparse-view CT images reconstructed using the proposed 8- and 26-directional gradient operators are superior to those reconstructed by traditional TV methods. Qualitative and quantitative analyses indicate that the more number of directions that the gradient operator has, the better images can be reconstructed. The 8- and 26-directional gradient operators we proposed have better capability to reduce noise and artifacts than traditional TV methods, and they are applicable to be applied to and combined with existing CT reconstruction algorithms derived from CS theory to produce better image quality in sparse-view reconstruction.
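
A minimal sketch of an 8-directional total-variation term (extending the usual x/y differences with the diagonal neighbours) follows; the diagonal weighting and periodic boundary handling are illustrative choices, not necessarily the paper's.

    import numpy as np

    def tv_8dir(img):
        """Sum of absolute finite differences to the 8 neighbours of every pixel
        (np.roll wraps at the borders, which is acceptable for a sketch)."""
        shifts = [(0, 1), (1, 0), (1, 1), (1, -1),
                  (0, -1), (-1, 0), (-1, -1), (-1, 1)]
        tv = 0.0
        for dy, dx in shifts:
            diff = img - np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            w = 1.0 / np.hypot(dy, dx)      # down-weight diagonals by their larger spacing
            tv += w * np.abs(diff).sum()
        return tv

    phantom = np.zeros((64, 64))
    phantom[20:40, 20:40] = 1.0             # piecewise-constant test image
    print(tv_8dir(phantom))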

Quantitative glioma grading using transformed gray-scale invariant textures of MRI

  • Hsieh, Kevin Li-Chun
  • Chen, Cheng-Yu
  • Lo, Chung-Ming
Computers in biology and medicine 2017 Journal Article, cited 8 times
Website
Background: A computer-aided diagnosis (CAD) system based on intensity-invariant magnetic resonance (MR) imaging features was proposed to grade gliomas for general application to various scanning systems and settings. Method: In total, 34 glioblastomas and 73 lower-grade gliomas comprised the image database to evaluate the proposed CAD system. For each case, the local texture on MR images was transformed into a local binary pattern (LBP) which was intensity-invariant. From the LBP, quantitative image features, including the histogram moment and textures, were extracted and combined in a logistic regression classifier to establish a malignancy prediction model. The performance was compared to conventional texture features to demonstrate the improvement. Results: The performance of the CAD system based on LBP features achieved an accuracy of 93% (100/107), a sensitivity of 97% (33/34), a negative predictive value of 99% (67/68), and an area under the receiver operating characteristic curve (Az) of 0.94, which were significantly better than the conventional texture features: an accuracy of 84% (90/107), a sensitivity of 76% (26/34), a negative predictive value of 89% (64/72), and an Az of 0.89 with respective p values of 0.0303, 0.0122, 0.0201, and 0.0334. Conclusions: More-robust texture features were extracted from MR images and combined into a significantly better CAD system for distinguishing glioblastomas from lower-grade gliomas. The proposed CAD system would be more practical in clinical use with various imaging systems and settings.
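
A minimal sketch of intensity-invariant LBP feature extraction feeding a logistic-regression grader is given below; the LBP parameters and the synthetic smooth-versus-rough classes are illustrative assumptions, not the study's settings.

    import numpy as np
    from scipy import ndimage
    from skimage.feature import local_binary_pattern
    from sklearn.linear_model import LogisticRegression

    def lbp_histogram(roi, P=8, R=1.0):
        """Uniform-LBP histogram of a 2D grey-scale region of interest."""
        codes = local_binary_pattern(roi, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    # Toy data: "rough" ROIs (raw noise) vs "smooth" ROIs (blurred noise).
    rng = np.random.default_rng(0)
    rois, labels = [], []
    for i in range(40):
        img = rng.normal(size=(32, 32))
        if i % 2:
            img = ndimage.gaussian_filter(img, sigma=1.5)
        rois.append(lbp_histogram(img))
        labels.append(i % 2)
    X, y = np.array(rois), np.array(labels)

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("training accuracy:", clf.score(X, y))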

Computer-aided grading of gliomas based on local and global MRI features

  • Hsieh, Kevin Li-Chun
  • Lo, Chung-Ming
  • Hsiao, Chih-Jou
Computer methods and programs in biomedicine 2017 Journal Article, cited 13 times
Website
BACKGROUND AND OBJECTIVES: A computer-aided diagnosis (CAD) system based on quantitative magnetic resonance imaging (MRI) features was developed to evaluate the malignancy of diffuse gliomas, which are central nervous system tumors. METHODS: The acquired image database for the CAD performance evaluation was composed of 34 glioblastomas and 73 diffuse lower-grade gliomas. In each case, tissues enclosed in a delineated tumor area were analyzed according to their gray-scale intensities on MRI scans. Four histogram moment features describing the global gray-scale distributions of gliomas tissues and 14 textural features were used to interpret local correlations between adjacent pixel values. With a logistic regression model, the individual feature set and a combination of both feature sets were used to establish the malignancy prediction model. RESULTS: Performances of the CAD system using global, local, and the combination of both image feature sets achieved accuracies of 76%, 83%, and 88%, respectively. Compared to global features, the combined features had significantly better accuracy (p = 0.0213). With respect to the pathology results, the CAD classification obtained substantial agreement kappa = 0.698, p < 0.001. CONCLUSIONS: Numerous proposed image features were significant in distinguishing glioblastomas from lower-grade gliomas. Combining them further into a malignancy prediction model would be promising in providing diagnostic suggestions for clinical use.

Effect of a computer-aided diagnosis system on radiologists' performance in grading gliomas with MRI

  • Hsieh, Kevin Li-Chun
  • Tsai, Ruei-Je
  • Teng, Yu-Chuan
  • Lo, Chung-Ming
PLoS One 2017 Journal Article, cited 0 times
Website
The effects of a computer-aided diagnosis (CAD) system based on quantitative intensity features with magnetic resonance (MR) imaging (MRI) were evaluated by examining radiologists' performance in grading gliomas. The acquired MRI database included 71 lower-grade gliomas and 34 glioblastomas. Quantitative image features were extracted from the tumor area and combined in a CAD system to generate a prediction model. The effect of the CAD system was evaluated in a two-stage procedure. First, a radiologist performed a conventional reading. A sequential second reading was determined with a malignancy estimation by the CAD system. Each MR image was regularly read by one radiologist out of a group of three radiologists. The CAD system achieved an accuracy of 87% (91/105), a sensitivity of 79% (27/34), a specificity of 90% (64/71), and an area under the receiver operating characteristic curve (Az) of 0.89. In the evaluation, the radiologists' Az values significantly improved from 0.81, 0.87, and 0.84 to 0.90, 0.90, and 0.88 with p = 0.0011, 0.0076, and 0.0167, respectively. Based on the MR image features, the proposed CAD system not only performed well in distinguishing glioblastomas from lower-grade gliomas but also provided suggestions about glioma grading to reinforce radiologists' confidence rating.

Brain Tumor Segmentation Using Multi-Cascaded Convolutional Neural Networks and Conditional Random Field

  • Hu, Kai
  • Gan, Qinghai
  • Zhang, Yuan
  • Deng, Shuhua
  • Xiao, Fen
  • Huang, Wei
  • Cao, Chunhong
  • Gao, Xieping
IEEE Access 2019 Journal Article, cited 2 times
Website
Accurate segmentation of brain tumor is an indispensable component for cancer diagnosis and treatment. In this paper, we propose a novel brain tumor segmentation method based on multicascaded convolutional neural network (MCCNN) and fully connected conditional random fields (CRFs). The segmentation process mainly includes the following two steps. First, we design a multi-cascaded network architecture by combining the intermediate results of several connected components to take the local dependencies of labels into account and make use of multi-scale features for the coarse segmentation. Second, we apply CRFs to consider the spatial contextual information and eliminate some spurious outputs for the fine segmentation. In addition, we use image patches obtained from axial, coronal, and sagittal views to respectively train three segmentation models, and then combine them to obtain the final segmentation result. The validity of the proposed method is evaluated on three publicly available databases. The experimental results show that our method achieves competitive performance compared with the state-of-the-art approaches.

A neural network approach to lung nodule segmentation

  • Hu, Yaoxiu
  • Menon, Prahlad G
2016 Conference Proceedings, cited 1 times
Website

Image Super-Resolution Algorithm Based on an Improved Sparse Autoencoder

  • Huang, Detian
  • Huang, Weiqin
  • Yuan, Zhenguo
  • Lin, Yanming
  • Zhang, Jian
  • Zheng, Lixin
Information 2018 Journal Article, cited 0 times
Website
Due to the limitations of the resolution of the imaging system and the influence of scene changes and other factors, sometimes only low-resolution images can be acquired, which cannot satisfy the practical application’s requirements. To improve the quality of low-resolution images, a novel super-resolution algorithm based on an improved sparse autoencoder is proposed. Firstly, in the training set preprocessing stage, the high- and low-resolution image training sets are constructed, respectively, by using high-frequency information of the training samples as the characterization, and then the zero-phase component analysis whitening technique is utilized to decorrelate the formed joint training set to reduce its redundancy. Secondly, a constructed sparse regularization term is added to the cost function of the traditional sparse autoencoder to further strengthen the sparseness constraint on the hidden layer. Finally, in the dictionary learning stage, the improved sparse autoencoder is adopted to achieve unsupervised dictionary learning to improve the accuracy and stability of the dictionary. Experimental results validate that the proposed algorithm outperforms the existing algorithms both in terms of the subjective visual perception and the objective evaluation indices, including the peak signal-to-noise ratio and the structural similarity measure.
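
A minimal sketch of ZCA whitening applied to a joint patch training set follows; the epsilon and patch dimensions are illustrative assumptions.

    import numpy as np

    def zca_whiten(X, eps=1e-5):
        """X: (n_samples, n_features). Returns whitened data and the ZCA matrix."""
        Xc = X - X.mean(axis=0)                       # centre each feature
        cov = Xc.T @ Xc / Xc.shape[0]
        U, S, _ = np.linalg.svd(cov)                  # cov is symmetric positive semidefinite
        W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
        return Xc @ W, W

    rng = np.random.default_rng(0)
    patches = rng.normal(size=(1000, 64))             # e.g. flattened 8x8 high-frequency patches
    white, W = zca_whiten(patches)
    print(np.round(np.cov(white, rowvar=False)[:3, :3], 2))  # approximately the identity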

Assessment of a radiomic signature developed in a general NSCLC cohort for predicting overall survival of ALK-positive patients with different treatment types

  • Huang, Lyu
  • Chen, Jiayan
  • Hu, Weigang
  • Xu, Xinyan
  • Liu, Di
  • Wen, Junmiao
  • Lu, Jiayu
  • Cao, Jianzhao
  • Zhang, Junhua
  • Gu, Yu
  • Wang, Jiazhou
  • Fan, Min
Clinical lung cancer 2019 Journal Article, cited 0 times
Website
Objectives: To investigate the potential of a radiomic signature developed in a general NSCLC cohort for predicting the overall survival of ALK-positive patients with different treatment types. Methods: After test-retest in the RIDER dataset, 132 features (ICC>0.9) were selected in the LASSO Cox regression model with a leave-one-out cross-validation. The NSCLC Radiomics collection from TCIA was randomly divided into a training set (N=254) and a validation set (N=63) to develop a general radiomic signature for NSCLC. In our ALK+ set, 35 patients received targeted therapy and 19 patients received non-targeted therapy. The developed signature was tested later in this ALK+ set. Performance of the signature was evaluated with C-index and stratification analysis. Results: The general signature has good performance (C-index>0.6, log-rank p-value<0.05) in the NSCLC Radiomics collection. It includes five features: Geom_va_ratio, W_GLCM_LH_Std, W_GLCM_LH_DV, W_GLCM_HH_IM2 and W_his_HL_mean (Supplementary Table S2). Its accuracy of predicting overall survival in the ALK+ set achieved 0.649 (95% CI=0.640-0.658). Nonetheless, impaired performance was observed in the targeted therapy group (C-index=0.573, 95% CI=0.556-0.589) while significantly improved performance was observed in the non-targeted therapy group (C-index=0.832, 95% CI=0.832-0.852). Stratification analysis also showed that the general signature could only identify high- and low-risk patients in the non-targeted therapy group (log-rank p-value=0.00028). Conclusions: This preliminary study suggests that the applicability of a general signature to ALK-positive patients is limited. The general radiomic signature seems to be only applicable to ALK-positive patients who had received non-targeted therapy, which indicates that developing special radiomics signatures for patients treated with TKI might be necessary. Abbreviations: TCIA, The Cancer Imaging Archive; ALK, anaplastic lymphoma kinase; NSCLC, non-small cell lung cancer; EML4-ALK fusion, the echinoderm microtubule-associated protein-like 4-anaplastic lymphoma kinase fusion; C-index, concordance index; CI, confidence interval; ICC, intra-class correlation coefficient; OS, overall survival; LASSO, least absolute shrinkage and selection operator; EGFR, epidermal growth factor receptor; TKI, tyrosine kinase inhibitor.

The Study on Data Hiding in Medical Images

  • Huang, Li-Chin
  • Tseng, Lin-Yu
  • Hwang, Min-Shiang
International Journal of Network Security 2012 Journal Article, cited 25 times
Website
Reversible data hiding plays an important role in medical image systems. Many hospitals have already applied electronic medical information in healthcare systems. Reversible data hiding is one of the feasible methodologies to protect individual privacy and confidential information. With the application of several high-quality medical imaging devices, diseases can be detected and treated at an earlier stage, and demand has been rising for recognizing complicated anatomical structures in high-quality images. However, most data hiding methods are still applied to 8-bit depth medical images with 255 intensity levels. This paper summarizes the existing reversible data hiding algorithms and introduces basic knowledge of medical images.

A reversible data hiding method by histogram shifting in high quality medical images

  • Huang, Li-Chin
  • Tseng, Lin-Yu
  • Hwang, Min-Shiang
Journal of Systems and Software 2013 Journal Article, cited 60 times
Website
High-quality medical images, in which each pixel is expressed with 16-bit depth, are in increasing demand for recognizing complicated anatomical structures. However, most data hiding algorithms are still applied to 8-bit depth medical images. We propose a histogram shifting method for reversible data hiding and test it on high-bit-depth medical images. We exploit the high correlation among pixels in local image blocks that results from the smooth surfaces of anatomical structures in medical images: a difference value is computed for each block of pixels to produce a difference histogram in which the secret bits are embedded. During the data embedding stage, the image blocks are divided into two categories with two corresponding embedding strategies. Via an inverse histogram shifting mechanism, the original image is accurately recovered after the hidden data are extracted. Based on the requirements of medical images for data hiding, we propose six criteria: (1) well suited to high-quality medical images, (2) free of salt-and-pepper artifacts, (3) applicable to medical images with smooth surfaces, (4) well suited to sparse histograms of intensity levels, (5) no location map required, and (6) adjustable data embedding capacity, PSNR, and inter-slice PSNR. The proposed data hiding method satisfies all six of these criteria.
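
As a concrete illustration of the histogram-shifting idea on 16-bit data, here is a minimal single-pass sketch in Python. It embeds bits at the histogram peak of the raw image; the paper's scheme additionally works on block difference histograms and handles inter-slice PSNR, so this is only the underlying mechanism, not the published method.

    import numpy as np

    def hs_embed(img, bits):
        """Basic histogram-shifting embedding on a 16-bit image (sketch).

        Assumes the chosen bin above the peak can be emptied without overflow;
        a complete scheme records overflow handling for full reversibility."""
        hist = np.bincount(img.ravel(), minlength=65536)
        peak = int(hist.argmax())                           # most frequent intensity
        zero = peak + 1 + int(hist[peak + 1:].argmin())     # emptiest bin above the peak
        out = img.astype(np.int64)
        out[(out > peak) & (out < zero)] += 1               # open a gap next to the peak
        flat = out.ravel()
        carriers = np.flatnonzero(flat == peak)[:len(bits)] # capacity = count of peak pixels
        flat[carriers] += np.asarray(bits[:len(carriers)], dtype=np.int64)
        return flat.reshape(img.shape).astype(np.uint16), peak, zero

    # Extraction reverses the steps: pixels equal to peak carry bit 0 and peak+1 carry bit 1,
    # then every value in (peak, zero] is shifted back down by one to restore the original image.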

The Impact of Arterial Input Function Determination Variations on Prostate Dynamic Contrast-Enhanced Magnetic Resonance Imaging Pharmacokinetic Modeling: A Multicenter Data Analysis Challenge

  • Huang, Wei
  • Chen, Yiyi
  • Fedorov, Andriy
  • Li, Xia
  • Jajamovich, Guido H
  • Malyarenko, Dariya I
  • Aryal, Madhava P
  • LaViolette, Peter S
  • Oborski, Matthew J
  • O'Sullivan, Finbarr
Tomography: a journal for imaging research 2016 Journal Article, cited 21 times
Website

Variations of dynamic contrast-enhanced magnetic resonance imaging in evaluation of breast cancer therapy response: a multicenter data analysis challenge

  • Huang, W.
  • Li, X.
  • Chen, Y.
  • Li, X.
  • Chang, M. C.
  • Oborski, M. J.
  • Malyarenko, D. I.
  • Muzi, M.
  • Jajamovich, G. H.
  • Fedorov, A.
  • Tudorica, A.
  • Gupta, S. N.
  • Laymon, C. M.
  • Marro, K. I.
  • Dyvorne, H. A.
  • Miller, J. V.
  • Barboriak, D. P.
  • Chenevert, T. L.
  • Yankeelov, T. E.
  • Mountz, J. M.
  • Kinahan, P. E.
  • Kikinis, R.
  • Taouli, B.
  • Fennessy, F.
  • Kalpathy-Cramer, J.
Transl Oncol 2014 Journal Article, cited 60 times
Website
Pharmacokinetic analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) time-course data allows estimation of quantitative parameters such as Ktrans (rate constant for plasma/interstitium contrast agent transfer), ve (extravascular extracellular volume fraction), and vp (plasma volume fraction). A plethora of factors in DCE-MRI data acquisition and analysis can affect accuracy and precision of these parameters and, consequently, the utility of quantitative DCE-MRI for assessing therapy response. In this multicenter data analysis challenge, DCE-MRI data acquired at one center from 10 patients with breast cancer before and after the first cycle of neoadjuvant chemotherapy were shared and processed with 12 software tools based on the Tofts model (TM), extended TM, and Shutter-Speed model. Inputs of tumor region of interest definition, pre-contrast T1, and arterial input function were controlled to focus on the variations in parameter value and response prediction capability caused by differences in models and associated algorithms. Considerable parameter variations were observed, with the within-subject coefficient of variation (wCV) values for Ktrans and vp being as high as 0.59 and 0.82, respectively. Parameter agreement improved when only algorithms based on the same model were compared, e.g., the Ktrans intraclass correlation coefficient increased to as high as 0.84. Agreement in parameter percentage change was much better than that in absolute parameter value, e.g., the pairwise concordance correlation coefficient improved from 0.047 (for Ktrans) to 0.92 (for Ktrans percentage change) in comparing two TM algorithms. Nearly all algorithms provided good to excellent (univariate logistic regression c-statistic value ranging from 0.8 to 1.0) early prediction of therapy response using the metrics of mean tumor Ktrans and kep (= Ktrans/ve, intravasation rate constant) after the first therapy cycle and the corresponding percentage changes. The results suggest that the interalgorithm parameter variations are largely systematic, which are not likely to significantly affect the utility of DCE-MRI for assessment of therapy response.
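
For reference, the standard Tofts model that several of the benchmarked tools implement expresses the tissue concentration as a convolution of the arterial input function with an exponential kernel. Below is a minimal numerical sketch under a uniform-sampling assumption; it is not any of the twelve evaluated software tools.

    import numpy as np
    from scipy.optimize import curve_fit

    def tofts_ct(t, ktrans, ve, cp):
        # Ct(t) = Ktrans * integral_0^t Cp(tau) * exp(-kep * (t - tau)) dtau,  with kep = Ktrans / ve
        kep = ktrans / ve
        dt = t[1] - t[0]                          # assumes uniformly sampled time points
        return ktrans * np.convolve(cp, np.exp(-kep * t))[:len(t)] * dt

    # Fitting a measured tissue curve ct_meas given an AIF cp (both hypothetical arrays):
    # popt, _ = curve_fit(lambda tt, kt, ve: tofts_ct(tt, kt, ve, cp), t, ct_meas, p0=[0.1, 0.2])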

Radiomics of NSCLC: Quantitative CT Image Feature Characterization and Tumor Shrinkage Prediction

  • Hunter, Luke
2013 Thesis, cited 4 times
Website

Collage CNN for Renal Cell Carcinoma Detection from CT

  • Hussain, Mohammad Arafat
  • Amir-Khalili, Alborz
  • Hamarneh, Ghassan
  • Abugharbieh, Rafeef
2017 Conference Proceedings, cited 0 times
Website

Learnable image histograms-based deep radiomics for renal cell carcinoma grading and staging

  • Hussain, M. A.
  • Hamarneh, G.
  • Garbi, R.
Comput Med Imaging Graph 2021 Journal Article, cited 0 times
Website
Fuhrman cancer grading and tumor-node-metastasis (TNM) cancer staging systems are typically used by clinicians in the treatment planning of renal cell carcinoma (RCC), a common cancer in men and women worldwide. Pathologists typically use percutaneous renal biopsy for RCC grading, while staging is performed by volumetric medical image analysis before renal surgery. Recent studies suggest that clinicians can effectively perform these classification tasks non-invasively by analyzing image texture features of RCC from computed tomography (CT) data. However, image feature identification for RCC grading and staging often relies on laborious manual processes, which is error prone and time-intensive. To address this challenge, this paper proposes a learnable image histogram in the deep neural network framework that can learn task-specific image histograms with variable bin centers and widths. The proposed approach enables learning statistical context features from raw medical data, which cannot be performed by a conventional convolutional neural network (CNN). The linear basis function of our learnable image histogram is piece-wise differentiable, enabling back-propagating errors to update the variable bin centers and widths during training. This novel approach can segregate the CT textures of an RCC in different intensity spectra, which enables efficient Fuhrman low (I/II) and high (III/IV) grading as well as RCC low (I/II) and high (III/IV) staging. The proposed method is validated on a clinical CT dataset of 159 patients from The Cancer Imaging Archive (TCIA) database, and it demonstrates 80% and 83% accuracy in RCC grading and staging, respectively.
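
The core idea of the learnable image histogram, soft bin assignments through a piece-wise linear basis with trainable centres and widths, can be sketched generically, e.g. in PyTorch. This is a re-implementation of the concept under an assumed intensity normalisation, not the authors' layer.

    import torch
    import torch.nn as nn

    class LearnableHistogram(nn.Module):
        """Soft histogram with trainable bin centres/widths (triangular basis)."""
        def __init__(self, n_bins=16):
            super().__init__()
            self.centers = nn.Parameter(torch.linspace(0.0, 1.0, n_bins))
            self.widths = nn.Parameter(torch.full((n_bins,), 1.0 / n_bins))

        def forward(self, x):
            # x: (B, 1, H, W) intensities in [0, 1]; returns (B, n_bins) soft bin fractions
            x = x.flatten(1).unsqueeze(-1)                                   # (B, N, 1)
            vote = 1.0 - (x - self.centers).abs() / self.widths.abs().clamp_min(1e-4)
            return vote.clamp_min(0.0).mean(dim=1)   # piece-wise linear, so gradients reach centres and widths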

Automatic MRI Breast tumor Detection using Discrete Wavelet Transform and Support Vector Machines

  • Ibraheem, Amira Mofreh
  • Rahouma, Kamel Hussein
  • Hamed, Hesham F. A.
2019 Conference Paper, cited 0 times
Website
Every human has the right to live a healthy life free of serious diseases. Cancer is among the most serious diseases facing humans and can lead to death, so definitive solutions are needed to eliminate these diseases and protect people from them. Breast cancer is considered one of the most dangerous types of cancer facing women in particular. Early examination should be performed periodically, and diagnosis must be sensitive and effective to preserve women's lives. There are various types of breast cancer images, but magnetic resonance imaging (MRI) has become one of the important modalities for breast cancer detection. In this work, a new method is presented to detect breast cancer using MRI images preprocessed with a 2D median filter. Features are extracted from the images using the discrete wavelet transform (DWT) and reduced to 13 features. Then, a support vector machine (SVM) is used to detect whether a tumor is present. Simulation results were obtained using MRI datasets extracted from the standard breast MRI database known as the "Reference Image Database to Evaluate Response (RIDER)". The proposed method achieved an accuracy of 98.03% on the available MRI database, with a total processing time of 0.894 seconds. The obtained results demonstrate the superiority of the proposed system over those available in the literature.
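
A minimal sketch of the feature-extraction and classification pipeline described above (median filtering, 2D DWT features, SVM), using PyWavelets and scikit-learn with hypothetical arrays; the paper's exact wavelet choice and 13-feature reduction are not reproduced here.

    import numpy as np
    import pywt
    from scipy.ndimage import median_filter
    from sklearn.svm import SVC

    def dwt_features(img, wavelet="db4", level=2):
        # Denoise, decompose, and summarise each wavelet sub-band by energy and spread.
        img = median_filter(img, size=3)
        coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
        feats = []
        for c in coeffs:
            for band in ([c] if isinstance(c, np.ndarray) else c):
                feats += [float(np.mean(band ** 2)), float(np.std(band))]
        return np.asarray(feats)

    # images: list of 2-D MRI slices, labels: 0 = normal, 1 = tumour (hypothetical)
    # X = np.stack([dwt_features(im) for im in images])
    # clf = SVC(kernel="rbf").fit(X, labels)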

Brain tumor segmentation in multi‐spectral MRI using convolutional neural networks (CNN)

  • Iqbal, Sajid
  • Ghani, M Usman
  • Saba, Tanzila
  • Rehman, Amjad
Microscopy research and technique 2018 Journal Article, cited 8 times
Website

A rotation and translation invariant method for 3D organ image classification using deep convolutional neural networks

  • Islam, Kh Tohidul
  • Wijewickrema, Sudanthi
  • O’Leary, Stephen
PeerJ Computer Science 2019 Journal Article, cited 0 times
Website
Three-dimensional (3D) medical image classification is useful in applications such as disease diagnosis and content-based medical image retrieval. It is a challenging task due to several reasons. First, image intensity values are vastly different depending on the image modality. Second, intensity values within the same image modality may vary depending on the imaging machine and artifacts may also be introduced in the imaging process. Third, processing 3D data requires high computational power. In recent years, significant research has been conducted in the field of 3D medical image classification. However, most of these make assumptions about patient orientation and imaging direction to simplify the problem and/or work with the full 3D images. As such, they perform poorly when these assumptions are not met. In this paper, we propose a method of classification for 3D organ images that is rotation and translation invariant. To this end, we extract a representative two-dimensional (2D) slice along the plane of best symmetry from the 3D image. We then use this slice to represent the 3D image and use a 20-layer deep convolutional neural network (DCNN) to perform the classification task. We show experimentally, using multi-modal data, that our method is comparable to existing methods when the assumptions of patient orientation and viewing direction are met. Notably, it shows similarly high accuracy even when these assumptions are violated, where other methods fail. We also explore how this method can be used with other DCNN models as well as conventional classification approaches.

Magnetic resonance image features identify glioblastoma phenotypic subtypes with distinct molecular pathway activities

  • Itakura, Haruka
  • Achrol, Achal S
  • Mitchell, Lex A
  • Loya, Joshua J
  • Liu, Tiffany
  • Westbroek, Erick M
  • Feroze, Abdullah H
  • Rodriguez, Scott
  • Echegaray, Sebastian
  • Azad, Tej D
Science translational medicine 2015 Journal Article, cited 90 times
Website

Quantitative imaging in radiation oncology: An emerging science and clinical service

  • Jaffray, DA
  • Chung, C
  • Coolens, C
  • Foltz, W
  • Keller, H
  • Menard, C
  • Milosevic, M
  • Publicover, J
  • Yeung, I
2015 Conference Proceedings, cited 9 times
Website

Prediction of Treatment Response to Neoadjuvant Chemotherapy for Breast Cancer via Early Changes in Tumor Heterogeneity Captured by DCE-MRI Registration

  • Jahani, Nariman
  • Cohen, Eric
  • Hsieh, Meng-Kang
  • Weinstein, Susan P
  • Pantalone, Lauren
  • Hylton, Nola
  • Newitt, David
  • Davatzikos, Christos
  • Kontos, Despina
Scientific Reports 2019 Journal Article, cited 0 times
Website
We analyzed DCE-MR images from 132 women with locally advanced breast cancer from the I-SPY1 trial to evaluate changes of intra-tumor heterogeneity for augmenting early prediction of pathologic complete response (pCR) and recurrence-free survival (RFS) after neoadjuvant chemotherapy (NAC). Utilizing image registration, voxel-wise changes including tumor deformations and changes in DCE-MRI kinetic features were computed to characterize heterogeneous changes within the tumor. Using five-fold cross-validation, logistic regression and Cox regression were performed to model pCR and RFS, respectively. The extracted imaging features were evaluated in augmenting established predictors, including functional tumor volume (FTV) and histopathologic and demographic factors, using the area under the curve (AUC) and the C-statistic as performance measures. The extracted voxel-wise features were also compared to analogous conventional aggregated features to evaluate the potential advantage of voxel-wise analysis. Voxel-wise features improved prediction of pCR (AUC = 0.78 (±0.03) vs 0.71 (±0.04), p < 0.05) and RFS (C-statistic = 0.76 (±0.05) vs 0.63 (±0.01), p < 0.05), while models based on analogous aggregate imaging features did not show appreciable performance changes (p > 0.05). Furthermore, all selected voxel-wise features demonstrated significant association with outcome (p < 0.05). Thus, precise measures of voxel-wise changes in tumor heterogeneity extracted from registered DCE-MRI scans can improve early prediction of neoadjuvant treatment outcomes in locally advanced breast cancer.

Genomic mapping and survival prediction in glioblastoma: molecular subclassification strengthened by hemodynamic imaging biomarkers

  • Jain, Rajan
  • Poisson, Laila
  • Narang, Jayant
  • Gutman, David
  • Scarpace, Lisa
  • Hwang, Scott N
  • Holder, Chad
  • Wintermark, Max
  • Colen, Rivka R
  • Kirby, Justin
  • Freymann, John
  • Brat, Daniel J
  • Jaffe, Carl
  • Mikkelsen, Tom
Radiology 2013 Journal Article, cited 99 times
Website
PURPOSE: To correlate tumor blood volume, measured by using dynamic susceptibility contrast material-enhanced T2*-weighted magnetic resonance (MR) perfusion studies, with patient survival and determine its association with molecular subclasses of glioblastoma (GBM). MATERIALS AND METHODS: This HIPAA-compliant retrospective study was approved by institutional review board. Fifty patients underwent dynamic susceptibility contrast-enhanced T2*-weighted MR perfusion studies and had gene expression data available from the Cancer Genome Atlas. Relative cerebral blood volume (rCBV) (maximum rCBV [rCBV(max)] and mean rCBV [rCBV(mean)]) of the contrast-enhanced lesion as well as rCBV of the nonenhanced lesion (rCBV(NEL)) were measured. Patients were subclassified according to the Verhaak and Phillips classification schemas, which are based on similarity to defined genomic expression signature. We correlated rCBV measures with the molecular subclasses as well as with patient overall survival by using Cox regression analysis. RESULTS: No statistically significant differences were noted for rCBV(max), rCBV(mean) of contrast-enhanced lesion or rCBV(NEL) between the four Verhaak classes or the three Phillips classes. However, increased rCBV measures are associated with poor overall survival in GBM. The rCBV(max) (P = .0131) is the strongest predictor of overall survival regardless of potential confounders or molecular classification. Interestingly, including the Verhaak molecular GBM classification in the survival model clarifies the association of rCBV(mean) with patient overall survival (hazard ratio: 1.46, P = .0212) compared with rCBV(mean) alone (hazard ratio: 1.25, P = .1918). Phillips subclasses are not predictive of overall survival nor do they affect the predictive ability of rCBV measures on overall survival. CONCLUSION: The rCBV(max) measurements could be used to predict patient overall survival independent of the molecular subclasses of GBM; however, Verhaak classifiers provided additional information, suggesting that molecular markers could be used in combination with hemodynamic imaging biomarkers in the future.

Correlation of perfusion parameters with genes related to angiogenesis regulation in glioblastoma: a feasibility study

  • Jain, R
  • Poisson, L
  • Narang, J
  • Scarpace, L
  • Rosenblum, ML
  • Rempel, S
  • Mikkelsen, T
American Journal of Neuroradiology 2012 Journal Article, cited 39 times
Website
BACKGROUND AND PURPOSE: Integration of imaging and genomic data is critical for a better understanding of gliomas, particularly considering the increasing focus on the use of imaging biomarkers for patient survival and treatment response. The purpose of this study was to correlate CBV and PS measured by using PCT with the genes regulating angiogenesis in GBM. MATERIALS AND METHODS: Eighteen patients with WHO grade IV gliomas underwent pretreatment PCT and measurement of CBV and PS values from enhancing tumor. Tumor specimens were analyzed by TCGA by using Human Gene Expression Microarrays and were interrogated for correlation between CBV and PS estimates across the genome. We used the GO biologic process pathways for angiogenesis regulation to select genes of interest. RESULTS: We observed expression levels for 92 angiogenesis-associated genes (332 probes), 19 of which had significant correlation with PS and 9 of which had significant correlation with CBV (P < .05). Proangiogenic genes such as TNFRSF1A (PS = 0.53, P = .024), HIF1A (PS = 0.62, P = .0065), KDR (CBV = 0.60, P = .0084; PS = 0.59, P = .0097), TIE1 (CBV = 0.54, P = .022; PS = 0.49, P = .039), and TIE2/TEK (CBV = 0.58, P = .012) showed a significant positive correlation; whereas antiangiogenic genes such as VASH2 (PS = -0.72, P = .00011) showed a significant inverse correlation. CONCLUSIONS: Our findings are provocative, with some of the proangiogenic genes showing a positive correlation and some of the antiangiogenic genes showing an inverse correlation with tumor perfusion parameters, suggesting a molecular basis for these imaging biomarkers; however, this should be confirmed in a larger patient population.

Outcome prediction in patients with glioblastoma by using imaging, clinical, and genomic biomarkers: focus on the nonenhancing component of the tumor

  • Jain, R.
  • Poisson, L. M.
  • Gutman, D.
  • Scarpace, L.
  • Hwang, S. N.
  • Holder, C. A.
  • Wintermark, M.
  • Rao, A.
  • Colen, R. R.
  • Kirby, J.
  • Freymann, J.
  • Jaffe, C. C.
  • Mikkelsen, T.
  • Flanders, A.
Radiology 2014 Journal Article, cited 86 times
Website
PURPOSE: To correlate patient survival with morphologic imaging features and hemodynamic parameters obtained from the nonenhancing region (NER) of glioblastoma (GBM), along with clinical and genomic markers. MATERIALS AND METHODS: An institutional review board waiver was obtained for this HIPAA-compliant retrospective study. Forty-five patients with GBM underwent baseline imaging with contrast material-enhanced magnetic resonance (MR) imaging and dynamic susceptibility contrast-enhanced T2*-weighted perfusion MR imaging. Molecular and clinical predictors of survival were obtained. Single and multivariable models of overall survival (OS) and progression-free survival (PFS) were explored with Kaplan-Meier estimates, Cox regression, and random survival forests. RESULTS: Worsening OS (log-rank test, P = .0103) and PFS (log-rank test, P = .0223) were associated with increasing relative cerebral blood volume of NER (rCBVNER), which was higher with deep white matter involvement (t test, P = .0482) and poor NER margin definition (t test, P = .0147). NER crossing the midline was the only morphologic feature of NER associated with poor survival (log-rank test, P = .0125). Preoperative Karnofsky performance score (KPS) and resection extent (n = 30) were clinically significant OS predictors (log-rank test, P = .0176 and P = .0038, respectively). No genomic alterations were associated with survival, except patients with high rCBVNER and wild-type epidermal growth factor receptor (EGFR) mutation had significantly poor survival (log-rank test, P = .0306; area under the receiver operating characteristic curve = 0.62). Combining resection extent with rCBVNER marginally improved prognostic ability (permutation, P = .084). Random forest models of presurgical predictors indicated rCBVNER as the top predictor; also important were KPS, age at diagnosis, and NER crossing the midline. A multivariable model containing rCBVNER, age at diagnosis, and KPS can be used to group patients with more than 1 year of difference in observed median survival (0.49-1.79 years). CONCLUSION: Patients with high rCBVNER and NER crossing the midline and those with high rCBVNER and wild-type EGFR mutation showed poor survival. In multivariable survival models, however, rCBVNER provided unique prognostic information that went above and beyond the assessment of all NER imaging features, as well as clinical and genomic features.

Non-invasive tumor genotyping using radiogenomic biomarkers, a systematic review and oncology-wide pathway analysis

  • Jansen, Robin W
  • van Amstel, Paul
  • Martens, Roland M
  • Kooi, Irsan E
  • Wesseling, Pieter
  • de Langen, Adrianus J
  • Menke-Van der Houven, Catharina W
Oncotarget 2018 Journal Article, cited 0 times
Website

Computer-aided nodule detection and volumetry to reduce variability between radiologists in the interpretation of lung nodules at low-dose screening CT

  • Jeon, Kyung Nyeo
  • Goo, Jin Mo
  • Lee, Chang Hyun
  • Lee, Youkyung
  • Choo, Ji Yung
  • Lee, Nyoung Keun
  • Shim, Mi-Suk
  • Lee, In Sun
  • Kim, Kwang Gi
  • Gierada, David S
Investigative radiology 2012 Journal Article, cited 51 times
Website

Evaluation of Feature Robustness Against Technical Parameters in CT Radiomics: Verification of Phantom Study with Patient Dataset

  • Jin, Hyeongmin
  • Kim, Jong Hyo
Journal of Signal Processing Systems 2020 Journal Article, cited 1 times
Website
Recent advances in radiomics have shown promising results in prognostic and diagnostic studies with high-dimensional imaging feature analysis. However, radiomic features are known to be affected by technical parameters and feature extraction methodology. We evaluate the robustness of CT radiomic features against the technical parameters involved in CT acquisition and feature extraction procedures using a standardized phantom, and verify the feature robustness using patient cases. An ACR phantom was scanned with two tube currents, two reconstruction kernels, and two field-of-view sizes. A total of 47 radiomic features of textures and first-order statistics were extracted on the homogeneous region from all scans. Intrinsic variability was measured to identify unstable features vulnerable to inherent CT noise and texture. A susceptibility index was defined to represent the susceptibility to the variation of a given technical parameter. Eighteen radiomic features were shown to be intrinsically unstable under the reference condition. The features were more susceptible to variation of the reconstruction kernel than to other sources of variation. The feature robustness evaluated on the phantom CT correlated with that evaluated on clinical CT scans. We revealed that a number of scan parameters can significantly affect the radiomic features. These characteristics should be considered in a radiomic study when different scan parameters are used in a clinical dataset.

Enhancement of Deep Learning in Image Classification Performance Using Xception with the Swish Activation Function for Colorectal Polyp Preliminary Screening

  • Jinsakul, Natinai
  • Tsai, Cheng-Fa
  • Tsai, Chia-En
  • Wu, Pensee
Mathematics 2019 Journal Article, cited 0 times
One of the leading forms of cancer is colorectal cancer (CRC), which is responsible for increasing mortality in young people. The aim of this paper is to provide an experimental modification of the Xception deep learning model with the Swish activation function and to assess the possibility of developing a preliminary colorectal polyp screening system by training the proposed model on a colorectal topogram dataset with two and three classes. The results indicate that the proposed model improves the classification performance of the original convolutional neural network, achieving accuracy of up to 98.99% for two classes and 91.48% for three classes. When the model is tested on external images, the proposed method also improves prediction compared to the traditional method, with 99.63% accuracy for the two-class setting and 80.95% accuracy for the three-class setting.
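
The Swish activation at the centre of this work is simply f(x) = x * sigmoid(x). A minimal Keras sketch of attaching a Swish-activated classification head to an Xception backbone follows; note the paper modifies activations within Xception itself, whereas this illustration only uses Swish in the added head.

    import tensorflow as tf

    def swish(x):
        # Swish activation: f(x) = x * sigmoid(x)
        return x * tf.keras.activations.sigmoid(x)

    base = tf.keras.applications.Xception(include_top=False, weights=None,
                                          input_shape=(299, 299, 3), pooling="avg")
    h = tf.keras.layers.Dense(128, activation=swish)(base.output)
    out = tf.keras.layers.Dense(2, activation="softmax")(h)    # two-class polyp screening head
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])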

Analysis of Vestibular Labyrinthine Geometry and Variation in the Human Temporal Bone

  • Johnson Chacko, Lejo
  • Schmidbauer, Dominik T
  • Handschuh, Stephan
  • Reka, Alen
  • Fritscher, Karl D
  • Raudaschl, Patrik
  • Saba, Rami
  • Handler, Michael
  • Schier, Peter P
  • Baumgarten, Daniel
  • Fischer, Natalie
  • Pechriggl, Elisabeth J
  • Brenner, Erich
  • Hoermann, Romed
  • Glueckert, Rudolf
  • Schrott-Fischer, Anneliese
Frontiers in neuroscience 2018 Journal Article, cited 4 times
Website
Stable posture and body movement in humans is dictated by the precise functioning of the ampulla organs in the semi-circular canals. Statistical analysis of the interrelationship between bony and membranous compartments within the semi-circular canals is dependent on the visualization of soft tissue structures. Thirty-one human inner ears were prepared, post-fixed with osmium tetroxide and decalcified for soft tissue contrast enhancement. High resolution X-ray microtomography images at 15 μm voxel size were manually segmented. This data served as templates for centerline generation and cross-sectional area extraction. Our estimates demonstrate the variability of individual specimens from averaged centerlines of both bony and membranous labyrinth. Centerline lengths and cross-sectional areas along these lines were identified from segmented data. Using centerlines weighted by the inverse squares of the cross-sectional areas, plane angles could be quantified. The fit planes indicate that the bony labyrinth resembles a Cartesian coordinate system more closely than the membranous labyrinth. A widening in the membranous labyrinth of the lateral semi-circular canal was observed in some of the specimens. Likewise, the cross-sectional areas in the perilymphatic spaces of the lateral canal differed from the other canals. For the first time we could precisely describe the geometry of the human membranous labyrinth based on a large sample size. Awareness of the variations in the canal geometry of the membranous and bony labyrinth would be a helpful reference in designing electrodes for future vestibular prosthesis and simulating fluid dynamics more precisely.

Interactive 3D Virtual Colonoscopic Navigation For Polyp Detection From CT Images

  • Joseph, Jinu
  • Kumar, Rajesh
  • Chandran, Pournami S
  • Vidya, PV
Procedia Computer Science 2017 Journal Article, cited 0 times
Website

Homology-based radiomic features for prediction of the prognosis of lung cancer based on CT-based radiomics

  • Kadoya, Noriyuki
  • Tanaka, Shohei
  • Kajikawa, Tomohiro
  • Tanabe, Shunpei
  • Abe, Kota
  • Nakajima, Yujiro
  • Yamamoto, Takaya
  • Takahashi, Noriyoshi
  • Takeda, Kazuya
  • Dobashi, Suguru
  • Takeda, Ken
  • Nakane, Kazuaki
  • Jingu, Keiichi
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Radiomics is a new technique that enables noninvasive prognostic prediction by extracting features from medical images. Homology is a concept used in many branches of algebra and topology that can quantify the contact degree. In the present study, we developed homology-based radiomic features to predict the prognosis of non-small-cell lung cancer (NSCLC) patients and then evaluated the accuracy of this prediction method. METHODS: Four data sets were used: two to provide training and test data and two for the selection of robust radiomic features. All the data sets were downloaded from The Cancer Imaging Archive (TCIA). In two-dimensional cases, the Betti numbers consist of two values: b0 (zero-dimensional Betti number), which is the number of isolated components, and b1 (one-dimensional Betti number), which is the number of one-dimensional or "circular" holes. For homology-based evaluation, CT images must be converted to binarized images in which each pixel has two possible values: 0 or 1. All CT slices of the gross tumor volume were used for calculating the homology histogram. First, by changing the threshold of the CT value (range: -150 to 300 HU) for all its slices, we developed homology-based histograms for b0, b1, and b1/b0 using the binarized images. All histograms were then summed, and the summed histogram was normalized by the number of slices. A total of 144 homology-based radiomic features were defined from the histogram. For comparison with standard radiomic features, 107 radiomic features were calculated using the standard radiomics technique. To clarify the prognostic power, the relationship between the values of the homology-based radiomic features and overall survival was evaluated using a LASSO Cox regression model and the Kaplan-Meier method. The retained features with non-zero coefficients calculated by the LASSO Cox regression model were used for fitting the regression model. Moreover, these features were then integrated into a radiomics signature. An individualized rad score was calculated from a linear combination of the selected features, which were weighted by their respective coefficients. RESULTS: When the patients in the training and test data sets were stratified into high-risk and low-risk groups according to the rad scores, the overall survival of the groups was significantly different. The C-index values for the homology-based features (rad score), standard features (rad score), and tumor size were 0.625, 0.603, and 0.607, respectively, for the training data sets and 0.689, 0.668, and 0.667 for the test data sets. This result showed that homology-based radiomic features had slightly higher prediction power than the standard radiomic features. CONCLUSIONS: Homology-based radiomic features had comparable or slightly higher prediction power than standard radiomic features. These findings suggest that homology-based radiomic features may have great potential for improving the prognostic prediction accuracy of CT-based radiomics. It should be noted, however, that this result is subject to some limitations.
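
The two Betti numbers used above are straightforward to compute per binarized slice with standard tools. A small scipy-based sketch, assuming axial slices thresholded over the stated HU range, is given below; it illustrates only the counting and threshold sweep, not the paper's full 144-feature definition.

    import numpy as np
    from scipy import ndimage

    def betti_numbers(mask):
        # b0: connected foreground components (8-connectivity)
        _, b0 = ndimage.label(mask, structure=np.ones((3, 3)))
        # b1: holes = background components enclosed by foreground (complementary 4-connectivity);
        # pad with background so the outer region counts as exactly one component.
        _, n_bg = ndimage.label(np.pad(~mask.astype(bool), 1, constant_values=True))
        return b0, n_bg - 1

    def homology_histogram(ct_slices, thresholds=np.arange(-150, 301, 10)):
        # Sum b0/b1 over all tumour slices per threshold, then normalise by slice count.
        rows = []
        for thr in thresholds:
            pairs = [betti_numbers(s > thr) for s in ct_slices]
            rows.append(np.sum(pairs, axis=0))
        return np.asarray(rows, dtype=float) / len(ct_slices)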

Multicenter CT phantoms public dataset for radiomics reproducibility tests

  • Kalendralis, Petros
  • Traverso, Alberto
  • Shi, Zhenwei
  • Zhovannik, Ivan
  • Monshouwer, Rene
  • Starmans, Martijn P A
  • Klein, Stefan
  • Pfaehler, Elisabeth
  • Boellaard, Ronald
  • Dekker, Andre
  • Wee, Leonard
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: The aim of this paper is to describe a public, open-access, computed tomography (CT) phantom image set acquired at three centers and collected especially for radiomics reproducibility research. The dataset is useful for testing radiomic feature reproducibility with respect to various parameters, such as acquisition settings, scanners, and reconstruction algorithms. ACQUISITION AND VALIDATION METHODS: Three phantoms were scanned in three independent institutions. Images of the following phantoms were acquired: Catphan 700 and COPDGene Phantom II (Phantom Laboratory, Greenwich, NY, USA), and the Triple Modality 3D Abdominal Phantom (CIRS, Norfolk, VA, USA). Data were collected at three Dutch medical centers: MAASTRO Clinic (Maastricht, NL), Radboud University Medical Center (Nijmegen, NL), and University Medical Center Groningen (Groningen, NL), with scanners from two different manufacturers, Siemens Healthcare and Philips Healthcare. The following acquisition parameters were varied in the phantom scans: slice thickness, reconstruction kernels, and tube current. DATA FORMAT AND USAGE NOTES: We made the dataset publicly available on the Dutch instance of the "Extensible Neuroimaging Archive Toolkit-XNAT" (https://xnat.bmia.nl). The dataset is freely available and reusable with attribution (Creative Commons 3.0 license). POTENTIAL APPLICATIONS: Our goal was to provide a findable, open-access, annotated, and reusable CT phantom dataset for radiomics reproducibility studies. Reproducibility testing and harmonization are fundamental requirements for wide generalizability of radiomics-based clinical prediction models. It is highly desirable to include only reproducible features in such models, to be more assured of external validity across hitherto unseen contexts. In this view, phantom data from different centers represent a valuable source of information for excluding CT radiomic features that are unstable even for simplified structures and tightly controlled scan settings. The intended extension of our shared dataset is to include other modalities and phantoms with more realistic lesion simulations.

A low cost approach for brain tumor segmentation based on intensity modeling and 3D Random Walker

  • Kanas, Vasileios G
  • Zacharaki, Evangelia I
  • Davatzikos, Christos
  • Sgarbas, Kyriakos N
  • Megalooikonomou, Vasileios
Biomedical Signal Processing and Control 2015 Journal Article, cited 15 times
Website

Neurosense: deep sensing of full or near-full coverage head/brain scans in human magnetic resonance imaging

  • Kanber, B.
  • Ruffle, J.
  • Cardoso, J.
  • Ourselin, S.
  • Ciccarelli, O.
Neuroinformatics 2019 Journal Article, cited 0 times
Website
The application of automated algorithms to imaging requires knowledge of its content, a curatorial task, for which we ordinarily rely on the Digital Imaging and Communications in Medicine (DICOM) header as the only source of image meta-data. However, identifying brain MRI scans that have full or near-full coverage among a large number (e.g. >5000) of scans comprising both head/brain and other body parts is a time-consuming task that cannot be automated with the use of the information stored in the DICOM header attributes alone. Depending on the clinical scenario, an entire set of scans acquired in a single visit may often be labelled “BRAIN” in the DICOM field 0018,0015 (Body Part Examined), while the individual scans will often not only include brain scans with full coverage, but also others with partial brain coverage, scans of the spinal cord, and in some cases other body parts.
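
The curation problem described here starts from the DICOM attribute the authors mention, Body Part Examined (0018,0015). A tiny pydicom sketch of that header-only first filter, which the paper argues is insufficient on its own, might look like the following (the file path is a placeholder).

    import pydicom

    def header_says_brain(path):
        # Read only the header; (0018,0015) Body Part Examined is often "BRAIN" for every
        # series in a visit, which is why coverage must be checked from the images themselves.
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        return str(getattr(ds, "BodyPartExamined", "")).upper() == "BRAIN"

    # print(header_says_brain("example_series/slice_001.dcm"))   # placeholder path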

Public data and open source tools for multi-assay genomic investigation of disease

  • Kannan, Lavanya
  • Ramos, Marcel
  • Re, Angela
  • El-Hachem, Nehme
  • Safikhani, Zhaleh
  • Gendoo, Deena MA
  • Davis, Sean
  • Gomez-Cabrero, David
  • Castelo, Robert
  • Hansen, Kasper D
Briefings in bioinformatics 2015 Journal Article, cited 28 times
Website

Radiogenomic correlation for prognosis in patients with glioblastoma multiformae

  • Karnayana, Pallavi Machaiah
2013 Thesis, cited 0 times
Website

Identification of Tumor area from Brain MR Image

  • Kasım, Ömer
  • Kuzucuoğlu, Ahmet Emin
2016 Conference Proceedings, cited 1 times
Website

On-demand big data integration

  • Kathiravelu, Pradeeban
  • Sharma, Ashish
  • Galhardas, Helena
  • Van Roy, Peter
  • Veiga, Luís
Distributed and Parallel Databases 2018 Journal Article, cited 2 times
Website
Scientific research requires access, analysis, and sharing of data that is distributed across various heterogeneous data sources at the scale of the Internet. An eager extract, transform, and load (ETL) process constructs an integrated data repository as its first step, integrating and loading data in its entirety from the data sources. The bootstrapping of this process is not efficient for scientific research that requires access to data from very large and typically numerous distributed data sources. A lazy ETL process loads only the metadata, but still eagerly. Lazy ETL is faster in bootstrapping. However, queries on the integrated data repository of eager ETL perform faster, due to the availability of the entire data beforehand. In this paper, we propose a novel ETL approach for scientific data integration, as a hybrid of eager and lazy ETL approaches, and applied both to data as well as metadata. This way, hybrid ETL supports incremental integration and loading of metadata and data from the data sources. We incorporate a human-in-the-loop approach, to enhance the hybrid ETL, with selective data integration driven by the user queries and sharing of integrated data between users. We implement our hybrid ETL approach in a prototype platform, Óbidos, and evaluate it in the context of data sharing for medical research. Óbidos outperforms both the eager ETL and lazy ETL approaches, for scientific research data integration and sharing, through its selective loading of data and metadata, while storing the integrated data in a scalable integrated data repository.

“Radiotranscriptomics”: A synergy of imaging and transcriptomics in clinical assessment

  • Katrib, Amal
  • Hsu, William
  • Bui, Alex
  • Xing, Yi
Quantitative Biology 2016 Journal Article, cited 0 times

A joint intensity and edge magnitude-based multilevel thresholding algorithm for the automatic segmentation of pathological MR brain images

  • Kaur, Taranjit
  • Saini, Barjinder Singh
  • Gupta, Savita
Neural Computing and Applications 2016 Journal Article, cited 1 times
Website

Radiological Atlas for Patient Specific Model Generation

  • Kawa, Jacek
  • Juszczyk, Jan
  • Pyciński, Bartłomiej
  • Badura, Paweł
  • Pietka, Ewa
2014 Book Section, cited 11 times
Website

Supervised Dimension-Reduction Methods for Brain Tumor Image Data Analysis

  • Kawaguchi, Atsushi
2017 Book Section, cited 1 times
Website
The purpose of this study was to construct a risk score for glioblastomas based on magnetic resonance imaging (MRI) data. Tumor identification requires multimodal voxel-based imaging data that are highly dimensional, and multivariate models with dimension reduction are desirable for their analysis. We propose a two-step dimension-reduction method using a radial basis function–supervised multi-block sparse principal component analysis (SMS–PCA) method. The method is first implemented through the basis expansion of spatial brain images, and the scores are then reduced through regularized matrix decomposition in order to produce simultaneous data-driven selections of related brain regions supervised by univariate composite scores representing linear combinations of covariates such as age and tumor location. An advantage of the proposed method is that it identifies the associations of brain regions at the voxel level, and supervision is helpful in the interpretation.

eFis: A Fuzzy Inference Method for Predicting Malignancy of Small Pulmonary Nodules

  • Kaya, Aydın
  • Can, Ahmet Burak
2014 Book Section, cited 3 times
Website

Malignancy prediction by using characteristic-based fuzzy sets: A preliminary study

  • Kaya, Aydin
  • Can, Ahmet Burak
2015 Conference Proceedings, cited 0 times
Website

Computer-aided detection of brain tumors using image processing techniques

  • Kazdal, Seda
  • Dogan, Buket
  • Camurcu, Ali Yilmaz
2015 Conference Proceedings, cited 3 times
Website

Preliminary Detection and Analysis of Lung Cancer on CT images using MATLAB: A Cost-effective Alternative

  • Khan, Md Daud Hossain
  • Ahmed, Mansur
  • Bach, Christian
Journal of Biomedical Engineering and Medical Imaging 2016 Journal Article, cited 0 times

Zonal Segmentation of Prostate T2W-MRI using Atrous Convolutional Neural Network

  • Khan, Zia
  • Yahya, Norashikin
  • Alsaih, Khaled
  • Meriaudeau, Fabrice
2019 Conference Paper, cited 0 times
The number of prostate cancer cases is steadily increasing, especially with the growing ageing population. The 5-year relative survival rate for men with stage 1 prostate cancer is reported to be almost 99%; hence, early detection will significantly improve treatment planning and increase the survival rate. Magnetic resonance imaging (MRI) is a common imaging modality for the diagnosis of prostate cancer, as it provides good visualization of soft tissue and enables better lesion detection and staging. The main challenge of whole-gland prostate segmentation is the blurry boundary between the central gland (CG) and peripheral zone (PZ), which complicates differential diagnosis, since the occurrence and characteristics of cancer differ substantially between the two zones. To enhance the diagnosis of the prostate gland, we implemented the DeepLabV3+ semantic segmentation approach to segment the prostate into zones. DeepLabV3+ achieves strong segmentation results on prostate MRI by applying several parallel atrous convolutions with different rates. The CNN-based semantic segmentation approach is trained and tested on the NCI-ISBI 1.5T and 3T MRI dataset consisting of 40 patients. Performance, evaluated with the Dice similarity coefficient (DSC), is compared against two other CNN-based semantic segmentation techniques: FCN and PSNet. Results show that prostate segmentation using DeepLabV3+ outperforms FCN and PSNet, with average DSCs of 70.3% in the PZ and 88% in the CG. This indicates the significant contribution made by the atrous convolution layers in producing better prostate segmentation results.
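
The evaluation metric used above, the Dice similarity coefficient, reduces to a short function; a sketch for per-zone scoring follows (the PZ/CG label encoding is hypothetical).

    import numpy as np

    def dice(pred, target, eps=1e-7):
        # Dice similarity coefficient between two binary masks: 2 * |A and B| / (|A| + |B|)
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    # Assuming label 1 = peripheral zone and label 2 = central gland in both label volumes:
    # dsc_pz = dice(pred_labels == 1, gt_labels == 1)
    # dsc_cg = dice(pred_labels == 2, gt_labels == 2)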

Prediction of 1p/19q Codeletion in Diffuse Glioma Patients Using Preoperative Multiparametric Magnetic Resonance Imaging

  • Kim, Donnie
  • Wang, Nicholas C
  • Ravikumar, Visweswaran
  • Raghuram, DR
  • Li, Jinju
  • Patel, Ankit
  • Wendt, Richard E
  • Rao, Ganesh
  • Rao, Arvind
Frontiers in computational neuroscience 2019 Journal Article, cited 0 times

Modification of population based arterial input function to incorporate individual variation

  • Kim, Harrison
Magn Reson Imaging 2018 Journal Article, cited 2 times
Website
This technical note describes how to modify a population-based arterial input function to incorporate variation among individuals. In DCE-MRI, an arterial input function (AIF) is often distorted by pulsatile inflow effects and noise. A population-based AIF (pAIF) has a high signal-to-noise ratio (SNR) but cannot incorporate individual variation. AIF variation is mainly induced by variation in the cardiac output and blood volume of the individuals, which can be detected by the full width at half maximum (FWHM) during the first passage and the amplitude of the AIF, respectively. Thus, a pAIF scaled in time and amplitude to fit the individual AIF may serve as a high-SNR AIF that incorporates individual variation. The proposed method was validated using DCE-MRI images of 18 prostate cancer patients. The root mean square error (RMSE) of the pAIF from individual AIFs was 0.88 +/- 0.48 mM (mean +/- SD), but it was reduced to 0.25 +/- 0.11 mM after pAIF modification using the proposed method (p<0.0001).
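
A rough numerical sketch of the described modification, stretching the population AIF in time so its first-pass FWHM matches the individual curve and then rescaling its amplitude, is given below. The FWHM estimate here is deliberately simplistic, and the exact fitting used in the note may differ.

    import numpy as np

    def fwhm(t, c):
        # Width of the region above half of the global maximum (crude first-pass FWHM proxy).
        above = np.flatnonzero(c >= c.max() / 2.0)
        return t[above[-1]] - t[above[0]]

    def modify_paif(t, paif, individual_aif):
        # Time-scale the population AIF to match the individual FWHM, then match peak amplitude.
        factor = fwhm(t, individual_aif) / fwhm(t, paif)
        warped = np.interp(t, t * factor, paif)          # resample the stretched pAIF on the original grid
        return warped * (individual_aif.max() / warped.max())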

Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities

  • Kim, Incheol
  • Rajaraman, Sivaramakrishnan
  • Antani, Sameer
Diagnostics (Basel) 2019 Journal Article, cited 0 times
Website
Deep learning (DL) methods are increasingly being applied for developing reliable computer-aided detection (CADe), diagnosis (CADx), and information retrieval algorithms. However, challenges in interpreting and explaining the learned behavior of the DL models hinders their adoption and use in real-world systems. In this study, we propose a novel method called “Class-selective Relevance Mapping” (CRM) for localizing and visualizing discriminative regions of interest (ROI) within a medical image. Such visualizations offer improved explanation of the convolutional neural network (CNN)-based DL model predictions. We demonstrate CRM effectiveness in classifying medical imaging modalities toward automatically labeling them for visual information retrieval applications. The CRM is based on linear sum of incremental mean squared errors (MSE) calculated at the output layer of the CNN model. It measures both positive and negative contributions of each spatial element in the feature maps produced from the last convolution layer leading to correct classification of an input image. A series of experiments on a “multi-modality” CNN model designed for classifying seven different types of image modalities shows that the proposed method is significantly better in detecting and localizing the discriminative ROIs than other state of the art class-activation methods. Further, to visualize its effectiveness we generate “class-specific” ROI maps by averaging the CRM scores of images in each modality class, and characterize the visual explanation through their different size, shape, and location for our multi-modality CNN model that achieved over 98% performance on a dataset constructed from publicly available images.

Training of deep convolutional neural nets to extract radiomic signatures of tumors

  • Kim, J.
  • Seo, S.
  • Ashrafinia, S.
  • Rahmim, A.
  • Sossi, V.
  • Klyuzhin, I.
Journal of Nuclear Medicine 2019 Journal Article, cited 0 times
Website
Objectives: Radiomics-based analysis of FDG PET images has been shown to improve the assessment and prediction of tumor growth rate, response to treatment and other patient outcomes [1]. An alternative new approach to image analysis involves the use of convolutional neural networks (CNNs), wherein relevant image features are learned implicitly and automatically in the process of network training [2]; this is in contrast to radiomics analyses, where the features are “hand-crafted” and are explicitly computed (EC). Although CNNs represent a more general approach, it is not clear whether the implicitly learned features may, or have the ability to include radiomics features (RFs) as a subset. If this is the case, CNN-based approaches may eventually obviate the use of EC RFs. Further, the use of CNNs instead of RFs may completely eliminate the need for feature selection and tumor delineation, enabling high-throughput data analyses. Thus, our objective was to test whether CNNs can learn to act similarly to several commonly used RFs. Using a set of simulated and real FDG PET images of tumors, we train the CNNs to estimate the values of RFs from the images without the explicit computation. We then compare the values of the CNN-estimated and EC features. Methods: Using a stochastic volumetric model for tumor growth, 2000 FDG images of tumors confined to a bounding box (BB) were simulated (40x40x40 voxels, voxel size 2.0 mm), and 10 RFs (3 x morphology, 4 x intensity histogram, 3 x texture features) were computed for each image using the SERA library [3] (compliant with the Image Biomarker Standardization Initiative, IBSI [4]). A 3D CNN with 4 convolutional layers, and a total of 164 filters, was implemented in Python using the Keras library with TensorFlow backend (https://www.keras.io). The mean absolute error was the optimized loss function. The CNN was trained to automatically estimate the values each of the 10 RFs for each image; 1900 of images were used for training, and 100 were used for testing, to compare the CNN-estimated values to the EC feature values. We also used a secondary test set comprised of 133 real tumor images, obtained from the head and neck PET/CT imaging study [5] publicly available at the Cancer Imaging Archive. The tumors were cropped to a BB, and the images were resampled to yield similar image size to the simulated image set. Results: After the training procedure, on the simulated test set the CNN was able to estimate the values of most EC RFs with 10-20% error (relative to the range). In the morphology group, the errors were 3.8% for volume, 12.0% for compactness, 15.7% for flatness. In the intensity group, the errors were 13.7% for the mean, 15.4% for variance, 12.3% for skewness, and 13.1% for kurtosis. In the texture group, the error was 10.6% for GLCM contrast, 13.4% for cluster tendency, and 21.7% for angular momentum. With all features, the difference between the CNN-estimated and EC feature values were statistically insignificant (two-sample t-test), and the correlation between the feature values was highly significant (p<0.01). On the real image test set, we observed higher error rates, on the order of 20-30%; however, with all but one feature (angular momentum), there was a significant correlation between the CNN-estimated and EC features (p<0.01). Conclusions: Our results suggest that CNNs can be trained to act similarly to several widely used RFs. 
While the accuracy of CNN-based estimates varied between the features, in general, the CNN showed a good propensity for learning. Thus, it is likely that with more complex network architectures and training data, features can be estimated more accurately. While a greater number of RFs need to be similarly tested in the future, these initial experiments provide first evidence that, given the sufficient quality and quantity of the training data, the CNNs indeed represent a more general approach to feature extraction, and may potentially replace radiomics-based analyses without compromising the descriptive thoroughness.

Design and evaluation of an accurate CNR-guided small region iterative restoration-based tumor segmentation scheme for PET using both simulated and real heterogeneous tumors

  • Koç, Alpaslan
  • Güveniş, Albert
Med Biol Eng Comput 2020 Journal Article, cited 0 times
Website
Tumor delineation accuracy directly affects the effectiveness of radiotherapy. This study presents a methodology that minimizes potential errors during the automated segmentation of tumors in PET images. Iterative blind deconvolution was implemented in a region of interest encompassing the tumor with the number of iterations determined from contrast-to-noise ratios. The active contour and random forest classification-based segmentation method was evaluated using three distinct image databases that included both synthetic and real heterogeneous tumors. Ground truths about tumor volumes were known precisely. The volumes of the tumors were in the range of 0.49-26.34 cm(3), 0.64-1.52 cm(3), and 40.38-203.84 cm(3) respectively. Widely available software tools, namely, MATLAB, MIPAV, and ITK-SNAP were utilized. When using the active contour method, image restoration reduced mean errors in volumes estimation from 95.85 to 3.37%, from 815.63 to 17.45%, and from 32.61 to 6.80% for the three datasets. The accuracy gains were higher using datasets that include smaller tumors for which PVE is known to be more predominant. Computation time was reduced by a factor of about 10 in the smaller deconvolution region. Contrast-to-noise ratios were improved for all tumors in all data. The presented methodology has the potential to improve delineation accuracy in particular for smaller tumors at practically feasible computational times. Graphical abstract Evaluation of accurate lesion volumes using CNR-guided and ROI-based restoration method for PET images.

Radiogenomics of lower-grade gliomas: machine learning-based MRI texture analysis for predicting 1p/19q codeletion status

  • Kocak, B.
  • Durmaz, E. S.
  • Ates, E.
  • Sel, I.
  • Turgut Gunes, S.
  • Kaya, O. K.
  • Zeynalova, A.
  • Kilickesmez, O.
Eur Radiol 2019 Journal Article, cited 0 times
Website
OBJECTIVE: To evaluate the potential value of the machine learning (ML)-based MRI texture analysis for predicting 1p/19q codeletion status of lower-grade gliomas (LGG), using various state-of-the-art ML algorithms. MATERIALS AND METHODS: For this retrospective study, 107 patients with LGG were included from a public database. Texture features were extracted from conventional T2-weighted and contrast-enhanced T1-weighted MRI images, using LIFEx software. Training and unseen validation splits were created using stratified 10-fold cross-validation technique along with minority over-sampling. Dimension reduction was done using collinearity analysis and feature selection (ReliefF). Classifications were done using adaptive boosting, k-nearest neighbours, naive Bayes, neural network, random forest, stochastic gradient descent, and support vector machine. Friedman test and pairwise post hoc analyses were used for comparison of classification performances based on the area under the curve (AUC). RESULTS: Overall, the predictive performance of the ML algorithms were statistically significantly different, chi2(6) = 26.7, p < 0.001. There was no statistically significant difference among the performance of the neural network, naive Bayes, support vector machine, random forest, and stochastic gradient descent, adjusted p > 0.05. The mean AUC and accuracy values of these five algorithms ranged from 0.769 to 0.869 and from 80.1 to 84%, respectively. The neural network had the highest mean rank with mean AUC and accuracy values of 0.869 and 83.8%, respectively. CONCLUSIONS: The ML-based MRI texture analysis might be a promising non-invasive technique for predicting the 1p/19q codeletion status of LGGs. Using this technique along with various ML algorithms, more than four-fifths of the LGGs can be correctly classified. KEY POINTS: * More than four-fifths of the lower-grade gliomas can be correctly classified with machine learning-based MRI texture analysis. Satisfying classification outcomes are not limited to a single algorithm. * A few-slice-based volumetric segmentation technique would be a valid approach, providing satisfactory predictive textural information and avoiding excessive segmentation duration in clinical practice. * Feature selection is sensitive to different patient data set samples so that each sampling leads to the selection of different feature subsets, which needs to be considered in future works.
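
The overall workflow above (feature scaling, dimension reduction, classifier comparison under stratified 10-fold cross-validation on AUC) maps onto a short scikit-learn sketch. The selector below is a univariate stand-in for the ReliefF step, the minority over-sampling is omitted, and the feature matrix is a random placeholder.

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(107, 100))        # placeholder texture-feature matrix (107 LGG patients)
    y = rng.integers(0, 2, size=107)       # placeholder 1p/19q codeletion labels

    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectKBest(mutual_info_classif, k=10)),   # stand-in for the ReliefF selection
        ("clf", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
    ])
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    print(cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc").mean())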

Creation and curation of the society of imaging informatics in Medicine Hackathon Dataset

  • Kohli, Marc
  • Morrison, James J
  • Wawira, Judy
  • Morgan, Matthew B
  • Hostetter, Jason
  • Genereaux, Brad
  • Hussain, Mohannad
  • Langer, Steve G
Journal of Digital Imaging 2018 Journal Article, cited 4 times
Website

Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy

  • Koike, Yuhei
  • Akino, Yuichi
  • Sumida, Iori
  • Shiomi, Hiroya
  • Mizuno, Hirokazu
  • Yagi, Masashi
  • Isohashi, Fumiaki
  • Seo, Yuji
  • Suzuki, Osamu
  • Ogawa, Kazuhiko
J Radiat Res 2019 Journal Article, cited 0 times
Website
The aim of this work is to generate synthetic computed tomography (sCT) images from multi-sequence magnetic resonance (MR) images using an adversarial network and to assess the feasibility of sCT-based treatment planning for brain radiotherapy. Datasets for 15 patients with glioblastoma were selected and 580 pairs of CT and MR images were used. T1-weighted, T2-weighted and fluid-attenuated inversion recovery MR sequences were combined to create a three-channel image as input data. A conditional generative adversarial network (cGAN) was trained using image patches. The image quality was evaluated using voxel-wise mean absolute errors (MAEs) of the CT number. For the dosimetric evaluation, 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans were generated using the original CT set and recalculated using the sCT images. The isocenter dose and dose-volume parameters were compared for 3D-CRT and VMAT plans, respectively. The equivalent path length was also compared. The mean MAEs for the whole body, soft tissue and bone region were 108.1 +/- 24.0, 38.9 +/- 10.7 and 366.2 +/- 62.0 hounsfield unit, respectively. The dosimetric evaluation revealed no significant difference in the isocenter dose for 3D-CRT plans. The differences in the dose received by 2% of the volume (D2%), D50% and D98% relative to the prescribed dose were <1.0%. The overall equivalent path length was shorter than that for real CT by 0.6 +/- 1.9 mm. A treatment planning study using generated sCT detected only small, clinically negligible differences. These findings demonstrated the feasibility of generating sCT images for MR-only radiotherapy from multi-sequence MR images using cGAN.

Investigating the role of model-based and model-free imaging biomarkers as early predictors of neoadjuvant breast cancer therapy outcome

  • Kontopodis, Eleftherios
  • Venianaki, Maria
  • Manikis, George C
  • Nikiforaki, Katerina
  • Salvetti, Ovidio
  • Papadaki, Efrosini
  • Papadakis, Georgios Z
  • Karantanas, Apostolos H
  • Marias, Kostas
IEEE J Biomed Health Inform 2019 Journal Article, cited 0 times
Website
Imaging biomarkers (IBs) play a critical role in the clinical management of breast cancer (BRCA) patients throughout the cancer continuum for screening, diagnosis and therapy assessment especially in the neoadjuvant setting. However, certain model-based IBs suffer from significant variability due to the complex workflows involved in their computation, whereas model-free IBs have not been properly studied regarding clinical outcome. In the present study, IBs from 35 BRCA patients who received neoadjuvant chemotherapy (NAC) were extracted from dynamic contrast enhanced MR imaging (DCE-MRI) data with two different approaches, a model-free approach based on pattern recognition (PR), and a model-based one using pharmacokinetic compartmental modeling. Our analysis found that both model-free and model-based biomarkers can predict pathological complete response (pCR) after the first cycle of NAC. Overall, 8 biomarkers predicted the treatment response after the first cycle of NAC, with statistical significance (p-value<0.05), and 3 at the baseline. The best pCR predictors at first follow-up, achieving high AUC and sensitivity and specificity more than 50%, were the hypoxic component with threshold2 (AUC 90.4%) from the PR method, and the median value of kep (AUC 73.4%) from the model-based approach. Moreover, the 80th percentile of ve achieved the highest pCR prediction at baseline with AUC 78.5%. The results suggest that model-free DCE-MRI IBs could be a more robust alternative to complex, model-based ones such as kep and favor the hypothesis that the PR image-derived hypoxic image component captures actual tumor hypoxia information able to predict BRCA NAC outcome.

A Study on the Geometrical Limits and Modern Approaches to External Beam Radiotherapy

  • Kopchick, Benjamin
2020 Thesis, cited 0 times
Website
Radiation therapy is integral to treating cancer and improving survival probability. Improving treatment methods and modalities can lead to significant impacts on the quality of life of cancer patients. One such method is stereotactic radiotherapy, a form of External Beam Radiotherapy (EBRT). It delivers a highly conformal dose of radiation to a target from beams arranged at many different angles. The goal of any radiotherapy treatment is to deliver radiation only to the cancerous cells while maximally sparing other tissues. However, such a perfect treatment outcome is difficult to achieve due to the physical limitations of EBRT. The quality of treatment depends on the characteristics of these beams and the number of angles from which radiation is delivered. However, as technology and techniques have improved, the dependence on the quality of beams and beam coverage may have become less critical. This thesis investigates different geometric aspects of stereotactic radiotherapy and their impacts on treatment quality. The specific aims are: (1) To explore the treatment outcome of a virtual stereotactic delivery where no geometric limit exists in the sense of physical collisions; this allows the full solid-angle treatment space to be investigated and explores whether a large solid-angle space is necessary to improve treatment. (2) To evaluate the effect of a reduced solid angle with a specific radiotherapy device using real clinical cases. (3) To investigate how the quality of a single beam influences treatment outcome when multiple overlapping beams are in use. (4) To study the feasibility of using a novel treatment method of lattice radiotherapy with an existing stereotactic device for treating breast cancer. All these aims were investigated with the use of inverse planning optimization and Monte Carlo-based particle transport simulations.

Automated Segmentation of Hyperintense Regions in FLAIR MRI Using Deep Learning

  • Korfiatis, Panagiotis
  • Kline, Timothy L
  • Erickson, Bradley J
Tomography: a journal for imaging research 2016 Journal Article, cited 16 times
Website

The quest for 'diagnostically lossless' medical image compression: a comparative study of objective quality metrics for compressed medical images

  • Kowalik-Urbaniak, Ilona
  • Brunet, Dominique
  • Wang, Jiheng
  • Koff, David
  • Smolarski-Koff, Nadine
  • Vrscay, Edward R
  • Wallace, Bill
  • Wang, Zhou
2014 Conference Proceedings, cited 0 times

Usefulness of gradient tree boosting for predicting histological subtype and EGFR mutation status of non-small cell lung cancer on (18)F FDG-PET/CT

  • Koyasu, S.
  • Nishio, M.
  • Isoda, H.
  • Nakamoto, Y.
  • Togashi, K.
Ann Nucl Med 2020 Journal Article, cited 3 times
Website
OBJECTIVE: To develop and evaluate a radiomics approach for classifying histological subtypes and epidermal growth factor receptor (EGFR) mutation status in lung cancer on PET/CT images. METHODS: PET/CT images of lung cancer patients were obtained from public databases and used to establish two datasets, respectively to classify histological subtypes (156 adenocarcinomas and 32 squamous cell carcinomas) and EGFR mutation status (38 mutant and 100 wild-type samples). Seven types of imaging features were obtained from PET/CT images of lung cancer. Two types of machine learning algorithms were used to predict histological subtypes and EGFR mutation status: random forest (RF) and gradient tree boosting (XGB). The classifiers used either a single type or multiple types of imaging features. In the latter case, the optimal combination of the seven types of imaging features was selected by Bayesian optimization. Receiver operating characteristic analysis, area under the curve (AUC), and tenfold cross validation were used to assess the performance of the approach. RESULTS: In the classification of histological subtypes, the AUC values of the various classifiers were as follows: RF, single type: 0.759; XGB, single type: 0.760; RF, multiple types: 0.720; XGB, multiple types: 0.843. In the classification of EGFR mutation status, the AUC values were: RF, single type: 0.625; XGB, single type: 0.617; RF, multiple types: 0.577; XGB, multiple types: 0.659. CONCLUSIONS: The radiomics approach to PET/CT images, together with XGB and Bayesian optimization, is useful for classifying histological subtypes and EGFR mutation status in lung cancer.
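
A rough sketch of the classification setup described (gradient tree boosting evaluated with tenfold cross-validated AUC) using xgboost and scikit-learn; the feature matrix and labels are assumed to be precomputed, and the hyperparameters are placeholders rather than the Bayesian-optimized values from the paper.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

def cv_auc(X: np.ndarray, y: np.ndarray, n_splits: int = 10) -> float:
    """Tenfold cross-validated AUC for a gradient tree boosting classifier."""
    clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    return float(cross_val_score(clf, X, y, scoring="roc_auc", cv=cv).mean())

# Usage (X: PET/CT radiomics feature matrix, y: binary labels such as ADC vs. SCC, assumed):
# auc = cv_auc(X, y)
```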

Medical (CT) image generation with style

  • Krishna, Arjun
  • Mueller, Klaus
2019 Conference Proceedings, cited 0 times

Performance Analysis of Denoising in MR Images with Double Density Dual Tree Complex Wavelets, Curvelets and NonSubsampled Contourlet Transforms

  • Krishnakumar, V
  • Parthiban, Latha
Annual Review & Research in Biology 2014 Journal Article, cited 0 times

Three-dimensional lung nodule segmentation and shape variance analysis to detect lung cancer with reduced false positives

  • Krishnamurthy, Senthilkumar
  • Narasimhan, Ganesh
  • Rengasamy, Umamaheswari
Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 2016 Journal Article, cited 17 times
Website

Content-Based Medical Image Retrieval: A Survey of Applications to Multidimensional and Multimodality Data

  • Kumar, Ashnil
  • Kim, Jinman
  • Cai, Weidong
  • Fulham, Michael
  • Feng, Dagan
Journal of Digital Imaging 2013 Journal Article, cited 109 times
Website
Medical imaging is fundamental to modern healthcare, and its widespread use has resulted in the creation of image databases, as well as picture archiving and communication systems. These repositories now contain images from a diverse range of modalities, multidimensional (three-dimensional or time-varying) images, as well as co-aligned multimodality images. These image collections offer the opportunity for evidence-based diagnosis, teaching, and research; for these applications, there is a requirement for appropriate methods to search the collections for images that have characteristics similar to the case(s) of interest. Content-based image retrieval (CBIR) is an image search technique that complements the conventional text-based retrieval of images by using visual features, such as color, texture, and shape, as search criteria. Medical CBIR is an established field of study that is beginning to realize promise when applied to multidimensional and multimodality medical data. In this paper, we present a review of state-of-the-art medical CBIR approaches in five main categories: two-dimensional image retrieval, retrieval of images with three or more dimensions, the use of nonimage data to enhance the retrieval, multimodality image retrieval, and retrieval from diverse datasets. We use these categories as a framework for discussing the state of the art, focusing on the characteristics and modalities of the information used during medical image retrieval.

Discovery radiomics for pathologically-proven computed tomography lung cancer prediction

  • Kumar, Devinder
  • Chung, Audrey G
  • Shaifee, Mohammad J
  • Khalvati, Farzad
  • Haider, Masoom A
  • Wong, Alexander
2017 Conference Proceedings, cited 30 times
Website

Medical image segmentation using modified fuzzy c mean based clustering

  • Kumar, Dharmendra
  • Solanki, Anil Kumar
  • Ahlawat, Anil
  • Malhotra, Sukhnandan
2020 Conference Proceedings, cited 0 times
Website
Locating disease areas in medical images is one of the most challenging tasks in the field of image segmentation. This paper presents a new approach to image segmentation using modified fuzzy c-means (MFCM) clustering. Considering low-illumination medical images, the input image is first enhanced using the histogram equalization (HE) technique. The enhanced image is then segmented into various regions using the MFCM-based approach. Local information is employed in the objective function of the MFCM to overcome the issue of noise sensitivity. After that, the membership partition is improved by fast membership filtering. Experimental results show that the proposed scheme performs well in terms of various evaluation parameters.
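
As a hedged illustration of the pipeline described (histogram equalization followed by fuzzy c-means clustering of pixel intensities), the sketch below implements plain FCM from scratch with NumPy and uses scikit-image for the equalization step; the local-information and membership-filtering modifications of the MFCM are not reproduced here, and the toy image is a random stand-in.

```python
import numpy as np
from skimage import exposure

def fcm(x, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means on a 1-D intensity vector x; returns cluster centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                                   # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)              # fuzzily weighted cluster centers
        dist = np.abs(x[None, :] - centers[:, None]) + 1e-12
        new_u = dist ** (-2.0 / (m - 1))                 # standard FCM membership update
        new_u /= new_u.sum(axis=0)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centers, u

# Toy usage: equalize a (random stand-in) image, then cluster its intensities.
image = np.random.default_rng(1).random((64, 64))
enhanced = exposure.equalize_hist(image)
centers, u = fcm(enhanced.ravel())
labels = u.argmax(axis=0).reshape(image.shape)           # hard segmentation map
```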

Lung Nodule Classification Using Deep Features in CT Images

  • Kumar, Devinder
  • Wong, Alexander
  • Clausi, David A
2015 Conference Proceedings, cited 114 times
Website

Combining Generative Models for Multifocal Glioma Segmentation and Registration

  • Kwon, Dongjin
  • Shinohara, Russell T
  • Akbari, Hamed
  • Davatzikos, Christos
2014 Book Section, cited 55 times
Website
In this paper, we propose a new method for simultaneously segmenting brain scans of glioma patients and registering these scans to a normal atlas. Performing joint segmentation and registration for brain tumors is very challenging when tumors include multifocal masses and have complex shapes with heterogeneous textures. Our approach grows tumors for each mass from multiple seed points using a tumor growth model and modifies a normal atlas into one with tumors and edema using the combined results of grown tumors. We also generate a tumor shape prior via the random walk with restart, utilizing multiple tumor seeds as initial foreground information. We then incorporate this shape prior into an EM framework which estimates the mapping between the modified atlas and the scans, the posteriors for each tissue label, and the tumor growth model parameters. We apply our method to the BRATS 2013 leaderboard dataset to evaluate segmentation performance. Our method shows the best performance among all participants.

Acute Tumor Transition Angle on Computed Tomography Predicts Chromosomal Instability Status of Primary Gastric Cancer: Radiogenomics Analysis from TCGA and Independent Validation

  • Lai, Ying-Chieh
  • Yeh, Ta-Sen
  • Wu, Ren-Chin
  • Tsai, Cheng-Kun
  • Yang, Lan-Yan
  • Lin, Gigin
  • Kuo, Michael D
Cancers 2019 Journal Article, cited 0 times
Website
Chromosomal instability (CIN) of gastric cancer is correlated with distinct outcomes. This study aimed to investigate the role of computed tomography (CT) imaging traits in predicting the CIN status of gastric cancer. We screened 443 patients in the Cancer Genome Atlas gastric cancer cohort to filter 40 patients with complete CT imaging and genomic data as the training cohort. CT imaging traits were subjected to logistic regression to select independent predictors for the CIN status. For the validation cohort, we prospectively enrolled 18 gastric cancer patients for CT and tumor genomic analysis. The imaging predictors were tested in the validation cohort using receiver operating characteristic curve (ROC) analysis. Thirty patients (75%) in the training cohort and 9 patients (50%) in the validation cohort had CIN subtype gastric cancers. Smaller tumor diameter (p = 0.017) and acute tumor transition angle (p = 0.045) independently predicted CIN status in the training cohort. In the validation cohort, acute tumor transition angle demonstrated the highest accuracy, sensitivity, and specificity of 88.9%, 88.9%, and 88.9%, respectively, and an area under the ROC curve of 0.89. In conclusion, this pilot study showed that acute tumor transition angle on CT images may predict the CIN status of gastric cancer.

A simple texture feature for retrieval of medical images

  • Lan, Rushi
  • Zhong, Si
  • Liu, Zhenbing
  • Shi, Zhuo
  • Luo, Xiaonan
Multimedia Tools and Applications 2017 Journal Article, cited 2 times
Website
Texture is an important characteristic of medical images and has been applied in many medical image applications. This paper proposes a simple approach that employs the texture features of medical images for retrieval. The developed approach first filters the medical images using different Gabor and Schmid filters, and then uniformly partitions the filtered images into non-overlapping patches. These operations provide extensive local texture information about the medical images. The bag-of-words model is finally used to obtain feature representations of the images. Compared with several existing features, the proposed one is more discriminative and efficient. Experiments on two benchmark medical CT image databases have demonstrated the effectiveness of the proposed approach.

Collaborative and Reproducible Research: Goals, Challenges, and Strategies

  • Langer, S. G.
  • Shih, G.
  • Nagy, P.
  • Landman, B. A.
J Digit Imaging 2018 Journal Article, cited 1 times
Website
Combining imaging biomarkers with genomic and clinical phenotype data is the foundation of precision medicine research efforts. Yet, biomedical imaging research requires unique infrastructure compared with principally text-driven clinical electronic medical record (EMR) data. The issues are related to the binary nature of the file format and transport mechanism for medical images as well as the post-processing image segmentation and registration needed to combine anatomical and physiological imaging data sources. The SIIM Machine Learning Committee was formed to analyze the gaps and challenges surrounding research into machine learning in medical imaging and to find ways to mitigate these issues. At the 2017 annual meeting, a whiteboard session was held to rank the most pressing issues and develop strategies to meet them. The results, and further reflections, are summarized in this paper.

A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop

  • Langlotz, Curtis P
  • Allen, Bibb
  • Erickson, Bradley J
  • Kalpathy-Cramer, Jayashree
  • Bigelow, Keith
  • Cook, Tessa S
  • Flanders, Adam E
  • Lungren, Matthew P
  • Mendelson, David S
  • Rudie, Jeffrey D
  • Wang, Ge
  • Kandarpa, Krishna
Radiology 2019 Journal Article, cited 1 times
Website
Imaging research laboratories are rapidly creating machine learning systems that achieve expert human performance using open-source methods and tools. These artificial intelligence systems are being developed to improve medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection, computer-aided classification, and radiogenomics. In August 2018, a meeting was held in Bethesda, Maryland, at the National Institutes of Health to discuss the current state of the art and knowledge gaps and to develop a roadmap for future research initiatives. Key research priorities include: 1, new image reconstruction methods that efficiently produce images suitable for human interpretation from source data; 2, automated image labeling and annotation methods, including information extraction from the imaging report, electronic phenotyping, and prospective structured image reporting; 3, new machine learning methods for clinical imaging data, such as tailored, pretrained model architectures, and federated machine learning methods; 4, machine learning methods that can explain the advice they provide to human users (so-called explainable artificial intelligence); and 5, validated methods for image de-identification and data sharing to facilitate wide availability of clinical imaging data sets. This research roadmap is intended to identify and prioritize these needs for academic research laboratories, funding agencies, professional societies, and industry.

Discrimination of Benign and Malignant Suspicious Breast Tumors Based on Semi-Quantitative DCE-MRI Parameters Employing Support Vector Machine

  • Lavasani, Saeedeh Navaei
  • Kazerooni, Anahita Fathi
  • Rad, Hamidreza Saligheh
  • Gity, Masoumeh
Frontiers in Biomedical Technologies 2015 Journal Article, cited 4 times
Website

Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network

  • Le, Trong-Ngoc
  • Bao, Pham The
  • Huynh, Hieu Trung
BioMed Research International 2016 Journal Article, cited 5 times
Website
Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the "ground truth." Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively.

Automatic GPU memory management for large neural models in TensorFlow

  • Le, Tung D.
  • Imai, Haruki
  • Negishi, Yasushi
  • Kawachiya, Kiyokuni
2019 Conference Proceedings, cited 0 times
Website
Deep learning models are becoming larger and will not fit in the limited memory of accelerators such as GPUs for training. Though many methods have been proposed to solve this problem, they are rather ad-hoc in nature and difficult to extend and integrate with other techniques. In this paper, we tackle the problem in a formal way to provide a strong foundation for supporting large models. We propose a method of formally rewriting the computational graph of a model where swap-out and swap-in operations are inserted to temporarily store intermediate results on CPU memory. By introducing a categorized topological ordering for simulating graph execution, the memory consumption of a model can be easily analyzed by using operation distances in the ordering. As a result, the problem of fitting a large model into a memory-limited accelerator is reduced to the problem of reducing operation distances in a categorized topological ordering. We then show how to formally derive swap-out and swap-in operations from an existing graph and present rules to optimize the graph. Finally, we propose a simulation-based auto-tuning to automatically find suitable graph-rewriting parameters for the best performance. We developed a module in TensorFlow, called LMS, by which we successfully trained ResNet-50 with a 4.9x larger mini-batch size and 3D U-Net with a 5.6x larger image resolution.

A Three-Dimensional-Printed Patient-Specific Phantom for External Beam Radiation Therapy of Prostate Cancer

  • Lee, Christopher L
  • Dietrich, Max C
  • Desai, Uma G
  • Das, Ankur
  • Yu, Suhong
  • Xiang, Hong F
  • Jaffe, C Carl
  • Hirsch, Ariel E
  • Bloch, B Nicolas
Journal of Engineering and Science in Medical Diagnostics and Therapy 2018 Journal Article, cited 0 times
Website

High quality imaging from sparsely sampled computed tomography data with deep learning and wavelet transform in various domains

  • Lee, Donghoon
  • Choi, Sunghoon
  • Kim, Hee‐Joung
Medical physics 2018 Journal Article, cited 0 times
Website

Restoration of Full Data from Sparse Data in Low-Dose Chest Digital Tomosynthesis Using Deep Convolutional Neural Networks

  • Lee, Donghoon
  • Kim, Hee-Joung
Journal of Digital Imaging 2018 Journal Article, cited 0 times
Website

Prognostic value and molecular correlates of a CT image-based quantitative pleural contact index in early stage NSCLC

  • Lee, Juheon
  • Cui, Yi
  • Sun, Xiaoli
  • Li, Bailiang
  • Wu, Jia
  • Li, Dengwang
  • Gensheimer, Michael F
  • Loo, Billy W
  • Diehn, Maximilian
  • Li, Ruijiang
European Radiology 2018 Journal Article, cited 3 times
Website
PURPOSE: To evaluate the prognostic value and molecular basis of a CT-derived pleural contact index (PCI) in early stage non-small cell lung cancer (NSCLC). EXPERIMENTAL DESIGN: We retrospectively analysed seven NSCLC cohorts. A quantitative PCI was defined on CT as the length of tumour-pleura interface normalised by tumour diameter. We evaluated the prognostic value of PCI in a discovery cohort (n = 117) and tested it in an external cohort (n = 88) of stage I NSCLC. Additionally, we identified the molecular correlates and built a gene expression-based surrogate of PCI using another cohort of 89 patients. To further evaluate the prognostic relevance, we used four datasets totalling 775 stage I patients with publicly available gene expression data and linked survival information. RESULTS: At a cutoff of 0.8, PCI stratified patients for overall survival in both imaging cohorts (log-rank p = 0.0076, 0.0304). Extracellular matrix (ECM) remodelling was enriched among genes associated with PCI (p = 0.0003). The genomic surrogate of PCI remained an independent predictor of overall survival in the gene expression cohorts (hazard ratio: 1.46, p = 0.0007) adjusting for age, gender, and tumour stage. CONCLUSIONS: CT-derived pleural contact index is associated with ECM remodelling and may serve as a noninvasive prognostic marker in early stage NSCLC. KEY POINTS: * A quantitative pleural contact index (PCI) predicts survival in early stage NSCLC. * PCI is associated with extracellular matrix organisation and collagen catabolic process. * A multi-gene surrogate of PCI is an independent predictor of survival. * PCI can be used to noninvasively identify patients with poor prognosis.
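
The pleural contact index defined above can be approximated on a single CT slice from binary masks. The sketch below is an illustrative approximation (pixel counting for interface length, bounding-box extent for diameter), not the authors' measurement protocol; `tumor` and `pleura` are assumed to be aligned boolean masks.

```python
import numpy as np
from scipy import ndimage

def pleural_contact_index(tumor: np.ndarray, pleura: np.ndarray,
                          spacing: float = 1.0) -> float:
    """Approximate PCI: tumor boundary pixels touching the pleura / tumor diameter."""
    boundary = tumor & ~ndimage.binary_erosion(tumor)     # tumor surface pixels
    near_pleura = ndimage.binary_dilation(pleura)         # pleura plus a 1-pixel margin
    contact_len = np.count_nonzero(boundary & near_pleura) * spacing
    ys, xs = np.nonzero(tumor)
    diameter = (max(ys.ptp(), xs.ptp()) + 1) * spacing    # crude bounding-box diameter
    return contact_len / diameter
```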

Spatial Habitat Features Derived from Multiparametric Magnetic Resonance Imaging Data Are Associated with Molecular Subtype and 12-Month Survival Status in Glioblastoma Multiforme

  • Lee, Joonsang
  • Narang, Shivali
  • Martinez, Juan
  • Rao, Ganesh
  • Rao, Arvind
PLoS One 2015 Journal Article, cited 14 times
Website

The Impact of Obesity on Tumor Glucose Uptake in Breast and Lung Cancer

  • Leitner, Brooks P.
  • Perry, Rachel J.
JNCI Cancer Spectrum 2020 Journal Article, cited 0 times
Website
Obesity confers an increased incidence and poorer clinical prognosis in over ten cancer types. Paradoxically, obesity provides protection from poor outcomes in lung cancer. Mechanisms for the obesity-cancer links are not fully elucidated, with altered glucose metabolism being a promising candidate. Using 18F-Fluorodeoxyglucose positron-emission-tomography/computed-tomography images from The Cancer Imaging Archive, we explored the relationship between body mass index (BMI) and glucose metabolism in several cancers. In 188 patients (BMI: 27.7, SD = 5.1, Range = 17.4-49.3 kg/m2), higher BMI was associated with greater tumor glucose uptake in obesity-associated breast cancer (r = 0.36, p = 0.02), and with lower tumor glucose uptake in non-small-cell lung cancer (r = -0.26, p = 0.048) using two-sided Pearson correlations. No relationship was observed in soft tissue sarcoma or squamous cell carcinoma. Harnessing The National Cancer Institute’s open-access database, we demonstrate altered tumor glucose metabolism as a potential mechanism for the detrimental and protective effects of obesity on breast and lung cancer, respectively.

Automated Segmentation of Prostate MR Images Using Prior Knowledge Enhanced Random Walker

  • Li, Ang
  • Li, Changyang
  • Wang, Xiuying
  • Eberl, Stefan
  • Feng, David Dagan
  • Fulham, Michael
2013 Conference Proceedings, cited 9 times
Website

Low-Dose CT streak artifacts removal using deep residual neural network

  • Li, Heyi
  • Mueller, Klaus
2017 Conference Proceedings, cited 6 times
Website

MR Imaging Radiomics Signatures for Predicting the Risk of Breast Cancer Recurrence as Given by Research Versions of MammaPrint, Oncotype DX, and PAM50 Gene Assays

  • Li, Hui
  • Zhu, Yitan
  • Burnside, Elizabeth S
  • Drukker, Karen
  • Hoadley, Katherine A
  • Fan, Cheng
  • Conzen, Suzanne D
  • Whitman, Gary J
  • Sutton, Elizabeth J
  • Net, Jose M
Radiology 2016 Journal Article, cited 103 times
Website

Quantitative MRI radiomics in the prediction of molecular classifications of breast cancer subtypes in the TCGA/TCIA data set

  • Li, Hui
  • Zhu, Yitan
  • Burnside, Elizabeth S
  • Huang, Erich
  • Drukker, Karen
  • Hoadley, Katherine A
  • Fan, Cheng
  • Conzen, Suzanne D
  • Zuley, Margarita
  • Net, Jose M
npj Breast Cancer 2016 Journal Article, cited 63 times
Website

Patient-specific biomechanical model as whole-body CT image registration tool

  • Li, Mao
  • Miller, Karol
  • Joldes, Grand Roman
  • Doyle, Barry
  • Garlapati, Revanth Reddy
  • Kikinis, Ron
  • Wittek, Adam
Medical Image Analysis 2015 Journal Article, cited 15 times
Website

Biomechanical model for computing deformations for whole‐body image registration: A meshless approach

  • Li, Mao
  • Miller, Karol
  • Joldes, Grand Roman
  • Kikinis, Ron
  • Wittek, Adam
International Journal for Numerical Methods in Biomedical Engineering 2016 Journal Article, cited 13 times
Website

A Fully-Automatic Multiparametric Radiomics Model: Towards Reproducible and Prognostic Imaging Signature for Prediction of Overall Survival in Glioblastoma Multiforme

  • Li, Qihua
  • Bai, Hongmin
  • Chen, Yinsheng
  • Sun, Qiuchang
  • Liu, Lei
  • Zhou, Sijie
  • Wang, Guoliang
  • Liang, Chaofeng
  • Li, Zhi-Cheng
Scientific Reports 2017 Journal Article, cited 9 times
Website

Comparison Between Radiological Semantic Features and Lung-RADS in Predicting Malignancy of Screen-Detected Lung Nodules in the National Lung Screening Trial

  • Li, Qian
  • Balagurunathan, Yoganand
  • Liu, Ying
  • Qi, Jin
  • Schabath, Matthew B
  • Ye, Zhaoxiang
  • Gillies, Robert J
Clinical lung cancer 2017 Journal Article, cited 3 times
Website

Large-scale retrieval for medical image analytics: A comprehensive review

  • Li, Zhongyu
  • Zhang, Xiaofan
  • Müller, Henning
  • Zhang, Shaoting
Medical Image Analysis 2018 Journal Article, cited 23 times
Website

Evaluate the Malignancy of Pulmonary Nodules Using the 3D Deep Leaky Noisy-OR Network

  • Liao, Fangzhou
  • Liang, Ming
  • Li, Zhe
  • Hu, Xiaolin
  • Song, Sen
IEEE Trans Neural Netw Learn Syst 2017 Journal Article, cited 15 times
Website
Automatically diagnosing lung cancer from computed tomography scans involves two steps: detecting all suspicious lesions (pulmonary nodules) and evaluating whole-lung/pulmonary malignancy. Currently, there are many studies about the first step, but few about the second. Since the existence of a nodule does not definitively indicate cancer, and the morphology of a nodule has a complicated relationship with cancer, the diagnosis of lung cancer demands careful investigation of every suspicious nodule and integration of information across all nodules. We propose a 3-D deep neural network to solve this problem. The model consists of two modules. The first one is a 3-D region proposal network for nodule detection, which outputs all suspicious nodules for a subject. The second one selects the top five nodules based on the detection confidence, evaluates their cancer probabilities, and combines them with a leaky noisy-OR gate to obtain the probability of lung cancer for the subject. The two modules share the same backbone network, a modified U-net. The overfitting caused by the shortage of training data is alleviated by training the two modules alternately. The proposed model won first place in the Data Science Bowl 2017 competition.
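
The leaky noisy-OR combination described above reduces to a one-line formula once the per-nodule probabilities are available. A minimal sketch, with `p_nodules` and `p_leak` as illustrative names for the detector outputs and the learned leak probability:

```python
import numpy as np

def leaky_noisy_or(p_nodules, p_leak: float = 0.01) -> float:
    """Subject-level cancer probability: 1 - (1 - leak) * prod(1 - p_i)."""
    return 1.0 - (1.0 - p_leak) * np.prod(1.0 - np.asarray(p_nodules))

# e.g. top-5 nodule probabilities -> subject-level probability
print(leaky_noisy_or([0.10, 0.05, 0.30, 0.02, 0.01]))
```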

Promises and challenges for the implementation of computational medical imaging (radiomics) in oncology

  • Limkin, EJ
  • Sun, R
  • Dercle, L
  • Zacharaki, EI
  • Robert, C
  • Reuzé, S
  • Schernberg, A
  • Paragios, N
  • Deutsch, E
  • Ferté, C
Annals of Oncology 2017 Journal Article, cited 49 times
Website

A radiogenomics signature for predicting the clinical outcome of bladder urothelial carcinoma

  • Lin, Peng
  • Wen, Dong-Yue
  • Chen, Ling
  • Li, Xin
  • Li, Sheng-Hua
  • Yan, Hai-Biao
  • He, Rong-Quan
  • Chen, Gang
  • He, Yun
  • Yang, Hong
Eur Radiol 2019 Journal Article, cited 0 times
Website
OBJECTIVES: To determine the integrative value of contrast-enhanced computed tomography (CECT), transcriptomics data and clinicopathological data for predicting the survival of bladder urothelial carcinoma (BLCA) patients. METHODS: RNA sequencing data, radiomics features and clinical parameters of 62 BLCA patients were included in the study. Then, prognostic signatures based on radiomics features and gene expression profiles were constructed by using least absolute shrinkage and selection operator (LASSO) Cox analysis. A multi-omics nomogram was developed by integrating radiomics, transcriptomics and clinicopathological data. More importantly, radiomics risk score-related genes were identified via weighted correlation network analysis and submitted to functional enrichment analysis. RESULTS: The radiomics and transcriptomics signatures significantly stratified BLCA patients into high- and low-risk groups in terms of the progression-free interval (PFI). The two risk models remained independent prognostic factors in multivariate analyses after adjusting for clinical parameters. A nomogram was developed and showed an excellent predictive ability for the PFI in BLCA patients. Functional enrichment analysis suggested that the radiomics signature we developed could reflect the angiogenesis status of BLCA patients. CONCLUSIONS: The integrative nomogram incorporating CECT radiomics, transcriptomics and clinical features improved the PFI prediction in BLCA patients and is a feasible and practical reference for oncological precision medicine. KEY POINTS: * Our radiomics and transcriptomics models are proved robust for survival prediction in bladder urothelial carcinoma patients. * A multi-omics nomogram model which integrates radiomics, transcriptomics and clinical features for prediction of progression-free interval in bladder urothelial carcinoma is established. * Molecular functional enrichment analysis is used to reveal the potential molecular function of radiomics signature.
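
The LASSO Cox step described above can be sketched with the lifelines library (an L1-penalized Cox fit returning per-patient risk scores). Column names below are assumptions, and the authors' actual implementation may differ.

```python
import pandas as pd
from lifelines import CoxPHFitter

def lasso_cox_risk_score(df: pd.DataFrame, duration_col: str = "pfi_months",
                         event_col: str = "progressed", penalizer: float = 0.1):
    """Fit an L1-penalized (LASSO-like) Cox model and return per-patient risk scores."""
    cph = CoxPHFitter(penalizer=penalizer, l1_ratio=1.0)   # l1_ratio=1.0 -> pure L1 penalty
    cph.fit(df, duration_col=duration_col, event_col=event_col)
    covariates = df.drop(columns=[duration_col, event_col])
    risk = cph.predict_partial_hazard(covariates)          # exp(linear predictor)
    return cph, risk
```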

The Current Role of Image Compression Standards in Medical Imaging

  • Liu, Feng
  • Hernandez-Cabronero, Miguel
  • Sanchez, Victor
  • Marcellin, Michael W
  • Bilgin, Ali
Information 2017 Journal Article, cited 4 times
Website

Machine Learning Models on Prognostic Outcome Prediction for Cancer Images with Multiple Modalities

  • Liu, Gengbo
2019 Thesis, cited 0 times
Website
Machine learning algorithms have been applied to predict different prognostic outcomes for many different diseases by directly using medical images. However, the higher resolution of various medical imaging modalities and new imaging feature extraction frameworks bring new challenges for predicting prognostic outcomes. Compared to traditional radiology practice, which is based only on visual interpretation and simple quantitative measurements, medical imaging features can dig deeper within medical images and potentially provide further objective support for clinical decisions. In this dissertation, we cover three projects that apply or design machine learning models for predicting prognostic outcomes using various types of medical images.

DL-MRI: A Unified Framework of Deep Learning-Based MRI Super Resolution

  • Liu, Huanyu
  • Liu, Jiaqi
  • Li, Junbao
  • Pan, Jeng-Shyang
  • Yu, Xiaqiong
  • Lu, Hao Chun
Journal of Healthcare Engineering 2021 Journal Article, cited 0 times
Website
Magnetic resonance imaging (MRI) is widely used in the detection and diagnosis of diseases. High-resolution MR images help doctors locate lesions and diagnose diseases. However, the acquisition of high-resolution MR images requires high magnetic field intensity and long scanning times, which bring discomfort to patients and easily introduce motion artifacts, resulting in image quality degradation; the resolution achievable by hardware imaging has therefore reached its limit. Based on this situation, a unified framework based on deep learning super resolution is proposed to transfer state-of-the-art deep learning methods from natural images to MRI super resolution. Compared with traditional image super-resolution methods, deep learning super-resolution methods have stronger feature extraction and characterization ability, can learn prior knowledge from a large number of sample data, and achieve a more stable and excellent image reconstruction effect. We propose a unified framework of deep learning-based MRI super resolution, which incorporates five current deep learning methods with the best super-resolution performance. In addition, a high-/low-resolution MR image dataset with scales of ×2, ×3, and ×4 was constructed, covering four body parts: the skull, knee, breast, and head and neck. Experimental results show that the proposed unified framework of deep learning super resolution achieves a better reconstruction effect on the data than traditional methods and provides a standard dataset and experimental benchmark for the application of deep learning super resolution to MR images.

Multi-subtype classification model for non-small cell lung cancer based on radiomics: SLS model

  • Liu, J.
  • Cui, J.
  • Liu, F.
  • Yuan, Y.
  • Guo, F.
  • Zhang, G.
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: Histological subtypes of non-small cell lung cancer (NSCLC) are crucial for systematic treatment decisions. However, the current studies which used non-invasive radiomic methods to classify NSCLC histology subtypes mainly focused on two main subtypes, squamous cell carcinoma (SCC) and adenocarcinoma (ADC), while multi-subtype classifications that included the other two subtypes of NSCLC, large cell carcinoma (LCC) and not otherwise specified (NOS), were very few in previous studies. The aim of this work is to establish a multi-subtype classification model for the four main subtypes of NSCLC and improve the classification performance and generalization ability compared with previous studies. METHODS: In this work, we extracted 1029 features from regions of interest in computed tomography (CT) images of 349 patients from two different datasets using radiomic methods. Based on a 'three-in-one' concept, we proposed a model called SLS wrapping three algorithms, synthetic minority oversampling technique, l2,1-norm minimization, and support vector machines, into one hybrid technique to classify the four main subtypes of NSCLC: SCC, ADC, LCC and NOS, which cover the whole range of NSCLC. RESULTS: We analyzed the 247 features obtained by dimension reduction, and found that the features extracted by three methods, first order statistics, gray level co-occurrence matrix, and gray level size zone matrix, were more conducive to the classification of NSCLC subtypes. The proposed SLS model achieved an average classification accuracy of 0.89 on the training set (95% confidence interval [CI]: 0.846 to 0.912) and a classification accuracy of 0.86 on the test set (95% CI: 0.779 to 0.941). CONCLUSIONS: The experimental results showed that the subtypes of NSCLC can be well classified by the radiomic method. Our SLS model can accurately classify and diagnose the four subtypes of NSCLC based on CT images, and thus has the potential to be used in clinical practice to provide valuable information for lung cancer treatment and further promote personalized medicine.
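
A hedged sketch of the 'three-in-one' idea (oversampling, feature selection, and an SVM chained into one pipeline) using imbalanced-learn and scikit-learn; note that the l2,1-norm feature selection of the SLS model is replaced here by a generic univariate selector, so this is an analogous pipeline rather than the authors' method.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def sls_like_pipeline(k_features: int = 50) -> Pipeline:
    """Oversample minority subtypes, select features, and classify with an SVM."""
    return Pipeline([
        ("scale", StandardScaler()),
        ("smote", SMOTE(random_state=0)),                    # balance SCC/ADC/LCC/NOS classes
        ("select", SelectKBest(f_classif, k=k_features)),    # stand-in for l2,1-norm selection
        ("svm", SVC(kernel="rbf", C=1.0)),
    ])

# Usage (X: radiomics feature matrix, y: subtype labels, both assumed precomputed):
# from sklearn.model_selection import StratifiedKFold, cross_val_score
# acc = cross_val_score(sls_like_pipeline(), X, y,
#                       cv=StratifiedKFold(5, shuffle=True, random_state=0)).mean()
```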

Synthetic minority image over-sampling technique: How to improve AUC for glioblastoma patient survival prediction

  • Liu, Renhao
  • Hall, Lawrence O.
  • Bowyer, Kevin W.
  • Goldgof, Dmitry B.
  • Gatenby, Robert
  • Ben Ahmed, Kaoutar
2017 Conference Proceedings, cited 3 times
Website
Real-world datasets are often imbalanced, with an important class having many fewer examples than other classes. In medical data, normal examples typically greatly outnumber disease examples. A classifier learned from imbalanced data will tend to be very good at predicting examples in the larger (normal) class, yet the smaller (disease) class is typically of more interest. Imbalance is usually dealt with at the feature-vector level (creating synthetic feature vectors or discarding some examples from the larger class) or by assigning differential costs to errors. Here, we introduce a novel method for over-sampling minority class examples at the image level, rather than the feature-vector level. Our method was applied to the problem of Glioblastoma patient survival group prediction. Synthetic minority class examples were created by adding Gaussian noise to original medical images from the minority class. Uniform local binary pattern (LBP) histogram features were then extracted from the original and synthetic image examples and classified with a random forests classifier. Experimental results show the new method (Image SMOTE) increased minority class predictive accuracy and also the AUC (area under the receiver operating characteristic curve), compared to using the imbalanced dataset directly or to creating synthetic feature vectors.
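
A minimal sketch of the image-level over-sampling idea described above: noisy copies of minority-class images are generated, uniform LBP histograms are extracted, and a random forest is trained. The noise level and LBP parameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(img: np.ndarray, p: int = 8, r: int = 1) -> np.ndarray:
    """Uniform LBP histogram feature vector for a 2-D grayscale image."""
    lbp = local_binary_pattern(img, p, r, method="uniform")
    hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def oversample_images(images, n_copies: int = 3, sigma: float = 0.02, seed: int = 0):
    """Image-level over-sampling: noisy copies of minority-class images (Image SMOTE idea)."""
    rng = np.random.default_rng(seed)
    return [img + rng.normal(0.0, sigma, img.shape)
            for img in images for _ in range(n_copies)]

# Usage sketch (minority_images / majority_images are lists of 2-D arrays, assumed):
# from sklearn.ensemble import RandomForestClassifier
# synth = oversample_images(minority_images)
# X = np.array([lbp_histogram(im) for im in majority_images + minority_images + synth])
# y = np.array([0] * len(majority_images) + [1] * (len(minority_images) + len(synth)))
# clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```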

A radiomic signature as a non-invasive predictor of progression-free survival in patients with lower-grade gliomas

  • Liu, Xing
  • Li, Yiming
  • Qian, Zenghui
  • Sun, Zhiyan
  • Xu, Kaibin
  • Wang, Kai
  • Liu, Shuai
  • Fan, Xing
  • Li, Shaowu
  • Zhang, Zhong
NeuroImage: Clinical 2018 Journal Article, cited 0 times
Website

Molecular profiles of tumor contrast enhancement: A radiogenomic analysis in anaplastic gliomas

  • Liu, Xing
  • Li, Yiming
  • Sun, Zhiyan
  • Li, Shaowu
  • Wang, Kai
  • Fan, Xing
  • Liu, Yuqing
  • Wang, Lei
  • Wang, Yinyan
  • Jiang, Tao
Cancer medicine 2018 Journal Article, cited 0 times
Website

JOURNAL CLUB: Computer-Aided Detection of Lung Nodules on CT With a Computerized Pulmonary Vessel Suppressed Function

  • Lo, ShihChung B
  • Freedman, Matthew T
  • Gillis, Laura B
  • White, Charles S
  • Mun, Seong K
American Journal of Roentgenology 2018 Journal Article, cited 4 times
Website

Effect of Imaging Parameter Thresholds on MRI Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer Subtypes

  • Lo, Wei-Ching
  • Li, Wen
  • Jones, Ella F
  • Newitt, David C
  • Kornak, John
  • Wilmes, Lisa J
  • Esserman, Laura J
  • Hylton, Nola M
PLoS One 2016 Journal Article, cited 7 times
Website

Brain tumor segmentation using morphological processing and the discrete wavelet transform

  • Lojzim, Joshua Michael
  • Fries, Marcus
Journal of Young Investigators 2017 Journal Article, cited 0 times
Website

Machine Learning-Based Radiomics for Molecular Subtyping of Gliomas

  • Lu, Chia-Feng
  • Hsu, Fei-Ting
  • Hsieh, Kevin Li-Chun
  • Kao, Yu-Chieh Jill
  • Cheng, Sho-Jen
  • Hsu, Justin Bo-Kai
  • Tsai, Ping-Huei
  • Chen, Ray-Jade
  • Huang, Chao-Ching
  • Yen, Yun
Clinical Cancer Research 2018 Journal Article, cited 1 times
Website

A mathematical-descriptor of tumor-mesoscopic-structure from computed-tomography images annotates prognostic- and molecular-phenotypes of epithelial ovarian cancer

  • Lu, Haonan
  • Arshad, Mubarik
  • Thornton, Andrew
  • Avesani, Giacomo
  • Cunnea, Paula
  • Curry, Ed
  • Kanavati, Fahdi
  • Liang, Jack
  • Nixon, Katherine
  • Williams, Sophie T.
  • Hassan, Mona Ali
  • Bowtell, David D. L.
  • Gabra, Hani
  • Fotopoulou, Christina
  • Rockall, Andrea
  • Aboagye, Eric O.
Nature Communications 2019 Journal Article, cited 0 times
Website
The five-year survival rate of epithelial ovarian cancer (EOC) is approximately 35-40% despite maximal treatment efforts, highlighting a need for stratification biomarkers for personalized treatment. Here we extract 657 quantitative mathematical descriptors from the preoperative CT images of 364 EOC patients at their initial presentation. Using machine learning, we derive a non-invasive summary-statistic of the primary ovarian tumor based on 4 descriptors, which we name "Radiomic Prognostic Vector" (RPV). RPV reliably identifies the 5% of patients with median overall survival less than 2 years, significantly improves established prognostic methods, and is validated in two independent, multi-center cohorts. Furthermore, genetic, transcriptomic and proteomic analysis from two independent datasets elucidate that stromal phenotype and DNA damage response pathways are activated in RPV-stratified tumors. RPV and its associated analysis platform could be exploited to guide personalized therapy of EOC and is potentially transferrable to other cancer types.

Study on Prognosis Factors of Non-Small Cell Lung Cancer Based on CT Image Features

  • Lu, Xiaoteng
  • Gong, Jing
  • Nie, Shengdong
Journal of Medical Imaging and Health Informatics 2019 Journal Article, cited 0 times
This study aims to investigate the prognostic factors of non-small cell lung cancer (NSCLC) based on CT image features and to develop a new quantitative image-feature prognosis approach using CT images. Firstly, lung tumors were segmented and image features were extracted. Secondly, the Kaplan-Meier method was used to perform univariate survival analysis, and multivariate survival analysis was carried out with a Cox regression model. Thirdly, the SMOTE algorithm was used to balance the feature data. Finally, classifiers based on WEKA were established to test the prognostic ability of the independent prognostic factors. Univariate analysis showed that six features had a significant influence on patients' prognosis. After multivariate analysis, angular second moment, srhge and volume were significantly related to the survival of NSCLC patients (P < 0.05). According to the classifier results, these three features could prognosticate NSCLC well. The best classification accuracy was 78.4%. The results of our study suggest that angular second moment, srhge and volume are potential independent prognostic factors of NSCLC.

vPSNR: a visualization-aware image fidelity metric tailored for diagnostic imaging

  • Lundström, Claes
International journal of computer assisted radiology and surgery 2013 Journal Article, cited 0 times
Website
Purpose Often, the large amounts of data generated in diagnostic imaging cause overload problems for IT systems and radiologists. This entails a need of effective use of data reduction beyond lossless levels, which, in turn, underlines the need to measure and control the image fidelity. Existing image fidelity metrics, however, fail to fully support important requirements from a modern clinical context: support for high-dimensional data, visualization awareness, and independence from the original data. Methods We propose an image fidelity metric, called the visual peak signal-to-noise ratio (vPSNR), fulfilling the three main requirements. A series of image fidelity tests on CT data sets is employed. The impact of visualization transform (grayscale window) on diagnostic quality of irreversibly compressed data sets is evaluated through an observer-based study. In addition, several tests were performed demonstrating the benefits, limitations, and characteristics of vPSNR in different data reduction scenarios. Results The visualization transform has a significant impact on diagnostic quality, and the vPSNR is capable of representing this effect. Moreover, the tests establish that the vPSNR is broadly applicable. Conclusions vPSNR fills a gap not served by existing image fidelity metrics, relevant for the clinical context. While vPSNR alone cannot fulfill all image fidelity needs, it can be a useful complement in a wide range of scenarios.
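
The core idea of vPSNR, measuring fidelity after the visualization transform rather than on raw data, can be illustrated by applying a grayscale window to both the reference and the compressed image before computing PSNR. The sketch below is a simplified illustration of that idea, not the exact vPSNR definition from the paper; the window parameters are placeholders.

```python
import numpy as np

def apply_window(img_hu: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map CT values through a grayscale window to the displayed 8-bit range."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return np.clip((img_hu - lo) / (hi - lo), 0.0, 1.0) * 255.0

def windowed_psnr(ref_hu, test_hu, center: float = 40.0, width: float = 400.0) -> float:
    """PSNR computed on windowed (display-domain) images instead of raw CT numbers."""
    a, b = apply_window(ref_hu, center, width), apply_window(test_hu, center, width)
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```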

Automatic lung nodule classification with radiomics approach

  • Ma, Jingchen
  • Wang, Qian
  • Ren, Yacheng
  • Hu, Haibo
  • Zhao, Jun
2016 Conference Proceedings, cited 10 times
Website

Harmonizing the pixel size in retrospective computed tomography radiomics studies

  • Mackin, Dennis
  • Fave, Xenia
  • Zhang, Lifei
  • Yang, Jinzhong
  • Jones, A Kyle
  • Ng, Chaan S
PLoS One 2017 Journal Article, cited 19 times
Website

Automatic Classification of Normal and Cancer Lung CT Images Using Multiscale AM-FM Features

  • Magdy, Eman
  • Zayed, Nourhan
  • Fakhr, Mahmoud
International Journal of Biomedical Imaging 2015 Journal Article, cited 6 times
Website

Scale-Space DCE-MRI Radiomics Analysis Based on Gabor Filters for Predicting Breast Cancer Therapy Response

  • Manikis, Georgios C.
  • Venianaki, Maria
  • Skepasianos, Iraklis
  • Papadakis, Georgios Z.
  • Maris, Thomas G.
  • Agelaki, Sofia
  • Karantanas, Apostolos
  • Marias, Kostas
2019 Conference Paper, cited 0 times
Website
Radiomics-based studies have created an unprecedented momentum in computational medical imaging over the last years by significantly advancing and empowering correlational and predictive quantitative studies in numerous clinical applications. An important element of this exciting field of research, especially in oncology, is multi-scale texture analysis, since it can effectively describe tissue heterogeneity, which is highly informative for clinical diagnosis and prognosis. There are, however, several concerns regarding the plethora of radiomics features used in the literature, especially regarding their performance consistency across studies. Since many studies use software packages that yield multi-scale texture features, it makes sense to investigate the scale-space performance of candidate texture biomarkers under the hypothesis that significant texture markers may have a more persistent scale-space performance. To this end, this study proposes a methodology for the extraction of Gabor multi-scale and orientation texture DCE-MRI radiomics for predicting breast cancer complete response to neoadjuvant therapy. More specifically, a Gabor filter bank was created using four different orientations and ten different scales, and then first-order and second-order texture features were extracted for each scale-orientation data representation. The performance of all these features was evaluated under a generalized repeated cross-validation framework in a scale-space fashion using extreme gradient boosting classifiers.
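
A hedged sketch of the scale-orientation texture extraction described above: a Gabor filter bank is applied to an image and first-order statistics are collected per response. The frequencies and orientation count here are illustrative, not the study's exact ten-scale by four-orientation design.

```python
import numpy as np
from skimage.filters import gabor

def gabor_bank_features(img: np.ndarray,
                        frequencies=(0.05, 0.1, 0.2, 0.4),
                        n_orientations: int = 4) -> np.ndarray:
    """First-order statistics (mean, std) of Gabor magnitude responses per scale/orientation."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(img, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.std()])
    return np.asarray(feats)
```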

Measurement of smaller colon polyp in CT colonography images using morphological image processing

  • Manjunath, KN
  • Siddalingaswamy, PC
  • Prabhu, GK
International journal of computer assisted radiology and surgery 2017 Journal Article, cited 1 times
Website

A quantitative validation of segmented colon in virtual colonoscopy using image moments

  • Manjunath, K. N.
  • Prabhu, G. K.
  • Siddalingaswamy, P. C.
Biomedical Journal 2020 Journal Article, cited 1 times
Website
Background: Evaluation of the segmented colon is one of the challenges in Computed Tomography Colonography (CTC). The objective of the study was to measure the segmented colon accurately using image processing techniques. Methods: This was a retrospective study, and Institutional Ethical clearance was obtained for the secondary dataset. The technique was tested on 85 CTC datasets. CTC datasets of 100-120 kVp, 100 mA, and slice thickness (ST) of 1.25 and 2.5 mm were used for empirical testing. The initial results of the work appear in the conference proceedings. Post colon segmentation, three distance measurement techniques and one volumetric overlap computation were applied in Euclidean space, in which the distances were measured on MPR views of the segmented and unsegmented colons and the volumetric overlap was calculated between these two volumes. Results: The key finding was that the measurements on both the segmented and the unsegmented volumes remain the same, with no notable difference; this was proved statistically. The results were validated quantitatively on 2D MPR images. An accuracy of 95.265 ± 0.4551% was achieved through volumetric overlap computation. Through a paired t-test at alpha = 5%, the statistical values were p = 0.6769 and t = 0.4169, which indicate no significant difference. Conclusion: The combination of different validation techniques was applied to check the robustness of the colon segmentation method, and good results were achieved with this approach. Through quantitative validation, the results were accepted at alpha = 5%.
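
The volumetric overlap and paired t-test reported here are standard computations; a minimal sketch with NumPy/SciPy follows, assuming boolean volumes and precomputed paired distance measurements. It illustrates the general calculation (a Jaccard-style overlap), not the study's exact protocol.

```python
import numpy as np
from scipy import stats

def volumetric_overlap(seg: np.ndarray, ref: np.ndarray) -> float:
    """Volumetric overlap (Jaccard) between two binary volumes, as a percentage."""
    inter = np.count_nonzero(seg & ref)
    union = np.count_nonzero(seg | ref)
    return 100.0 * inter / union

# Paired t-test between measurements on segmented vs. unsegmented volumes
# (two equal-length arrays of distances, assumed precomputed):
# t, p = stats.ttest_rel(dist_segmented, dist_unsegmented)
```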

Domain-Based Analysis of Colon Polyp in CT Colonography Using Image-Processing Techniques

  • Manjunath, K N
  • Siddalingaswamy, PC
  • Prabhu, GK
Asian Pacific Journal of Cancer Prevention 2019 Journal Article, cited 0 times
Website
Background: The purpose of the research was to improve polyp detection accuracy in CT Colonography (CTC) through effective colon segmentation, removal of tagged fecal matter through Electronic Cleansing (EC), and measurement of smaller polyps. Methods: An improved method of boundary-based semi-automatic colon segmentation with knowledge of colon distension, an adaptive multistep method for virtual cleansing of the segmented colon based on knowledge of Hounsfield Units, and an automated method of smaller polyp measurement using a skeletonization technique have been implemented. Results: The techniques were evaluated on 40 CTC datasets. The segmentation method was able to delineate the colon wall accurately. The submerged colonic structures were preserved without soft tissue erosion, pseudo-enhanced voxels were corrected, and the air-contrast layer was removed without losing the adjacent tissues. Smaller polyps were validated qualitatively and quantitatively. Segmented colons were validated through volumetric overlap computation, and an accuracy of 95.826±0.6854% was achieved. In polyp measurement, the paired t-test method was applied to compare the difference with ground truth, and at α=5%, t=0.9937 and p=0.098 were achieved. Statistical values of TPR=90%, TNR=82.3% and accuracy=88.31% were achieved. Conclusion: An automated system of polyp measurement, starting from colon segmentation, has been developed to improve existing CTC solutions. The analysis of the domain-based approach to polyps has given good results. Prototype software, which can be used as a low-cost polyp diagnosis tool, has been developed.

Tumor Growth in the Brain: Complexity and Fractality

  • Martín-Landrove, Miguel
  • Brú, Antonio
  • Rueda-Toicen, Antonio
  • Torres-Hoyos, Francisco
2016 Book Section, cited 1 times
Website

Computer-Assisted Decision Support System in Pulmonary Cancer Detection and Stage Classification on CT Images

  • Masood, Anum
  • Sheng, Bin
  • Li, Ping
  • Hou, Xuhong
  • Wei, Xiaoer
  • Qin, Jing
  • Feng, Dagan
Journal of biomedical informatics 2018 Journal Article, cited 10 times
Website

Automated Classification of Lung Diseases in Computed Tomography Images Using a Wavelet Based Convolutional Neural Network

  • Matsuyama, Eri
  • Tsai, Du-Yih
Journal of Biomedical Science and Engineering 2018 Journal Article, cited 0 times
Website

[18F] FDG Positron Emission Tomography (PET) Tumor and Penumbra Imaging Features Predict Recurrence in Non-Small Cell Lung Cancer

  • Mattonen, Sarah A.
  • Davidzon, Guido A.
  • Bakr, Shaimaa
  • Echegaray, Sebastian
  • Leung, Ann N. C.
  • Vasanawala, Minal
  • Horng, George
  • Napel, Sandy
  • Nair, Viswam S.
Tomography (Ann Arbor, Mich.) 2019 Journal Article, cited 0 times
Website
We identified computational imaging features on 18F-fluorodeoxyglucose positron emission tomography (PET) that predict recurrence/progression in non-small cell lung cancer (NSCLC). We retrospectively identified 291 patients with NSCLC from 2 prospectively acquired cohorts (training, n = 145; validation, n = 146). We contoured the metabolic tumor volume (MTV) on all pretreatment PET images and added a 3-dimensional penumbra region that extended outward 1 cm from the tumor surface. We generated 512 radiomics features, selected 435 features based on robustness to contour variations, and then applied randomized sparse regression (LASSO) to identify features that predicted time to recurrence in the training cohort. We built Cox proportional hazards models in the training cohort and independently evaluated the models in the validation cohort. Two features including stage and a MTV plus penumbra texture feature were selected by LASSO. Both features were significant univariate predictors, with stage being the best predictor (hazard ratio [HR] = 2.15 [95% confidence interval (CI): 1.56-2.95], P < .001). However, adding the MTV plus penumbra texture feature to stage significantly improved prediction (P = .006). This multivariate model was a significant predictor of time to recurrence in the training cohort (concordance = 0.74 [95% CI: 0.66-0.81], P < .001) that was validated in a separate validation cohort (concordance = 0.74 [95% CI: 0.67-0.81], P < .001). A combined radiomics and clinical model improved NSCLC recurrence prediction. FDG PET radiomic features may be useful biomarkers for lung cancer prognosis and add clinical utility for risk stratification.

Bone Marrow and Tumor Radiomics at (18)F-FDG PET/CT: Impact on Outcome Prediction in Non-Small Cell Lung Cancer

  • Mattonen, Sarah A
  • Davidzon, Guido A
  • Benson, Jalen
  • Leung, Ann N C
  • Vasanawala, Minal
  • Horng, George
  • Shrager, Joseph B
  • Napel, Sandy
  • Nair, Viswam S.
Radiology 2019 Journal Article, cited 0 times
Website
Background Primary tumor maximum standardized uptake value is a prognostic marker for non-small cell lung cancer. In the setting of malignancy, bone marrow activity from fluorine 18-fluorodeoxyglucose (FDG) PET may be informative for clinical risk stratification. Purpose To determine whether integrating FDG PET radiomic features of the primary tumor, tumor penumbra, and bone marrow identifies lung cancer disease-free survival more accurately than clinical features alone. Materials and Methods Patients were retrospectively analyzed from two distinct cohorts collected between 2008 and 2016. Each tumor, its surrounding penumbra, and bone marrow from the L3-L5 vertebral bodies was contoured on pretreatment FDG PET/CT images. There were 156 bone marrow and 512 tumor and penumbra radiomic features computed from the PET series. Randomized sparse Cox regression by least absolute shrinkage and selection operator identified features that predicted disease-free survival in the training cohort. Cox proportional hazards models were built and locked in the training cohort, then evaluated in an independent cohort for temporal validation. Results There were 227 patients analyzed; 136 for training (mean age, 69 years +/- 9 [standard deviation]; 101 men) and 91 for temporal validation (mean age, 72 years +/- 10; 91 men). The top clinical model included stage; adding tumor region features alone improved outcome prediction (log likelihood, -158 vs -152; P = .007). Adding bone marrow features continued to improve performance (log likelihood, -158 vs -145; P = .001). The top model integrated stage, two bone marrow texture features, one tumor with penumbra texture feature, and two penumbra texture features (concordance, 0.78; 95% confidence interval: 0.70, 0.85; P < .001). This fully integrated model was a predictor of poor outcome in the independent cohort (concordance, 0.72; 95% confidence interval: 0.64, 0.80; P < .001) and a binary score stratified patients into high and low risk of poor outcome (P < .001). Conclusion A model that includes pretreatment fluorine 18-fluorodeoxyglucose PET texture features from the primary tumor, tumor penumbra, and bone marrow predicts disease-free survival of patients with non-small cell lung cancer more accurately than clinical features alone. (c) RSNA, 2019 Online supplemental material is available for this article.

“One Stop Shop” for Prostate Cancer Staging using Imaging Biomarkers and Spatially Registered Multi-Parametric MRI

  • Mayer, Rulon
2020 Patent, cited 0 times
Website

Pilot study for supervised target detection applied to spatially registered multiparametric MRI in order to non-invasively score prostate cancer

  • Mayer, Rulon
  • Simone, Charles B
  • Skinner, William
  • Turkbey, Baris
  • Choyke, Peter
Computers in biology and medicine 2018 Journal Article, cited 0 times
Website
BACKGROUND: Gleason Score (GS) is a validated predictor of prostate cancer (PCa) disease progression and outcomes. GS from invasive needle biopsies suffers from significant inter-observer variability and possible sampling error, leading to underestimating disease severity ("underscoring"), and can result in possible complications. A robust non-invasive image-based approach is, therefore, needed. PURPOSE: To use spatially registered multi-parametric MRI (MP-MRI), signatures, and supervised target detection algorithms (STDA) to non-invasively determine the GS of PCa at the voxel level. METHODS AND MATERIALS: This study retrospectively analyzed 26 MP-MRI from The Cancer Imaging Archive. The MP-MRI (T2, Diffusion Weighted, Dynamic Contrast Enhanced) were spatially registered to each other, combined into stacks, and stitched together to form hypercubes. Multi-parametric (or multi-spectral) signatures derived from a training set of registered MP-MRI were transformed using statistics-based Whitening-Dewhitening (WD). Transformed signatures were inserted into the STDA (having conical decision surfaces), which was applied to the registered MP-MRI to determine the tumor GS. The MRI-derived GS was quantitatively compared to the pathologist's assessment of the histology of sectioned whole mount prostates from patients who underwent radical prostatectomy. In addition, a meta-analysis of 17 studies of needle-biopsy-determined GS with confusion matrices was compared to the MRI-determined GS. RESULTS: STDA- and histology-determined GS are highly correlated (R=0.86, p<0.02). STDA more accurately determined GS and reduced GS underscoring of PCa relative to needle biopsy, as summarized by the meta-analysis (p<0.05). CONCLUSION: This pilot study found that registered MP-MRI, STDA, and WD transforms of signatures show promise in non-invasively determining the GS of PCa and reducing underscoring with high spatial resolution.

Predicting outcomes in glioblastoma patients using computerized analysis of tumor shape: preliminary data

  • Mazurowski, Maciej A
  • Czarnek, Nicholas M
  • Collins, Leslie M
  • Peters, Katherine B
  • Clark, Kal L
2016 Conference Proceedings, cited 6 times
Website

Radiogenomic Analysis of Breast Cancer: Luminal B Molecular Subtype Is Associated with Enhancement Dynamics at MR Imaging

  • Mazurowski, Maciej A
  • Zhang, Jing
  • Grimm, Lars J
  • Yoon, Sora C
  • Silber, James I
Radiology 2014 Journal Article, cited 88 times
Website

Computer-extracted MR imaging features are associated with survival in glioblastoma patients

  • Mazurowski, Maciej A
  • Zhang, Jing
  • Peters, Katherine B
  • Hobbs, Hasan
Journal of neuro-oncology 2014 Journal Article, cited 33 times
Website
Automatic survival prognosis in glioblastoma (GBM) could result in improved treatment planning for the patient. The purpose of this research is to investigate the association of survival in GBM patients with tumor features in pre-operative magnetic resonance (MR) images assessed using a fully automatic computer algorithm. MR imaging data for 68 patients from two US institutions were used in this study. The images were obtained from the Cancer Imaging Archive. A fully automatic computer vision algorithm was applied to segment the images and extract eight imaging features from the MRI studies. The features included tumor side, proportion of enhancing tumor, proportion of necrosis, T1/FLAIR ratio, major axis length, minor axis length, tumor volume, and thickness of enhancing margin. We constructed a multivariate Cox proportional hazards regression model and used a likelihood ratio test to establish whether the imaging features are prognostic of survival. We also evaluated the individual prognostic value of each feature through multivariate analysis using the multivariate Cox model and univariate analysis using univariate Cox models for each feature. We found that the automatically extracted imaging features were predictive of survival (p = 0.031). Multivariate analysis of individual features showed that two individual features were predictive of survival: proportion of enhancing tumor (p = 0.013), and major axis length (p = 0.026). Univariate analysis indicated the same two features as significant (p = 0.021, and p = 0.017 respectively). We conclude that computer-extracted MR imaging features can be used for survival prognosis in GBM patients.
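As a concrete illustration of the survival analysis described above, the following is a minimal sketch (not the authors' code) of fitting a multivariate Cox proportional hazards model on imaging features with the lifelines library and reading off the likelihood ratio test; the feature names and toy data are assumptions made for the example.

```python
# Minimal sketch of a multivariate Cox proportional hazards model on imaging
# features, in the spirit of the study above (toy data, not the authors' code).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 68
df = pd.DataFrame({
    "enhancing_proportion": rng.uniform(0, 1, n),    # hypothetical imaging feature
    "major_axis_length_mm": rng.uniform(20, 80, n),  # hypothetical imaging feature
    "survival_months": rng.exponential(15, n),       # placeholder follow-up times
    "event_observed": rng.integers(0, 2, n),         # placeholder event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="event_observed")
cph.print_summary()                          # per-feature hazard ratios and p-values
print(cph.log_likelihood_ratio_test())       # overall model significance
```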

Quantitative Multiparametric MRI Features and PTEN Expression of Peripheral Zone Prostate Cancer: A Pilot Study

  • McCann, Stephanie M
  • Jiang, Yulei
  • Fan, Xiaobing
  • Wang, Jianing
  • Antic, Tatjana
  • Prior, Fred
  • VanderWeele, David
  • Oto, Aytekin
American Journal of Roentgenology 2016 Journal Article, cited 11 times
Website

Equipment to Address Infrastructure and Human Resource Challenges for Radiotherapy in Low-Resource Settings

  • McCarroll, Rachel
2018 Thesis, cited 0 times
Website

Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

  • Meier, Raphael
  • Knecht, Urspeter
  • Loosli, Tina
  • Bauer, Stefan
  • Slotboom, Johannes
  • Wiest, Roland
  • Reyes, Mauricio
Scientific Reports 2016 Journal Article, cited 26 times
Website

More accurate and efficient segmentation of organs‐at‐risk in radiotherapy with Convolutional Neural Networks Cascades

  • Men, Kuo
  • Geng, Huaizhi
  • Cheng, Chingyun
  • Zhong, Haoyu
  • Huang, Mi
  • Fan, Yong
  • Plastaras, John P
  • Lin, Alexander
  • Xiao, Ying
Medical physics 2018 Journal Article, cited 0 times
Website

Volumetric brain tumour detection from MRI using visual saliency

  • Mitra, Somosmita
  • Banerjee, Subhashis
  • Hayashi, Yoichi
PLoS One 2017 Journal Article, cited 2 times
Website
Medical image processing has become a major player in the world of automatic tumour region detection and is tantamount to the incipient stages of computer aided design. Saliency detection is a crucial application of medical image processing, and serves in its potential aid to medical practitioners by making the affected area stand out in the foreground from the rest of the background image. The algorithm developed here is a new approach to the detection of saliency in a three dimensional multi channel MR image sequence for the glioblastoma multiforme (a form of malignant brain tumour). First we enhance the three channels, FLAIR (Fluid Attenuated Inversion Recovery), T2 and T1C (contrast enhanced with gadolinium) to generate a pseudo coloured RGB image. This is then converted to the CIE L*a*b* color space. Processing on cubes of sizes k = 4, 8, 16, the L*a*b* 3D image is then compressed into volumetric units; each representing the neighbourhood information of the surrounding 64 voxels for k = 4, 512 voxels for k = 8 and 4096 voxels for k = 16, respectively. The spatial distance of these voxels are then compared along the three major axes to generate the novel 3D saliency map of a 3D image, which unambiguously highlights the tumour region. The algorithm operates along the three major axes to maximise the computation efficiency while minimising loss of valuable 3D information. Thus the 3D multichannel MR image saliency detection algorithm is useful in generating a uniform and logistically correct 3D saliency map with pragmatic applicability in Computer Aided Detection (CADe). Assignment of uniform importance to all three axes proves to be an important factor in volumetric processing, which helps in noise reduction and reduces the possibility of compromising essential information. The effectiveness of the algorithm was evaluated over the BRATS MICCAI 2015 dataset having 274 glioma cases, consisting both of high grade and low grade GBM. The results were compared with that of the 2D saliency detection algorithm taken over the entire sequence of brain data. For all comparisons, the Area Under the receiver operator characteristic (ROC) Curve (AUC) has been found to be more than 0.99 ± 0.01 over various tumour types, structures and locations.

Image Fusion Based Lung Nodule Detection Using Structural Similarity and MAX Rule

  • Mohana, P
  • Venkatesan, P
International Journal of Advances in Signal and Image Sciences 2019 Journal Article, cited 0 times
Website
The uncontrollable cells in the lungs are the main cause of lung cancer that reduces the ability to breathe. In this study, fusion of Computed Tomography (CT) lung image and Positron Emission Tomography (PET) lung image using their structural similarity is presented. The fused image has more information compared to individual CT and PET lung images which helps radiologists to make decision quickly. Initially, the CT and PET images are divided into blocks of predefined size in an overlapping manner. The structural similarity between each block of CT and PET are computed for fusion. Image fusion is performed using a combination of structural similarity and MAX rule. If the structural similarity between CT and PET block is greater than a particular threshold, the MAX rule is applied; otherwise the pixel intensities in CT image are used. A simple thresholding approach is employed to detect the lung nodule from the fused image. The qualitative analyses show that the fusion approach provides more information with accurate detection of lung nodules.
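The fusion rule described above lends itself to a short sketch. The block size and similarity threshold below are illustrative assumptions, and scikit-image's structural_similarity stands in for whatever SSIM implementation the authors used.

```python
# Minimal sketch of block-wise CT/PET fusion using structural similarity and a
# MAX rule, loosely following the approach described above (block size and
# threshold are assumptions; input arrays are random placeholders).
import numpy as np
from skimage.metrics import structural_similarity as ssim

def fuse_ct_pet(ct, pet, block=16, threshold=0.6):
    """Fuse two equally sized 2D images block by block."""
    fused = ct.astype(float).copy()
    pet = pet.astype(float)
    h, w = ct.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            c = fused[y:y + block, x:x + block]
            p = pet[y:y + block, x:x + block]
            rng = max(c.max() - c.min(), p.max() - p.min(), 1e-6)
            if ssim(c, p, data_range=rng) > threshold:
                # structurally similar blocks: keep the brighter voxel (MAX rule)
                fused[y:y + block, x:x + block] = np.maximum(c, p)
            # otherwise the CT block is kept unchanged
    return fused

ct = np.random.rand(128, 128)    # placeholder CT slice
pet = np.random.rand(128, 128)   # placeholder PET slice
fused = fuse_ct_pet(ct, pet)
```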

Automated grading of non-small cell lung cancer by fuzzy rough nearest neighbour method

  • Moitra, Dipanjan
  • Mandal, Rakesh Kr
Network Modeling Analysis in Health Informatics and Bioinformatics 2019 Journal Article, cited 0 times
Lung cancer is one of the most lethal diseases across the world. Most lung cancers belong to the category of non-small cell lung cancer (NSCLC). Many studies have so far been carried out to avoid the hazards and bias of manual classification of NSCLC tumors. A few of such studies were intended towards automated nodal staging using the standard machine learning algorithms. Many others tried to classify tumors as either benign or malignant. None of these studies considered the pathological grading of NSCLC. Automated grading may perfectly depict the dissimilarity between normal tissue and cancer affected tissue. Such automation may save patients from undergoing a painful biopsy and may also help radiologists or oncologists in grading the tumor or lesion correctly. The present study aims at the automated grading of NSCLC tumors using the fuzzy rough nearest neighbour (FRNN) method. The dataset was extracted from The Cancer Imaging Archive and it comprised PET/CT images of NSCLC tumors of 211 patients. Accelerated segment test (FAST) and histogram oriented gradients methods were used to detect and extract features from the segmented images. Gray level co-occurrence matrix (GLCM) features were also considered in the study. The features along with the clinical grading information were fed into four machine learning algorithms: FRNN, logistic regression, multi-layer perceptron, and support vector machine. The results were thoroughly compared in the light of various evaluation-metrics. The confusion matrix was found balanced, and the outcome was found more cost-effective for FRNN. Results were also compared with various other leading studies done earlier in this field. The proposed FRNN model performed satisfactorily during the experiment. Further exploration of FRNN may be very helpful for radiologists and oncologists in planning the treatment for NSCLC. More varieties of cancers may be considered while conducting similar studies.
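Of the feature families mentioned above, the GLCM texture features are straightforward to reproduce; a minimal sketch with scikit-image follows (the distances, angles and random stand-in ROI are assumptions made for illustration, not the study's settings).

```python
# Minimal sketch of extracting GLCM texture features from a tumour ROI, one of
# the feature families mentioned above (parameters are illustrative assumptions).
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

roi = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in for a segmented ROI

glcm = graycomatrix(roi,
                    distances=[1, 2],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```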

Automated AJCC (7th edition) staging of non-small cell lung cancer (NSCLC) using deep convolutional neural network (CNN) and recurrent neural network (RNN)

  • Moitra, Dipanjan
  • Mandal, Rakesh Kr
Health Inf Sci Syst 2019 Journal Article, cited 0 times
Website
Purpose: A large chunk of lung cancers are of the type non-small cell lung cancer (NSCLC). Both the treatment planning and patients' prognosis depend greatly on factors like AJCC staging which is an abstraction over TNM staging. Many significant efforts have so far been made towards automated staging of NSCLC, but the groundbreaking application of a deep neural networks (DNNs) is yet to be observed in this domain of study. DNN is capable of achieving higher level of accuracy than the traditional artificial neural networks (ANNs) as it uses deeper layers of convolutional neural network (CNN). The objective of the present study is to propose a simple yet fast CNN model combined with recurrent neural network (RNN) for automated AJCC staging of NSCLC and to compare the outcome with a few standard machine learning algorithms along with a few similar studies. Methods: The NSCLC radiogenomics collection from the cancer imaging archive (TCIA) dataset was considered for the study. The tumor images were refined and filtered by resizing, enhancing, de-noising, etc. The initial image processing phase was followed by texture based image segmentation. The segmented images were fed into a hybrid feature detection and extraction model which was comprised of two sequential phases: maximally stable extremal regions (MSER) and the speeded up robust features (SURF). After a prolonged experiment, the desired CNN-RNN model was derived and the extracted features were fed into the model. Results: The proposed CNN-RNN model almost outperformed the other machine learning algorithms under consideration. The accuracy remained steadily higher than the other contemporary studies. Conclusion: The proposed CNN-RNN model performed commendably during the study. Further studies may be carried out to refine the model and develop an improved auxiliary decision support system for oncologists and radiologists.
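For readers unfamiliar with the CNN + RNN pattern referred to above, the sketch below shows a small PyTorch model in which a convolutional encoder processes each slice of a tumour volume and an LSTM aggregates the slice features into a stage prediction. It is a generic illustration with assumed layer sizes, not the authors' architecture, which additionally feeds MSER/SURF features into the network.

```python
# Generic CNN + RNN staging sketch (illustrative sizes, not the authors' model).
import torch
import torch.nn as nn

class CnnRnnStager(nn.Module):
    def __init__(self, n_stages=4, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-slice encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, n_stages)

    def forward(self, volume):                          # volume: (batch, slices, 1, H, W)
        b, s, c, h, w = volume.shape
        feats = self.cnn(volume.reshape(b * s, c, h, w)).reshape(b, s, -1)
        _, (hidden, _) = self.rnn(feats)                # last hidden state summarises slices
        return self.head(hidden[-1])

model = CnnRnnStager()
logits = model(torch.rand(2, 16, 1, 64, 64))            # 2 volumes of 16 slices each
print(logits.shape)                                     # (2, 4) stage scores
```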

Prediction of Non-small Cell Lung Cancer Histology by a Deep Ensemble of Convolutional and Bidirectional Recurrent Neural Network

  • Moitra, Dipanjan
  • Mandal, Rakesh Kumar
Journal of Digital Imaging 2020 Journal Article, cited 0 times

Informatics in Radiology: An Open-Source and Open-Access Cancer Biomedical Informatics Grid Annotation and Image Markup Template Builder

  • Mongkolwat, Pattanasak
  • Channin, David S
  • Kleper, Vladimir
  • Rubin, Daniel L
Radiographics 2012 Journal Article, cited 15 times
Website
In a routine clinical environment or clinical trial, a case report form or structured reporting template can be used to quickly generate uniform and consistent reports. Annotation and image markup (AIM), a project supported by the National Cancer Institute's cancer biomedical informatics grid, can be used to collect information for a case report form or structured reporting template. AIM is designed to store, in a single information source, (a) the description of pixel data with use of markups or graphical drawings placed on the image, (b) calculation results (which may or may not be directly related to the markups), and (c) supplemental information. To facilitate the creation of AIM annotations with data entry templates, an AIM template schema and an open-source template creation application were developed to assist clinicians, image researchers, and designers of clinical trials to quickly create a set of data collection items, thereby ultimately making image information more readily accessible.

Deep Learning For Brain Tumor Segmentation

  • Moreno Lopez, Marc
2017 Thesis, cited 393 times
Website

Optimization Methods for Medical Image Super Resolution Reconstruction

  • Moustafa, Marwa
  • Ebied, Hala M
  • Helmy, Ashraf
  • Nazamy, Taymoor M
  • Tolba, Mohamed F
2016 Book Section, cited 0 times
Website

Forschungsanwendungen in der digitalen Radiologie [Research applications in digital radiology]

  • Müller, H
  • Hanbury, A
Der Radiologe 2016 Journal Article, cited 1 times
Website

Automated Brain Lesion Detection and Segmentation Using Magnetic Resonance Images

  • Nabizadeh, Nooshin
2015 Thesis, cited 10 times
Website

Brain tumors detection and segmentation in MR images: Gabor wavelet vs. statistical features

  • Nabizadeh, Nooshin
  • Kubat, Miroslav
Computers & Electrical Engineering 2015 Journal Article, cited 85 times
Website
Automated recognition of brain tumors in magnetic resonance images (MRI) is a difficult procedure owing to the variability and complexity of the location, size, shape, and texture of these lesions. Because of intensity similarities between brain lesions and normal tissues, some approaches make use of multi-spectral anatomical MRI scans. However, the time and cost restrictions for collecting multi-spectral MRI scans and some other difficulties necessitate developing an approach that can detect tumor tissues using a single-spectral anatomical MRI images. In this paper, we present a fully automatic system, which is able to detect slices that include tumor and, to delineate the tumor area. The experimental results on single contrast mechanism demonstrate the efficacy of our proposed technique in successfully segmenting brain tumor tissues with high accuracy and low computational complexity. Moreover, we include a study evaluating the efficacy of statistical features over Gabor wavelet features using several classifiers. This contribution fills the gap in the literature, as is the first to compare these sets of features for tumor segmentation applications. (C) 2015 Elsevier Ltd. All rights reserved.
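A minimal sketch of the Gabor-wavelet feature extraction compared in this study is given below, using scikit-image; the chosen frequencies and orientations are assumptions for illustration rather than the authors' filter bank.

```python
# Minimal sketch of Gabor filter bank feature extraction for MRI slices, one of
# the feature sets compared above (frequencies/orientations are assumptions).
import numpy as np
from skimage.filters import gabor

slice_2d = np.random.rand(128, 128)  # stand-in for a single-spectral MRI slice

features = []
for frequency in (0.1, 0.2, 0.4):
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, imag = gabor(slice_2d, frequency=frequency, theta=theta)
        magnitude = np.hypot(real, imag)
        features.extend([magnitude.mean(), magnitude.std()])

print(len(features), "Gabor features per slice")
```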

Automatic tumor segmentation in single-spectral MRI using a texture-based and contour-based algorithm

  • Nabizadeh, Nooshin
  • Kubat, Miroslav
Expert Systems with Applications 2017 Journal Article, cited 8 times
Website
Automatic detection of brain tumors in single-spectral magnetic resonance images is a challenging task. Existing techniques suffer from inadequate performance, dependence on initial assumptions, and, sometimes, the need for manual interference. The research reported in this paper seeks to reduce some of these shortcomings, and to remove others, achieving satisfactory performance at reasonable computational costs. The success of the system described here is explained by the synergy of the following aspects: (1) a broad choice of high-level features to characterize the image's texture, (2) an efficient mechanism to eliminate less useful features (3) a machine-learning technique to induce a classifier that signals the presence of a tumor-affected tissue, and (4) an improved version of the skippy greedy snake algorithm to outline the tumor's contours. The paper describes the system and reports experiments with synthetic as well as real data. (C) 2017 Elsevier Ltd. All rights reserved.

Advanced 3D printed model of middle cerebral artery aneurysms for neurosurgery simulation

  • Nagassa, Ruth G
  • McMenamin, Paul G
  • Adams, Justin W
  • Quayle, Michelle R
  • Rosenfeld, Jeffrey V
3D Print Med 2019 Journal Article, cited 0 times
Website
BACKGROUND: Neurosurgical residents are finding it more difficult to obtain experience as the primary operator in aneurysm surgery. The present study aimed to replicate patient-derived cranial anatomy, pathology and human tissue properties relevant to cerebral aneurysm intervention through 3D printing and 3D print-driven casting techniques. The final simulator was designed to provide accurate simulation of a human head with a middle cerebral artery (MCA) aneurysm. METHODS: This study utilized living human and cadaver-derived medical imaging data including CT angiography and MRI scans. Computer-aided design (CAD) models and pre-existing computational 3D models were also incorporated in the development of the simulator. The design was based on including anatomical components vital to the surgery of MCA aneurysms while focusing on reproducibility, adaptability and functionality of the simulator. Various methods of 3D printing were utilized for the direct development of anatomical replicas and moulds for casting components that optimized the bio-mimicry and mechanical properties of human tissues. Synthetic materials including various types of silicone and ballistics gelatin were cast in these moulds. A novel technique utilizing water-soluble wax and silicone was used to establish hollow patient-derived cerebrovascular models. RESULTS: A patient-derived 3D aneurysm model was constructed for a MCA aneurysm. Multiple cerebral aneurysm models, patient-derived and CAD, were replicated as hollow high-fidelity models. The final assembled simulator integrated six anatomical components relevant to the treatment of cerebral aneurysms of the Circle of Willis in the left cerebral hemisphere. These included models of the cerebral vasculature, cranial nerves, brain, meninges, skull and skin. The cerebral circulation was modeled through the patient-derived vasculature within the brain model. Linear and volumetric measurements of specific physical modular components were repeated, averaged and compared to the original 3D meshes generated from the medical imaging data. Calculation of the concordance correlation coefficient (rhoc: 90.2%-99.0%) and percentage difference (</=0.4%) confirmed the accuracy of the models. CONCLUSIONS: A multi-disciplinary approach involving 3D printing and casting techniques was used to successfully construct a multi-component cerebral aneurysm surgery simulator. Further study is planned to demonstrate the educational value of the proposed simulator for neurosurgery residents.

Quantitative and Qualitative Evaluation of Convolutional Neural Networks with a Deeper U-Net for Sparse-View Computed Tomography Reconstruction

  • Nakai, H.
  • Nishio, M.
  • Yamashita, R.
  • Ono, A.
  • Nakao, K. K.
  • Fujimoto, K.
  • Togashi, K.
Acad Radiol 2019 Journal Article, cited 0 times
Website
Rationale and Objectives: To evaluate the utility of a convolutional neural network (CNN) with an increased number of contracting and expanding paths of U-net for sparse-view CT reconstruction. Materials and Methods: This study used 60 anonymized chest CT cases from a public database called “The Cancer Imaging Archive”. Eight thousand images from 40 cases were used for training. Eight hundred and 80 images from another 20 cases were used for quantitative and qualitative evaluation, respectively. Sparse-view CT images subsampled by a factor of 20 were simulated, and two CNNs were trained to create denoised images from the sparse-view CT. A CNN based on U-net with residual learning with four contracting and expanding paths (the preceding CNN) was compared with another CNN with eight contracting and expanding paths (the proposed CNN) both quantitatively (peak signal to noise ratio, structural similarity index) and qualitatively (the scores given by two radiologists for anatomical visibility, artifact and noise, and overall image quality) using the Wilcoxon signed-rank test. Nodule and emphysema appearance were also evaluated qualitatively. Results: The proposed CNN was significantly better than the preceding CNN both quantitatively and qualitatively (overall image quality interquartile range, 3.0–3.5 versus 1.0–1.0 reported from the preceding CNN; p < 0.001). However, only 2 of 22 cases used for emphysematous evaluation (2 CNNs for every 11 cases with emphysema) had an average score of ≥ 2 (on a 3-point scale). Conclusion: Increasing contracting and expanding paths may be useful for sparse-view CT reconstruction with CNN. However, poor reproducibility of emphysema appearance should also be noted.
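The quantitative comparison described above (per-image PSNR and SSIM followed by a paired Wilcoxon signed-rank test between the two CNN variants) can be sketched as follows; the arrays are random placeholders standing in for reconstructed and reference CT slices.

```python
# Minimal sketch of the quantitative evaluation: PSNR and SSIM per image, then a
# paired Wilcoxon signed-rank test between two reconstruction methods
# (all arrays below are random placeholders).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from scipy.stats import wilcoxon

reference = np.random.rand(10, 128, 128)                      # full-view reconstructions
output_a = reference + 0.05 * np.random.rand(10, 128, 128)    # e.g. 4-path U-net output
output_b = reference + 0.02 * np.random.rand(10, 128, 128)    # e.g. 8-path U-net output

def scores(recon):
    return np.array([
        [peak_signal_noise_ratio(r, o, data_range=1.0),
         structural_similarity(r, o, data_range=1.0)]
        for r, o in zip(reference, recon)
    ])

psnr_a, ssim_a = scores(output_a).T
psnr_b, ssim_b = scores(output_b).T
print(wilcoxon(ssim_a, ssim_b))   # paired test across the evaluation images
```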

Prediction of malignant glioma grades using contrast-enhanced T1-weighted and T2-weighted magnetic resonance images based on a radiomic analysis

  • Nakamoto, Takahiro
  • Takahashi, Wataru
  • Haga, Akihiro
  • Takahashi, Satoshi
  • Kiryu, Shigeru
  • Nawa, Kanabu
  • Ohta, Takeshi
  • Ozaki, Sho
  • Nozawa, Yuki
  • Tanaka, Shota
  • Mukasa, Akitake
  • Nakagawa, Keiichi
Scientific Reports 2019 Journal Article, cited 0 times
Website
We conducted a feasibility study to predict malignant glioma grades via radiomic analysis using contrast-enhanced T1-weighted magnetic resonance images (CE-T1WIs) and T2-weighted magnetic resonance images (T2WIs). We proposed a framework and applied it to CE-T1WIs and T2WIs (with tumor region data) acquired preoperatively from 157 patients with malignant glioma (grade III: 55, grade IV: 102) as the primary dataset and 67 patients with malignant glioma (grade III: 22, grade IV: 45) as the validation dataset. Radiomic features such as size/shape, intensity, histogram, and texture features were extracted from the tumor regions on the CE-T1WIs and T2WIs. The Wilcoxon-Mann-Whitney (WMW) test and least absolute shrinkage and selection operator logistic regression (LASSO-LR) were employed to select the radiomic features. Various machine learning (ML) algorithms were used to construct prediction models for the malignant glioma grades using the selected radiomic features. Leave-one-out cross-validation (LOOCV) was implemented to evaluate the performance of the prediction models in the primary dataset. The selected radiomic features for all folds in the LOOCV of the primary dataset were used to perform an independent validation. As evaluation indices, accuracies, sensitivities, specificities, and values for the area under receiver operating characteristic curve (or simply the area under the curve (AUC)) for all prediction models were calculated. The mean AUC value for all prediction models constructed by the ML algorithms in the LOOCV of the primary dataset was 0.902 +/- 0.024 (95% CI (confidence interval), 0.873-0.932). In the independent validation, the mean AUC value for all prediction models was 0.747 +/- 0.034 (95% CI, 0.705-0.790). The results of this study suggest that the malignant glioma grades could be sufficiently and easily predicted by preparing the CE-T1WIs, T2WIs, and tumor delineations for each patient. Our proposed framework may be an effective tool for preoperatively grading malignant gliomas.
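A minimal sketch of the LASSO-based feature selection and leave-one-out evaluation described above is given below, using scikit-learn; the feature matrix, labels and regularisation strength are placeholders, not the study's data or settings.

```python
# Minimal sketch: L1-penalised logistic regression for radiomic feature selection,
# evaluated with leave-one-out cross-validation and AUC (placeholder data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(157, 120))      # patients x radiomic features (placeholder values)
y = rng.integers(0, 2, size=157)     # 0 = grade III, 1 = grade IV (placeholder labels)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000),
)

proba = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOOCV AUC:", roc_auc_score(y, proba))

# Features with non-zero coefficients after fitting on all data form the selected set.
model.fit(X, y)
selected = np.flatnonzero(model.named_steps["logisticregression"].coef_[0])
print("selected feature indices:", selected)
```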

Regularized Three-Dimensional Generative Adversarial Nets for Unsupervised Metal Artifact Reduction in Head and Neck CT Images

  • Nakao, Megumi
  • Imanishi, Keiho
  • Ueda, Nobuhiro
  • Imai, Yuichiro
  • Kirita, Tadaaki
  • Matsuda, Tetsuya
IEEE Access 2020 Journal Article, cited 1 times
Website
The reduction of metal artifacts in computed tomography (CT) images, specifically for strong artifacts generated from multiple metal objects, is a challenging issue in medical imaging research. Although there have been some studies on supervised metal artifact reduction through the learning of synthesized artifacts, it is difficult for simulated artifacts to cover the complexity of the real physical phenomena that may be observed in X-ray propagation. In this paper, we introduce metal artifact reduction methods based on an unsupervised volume-to-volume translation learned from clinical CT images. We construct three-dimensional adversarial nets with a regularized loss function designed for metal artifacts from multiple dental fillings. The results of experiments using a CT volume database of 361 patients demonstrate that the proposed framework has an outstanding capacity to reduce strong artifacts and to recover underlying missing voxels, while preserving the anatomical features of soft tissues and tooth structures from the original images.

Classification of brain tumor isocitrate dehydrogenase status using MRI and deep learning

  • Nalawade, S.
  • Murugesan, G. K.
  • Vejdani-Jahromi, M.
  • Fisicaro, R. A.
  • Bangalore Yogananda, C. G.
  • Wagner, B.
  • Mickey, B.
  • Maher, E.
  • Pinho, M. C.
  • Fei, B.
  • Madhuranthakam, A. J.
  • Maldjian, J. A.
J Med Imaging (Bellingham) 2019 Journal Article, cited 0 times
Website
Isocitrate dehydrogenase (IDH) mutation status is an important marker in glioma diagnosis and therapy. We propose an automated pipeline for noninvasively predicting IDH status using deep learning and T2-weighted (T2w) magnetic resonance (MR) images with minimal preprocessing (N4 bias correction and normalization to zero mean and unit variance). T2w MR images and genomic data were obtained from The Cancer Imaging Archive dataset for 260 subjects (120 high-grade and 140 low-grade gliomas). A fully automated two-dimensional densely connected model was trained to classify IDH mutation status on 208 subjects and tested on another held-out set of 52 subjects using fivefold cross validation. Data leakage was avoided by ensuring subject separation during the slice-wise randomization. Mean classification accuracy of 90.5% was achieved for each axial slice in predicting the three classes of no tumor, IDH mutated, and IDH wild type. Test accuracy of 83.8% was achieved in predicting IDH mutation status for individual subjects on the test dataset of 52 subjects. We demonstrate a deep learning method to predict IDH mutation status using T2w MRI alone. Radiologic imaging studies using deep learning methods must address data leakage (subject duplication) in the randomization process to avoid upward bias in the reported classification accuracy.
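The minimal preprocessing mentioned above (N4 bias correction followed by normalisation to zero mean and unit variance) can be sketched with SimpleITK as follows; the file path is a placeholder and the Otsu foreground mask is one common choice rather than the authors' exact pipeline.

```python
# Minimal sketch of T2w preprocessing: N4 bias-field correction followed by
# zero-mean/unit-variance normalisation (placeholder input path).
import SimpleITK as sitk
import numpy as np

image = sitk.ReadImage("t2w_volume.nii.gz", sitk.sitkFloat32)  # placeholder path
mask = sitk.OtsuThreshold(image, 0, 1, 200)                    # rough foreground mask
corrected = sitk.N4BiasFieldCorrection(image, mask)

array = sitk.GetArrayFromImage(corrected)
array = (array - array.mean()) / (array.std() + 1e-8)          # zero mean, unit variance
print(array.shape, array.mean(), array.std())
```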

Image Processing and Classification Techniques for Early Detection of Lung Cancer for Preventive Health Care: A Survey

  • Naresh, Prashant
  • Shettar, Rajashree
Int. J. of Recent Trends in Engineering & Technology 2014 Journal Article, cited 6 times
Website

Reduced lung-cancer mortality with low-dose computed tomographic screening

  • National Lung Screening Trial Research Team
New England Journal of Medicine 2011 Journal Article, cited 4992 times
Website

The national lung screening trial: overview and study design

  • National Lung Screening Trial Research Team
Radiology 2011 Journal Article, cited 760 times
Website

Security of Multi-frame DICOM Images Using XOR Encryption Approach

  • Natsheh, QN
  • Li, B
  • Gale, AG
Procedia Computer Science 2016 Journal Article, cited 4 times
Website

Automatic Classification of Brain MRI Images Using SVM and Neural Network Classifiers

  • Natteshan, NVS
  • Jothi, J Angel Arul
2015 Book Section, cited 8 times
Website

Discrimination of Benign and Malignant Suspicious Breast Tumors Based on Semi-Quantitative DCE-MRI Parameters Employing Support Vector Machine

  • Navaei-Lavasani, Saeedeh
  • Fathi-Kazerooni, Anahita
  • Saligheh-Rad, Hamidreza
  • Gity, Masoumeh
Frontiers in Biomedical Technologies 2015 Journal Article, cited 4 times
Website

Efficacy evaluation of 2D, 3D U-Net semantic segmentation and atlas-based segmentation of normal lungs excluding the trachea and main bronchi

  • Nemoto, Takafumi
  • Futakami, Natsumi
  • Yagi, Masamichi
  • Kumabe, Atsuhiro
  • Takeda, Atsuya
  • Kunieda, Etsuo
  • Shigematsu, Naoyuki
Journal of Radiation Research 2020 Journal Article, cited 0 times
Website
This study aimed to examine the efficacy of semantic segmentation implemented by deep learning and to confirm whether this method is more effective than a commercially dominant auto-segmentation tool with regards to delineating normal lung excluding the trachea and main bronchi. A total of 232 non-small-cell lung cancer cases were examined. The computed tomography (CT) images of these cases were converted from Digital Imaging and Communications in Medicine (DICOM) Radiation Therapy (RT) formats to arrays of 32 x 128 x 128 voxels and input into both 2D and 3D U-Net, which are deep learning networks for semantic segmentation. The number of training, validation and test sets were 160, 40 and 32, respectively. Dice similarity coefficients (DSCs) of the test set were evaluated employing Smart Segmentation Knowledge Based Contouring (Smart segmentation is an atlas-based segmentation tool), as well as the 2D and 3D U-Net. The mean DSCs of the test set were 0.964 [95% confidence interval (CI), 0.960-0.968], 0.990 (95% CI, 0.989-0.992) and 0.990 (95% CI, 0.989-0.991) with Smart segmentation, 2D and 3D U-Net, respectively. Compared with Smart segmentation, both U-Nets presented significantly higher DSCs by the Wilcoxon signed-rank test (P < 0.01). There was no difference in mean DSC between the 2D and 3D U-Net systems. The newly-devised 2D and 3D U-Net approaches were found to be more effective than a commercial auto-segmentation tool. Even the relatively shallow 2D U-Net which does not require high-performance computational resources was effective enough for the lung segmentation. Semantic segmentation using deep learning was useful in radiation treatment planning for lung cancers.
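The Dice similarity coefficient used throughout this evaluation reduces to a few lines of NumPy; a sketch with placeholder binary masks follows.

```python
# Minimal sketch of the Dice similarity coefficient between a predicted
# segmentation and a reference contour (binary masks assumed; placeholder data).
import numpy as np

def dice(pred, truth, eps=1e-8):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

pred = np.random.rand(32, 128, 128) > 0.5    # placeholder predicted mask
truth = np.random.rand(32, 128, 128) > 0.5   # placeholder reference mask
print("DSC:", dice(pred, truth))
```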

Big biomedical image processing hardware acceleration: A case study for K-means and image filtering

  • Neshatpour, Katayoun
  • Koohi, Arezou
  • Farahmand, Farnoud
  • Joshi, Rajiv
  • Rafatirad, Setareh
  • Sasan, Avesta
  • Homayoun, Houman
IEEE International Symposium on Circuits and Systems (ISCAS) 2016 Journal Article, cited 7 times
Website

Multisite concordance of apparent diffusion coefficient measurements across the NCI Quantitative Imaging Network

  • Newitt, David C
  • Malyarenko, Dariya
  • Chenevert, Thomas L
  • Quarles, C Chad
  • Bell, Laura
  • Fedorov, Andriy
  • Fennessy, Fiona
  • Jacobs, Michael A
  • Solaiyappan, Meiyappan
  • Hectors, Stefanie
  • Taouli, B.
  • Muzi, M.
  • Kinahan, P. E. E.
  • Schmainda, K. M.
  • Prah, M. A.
  • Taber, E. N.
  • Kroenke, C.
  • Huang, W.
  • Arlinghaus, L.
  • Yankeelov, T. E.
  • Cao, Y.
  • Aryal, M.
  • Yen, Y.-F.
  • Kalpathy-Cramer, J.
  • Shukla-Dave, A.
  • Fung, M.
  • Liang, J.
  • Boss, M.
  • Hylton, N.
Journal of Medical Imaging 2017 Journal Article, cited 6 times
Website

Efficient Colorization of Medical Imaging based on Colour Transfer Method

  • Nida, Nudrat
  • Khan, Muhammad Usman Ghani
2016 Journal Article, cited 0 times
Website

A Framework for Automatic Colorization of Medical Imaging

  • Nida, Nudrat
  • Sharif, Muhammad
  • Khan, Muhammad Usman Ghani
  • Yasmin, Mussarat
  • Fernandes, Steven Lawrence
IIOABJ 2016 Journal Article, cited 3 times
Website

Homological radiomics analysis for prognostic prediction in lung cancer patients

  • Ninomiya, Kenta
  • Arimura, Hidetaka
Physica Medica 2020 Journal Article, cited 0 times
Website

Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization

  • Nishio, Mizuho
  • Nishizawa, Mitsuo
  • Sugiyama, Osamu
  • Kojima, Ryosuke
  • Yakami, Masahiro
  • Kuroda, Tomohiro
  • Togashi, Kaori
PLoS One 2018 Journal Article, cited 3 times
Website

Segmentation of lung from CT using various active contour models

  • Nithila, Ezhil E
  • Kumar, SS
Biomedical Signal Processing and Control 2018 Journal Article, cited 0 times
Website

Image descriptors in radiology images: a systematic review

  • Nogueira, Mariana A
  • Abreu, Pedro Henriques
  • Martins, Pedro
  • Machado, Penousal
  • Duarte, Hugo
  • Santos, João
Artificial Intelligence Review 2016 Journal Article, cited 8 times
Website

Modified fast adaptive scatter kernel superposition (mfASKS) correction and its dosimetric impact on CBCT‐based proton therapy dose calculation

  • Nomura, Yusuke
  • Xu, Qiong
  • Peng, Hao
  • Takao, Seishin
  • Shimizu, Shinichi
  • Xing, Lei
  • Shirato, Hiroki
Medical physics 2020 Journal Article, cited 0 times
Website

Projection-domain scatter correction for cone beam computed tomography using a residual convolutional neural network

  • Nomura, Yusuke
  • Xu, Qiong
  • Shirato, Hiroki
  • Shimizu, Shinichi
  • Xing, Lei
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: Scatter is a major factor degrading the image quality of cone beam computed tomography (CBCT). Conventional scatter correction strategies require handcrafted analytical models with ad hoc assumptions, which often leads to less accurate scatter removal. This study aims to develop an effective scatter correction method using a residual convolutional neural network (CNN). METHODS: A U-net based 25-layer CNN was constructed for CBCT scatter correction. The establishment of the model consists of three steps: model training, validation, and testing. For model training, a total of 1800 pairs of x-ray projection and the corresponding scatter-only distribution in nonanthropomorphic phantoms taken in full-fan scan were generated using Monte Carlo simulation of a CBCT scanner installed with a proton therapy system. An end-to-end CNN training was implemented with two major loss functions for 100 epochs with a mini-batch size of 10. Image rotations and flips were randomly applied to augment the training datasets during training. For validation, 200 projections of a digital head phantom were collected. The proposed CNN-based method was compared to a conventional projection-domain scatter correction method named fast adaptive scatter kernel superposition (fASKS) method using 360 projections of an anthropomorphic head phantom. Two different loss functions were applied for the same CNN to evaluate the impact of loss functions on the final results. Furthermore, the CNN model trained with full-fan projections was fine-tuned for scatter correction in half-fan scan by using transfer learning with additional 360 half-fan projection pairs of nonanthropomorphic phantoms. The tuned-CNN model for half-fan scan was compared with the fASKS method as well as the CNN-based method without the fine-tuning using additional lung phantom projections. RESULTS: The CNN-based method provides projections with significantly reduced scatter and CBCT images with more accurate Hounsfield Units (HUs) than that of the fASKS-based method. Root mean squared error of the CNN-corrected projections was improved to 0.0862 compared to 0.278 for uncorrected projections or 0.117 for the fASKS-corrected projections. The CNN-corrected reconstruction provided better HU quantification, especially in regions near the air or bone interfaces. All four image quality measures, which include mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM), indicated that the CNN-corrected images were significantly better than that of the fASKS-corrected images. Moreover, the proposed transfer learning technique made it possible for the CNN model trained with full-fan projections to be applicable to remove scatters in half-fan projections after fine-tuning with only a small number of additional half-fan training datasets. SSIM value of the tuned-CNN-corrected images was 0.9993 compared to 0.9984 for the non-tuned-CNN-corrected images or 0.9990 for the fASKS-corrected images. Finally, the CNN-based method is computationally efficient - the correction time for the 360 projections only took less than 5 s in the reported experiments on a PC (4.20 GHz Intel Core-i7 CPU) with a single NVIDIA GTX 1070 GPU. CONCLUSIONS: The proposed deep learning-based method provides an effective tool for CBCT scatter correction and holds significant value for quantitative imaging and image-guided radiation therapy.

Reproducibility of radiomic features using network analysis and its application in Wasserstein k-means clustering

  • Oh, Jung Hun
  • Apte, Aditya P.
  • Katsoulakis, Evangelia
  • Riaz, Nadeem
  • Hatzoglou, Vaios
  • Yu, Yao
  • Mahmood, Usman
  • Veeraraghavan, Harini
  • Pouryahya, Maryam
  • Iyer, Aditi
  • Shukla-Dave, Amita
  • Tannenbaum, Allen
  • Lee, Nancy Y.
  • Deasy, Joseph O.
Journal of Medical Imaging 2021 Journal Article, cited 0 times
Website
Purpose: The goal of this study is to develop innovative methods for identifying radiomic features that are reproducible over varying image acquisition settings. Approach: We propose a regularized partial correlation network to identify reliable and reproducible radiomic features. This approach was tested on two radiomic feature sets generated using two different reconstruction methods on computed tomography (CT) scans from a cohort of 47 lung cancer patients. The largest common network component between the two networks was tested on phantom data consisting of five cancer samples. To further investigate whether radiomic features found can identify phenotypes, we propose a k-means clustering algorithm coupled with the optimal mass transport theory. This approach following the regularized partial correlation network analysis was tested on CT scans from 77 head and neck squamous cell carcinoma (HNSCC) patients in the Cancer Imaging Archive (TCIA) and validated using an independent dataset. Results: A set of common radiomic features was found in relatively large network components between the resultant two partial correlation networks resulting from a cohort of lung cancer patients. The reliability and reproducibility of those radiomic features were further validated on phantom data using the Wasserstein distance. Further analysis using the network-based Wasserstein k-means algorithm on the TCIA HNSCC data showed that the resulting clusters separate tumor subsites as well as HPV status, and this was validated on an independent dataset. Conclusion: We showed that a network-based analysis enables identifying reproducible radiomic features and use of the selected set of features can enhance clustering results.
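One standard way to build a regularised partial correlation network of the kind described above is via the graphical lasso; the sketch below (with a random placeholder feature matrix and an arbitrary edge threshold) shows the idea without claiming to reproduce the authors' exact regularisation scheme.

```python
# Illustrative sketch of a regularised partial correlation network over radiomic
# features using the graphical lasso (placeholder data; not the authors' method).
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(47, 30))            # patients x radiomic features (placeholder)

model = GraphicalLassoCV().fit(X)
precision = model.precision_

# Partial correlation between features i and j from the precision matrix.
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Network edges: feature pairs with non-negligible partial correlation
# (the 0.05 threshold is an arbitrary illustrative choice).
edges = np.argwhere(np.triu(np.abs(partial_corr) > 0.05, k=1))
print(len(edges), "edges in the partial correlation network")
```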

Memory-efficient 3D connected component labeling with parallel computing

  • Ohira, Norihiro
Signal, Image and Video Processing 2017 Journal Article, cited 0 times
Website

Optothermal tissue response for laser-based investigation of thyroid cancer

  • Okebiorun, Michael O.
  • ElGohary, Sherif H.
Informatics in Medicine Unlocked 2020 Journal Article, cited 0 times
Website
To characterize thyroid cancer imaging-based detection, we implemented a simulation of the optical and thermal response in an optical investigation of thyroid cancer. We employed the 3D Monte Carlo method and the bio-heat equation to determine the fluence and temperature distribution via the Molecular Optical Simulation Environment (MOSE) with a Finite element (FE) simulator. The optothermal effect of a neck surface-based source is also compared to a trachea-based source. Results show fluence and temperature distribution in a realistic 3D neck model with both endogenous and hypothetical tissue-specific exogenous contrast agents. It also reveals that the trachea illumination has a factor of ten better absorption and temperature change than the neck-surface illumination, and tumor-specific exogenous contrast agents have a relatively higher absorption and temperature change in the tumors, which could be assistive to clinicians and researchers to improve and better understand the region's response to laser-based diagnosis.

Uma Proposta Para Utilização De Workflows Científicos Para A Definição De Pipelines Para A Recuperação De Imagens Médicas Por Conteúdo Em Um Ambiente Distribuído [A proposal for using scientific workflows to define pipelines for content-based medical image retrieval in a distributed environment]

  • Oliveira, Luis Fernando Milano
2016 Thesis, cited 1 times
Website

Image segmentation on GPGPUs: a cellular automata-based approach

  • Olmedo, Irving
  • Perez, Yessika Guerra
  • Johnson, James F
  • Raut, Lakshman
  • Hoe, David HK
2013 Conference Proceedings, cited 0 times
Website

A Neuro-Fuzzy Based System for the Classification of Cells as Cancerous or Non-Cancerous

  • Omotosho, Adebayo
  • Oluwatobi, Asani Emmanuel
  • Oluwaseun, Ogundokun Roseline
  • Chukwuka, Ananti Emmanuel
  • Adekanmi, Adegun
International Journal of Medical Research & Health Sciences 2018 Journal Article, cited 0 times
Website

Automatic localization of target vertebrae in spine surgery using fast CT-to-fluoroscopy (3D-2D) image registration

  • Otake, Y
  • Schafer, S
  • Stayman, JW
  • Zbijewski, W
  • Kleinszig, G
  • Graumann, R
  • Khanna, AJ
  • Siewerdsen, JH
2012 Conference Proceedings, cited 8 times
Website

Medical image retrieval using hybrid wavelet network classifier

  • Othman, Sufri
  • Jemai, Olfa
  • Zaied, Mourad
  • Ben Amar, Chokri
2014 Conference Proceedings, cited 3 times
Website

Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence

  • Owais, Muhammad
  • Arsalan, Muhammad
  • Choi, Jiho
  • Park, Kang Ryoung
J Clin Med 2019 Journal Article, cited 0 times
Website
Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. However, this problem can be resolved by exploring similar cases in the previous medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. Recently, a medical doctor usually refers to various types of imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound of various organs, for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for the CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance for a massive collection of multimodal databases. Although there are a few previous studies on the use of deep features for classification, the number of classes is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various types of imaging modalities by using the technique of artificial intelligence, named an enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1-score by our method are respectively 81.51% and 82.42%, which are higher than those by the previous method of CBMIR (accuracy of 69.71% and F1-score of 69.63%).
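As background to the deep-feature retrieval idea above, the sketch below uses a stock pretrained ResNet-50 from torchvision (not the enhanced ResNet proposed in the paper) as a feature extractor and ranks a placeholder image database by cosine similarity; it assumes torchvision >= 0.13 for the weights API.

```python
# Illustrative sketch: pretrained CNN features + cosine-similarity ranking for
# content-based retrieval. NOT the paper's enhanced ResNet; images are placeholders.
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # keep the 2048-d penultimate features
backbone.eval()

database = torch.rand(16, 3, 224, 224)     # placeholder image database
query = torch.rand(1, 3, 224, 224)         # placeholder query image

with torch.no_grad():
    db_feats = F.normalize(backbone(database), dim=1)
    q_feat = F.normalize(backbone(query), dim=1)

similarity = (db_feats @ q_feat.T).squeeze(1)   # cosine similarity per database image
ranking = similarity.argsort(descending=True)
print("top-5 retrieved indices:", ranking[:5].tolist())
```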

3D Pulmonary Nodules Detection Using Fast Marching Segmentation

  • Paing, MP
  • Choomchuay, S
Journal of Fundamental and Applied Sciences 2017 Journal Article, cited 1 times
Website

Compressibility variations of JPEG2000 compressed computed tomography

  • Pambrun, Jean-Francois
  • Noumeir, Rita
2013 Conference Proceedings, cited 3 times
Website

A novel fused convolutional neural network for biomedical image classification

  • Pang, Shuchao
  • Du, Anan
  • Orgun, Mehmet A
  • Yu, Zhezhou
Medical & biological engineering & computing 2018 Journal Article, cited 0 times
Website

A Novel Biomedical Image Indexing and Retrieval System via Deep Preference Learning

  • Pang, Shuchao
  • Orgun, MA
  • Yu, Zhezhou
Computer methods and programs in biomedicine 2018 Journal Article, cited 4 times
Website
BACKGROUND AND OBJECTIVES: The traditional biomedical image retrieval methods as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images either only consider using pixel and low-level features to describe an image or use deep features to describe images but still leave a lot of room for improving both accuracy and efficiency. In this work, we propose a new approach, which exploits deep learning technology to extract the high-level and compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to an improved performance for indexing and retrieval of biomedical images. METHODS: We exploit the current popular and multi-layered deep neural networks, namely, stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN) to represent the discriminative features of biomedical images by transferring the feature representations and parameters of pre-trained deep neural networks from another domain. Moreover, in order to index all the images for finding the similarly referenced images, we also introduce preference learning technology to train and learn a kind of a preference model for the query image, which can output the similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology for the first time into biomedical image retrieval. RESULTS: We evaluate the performance of two powerful algorithms based on our proposed system and compare them with those of popular biomedical image indexing approaches and existing regular image retrieval methods with detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, experimental results demonstrate that our proposed algorithms outperform the state-of-the-art techniques in indexing biomedical images. CONCLUSIONS: We propose a novel and automated indexing system based on deep preference learning to characterize biomedical images for developing computer aided diagnosis (CAD) systems in healthcare. Our proposed system shows an outstanding indexing ability and high efficiency for biomedical image retrieval applications and it can be used to collect and annotate the high-resolution images in a biomedical database for further biomedical image research and applications.

Deep learning for segmentation of brain tumors: Can we train with images from different institutions?

  • Paredes, David
  • Saha, Ashirbani
  • Mazurowski, Maciej A
2017 Conference Proceedings, cited 2 times
Website

Content dependent intra mode selection for medical image compression using HEVC

  • Parikh, S
  • Ruiz, D
  • Kalva, H
  • Fern, G
2016 Conference Proceedings, cited 3 times
Website

Influence of Contrast Administration on Computed Tomography–Based Analysis of Visceral Adipose and Skeletal Muscle Tissue in Clear Cell Renal Cell Carcinoma

  • Paris, Michael T
  • Furberg, Helena F
  • Petruzella, Stacey
  • Akin, Oguz
  • Hötker, Andreas M
  • Mourtzakis, Marina
Journal of Parenteral and Enteral Nutrition 2018 Journal Article, cited 0 times
Website

Automated Facial Recognition of Computed Tomography-Derived Facial Images: Patient Privacy Implications

  • Parks, Connie L
  • Monson, Keith L
Journal of Digital Imaging 2016 Journal Article, cited 3 times
Website

Machine learning applications for Radiomics: towards robust non-invasive predictors in clinical oncology

  • Parmar, Chintan
2017 Thesis, cited 1 times
Website

Machine Learning methods for Quantitative Radiomic Biomarkers

  • Parmar, C.
  • Grossmann, P.
  • Bussink, J.
  • Lambin, P.
  • Aerts, H. J.
Scientific Reports 2015 Journal Article, cited 178 times
Website
Radiomics extracts and mines large number of medical imaging features quantifying tumor phenotypic characteristics. Highly accurate and reliable machine-learning approaches can drive the success of radiomic applications in clinical care. In this radiomic study, fourteen feature selection methods and twelve classification methods were examined in terms of their performance and stability for predicting overall survival. A total of 440 radiomic features were extracted from pre-treatment computed tomography (CT) images of 464 lung cancer patients. To ensure the unbiased evaluation of different machine-learning methods, publicly available implementations along with reported parameter configurations were used. Furthermore, we used two independent radiomic cohorts for training (n = 310 patients) and validation (n = 154 patients). We identified that Wilcoxon test based feature selection method WLCX (stability = 0.84 +/- 0.05, AUC = 0.65 +/- 0.02) and a classification method random forest RF (RSD = 3.52%, AUC = 0.66 +/- 0.03) had highest prognostic performance with high stability against data perturbation. Our variability analysis indicated that the choice of classification method is the most dominant source of performance variation (34.21% of total variance). Identification of optimal machine-learning methods for radiomic applications is a crucial step towards stable and clinically relevant radiomic biomarkers, providing a non-invasive way of quantifying and monitoring tumor-phenotypic characteristics in clinical practice.
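The best-performing combination reported above (Wilcoxon-test feature selection followed by a random forest classifier) can be sketched as follows; the data are random placeholders and, unlike a rigorous analysis, the selection here is done once on the full data rather than nested inside the cross-validation.

```python
# Minimal sketch of Wilcoxon rank-sum feature selection followed by a random
# forest classifier (placeholder data; for an unbiased estimate the selection
# should be nested inside the cross-validation).
import numpy as np
from scipy.stats import ranksums
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(310, 440))     # training cohort: patients x radiomic features
y = rng.integers(0, 2, size=310)    # dichotomised survival label (placeholder)

p_values = np.array([ranksums(X[y == 0, j], X[y == 1, j]).pvalue
                     for j in range(X.shape[1])])
top = np.argsort(p_values)[:30]     # keep the 30 most discriminative features

clf = RandomForestClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(clf, X[:, top], y, cv=5, scoring="roc_auc").mean()
print("cross-validated AUC:", auc)
```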

Radiomic feature clusters and Prognostic Signatures specific for Lung and Head & Neck cancer

  • Parmar, C.
  • Leijenaar, R. T.
  • Grossmann, P.
  • Rios Velazquez, E.
  • Bussink, J.
  • Rietveld, D.
  • Rietbergen, M. M.
  • Haibe-Kains, B.
  • Lambin, P.
  • Aerts, H. J.
Scientific Reports 2015 Journal Article, cited 0 times
Radiomics provides a comprehensive quantification of tumor phenotypes by extracting and mining a large number of quantitative image features. To reduce the redundancy and compare the prognostic characteristics of radiomic features across cancer types, we investigated cancer-specific radiomic feature clusters in four independent Lung and Head & Neck (H&N) cancer cohorts (in total 878 patients). Radiomic features were extracted from the pre-treatment computed tomography (CT) images. Consensus clustering resulted in eleven and thirteen stable radiomic feature clusters for Lung and H&N cancer, respectively. These clusters were validated in independent external validation cohorts using the Rand statistic (Lung RS = 0.92, p < 0.001, H&N RS = 0.92, p < 0.001). Our analysis indicated both common as well as cancer-specific clustering and clinical associations of radiomic features. Strongest associations with clinical parameters: Prognosis Lung CI = 0.60 +/- 0.01, Prognosis H&N CI = 0.68 +/- 0.01; Lung histology AUC = 0.56 +/- 0.03, Lung stage AUC = 0.61 +/- 0.01, H&N HPV AUC = 0.58 +/- 0.03, H&N stage AUC = 0.77 +/- 0.02. Full utilization of these cancer-specific characteristics of image features may further improve radiomic biomarkers, providing a non-invasive way of quantifying and monitoring tumor phenotypic characteristics in clinical practice.

Fast and robust methods for non-rigid registration of medical images

  • Parraguez, Stefan Philippo Pszczolkowski
2014 Thesis, cited 1 times
Website

Multimodal Retrieval Framework for Brain Volumes in 3D MR Volumes

  • ParthaSarathi, Mangipudi
  • Ansari, Mohammad Ahmad
Journal of Medical and Biological Engineering 2017 Journal Article, cited 1 times
Website
The paper presents a retrieval framework for extracting similar 3D tumor volumes in magnetic resonance brain volumes in response to a query tumor volume. Similar volumes correspond to closeness in spatial location of the brain structures. The query slice pertains to a new tumor volume of a patient, and the output slices belong to the tumor volumes related to previous case histories stored in the database. The framework could be of immense help to medical practitioners. It might prove to be a useful diagnostic aid for the medical expert and also serve as a teaching aid for researchers.

Decorin Expression Is Associated With Diffusion MR Phenotypes in Glioblastoma

  • Patel, Kunal S.
  • Raymond, Catalina
  • Yao, Jingwen
  • Tsung, Joseph
  • Liau, Linda M.
  • Everson, Richard
  • Cloughesy, Timothy F.
  • Ellingson, Benjamin
Neurosurgery 2019 Journal Article, cited 0 times
INTRODUCTION: Significant evidence from multiple phase II trials has suggested diffusion-weighted imaging estimates of apparent diffusion coefficient (ADC) are a predictive imaging biomarker for survival benefit for recurrent glioblastoma when treated with anti-VEGF therapies, including bevacizumab, cediranib, and cabozantinib. Despite this observation, the underlying mechanism linking anti-VEGF therapeutic efficacy with diffusion MR characteristics remains unknown. We hypothesized that a high expression of decorin, a small proteoglycan that has been associated with sequestration of pro-angiogenic signaling as well as reduction in the viscosity of the extracellular environment, may be associated with elevated ADC. METHODS: A differential gene expression analysis was carried out in human glioblastoma samples from patients in whom preoperative diffusion imaging was obtained. ADC histogram analysis was carried out to calculate preoperative ADCL values, the average ADC in the lower distribution using a double Gaussian mixed model. The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA) databases were queried to identify diffusion imaging and levels of decorin protein expression. Patients with recurrent glioblastoma who undergo resection prospectively had targeted biopsies based on the ADC analysis collected. These samples were stained for decorin and quantified using whole-slide image analysis software. RESULTS: Differential gene expression analysis between tumors associated with high and low preoperative ADCL showed that patients with high ADCL had increased decorin gene expression. Patients from the TCGA database with elevated ADCL had a significantly higher level of decorin gene expression (P = .01). These patients had a survival advantage with a log-rank analysis (P = .002). Patients with preoperative diffusion imaging had multiple targeted intraoperative biopsies stained for decorin. Patients with high ADCL had increased decorin expression on immunohistochemistry (P = .002). CONCLUSION: Increased ADCL on diffusion MR imaging is associated with high decorin expression as well as increased survival in glioblastoma. Decorin may play an important role in the imaging features on diffusion MR and in anti-VEGF treatment efficacy. Decorin expression may serve as a future therapeutic target in patients with favorable diffusion MR characteristics.

Swift Pre Rendering Volumetric Visualization of Magnetic Resonance Cardiac Images based on Isosurface Technique

  • Patel, Nikhilkumar P
  • Parmar, Shankar K
  • Jain, Kavindra R
Procedia Technology 2014 Journal Article, cited 0 times
Website

An Approach Toward Automatic Classification of Tumor Histopathology of Non–Small Cell Lung Cancer Based on Radiomic Features

  • Patil, Ravindra
  • Mahadevaiah, Geetha
  • Dekker, Andre
Tomography: a journal for imaging research 2016 Journal Article, cited 2 times
Website

Lung cancer incidence and mortality in National Lung Screening Trial participants who underwent low-dose CT prevalence screening: a retrospective cohort analysis of a randomised, multicentre, diagnostic screening trial

  • Patz Jr, Edward F
  • Greco, Erin
  • Gatsonis, Constantine
  • Pinsky, Paul
  • Kramer, Barnett S
  • Aberle, Denise R
The Lancet Oncology 2016 Journal Article, cited 67 times
Website

Semantic imaging features predict disease progression and survival in glioblastoma multiforme patients

  • Peeken, J. C.
  • Hesse, J.
  • Haller, B.
  • Kessel, K. A.
  • Nusslin, F.
  • Combs, S. E.
Strahlenther Onkol 2018 Journal Article, cited 1 times
Website
BACKGROUND: For glioblastoma (GBM), multiple prognostic factors have been identified. Semantic imaging features were shown to be predictive for survival prediction. No similar data have been generated for the prediction of progression. The aim of this study was to assess the predictive value of the semantic visually accessable REMBRANDT [repository for molecular brain neoplasia data] images (VASARI) imaging feature set for progression and survival, and the creation of joint prognostic models in combination with clinical and pathological information. METHODS: 189 patients were retrospectively analyzed. Age, Karnofsky performance status, gender, and MGMT promoter methylation and IDH mutation status were assessed. VASARI features were determined on pre- and postoperative MRIs. Predictive potential was assessed with univariate analyses and Kaplan-Meier survival curves. Following variable selection and resampling, multivariate Cox regression models were created. Predictive performance was tested on patient test sets and compared between groups. The frequency of selection for single variables and variable pairs was determined. RESULTS: For progression free survival (PFS) and overall survival (OS), univariate significant associations were shown for 9 and 10 VASARI features, respectively. Multivariate models yielded concordance indices significantly different from random for the clinical, imaging, combined, and combined+ MGMT models of 0.657, 0.636, 0.694, and 0.716 for OS, and 0.602, 0.604, 0.633, and 0.643 for PFS. "Multilocality," "deep white-matter invasion," "satellites," and "ependymal invasion" were over proportionally selected for multivariate model generation, underlining their importance. CONCLUSIONS: We demonstrated a predictive value of several qualitative imaging features for progression and survival. The performance of prognostic models was increased by combining clinical, pathological, and imaging features.

Auto Diagnostics of Lung Nodules Using Minimal Characteristics Extraction Technique

  • Peña, Diego M
  • Luo, Shouhua
  • Abdelgader, Abdeldime
Diagnostics 2016 Journal Article, cited 6 times
Website

Deep multi-modality collaborative learning for distant metastases predication in PET-CT soft-tissue sarcoma studies

  • Peng, Yige
  • Bi, Lei
  • Guo, Yuyu
  • Feng, Dagan
  • Fulham, Michael
  • Kim, Jinman
2019 Conference Proceedings, cited 0 times

A method of rapid quantification of patient-specific organ doses for CT using deep-learning-based multi-organ segmentation and GPU-accelerated Monte Carlo dose computing

  • Peng, Z.
  • Fang, X.
  • Yan, P.
  • Shan, H.
  • Liu, T.
  • Pei, X.
  • Wang, G.
  • Liu, B.
  • Kalra, M. K.
  • Xu, X. G.
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: One technical barrier to patient-specific computed tomography (CT) dosimetry has been the lack of computational tools for the automatic patient-specific multi-organ segmentation of CT images and rapid organ dose quantification. When previous CT images are available for the same body region of the patient, the ability to obtain patient-specific organ doses for CT - in a similar manner as radiation therapy treatment planning - will open the door to personalized and prospective CT scan protocols. This study aims to demonstrate the feasibility of combining deep-learning algorithms for automatic segmentation of multiple radiosensitive organs from CT images with the GPU-based Monte Carlo rapid organ dose calculation. METHODS: A deep convolutional neural network (CNN) based on the U-Net for organ segmentation is developed and trained to automatically delineate multiple radiosensitive organs from CT images. Two databases are used: The lung CT segmentation challenge 2017 (LCTSC) dataset that contains 60 thoracic CT scan patients, each consisting of five segmented organs, and the Pancreas-CT (PCT) dataset, which contains 43 abdominal CT scan patients each consisting of eight segmented organs. A fivefold cross-validation method is performed on both sets of data. Dice similarity coefficients (DSCs) are used to evaluate the segmentation performance against the ground truth. A GPU-based Monte Carlo dose code, ARCHER, is used to calculate patient-specific CT organ doses. The proposed method is evaluated in terms of relative dose errors (RDEs). To demonstrate the potential improvement of the new method, organ dose results are compared against those obtained for population-average patient phantoms used in an off-line dose reporting software, VirtualDose, at Massachusetts General Hospital. RESULTS: The median DSCs are found to be 0.97 (right lung), 0.96 (left lung), 0.92 (heart), 0.86 (spinal cord), 0.76 (esophagus) for the LCTSC dataset, along with 0.96 (spleen), 0.96 (liver), 0.95 (left kidney), 0.90 (stomach), 0.87 (gall bladder), 0.80 (pancreas), 0.75 (esophagus), and 0.61 (duodenum) for the PCT dataset. Comparing with organ dose results from population-averaged phantoms, the new patient-specific method achieved smaller absolute RDEs (mean +/- standard deviation) for all organs: 1.8% +/- 1.4% (vs 16.0% +/- 11.8%) for the lung, 0.8% +/- 0.7% (vs 34.0% +/- 31.1%) for the heart, 1.6% +/- 1.7% (vs 45.7% +/- 29.3%) for the esophagus, 0.6% +/- 1.2% (vs 15.8% +/- 12.7%) for the spleen, 1.2% +/- 1.0% (vs 18.1% +/- 15.7%) for the pancreas, 0.9% +/- 0.6% (vs 20.0% +/- 15.2%) for the left kidney, 1.7% +/- 3.1% (vs 19.1% +/- 9.8%) for the gallbladder, 0.3% +/- 0.3% (vs 24.2% +/- 18.7%) for the liver, and 1.6% +/- 1.7% (vs 19.3% +/- 13.6%) for the stomach. The trained automatic segmentation tool takes <5 s per patient for all 103 patients in the dataset. The Monte Carlo radiation dose calculations performed in parallel to the segmentation process using the GPU-accelerated ARCHER code take <4 s per patient to achieve <0.5% statistical uncertainty in all organ doses for all 103 patients in the database. CONCLUSION: This work shows the feasibility to perform combined automatic patient-specific multi-organ segmentation of CT images and rapid GPU-based Monte Carlo dose quantification with clinically acceptable accuracy and efficiency.
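
Two of the evaluation quantities reported above, the Dice similarity coefficient and the relative dose error, are simple to compute. The sketch below is a minimal NumPy illustration with toy masks and dose values; it is not the U-Net or ARCHER code itself.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def relative_dose_error(dose_estimate, dose_reference):
    """Relative dose error (percent) between an organ dose estimate and a reference."""
    return 100.0 * (dose_estimate - dose_reference) / dose_reference

# Tiny illustrative example
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
true = np.zeros((4, 4), dtype=bool); true[1:3, 1:4] = True
print(round(dice_coefficient(pred, true), 3), round(relative_dose_error(10.2, 10.0), 1))
```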

Efficient CT Image Reconstruction in a GPU Parallel Environment

  • Pérez, Tomás A Valencia
  • López, Javier M Hernández
  • Moreno-Barbosa, Eduardo
  • de Celis Alonso, Benito
  • Merino, Martín R Palomino
  • Meneses, Victor M Castaño
Tomography 2020 Journal Article, cited 0 times

Peritumoral and intratumoral radiomic features predict survival outcomes among patients diagnosed in lung cancer screening

  • Perez-Morales, J.
  • Tunali, I.
  • Stringfield, O.
  • Eschrich, S. A.
  • Balagurunathan, Y.
  • Gillies, R. J.
  • Schabath, M. B.
Scientific Reports 2020 Journal Article, cited 0 times
Website
The National Lung Screening Trial (NLST) demonstrated that screening with low-dose computed tomography (LDCT) is associated with a 20% reduction in lung cancer mortality. One potential limitation of LDCT screening is overdiagnosis of slow growing and indolent cancers. In this study, peritumoral and intratumoral radiomics was used to identify a vulnerable subset of lung patients associated with poor survival outcomes. Incident lung cancer patients from the NLST were split into training and test cohorts and an external cohort of non-screen detected adenocarcinomas was used for further validation. After removing redundant and non-reproducible radiomics features, backward elimination analyses identified a single model which was subjected to Classification and Regression Tree to stratify patients into three risk-groups based on two radiomics features (NGTDM Busyness and Statistical Root Mean Square [RMS]). The final model was validated in the test cohort and the cohort of non-screen detected adenocarcinomas. Using a radio-genomics dataset, Statistical RMS was significantly associated with FOXF2 gene by both correlation and two-group analyses. Our rigorous approach generated a novel radiomics model that identified a vulnerable high-risk group of early stage patients associated with poor outcomes. These patients may require aggressive follow-up and/or adjuvant therapy to mitigate their poor outcomes.
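
The risk-group stratification step described above (a Classification and Regression Tree over two radiomic features) can be sketched as follows; the scikit-learn tree, the three-leaf constraint, and the synthetic feature values are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-ins for the two radiomic features named in the abstract
rng = np.random.default_rng(2)
n = 200
X = np.column_stack([rng.normal(0, 1, n),    # NGTDM Busyness (synthetic values)
                     rng.normal(0, 1, n)])   # Statistical RMS (synthetic values)
died_within_followup = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n)) > 0.5

# A CART-style tree constrained to three leaves -> three risk groups
tree = DecisionTreeClassifier(max_leaf_nodes=3, random_state=0)
tree.fit(X, died_within_followup)

# Each leaf defines a risk group; the predicted event probability orders the groups
risk_group = tree.apply(X)                # leaf index per patient
event_prob = tree.predict_proba(X)[:, 1]  # event probability per patient
for leaf in np.unique(risk_group):
    print(leaf, round(event_prob[risk_group == leaf].mean(), 2))
```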

Prediction of lung cancer incidence on the low-dose computed tomography arm of the National Lung Screening Trial: A dynamic Bayesian network

  • Petousis, Panayiotis
  • Han, Simon X
  • Aberle, Denise
  • Bui, Alex AT
Artificial intelligence in medicine 2016 Journal Article, cited 13 times
Website

Texture classification of lung computed tomography images

  • Pheng, Hang See
  • Shamsuddin, Siti M
2013 Conference Proceedings, cited 2 times
Website

Accuracy of emphysema quantification performed with reduced numbers of CT sections

  • Pilgram, Thomas K
  • Quirk, James D
  • Bierhals, Andrew J
  • Yusen, Roger D
  • Lefrak, Stephen S
  • Cooper, Joel D
  • Gierada, David S
American Journal of Roentgenology 2010 Journal Article, cited 8 times
Website

Precision Medicine and Radiogenomics in Breast Cancer: New Approaches toward Diagnosis and Treatment

  • Pinker, Katja
  • Chin, Joanne
  • Melsaether, Amy N
  • Morris, Elizabeth A
  • Moy, Linda
Radiology 2018 Journal Article, cited 7 times
Website

ROC curves for low-dose CT in the National Lung Screening Trial

  • Pinsky, P. F.
  • Gierada, D. S.
  • Nath, H.
  • Kazerooni, E. A.
  • Amorosa, J.
Journal of Medical Screening 2013 Journal Article, cited 4 times
Website
The National Lung Screening Trial (NLST) reported a 20% reduction in lung cancer specific mortality using low-dose chest CT (LDCT) compared with chest radiograph (CXR) screening. The high number of false positive screens with LDCT (around 25%) raises concerns. NLST radiologists reported LDCT screens as either positive or not positive, based primarily on the presence of a 4+ mm non-calcified lung nodule (NCN). They did not explicitly record a propensity score for lung cancer. However, by using maximum NCN size, or alternatively, radiologists' recommendations for diagnostic follow-up categorized hierarchically, surrogate propensity scores (PSSZ and PSFR) were created. These scores were then used to compute ROC curves, which determine possible operating points of sensitivity versus false positive rate (1-Specificity). The area under the ROC curve (AUC) was 0.934 and 0.928 for PSFR and PSSZ, respectively; the former was significantly greater than the latter. With the NLST definition of a positive screen, sensitivity and specificity of LDCT was 93.1% and 76.5%, respectively. With cutoffs based on PSFR, a specificity of 92.4% could be achieved while only lowering sensitivity to 86.9%. Radiologists using LDCT have good predictive ability; the optimal operating point for sensitivity and specificity remains to be determined.
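
The ROC analysis described above reduces to standard machinery once a surrogate propensity score is defined. Below is a hedged scikit-learn sketch with synthetic scores and outcomes (not NLST data), including a simple search for an operating point at a target specificity.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical surrogate propensity scores (e.g., ordinal nodule-size categories)
# and cancer outcomes; values are synthetic, not NLST data.
rng = np.random.default_rng(3)
cancer = rng.integers(0, 2, 500)
score = cancer * rng.normal(2.0, 1.0, 500) + (1 - cancer) * rng.normal(0.5, 1.0, 500)

fpr, tpr, _ = roc_curve(cancer, score)
auc = roc_auc_score(cancer, score)

# Pick an operating point: highest sensitivity with specificity >= 0.90
ok = (1 - fpr) >= 0.90
best = np.argmax(tpr * ok)
print(round(auc, 3), round(tpr[best], 3), round(1 - fpr[best], 3))
```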

National lung screening trial: variability in nodule detection rates in chest CT studies

  • Pinsky, P. F.
  • Gierada, D. S.
  • Nath, P. H.
  • Kazerooni, E.
  • Amorosa, J.
Radiology 2013 Journal Article, cited 43 times
Website
PURPOSE: To characterize the variability in radiologists' interpretations of computed tomography (CT) studies in the National Lung Screening Trial (NLST) (including assessment of false-positive rates [FPRs] and sensitivity), to examine factors that contribute to variability, and to evaluate trade-offs between FPRs and sensitivity among different groups of radiologists. MATERIALS AND METHODS: The HIPAA-compliant NLST was approved by the institutional review board at each screening center; all participants provided informed consent. NLST radiologists reported overall screening results, nodule-specific findings, and recommendations for diagnostic follow-up. A noncalcified nodule of 4 mm or larger constituted a positive screening result. The FPR was defined as the rate of positive screening examinations in participants without a cancer diagnosis within 1 year. Descriptive analyses and mixed-effects models were utilized. The average odds ratio (OR) for a false-positive result across all pairs of radiologists was used as a measure of variability. RESULTS: One hundred twelve radiologists at 32 screening centers each interpreted 100 or more NLST CT studies, interpreting 72 160 of 75 126 total NLST CT studies in aggregate. The mean FPR for radiologists was 28.7% +/- 13.7 (standard deviation), with a range of 3.8%-69.0%. The model yielded an average OR of 2.49 across all pairs of radiologists and an OR of 1.83 for pairs within the same screening center. Mean FPRs were similar for academic versus nonacademic centers (27.9% and 26.7%, respectively) and for centers inside (25.0%) versus outside (28.7%) the U.S. "histoplasmosis belt." Aggregate sensitivity was 96.5% for radiologists with FPRs higher than the median (27.1%), compared with 91.9% for those with FPRs lower than the median (P = .02). CONCLUSION: There was substantial variability in radiologists' FPRs. Higher FPRs were associated with modestly higher sensitivity.
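
The reader-variability quantities in this abstract (per-radiologist false-positive rates and an odds ratio between readers) can be illustrated with a small pandas sketch on synthetic reads; the mixed-effects modeling used by the authors is not reproduced here.

```python
import numpy as np
import pandas as pd

# Synthetic screening reads: one row per exam, with the interpreting radiologist,
# a positive/negative call, and whether cancer was diagnosed within one year.
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "radiologist": rng.choice(["A", "B", "C"], 3000),
    "positive_call": rng.integers(0, 2, 3000),
    "cancer_within_1yr": rng.random(3000) < 0.01,
})

# False-positive rate per radiologist: positive calls among exams without cancer
no_cancer = df[~df["cancer_within_1yr"]]
fpr = no_cancer.groupby("radiologist")["positive_call"].mean()
print(fpr.round(3))

def odds(p):
    return p / (1 - p)

# Odds ratio for a false-positive call between two readers
print(round(odds(fpr["A"]) / odds(fpr["B"]), 2))
```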

Short-and long-term lung cancer risk associated with noncalcified nodules observed on low-dose CT

  • Pinsky, Paul F
  • Nath, P Hrudaya
  • Gierada, David S
  • Sonavane, Sushil
  • Szabo, Eva
Cancer prevention research 2014 Journal Article, cited 10 times
Website

A versatile method for bladder segmentation in computed tomography two-dimensional images under adverse conditions

  • Pinto, João Ribeiro
  • Tavares, João Manuel RS
Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 2017 Journal Article, cited 1 times
Website

Genomics of Brain Tumor Imaging

  • Pope, Whitney B
Neuroimaging Clinics of North America 2015 Journal Article, cited 26 times
Website

Disorder in Pixel-Level Edge Directions on T1WI Is Associated with the Degree of Radiation Necrosis in Primary and Metastatic Brain Tumors: Preliminary Findings

  • Prasanna, P
  • Rogers, L
  • Lam, TC
  • Cohen, M
  • Siddalingappa, A
  • Wolansky, L
  • Pinho, M
  • Gupta, A
  • Hatanpaa, KJ
  • Madabhushi, A
American Journal of Neuroradiology 2019 Journal Article, cited 0 times
Website

An anatomic transcriptional atlas of human glioblastoma

  • Puchalski, Ralph B
  • Shah, Nameeta
  • Miller, Jeremy
  • Dalley, Rachel
  • Nomura, Steve R
  • Yoon, Jae-Guen
  • Smith, Kimberly A
  • Lankerovich, Michael
  • Bertagnolli, Darren
  • Bickley, Kris
Science 2018 Journal Article, cited 6 times
Website

An anatomic transcriptional atlas of human glioblastoma

  • Puchalski, Ralph B
  • Shah, Nameeta
  • Miller, Jeremy
  • Dalley, Rachel
  • Nomura, Steve R
  • Yoon, Jae-Guen
  • Smith, Kimberly A
  • Lankerovich, Michael
  • Bertagnolli, Darren
  • Bickley, Kris
  • Boe, Andrew F
  • Brouner, Krissy
  • Butler, Stephanie
  • Caldejon, Shiella
  • Chapin, Mike
  • Datta, Suvro
  • Dee, Nick
  • Desta, Tsega
  • Dolbeare, Tim
  • Dotson, Nadezhda
  • Ebbert, Amanda
  • Feng, David
  • Feng, Xu
  • Fisher, Michael
  • Gee, Garrett
  • Goldy, Jeff
  • Gourley, Lindsey
  • Gregor, Benjamin W
  • Gu, Guangyu
  • Hejazinia, Nika
  • Hohmann, John
  • Hothi, Parvinder
  • Howard, Robert
  • Joines, Kevin
  • Kriedberg, Ali
  • Kuan, Leonard
  • Lau, Chris
  • Lee, Felix
  • Lee, Hwahyung
  • Lemon, Tracy
  • Long, Fuhui
  • Mastan, Naveed
  • Mott, Erika
  • Murthy, Chantal
  • Ngo, Kiet
  • Olson, Eric
  • Reding, Melissa
  • Riley, Zack
  • Rosen, David
  • Sandman, David
  • Shapovalova, Nadiya
  • Slaughterbeck, Clifford R
  • Sodt, Andrew
  • Stockdale, Graham
  • Szafer, Aaron
  • Wakeman, Wayne
  • Wohnoutka, Paul E
  • White, Steven J
  • Marsh, Don
  • Rostomily, Robert C
  • Ng, Lydia
  • Dang, Chinh
  • Jones, Allan
  • Keogh, Bart
  • Gittleman, Haley R
  • Barnholtz-Sloan, Jill S
  • Cimino, Patrick J
  • Uppin, Megha S
  • Keene, C Dirk
  • Farrokhi, Farrokh R
  • Lathia, Justin D
  • Berens, Michael E
  • Iavarone, Antonio
  • Bernard, Amy
  • Lein, Ed
  • Phillips, John W
  • Rostad, Steven W
  • Cobbs, Charles
  • Hawrylycz, Michael J
  • Foltz, Greg D
Science 2018 Journal Article, cited 6 times
Website
Glioblastoma is an aggressive brain tumor that carries a poor prognosis. The tumor's molecular and cellular landscapes are complex, and their relationships to histologic features routinely used for diagnosis are unclear. We present the Ivy Glioblastoma Atlas, an anatomically based transcriptional atlas of human glioblastoma that aligns individual histologic features with genomic alterations and gene expression patterns, thus assigning molecular information to the most important morphologic hallmarks of the tumor. The atlas and its clinical and genomic database are freely accessible online data resources that will serve as a valuable platform for future investigations of glioblastoma pathogenesis, diagnosis, and treatment.

A Reversible and Imperceptible Watermarking Approach for Ensuring the Integrity and Authenticity of Brain MR Images

  • Qasim, Asaad Flayyih
2019 Thesis, cited 0 times
Website
The digital medical workflow has many circumstances in which the image data can be manipulated both within the secured Hospital Information Systems (HIS) and outside, as images are viewed, extracted and exchanged. This potentially raises ethical and legal concerns regarding the modification of image details that are crucial in medical examinations. Digital watermarking is recognised as a robust technique for enhancing trust within medical imaging by detecting alterations applied to medical images. Despite its efficiency, digital watermarking has not been widely used in medical imaging. Existing watermarking approaches often lack validation of their appropriateness to medical domains. Particularly, several research gaps have been identified: (i) essential requirements for the watermarking of medical images are not well defined; (ii) no standard approach can be found in the literature to evaluate the imperceptibility of watermarked images; and (iii) no study has been conducted before to test digital watermarking in a medical imaging workflow. This research aims to investigate digital watermarking by designing, analysing and applying it to medical images to confirm that manipulations can be detected and tracked. In addressing these gaps, a number of original contributions have been presented. A new reversible and imperceptible watermarking approach is presented to detect manipulations of brain Magnetic Resonance (MR) images based on the Difference Expansion (DE) technique. Experimental results show that the proposed method, whilst fully reversible, can also realise a watermarked image with low degradation for reasonable and controllable embedding capacity. This is fulfilled by encoding the data into smooth regions (blocks that have the least differences between their pixel values) inside the Region of Interest (ROI) part of medical images and also through the elimination of the large location map (location of pixels used for encoding the data) required at extraction to retrieve the encoded data. This compares favourably to outcomes reported under current state-of-the-art techniques in terms of visual image quality of watermarked images. This was also evaluated through conducting a novel visual assessment based on relative Visual Grading Analysis (relative VGA) to define a perceptual threshold at which modifications become noticeable to radiographers. The proposed approach is then integrated into medical systems to verify its validity and applicability in a real application scenario of medical imaging where medical images are generated, exchanged and archived. This enhanced security measure therefore enables the detection of image manipulations by an imperceptible and reversible watermarking approach, which may establish increased trust in the digital medical imaging workflow.
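
The thesis builds on the classic difference-expansion (DE) transform. The sketch below shows embedding and recovering a single bit in one pixel pair; the ROI selection, smooth-block selection, and location-map handling described in the abstract, as well as overflow/underflow checks against the valid pixel range, are omitted for brevity.

```python
def de_embed(x, y, bit):
    """Embed one bit into a pixel pair (x, y) by difference expansion."""
    l = (x + y) // 2          # integer average
    h = x - y                 # difference
    h_exp = 2 * h + bit       # expanded difference carrying the bit
    x_w = l + (h_exp + 1) // 2
    y_w = l - h_exp // 2
    return x_w, y_w

def de_extract(x_w, y_w):
    """Recover the embedded bit and the original pixel pair."""
    h_exp = x_w - y_w
    l = (x_w + y_w) // 2
    bit = h_exp & 1
    h = (h_exp - bit) // 2
    x = l + (h + 1) // 2
    y = l - h // 2
    return bit, x, y

pair = (152, 149)
wm = de_embed(*pair, 1)
print(wm, de_extract(*wm))   # -> (154, 147) (1, 152, 149)
```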

Identification of biomarkers for pseudo and true progression of GBM based on radiogenomics study

  • Qian, Xiaohua
  • Tan, Hua
  • Zhang, Jian
  • Liu, Keqin
  • Yang, Tielin
  • Wang, Maode
  • Debinskie, Waldemar
  • Zhao, Weilin
  • Chan, Michael D
  • Zhou, Xiaobo
Oncotarget 2016 Journal Article, cited 8 times
Website

Texture Classification Study of MR Images for Hepatocellular Carcinoma

  • QIU, Jia-jun
  • WU, Yue
  • HUI, Bei
  • LIU, Yan-bo
Journal of University of Electronic Science and Technology of China 2019 Journal Article, cited 0 times
Website
Combining the wavelet multi-resolution analysis method and the statistical analysis method, a composite texture classification model is proposed and its value is evaluated for computer-aided diagnosis of hepatocellular carcinoma (HCC) and normal liver tissue based on magnetic resonance (MR) images. First, training samples are divided into two groups by category, and statistics of the wavelet coefficients are calculated for each group. Second, two discretizations are performed on the wavelet coefficients of a new sample based on the two sets of statistical results, and two groups of features are extracted by histogram, co-occurrence matrix, run-length matrix, etc. Finally, classification is performed twice based on the two groups of features to calculate the category attribute probabilities, and a final decision is made. The experimental results demonstrate that the proposed model can obtain better classification performance than routine methods and is valuable for the computer-aided diagnosis of HCC and normal liver tissue based on MR images.

Prostate segmentation: An efficient convex optimization approach with axial symmetry using 3-D TRUS and MR images

  • Qiu, Wu
  • Yuan, Jing
  • Ukwatta, Eranga
  • Sun, Yue
  • Rajchl, Martin
  • Fenster, Aaron
Medical Imaging, IEEE Transactions on 2014 Journal Article, cited 58 times
Website

An Efficient Framework for Accurate Arterial Input Selection in DSC-MRI of Glioma Brain Tumors

  • Rahimzadeh, H
  • Kazerooni, A Fathi
  • Deevband, MR
  • Rad, H Saligheh
Journal of Biomedical Physics and Engineering 2018 Journal Article, cited 0 times
Website

Intelligent texture feature extraction and indexing for MRI image retrieval using curvelet and PCA with HTF

  • Rajakumar, K
  • Muttan, S
  • Deepa, G
  • Revathy, S
  • Priya, B Shanmuga
Advances in Natural and Applied Sciences 2015 Journal Article, cited 0 times
Website

Brain Tumor Classification Using MRI Images with K-Nearest Neighbor Method

  • Ramdlon, Rafi Haidar
  • Martiana Kusumaningtyas, Entin
  • Karlita, Tita
2019 Conference Proceedings, cited 0 times
A high level of accuracy in diagnosing tumor type from MRI results is required to establish appropriate medical treatment. MRI results can be examined computationally using the K-Nearest Neighbor method, a basic classification technique in image processing. The tumor classification system is designed to detect tumor and edema in T1 and T2 image sequences, as well as to label and classify tumor type. The system interprets only the axial sections of the MRI results, which are classified into three classes: Astrocytoma, Glioblastoma, and Oligodendroglioma. To detect the tumor area, basic image processing techniques are employed, comprising image enhancement, image binarization, morphological operations, and watershed segmentation. Tumor classification is applied after the segmentation process, using shape feature extraction. The tumor classification accuracy obtained was 89.5 percent, which provides clearer and more specific information regarding tumor detection.
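
A rough sketch of the pipeline described above (binarization, morphological cleanup, shape feature extraction, and a K-Nearest Neighbor classifier) is shown below using scikit-image and scikit-learn; the specific parameters, the watershed step, and the training data are illustrative assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects, binary_closing
from skimage.measure import label, regionprops
from sklearn.neighbors import KNeighborsClassifier

def shape_features(mri_slice):
    """Binarize an MRI slice, clean it morphologically, and return simple
    shape features (area, eccentricity, solidity) of the largest region."""
    mask = mri_slice > threshold_otsu(mri_slice)
    mask = binary_closing(remove_small_objects(mask, min_size=64))
    regions = regionprops(label(mask))
    if not regions:
        return [0.0, 0.0, 0.0]
    r = max(regions, key=lambda p: p.area)
    return [float(r.area), float(r.eccentricity), float(r.solidity)]

# Hypothetical training data: feature vectors and class labels
# (0 = astrocytoma, 1 = glioblastoma, 2 = oligodendroglioma)
X_train = [[350, 0.4, 0.90], [900, 0.7, 0.80], [500, 0.2, 0.95]]
y_train = [0, 1, 2]

knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
test_slice = np.random.default_rng(5).random((128, 128))  # stand-in for a real slice
print(knn.predict([shape_features(test_slice)]))
```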

Exploring relationships between multivariate radiological phenotypes and genetic features: A case-study in Glioblastoma using the Cancer Genome Atlas

  • Rao, Arvind
2013 Conference Proceedings, cited 0 times

Integrative Analysis of mRNA, microRNA, and Protein Correlates of Relative Cerebral Blood Volume Values in GBM Reveals the Role for Modulators of Angiogenesis and Tumor Proliferation

  • Rao, Arvind
  • Manyam, Ganiraju
  • Rao, Ganesh
  • Jain, Rajan
Cancer Informatics 2016 Journal Article, cited 5 times
Website

A combinatorial radiographic phenotype may stratify patient survival and be associated with invasion and proliferation characteristics in glioblastoma

  • Rao, Arvind
  • Rao, Ganesh
  • Gutman, David A
  • Flanders, Adam E
  • Hwang, Scott N
  • Rubin, Daniel L
  • Colen, Rivka R
  • Zinn, Pascal O
  • Jain, Rajan
  • Wintermark, Max
Journal of neurosurgery 2016 Journal Article, cited 19 times
Website

Magnetic resonance spectroscopy as an early indicator of response to anti-angiogenic therapy in patients with recurrent glioblastoma: RTOG 0625/ACRIN 6677

  • Ratai, E. M.
  • Zhang, Z.
  • Snyder, B. S.
  • Boxerman, J. L.
  • Safriel, Y.
  • McKinstry, R. C.
  • Bokstein, F.
  • Gilbert, M. R.
  • Sorensen, A. G.
  • Barboriak, D. P.
Neuro-oncology 2013 Journal Article, cited 0 times
Website
Background. The prognosis for patients with recurrent glioblastoma remains poor. The purpose of this study was to assess the potential role of MR spectroscopy as an early indicator of response to anti-angiogenic therapy. Methods. Thirteen patients with recurrent glioblastoma were enrolled in RTOG 0625/ACRIN 6677, a prospective multicenter trial in which bevacizumab was used in combination with either temozolomide or irinotecan. Patients were scanned prior to treatment and at specific timepoints during the treatment regimen. Postcontrast T1-weighted MRI was used to assess 6-month progression-free survival. Spectra from the enhancing tumor and peritumoral regions were defined on the postcontrast T1-weighted images. Changes in the concentration ratios of N-acetylaspartate/creatine (NAA/Cr), choline-containing compounds (Cho)/Cr, and NAA/Cho were quantified in comparison with pretreatment values. Results. NAA/Cho levels increased and Cho/Cr levels decreased within enhancing tumor at 2 weeks relative to pretreatment levels (P = .048 and P = .016, respectively), suggesting a possible antitumor effect of bevacizumab with cytotoxic chemotherapy. Nine of the 13 patients were alive and progression free at 6 months. Analysis of receiver operating characteristic curves for NAA/Cho changes in tumor at 8 weeks revealed higher levels in patients progression free at 6 months (area under the curve = 0.85), suggesting that NAA/Cho is associated with treatment response. Similar results were observed for receiver operating characteristic curve analyses against 1-year survival. In addition, decreased Cho/Cr and increased NAA/Cr and NAA/Cho in tumor periphery at 16 weeks posttreatment were associated with both 6-month progression-free survival and 1-year survival. Conclusion. Changes in NAA and Cho by MR spectroscopy may potentially be useful as imaging biomarkers in assessing response to anti-angiogenic treatment.

Multivariate Analysis of Preoperative Magnetic Resonance Imaging Reveals Transcriptomic Classification of de novo Glioblastoma Patients

  • Rathore, Saima
  • Akbari, Hamed
  • Bakas, Spyridon
  • Pisapia, Jared M
  • Shukla, Gaurav
  • Rudie, Jeffrey D
  • Da, Xiao
  • Davuluri, Ramana V
  • Dahmane, Nadia
  • O'Rourke, Donald M
Frontiers in computational neuroscience 2019 Journal Article, cited 0 times

Accelerating Machine Learning with Training Data Management

  • Ratner, Alexander Jason
2019 Thesis, cited 1 times
Website
One of the biggest bottlenecks in developing machine learning applications today is the need for large hand-labeled training datasets. Even at the world's most sophisticated technology companies, and especially at other organizations across science, medicine, industry, and government, the time and monetary cost of labeling and managing large training datasets is often the blocking factor in using machine learning. In this thesis, we describe work on training data management systems that enable users to programmatically build and manage training datasets, rather than labeling and managing them by hand, and present algorithms and supporting theory for automatically modeling this noisier process of training set specification in order to improve the resulting training set quality. We then describe extensive empirical results and real-world deployments demonstrating that programmatically building, managing, and modeling training sets in this way can lead to radically faster, more flexible, and more accessible ways of developing machine learning applications. We start by describing data programming, a paradigm for labeling training datasets programmatically rather than by hand, and Snorkel, an open source training data management system built around data programming that has been used by major technology companies, academic labs, and government agencies to build machine learning applications in days or weeks rather than months or years. In Snorkel, rather than hand-labeling training data, users write programmatic operators called labeling functions, which label data using various heuristic or weak supervision strategies such as pattern matching, distant supervision, and other models. These labeling functions can have noisy, conflicting, and correlated outputs, which Snorkel models and combines into clean training labels without requiring any ground truth using theoretically consistent modeling approaches we develop. We then report on extensive empirical validations, user studies, and real-world applications of Snorkel in industrial, scientific, medical, and other use cases ranging from knowledge base construction from text data to medical monitoring over image and video data. Next, we will describe two other approaches for enabling users to programmatically build and manage training datasets, both currently integrated into the Snorkel open source framework: Snorkel MeTaL, an extension of data programming and Snorkel to the setting where users have multiple related classification tasks, in particular focusing on multi-task learning; and TANDA, a system for optimizing and managing strategies for data augmentation, a critical training dataset management technique wherein a labeled dataset is artificially expanded by transforming data points. Finally, we will conclude by outlining future research directions for further accelerating and democratizing machine learning workflows, such as higher-level programmatic interfaces and massively multi-task frameworks.
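
The core idea of data programming, several noisy labeling functions whose votes are combined into training labels, can be sketched without the Snorkel library itself. The toy labeling functions and the majority-vote combiner below are illustrative only; Snorkel instead learns a probabilistic model of labeling-function accuracies and correlations.

```python
# Minimal data-programming-style sketch: noisy labeling functions vote on each
# example and their outputs are combined into training labels.
ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

def lf_contains_nodule(report):            # pattern-matching heuristic
    return POSITIVE if "nodule" in report.lower() else ABSTAIN

def lf_negation(report):                   # crude negation heuristic
    return NEGATIVE if "no nodule" in report.lower() else ABSTAIN

def lf_size_mentioned(report):             # weak distant-supervision-style cue
    return POSITIVE if "mm" in report.lower() else ABSTAIN

labeling_functions = [lf_contains_nodule, lf_negation, lf_size_mentioned]

def majority_label(example):
    """Combine labeling-function votes; ties and all-abstain cases stay unlabeled."""
    votes = [lf(example) for lf in labeling_functions]
    pos, neg = votes.count(POSITIVE), votes.count(NEGATIVE)
    if pos == neg:
        return ABSTAIN
    return POSITIVE if pos > neg else NEGATIVE

reports = ["4 mm nodule in the right upper lobe", "no nodule identified"]
print([majority_label(r) for r in reports])   # -> [1, -1]
```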

Automated pulmonary nodule CT image characterization in lung cancer screening

  • Reeves, Anthony P
  • Xie, Yiting
  • Jirapatnakul, Artit
International journal of computer assisted radiology and surgery 2016 Journal Article, cited 19 times
Website

Automated image quality assessment for chest CT scans

  • Reeves, A. P.
  • Xie, Y.
  • Liu, S.
Med Phys 2018 Journal Article, cited 0 times
Website
PURPOSE: Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. METHODS: For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. RESULTS: The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. CONCLUSIONS: Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods.

Improved False Positive Reduction by Novel Morphological Features for Computer-Aided Polyp Detection in CT Colonography

  • Ren, Yacheng
  • Ma, Jingchen
  • Xiong, Junfeng
  • Chen, Yi
  • Lu, Lin
  • Zhao, Jun
IEEE journal of biomedical and health informatics 2018 Journal Article, cited 3 times
Website

Optimizing deep belief network parameters using grasshopper algorithm for liver disease classification

  • Renukadevi, Thangavel
  • Karunakaran, Saminathan
International Journal of Imaging Systems and Technology 2019 Journal Article, cited 0 times
Image processing plays a vital role in many areas such as healthcare, military, science and business due to its wide variety of advantages and applications. Detection of liver disease from computed tomography (CT) images is one of the difficult tasks in the medical field. Previous approaches classified liver disease using hand-crafted features and conventional classifiers, but their classification results are not optimal. In this article, we propose a novel method utilizing a deep belief network (DBN) with the grasshopper optimization algorithm (GOA) for liver disease classification. Initially, the image quality is enhanced by preprocessing techniques and then features like texture, color and shape are extracted. The extracted features are reduced using a dimensionality reduction method, principal component analysis (PCA). Here, the DBN parameters are optimized using GOA for recognizing liver disease. The experiments are performed on real-time and open source CT image datasets which comprise normal, cyst, hepatoma, cavernous hemangioma, fatty liver, metastasis, cirrhosis, and tumor samples. The proposed method yields 98% accuracy, 95.82% sensitivity, 97.52% specificity, 98.53% precision, and 96.8% F1 score in the simulation process when compared with other existing techniques.

Multi-fractal detrended texture feature for brain tumor classification

  • Reza, Syed MS
  • Mays, Randall
  • Iftekharuddin, Khan M
2015 Conference Proceedings, cited 5 times
Website

Conditional Generative Adversarial Refinement Networks for Unbalanced Medical Image Semantic Segmentation

  • Rezaei, Mina
  • Yang, Haojin
  • Harmuth, Konstantin
  • Meinel, Christoph
2019 Conference Proceedings, cited 0 times
Website

Detection of Lung Nodules on Medical Images by the Use of Fractal Segmentation

  • Rezaie, Afsaneh Abdollahzadeh
  • Habiboghli, Ali
International Journal of Interactive Multimedia and Artificial Intelligence 2017 Journal Article, cited 0 times
Website
In the present paper, a method for the detection of malignant and benign tumors on CT scan images is proposed. In the proposed method, the area of interest in which the tumor may exist is first selected on the original image; by the use of image segmentation and determination of the image's threshold limit, the tumor's area is specified, and edge detection filters are then used to detect the tumor's edge. After detection of this area, and by calculating the fractal dimensions with a lower error rate and better resolution, the regions that contain the tumor are determined. The images used in the proposed method have been extracted from The Cancer Imaging Archive database, which is made available to the public. Compared to other methods, our proposed method successfully recognizes benign and malignant tumors in all cases in the database that have been clinically confirmed.
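
The fractal-dimension step mentioned in the abstract is commonly implemented by box counting. The sketch below estimates a box-counting dimension for a 2D binary mask; the paper's segmentation, thresholding, and edge-detection steps are not reproduced, and the disc example is purely illustrative.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a 2D binary mask:
    count occupied boxes at several scales and fit log(count) vs log(1/size)."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in box_sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Illustrative example: a filled disc (expected dimension close to 2)
yy, xx = np.mgrid[:256, :256]
disc = (yy - 128) ** 2 + (xx - 128) ** 2 < 80 ** 2
print(round(box_counting_dimension(disc), 2))
```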

3D medical image denoising using 3D block matching and low-rank matrix completion

  • Roozgard, Aminmohammad
  • Barzigar, Nafise
  • Verma, Pramode
  • Cheng, Samuel
2013 Conference Proceedings, cited 0 times
Website

Malignant nodule detection on lung ct scan images with kernel rx-algorithm

  • Roozgard, Aminmohammad
  • Cheng, Samuel
  • Liu, Hong
2012 Conference Proceedings, cited 22 times
Website

A new 2.5 D representation for lymph node detection using random sets of deep convolutional neural network observations

  • Roth, Holger R
  • Lu, Le
  • Seff, Ari
  • Cherry, Kevin M
  • Hoffman, Joanne
  • Wang, Shijun
  • Liu, Jiamin
  • Turkbey, Evrim
  • Summers, Ronald M
2014 Book Section, cited 192 times
Website

Visual Interpretation with Three-Dimensional Annotations (VITA): Three-Dimensional Image Interpretation Tool for Radiological Reporting

  • Roy, Sharmili
  • Brown, Michael S
  • Shih, George L
Journal of Digital Imaging 2014 Journal Article, cited 5 times
Website
This paper introduces a software framework called Visual Interpretation with Three-Dimensional Annotations (VITA) that is able to automatically generate three-dimensional (3D) visual summaries based on radiological annotations made during routine exam reporting. VITA summaries are in the form of rotating 3D volumes where radiological annotations are highlighted to place important clinical observations into a 3D context. The rendered volume is produced as a Digital Imaging and Communications in Medicine (DICOM) object and is automatically added to the study for archival in Picture Archiving and Communication System (PACS). In addition, a video summary (e.g., MPEG4) can be generated for sharing with patients and for situations where DICOM viewers are not readily available to referring physicians. The current version of VITA is compatible with ClearCanvas; however, VITA can work with any PACS workstation that has a structured annotation implementation (e.g., Extendible Markup Language, Health Level 7, Annotation and Image Markup) and is able to seamlessly integrate into the existing reporting workflow. In a survey with referring physicians, the vast majority strongly agreed that 3D visual summaries improve the communication of the radiologists' reports and aid communication with patients.

Multi-Disease Segmentation of Gliomas and White Matter Hyperintensities in the BraTS Data Using a 3D Convolutional Neural Network

  • Rudie, Jeffrey D.
  • Weiss, David A.
  • Saluja, Rachit
  • Rauschecker, Andreas M.
  • Wang, Jiancong
  • Sugrue, Leo
  • Bakas, Spyridon
  • Colby, John B.
Frontiers in computational neuroscience 2019 Journal Article, cited 0 times
An important challenge in segmenting real-world biomedical imaging data is the presence of multiple disease processes within individual subjects. Most adults above age 60 exhibit a variable degree of small vessel ischemic disease, as well as chronic infarcts, which will manifest as white matter hyperintensities (WMH) on brain MRIs. Subjects diagnosed with gliomas will also typically exhibit some degree of abnormal T2 signal due to WMH, rather than just due to tumor. We sought to develop a fully automated algorithm to distinguish and quantify these distinct disease processes within individual subjects’ brain MRIs. To address this multi-disease problem, we trained a 3D U-Net to distinguish between abnormal signal arising from tumors vs. WMH in the 3D multi-parametric MRI (mpMRI, i.e., native T1-weighted, T1-post-contrast, T2, T2-FLAIR) scans of the International Brain Tumor Segmentation (BraTS) 2018 dataset (ntraining = 285, nvalidation = 66). Our trained neuroradiologist manually annotated WMH on the BraTS training subjects, finding that 69% of subjects had WMH. Our 3D U-Net model had a 4-channel 3D input patch (80 × 80 × 80) from mpMRI, four encoding and decoding layers, and an output of either four [background, active tumor (AT), necrotic core (NCR), peritumoral edematous/infiltrated tissue (ED)] or five classes (adding WMH as the fifth class). For both the four- and five-class output models, the median Dice for whole tumor (WT) extent (i.e., union of AT, ED, NCR) was 0.92 in both training and validation sets. Notably, the five-class model achieved significantly (p = 0.002) lower/better Hausdorff distances for WT extent in the training subjects. There was strong positive correlation between manually segmented and predicted volumes for WT (r = 0.96) and WMH (r = 0.89). Larger lesion volumes were positively correlated with higher/better Dice scores for WT (r = 0.33), WMH (r = 0.34), and across all lesions (r = 0.89) on a log(10) transformed scale. While the median Dice for WMH was 0.42 across training subjects with WMH, the median Dice was 0.62 for those with at least 5 cm3 of WMH. We anticipate the development of computational algorithms that are able to model multiple diseases within a single subject will be a critical step toward translating and integrating artificial intelligence systems into the heterogeneous real-world clinical workflow.

TCIApathfinder: an R client for The Cancer Imaging Archive REST API

  • Russell, Pamela
  • Fountain, Kelly
  • Wolverton, Dulcy
  • Ghosh, Debashis
Cancer research 2018 Journal Article, cited 1 times
Website

Automatic Removal of Mechanical Fixations from CT Imagery with Particle Swarm Optimisation

  • Ryalat, Mohammad Hashem
  • Laycock, Stephen
  • Fisher, Mark
2017 Conference Proceedings, cited 0 times
Website

Deciphering unclassified tumors of non-small-cell lung cancer through radiomics

  • Saad, Maliazurina
  • Choi, Tae-Sun
Computers in biology and medicine 2017 Journal Article, cited 8 times
Website

Computer-assisted subtyping and prognosis for non-small cell lung cancer patients with unresectable tumor

  • Saad, Maliazurina
  • Choi, Tae-Sun
Computerized Medical Imaging and Graphics 2018 Journal Article, cited 0 times
Website
BACKGROUND: The histological classification or subtyping of non-small cell lung cancer is essential for systematic therapy decisions. Differentiating between the two main subtypes of pulmonary adenocarcinoma and squamous cell carcinoma highlights the considerable differences that exist in the prognosis of patient outcomes. Physicians rely on pathological analysis to reveal these phenotypic variations, which requires invasive methods, such as biopsy and resection samples, but almost 70% of tumors are unresectable at the point of diagnosis. METHOD: A computational method that fuses two frameworks of computerized subtyping and prognosis was proposed, and it was validated against a publicly available dataset in The Cancer Imaging Archive that consisted of 82 curated patients with CT scans. The accuracy of the proposed method was compared with the gold standard of pathological analysis, as defined by the International Classification of Disease for Oncology (ICD-O). A series of survival outcome test cases were evaluated using the Kaplan-Meier estimator and log-rank test (p-value) between the computational method and ICD-O. RESULTS: The computational method demonstrated high accuracy in subtyping (96.2%) and good consistency in the statistical significance of overall survival prediction for adenocarcinoma and squamous cell carcinoma patients (p<0.03) with respect to its counterpart pathological subtyping (p<0.02). The degree of reproducibility between prognosis based on computational and pathological subtyping was substantial, with an averaged concordance correlation coefficient (CCC) of 0.9910. CONCLUSION: The findings in this study support the idea that quantitative analysis is capable of representing tissue characteristics, as offered by a qualitative analysis.
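
The survival comparison step (Kaplan-Meier estimation and a log-rank test between two subtype groups) is sketched below with the Python lifelines package on synthetic data; the subtyping algorithm itself and the concordance correlation analysis are not reproduced.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(6)
# Synthetic survival times (months) and event flags for two predicted subtypes
t_adeno = rng.exponential(30, 40); e_adeno = rng.integers(0, 2, 40)
t_squam = rng.exponential(18, 40); e_squam = rng.integers(0, 2, 40)

km = KaplanMeierFitter()
km.fit(t_adeno, event_observed=e_adeno, label="adenocarcinoma")
print(km.median_survival_time_)

result = logrank_test(t_adeno, t_squam, event_observed_A=e_adeno, event_observed_B=e_squam)
print(round(result.p_value, 4))
```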

Are shape morphologies associated with survival? A potential shape-based biomarker predicting survival in lung cancer

  • Saad, Maliazurina
  • Lee, Ik Hyun
  • Choi, Tae-Sun
J Cancer Res Clin Oncol 2019 Journal Article, cited 0 times
Website
PURPOSE: Imaging biomarkers (IBMs) are increasingly investigated as prognostic indicators. IBMs might be capable of assisting treatment selection by providing useful insights into tumor-specific factors in a non-invasive manner. METHODS: We investigated six three-dimensional shape-based IBMs: eccentricities between (I) intermediate-major axis (Eimaj), (II) intermediate-minor axis (Eimin), (III) major-minor axis (Emj-mn) and volumetric index of (I) sphericity (VioS), (II) flattening (VioF), (III) elongating (VioE). Additionally, we investigated previously established two-dimensional shape IBMs: eccentricity (E), index of sphericity (IoS), and minor-to-major axis length (Mn_Mj). IBMs were compared in terms of their predictive performance for 5-year overall survival in two independent cohorts of patients with lung cancer. Cohort 1 received surgical excision, while cohort 2 received radiation therapy alone or chemo-radiation therapy. Univariate and multivariate survival analyses were performed. Correlations with clinical parameters were evaluated using analysis of variance. IBM reproducibility was assessed using concordance correlation coefficients (CCCs). RESULTS: E was associated with reduced survival in cohort 1 (hazard ratio [HR]: 0.664). Eimin and VioF were associated with reduced survival in cohort 2 (HR 1.477 and 1.701). VioS was associated with reduced survival in cohorts 1 and 2 (HR 1.758 and 1.472). Spherical tumors correlated with shorter survival durations than did irregular tumors (median survival difference: 1.21 and 0.35 years in cohorts 1 and 2, respectively). VioS was a significant predictor of survival in multivariate analyses of both cohorts. All IBMs showed good reproducibility (CCC ranged between 0.86-0.98). CONCLUSIONS: In both investigated cohorts, VioS successfully linked shape morphology to patient survival.
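
The exact definitions of the volumetric and eccentricity-based IBMs are given in the paper; as a rough illustration, the sketch below derives approximate major/intermediate/minor axis lengths of a 3D tumor mask from the covariance of its voxel coordinates and forms eccentricity-style ratios. The scaling constant and the synthetic ellipsoid are assumptions.

```python
import numpy as np

def principal_axis_lengths(mask, spacing=(1.0, 1.0, 1.0)):
    """Approximate major/intermediate/minor axis lengths of a 3D binary mask
    from the eigenvalues of the voxel-coordinate covariance matrix."""
    coords = np.argwhere(mask) * np.asarray(spacing)
    cov = np.cov(coords, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending
    # Scale factor is approximate; only the ratios matter for eccentricities.
    return 4.0 * np.sqrt(eigvals)

def eccentricity(a, b):
    """Eccentricity of an ellipse with semi-axes a >= b."""
    return np.sqrt(1.0 - (b / a) ** 2)

# Illustrative ellipsoidal mask with semi-axes 20, 12, 8 voxels
zz, yy, xx = np.mgrid[:64, :64, :64]
mask = ((zz - 32) / 20.0) ** 2 + ((yy - 32) / 12.0) ** 2 + ((xx - 32) / 8.0) ** 2 < 1
mj, im, mn = principal_axis_lengths(mask)
print(round(eccentricity(mj, mn), 2), round(eccentricity(mj, im), 2))
```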

Automated delineation of non‐small cell lung cancer: A step toward quantitative reasoning in medical decision science

  • Saad, Maliazurina
  • Lee, Ik Hyun
  • Choi, Tae‐Sun
International Journal of Imaging Systems and Technology 2019 Journal Article, cited 0 times
Website
Quantitative reasoning in medical decision science relies on the delineation of pathological objects. For example, evidence‐based clinical decisions regarding lung diseases require the segmentation of nodules, tumors, or cancers. Non‐small cell lung cancer (NSCLC) tends to be large sized, irregularly shaped, and grows against surrounding structures imposing challenges in the segmentation, even for expert clinicians. An automated delineation tool based on spatial analysis was developed and studied on 25 sets of computed tomography scans of NSCLC. Manual and automated delineations were compared, and the proposed method exhibited robustness in terms of the tumor size (5.32–18.24 mm), shape (spherical or irregular), contouring (lobulated, spiculated, or cavitated), localization (solitary, pleural, mediastinal, endobronchial, or tagging), and laterality (left or right lobe) with accuracy between 80% and 99%. Small discrepancies observed between the manual and automated delineations may arise from the variability in the practitioners' definitions of region of interest or imaging artifacts that reduced the tissue resolution.

DEMARCATE: Density-based Magnetic Resonance Image Clustering for Assessing Tumor Heterogeneity in Cancer

  • Saha, Abhijoy
  • Banerjee, Sayantan
  • Kurtek, Sebastian
  • Narang, Shivali
  • Lee, Joonsang
  • Rao, Ganesh
  • Martinez, Juan
  • Bharath, Karthik
  • Rao, Arvind UK
  • Baladandayuthapani, Veerabhadran
NeuroImage: Clinical 2016 Journal Article, cited 4 times
Website

Total Variation for Image Denoising Based on a Novel Smart Edge Detector: An Application to Medical Images

  • Said, Ahmed Ben
  • Hadjidj, Rachid
  • Foufou, Sebti
Journal of Mathematical Imaging and Vision 2018 Journal Article, cited 0 times
Website

High Level Mammographic Information Fusion For Real World Ontology Population

  • Salem, Yosra Ben
  • Idodi, Rihab
  • Ettabaa, Karim Saheb
  • Hamrouni, Kamel
  • Solaiman, Basel
Journal of Digital Information Management 2017 Journal Article, cited 1 times
Website
In this paper, we propose a novel approach for ontology instantiation from real data related to the mammographic domain. In our study, we are interested in handling two modalities of mammographic images: mammography and breast MRI. First, we propose to model the content of both images in ontological representations, since ontologies allow the description of objects from a common perspective. In order to overcome the ambiguity problem in representing image entities, we propose to take advantage of possibility theory applied to the ontological representation. Second, both locally generated ontologies are merged into a unique formal representation with the use of two similarity measures: a syntactic measure and a possibilistic measure. The candidate instances are finally used to populate the global domain ontology in order to enrich the mammographic knowledge base. The approach was validated on a real-world domain and the results were evaluated in terms of precision and recall by an expert.

Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images

  • Saltz, J.
  • Gupta, R.
  • Hou, L.
  • Kurc, T.
  • Singh, P.
  • Nguyen, V.
  • Samaras, D.
  • Shroyer, K. R.
  • Zhao, T.
  • Batiste, R.
  • Van Arnam, J.
  • Cancer Genome Atlas Research, Network
  • Shmulevich, I.
  • Rao, A. U. K.
  • Lazar, A. J.
  • Sharma, A.
  • Thorsson, V.
Cell Rep 2018 Journal Article, cited 23 times
Website
Beyond sample curation and basic pathologic characterization, the digitized H&E-stained images of TCGA samples remain underutilized. To highlight this resource, we present mappings of tumor-infiltrating lymphocytes (TILs) based on H&E images from 13 TCGA tumor types. These TIL maps are derived through computational staining using a convolutional neural network trained to classify patches of images. Affinity propagation revealed local spatial structure in TIL patterns and correlation with overall survival. TIL map structural patterns were grouped using standard histopathological parameters. These patterns are enriched in particular T cell subpopulations derived from molecular measures. TIL densities and spatial structure were differentially enriched among tumor types, immune subtypes, and tumor molecular subtypes, implying that spatial infiltrate state could reflect particular tumor cell aberration states. Obtaining spatial lymphocytic patterns linked to the rich genomic characterization of TCGA samples demonstrates one use for the TCGA image archives with insights into the tumor-immune microenvironment.
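
One step of the pipeline above, spatial clustering of TIL-positive patch coordinates with affinity propagation, is sketched below using scikit-learn on a synthetic patch grid; the CNN patch classifier and the survival analyses are not reproduced.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Synthetic patch grid: True = TIL-positive patch (stand-in for CNN output)
rng = np.random.default_rng(7)
til_map = rng.random((40, 40)) < 0.05
til_map[5:12, 5:12] |= rng.random((7, 7)) < 0.6     # a dense lymphocytic focus

coords = np.argwhere(til_map).astype(float)          # (row, col) of positive patches
ap = AffinityPropagation(damping=0.9, random_state=0).fit(coords)

print(len(ap.cluster_centers_indices_), "clusters among", len(coords), "TIL-positive patches")
```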

Identifying key radiogenomic associations between DCE-MRI and micro-RNA expressions for breast cancer

  • Samala, Ravi K
  • Chan, Heang-Ping
  • Hadjiiski, Lubomir
  • Helvie, Mark A
  • Kim, Renaid
2017 Conference Proceedings, cited 1 times
Website

Classification of Lung CT Images using BRISK Features

  • Sambasivarao, B.
  • Prathiba, G.
International Journal of Engineering and Advanced Technology (IJEAT) 2019 Journal Article, cited 0 times
Website
Lung cancer is a major cause of death in humans. To increase the survival rate, early detection of cancer is required. Lung cancer, which starts in the cells of the lung, is mainly of two types, i.e., cancerous (malignant) and non-cancerous (benign). In this paper, work is done on the lung images obtained from the Society of Photographic Instrumentation Engineers (SPIE) database. This SPIE database contains normal, benign and malignant images. In this work, 300 images from the database are used, of which 150 are benign and 150 are malignant. Feature points of lung tumor images are extracted using Binary Robust Invariant Scalable Keypoints (BRISK). BRISK attains matching quality comparable to state-of-the-art algorithms at much lower computation time. BRISK features divide the pairs of pixels surrounding the keypoint into two subsets: short-distance and long-distance pairs. The orientation of the feature point is calculated from local intensity gradients of the long-distance pairs, and the short-distance pairs are rotated using this orientation. These BRISK features are used by a classifier to classify the lung tumors as either benign or malignant. The performance is evaluated by calculating the accuracy.
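
Extraction of BRISK keypoints and descriptors is available in OpenCV; the sketch below shows the extraction call on a synthetic image. The downstream benign/malignant classifier and the SPIE dataset handling are not reproduced, and the random image is a stand-in for a real CT slice.

```python
import cv2
import numpy as np

# Synthetic stand-in for a lung CT slice; in practice this would be a loaded image.
img = (np.random.default_rng(8).random((256, 256)) * 255).astype(np.uint8)

brisk = cv2.BRISK_create()
keypoints, descriptors = brisk.detectAndCompute(img, None)

# Each descriptor is a binary vector stored as bytes; a fixed-length summary of the
# descriptors (e.g., a histogram) could feed a conventional classifier.
print(len(keypoints), None if descriptors is None else descriptors.shape)
```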

Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks

  • Sandfort, Veit
  • Yan, Ke
  • Pickhardt, Perry J
  • Summers, Ronald M
Scientific Reports 2019 Journal Article, cited 0 times
Website
Labeled medical imaging data is scarce and expensive to generate. To achieve generalizable deep learning models large amounts of data are needed. Standard data augmentation is a method to increase generalizability and is routinely performed. Generative adversarial networks offer a novel method for data augmentation. We evaluate the use of CycleGAN for data augmentation in CT segmentation tasks. Using a large image database we trained a CycleGAN to transform contrast CT images into non-contrast images. We then used the trained CycleGAN to augment our training using these synthetic non-contrast images. We compared the segmentation performance of a U-Net trained on the original dataset compared to a U-Net trained on the combined dataset of original data and synthetic non-contrast images. We further evaluated the U-Net segmentation performance on two separate datasets: The original contrast CT dataset on which segmentations were created and a second dataset from a different hospital containing only non-contrast CTs. We refer to these 2 separate datasets as the in-distribution and out-of-distribution datasets, respectively. We show that in several CT segmentation tasks performance is improved significantly, especially in out-of-distribution (noncontrast CT) data. For example, when training the model with standard augmentation techniques, performance of segmentation of the kidneys on out-of-distribution non-contrast images was dramatically lower than for in-distribution data (Dice score of 0.09 vs. 0.94 for out-of-distribution vs. in-distribution data, respectively, p < 0.001). When the kidney model was trained with CycleGAN augmentation techniques, the out-of-distribution (non-contrast) performance increased dramatically (from a Dice score of 0.09 to 0.66, p < 0.001). Improvements for the liver and spleen were smaller, from 0.86 to 0.89 and 0.65 to 0.69, respectively. We believe this method will be valuable to medical imaging researchers to reduce manual segmentation effort and cost in CT imaging.

Regression based overall survival prediction of glioblastoma multiforme patients using a single discovery cohort of multi-institutional multi-channel MR images

  • Sanghani, Parita
  • Ang, Beng Ti
  • King, Nicolas Kon Kam
  • Ren, Hongliang
Med Biol Eng Comput 2019 Journal Article, cited 0 times
Website
Glioblastoma multiforme (GBM) is a malignant brain tumor associated with poor overall survival (OS). This study aims to predict OS of GBM patients (in days) using a regression framework and assess the impact of tumor shape features on OS prediction. Multi-channel MR image-derived texture features, tumor shape and volumetric features, and patient age were obtained for 163 GBM patients. In order to assess the impact of tumor shape features on OS prediction, two feature sets, with and without tumor shape features, were created. For the feature set with tumor shape features, the mean prediction error (MPE) was 14.6 days and its 95% confidence interval (CI) was 195.8 days. For the feature set excluding shape features, the MPE was 17.1 days and its 95% CI was observed to be 212.7 days. The coefficient of determination (R2) value obtained for the feature set with shape features was 0.92, while it was 0.90 for the feature set excluding shape features. Although marginal, inclusion of shape features improves OS prediction in GBM patients. The proposed OS prediction method using regression provides good accuracy and overcomes the limitations of GBM OS classification, like choosing data-derived or pre-decided thresholds to define the OS groups.
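
The evaluation quantities reported above (mean prediction error in days and the coefficient of determination R2) are sketched below around a plain linear regression on synthetic features; the authors' actual features and regression framework are not reproduced, and the signed-error definition of MPE is an assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(9)
X = rng.normal(size=(163, 5))   # synthetic stand-ins for texture/shape/volumetric features
os_days = 300 + X @ np.array([40, -25, 15, 10, -5]) + rng.normal(0, 30, 163)

model = LinearRegression().fit(X, os_days)
pred = model.predict(X)

mpe = np.mean(pred - os_days)   # mean (signed) prediction error in days
r2 = r2_score(os_days, pred)
print(round(mpe, 2), round(r2, 2))
```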

A scheme for patient study retrieval from 3D brain MR volumes

  • Sarathi, Mangipudi Partha
  • Ansari, MA
2015 Conference Proceedings, cited 1 times
Website

Mir-21–Sox2 Axis Delineates Glioblastoma Subtypes with Prognostic Impact

  • Sathyan, Pratheesh
  • Zinn, Pascal O
  • Marisetty, Anantha L
  • Liu, Bin
  • Kamal, Mohamed Mostafa
  • Singh, Sanjay K
  • Bady, Pierre
  • Lu, Li
  • Wani, Khalida M
  • Veo, Bethany L
The Journal of Neuroscience 2015 Journal Article, cited 18 times
Website

Multisite Concordance of DSC-MRI Analysis for Brain Tumors: Results of a National Cancer Institute Quantitative Imaging Network Collaborative Project

  • Schmainda, KM
  • Prah, MA
  • Rand, SD
  • Liu, Y
  • Logan, B
  • Muzi, M
  • Rane, SD
  • Da, X
  • Yen, Y-F
  • Kalpathy-Cramer, J
American Journal of Neuroradiology 2018 Journal Article, cited 0 times
Website

Quantitative Delta T1 (dT1) as a Replacement for Adjudicated Central Reader Analysis of Contrast-Enhancing Tumor Burden: A Subanalysis of the American College of Radiology Imaging Network 6677/Radiation Therapy Oncology Group 0625 Multicenter Brain Tumor Trial.

  • Schmainda, K M
  • Prah, M A
  • Zhang, Z
  • Snyder, B S
  • Rand, S D
  • Jensen, T R
  • Barboriak, D P
  • Boxerman, J L
AJNR Am J Neuroradiol 2019 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: Brain tumor clinical trials requiring solid tumor assessment typically rely on the 2D manual delineation of enhancing tumors by >/=2 expert readers, a time-consuming step with poor interreader agreement. As a solution, we developed quantitative dT1 maps for the delineation of enhancing lesions. This retrospective analysis compares dT1 with 2D manual delineation of enhancing tumors acquired at 2 time points during the post therapeutic surveillance period of the American College of Radiology Imaging Network 6677/Radiation Therapy Oncology Group 0625 (ACRIN 6677/RTOG 0625) clinical trial. MATERIALS AND METHODS: Patients enrolled in ACRIN 6677/RTOG 0625, a multicenter, randomized Phase II trial of bevacizumab in recurrent glioblastoma, underwent standard MR imaging before and after treatment initiation. For 123 patients from 23 institutions, both 2D manual delineation of enhancing tumors and dT1 datasets were evaluable at weeks 8 (n = 74) and 16 (n = 57). Using dT1, we assessed the radiologic response and progression at each time point. Percentage agreement with adjudicated 2D manual delineation of enhancing tumor reads and association between progression status and overall survival were determined. RESULTS: For identification of progression, dT1 and adjudicated 2D manual delineation of enhancing tumor reads were in perfect agreement at week 8, with 73.7% agreement at week 16. Both methods showed significant differences in overall survival at each time point. When nonprogressors were further divided into responders versus nonresponders/nonprogressors, the agreement decreased to 70.3% and 52.6%, yet dT1 showed a significant difference in overall survival at week 8 (P = .01), suggesting that dT1 may provide greater sensitivity for stratifying subpopulations. CONCLUSIONS: This study shows that dT1 can predict early progression comparable with the standard method but offers the potential for substantial time and cost savings for clinical trials.

Dynamic susceptibility contrast MRI measures of relative cerebral blood volume as a prognostic marker for overall survival in recurrent glioblastoma: results from the ACRIN 6677/RTOG 0625 multicenter trial

  • Schmainda, K. M.
  • Zhang, Z.
  • Prah, M.
  • Snyder, B. S.
  • Gilbert, M. R.
  • Sorensen, A. G.
  • Barboriak, D. P.
  • Boxerman, J. L.
Neuro Oncol 2015 Journal Article, cited 0 times
Website
Background. The study goal was to determine whether changes in relative cerebral blood volume (rCBV) derived from dynamic susceptibility contrast (DSC) MRI are predictive of overall survival (OS) in patients with recurrent glioblastoma multiforme (GBM) when measured 2, 8, and 16 weeks after treatment initiation. Methods. Patients with recurrent GBM (37/123) enrolled in ACRIN 6677/RTOG 0625, a multicenter, randomized, phase II trial of bevacizumab with irinotecan or temozolomide, consented to DSC-MRI plus conventional MRI, 21 with DSC-MRI at baseline and at least 1 postbaseline scan. Contrast-enhancing regions of interest were determined semi-automatically using pre- and postcontrast T1-weighted images. Mean tumor rCBV normalized to white matter (nRCBV) and standardized rCBV (sRCBV) were determined for these regions of interest. The OS rates for patients with positive versus negative changes from baseline in nRCBV and sRCBV were compared using Wilcoxon rank-sum and Kaplan-Meier survival estimates with log-rank tests. Results. Patients surviving at least 1 year (OS-1) had significantly larger decreases in nRCBV at week 2 (P=.0451) and sRCBV at week 16 (P=.014). Receiver operating characteristic analysis found the percent changes of nRCBV and sRCBV at week 2 and sRCBV at week 16, but not rCBV data at week 8, to be good prognostic markers for OS-1. Patients with positive change from baseline rCBV had significantly shorter OS than those with negative change at both week 2 and week 16 (P=.0015 and P=.0067 for nRCBV and P=.0251 and P=.0004 for sRCBV, respectively). Conclusions. Early decreases in rCBV are predictive of improved survival in patients with recurrent GBM treated with bevacizumab.

Anatomical Segmentation of CT images for Radiation Therapy planning using Deep Learning

  • Schreier, Jan
2018 Thesis, cited 0 times
Website

Predicting all-cause and lung cancer mortality using emphysema score progression rate between baseline and follow-up chest CT images: A comparison of risk model performances

  • Schreuder, Anton
  • Jacobs, Colin
  • Gallardo-Estrella, Leticia
  • Prokop, Mathias
  • Schaefer-Prokop, Cornelia M
  • van Ginneken, Bram
PLoS One 2019 Journal Article, cited 0 times
Website

Classification of CT pulmonary opacities as perifissural nodules: reader variability

  • Schreuder, Anton
  • van Ginneken, Bram
  • Scholten, Ernst T
  • Jacobs, Colin
  • Prokop, Mathias
  • Sverzellati, Nicola
  • Desai, Sujal R
  • Devaraj, Anand
  • Schaefer-Prokop, Cornelia M
Radiology 2018 Journal Article, cited 3 times
Website

2d view aggregation for lymph node detection using a shallow hierarchy of linear classifiers

  • Seff, Ari
  • Lu, Le
  • Cherry, Kevin M
  • Roth, Holger R
  • Liu, Jiamin
  • Wang, Shijun
  • Hoffman, Joanne
  • Turkbey, Evrim B
  • Summers, Ronald M
2014 Book Section, cited 21 times
Website