Two-phase multi-model automatic brain tumour diagnosis system from magnetic resonance images using convolutional neural networks

  • Abd-Ellah, Mahmoud Khaled
  • Awad, Ali Ismail
  • Khalaf, Ashraf AM
  • Hamed, Hesham FA
EURASIP Journal on Image and Video Processing 2018 Journal Article, cited 0 times
Website

Detection of Lung Nodules on Medical Images by the Use of Fractal Segmentation

  • Abdollahzadeh Rezaie, Afsaneh
  • Habiboghli, Ali
International Journal of Interactive Multimedia and Artificial Intelligence 2017 Journal Article, cited 0 times
Website

Robust Computer-Aided Detection of Pulmonary Nodules from Chest Computed Tomography

  • Abduh, Zaid
  • Wahed, Manal Abdel
  • Kadah, Yasser M
Journal of Medical Imaging and Health Informatics 2016 Journal Article, cited 5 times
Website
Detection of pulmonary nodules in chest computed tomography scans plays an important role in the early diagnosis of lung cancer. A simple yet effective computer-aided detection system is developed to distinguish pulmonary nodules in chest CT scans. The proposed system includes feature extraction, normalization, selection and classification steps. One hundred forty-nine gray-level statistical features are extracted from selected regions of interest. A min-max normalization method is used, followed by a sequential forward feature selection technique, with a logistic regression model as the criterion function, which selected an optimal set of five features for classification. The classification step was performed using nearest neighbor and support vector machine (SVM) classifiers with separate training and testing sets. Several measures were used to evaluate system performance, including the area under the ROC curve (AUC), sensitivity, specificity, precision, accuracy, F1 score and Cohen's kappa. Excellent performance with high sensitivity and specificity is reported using data from two reference datasets as compared to previous work.
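The pipeline described in this abstract (min-max normalization, sequential forward selection with a logistic-regression criterion, then an SVM) can be sketched as follows. This is a minimal illustration using synthetic data as a stand-in for the 149 gray-level features, not the authors' implementation:

```python
# Sketch of a normalize -> select-5-features -> SVM pipeline, assuming
# scikit-learn; synthetic data replaces the paper's CT-derived features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# 200 hypothetical ROIs, 149 features each (matching the abstract's count)
X, y = make_classification(n_samples=200, n_features=149, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(
    MinMaxScaler(),                                    # min-max normalization
    SequentialFeatureSelector(                         # forward selection with a
        LogisticRegression(max_iter=1000),             # logistic-regression criterion
        n_features_to_select=5, direction="forward", cv=3),
    SVC(kernel="rbf"),                                 # final SVM classifier
)
clf.fit(X_train, y_train)
print(round(clf.score(X_test, y_test), 2))
```

The same fitted pipeline can be swapped to a nearest-neighbor classifier (the paper's other option) by replacing the final `SVC` step with `KNeighborsClassifier()`.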

A generalized framework for medical image classification and recognition

  • Abedini, M
  • Codella, NCF
  • Connell, JH
  • Garnavi, R
  • Merler, M
  • Pankanti, S
  • Smith, JR
  • Syeda-Mahmood, T
IBM Journal of Research and Development 2015 Journal Article, cited 19 times
Website
In this work, we study the performance of a two-stage ensemble visual machine learning framework for classification of medical images. In the first stage, models are built for subsets of features and data, and in the second stage, models are combined. We demonstrate the performance of this framework in four contexts: 1) The public ImageCLEF (Cross Language Evaluation Forum) 2013 medical modality recognition benchmark, 2) echocardiography view and mode recognition, 3) dermatology disease recognition across two datasets, and 4) a broad medical image dataset, merged from multiple data sources into a collection of 158 categories covering both general and specific medical concepts, including modalities, body regions, views, and disease states. In the first context, the presented system achieves state-of-the-art performance of 82.2% multiclass accuracy. In the second context, the system attains 90.48% multiclass accuracy. In the third, state-of-the-art performance of 90% specificity and 90% sensitivity is obtained on a small standardized dataset of 200 images using a leave-one-out strategy. For a larger dataset of 2,761 images, 95% specificity and 98% sensitivity is obtained on a 20% held-out test set. Finally, in the fourth context, the system achieves sensitivity and specificity of 94.7% and 98.4%, respectively, demonstrating the ability to generalize over domains.

Computer-aided diagnosis of clinically significant prostate cancer from MRI images using sparse autoencoder and random forest classifier

  • Abraham, Bejoy
  • Nair, Madhu S
Biocybernetics and Biomedical Engineering 2018 Journal Article, cited 0 times
Website

Computer-aided classification of prostate cancer grade groups from MRI images using texture features and stacked sparse autoencoder

  • Abraham, Bejoy
  • Nair, Madhu S
Computerized Medical Imaging and Graphics 2018 Journal Article, cited 1 times
Website

Automated grading of prostate cancer using convolutional neural network and ordinal class classifier

  • Abraham, Bejoy
  • Nair, Madhu S.
Informatics in Medicine Unlocked 2019 Journal Article, cited 0 times
Website
Prostate cancer (PCa) is one of the most prevalent cancers among men. Early diagnosis and treatment planning are significant in reducing the mortality rate due to PCa. Accurate prediction of grade is required to ensure prompt treatment. Grading of prostate cancer can be considered an ordinal class classification problem. This paper presents a novel method for grading prostate cancer from multiparametric magnetic resonance images using a VGG-16 convolutional neural network and an ordinal class classifier with J48 as the base classifier. Multiparametric magnetic resonance images from the PROSTATEx-2 2017 grand challenge dataset are employed for this work. The method achieved a moderate quadratic weighted kappa score of 0.4727 in grading PCa into 5 grade groups, which is higher than state-of-the-art methods. The method also achieved a positive predictive value of 0.9079 in predicting clinically significant prostate cancer.
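The quadratic weighted kappa used to score this kind of ordinal 5-grade prediction penalizes errors by the square of their distance from the true grade. A small illustration on made-up grade labels (not the paper's data), assuming scikit-learn:

```python
# Quadratic weighted kappa for ordinal grade-group predictions (grades 1-5).
# The labels below are hypothetical, for illustration only.
from sklearn.metrics import cohen_kappa_score

true_grades = [1, 2, 2, 3, 4, 5, 3, 1, 5, 4]
pred_grades = [1, 2, 3, 3, 4, 4, 3, 2, 5, 4]

# weights="quadratic" penalizes a grade-2 error four times as much as a grade-1 error
qwk = cohen_kappa_score(true_grades, pred_grades, weights="quadratic")
print(round(qwk, 4))
```

Unweighted kappa would treat a prediction of grade 5 for a true grade 1 the same as grade 2 for grade 1, which is why the quadratic weighting is the conventional choice for ordinal grading tasks.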

Adaptive Enhancement Technique for Cancerous Lung Nodule in Computed Tomography Images

  • AbuBaker, Ayman A
International Journal of Engineering and Technology 2016 Journal Article, cited 1 times
Website

Automated lung tumor detection and diagnosis in CT Scans using texture feature analysis and SVM

  • Adams, Tim
  • Dörpinghaus, Jens
  • Jacobs, Marc
  • Steinhage, Volker
Communication Papers of the Federated Conference on Computer Science and Information Systems 2018 Journal Article, cited 0 times
Website

Defining a Radiomic Response Phenotype: A Pilot Study using targeted therapy in NSCLC

  • Aerts, Hugo JWL
  • Grossmann, Patrick
  • Tan, Yongqiang
  • Oxnard, Geoffrey G
  • Rizvi, Naiyer
  • Schwartz, Lawrence H
  • Zhao, Binsheng
Scientific Reports 2016 Journal Article, cited 40 times
Website
Medical imaging plays a fundamental role in oncology and drug development, by providing a non-invasive method to visualize tumor phenotype. Radiomics can quantify this phenotype comprehensively by applying image-characterization algorithms, and may provide important information beyond tumor size or burden. In this study, we investigated if radiomics can identify a gefitinib response-phenotype, studying high-resolution computed-tomography (CT) imaging of forty-seven patients with early-stage non-small cell lung cancer before and after three weeks of therapy. On the baseline-scan, radiomic-feature Laws-Energy was significantly predictive for EGFR-mutation status (AUC = 0.67, p = 0.03), while volume (AUC = 0.59, p = 0.27) and diameter (AUC = 0.56, p = 0.46) were not. Although no features were predictive on the post-treatment scan (p > 0.08), the change in features between the two scans was strongly predictive (significant feature AUC-range = 0.74-0.91). A technical validation revealed that the associated features were also highly stable for test-retest (mean +/- std: ICC = 0.96 +/- 0.06). This pilot study shows that radiomic data before treatment is able to predict mutation status and associated gefitinib response non-invasively, demonstrating the potential of radiomics-based phenotyping to improve the stratification and response assessment between tyrosine kinase inhibitors (TKIs) sensitive and resistant patient populations.

Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach

  • Aerts, H. J.
  • Velazquez, E. R.
  • Leijenaar, R. T.
  • Parmar, C.
  • Grossmann, P.
  • Cavalho, S.
  • Bussink, J.
  • Monshouwer, R.
  • Haibe-Kains, B.
  • Rietveld, D.
  • Hoebers, F.
  • Rietbergen, M. M.
  • Leemans, C. R.
  • Dekker, A.
  • Quackenbush, J.
  • Gillies, R. J.
  • Lambin, P.
2014 Journal Article, cited 1029 times
Website
Human cancers exhibit strong phenotypic differences that can be visualized noninvasively by medical imaging. Radiomics refers to the comprehensive quantification of tumour phenotypes by applying a large number of quantitative image features. Here we present a radiomic analysis of 440 features quantifying tumour image intensity, shape and texture, which are extracted from computed tomography data of 1,019 patients with lung or head-and-neck cancer. We find that a large number of radiomic features have prognostic power in independent data sets of lung and head-and-neck cancer patients, many of which were not identified as significant before. Radiogenomics analysis reveals that a prognostic radiomic signature, capturing intratumour heterogeneity, is associated with underlying gene-expression patterns. These data suggest that radiomics identifies a general prognostic phenotype existing in both lung and head-and-neck cancer. This may have a clinical impact as imaging is routinely used in clinical practice, providing an unprecedented opportunity to improve decision-support in cancer treatment at low cost.

An Augmentation in the Diagnostic Potency of Breast Cancer through A Deep Learning Cloud-Based AI Framework to Compute Tumor Malignancy & Risk

  • Agarwal, O
International Research Journal of Innovations in Engineering and Technology (IRJIET) 2019 Journal Article, cited 0 times
Website
This research project focuses on developing a web-based multi-platform solution for augmenting prognostic strategies to diagnose breast cancer (BC) from a variety of different tests, including histology, mammography, cytopathology, and fine-needle aspiration cytology, all in an automated fashion. The application utilizes tensor-based data representations and deep learning architectures to produce optimized models for the prediction of novel instances against each of these medical tests. The system has been designed so that all of its computation can be integrated seamlessly into a clinical setting, without disrupting a clinician's productivity or workflow, instead enhancing their capabilities. This software can make the diagnostic process automated, standardized, faster, and even more accurate than current benchmarks achieved by both pathologists and radiologists, which makes it invaluable from a clinical standpoint for making well-informed diagnostic decisions with nominal resources.

Automatic mass detection in mammograms using deep convolutional neural networks

  • Agarwal, Richa
  • Diaz, Oliver
  • Lladó, Xavier
  • Yap, Moi Hoon
  • Martí, Robert
Journal of Medical Imaging 2019 Journal Article, cited 0 times
Website
With recent advances in the field of deep learning, the use of convolutional neural networks (CNNs) in medical imaging has become very encouraging. The aim of our paper is to propose a patch-based CNN method for automated mass detection in full-field digital mammograms (FFDM). In addition to evaluating CNNs pretrained with the ImageNet dataset, we investigate the use of transfer learning for a particular domain adaptation. First, the CNN is trained using a large public database of digitized mammograms (CBIS-DDSM dataset), and then the model is transferred and tested onto the smaller database of digital mammograms (INbreast dataset). We evaluate three widely used CNNs (VGG16, ResNet50, InceptionV3) and show that the InceptionV3 obtains the best performance for classifying the mass and nonmass breast region for CBIS-DDSM. We further show the benefit of domain adaptation between the CBIS-DDSM (digitized) and INbreast (digital) datasets using the InceptionV3 CNN. Mass detection evaluation follows a fivefold cross-validation strategy using free-response operating characteristic curves. Results show that the transfer learning from CBIS-DDSM obtains a substantially higher performance with the best true positive rate (TPR) of 0.98 ± 0.02 at 1.67 false positives per image (FPI), compared with transfer learning from ImageNet with TPR of 0.91 ± 0.07 at 2.1 FPI. In addition, the proposed framework improves upon mass detection results described in the literature on the INbreast database, in terms of both TPR and FPI.

Patient-Wise Versus Nodule-Wise Classification of Annotated Pulmonary Nodules using Pathologically Confirmed Cases

  • Aggarwal, Preeti
  • Vig, Renu
  • Sardana, HK
Journal of Computers 2013 Journal Article, cited 5 times
Website
This paper presents a novel framework for combining well-known shape, texture, size and resolution descriptors of solitary pulmonary nodules (SPNs) detected on CT scans. The proposed methodology evaluates classifier performance in differentiating benign, malignant and metastatic SPNs using 246 chest CT scans. Both patient-wise and nodule-wise diagnostic reports of 80 patients were used to differentiate the SPNs, and the results were compared. For patient-wise data, a model with an efficiency of 62.55% was generated from labeled nodules; using a semi-supervised approach, the labels of the remaining unknown nodules were predicted, and a final classification accuracy of 82.32% was achieved with all nodules labeled. For nodule-wise data, the ground-truth database of labeled nodules was expanded from a very small ground truth using a content-based image retrieval (CBIR) method, achieving a precision of 98%. The proposed methodology not only avoids unnecessary biopsies but also efficiently labels unknown nodules using pre-diagnosed cases, which can help physicians in diagnosis.

Automatic lung segmentation in low-dose chest CT scans using convolutional deep and wide network (CDWN)

  • Agnes, S Akila
  • Anitha, J
  • Peter, J Dinesh
Neural Computing and Applications 2018 Journal Article, cited 0 times
Website

Adaptive Multi-Column Deep Neural Networks with Application to Robust Image Denoising

  • Agostinelli, Forest
  • Anderson, Michael R
  • Lee, Honglak
2013 Conference Proceedings, cited 118 times
Website
Stacked sparse denoising auto-encoders (SSDAs) have recently been shown to be successful at removing noise from corrupted images. However, like most denoising techniques, the SSDA is not robust to variation in noise types beyond what it has seen during training. We present the multi-column stacked sparse denoising autoencoder, a novel technique that combines multiple SSDAs into a multi-column SSDA (MC-SSDA) by merging the outputs of each SSDA. We eliminate the need to determine the type of noise, let alone its statistics, at test time. We show that good denoising performance can be achieved with a single system on a variety of different noise types, including ones not seen in the training set. Additionally, we experimentally demonstrate the efficacy of MC-SSDA denoising by achieving MNIST digit error rates on denoised images close to those of the uncorrupted images.

Tumor Lesion Segmentation from 3D PET Using a Machine Learning Driven Active Surface

  • Ahmadvand, Payam
  • Duggan, Nóirín
  • Bénard, François
  • Hamarneh, Ghassan
2016 Conference Proceedings, cited 4 times
Website

Increased robustness in reference region model analysis of DCE MRI using two‐step constrained approaches

  • Ahmed, Zaki
  • Levesque, Ives R
Magnetic Resonance in Medicine 2016 Journal Article, cited 1 times
Website

An extended reference region model for DCE‐MRI that accounts for plasma volume

  • Ahmed, Zaki
  • Levesque, Ives R
NMR in Biomedicine 2018 Journal Article, cited 0 times
Website

Pharmacokinetic modeling of dynamic contrast-enhanced MRI using a reference region and input function tail

  • Ahmed, Z.
  • Levesque, I. R.
Magn Reson Med 2019 Journal Article, cited 0 times
Website
PURPOSE: Quantitative analysis of dynamic contrast-enhanced MRI (DCE-MRI) requires an arterial input function (AIF) which is difficult to measure. We propose the reference region and input function tail (RRIFT) approach which uses a reference tissue and the washout portion of the AIF. METHODS: RRIFT was evaluated in simulations with 100 parameter combinations at various temporal resolutions (5-30 s) and noise levels (sigma = 0.01-0.05 mM). RRIFT was compared against the extended Tofts model (ETM) in 8 studies from patients with glioblastoma multiforme. Two versions of RRIFT were evaluated: one using measured patient-specific AIF tails, and another assuming a literature-based AIF tail. RESULTS: RRIFT estimated the transfer constant K trans and interstitial volume v e with median errors within 20% across all simulations. RRIFT was more accurate and precise than the ETM at temporal resolutions slower than 10 s. The percentage error of K trans had a median and interquartile range of -9 +/- 45% with the ETM and -2 +/- 17% with RRIFT at a temporal resolution of 30 s under noiseless conditions. RRIFT was in excellent agreement with the ETM in vivo, with concordance correlation coefficients (CCC) of 0.95 for K trans , 0.96 for v e , and 0.73 for the plasma volume v p using a measured AIF tail. With the literature-based AIF tail, the CCC was 0.89 for K trans , 0.93 for v e and 0.78 for v p . CONCLUSIONS: Quantitative DCE-MRI analysis using the input function tail and a reference tissue yields absolute kinetic parameters with the RRIFT method. This approach was viable in simulation and in vivo for temporal resolutions as low as 30 s.
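For context, the extended Tofts model (ETM) against which RRIFT is compared expresses the tissue contrast-agent concentration $C_t(t)$ in terms of the plasma (AIF) concentration $C_p(t)$ and the kinetic parameters named in the abstract. This is the standard formulation, not reproduced verbatim from the paper:

```latex
C_t(t) = v_p\,C_p(t)
       + K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}\,(t-\tau)}\,\mathrm{d}\tau,
\qquad k_{ep} = \frac{K^{\mathrm{trans}}}{v_e}
```

RRIFT's contribution is to avoid measuring the full $C_p(t)$, fitting instead against a reference tissue together with only the washout (tail) portion of the AIF.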

Pharmacokinetic modeling of dynamic contrast‐enhanced MRI using a reference region and input function tail

  • Ahmed, Zaki
  • Levesque, Ives R
Magnetic Resonance in Medicine 2020 Journal Article, cited 0 times
Website

Automated and Manual Quantification of Tumour Cellularity in Digital Slides for Tumour Burden Assessment

  • Akbar, S.
  • Peikari, M.
  • Salama, S.
  • Panah, A. Y.
  • Nofech-Mozes, S.
  • Martel, A. L.
Scientific Reports 2019 Journal Article, cited 3 times
Website
The residual cancer burden index is an important quantitative measure used for assessing treatment response following neoadjuvant therapy for breast cancer. It has been shown to be predictive of overall survival and is composed of two key metrics: qualitative assessment of lymph nodes and the percentage of invasive or in situ tumour cellularity (TC) in the tumour bed (TB). Currently, TC is assessed by eye-balling routine histopathology slides to estimate the proportion of tumour cells within the TB. With the advances in production of digitized slides and increasing availability of slide scanners in pathology laboratories, there is potential to measure TC using automated algorithms with greater precision and accuracy. We describe two methods for automated TC scoring: 1) a traditional approach to image analysis development whereby we mimic the pathologists' workflow, and 2) a recent development in artificial intelligence in which features are learned automatically in deep neural networks using image data alone. We show strong agreement between automated and manual analysis of digital slides. Agreements between our trained deep neural networks and experts in this study (0.82) approach the inter-rater agreements between pathologists (0.89). We also reveal properties that are captured when we apply a deep neural network to whole slide images, and discuss the potential of using such visualisations to improve upon TC assessment in the future.

Map-Reduce based tipping point scheduler for parallel image processing

  • Akhtar, Mohammad Nishat
  • Saleh, Junita Mohamad
  • Awais, Habib
  • Bakar, Elmi Abu
Expert Systems with Applications 2019 Journal Article, cited 0 times
Website
Big Data image processing is in high demand owing to its proven success in business information systems, medical science and social media. However, the computation of Big Data images is becoming increasingly complex, which ultimately results in complex resource management and higher task execution times. Researchers have used a combination of CPU- and GPU-based computing to cut down execution time; however, when it comes to scaling compute nodes, combined CPU/GPU computing remains a challenge due to the high communication cost. To tackle this issue, the Map-Reduce framework has emerged as a viable option, since its workflow optimization can be enhanced by changing its underlying job scheduling mechanism. This paper presents a comparative study of job scheduling algorithms that can be deployed over various Big Data based image processing applications, and proposes a tipping point scheduling algorithm to optimize the workflow for job execution on multiple nodes. The proposed scheduling algorithm is evaluated by implementing a parallel image segmentation algorithm to detect lung tumors on image datasets of up to 3 GB. In terms of performance, comprising task execution time and throughput, the proposed tipping point scheduler came out best, followed by the Map-Reduce based Fair scheduler. The proposed tipping point scheduler is 1.14 times better than the Map-Reduce based Fair scheduler and 1.33 times better than the Map-Reduce based FIFO scheduler in terms of task execution time and throughput. In terms of speedup between single and multiple nodes, the proposed tipping point scheduler attained a speedup of 4.5X for the multi-node architecture. Keywords: Job scheduler; Workflow optimization; Map-Reduce; Tipping point scheduler; Parallel image segmentation; Lung tumor

A review of lung cancer screening and the role of computer-aided detection

  • Al Mohammad, B
  • Brennan, PC
  • Mello-Thoms, C
Clinical Radiology 2017 Journal Article, cited 23 times
Website

Radiologist performance in the detection of lung cancer using CT

  • Al Mohammad, B
  • Hillis, SL
  • Reed, W
  • Alakhras, M
  • Brennan, PC
Clinical Radiology 2019 Journal Article, cited 2 times
Website

Breast Cancer Diagnostic System Based on MR images Using KPCA-Wavelet Transform and Support Vector Machine

  • AL-Dabagh, Mustafa Zuhaer
  • AL-Mukhtar, Firas H
IJAERS 2017 Journal Article, cited 0 times
Website

A Novel Approach to Improving Brain Image Classification Using Mutual Information-Accelerated Singular Value Decomposition

  • Al-Saffar, Zahraa A
  • Yildirim, Tülay
IEEE Access 2020 Journal Article, cited 0 times
Website

Quantitative assessment of colorectal morphology: Implications for robotic colonoscopy

  • Alazmani, A
  • Hood, A
  • Jayne, D
  • Neville, A
  • Culmer, P
Medical engineering & physics 2016 Journal Article, cited 11 times
Website
This paper presents a method of characterizing the distribution of colorectal morphometrics. It uses three-dimensional region growing and topological thinning algorithms to determine and visualize the luminal volume and centreline of the colon, respectively. Total and segmental lengths, diameters, volumes, and tortuosity angles were then quantified. The effects of body orientations on these parameters were also examined. Variations in total length were predominately due to differences in the transverse colon and sigmoid segments, and did not significantly differ between body orientations. The diameter of the proximal colon was significantly larger than the distal colon, with the largest value at the ascending and cecum segments. The volume of the transverse colon was significantly the largest, while those of the descending colon and rectum were the smallest. The prone position showed a higher frequency of high angles and was consequently found to be more tortuous than the supine position. This study yielded a method for complete segmental measurements of healthy colorectal anatomy and its tortuosity. The transverse and sigmoid colons were the major determinants of tortuosity and morphometrics between body orientations. Quantitative understanding of these parameters may potentially help to facilitate colonoscopy techniques, accuracy of polyp spatial distribution detection, and design of novel endoscopic devices.

Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing

  • AlBadawy, E. A.
  • Saha, A.
  • Mazurowski, M. A.
Med Phys 2018 Journal Article, cited 5 times
Website
BACKGROUND AND PURPOSE: Convolutional neural networks (CNNs) are commonly used for segmentation of brain tumors. In this work, we assess the effect of cross-institutional training on the performance of CNNs. METHODS: We selected 44 glioblastoma (GBM) patients from two institutions in The Cancer Imaging Archive dataset. The images were manually annotated by outlining each tumor component to form ground truth. To automatically segment the tumors in each patient, we trained three CNNs: (a) one using data for patients from the same institution as the test data, (b) one using data for the patients from the other institution and (c) one using data for the patients from both of the institutions. The performance of the trained models was evaluated using Dice similarity coefficients as well as Average Hausdorff Distance between the ground truth and automatic segmentations. The 10-fold cross-validation scheme was used to compare the performance of different approaches. RESULTS: Performance of the model significantly decreased (P < 0.0001) when it was trained on data from a different institution (dice coefficients: 0.68 +/- 0.19 and 0.59 +/- 0.19) as compared to training with data from the same institution (dice coefficients: 0.72 +/- 0.17 and 0.76 +/- 0.12). This trend persisted for segmentation of the entire tumor as well as its individual components. CONCLUSIONS: There is a very strong effect of selecting data for training on performance of CNNs in a multi-institutional setting. Determination of the reasons behind this effect requires additional comprehensive investigation.
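The Dice similarity coefficient used above to compare automatic and ground-truth segmentations is simple to compute from binary masks. A minimal sketch on toy 2-D masks (the study itself works on 3-D tumor-component masks):

```python
# Dice similarity coefficient for binary segmentation masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for boolean masks a and b."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: a 4x4 "tumor" versus a prediction shifted by one voxel.
gt   = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 3:7] = True
print(round(dice(gt, pred), 3))  # overlap 9 voxels of 16+16 -> 0.562
```

The cross-institutional drop reported above (e.g. 0.72 to 0.68) is a difference in exactly this quantity, averaged over test patients.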

Self-organizing Approach to Learn a Level-set Function for Object Segmentation in Complex Background Environments

  • Albalooshi, Fatema A
2015 Thesis, cited 0 times
Website

Multi-modal Multi-temporal Brain Tumor Segmentation, Growth Analysis and Texture-based Classification

  • Alberts, Esther
2019 Thesis, cited 0 times
Website
Brain tumor analysis is an active field of research, which has received a lot of attention from both the medical and the technical communities in the past decades. The purpose of this thesis is to investigate brain tumor segmentation, growth analysis and tumor classification based on multi-modal magnetic resonance (MR) image datasets of low- and high-grade glioma making use of computer vision and machine learning methodologies. Brain tumor segmentation involves the delineation of tumorous structures, such as edema, active tumor and necrotic tumor core, and healthy brain tissues, often categorized in gray matter, white matter and cerebro-spinal fluid. Deep learning frameworks have proven to be among the most accurate brain tumor segmentation techniques, performing particularly well when large accurately annotated image datasets are available. A first project is designed to build a more flexible model, which allows for intuitive semi-automated user-interaction, is less dependent on training data, and can handle missing MR modalities. The framework is based on a Bayesian network with hidden variables optimized by the expectation-maximization algorithm, and is tailored to handle non-Gaussian multivariate distributions using the concept of Gaussian copulas. To generate reliable priors for the generative probabilistic model and to spatially regularize the segmentation results, it is extended with an initialization and a post-processing module, both based on supervoxels classified by random forests. Brain tumor segmentation allows to assess tumor volumetry over time, which is important to identify disease progression (tumor regrowth) after therapy. In a second project, a dataset of temporal MR sequences is analyzed. To that end, brain tumor segmentation and brain tumor growth assessment are unified within a single framework using a conditional random field (CRF). 
The CRF extends over the temporal patient datasets and includes directed links with infinite weight in order to incorporate growth or shrinkage constraints. The model is shown to obtain temporally coherent tumor segmentation and aids in estimating the likelihood of disease progression after therapy. Recent studies classify brain tumors based on their genotypic parameters, which are reported to have an important impact on the prognosis and the therapy of patients. A third project is aimed to investigate whether the genetic profile of glioma can be predicted based on the MR images only, which would eliminate the need to take biopsies. A multi-modal medical image classification framework is built, classifying glioma in three genetic classes based on DNA methylation status. The framework makes use of short local image descriptors as well as deep-learned features acquired by denoising auto-encoders to generate meaningful image features. The framework is successfully validated and shown to obtain high accuracies even though the same image-based classification task is hardly possible for medical experts.

Automatic intensity windowing of mammographic images based on a perceptual metric

  • Albiol, Alberto
  • Corbi, Alberto
  • Albiol, Francisco
Medical physics 2017 Journal Article, cited 0 times
Website

Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network

  • Aldoj, Nader
  • Lukas, Steffen
  • Dewey, Marc
  • Penzkofer, Tobias
Eur Radiol 2019 Journal Article, cited 1 times
Website
Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network

  • Aldoj, Nader
  • Lukas, Steffen
  • Dewey, Marc
  • Penzkofer, Tobias
European Radiology 2020 Journal Article, cited 1 times
Website
OBJECTIVE: To present a deep learning-based approach for semi-automatic prostate cancer classification based on multi-parametric magnetic resonance (MR) imaging using a 3D convolutional neural network (CNN). METHODS: Two hundred patients with a total of 318 lesions for which histological correlation was available were analyzed. A novel CNN was designed, trained, and validated using different combinations of distinct MRI sequences as input (e.g., T2-weighted, apparent diffusion coefficient (ADC), diffusion-weighted images, and K-trans), and the effect of different sequences on the network's performance was tested and discussed. The particular choice of modeling approach was justified by testing all relevant data combinations. The model was trained and validated using eightfold cross-validation. RESULTS: In terms of detection of significant prostate cancer, with biopsy results as the reference standard, the 3D CNN achieved an area under the curve (AUC) of the receiver operating characteristic ranging from 0.89 (88.6% sensitivity and 90.0% specificity) to 0.91 (81.2% sensitivity and 90.5% specificity), with an average AUC of 0.897 for the ADC, DWI, and K-trans input combination. The other combinations scored lower in terms of overall performance and average AUC; the difference in performance was significant, with a p value of 0.02 when using T2w and K-trans, and 0.00025 when using T2w, ADC, and DWI. Prostate cancer classification performance is thus comparable to that reported for experienced radiologists using the prostate imaging reporting and data system (PI-RADS). Lesion size and largest diameter had no effect on the network's performance. CONCLUSION: The diagnostic performance of the 3D CNN in detecting clinically significant prostate cancer is characterized by a good AUC and sensitivity and high specificity.
KEY POINTS: * Prostate cancer classification using a deep learning model is feasible, and it allows direct processing of MR sequences without prior lesion segmentation. * Prostate cancer classification performance as measured by AUC is comparable to that of an experienced radiologist. * Perfusion MR images (K-trans), followed by DWI and ADC, have the highest effect on the overall performance, whereas T2w images show hardly any improvement.

Radiogenomics in renal cell carcinoma

  • Alessandrino, Francesco
  • Shinagare, Atul B
  • Bossé, Dominick
  • Choueiri, Toni K
  • Krajewski, Katherine M
Abdominal Radiology 2018 Journal Article, cited 0 times
Website

Robust Detection of Circles in the Vessel Contours and Application to Local Probability Density Estimation

  • Alvarez, Luis
  • González, Esther
  • Esclarín, Julio
  • Gomez, Luis
  • Alemán-Flores, Miguel
  • Trujillo, Agustín
  • Cuenca, Carmelo
  • Mazorra, Luis
  • Tahoces, Pablo G
  • Carreira, José M
2017 Book Section, cited 3 times
Website

Transferable HMM probability matrices in multi‐orientation geometric medical volumes segmentation

  • AlZu'bi, Shadi
  • AlQatawneh, Sokyna
  • ElBes, Mohammad
  • Alsmirat, Mohammad
Concurrency and Computation: Practice and Experience 2019 Journal Article, cited 0 times
Website
Acceptable error rate, low quality assessment, and time complexity are the major problems in image segmentation that need to be addressed. A variety of acceleration techniques have been applied and achieve real-time results, but are still limited in 3D. HMM is one of the best statistical techniques and has played a significant role recently. The problem associated with HMM is time complexity, which has been resolved using different accelerators. In this research, we propose a methodology for transferring HMM matrices from one image to another, skipping the training time for the rest of the 3D volume. One HMM is trained and generalized to the whole volume. The concepts behind multi-orientation geometrical segmentation have been employed here to improve the quality of HMM segmentation. Axial, sagittal, and coronal orientations have been considered individually and together to achieve accurate segmentation results in less processing time, with superior quality in detection accuracy.

Imaging Biomarker Ontology (IBO): A Biomedical Ontology to Annotate and Share Imaging Biomarker Data

  • Amdouni, Emna
  • Gibaud, Bernard
Journal on Data Semantics 2018 Journal Article, cited 0 times
Website

Hybrid Mass Detection in Breast MRI Combining Unsupervised Saliency Analysis and Deep Learning

  • Amit, Guy
  • Hadad, Omer
  • Alpert, Sharon
  • Tlusty, Tal
  • Gur, Yaniv
  • Ben-Ari, Rami
  • Hashoul, Sharbell
2017 Conference Paper, cited 15 times
Website
To interpret a breast MRI study, a radiologist has to examine over 1000 images, and integrate spatial and temporal information from multiple sequences. The automated detection and classification of suspicious lesions can help reduce the workload and improve accuracy. We describe a hybrid mass-detection algorithm that combines unsupervised candidate detection with deep learning-based classification. The detection algorithm first identifies image-salient regions, as well as regions that are cross-salient with respect to the contralateral breast image. We then use a convolutional neural network (CNN) to classify the detected candidates into true-positive and false-positive masses. The network uses a novel multi-channel image representation; this representation encompasses information from the anatomical and kinetic image features, as well as saliency maps. We evaluated our algorithm on a dataset of MRI studies from 171 patients, with 1957 annotated slices of malignant (59%) and benign (41%) masses. Unsupervised saliency-based detection provided a sensitivity of 0.96 with 9.7 false-positive detections per slice. Combined with CNN classification, the number of false positive detections dropped to 0.7 per slice, with 0.85 sensitivity. The multi-channel representation achieved higher classification performance compared to single-channel images. The combination of domain-specific unsupervised methods and general-purpose supervised learning offers advantages for medical imaging applications, and may improve the ability of automated algorithms to assist radiologists.

Breast Cancer Response Prediction in Neoadjuvant Chemotherapy Treatment Based on Texture Analysis

  • Ammar, Mohammed
  • Mahmoudi, Saïd
  • Stylianos, Drisis
Procedia Computer Science 2016 Journal Article, cited 2 times
Website

Medical Image Classification Algorithm Based on Weight Initialization-Sliding Window Fusion Convolutional Neural Network

  • An, Feng-Ping
Complexity 2019 Journal Article, cited 0 times
Website
Due to the complexity of medical images, traditional medical image classification methods have been unable to meet actual application needs. In recent years, the rapid development of deep learning theory has provided a technical approach for solving medical image classification tasks. However, deep learning has the following problems in medical image classification. First, it is impossible to construct a deep learning model hierarchy for medical image properties; second, the network initialization weights of deep learning models are not well optimized. Therefore, this paper starts from the perspective of network optimization and improves the nonlinear modeling ability of the network through optimization methods. A new network weight initialization method is proposed, which alleviates the problem that existing deep learning model initialization is limited by the type of the nonlinear unit adopted and increases the potential of the neural network to handle different visual tasks. Moreover, through an in-depth study of the multicolumn convolutional neural network framework, this paper finds that the number of features and the convolution kernel size at different levels of the convolutional neural network are different. In contrast, the proposed method can construct different convolutional neural network models that adapt better to the characteristics of the medical images of interest and thus can better train the resulting heterogeneous multicolumn convolutional neural networks. Finally, using the adaptive sliding window fusion mechanism proposed in this paper, both methods jointly complete the classification task of medical images. Based on the above ideas, this paper proposes a medical classification algorithm based on a weight initialization/sliding window fusion for multilevel convolutional neural networks. The methods proposed in this study were applied to breast mass, brain tumor tissue, and medical image database classification experiments. The results show that the proposed method not only achieves a higher average accuracy than that of traditional machine learning and other deep learning methods but also is more stable and more robust.

Application of Fuzzy c-means and Neural networks to categorize tumor affected breast MR Images

  • Anand, Shruthi
  • Vinod, Viji
  • Rampure, Anand
International Journal of Applied Engineering Research 2015 Journal Article, cited 4 times
Website

Imaging Genomics in Glioblastoma Multiforme: A Predictive Tool for Patients Prognosis, Survival, and Outcome

  • Anil, Rahul
  • Colen, Rivka R
Magnetic Resonance Imaging Clinics of North America 2016 Journal Article, cited 3 times
Website
The integration of imaging characteristics and genomic data has started a new trend in approach toward management of glioblastoma (GBM). Many ongoing studies are investigating imaging phenotypical signatures that could explain more about the behavior of GBM and its outcome. The discovery of biomarkers has played an adjuvant role in treating and predicting the outcome of patients with GBM. Discovering these imaging phenotypical signatures and dysregulated pathways/genes is needed and required to engineer treatment based on specific GBM manifestations. Characterizing these parameters will establish well-defined criteria so researchers can build on the treatment of GBM through personal medicine.

Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data

  • Anirudh, Rushil
  • Thiagarajan, Jayaraman J
  • Bremer, Timo
  • Kim, Hyojin
2016 Conference Proceedings, cited 33 times
Website

Brain tumour classification using two-tier classifier with adaptive segmentation technique

  • Anitha, V
  • Murugavalli, S
IET Computer Vision 2016 Journal Article, cited 46 times
Website
A brain tumour is a mass of tissue that is structured by a gradual addition of anomalous cells, and it is important to classify brain tumours from magnetic resonance imaging (MRI) for treatment. Human investigation is the routine technique for brain MRI tumour detection and tumour classification. Interpretation of images is based on organised and explicit classification of brain MRI, and various techniques have been proposed. Information related to anatomical structures and potential abnormal tissues that are noteworthy to treat is given by brain tumour segmentation on MRI. The proposed system uses the adaptive pillar K-means algorithm for successful segmentation, and the classification methodology is done by a two-tier classification approach. In the proposed system, first the self-organising map neural network trains the features extracted from the discrete wavelet transform blend wavelets, and the resultant filter factors are consequently trained by the K-nearest neighbour; the testing process is also accomplished in two stages. The proposed two-tier classification system classifies the brain tumours in a double training process, which gives preferable performance over the traditional classification method. The proposed system has been validated with the support of real data sets, and the experimental results showed enhanced performance.

Classification of lung adenocarcinoma transcriptome subtypes from pathological images using deep convolutional networks

  • Antonio, Victor Andrew A
  • Ono, Naoaki
  • Saito, Akira
  • Sato, Tetsuo
  • Altaf-Ul-Amin, Md
  • Kanaya, Shigehiko
International journal of computer assisted radiology and surgery 2018 Journal Article, cited 0 times
Website

Fast wavelet based image characterization for content based medical image retrieval

  • Anwar, Syed Muhammad
  • Arshad, Fozia
  • Majid, Muhammad
2017 Conference Proceedings, cited 4 times
Website
A large collection of medical images surrounds health care centers and hospitals. Medical images produced by different modalities like magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and X-rays have increased incredibly with the advent of latest technologies for image acquisition. Retrieving clinical images of interest from these large data sets is a thought-provoking and demanding task. In this paper, a fast wavelet based medical image retrieval system is proposed that can aid physicians in the identification or analysis of medical images. The image signature is calculated using kurtosis and standard deviation as features. A possible use case is when the radiologist has some suspicion on diagnosis and wants further case histories, the acquired clinical images are sent (e.g. MRI images of brain) as a query to the content based medical image retrieval system. The system is tuned to retrieve the top most relevant images to the query. The proposed system is computationally efficient and more accurate in terms of the quality of retrieved images.
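The signature described in this abstract pairs standard deviation with kurtosis computed over a wavelet decomposition. A rough pure-Python illustration follows; the single Haar level, subband choice, and population-moment kurtosis are assumptions for the sketch, not the authors' exact pipeline:

```python
import statistics

def haar_subbands(img):
    """Single-level 2D Haar decomposition of a 2D list with even
    dimensions; returns the LL, LH, HL, HH subbands."""
    h, w = len(img), len(img[0])
    lo = lambda a, b: (a + b) / 2.0   # averaging (low-pass) filter
    hi = lambda a, b: (a - b) / 2.0   # differencing (high-pass) filter
    def band(frow, fcol):
        return [[fcol(frow(img[2*i][2*j],   img[2*i][2*j+1]),
                      frow(img[2*i+1][2*j], img[2*i+1][2*j+1]))
                 for j in range(w // 2)] for i in range(h // 2)]
    return band(lo, lo), band(lo, hi), band(hi, lo), band(hi, hi)

def wavelet_signature(img):
    """Feature vector of (standard deviation, kurtosis) per subband."""
    feats = []
    for band in haar_subbands(img):
        vals = [v for row in band for v in row]
        mu = statistics.fmean(vals)
        sd = statistics.pstdev(vals)
        m4 = sum((v - mu) ** 4 for v in vals) / len(vals)
        feats += [sd, m4 / sd ** 4 if sd else 0.0]  # fourth standardized moment
    return feats
```

A query image's signature would then be compared against stored signatures (e.g., by Euclidean distance) to rank retrieval candidates.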

End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography

  • Ardila, D.
  • Kiraly, A. P.
  • Bharadwaj, S.
  • Choi, B.
  • Reicher, J. J.
  • Peng, L.
  • Tse, D.
  • Etemadi, M.
  • Ye, W.
  • Corrado, G.
  • Naidich, D. P.
  • Shetty, S.
Nat Med 2019 Journal Article, cited 1 times
Website
With an estimated 160,000 deaths in 2018, lung cancer is the most common cause of cancer death in the United States(1). Lung cancer screening using low-dose computed tomography has been shown to reduce mortality by 20-43% and is now included in US screening guidelines(1-6). Existing challenges include inter-grader variability and high false-positive and false-negative rates(7-10). We propose a deep learning algorithm that uses a patient's current and prior computed tomography volumes to predict the risk of lung cancer. Our model achieves a state-of-the-art performance (94.4% area under the curve) on 6,716 National Lung Cancer Screening Trial cases, and performs similarly on an independent clinical validation set of 1,139 cases. We conducted two reader studies. When prior computed tomography imaging was not available, our model outperformed all six radiologists with absolute reductions of 11% in false positives and 5% in false negatives. Where prior computed tomography imaging was available, the model performance was on-par with the same radiologists. This creates an opportunity to optimize the screening process via computer assistance and automation. While the vast majority of patients remain unscreened, we show the potential for deep learning models to increase the accuracy, consistency and adoption of lung cancer screening worldwide.

Potentials of radiomics for cancer diagnosis and treatment in comparison with computer-aided diagnosis

  • Arimura, Hidetaka
  • Soufi, Mazen
  • Ninomiya, Kenta
  • Kamezawa, Hidemi
  • Yamada, Masahiro
Radiological Physics and Technology 2018 Journal Article, cited 0 times
Website
Computer-aided diagnosis (CAD) is a field that is essentially based on pattern recognition that improves the accuracy of a diagnosis made by a physician who takes into account the computer’s “opinion” derived from the quantitative analysis of radiological images. Radiomics is a field based on data science that massively and comprehensively analyzes a large number of medical images to extract a large number of phenotypic features reflecting disease traits, and explores the associations between the features and patients’ prognoses for precision medicine. According to the definitions for both, you may think that radiomics is not a paraphrase of CAD, but you may also think that these definitions are “image manipulation”. However, there are common and different features between the two fields. This review paper elaborates on these common and different features and introduces the potential of radiomics for cancer diagnosis and treatment by comparing it with CAD.

The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans

  • Armato III, Samuel G
  • McLennan, Geoffrey
  • Bidaut, Luc
  • McNitt-Gray, Michael F
  • Meyer, Charles R
  • Reeves, Anthony P
  • Zhao, Binsheng
  • Aberle, Denise R
  • Henschke, Claudia I
  • Hoffman, Eric A
Medical physics 2011 Journal Article, cited 546 times
Website

The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans

  • Armato III, Samuel G
  • McLennan, Geoffrey
  • Bidaut, Luc
  • McNitt-Gray, Michael F
  • Meyer, Charles R
  • Reeves, Anthony P
  • Zhao, Binsheng
  • Aberle, Denise R
  • Henschke, Claudia I
  • Hoffman, Eric A
  • Kazerooni, E. A.
  • MacMahon, H.
  • Van Beeke, E. J.
  • Yankelevitz, D.
  • Biancardi, A. M.
  • Bland, P. H.
  • Brown, M. S.
  • Engelmann, R. M.
  • Laderach, G. E.
  • Max, D.
  • Pais, R. C.
  • Qing, D. P.
  • Roberts, R. Y.
  • Smith, A. R.
  • Starkey, A.
  • Batrah, P.
  • Caligiuri, P.
  • Farooqi, A.
  • Gladish, G. W.
  • Jude, C. M.
  • Munden, R. F.
  • Petkovska, I.
  • Quint, L. E.
  • Schwartz, L. H.
  • Sundaram, B.
  • Dodd, L. E.
  • Fenimore, C.
  • Gur, D.
  • Petrick, N.
  • Freymann, J.
  • Kirby, J.
  • Hughes, B.
  • Casteele, A. V.
  • Gupte, S.
  • Sallamm, M.
  • Heath, M. D.
  • Kuhn, M. H.
  • Dharaiya, E.
  • Burns, R.
  • Fryd, D. S.
  • Salganicoff, M.
  • Anand, V.
  • Shreter, U.
  • Vastagh, S.
  • Croft, B. Y.
Medical physics 2011 Journal Article, cited 546 times
Website
PURPOSE: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. METHODS: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. RESULTS: The Database contains 7371 lesions marked "nodule" by at least one radiologist. Of these, 2669 lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings. CONCLUSIONS: The LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.

Collaborative projects

  • Armato, S
  • McNitt-Gray, M
  • Meyer, C
  • Reeves, A
  • Clarke, L
Int J CARS 2012 Journal Article, cited 307 times
Website

Special Section Guest Editorial: LUNGx Challenge for computerized lung nodule classification: reflections and lessons learned

  • Armato, Samuel G
  • Hadjiiski, Lubomir
  • Tourassi, Georgia D
  • Drukker, Karen
  • Giger, Maryellen L
  • Li, Feng
  • Redmond, George
  • Farahani, Keyvan
  • Kirby, Justin S
  • Clarke, Laurence P
Journal of Medical Imaging 2015 Journal Article, cited 20 times
Website
The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants' computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. Ten groups applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists' AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. The continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community.

Discovery of pre-therapy 2-deoxy-2-18 F-fluoro-D-glucose positron emission tomography-based radiomics classifiers of survival outcome in non-small-cell lung cancer patients

  • Arshad, Mubarik A
  • Thornton, Andrew
  • Lu, Haonan
  • Tam, Henry
  • Wallitt, Kathryn
  • Rodgers, Nicola
  • Scarsbrook, Andrew
  • McDermott, Garry
  • Cook, Gary J
  • Landau, David
European journal of nuclear medicine and molecular imaging 2018 Journal Article, cited 0 times
Website

Morphological and multi-level geometrical descriptor analysis in CT and MRI volumes for automatic pancreas segmentation

  • Asaturyan, Hykoush
  • Gligorievski, Antonio
  • Villarini, Barbara
Computerized Medical Imaging and Graphics 2019 Journal Article, cited 3 times
Website
Automatic pancreas segmentation in 3D radiological scans is a critical, yet challenging task. As a prerequisite for computer-aided diagnosis (CADx) systems, accurate pancreas segmentation could generate both quantitative and qualitative information towards establishing the severity of a condition, and thus provide additional guidance for therapy planning. Since the pancreas is an organ of high inter-patient anatomical variability, previous segmentation approaches report lower quantitative accuracy scores in comparison to abdominal organs such as the liver or kidneys. This paper presents a novel approach for automatic pancreas segmentation in magnetic resonance imaging (MRI) and computed tomography (CT) scans. This method exploits 3D segmentation that, when coupled with geometrical and morphological characteristics of abdominal tissue, classifies distinct contours in tight pixel-range proximity as "pancreas" or "non-pancreas". There are three main stages to this approach: (1) identify a major pancreas region and apply contrast enhancement to differentiate between pancreatic and surrounding tissue; (2) perform 3D segmentation via a continuous max-flow and min-cut approach, structured forest edge detection, and a training dataset of annotated pancreata; (3) eliminate non-pancreatic contours from the resultant segmentation via morphological operations on area, structure and connectivity between distinct contours. The proposed method is evaluated on a dataset containing 82 CT image volumes, achieving a mean Dice Similarity coefficient (DSC) of 79.3 ± 4.4%. Two MRI datasets containing 216 and 132 image volumes are evaluated, achieving mean DSC of 79.6 ± 5.7% and 81.6 ± 5.1% respectively. This approach is statistically stable, reflected by lower metrics in standard deviation in comparison to state-of-the-art approaches.

Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method.

  • Astaraki, Mehdi
  • Wang, Chunliang
  • Buizza, Giulia
  • Toma-Dasu, Iuliana
  • Lazzeroni, Marta
  • Smedby, Orjan
Physica Medica 2019 Journal Article, cited 0 times
Website
PURPOSE: To explore prognostic and predictive values of a novel quantitative feature set describing intra-tumor heterogeneity in patients with lung cancer treated with concurrent and sequential chemoradiotherapy. METHODS: Longitudinal PET-CT images of 30 patients with non-small cell lung cancer were analysed. To describe tumor cell heterogeneity, the tumors were partitioned into one to ten concentric regions depending on their sizes, and, for each region, the change in average intensity between the two scans was calculated for PET and CT images separately to form the proposed feature set. To validate the prognostic value of the proposed method, radiomics analysis was performed and a combination of the proposed novel feature set and the classic radiomic features was evaluated. A feature selection algorithm was utilized to identify the optimal features, and a linear support vector machine was trained for the task of overall survival prediction in terms of area under the receiver operating characteristic curve (AUROC). RESULTS: The proposed novel feature set was found to be prognostic and even outperformed the radiomics approach with a significant difference (AUROC = 0.90 for the proposed SALoP features vs. 0.71 for radiomics) when feature selection was not employed, whereas with feature selection, a combination of the novel feature set and radiomics led to the highest prognostic values. CONCLUSION: A novel feature set designed for capturing intra-tumor heterogeneity was introduced. Judging by their prognostic power, the proposed features have a promising potential for early survival prediction.
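The partitioning idea above — concentric regions from rim to core, each contributing the change in mean intensity between two scans — can be sketched on a toy 2D grid. This is a simplified reading of the abstract (the distance-based shell construction and binning are assumptions), not the authors' implementation:

```python
import math

def concentric_shell_deltas(mask, img1, img2, n_shells):
    """Split a 2D tumor mask into concentric shells by each tumor pixel's
    distance to the nearest background pixel (shell 0 = outer rim), then
    return the change in mean intensity (img2 - img1) within each shell."""
    h, w = len(mask), len(mask[0])
    tumor = [(r, c) for r in range(h) for c in range(w) if mask[r][c]]
    bg = [(r, c) for r in range(h) for c in range(w) if not mask[r][c]]
    # Brute-force distance to the tumor boundary (fine for a toy grid)
    dist = {p: min(math.hypot(p[0] - b[0], p[1] - b[1]) for b in bg)
            for p in tumor}
    dmin, dmax = min(dist.values()), max(dist.values())
    span = (dmax - dmin) or 1.0
    shells = [[] for _ in range(n_shells)]
    for p, d in dist.items():
        idx = min(int((d - dmin) / span * n_shells), n_shells - 1)
        shells[idx].append(p)
    # Mean intensity change per shell; empty shells contribute 0.0
    return [sum(img2[r][c] - img1[r][c] for r, c in s) / len(s) if s else 0.0
            for s in shells]
```

On real volumes the same idea would use a distance transform and 3D masks; the per-shell deltas then form the feature vector fed to the survival classifier.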

Computer Aided Detection Scheme To Improve The Prognosis Assessment Of Early Stage Lung Cancer Patients

  • Athira, KV
  • Nithin, SS
Computer 2018 Journal Article, cited 0 times
Website

Radiogenomics of clear cell renal cell carcinoma: preliminary findings of The Cancer Genome Atlas–Renal Cell Carcinoma (TCGA–RCC) Imaging Research Group

  • Atul, B
Abdominal imaging 2015 Journal Article, cited 47 times
Website

Analysis of dual tree M‐band wavelet transform based features for brain image classification

  • Ayalapogu, Ratna Raju
  • Pabboju, Suresh
  • Ramisetty, Rajeswara Rao
Magnetic Resonance in Medicine 2018 Journal Article, cited 1 times
Website

Analysis of Classification Methods for Diagnosis of Pulmonary Nodules in CT Images

  • Baboo, Capt Dr S Santhosh
  • Iyyapparaj, E
IOSR Journal of Electrical and Electronics Engineering 2017 Journal Article, cited 0 times
Website
The main aim of this work is to propose a novel computer-aided detection (CAD) system, based on contextual clustering combined with region growing, for assisting radiologists in the early identification of lung cancer from computed tomography (CT) scans. Instead of using a conventional thresholding approach, the proposed work uses contextual clustering, which yields a more accurate segmentation of the lungs from the chest volume. Following segmentation, GLCM features are extracted and then classified using three different classifiers, namely random forest, SVM, and k-NN.
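The GLCM texture features named in the abstract are standard; a minimal pure-Python sketch of one co-occurrence matrix and two Haralick-style descriptors (the single horizontal offset and the feature choice here are illustrative, not the paper's full setup):

```python
def glcm(img, levels, dr=0, dc=1):
    """Normalized gray-level co-occurrence matrix for one pixel offset
    (dr, dc); img is a 2D list of integer gray levels in [0, levels)."""
    counts = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    total = 0
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < h and 0 <= c2 < w:
                counts[img[r][c]][img[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in counts]

def glcm_features(p):
    """Two classic Haralick-style texture features from a normalized GLCM."""
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(p[i][j] ** 2 for i in range(n) for j in range(n))
    return contrast, energy
```

Real CAD pipelines typically quantize intensities and average several offsets and angles; scikit-image's `graycomatrix`/`graycoprops` provide an off-the-shelf equivalent.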

Detection of Brain Tumour in MRI Scan Images using Tetrolet Transform and SVM Classifier

  • Babu, B Shoban
  • Varadarajan, S
Indian Journal of Science and Technology 2017 Journal Article, cited 1 times
Website

BIOMEDICAL IMAGE RETRIEVAL USING LBWP

  • Babu, Joyce Sarah
  • Mathew, Soumya
  • Simon, Rini
International Research Journal of Engineering and Technology 2017 Journal Article, cited 0 times
Website

Virtual clinical trial for task-based evaluation of a deep learning synthetic mammography algorithm

  • Badal, Andreu
  • Cha, Kenny H.
  • Divel, Sarah E.
  • Graff, Christian G.
  • Zeng, Rongping
  • Badano, Aldo
2019 Conference Proceedings, cited 0 times
Website
Image processing algorithms based on deep learning techniques are being developed for a wide range of medical applications. Processed medical images are typically evaluated with the same kind of image similarity metrics used for natural scenes, disregarding the medical task for which the images are intended. We propose a computational framework to estimate the clinical performance of image processing algorithms using virtual clinical trials. The proposed framework may provide an alternative method for regulatory evaluation of non-linear image processing algorithms. To illustrate this application of virtual clinical trials, we evaluated three algorithms to compute synthetic mammograms from digital breast tomosynthesis (DBT) scans based on convolutional neural networks previously used for denoising low dose computed tomography scans. The inputs to the networks were one or more noisy DBT projections, and the networks were trained to minimize the difference between the output and the corresponding high dose mammogram. DBT and mammography images simulated with the Monte Carlo code MC-GPU using realistic breast phantoms were used for network training and validation. The denoising algorithms were tested in a virtual clinical trial by generating 3000 synthetic mammograms from the public VICTRE dataset of simulated DBT scans. The detectability of a calcification cluster and a spiculated mass present in the images was calculated using an ensemble of 30 computational channelized Hotelling observers. The signal detectability results, which took into account anatomic and image reader variability, showed that the visibility of the mass was not affected by the post-processing algorithm, but that the resulting slight blurring of the images severely impacted the visibility of the calcification cluster. The evaluation of the algorithms using the pixel-based metrics peak signal to noise ratio and structural similarity in image patches was not able to predict the reduction in performance in the detectability of calcifications. These two metrics are computed over the whole image and do not consider any particular task, and might not be adequate to estimate the diagnostic performance of the post-processed images.

Technical and Clinical Factors Affecting Success Rate of a Deep Learning Method for Pancreas Segmentation on CT

  • Bagheri, Mohammad Hadi
  • Roth, Holger
  • Kovacs, William
  • Yao, Jianhua
  • Farhadi, Faraz
  • Li, Xiaobai
  • Summers, Ronald M
Acad Radiol 2019 Journal Article, cited 0 times
Website
PURPOSE: Accurate pancreas segmentation has application in surgical planning, assessment of diabetes, and detection and analysis of pancreatic tumors. Factors that affect pancreas segmentation accuracy have not been previously reported. The purpose of this study is to identify technical and clinical factors that adversely affect the accuracy of pancreas segmentation on CT. METHOD AND MATERIALS: In this IRB and HIPAA compliant study, a deep convolutional neural network was used for pancreas segmentation in a publicly available archive of 82 portal-venous phase abdominal CT scans of 53 men and 29 women. The accuracies of the segmentations were evaluated by the Dice similarity coefficient (DSC). The DSC was then correlated with demographic and clinical data (age, gender, height, weight, body mass index), CT technical factors (image pixel size, slice thickness, presence or absence of oral contrast), and CT imaging findings (volume and attenuation of pancreas, visceral abdominal fat, and CT attenuation of the structures within a 5 mm neighborhood of the pancreas). RESULTS: The average DSC was 78% +/- 8%. Factors that were statistically significantly correlated with DSC included body mass index (r = 0.34, p < 0.01), visceral abdominal fat (r = 0.51, p < 0.0001), volume of the pancreas (r = 0.41, p = 0.001), standard deviation of CT attenuation within the pancreas (r = 0.30, p = 0.01), and median and average CT attenuation in the immediate neighborhood of the pancreas (r = -0.53, p < 0.0001 and r = -0.52, p < 0.0001). There were no significant correlations between the DSC and the height, gender, or mean CT attenuation of the pancreas. CONCLUSION: Increased visceral abdominal fat and accumulation of fat within or around the pancreas are major factors associated with more accurate segmentation of the pancreas. Potential applications of our findings include assessment of pancreas segmentation difficulty of a particular scan or dataset and identification of methods that work better for more challenging pancreas segmentations.
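The Dice similarity coefficient used as the accuracy metric in this abstract is simple to reproduce from two binary masks; a minimal sketch over flattened 0/1 voxel labels:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient, 2|A ∩ B| / (|A| + |B|), between two
    binary masks given as flat sequences of 0/1 voxel labels."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks are a perfect match
    return 2.0 * intersection / total if total else 1.0

# Toy example: predicted vs. reference segmentation over 8 voxels
pred = [0, 1, 1, 1, 0, 0, 1, 0]
ref  = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice_coefficient(pred, ref))  # 0.75
```

In practice the masks come from 3D label volumes flattened in the same order, and the DSC is reported per patient before averaging, as in the study above.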

Imaging genomics in cancer research: limitations and promises

  • Bai, Harrison X
  • Lee, Ashley M
  • Yang, Li
  • Zhang, Paul
  • Davatzikos, Christos
  • Maris, John M
  • Diskin, Sharon J
The British journal of radiology 2016 Journal Article, cited 28 times
Website

BraTS Multimodal Brain Tumor Segmentation Challenge

  • Bakas, Spyridon
2017 Conference Proceedings, cited 0 times
Website

GLISTRboost: Combining Multimodal MRI Segmentation, Registration, and Biophysical Tumor Growth Modeling with Gradient Boosting Machines for Glioma Segmentation.

  • Bakas, S.
  • Zeng, K.
  • Sotiras, A.
  • Rathore, S.
  • Akbari, H.
  • Gaonkar, B.
  • Rozycki, M.
  • Pati, S.
  • Davatzikos, C.
Brainlesion 2016 Journal Article, cited 49 times
Website
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.

A radiogenomic dataset of non-small cell lung cancer

  • Bakr, Shaimaa
  • Gevaert, Olivier
  • Echegaray, Sebastian
  • Ayers, Kelsey
  • Zhou, Mu
  • Shafiq, Majid
  • Zheng, Hong
  • Benson, Jalen Anthony
  • Zhang, Weiruo
  • Leung, Ann NC
Scientific Data 2018 Journal Article, cited 1 times
Website

Secure telemedicine using RONI halftoned visual cryptography without pixel expansion

  • Bakshi, Arvind
  • Patel, Anoop Kumar
Journal of Information Security and Applications 2019 Journal Article, cited 0 times
Website
Telemedicine is a well-known technique for delivering quality healthcare services remotely worldwide. For the diagnosis of disease and prescription by the doctor, a great deal of information must be shared over public and private channels. Medical images such as MRI, X-ray, and CT scans contain very personal information and need to be secured. Ensuring the confidentiality, privacy, and integrity of medical data is still a challenge, and existing security techniques such as digital watermarking and encryption are observed to be inefficient for real-time use. This paper investigates the problem and provides a security solution addressing these major aspects using Visual Cryptography (VC). The proposed algorithm creates shares for the parts of the image that do not contain relevant information. All information related to the disease is considered relevant and is marked as the region of interest (ROI). The integrity of the image is maintained by inserting some information into the region of non-interest (RONI). The generated shares are transmitted over different channels, and the embedded information is decrypted by overlapping the shares (in XOR fashion) in Θ(1) time. Visual perception of all the results discussed in this article is very clear. The proposed algorithm achieves a PSNR (peak signal-to-noise ratio) of 22.9452, an SSIM (structural similarity index) of 0.9701, and an accuracy of 99.8740.

Test–Retest Reproducibility Analysis of Lung CT Image Features

  • Balagurunathan, Yoganand
  • Kumar, Virendra
  • Gu, Yuhua
  • Kim, Jongphil
  • Wang, Hua
  • Liu, Ying
  • Goldgof, Dmitry B
  • Hall, Lawrence O
  • Korn, Rene
  • Zhao, Binsheng
Journal of Digital Imaging 2014 Journal Article, cited 85 times
Website

Quantitative Imaging Features Improve Discrimination of Malignancy in Pulmonary Nodules

  • Balagurunathan, Yoganand
  • Schabath, Matthew B.
  • Wang, Hua
  • Liu, Ying
  • Gillies, Robert J.
Scientific Reports 2019 Journal Article, cited 0 times
Website
Pulmonary nodules are frequently detected radiological abnormalities in lung cancer screening. While nodules at the highest and lowest risk for cancer are often easily diagnosed by a trained radiologist, there is still a high rate of indeterminate pulmonary nodules (IPN) of unknown risk. Here, we test the hypothesis that computer-extracted quantitative features ("radiomics") can provide improved risk assessment in the diagnostic setting. Nodules were segmented in 3D, and 219 quantitative features were extracted from these volumes. Using these features, novel malignancy risk predictors were formed with various stratifications based on size, shape, and texture feature categories. We used images and data from the National Lung Screening Trial (NLST) and curated a subset of 479 participants (244 for training and 235 for testing) that included incident lung cancers and nodule-positive controls. After removing redundant and non-reproducible features, optimal linear classifiers with area under the receiver operating characteristic (AUROC) curves were used with an exhaustive search approach to find a discriminant set of image features, which were validated in an independent test dataset. We identified several strong predictive models: using size and shape features, the highest AUROC was 0.80; using non-size-based features, the highest AUROC was 0.85; combining features from all the categories, the highest AUROC was 0.83.

Bone-Cancer Assessment and Destruction Pattern Analysis in Long-Bone X-ray Image

  • Bandyopadhyay, Oishila
  • Biswas, Arindam
  • Bhattacharya, Bhargab B
J Digit Imaging 2018 Journal Article, cited 0 times
Website
Bone cancer originates in bone and rapidly spreads to the rest of the body. A quick, preliminary diagnosis of bone cancer begins with the analysis of a bone X-ray or MRI image. Compared to MRI, an X-ray image provides a low-cost tool for the diagnosis and visualization of bone cancer. In this paper, a novel technique for the assessment of cancer stage and grade in long bones based on X-ray image analysis is proposed. Cancer-affected bone images usually appear with a variation in bone texture in the affected region. A fusion of different methodologies is used for the purpose of our analysis. In the proposed approach, we extract certain features from bone X-ray images and use a support vector machine (SVM) to discriminate healthy from cancerous bones. A technique based on digital geometry is deployed for localizing cancer-affected regions. Characterization of the present stage and grade of the disease and identification of the underlying bone-destruction pattern are performed using a decision tree classifier. Furthermore, the method leads to the development of a computer-aided diagnostic tool that can readily be used by paramedics and doctors. Experimental results on a number of test cases reveal satisfactory diagnostic inferences when compared with ground truth known from clinical findings.

A novel fully automated MRI-based deep-learning method for classification of IDH mutation status in brain gliomas

  • Bangalore Yogananda, Chandan Ganesh
  • Shah, Bhavya R
  • Vejdani-Jahromi, Maryam
  • Nalawade, Sahil S
  • Murugesan, Gowtham K
  • Yu, Frank F
  • Pinho, Marco C
  • Wagner, Benjamin C
  • Mickey, Bruce
  • Patel, Toral R
Neuro-oncology 2020 Journal Article, cited 4 times
Website

A New Adaptive-Weighted Fusion Rule for Wavelet based PET/CT Fusion

  • Barani, R
  • Sumathi, M
International Journal of Signal Processing, Image Processing and Pattern Recognition 2016 Journal Article, cited 1 times
Website

Interreader Variability of Dynamic Contrast-enhanced MRI of Recurrent Glioblastoma: The Multicenter ACRIN 6677/RTOG 0625 Study

  • Barboriak, Daniel P
  • Zhang, Zheng
  • Desai, Pratikkumar
  • Snyder, Bradley S
  • Safriel, Yair
  • McKinstry, Robert C
  • Bokstein, Felix
  • Sorensen, Gregory
  • Gilbert, Mark R
  • Boxerman, Jerrold L
Radiology 2019 Journal Article, cited 2 times
Website
Purpose To evaluate factors contributing to interreader variation (IRV) in parameters measured at dynamic contrast material-enhanced (DCE) MRI in patients with glioblastoma who were participating in a multicenter trial. Materials and Methods A total of 18 patients (mean age, 57 years +/- 13 [standard deviation]; 10 men) who volunteered for the advanced imaging arm of ACRIN 6677, a substudy of the RTOG 0625 clinical trial for recurrent glioblastoma treatment, underwent analyzable DCE MRI at one of four centers. The 78 imaging studies were analyzed centrally to derive the volume transfer constant (K(trans)) for gadolinium between blood plasma and tissue extravascular extracellular space, fractional volume of the extracellular extravascular space (ve), and initial area under the gadolinium concentration curve (IAUGC). Two independently trained teams consisting of a neuroradiologist and a technologist segmented the enhancing tumor on three-dimensional spoiled gradient-recalled acquisition in the steady-state images. Mean and median parameter values in the enhancing tumor were extracted after registering segmentations to parameter maps. The effect of imaging time relative to treatment, map quality, imager magnet and sequence, average tumor volume, and reader variability in tumor volume on IRV was studied by using intraclass correlation coefficients (ICCs) and linear mixed models. Results Mean interreader variations (+/- standard deviation) (difference as a percentage of the mean) for mean and median IAUGC, mean and median K(trans), and median ve were 18% +/- 24, 17% +/- 23, 27% +/- 34, 16% +/- 27, and 27% +/- 34, respectively. ICCs for these metrics ranged from 0.90 to 1.0 for baseline and from 0.48 to 0.76 for posttreatment examinations. Variability in reader-derived tumor volume was significantly related to IRV for all parameters. 
Conclusion Differences in reader tumor segmentations are a significant source of interreader variation for all dynamic contrast-enhanced MRI parameters. (c) RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Wolf in this issue.

Pathologically-Validated Tumor Prediction Maps in MRI

  • Barrington, Alex
2019 Thesis, cited 0 times
Website
Glioblastoma (GBM) is an aggressive cancer with an average 5-year survival rate of about 5%. Following treatment with surgery, radiation, and chemotherapy, diagnosing tumor recurrence requires serial magnetic resonance imaging (MRI) scans. Infiltrative tumor cells beyond gadolinium enhancement on T1-weighted MRI are difficult to detect. This study therefore aims to improve tumor detection beyond traditional tumor margins. To accomplish this, a neural network model was trained to classify tissue samples as ‘tumor’ or ‘not tumor’. This model was then used to classify thousands of tiles from histology samples acquired at autopsy with known MRI locations on the patient’s final clinical MRI scan. This combined radiological-pathological (rad-path) dataset was then treated as a ground truth to train a second model for predicting tumor presence from MRI alone. Predictive maps were created for seven patients left out of the training steps, and tissue samples were tested to determine the model’s accuracy. The final model produced a receiver operator characteristic (ROC) area under the curve (AUC) of 0.70. This study demonstrates a new method for detecting infiltrative tumor beyond conventional radiologist defined margins based on neural networks applied to rad-path datasets in glioblastoma.

Equating quantitative emphysema measurements on different CT image reconstructions

  • Bartel, Seth T
  • Bierhals, Andrew J
  • Pilgram, Thomas K
  • Hong, Cheng
  • Schechtman, Kenneth B
  • Conradi, Susan H
  • Gierada, David S
Medical Physics 2011 Journal Article, cited 15 times
Website
PURPOSE: To mathematically model the relationship between CT measurements of emphysema obtained from images reconstructed using different section thicknesses and kernels and to evaluate the accuracy of the models for converting measurements to those of a reference reconstruction. METHODS: CT raw data from the lung cancer screening examinations of 138 heavy smokers were reconstructed at 15 different combinations of section thickness and kernel. An emphysema index was quantified as the percentage of the lung with attenuation below -950 HU (EI950). Linear, quadratic, and power functions were used to model the relationship between EI950 values obtained with a reference 1 mm, medium smooth kernel reconstruction and values from each of the other 14 reconstructions. Preferred models were selected using the corrected Akaike information criterion (AICc), coefficients of determination (R2), and residuals (conversion errors), and cross-validated by a jackknife approach using the leave-one-out method. RESULTS: The preferred models were power functions, with model R2 values ranging from 0.949 to 0.998. The errors in converting EI950 measurements from other reconstructions to the 1 mm, medium smooth kernel reconstruction in leave-one-out testing were less than 3.0 index percentage points for all reconstructions, and less than 1.0 index percentage point for five reconstructions. Conversion errors were related in part to image noise, emphysema distribution, and attenuation histogram parameters. Conversion inaccuracy related to increased kernel sharpness tended to be reduced by increased section thickness. CONCLUSIONS: Image reconstruction-related differences in quantitative emphysema measurements were successfully modeled using power functions.

A Heterogeneous and Multi-Range Soft-Tissue Deformation Model for Applications in Adaptive Radiotherapy

  • Bartelheimer, Kathrin
2020 Thesis, cited 0 times
Website
During fractionated radiotherapy, anatomical changes result in uncertainties in the applied dose distribution. With increasing steepness of applied dose gradients, the relevance of patient deformations increases. Especially in proton therapy, small anatomical changes on the order of millimeters can result in large range uncertainties and therefore in substantial deviations from the planned dose. To quantify the anatomical changes, deformation models are required. With upcoming MR-guidance, soft-tissue deformations gain visibility, but so far only few soft-tissue models meeting the requirements of high-precision radiotherapy exist. Most state-of-the-art models either lack anatomical detail or exhibit long computation times. In this work, a fast soft-tissue deformation model is developed which is capable of considering tissue properties of heterogeneous tissue. The model is based on the chainmail (CM) concept, which is improved by three basic features. For the first time, rotational degrees of freedom are introduced into the CM concept to improve the characteristic deformation behavior. A novel concept for handling multiple deformation initiators is developed to cope with global deformation input. And finally, a concept for handling various shapes of deformation input is proposed to provide high flexibility concerning the design of deformation input. To demonstrate the model's flexibility, it was coupled to a kinematic skeleton model for the head and neck region, which provides anatomically correct deformation input for the bones. For exemplary patient CTs, the combined model was shown to be capable of generating artificially deformed CT images with realistic appearance. This was achieved for small-range deformations on the order of interfractional deformations, as well as for large-range deformations such as an arms-up to arms-down deformation, as can occur between images of different modalities.
The deformation results showed a strong improvement in biofidelity compared to the original chainmail concept, as well as to clinically used image-based deformation methods. The computation times for the model are on the order of 30 min for single-threaded calculations; with simple code parallelization, times on the order of 1 min can be achieved. Applications that require realistic forward deformations of CT images will benefit from the improved biofidelity of the developed model. Envisioned applications are the generation of plan libraries and virtual phantoms, as well as data augmentation for deep-learning approaches. Due to the low computation times, the model is also well suited for image registration applications. In this context, it will contribute to an improved calculation of accumulated dose, as is required in high-precision adaptive radiotherapy.

Removing Mixture Noise from Medical Images Using Block Matching Filtering and Low-Rank Matrix Completion

  • Barzigar, Nafise
  • Roozgard, Aminmohammad
  • Verma, Pramode K
  • Cheng, Samuel
2012 Conference Proceedings, cited 2 times
Website

Machine Intelligence for Advanced Medical Data Analysis: Manifold Learning Approach

  • Bashiri, Fereshteh Sadat
2019 Thesis, cited 0 times
Website
In the current work, linear and non-linear manifold learning techniques, specifically Principal Component Analysis (PCA) and Laplacian Eigenmaps, are studied in detail. Their applications in medical image and shape analysis are investigated. In the first contribution, a manifold learning-based multi-modal image registration technique is developed, which results in a unified intensity system through intensity transformation between the reference and sensed images. The transformation eliminates intensity variations in multi-modal medical scans and hence facilitates employing well-studied monomodal registration techniques. The method can be used for registering multi-modal images with full and partial data. Next, a manifold learning-based scale-invariant global shape descriptor is introduced. The proposed descriptor benefits from the capability of the Laplacian Eigenmap in dealing with high-dimensional data by introducing an exponential weighting scheme. It eliminates the limitations tied to the well-known cotangent weighting scheme, namely dependency on triangular mesh representation and high intra-class quality of 3D models. In the end, a novel descriptive model for diagnostic classification of pulmonary nodules is presented. The descriptive model benefits from structural differences between benign and malignant nodules for automatic and accurate prediction of a candidate nodule. It extracts concise and discriminative features automatically from the 3D surface structure of a nodule using spectral features studied in the previous work combined with a point cloud-based deep learning network. Extensive experiments have been conducted and have shown that the proposed algorithms based on manifold learning outperform several state-of-the-art methods.
Advanced computational techniques with a combination of manifold learning and deep networks can play a vital role in effective healthcare delivery by providing a framework for several fundamental tasks in image and shape processing, namely, registration, classification, and detection of features of interest.

Call for Data Standardization: Lessons Learned and Recommendations in an Imaging Study

  • Basu, Amrita
  • Warzel, Denise
  • Eftekhari, Aras
  • Kirby, Justin S
  • Freymann, John
  • Knable, Janice
  • Sharma, Ashish
  • Jacobs, Paula
JCO Clin Cancer Inform 2019 Journal Article, cited 0 times
Website
PURPOSE: Data sharing creates potential cost savings, supports data aggregation, and facilitates reproducibility to ensure quality research; however, data from heterogeneous systems require retrospective harmonization. This is a major hurdle for researchers who seek to leverage existing data. Efforts focused on strategies for data interoperability largely center around the use of standards but ignore the problems of competing standards and the value of existing data. Interoperability remains reliant on retrospective harmonization. Approaches to reduce this burden are needed. METHODS: The Cancer Imaging Archive (TCIA) is an example of an imaging repository that accepts data from a diversity of sources. It contains medical images from investigators worldwide and substantial nonimage data. Digital Imaging and Communications in Medicine (DICOM) standards enable querying across images, but TCIA does not enforce other standards for describing nonimage supporting data, such as treatment details and patient outcomes. In this study, we used 9 TCIA lung and brain nonimage files containing 659 fields to explore retrospective harmonization for cross-study query and aggregation. It took 329.5 hours, or 2.3 months, extended over 6 months to identify 41 overlapping fields in 3 or more files and transform 31 of them. We used the Genomic Data Commons (GDC) data elements as the target standards for harmonization. RESULTS: We characterized the issues and have developed recommendations for reducing the burden of retrospective harmonization. Once we harmonized the data, we also developed a Web tool to easily explore harmonized collections. CONCLUSION: While prospective use of standards can support interoperability, there are issues that complicate this goal. Our work recognizes and reveals retrospective harmonization issues when trying to reuse existing data and recommends national infrastructure to address these issues.

Variability of manual segmentation of the prostate in axial T2-weighted MRI: A multi-reader study

  • Becker, A. S.
  • Chaitanya, K.
  • Schawkat, K.
  • Muehlematter, U. J.
  • Hotker, A. M.
  • Konukoglu, E.
  • Donati, O. F.
Eur J Radiol 2019 Journal Article, cited 3 times
Website
PURPOSE: To evaluate the interreader variability in prostate and seminal vesicle (SV) segmentation on T2w MRI. METHODS: Six readers segmented the peripheral zone (PZ), transitional zone (TZ) and SV slice-wise on axial T2w prostate MRI examinations of n=80 patients. Twenty different similarity scores, including dice score (DS), Hausdorff distance (HD) and volumetric similarity coefficient (VS), were computed with the VISCERAL EvaluateSegmentation software for all structures combined and separately for the whole gland (WG=PZ+TZ), TZ and SV. Differences between base, midgland and apex were evaluated with DS slice-wise. Descriptive statistics for similarity scores were computed. Wilcoxon testing to evaluate differences of DS, HD and VS was performed. RESULTS: Overall segmentation variability was good with a mean DS of 0.859 (+/-SD=0.0542), HD of 36.6 (+/-34.9 voxels) and VS of 0.926 (+/-0.065). The WG showed a DS, HD and VS of 0.738 (+/-0.144), 36.2 (+/-35.6 vx) and 0.853 (+/-0.143), respectively. The TZ showed generally lower variability with a DS of 0.738 (+/-0.144), HD of 24.8 (+/-16 vx) and VS of 0.908 (+/-0.126). The lowest variability was found for the SV with DS of 0.884 (+/-0.0407), HD of 17 (+/-10.9 vx) and VS of 0.936 (+/-0.0509). We found a markedly lower DS of the segmentations in the apex (0.85+/-0.12) compared to the base (0.87+/-0.10, p<0.01) and the midgland (0.89+/-0.10, p<0.001). CONCLUSIONS: We report baseline values for interreader variability of prostate and SV segmentation on T2w MRI. Variability was highest in the apex, lower in the base, and lowest in the midgland.

Anatomical DCE-MRI phantoms generated from glioma patient data

  • Beers, Andrew
  • Chang, Ken
  • Brown, James
  • Zhu, Xia
  • Sengupta, Dipanjan
  • Willke, Theodore L
  • Gerstner, Elizabeth
  • Rosen, Bruce
  • Kalpathy-Cramer, Jayashree
2018 Conference Proceedings, cited 0 times
Website

Multi-site quality and variability analysis of 3D FDG PET segmentations based on phantom and clinical image data

  • Beichel, Reinhard R
  • Smith, Brian J
  • Bauer, Christian
  • Ulrich, Ethan J
  • Ahmadvand, Payam
  • Budzevich, Mikalai M
  • Gillies, Robert J
  • Goldgof, Dmitry
  • Grkovski, Milan
  • Hamarneh, Ghassan
Medical Physics 2017 Journal Article, cited 7 times
Website
PURPOSE: Radiomics utilizes a large number of image-derived features for quantifying tumor characteristics that can in turn be correlated with response and prognosis. Unfortunately, extraction and analysis of such image-based features is subject to measurement variability and bias. The challenge for radiomics is particularly acute in Positron Emission Tomography (PET) where limited resolution, a high noise component related to the limited stochastic nature of the raw data, and the wide variety of reconstruction options confound quantitative feature metrics. Extracted feature quality is also affected by tumor segmentation methods used to define regions over which to calculate features, making it challenging to produce consistent radiomics analysis results across multiple institutions that use different segmentation algorithms in their PET image analysis. Understanding each element contributing to these inconsistencies in quantitative image feature and metric generation is paramount for ultimate utilization of these methods in multi-institutional trials and clinical oncology decision making. METHODS: To assess segmentation quality and consistency at the multi-institutional level, we conducted a study of seven institutional members of the National Cancer Institute Quantitative Imaging Network. For the study, members were asked to segment a common set of phantom PET scans acquired over a range of imaging conditions as well as a second set of head and neck cancer (HNC) PET scans. Segmentations were generated at each institution using their preferred approach. In addition, participants were asked to repeat segmentations with a time interval between initial and repeat segmentation. This procedure resulted in overall 806 phantom insert and 641 lesion segmentations. Subsequently, the volume was computed from the segmentations and compared to the corresponding reference volume by means of statistical analysis. 
RESULTS: On the two test sets (phantom and HNC PET scans), the performance of the seven segmentation approaches was as follows. On the phantom test set, the mean relative volume errors ranged from 29.9 to 87.8% of the ground truth reference volumes, and the repeat difference for each institution ranged between -36.4 to 39.9%. On the HNC test set, the mean relative volume error ranged between -50.5 to 701.5%, and the repeat difference for each institution ranged between -37.7 to 31.5%. In addition, performance measures per phantom insert/lesion size categories are given in the paper. On phantom data, regression analysis resulted in coefficient of variation (CV) components of 42.5% for scanners, 26.8% for institutional approaches, 21.1% for repeated segmentations, 14.3% for relative contrasts, 5.3% for count statistics (acquisition times), and 0.0% for repeated scans. Analysis showed that the CV components for approaches and repeated segmentations were significantly larger on the HNC test set with increases by 112.7% and 102.4%, respectively. CONCLUSION: Analysis results underline the importance of PET scanner reconstruction harmonization and imaging protocol standardization for quantification of lesion volumes. In addition, to enable a distributed multi-site analysis of FDG PET images, harmonization of analysis approaches and operator training in combination with highly automated segmentation methods seems to be advisable. Future work will focus on quantifying the impact of segmentation variation on radiomics system performance.

Radiogenomic-Based Survival Risk Stratification of Tumor Habitat on Gd-T1w MRI Is Associated with Biological Processes in Glioblastoma

  • Beig, Niha
  • Bera, Kaustav
  • Prasanna, Prateek
  • Antunes, Jacob
  • Correa, Ramon
  • Singh, Salendra
  • Saeed Bamashmos, Anas
  • Ismail, Marwa
  • Braman, Nathaniel
  • Verma, Ruchika
  • Hill, Virginia B
  • Statsevych, Volodymyr
  • Ahluwalia, Manmeet S
  • Varadan, Vinay
  • Madabhushi, Anant
  • Tiwari, Pallavi
Clin Cancer Res 2020 Journal Article, cited 0 times
Website
PURPOSE: To (i) create a survival risk score using radiomic features from the tumor habitat on routine MRI to predict progression-free survival (PFS) in glioblastoma and (ii) obtain a biological basis for these prognostic radiomic features, by studying their radiogenomic associations with molecular signaling pathways. EXPERIMENTAL DESIGN: Two hundred three patients with pretreatment Gd-T1w, T2w, T2w-FLAIR MRI were obtained from 3 cohorts: The Cancer Imaging Archive (TCIA; n = 130), Ivy GAP (n = 32), and Cleveland Clinic (n = 41). Gene-expression profiles of corresponding patients were obtained for TCIA cohort. For every study, following expert segmentation of tumor subcompartments (necrotic core, enhancing tumor, peritumoral edema), 936 3D radiomic features were extracted from each subcompartment across all MRI protocols. Using Cox regression model, radiomic risk score (RRS) was developed for every protocol to predict PFS on the training cohort (n = 130) and evaluated on the holdout cohort (n = 73). Further, Gene Ontology and single-sample gene set enrichment analysis were used to identify specific molecular signaling pathway networks associated with RRS features. RESULTS: Twenty-five radiomic features from the tumor habitat yielded the RRS. A combination of RRS with clinical (age and gender) and molecular features (MGMT and IDH status) resulted in a concordance index of 0.81 (P < 0.0001) on training and 0.84 (P = 0.03) on the test set. Radiogenomic analysis revealed associations of RRS features with signaling pathways for cell differentiation, cell adhesion, and angiogenesis, which contribute to chemoresistance in GBM. CONCLUSIONS: Our findings suggest that prognostic radiomic features from routine Gd-T1w MRI may also be significantly associated with key biological processes that affect response to chemotherapy in GBM.

Radiogenomic analysis of hypoxia pathway is predictive of overall survival in Glioblastoma

  • Beig, Niha
  • Patel, Jay
  • Prasanna, Prateek
  • Hill, Virginia
  • Gupta, Amit
  • Correa, Ramon
  • Bera, Kaustav
  • Singh, Salendra
  • Partovi, Sasan
  • Varadan, Vinay
Scientific Reports 2018 Journal Article, cited 5 times
Website

Radiogenomic analysis of hypoxia pathway reveals computerized MRI descriptors predictive of overall survival in Glioblastoma

  • Beig, Niha
  • Patel, Jay
  • Prasanna, Prateek
  • Partovi, Sasan
  • Varadan, Vinay
  • Madabhushi, Anant
  • Tiwari, Pallavi
2017 Conference Proceedings, cited 3 times
Website

Longitudinal fan-beam computed tomography dataset for head-and-neck squamous cell carcinoma patients

  • Bejarano, T.
  • De Ornelas-Couto, M.
  • Mihaylov, I. B.
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: To describe in detail a dataset consisting of longitudinal fan-beam computed tomography (CT) imaging to visualize anatomical changes in head-and-neck squamous cell carcinoma (HNSCC) patients throughout the radiotherapy (RT) treatment course. ACQUISITION AND VALIDATION METHODS: This dataset consists of CT images from 31 HNSCC patients who underwent volumetric modulated arc therapy (VMAT). Patients had three CT scans acquired throughout the duration of the radiation treatment course: pretreatment planning CT scans acquired a median of 13 days before treatment (range: 2-27), mid-treatment CT at 22 days after the start of treatment (range: 13-38), and post-treatment CT at 65 days after the start of treatment (range: 35-192). Patients received RT to a total dose of 58-70 Gy, delivered in daily fractions of 2.0-2.20 Gy over 30-35 fractions. The fan-beam CT images were acquired using a Siemens 16-slice CT scanner head protocol at 120 kV and a current of 400 mAs. A helical scan with 1 rotation per second was used with a slice thickness of 2 mm and a table increment of 1.2 mm. In addition to the imaging data, contours of anatomical structures for RT, demographic data, and outcome measurements are provided. DATA FORMAT AND USAGE NOTES: The dataset, with DICOM files including images, RTSTRUCT files, and RTDOSE files, can be found and publicly accessed in The Cancer Imaging Archive (TCIA, http://www.cancerimagingarchive.net/) as the collection "Head-and-neck squamous cell carcinoma patients with CT taken during pretreatment, mid-treatment, and post-treatment (HNSCC-3DCT-RT)". DISCUSSION: This is the first dataset to date in TCIA that provides a collection of multiple CT imaging studies (pretreatment, mid-treatment, and post-treatment) throughout the treatment course. The dataset can serve a wide array of research projects, including (but not limited to) quantitative imaging assessment, investigation of anatomical changes as treatment progresses, dosimetry of target volumes and/or normal structures due to anatomical changes occurring during treatment, investigation of RT toxicity, and concurrent chemotherapy and RT effects on head-and-neck patients.

Evaluating the Use of rCBV as a Tumor Grade and Treatment Response Classifier Across NCI Quantitative Imaging Network Sites: Part II of the DSC-MRI Digital Reference Object (DRO) Challenge

  • Bell, Laura C
  • Semmineh, Natenael
  • An, Hongyu
  • Eldeniz, Cihat
  • Wahl, Richard
  • Schmainda, Kathleen M
  • Prah, Melissa A
  • Erickson, Bradley J
  • Korfiatis, Panagiotis
  • Wu, Chengyue
  • Sorace, Anna G
  • Yankeelov, Thomas E
  • Rutledge, Neal
  • Chenevert, Thomas L
  • Malyarenko, Dariya
  • Liu, Yichu
  • Brenner, Andrew
  • Hu, Leland S
  • Zhou, Yuxiang
  • Boxerman, Jerrold L
  • Yen, Yi-Fen
  • Kalpathy-Cramer, Jayashree
  • Beers, Andrew L
  • Muzi, Mark
  • Madhuranthakam, Ananth J
  • Pinho, Marco
  • Johnson, Brian
  • Quarles, C Chad
Tomography 2020 Journal Article, cited 1 times
Website
We have previously characterized the reproducibility of brain tumor relative cerebral blood volume (rCBV) using a dynamic susceptibility contrast magnetic resonance imaging digital reference object across 12 sites using a range of imaging protocols and software platforms. As expected, reproducibility was highest when imaging protocols and software were consistent, but decreased when they were variable. Our goal in this study was to determine the impact of rCBV reproducibility for tumor grade and treatment response classification. We found that varying imaging protocols and software platforms produced a range of optimal thresholds for both tumor grading and treatment response, but the performance of these thresholds was similar. These findings further underscore the importance of standardizing acquisition and analysis protocols across sites and of benchmarking software.

Analysis of postprocessing steps for residue function dependent dynamic susceptibility contrast (DSC)‐MRI biomarkers and their clinical impact on glioma grading for both 1.5 and 3T

  • Bell, Laura C
  • Stokes, Ashley M
  • Quarles, C. Chad
Journal of Magnetic Resonance Imaging 2019 Journal Article, cited 0 times
Website
BACKGROUND: Dynamic susceptibility contrast (DSC)-MRI analysis pipelines differ across studies and sites, potentially confounding the clinical value and use of the derived biomarkers. PURPOSE/HYPOTHESIS: To investigate how postprocessing steps for computation of cerebral blood volume (CBV) and residue function dependent parameters (cerebral blood flow [CBF], mean transit time [MTT], capillary transit heterogeneity [CTH]) impact glioma grading. STUDY TYPE: Retrospective study from The Cancer Imaging Archive (TCIA). POPULATION: Forty-nine subjects with low- and high-grade gliomas. FIELD STRENGTH/SEQUENCE: 1.5 and 3.0T clinical systems using a single-echo echo planar imaging (EPI) acquisition. ASSESSMENT: Manual regions of interest (ROIs) were provided by TCIA and automatically segmented ROIs were generated by k-means clustering. CBV was calculated based on conventional equations. Residue function dependent biomarkers (CBF, MTT, CTH) were found by two deconvolution methods: circular discretization followed by a signal-to-noise ratio (SNR)-adapted eigenvalue thresholding (Method 1) and Volterra discretization with L-curve-based Tikhonov regularization (Method 2). STATISTICAL TESTS: Analysis of variance, receiver operating characteristics (ROC), and logistic regression tests. RESULTS: MTT alone was unable to statistically differentiate glioma grade (P > 0.139). When normalized, tumor CBF, CTH, and CBV did not differ across field strengths (P > 0.141). Biomarkers normalized to automatically segmented regions performed equally (rCTH AUROC is 0.73 compared with 0.74) or better (rCBF AUROC increases from 0.74 to 0.84; rCBV AUROC increases from 0.78 to 0.86) than manually drawn ROIs. By updating the current deconvolution steps (Method 2), rCTH can act as a classifier for glioma grade (P < 0.007), but not if processed by current conventional DSC methods (Method 1) (P > 0.577). Lastly, higher-order biomarkers (eg, rCBF and rCTH) along with rCBV increase the AUROC to 0.92 for differentiating tumor grade, as compared with 0.78 and 0.86 (manual and automatic reference regions, respectively) for rCBV alone. DATA CONCLUSION: With optimized analysis pipelines, higher-order perfusion biomarkers (rCBF and rCTH) improve glioma grading as compared with CBV alone. Additionally, postprocessing steps impact thresholds needed for glioma grading. LEVEL OF EVIDENCE: 3 Technical Efficacy: Stage 2

Analysis of postprocessing steps for residue function dependent dynamic susceptibility contrast (DSC)‐MRI biomarkers and their clinical impact on glioma grading for both 1.5 and 3T

  • Bell, Laura C
  • Stokes, Ashley M
  • Quarles, C Chad
Journal of Magnetic Resonance Imaging 2020 Journal Article, cited 0 times
Website

Automatic Design of Window Operators for the Segmentation of the Prostate Gland in Magnetic Resonance Images

  • Benalcázar, Marco E
  • Brun, Marcel
  • Ballarin, Virginia
2015 Conference Proceedings, cited 0 times
Website

Overview of the American Society for Radiation Oncology–National Institutes of Health–American Association of Physicists in Medicine Workshop 2015: Exploring Opportunities for Radiation Oncology in the Era of Big Data

  • Benedict, Stanley H
  • Hoffman, Karen
  • Martel, Mary K
  • Abernethy, Amy P
  • Asher, Anthony L
  • Capala, Jacek
  • Chen, Ronald C
  • Chera, Bhisham
  • Couch, Jennifer
  • Deye, James
International Journal of Radiation Oncology • Biology • Physics 2016 Journal Article, cited 0 times

Segmentation of three-dimensional images with parametric active surfaces and topology changes

  • Benninghoff, Heike
  • Garcke, Harald
Journal of Scientific Computing 2017 Journal Article, cited 1 times
Website
In this paper, we introduce a novel parametric finite element method for segmentation of three-dimensional images. We consider a piecewise constant version of the Mumford-Shah and the Chan-Vese functionals and perform a region-based segmentation of 3D image data. An evolution law is derived from energy minimization problems which push the surfaces to the boundaries of 3D objects in the image. We propose a parametric scheme which describes the evolution of parametric surfaces. An efficient finite element scheme is proposed for a numerical approximation of the evolution equations. Since standard parametric methods cannot handle topology changes automatically, an efficient method is presented to detect, identify and perform changes in the topology of the surfaces. One main focus of this paper is the algorithmic details needed to handle topology changes such as splitting and merging of surfaces and changes in the genus of a surface. Different artificial images are studied to demonstrate the ability to detect the different types of topology changes. Finally, the parametric method is applied to segmentation of medical 3D images.

Adverse prognosis of glioblastoma contacting the subventricular zone: Biological correlates

  • Berendsen, S.
  • van Bodegraven, E.
  • Seute, T.
  • Spliet, W. G. M.
  • Geurts, M.
  • Hendrikse, J.
  • Schoysman, L.
  • Huiszoon, W. B.
  • Varkila, M.
  • Rouss, S.
  • Bell, E. H.
  • Kroonen, J.
  • Chakravarti, A.
  • Bours, V.
  • Snijders, T. J.
  • Robe, P. A.
PLoS One 2019 Journal Article, cited 2 times
Website
INTRODUCTION: The subventricular zone (SVZ) in the brain is associated with gliomagenesis and resistance to treatment in glioblastoma. In this study, we investigate the prognostic role and biological characteristics of subventricular zone (SVZ) involvement in glioblastoma. METHODS: We analyzed T1-weighted, gadolinium-enhanced MR images of a retrospective cohort of 647 primary glioblastoma patients diagnosed between 2005 and 2013, and performed a multivariable Cox regression analysis to adjust the prognostic effect of SVZ involvement for clinical patient- and tumor-related factors. Protein expression patterns of markers of neural stem cellness (CD133 and GFAP-delta) and of (epithelial-) mesenchymal transition (NF-kappaB, C/EBP-beta and STAT3), among others, were determined with immunohistochemistry on tissue microarrays containing 220 of the tumors. Molecular classification and mRNA expression-based gene set enrichment analyses, miRNA expression and SNP copy number analyses were performed on fresh frozen tissue obtained from 76 tumors. Confirmatory analyses were performed on glioblastoma TCGA/TCIA data. RESULTS: Involvement of the SVZ was a significant adverse prognostic factor in glioblastoma, independent of age, KPS, surgery type and postoperative treatment. Tumor volume and postoperative complications did not explain this prognostic effect. SVZ contact was associated with increased nuclear expression of the (epithelial-) mesenchymal transition markers C/EBP-beta and phospho-STAT3. SVZ contact was not associated with molecular subtype, distinct gene expression patterns, or markers of stem cellness. Our main findings were confirmed in a cohort of 229 TCGA/TCIA glioblastomas. CONCLUSION: In conclusion, involvement of the SVZ is an independent prognostic factor in glioblastoma, and associates with increased expression of key markers of (epithelial-) mesenchymal transformation, but does not correlate with stem cellness, molecular subtype, or specific (mi)RNA expression patterns.

Pulmonary nodule detection using a cascaded SVM classifier

  • Bergtholdt, Martin
  • Wiemker, Rafael
  • Klinder, Tobias
2016 Conference Proceedings, cited 9 times
Website
Automatic detection of lung nodules from chest CT has been researched intensively over the last decades, resulting also in several commercial products. However, solutions are adopted only slowly into daily clinical routine, as many current CAD systems still potentially miss true nodules while at the same time generating too many false positives (FP). While many earlier approaches had to rely on rather few cases for development, larger databases have now become available and can be used for algorithmic development. In this paper, we address the problem of lung nodule detection via a cascaded SVM classifier. The idea is to sequentially perform two classification tasks in order to select, from an extremely large pool of potential candidates, the few most likely ones. As the initial pool is allowed to contain thousands of candidates, very loose criteria can be applied during this pre-selection. In this way, the chances that a true nodule is falsely rejected as a candidate are reduced significantly. The final algorithm is trained and tested on the full LIDC/IDRI database. Comparison is done against two previously published CAD systems. Overall, the algorithm achieved a sensitivity of 0.859 at 2.5 FP/volume, where the other two achieved sensitivity values of 0.321 and 0.625, respectively. On low-dose data sets, only a slight increase in the number of FP/volume was observed, while the sensitivity was not affected.

Deep-learning framework to detect lung abnormality – A study with chest X-Ray and lung CT scan images

  • Bhandary, Abhir
  • Prabhu, G. Ananth
  • Rajinikanth, V.
  • Thanaraj, K. Palani
  • Satapathy, Suresh Chandra
  • Robbins, David E.
  • Shasky, Charles
  • Zhang, Yu-Dong
  • Tavares, João Manuel R. S.
  • Raja, N. Sri Madhava
Pattern Recognition Letters 2020 Journal Article, cited 0 times
Website
Lung abnormalities are highly risky conditions in humans. The early diagnosis of lung abnormalities is essential to reduce the risk by enabling quick and efficient treatment. This research work aims to propose a Deep-Learning (DL) framework to examine lung pneumonia and cancer. This work proposes two different DL techniques to assess the considered problem: (i) The initial DL method, named a modified AlexNet (MAN), is proposed to classify chest X-Ray images into normal and pneumonia classes. In the MAN, the classification is implemented using a Support Vector Machine (SVM), and its performance is compared against Softmax. Further, its performance is validated with other pre-trained DL techniques, such as AlexNet, VGG16, VGG19 and ResNet50. (ii) The second DL work implements a fusion of handcrafted and learned features in the MAN to improve classification accuracy during lung cancer assessment. This work employs serial fusion and Principal Component Analysis (PCA) based feature selection to enhance the feature vector. The performance of this DL framework is tested using benchmark lung cancer CT images of LIDC-IDRI, and a classification accuracy of 97.27% is attained.

Isolation of Prostate Gland in T1-Weighted Magnetic Resonance Images using Computer Vision

  • Bhattacharya, Sayantan
  • Sharma, Apoorv
  • Gupta, Rinki
  • Bhan, Anupama
2020 Conference Proceedings, cited 0 times
Website

G-DOC Plus–an integrative bioinformatics platform for precision medicine

  • Bhuvaneshwar, Krithika
  • Belouali, Anas
  • Singh, Varun
  • Johnson, Robert M
  • Song, Lei
  • Alaoui, Adil
  • Harris, Michael A
  • Clarke, Robert
  • Weiner, Louis M
  • Gusev, Yuriy
BMC bioinformatics 2016 Journal Article, cited 14 times
Website

Artificial intelligence in cancer imaging: Clinical challenges and applications

  • Bi, Wenya Linda
  • Hosny, Ahmed
  • Schabath, Matthew B
  • Giger, Maryellen L
  • Birkbak, Nicolai J
  • Mehrtash, Alireza
  • Allison, Tavis
  • Arnaout, Omar
  • Abbosh, Christopher
  • Dunn, Ian F
CA: a cancer journal for clinicians 2019 Journal Article, cited 0 times
Website

A comparison of ground truth estimation methods

  • Biancardi, Alberto M
  • Jirapatnakul, Artit C
  • Reeves, Anthony P
International journal of computer assisted radiology and surgery 2010 Journal Article, cited 17 times
Website
PURPOSE: Knowledge of the exact shape of a lesion, or ground truth (GT), is necessary for the development of diagnostic tools by means of algorithm validation, measurement metric analysis, and accurate size estimation. Four methods that estimate GTs from multiple readers' documentations by considering the spatial location of voxels were compared: thresholded Probability-Map at 0.50 (TPM(0.50)) and at 0.75 (TPM(0.75)), simultaneous truth and performance level estimation (STAPLE) and truth estimate from self distances (TESD). METHODS: A subset of the publicly available Lung Image Database Consortium archive was used, selecting pulmonary nodules documented by all four radiologists. The pair-wise similarities between the estimated GTs were analyzed by computing the respective Jaccard coefficients. Then, with respect to the readers' marking volumes, the estimated volumes were ranked and the sign test of the differences between them was performed. RESULTS: (a) The rank variations among the four methods and the volume differences between STAPLE and TESD are not statistically significant; (b) TPM(0.50) estimates are statistically larger; (c) TPM(0.75) estimates are statistically smaller; (d) there is some spatial disagreement in the estimates, as the one-sided 90% confidence intervals between TPM(0.75) and TPM(0.50), TPM(0.75) and STAPLE, TPM(0.75) and TESD, TPM(0.50) and STAPLE, TPM(0.50) and TESD, STAPLE and TESD, respectively, show: [0.67, 1.00], [0.67, 1.00], [0.77, 1.00], [0.93, 1.00], [0.85, 1.00], [0.85, 1.00]. CONCLUSIONS: The method used to estimate the GT is important: the differences highlighted that STAPLE and TESD, notwithstanding a few weaknesses, appear to be equally viable as GT estimators, while the increased availability of computing power is decreasing the appeal afforded to TPMs. Ultimately, the choice of which of the two GT estimation methods should be preferred depends on the specific characteristics of the marked data with respect to the two elements that differentiate the methods' approaches: the relative reliabilities of the readers and the reliability of the region boundaries.

Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views

  • Bier, B.
  • Goldmann, F.
  • Zaech, J. N.
  • Fotouhi, J.
  • Hegeman, R.
  • Grupp, R.
  • Armand, M.
  • Osgood, G.
  • Navab, N.
  • Maier, A.
  • Unberath, M.
Int J Comput Assist Radiol Surg 2019 Journal Article, cited 0 times
Website
PURPOSE: Minimally invasive alternatives are now available for many complex surgeries. These approaches are enabled by the increasing availability of intra-operative image guidance. Yet, fluoroscopic X-rays suffer from projective transformation and thus cannot provide direct views onto anatomy. Surgeons could highly benefit from additional information, such as the anatomical landmark locations in the projections, to support intra-operative decision making. However, detecting landmarks is challenging since the viewing direction changes substantially between views, leading to varying appearance of the same landmark. Therefore, and to the best of our knowledge, view-independent anatomical landmark detection has not been investigated yet. METHODS: In this work, we propose a novel approach to detect multiple anatomical landmarks in X-ray images from arbitrary viewing directions. To this end, a sequential prediction framework based on convolutional neural networks is employed to simultaneously regress all landmark locations. For training, synthetic X-rays are generated with a physically accurate forward model that allows direct application of the trained model to real X-ray images of the pelvis. View invariance is achieved via data augmentation by sampling viewing angles on a spherical segment of 120° × 90°. RESULTS: On synthetic data, a mean prediction error of 5.6 ± 4.5 mm is achieved. Further, we demonstrate that the trained model can be directly applied to real X-rays and show that these detections define correspondences to a respective CT volume, which allows for analytic estimation of the 11-degree-of-freedom projective mapping. CONCLUSION: We present the first tool to detect anatomical landmarks in X-ray images independent of their viewing direction. Access to this information during surgery may benefit decision making and constitutes a first step toward global initialization of 2D/3D registration without the need of calibration. As such, the proposed concept has a strong prospect to facilitate and enhance applications and methods in the realm of image-guided surgery.

Performance analysis of unsupervised optimal fuzzy clustering algorithm for MRI brain tumor segmentation

  • Blessy, SA Praylin Selva
  • Sulochana, C Helen
Technology and Health Care 2014 Journal Article, cited 0 times
Website

Performance analysis of unsupervised optimal fuzzy clustering algorithm for MRI brain tumor segmentation

  • Blessy, SA Praylin Selva
  • Sulochana, C Helen
Technology and Health Care 2015 Journal Article, cited 0 times
Website
BACKGROUND: Segmentation of brain tumor from Magnetic Resonance Imaging (MRI) becomes very complicated due to the structural complexities of human brain and the presence of intensity inhomogeneities. OBJECTIVE: To propose a method that effectively segments brain tumor from MR images and to evaluate the performance of unsupervised optimal fuzzy clustering (UOFC) algorithm for segmentation of brain tumor from MR images. METHODS: Segmentation is done by preprocessing the MR image to standardize intensity inhomogeneities followed by feature extraction, feature fusion and clustering. RESULTS: Different validation measures are used to evaluate the performance of the proposed method using different clustering algorithms. The proposed method using UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. CONCLUSIONS: Validation results clearly show that the proposed method with UOFC algorithm effectively segments brain tumor from MR images.

Integration of convolutional neural networks for pulmonary nodule malignancy assessment in a lung cancer classification pipeline

  • Bonavita, I.
  • Rafael-Palou, X.
  • Ceresa, M.
  • Piella, G.
  • Ribas, V.
  • Gonzalez Ballester, M. A.
Comput Methods Programs Biomed 2020 Journal Article, cited 3 times
Website
BACKGROUND AND OBJECTIVE: The early identification of malignant pulmonary nodules is critical for a better lung cancer prognosis and less invasive chemo- or radiotherapies. Nodule malignancy assessment done by radiologists is extremely useful for planning a preventive intervention but is, unfortunately, a complex, time-consuming and error-prone task. This explains the lack of large datasets containing radiologists' malignancy characterizations of nodules; METHODS: In this article, we propose to assess nodule malignancy through 3D convolutional neural networks and to integrate it into an existing automated end-to-end lung cancer detection pipeline. For training and testing purposes we used independent subsets of the LIDC dataset; RESULTS: Adding the nodule malignancy probabilities to a baseline lung cancer pipeline improved its F1-weighted score by 14.7%, whereas integrating the malignancy model itself using transfer learning outperformed the baseline prediction by 11.8% in F1-weighted score; CONCLUSIONS: Despite the limited size of the lung cancer datasets, integrating predictive models of nodule malignancy improves prediction of lung cancer.

CT Colonography: External Clinical Validation of an Algorithm for Computer-assisted Prone and Supine Registration

  • Boone, Darren J
  • Halligan, Steve
  • Roth, Holger R
  • Hampshire, Tom E
  • Helbren, Emma
  • Slabaugh, Greg G
  • McQuillan, Justine
  • McClelland, Jamie R
  • Hu, Mingxing
  • Punwani, Shonit
Radiology 2013 Journal Article, cited 5 times
Website
PURPOSE: To perform external validation of a computer-assisted registration algorithm for prone and supine computed tomographic (CT) colonography and to compare the results with those of an existing centerline method. MATERIALS AND METHODS: All contributing centers had institutional review board approval; participants provided informed consent. A validation sample of CT colonographic examinations of 51 patients with 68 polyps (6-55 mm) was selected from a publicly available, HIPAA compliant, anonymized archive. No patients were excluded because of poor preparation or inadequate distension. Corresponding prone and supine polyp coordinates were recorded, and endoluminal surfaces were registered automatically by using a computer algorithm. Two observers independently scored three-dimensional endoluminal polyp registration success. Results were compared with those obtained by using the normalized distance along the colonic centerline (NDACC) method. Pairwise Wilcoxon signed rank tests were used to compare gross registration error and McNemar tests were used to compare polyp conspicuity. RESULTS: Registration was possible in all 51 patients, and 136 paired polyp coordinates were generated (68 polyps) to test the algorithm. Overall mean three-dimensional polyp registration error (mean +/- standard deviation, 19.9 mm +/- 20.4) was significantly less than that for the NDACC method (mean, 27.4 mm +/- 15.1; P = .001). Accuracy was unaffected by colonic segment (P = .76) or luminal collapse (P = .066). During endoluminal review by two observers (272 matching tasks, 68 polyps, prone to supine and supine to prone coordinates), 223 (82%) polyp matches were visible (120 degrees field of view) compared with just 129 (47%) when the NDACC method was used (P < .001). By using multiplanar visualization, 48 (70%) polyps were visible after scrolling +/- 15 mm in any multiplanar axis compared with 16 (24%) for NDACC (P < .001). CONCLUSION: Computer-assisted registration is more accurate than the NDACC method for mapping the endoluminal surface and matching the location of polyps in corresponding prone and supine CT colonographic acquisitions.

Solid Indeterminate Nodules with a Radiological Stability Suggesting Benignity: A Texture Analysis of Computed Tomography Images Based on the Kurtosis and Skewness of the Nodule Volume Density Histogram

  • Borguezan, Bruno Max
  • Lopes, Agnaldo José
  • Saito, Eduardo Haruo
  • Higa, Claudio
  • Silva, Aristófanes Corrêa
  • Nunes, Rodolfo Acatauassú
Pulmonary Medicine 2019 Journal Article, cited 0 times
Website
BACKGROUND: The number of incidental findings of pulmonary nodules using imaging methods to diagnose other thoracic or extrathoracic conditions has increased, suggesting the need for in-depth radiological image analyses to identify nodule type and avoid unnecessary invasive procedures. OBJECTIVES: The present study evaluated solid indeterminate nodules with a radiological stability suggesting benignity (SINRSBs) through a texture analysis of computed tomography (CT) images. METHODS: A total of 100 chest CT scans were evaluated, including 50 cases of SINRSBs and 50 cases of malignant nodules. SINRSB CT scans were performed using the same noncontrast enhanced CT protocol and equipment; the malignant nodule data were acquired from several databases. The kurtosis (KUR) and skewness (SKW) values of these tests were determined for the whole volume of each nodule, and the histograms were classified into two basic patterns: peaks or plateaus. RESULTS: The mean (MEN) KUR values of the SINRSBs and malignant nodules were 3.37 ± 3.88 and 5.88 ± 5.11, respectively. The receiver operating characteristic (ROC) curve showed that the sensitivity and specificity for distinguishing SINRSBs from malignant nodules were 65% and 66% for KUR values >6, respectively, with an area under the curve (AUC) of 0.709 (p < 0.0001). The MEN SKW values of the SINRSBs and malignant nodules were 1.73 ± 0.94 and 2.07 ± 1.01, respectively. The ROC curve showed that the sensitivity and specificity for distinguishing malignant nodules from SINRSBs were 65% and 66% for SKW values >3.1, respectively, with an AUC of 0.709 (p < 0.0001). An analysis of the peak and plateau histograms revealed sensitivity, specificity, and accuracy values of 84%, 74%, and 79%, respectively. CONCLUSION: KUR, SKW, and histogram shape can help to noninvasively diagnose SINRSBs but should not be used alone or without considering clinical data.

Radiogenomics of Clear Cell Renal Cell Carcinoma: Associations Between mRNA-Based Subtyping and CT Imaging Features

  • Bowen, Lan
  • Xiaojing, Li
Academic radiology 2018 Journal Article, cited 0 times
Website

Singular value decomposition using block least mean square method for image denoising and compression

  • Boyat, Ajay Kumar
  • Khare, Parth
2015 Conference Proceedings, cited 1 times
Website

Association of Peritumoral Radiomics With Tumor Biology and Pathologic Response to Preoperative Targeted Therapy for HER2 (ERBB2)-Positive Breast Cancer

  • Braman, Nathaniel
  • Prasanna, Prateek
  • Whitney, Jon
  • Singh, Salendra
  • Beig, Niha
  • Etesami, Maryam
  • Bates, David D. B.
  • Gallagher, Katherine
  • Bloch, B. Nicolas
  • Vulchi, Manasa
  • Turk, Paulette
  • Bera, Kaustav
  • Abraham, Jame
  • Sikov, William M.
  • Somlo, George
  • Harris, Lyndsay N.
  • Gilmore, Hannah
  • Plecha, Donna
  • Varadan, Vinay
  • Madabhushi, Anant
JAMA Netw Open 2019 Journal Article, cited 0 times
Website
Importance: There has been significant recent interest in understanding the utility of quantitative imaging to delineate breast cancer intrinsic biological factors and therapeutic response. No clinically accepted biomarkers are as yet available for estimation of response to human epidermal growth factor receptor 2 (currently known as ERBB2, but referred to as HER2 in this study)-targeted therapy in breast cancer. Objective: To determine whether imaging signatures on clinical breast magnetic resonance imaging (MRI) could noninvasively characterize HER2-positive tumor biological factors and estimate response to HER2-targeted neoadjuvant therapy. Design, Setting, and Participants: In a retrospective diagnostic study encompassing 209 patients with breast cancer, textural imaging features extracted within the tumor and annular peritumoral tissue regions on MRI were examined as a means to identify increasingly granular breast cancer subgroups relevant to therapeutic approach and response. First, among a cohort of 117 patients who received an MRI prior to neoadjuvant chemotherapy (NAC) at a single institution from April 27, 2012, through September 4, 2015, imaging features that distinguished HER2+ tumors from other receptor subtypes were identified. Next, among a cohort of 42 patients with HER2+ breast cancers with available MRI and RNA-seq data accumulated from a multicenter, preoperative clinical trial (BrUOG 211B), a signature of the response-associated HER2-enriched (HER2-E) molecular subtype within HER2+ tumors (n = 42) was identified. The association of this signature with pathologic complete response was explored in 2 patient cohorts from different institutions, where all patients received HER2-targeted NAC (n = 28, n = 50). Finally, the association between significant peritumoral features and lymphocyte distribution was explored in patients within the BrUOG 211B trial who had corresponding biopsy hematoxylin-eosin-stained slide images. Data analysis was conducted from January 15, 2017, to February 14, 2019. Main Outcomes and Measures: Evaluation of imaging signatures by the area under the receiver operating characteristic curve (AUC) in identifying HER2+ molecular subtypes and distinguishing pathologic complete response (ypT0/is) to NAC with HER2-targeting. Results: In the 209 patients included (mean [SD] age, 51.1 [11.7] years), features from the peritumoral regions better discriminated HER2-E tumors (maximum AUC, 0.85; 95% CI, 0.79-0.90; 9-12 mm from the tumor) compared with intratumoral features (AUC, 0.76; 95% CI, 0.69-0.84). A classifier combining peritumoral and intratumoral features identified the HER2-E subtype (AUC, 0.89; 95% CI, 0.84-0.93) and was significantly associated with response to HER2-targeted therapy in both validation cohorts (AUC, 0.80; 95% CI, 0.61-0.98 and AUC, 0.69; 95% CI, 0.53-0.84). Features from the 0- to 3-mm peritumoral region were significantly associated with the density of tumor-infiltrating lymphocytes (R2 = 0.57; 95% CI, 0.39-0.75; P = .002). Conclusions and Relevance: A combination of peritumoral and intratumoral characteristics appears to identify intrinsic molecular subtypes of HER2+ breast cancers from imaging, offering insights into immune response within the peritumoral environment and suggesting potential benefit for treatment guidance.

A volumetric technique for fossil body mass estimation applied to Australopithecus afarensis

  • Brassey, Charlotte A
  • O'Mahoney, Thomas G
  • Chamberlain, Andrew T
  • Sellers, William I
Journal of Human Evolution 2018 Journal Article, cited 3 times
Website

Constructing 3D-Printable CAD Models of Prostates from MR Images

  • Breseman, Kelsey
  • Lee, Christopher
  • Bloch, B Nicholas
  • Jaffe, Carl
2013 Conference Proceedings, cited 1 times
Website
This paper describes the development of a procedure to generate patient-specific, three-dimensional (3D) solid models of prostates (and related anatomy) from magnetic resonance (MR) images. The 3D models are rendered in STL file format which can be physically printed or visualized on a holographic display system. An example is presented in which a 3D model is printed following this procedure.

An ensemble learning approach for brain cancer detection exploiting radiomic features

  • Brunese, Luca
  • Mercaldo, Francesco
  • Reginelli, Alfonso
  • Santone, Antonella
Comput Methods Programs Biomed 2019 Journal Article, cited 1 times
Website
BACKGROUND AND OBJECTIVE: Brain cancer is one of the most aggressive tumours: 70% of patients diagnosed with this malignant cancer will not survive. Early detection of brain tumours can be fundamental to increasing survival rates. Brain cancers are classified into four different grades (i.e., I, II, III and IV) according to how normal or abnormal the brain cells look. The following work aims to recognize the different brain cancer grades by analysing brain magnetic resonance images. METHODS: A method to identify the components of an ensemble learner is proposed. The ensemble learner is focused on the discrimination between different brain cancer grades using non-invasive radiomic features. The considered radiomic features belong to five different groups: First Order, Shape, Gray Level Co-occurrence Matrix, Gray Level Run Length Matrix and Gray Level Size Zone Matrix. We evaluate the features' effectiveness through hypothesis testing and through decision boundaries, performance analysis and calibration plots, and thus select the best candidate classifiers for the ensemble learner. RESULTS: We evaluate the proposed method with 111,205 brain magnetic resonance images belonging to two data-sets freely available for research purposes. The results are encouraging: we obtain an accuracy of 99% for the detection of benign grade I and malignant grade II, III and IV brain cancers. CONCLUSION: The experimental results confirm that the ensemble learner designed with the proposed method outperforms the current state-of-the-art approaches in brain cancer grade detection starting from magnetic resonance images.

Quantitative variations in texture analysis features dependent on MRI scanning parameters: A phantom model

  • Buch, Karen
  • Kuno, Hirofumi
  • Qureshi, Muhammad M
  • Li, Baojun
  • Sakai, Osamu
Journal of Applied Clinical Medical Physics 2018 Journal Article, cited 0 times
Website

Quantitative Imaging Biomarker Ontology (QIBO) for Knowledge Representation of Biomedical Imaging Biomarkers

  • Buckler, Andrew J.
  • Ouellette, M.
  • Danagoulian, J.
  • Wernsing, G.
  • Liu, Tiffany Ting
  • Savig, Erica
  • Suzek, Baris E.
  • Rubin, Daniel L.
  • Paik, David
Journal of Digital Imaging 2013 Journal Article, cited 17 times
Website

Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm

  • Buda, Mateusz
  • Saha, Ashirbani
  • Mazurowski, Maciej A
Computers in Biology and Medicine 2019 Journal Article, cited 1 times
Website
Recent analysis identified distinct genomic subtypes of lower-grade glioma tumors which are associated with shape features. In this study, we propose a fully automatic way to quantify tumor imaging characteristics using deep learning-based segmentation and test whether these characteristics are predictive of tumor genomic subtypes. We used preoperative imaging and genomic data of 110 patients from 5 institutions with lower-grade gliomas from The Cancer Genome Atlas. Based on automatic deep learning segmentations, we extracted three features which quantify two-dimensional and three-dimensional characteristics of the tumors. Genomic data for the analyzed cohort of patients consisted of previously identified genomic clusters based on IDH mutation and 1p/19q co-deletion, DNA methylation, gene expression, DNA copy number, and microRNA expression. To analyze the relationship between the imaging features and genomic clusters, we conducted the Fisher exact test for 10 hypotheses for each pair of imaging feature and genomic subtype. To account for multiple hypothesis testing, we applied a Bonferroni correction. P-values lower than 0.005 were considered statistically significant. We found the strongest association between RNASeq clusters and the bounding ellipsoid volume ratio (p < 0.0002) and between RNASeq clusters and margin fluctuation (p < 0.005). In addition, we identified associations between bounding ellipsoid volume ratio and all tested molecular subtypes (p < 0.02) as well as between angular standard deviation and RNASeq cluster (p < 0.02). In terms of automatic tumor segmentation that was used to generate the quantitative image characteristics, our deep learning algorithm achieved a mean Dice coefficient of 82% which is comparable to human performance.
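The Dice coefficient used in the abstract above to score the deep learning segmentations is a simple overlap measure between two binary masks. A minimal NumPy sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary segmentation masks (1.0 = perfect)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    # Dice = 2 * |intersection| / (|pred| + |truth|)
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A mean Dice of 82%, as reported, corresponds to averaging this score over the per-patient tumor masks.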

Comparing nonrigid registration techniques for motion corrected MR prostate diffusion imaging

  • Buerger, C
  • Sénégas, J
  • Kabus, S
  • Carolus, H
  • Schulz, H
  • Agarwal, H
  • Turkbey, B
  • Choyke, PL
  • Renisch, S
Medical Physics 2015 Journal Article, cited 4 times
Website
PURPOSE: T2-weighted magnetic resonance imaging (MRI) is commonly used for anatomical visualization in the pelvis area, such as the prostate, with high soft-tissue contrast. MRI can also provide functional information such as diffusion-weighted imaging (DWI) which depicts the molecular diffusion processes in biological tissues. The combination of anatomical and functional imaging techniques is widely used in oncology, e.g., for prostate cancer diagnosis and staging. However, acquisition-specific distortions as well as physiological motion lead to misalignments between T2 and DWI and consequently to a reduced diagnostic value. Image registration algorithms are commonly employed to correct for such misalignment. METHODS: The authors compare the performance of five state-of-the-art nonrigid image registration techniques for accurate image fusion of DWI with T2. RESULTS: Image data of 20 prostate patients with cancerous lesions or cysts were acquired. All registration algorithms were validated using intensity-based as well as landmark-based techniques. CONCLUSIONS: The authors' results show that the "fast elastic image registration" provides the most accurate results with a target registration error of 1.07 +/- 0.41 mm at minimum execution times of 11 +/- 1 s.

Using computer‐extracted image phenotypes from tumors on breast magnetic resonance imaging to predict breast cancer pathologic stage

  • Burnside, Elizabeth S
  • Drukker, Karen
  • Li, Hui
  • Bonaccio, Ermelinda
  • Zuley, Margarita
  • Ganott, Marie
  • Net, Jose M
  • Sutton, Elizabeth J
  • Brandt, Kathleen R
  • Whitman, Gary J
Cancer 2016 Journal Article, cited 28 times
Website

Medical Image Retrieval Based on Convolutional Neural Network and Supervised Hashing

  • Cai, Yiheng
  • Li, Yuanyuan
  • Qiu, Changyan
  • Ma, Jie
  • Gao, Xurong
IEEE Access 2019 Journal Article, cited 0 times
Website
In recent years, with extensive application in image retrieval and other tasks, a convolutional neural network (CNN) has achieved outstanding performance. In this paper, a new content-based medical image retrieval (CBMIR) framework using CNN and hash coding is proposed. The new framework adopts a Siamese network in which pairs of images are used as inputs, and a model is learned to make images belonging to the same class have similar features by using weight sharing and a contrastive loss function. In each branch of the network, CNN is adapted to extract features, followed by hash mapping, which is used to reduce the dimensionality of feature vectors. In the training process, a new loss function is designed to make the feature vectors more distinguishable, and a regularization term is added to encourage the real value outputs to approximate the desired binary values. In the retrieval phase, the compact binary hash code of the query image is achieved from the trained network and is subsequently compared with the hash codes of the database images. We experimented on two medical image datasets: the cancer imaging archive-computed tomography (TCIA-CT) and the vision and image analysis group/international early lung cancer action program (VIA/I-ELCAP). The results indicate that our method is superior to existing hash methods and CNN methods. Compared with the traditional hashing method, feature extraction based on CNN has advantages. The proposed algorithm combining a Siamese network with the hash method is superior to the classical CNN-based methods. The application of a new loss function can effectively improve retrieval accuracy.

Head and neck cancer patient images for determining auto-segmentation accuracy in T2-weighted magnetic resonance imaging through expert manual segmentations

  • Cardenas, Carlos E
  • Mohamed, Abdallah S R
  • Yang, Jinzhong
  • Gooding, Mark
  • Veeraraghavan, Harini
  • Kalpathy-Cramer, Jayashree
  • Ng, Sweet Ping
  • Ding, Yao
  • Wang, Jihong
  • Lai, Stephen Y
  • Fuller, Clifton D
  • Sharp, Greg
Med Phys 2020 Dataset, cited 0 times
Website
PURPOSE: The use of magnetic resonance imaging (MRI) in radiotherapy treatment planning has rapidly increased due to its ability to evaluate a patient's anatomy without the use of ionizing radiation and due to its high soft tissue contrast. For these reasons, MRI has become the modality of choice for longitudinal and adaptive treatment studies. Automatic segmentation could offer many benefits for these studies. In this work, we describe a T2-weighted MRI dataset of head and neck cancer patients that can be used to evaluate the accuracy of head and neck normal tissue auto-segmentation systems through comparisons to available expert manual segmentations. ACQUISITION AND VALIDATION METHODS: T2-weighted MRI images were acquired for 55 head and neck cancer patients. These scans were collected after radiotherapy computed tomography (CT) simulation scans using a thermoplastic mask to replicate patient treatment position. All scans were acquired on a single 1.5 T Siemens MAGNETOM Aera MRI with two large four-channel flex phased-array coils. The scans covered the region encompassing the nasopharynx region cranially and supraclavicular lymph node region caudally, when possible, in the superior-inferior direction. Manual contours were created for the left/right submandibular gland, left/right parotids, left/right lymph node level II, and left/right lymph node level III. These contours underwent quality assurance to ensure adherence to predefined guidelines, and were corrected if edits were necessary. DATA FORMAT AND USAGE NOTES: The T2-weighted images and RTSTRUCT files are available in DICOM format. The regions of interest are named based on AAPM's Task Group 263 nomenclature recommendations (Glnd_Submand_L, Glnd_Submand_R, LN_Neck_II_L, Parotid_L, Parotid_R, LN_Neck_II_R, LN_Neck_III_L, LN_Neck_III_R).
This dataset is available on The Cancer Imaging Archive (TCIA) by the National Cancer Institute under the collection "AAPM RT-MAC Grand Challenge 2019" (https://doi.org/10.7937/tcia.2019.bcfjqfqb). POTENTIAL APPLICATIONS: This dataset provides head and neck patient MRI scans to evaluate auto-segmentation systems on T2-weighted images. Additional anatomies could be provided at a later time to enhance the existing library of contours.

PARaDIM - A PHITS-based Monte Carlo tool for internal dosimetry with tetrahedral mesh computational phantoms

  • Carter, L. M.
  • Crawford, T. M.
  • Sato, T.
  • Furuta, T.
  • Choi, C.
  • Kim, C. H.
  • Brown, J. L.
  • Bolch, W. E.
  • Zanzonico, P. B.
  • Lewis, J. S.
J Nucl Med 2019 Journal Article, cited 0 times
Website
Mesh-type and voxel-based computational phantoms comprise the current state-of-the-art for internal dose assessment via Monte Carlo simulations, but excel in different aspects, with mesh-type phantoms offering advantages over their voxel counterparts in terms of their flexibility and realistic representation of detailed patient- or subject-specific anatomy. We have developed PARaDIM, a freeware application for implementing tetrahedral mesh-type phantoms in absorbed dose calculations via the Particle and Heavy Ion Transport code System (PHITS). It considers all medically relevant radionuclides including alpha, beta, gamma, positron, and Auger/conversion electron emitters, and handles calculation of mean dose to individual regions, as well as 3D dose distributions for visualization and analysis in a variety of medical imaging software packages. This work describes the development of PARaDIM, documents the measures taken to test and validate its performance, and presents examples to illustrate its uses. Methods: Human, small animal, and cell-level dose calculations were performed with PARaDIM and the results compared with those of widely accepted dosimetry programs and literature data. Several tetrahedral phantoms were developed or adapted using computer-aided modeling techniques for these comparisons. Results: For human dose calculations, agreement of PARaDIM with OLINDA 2.0 was good - within 10-20% for most organs - despite geometric differences among the phantoms tested. Agreement with MIRDcell for cell-level S-value calculations was within 5% in most cases. Conclusion: PARaDIM extends the use of Monte Carlo dose calculations to the broader community in nuclear medicine by providing a user-friendly graphical user interface for calculation setup and execution.
PARaDIM leverages the enhanced anatomical realism provided by advanced computational reference phantoms or bespoke image-derived phantoms to enable improved assessments of radiation doses in a variety of radiopharmaceutical use cases, research, and preclinical development.

Multimodal mixed reality visualisation for intraoperative surgical guidance

  • Cartucho, João
  • Shapira, David
  • Ashrafian, Hutan
  • Giannarou, Stamatia
International Journal of Computer Assisted Radiology and Surgery 2020 Journal Article, cited 0 times
Website

The Impact of Normalization Approaches to Automatically Detect Radiogenomic Phenotypes Characterizing Breast Cancer Receptors Status

  • Castaldo, Rossana
  • Pane, Katia
  • Nicolai, Emanuele
  • Salvatore, Marco
  • Franzese, Monica
Cancers (Basel) 2020 Journal Article, cited 0 times
Website
In breast cancer studies, combining quantitative radiomic and genomic signatures can help identify and characterize radiogenomic phenotypes as a function of molecular receptor status. Biomedical image processing lacks standards for radiomic feature normalization, and neglecting feature normalization can strongly bias the overall analysis. This study evaluates the effect of several normalization techniques on the prediction of four clinical phenotypes, namely estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2), and triple negative (TN) status, from quantitative features. The Cancer Imaging Archive (TCIA) radiomic features from 91 T1-weighted Dynamic Contrast Enhancement MRI of invasive breast cancers were investigated in association with breast invasive carcinoma miRNA expression profiling from The Cancer Genome Atlas (TCGA). Three advanced machine learning techniques (Support Vector Machine, Random Forest, and Naive Bayesian) were investigated to distinguish between molecular prognostic indicators and achieved area under the ROC curve (AUC) values of 86%, 93%, 91%, and 91% for the prediction of ER+ versus ER-, PR+ versus PR-, HER2+ versus HER2-, and triple-negative, respectively. In conclusion, radiomic features enable discrimination of the major breast cancer molecular subtypes and may yield a potential imaging biomarker for advancing precision medicine.

Selection of an algorithm for the classification of solitary pulmonary nodules

  • Castro, Arelys Rivero
  • Correa, Luis Manuel Cruz
  • Lezcano, Jeffrey Artiles
Revista Cubana de Informática Médica 2016 Journal Article, cited 0 times
Website

MRI volume changes of axillary lymph nodes as predictor of pathological complete responses to neoadjuvant chemotherapy in breast cancer

  • Cattell, Renee F.
  • Kang, James J.
  • Ren, Thomas
  • Huang, Pauline B.
  • Muttreja, Ashima
  • Dacosta, Sarah
  • Li, Haifang
  • Baer, Lea
  • Clouston, Sean
  • Palermo, Roxanne
  • Fisher, Paul
  • Bernstein, Cliff
  • Cohen, Jules A.
  • Duong, Tim Q.
Clinical Breast Cancer 2019 Journal Article, cited 0 times
Website
Introduction: Longitudinal monitoring of breast tumor volume over the course of chemotherapy is informative of pathological response. This study aims to determine whether axillary lymph node (aLN) volume by MRI could augment the prediction accuracy of treatment response to neoadjuvant chemotherapy (NAC). Materials and Methods: Level-2a curated data from the I-SPY-1 TRIAL (2002-2006) were used. Patients had stage 2 or 3 breast cancer. MRI was acquired pre-, during and post-NAC. A subset with visible aLN on MRI was identified (N = 132). Prediction of pathological complete response (PCR) was made using breast tumor volume changes, nodal volume changes, and combined breast tumor and nodal volume changes, with sub-stratification with and without large lymph nodes (3 mL or ~1.79 cm diameter cutoff). Receiver operating characteristic curve analysis was used to quantify prediction performance. Results: Rates of change of aLN and breast tumor volume were informative of pathological response, with prediction being most informative early in treatment (AUC: 0.63-0.82) compared to later in treatment (AUC: 0.50-0.73). Larger aLN volume was associated with hormone receptor negativity, with the largest nodal volume for triple negative subtypes. Sub-stratification by node size improved predictive performance, with the best predictive model for large nodes having an AUC of 0.82. Conclusion: Axillary lymph node MRI offers clinically relevant information and has the potential to predict treatment response to neoadjuvant chemotherapy in breast cancer patients.

Highly accurate model for prediction of lung nodule malignancy with CT scans

  • Causey, Jason L
  • Zhang, Junyu
  • Ma, Shiqian
  • Jiang, Bo
  • Qualls, Jake A
  • Politte, David G
  • Prior, Fred
  • Zhang, Shuzhong
  • Huang, Xiuzhen
Scientific Reports 2018 Journal Article, cited 5 times
Website
Computed tomography (CT) examinations are commonly used to predict lung nodule malignancy in patients, which are shown to improve noninvasive early diagnosis of lung cancer. It remains challenging for computational approaches to achieve performance comparable to experienced radiologists. Here we present NoduleX, a systematic approach to predict lung nodule malignancy from CT data, based on deep learning convolutional neural networks (CNN). For training and validation, we analyze >1000 lung nodules in images from the LIDC/IDRI cohort. All nodules were identified and classified by four experienced thoracic radiologists who participated in the LIDC project. NoduleX achieves high accuracy for nodule malignancy classification, with an AUC of ~0.99. This is commensurate with the analysis of the dataset by experienced radiologists. Our approach, NoduleX, provides an effective framework for highly accurate nodule malignancy prediction with the model trained on a large patient population. Our results are replicable with software available at http://bioinformatics.astate.edu/NoduleX .

Renal cell carcinoma: predicting RUNX3 methylation level and its consequences on survival with CT features

  • Cen, Dongzhi
  • Xu, Li
  • Zhang, Siwei
  • Chen, Zhiguang
  • Huang, Yan
  • Li, Ziqi
  • Liang, Bo
European Radiology 2019 Journal Article, cited 0 times
Website
PURPOSE: To investigate associations between CT imaging features, RUNX3 methylation level, and survival in clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: Patients were divided into high and low RUNX3 methylation groups according to RUNX3 methylation levels (the threshold was identified by using X-tile). The CT scanning data from 106 ccRCC patients were retrospectively analyzed. The relationship between RUNX3 methylation level and overall survival was evaluated using Kaplan-Meier analysis and Cox regression analysis (univariate and multivariate). The relationship between RUNX3 methylation level and CT features was evaluated using the chi-square test and logistic regression analysis (univariate and multivariate). RESULTS: A beta value cutoff of 0.53 distinguished high-methylation tumors (N = 44) from low-methylation tumors (N = 62). Patients with lower levels of methylation had longer median overall survival (49.3 vs. 28.4 months; low vs. high, adjusted hazard ratio [HR] 4.933, 95% CI 2.054-11.852, p < 0.001). On univariate logistic regression analysis, four risk factors (margin, side, long diameter, and intratumoral vascularity) were associated with RUNX3 methylation level (all p < 0.05). Multivariate logistic regression analysis found that three risk factors (side: left vs. right, odds ratio [OR] 2.696; p = 0.024; 95% CI 1.138-6.386; margin: ill-defined vs. well-defined, OR 2.685; p = 0.038; 95% CI 1.057-6.820; and intratumoral vascularity: yes vs. no, OR 3.286; p = 0.008; 95% CI 1.367-7.898) were significant independent predictors of high-methylation tumors. This model had an area under the receiver operating characteristic curve (AUC) of 0.725 (95% CI 0.623-0.827). CONCLUSIONS: Higher levels of RUNX3 methylation are associated with shorter survival in ccRCC patients. Presence of intratumoral vascularity, ill-defined margin, and left-side tumor were significant independent predictors of high RUNX3 methylation level.
KEY POINTS: * RUNX3 methylation level is negatively associated with overall survival in ccRCC patients. * Presence of intratumoral vascularity, ill-defined margin, and left side tumor were significant independent predictors of high methylation level of RUNX3 gene.

Segmentation, tracking, and kinematics of lung parenchyma and lung tumors from 4D CT with application to radiation treatment planning

  • Cha, Jungwon
2018 Thesis, cited 0 times
Website
This thesis is concerned with the development of techniques for efficient computerized analysis of 4-D CT data. The goal is a highly automated approach to segmentation of the lung boundary and lung nodules inside the lung. The determination of exact lung tumor location over space and time by image segmentation is an essential step to track thoracic malignancies. Accurate image segmentation helps clinical experts examine the anatomy and structure and determine the disease progress. Since 4-D CT provides structural and anatomical information during tidal breathing, we use the same data to also measure mechanical properties related to deformation of the lung tissue, including Jacobian and strain, at high resolutions and as a function of time. Radiation treatment of patients with lung cancer can benefit from knowledge of these measures of regional ventilation. Graph-cuts techniques have been popular for image segmentation since they are able to treat highly textured data via robust global optimization, avoiding local minima in graph based optimization. The graph-cuts methods have been used to extract globally optimal boundaries from images by s/t cut, with an energy function based on model-specific visual cues and useful topological constraints. The method makes N-dimensional globally optimal segmentation possible with good computational efficiency. Even though the graph-cuts method can extract objects where there is a clear intensity difference, segmentation of organs or tumors poses a challenge. For organ segmentation, many segmentation methods using a shape prior have been proposed. However, in the case of lung tumors, the shape varies from patient to patient, and with location. In this thesis, we use a shape prior for tumors through a training step and PCA analysis based on the Active Shape Model (ASM). The method has been tested on real patient data from the Brown Cancer Center at the University of Louisville.
We performed temporal B-spline deformable registration of the 4-D CT data; this yielded 3-D deformation fields between successive respiratory phases from which measures of regional lung function were determined. During the respiratory cycle, the lung volume changes and the five different lobes of the lung (two in the left and three in the right lung) show different deformation, yielding different strain and Jacobian maps. In this thesis, we determine the regional lung mechanics in the Lagrangian frame of reference through different respiratory phases, for example, Phase10 to 20, Phase10 to 30, Phase10 to 40, and Phase10 to 50. Single photon emission computed tomography (SPECT) lung imaging using radioactive tracers with SPECT ventilation and SPECT perfusion imaging also provides functional information. Therefore, as part of an IRB-approved study, we registered the max-inhale CT volume to both VSPECT and QSPECT data sets using the Demons non-rigid registration algorithm in patient subjects. Subsequently, statistical correlation of CT ventilation images (Jacobian and strain values) with both VSPECT and QSPECT was undertaken. Through statistical analysis with Spearman's rank correlation coefficient, we found that Jacobian values have the highest correlation with both VSPECT and QSPECT.

Segmentation and tracking of lung nodules via graph‐cuts incorporating shape prior and motion from 4D CT

  • Cha, Jungwon
  • Farhangi, Mohammad Mehdi
  • Dunlap, Neal
  • Amini, Amir A
Medical Physics 2018 Journal Article, cited 5 times
Website

Evaluation of data augmentation via synthetic images for improved breast mass detection on mammograms using deep learning

  • Cha, K. H.
  • Petrick, N.
  • Pezeshk, A.
  • Graff, C. G.
  • Sharma, D.
  • Badal, A.
  • Sahiner, B.
J Med Imaging (Bellingham) 2020 Journal Article, cited 1 times
Website
We evaluated whether using synthetic mammograms for training data augmentation may reduce the effects of overfitting and increase the performance of a deep learning algorithm for breast mass detection. Synthetic mammograms were generated using in silico procedural analytic breast and breast mass modeling algorithms followed by simulated x-ray projections of the breast models into mammographic images. In silico breast phantoms containing masses were modeled across the four BI-RADS breast density categories, and the masses were modeled with different sizes, shapes, and margins. A Monte Carlo-based x-ray transport simulation code, MC-GPU, was used to project the three-dimensional phantoms into realistic synthetic mammograms. 2000 mammograms with 2522 masses were generated to augment a real data set during training. From the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set, we used 1111 mammograms (1198 masses) for training, 120 mammograms (120 masses) for validation, and 361 mammograms (378 masses) for testing. We used faster R-CNN for our deep learning network with pretraining from ImageNet using the Resnet-101 architecture. We compared the detection performance when the network was trained using different percentages of the real CBIS-DDSM training set (100%, 50%, and 25%), and when these subsets of the training set were augmented with 250, 500, 1000, and 2000 synthetic mammograms. Free-response receiver operating characteristic (FROC) analysis was performed to compare performance with and without the synthetic mammograms. We generally observed an improved test FROC curve when training with the synthetic images compared to training without them, and the amount of improvement depended on the number of real and synthetic images used in training. Our study shows that enlarging the training data with synthetic samples can increase the performance of deep learning systems.

Computational assessment of stomach tumor volume from multi-slice computerized tomography images in presence of type 2 cancer

  • Chacón, Gerardo
  • Rodríguez, Johel E
  • Bermúdez, Valmore
  • Vera, Miguel
  • Hernández, Juan Diego
  • Vargas, Sandra
  • Pardo, Aldo
  • Lameda, Carlos
  • Madriz, Delia
  • Bravo, Antonio J
F1000Research 2018 Journal Article, cited 0 times
Website

Automated feature extraction in brain tumor by magnetic resonance imaging using gaussian mixture models

  • Chaddad, Ahmad
Journal of Biomedical Imaging 2015 Journal Article, cited 29 times
Website

GBM heterogeneity characterization by radiomic analysis of phenotype anatomical planes

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
2016 Conference Proceedings, cited 4 times
Website

Radiomic analysis of multi-contrast brain MRI for the prediction of survival in patients with glioblastoma multiforme

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
2016 Conference Proceedings, cited 11 times
Website
Image texture features are effective at characterizing the microstructure of cancerous tissues. This paper proposes predicting the survival times of glioblastoma multiforme (GBM) patients using texture features extracted in multi-contrast brain MRI images. Texture features are derived locally from contrast enhancement, necrosis and edema regions in T1-weighted post-contrast and fluid-attenuated inversion-recovery (FLAIR) MRIs, based on the gray-level co-occurrence matrix representation. A statistical analysis based on the Kaplan-Meier method and log-rank test is used to identify the texture features related with the overall survival of GBM patients. Results are presented on a dataset of 39 GBM patients. For FLAIR images, four features (Energy, Correlation, Variance and Inverse of Variance) from contrast enhancement regions and a feature (Homogeneity) from edema regions were shown to be associated with survival times (p-value < 0.01). Likewise, in T1-weighted images, three features (Energy, Correlation, and Variance) from contrast enhancement regions were found to be useful for predicting the overall survival of GBM patients. These preliminary results show the advantages of texture analysis in predicting the prognosis of GBM patients from multi-contrast brain MRI.
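The gray-level co-occurrence matrix (GLCM) features recurring in these abstracts (Energy, Correlation, Variance, Homogeneity) can be sketched from scratch in NumPy. This is a hedged illustration, not the authors' pipeline: the 8-level quantization and the single right-neighbor offset are assumptions, and production code would typically use skimage.feature.graycomatrix/graycoprops instead.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Texture features from a gray-level co-occurrence matrix,
    computed for a single offset (one pixel to the right)."""
    img = np.asarray(img, dtype=float)
    # quantize intensities into `levels` gray bins (0 .. levels-1)
    q = np.floor(img / (img.max() + 1.0) * levels).astype(int)
    glcm = np.zeros((levels, levels), dtype=float)
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1.0          # count horizontal neighbor pairs
    glcm /= glcm.sum()             # joint probability p(i, j)
    i, j = np.indices(glcm.shape)
    mu_i, mu_j = (i * glcm).sum(), (j * glcm).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * glcm).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * glcm).sum())
    corr = 1.0 if sd_i * sd_j == 0 else \
        ((i - mu_i) * (j - mu_j) * glcm).sum() / (sd_i * sd_j)
    return {
        "energy": (glcm ** 2).sum(),
        "homogeneity": (glcm / (1.0 + np.abs(i - j))).sum(),
        "variance": (((i - mu_i) ** 2) * glcm).sum(),
        "correlation": corr,
    }
```

In a radiomic study such features would be computed within each segmented region (e.g. contrast enhancement, necrosis, edema) and then correlated with survival.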

Phenotypic characterization of glioblastoma identified through shape descriptors

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
2016 Conference Proceedings, cited 4 times
Website
This paper proposes quantitatively describing the shape of glioblastoma (GBM) tissue phenotypes as a set of shape features derived from segmentations, for the purposes of discriminating between GBM phenotypes and monitoring tumor progression. GBM patients were identified from the Cancer Genome Atlas, and quantitative MR imaging data were obtained from the Cancer Imaging Archive. Three GBM tissue phenotypes are considered: necrosis, active tumor and edema/invasion. Volumetric tissue segmentations are obtained from registered T1-weighted (T1-WI) post-contrast and fluid-attenuated inversion recovery (FLAIR) MRI modalities. Shape features are computed from the respective tissue phenotype segmentations, and a Kruskal-Wallis test was employed to select features capable of classification with a significance level of p < 0.05. Several classifier models are employed to distinguish phenotypes, where a leave-one-out cross-validation was performed. Eight features were found statistically significant for classifying GBM phenotypes with p < 0.05; the orientation feature was uninformative. Quantitative evaluations show that the SVM achieves the highest classification accuracy of 87.50%, sensitivity of 94.59% and specificity of 92.77%. In summary, the shape descriptors proposed in this work show high performance in predicting GBM tissue phenotypes. They are thus closely linked to morphological characteristics of GBM phenotypes and could potentially be used in a computer-assisted labeling system.

Predicting survival time of lung cancer patients using radiomic analysis

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
  • Abdulkarim, Bassam
Oncotarget 2017 Journal Article, cited 4 times
Website
Objectives: This study investigates the prediction of Non-small cell lung cancer (NSCLC) patient survival outcomes based on radiomic texture and shape features automatically extracted from tumor image data. Materials and Methods: Retrospective analysis involves CT scans of 315 NSCLC patients from The Cancer Imaging Archive (TCIA). A total of 24 image features are computed from labeled tumor volumes of patients within groups defined using NSCLC subtype and TNM staging information. Spearman's rank correlation, Kaplan-Meier estimation and log-rank tests were used to identify features related to long/short NSCLC patient survival groups. Automatic random forest classification was used to predict patient survival group from multivariate feature data. Significance is assessed at P < 0.05 following Holm-Bonferroni correction for multiple comparisons. Results: Significant correlations between radiomic features and survival were observed for four clinical groups: (group, [absolute correlation range]): (large cell carcinoma (LCC) [0.35, 0.43]), (tumor size T2, [0.31, 0.39]), (non lymph node metastasis N0, [0.3, 0.33]), (TNM stage I, [0.39, 0.48]). Significant log-rank relationships between features and survival time were observed for three clinical groups: (group, hazard ratio): (LCC, 3.0), (LCC, 3.9), (T2, 2.5) and (stage I, 2.9). Automatic survival prediction performance (i.e. below/above median) is superior for combined radiomic features with age-TNM in comparison to standard TNM clinical staging information (clinical group, mean area-under-the-ROC-curve (AUC)): (LCC, 75.73%), (N0, 70.33%), (T2, 70.28%) and (TNM-I, 76.17%). Conclusion: Quantitative lung CT imaging features can be used as indicators of survival, in particular for patients with large-cell-carcinoma (LCC), primary-tumor-sizes (T2) and no lymph-node-metastasis (N0).

Multimodal Radiomic Features for the Predicting Gleason Score of Prostate Cancer

  • Chaddad, Ahmad
  • Kucharczyk, Michael
  • Niazi, Tamim
Cancers 2018 Journal Article, cited 1 times
Website

Predicting Gleason Score of Prostate Cancer Patients using Radiomic Analysis

  • Chaddad, Ahmad
  • Niazi, Tamim
  • Probst, Stephan
  • Bladou, Franck
  • Anidjar, Moris
  • Bahoric, Boris
Frontiers in Oncology 2018 Journal Article, cited 0 times
Website

Prediction of survival with multi-scale radiomic analysis in glioblastoma patients

  • Chaddad, Ahmad
  • Sabri, Siham
  • Niazi, Tamim
  • Abdulkarim, Bassam
Medical & biological engineering & computing 2018 Journal Article, cited 1 times
Website
We propose multiscale texture features based on a Laplacian-of-Gaussian (LoG) filter to predict progression-free survival (PFS) and overall survival (OS) in patients newly diagnosed with glioblastoma (GBM). Experiments use features derived from 40 GBM patients with T1-weighted imaging (T1-WI) and fluid-attenuated inversion recovery (FLAIR) images that were segmented manually into areas of active tumor, necrosis, and edema. Multiscale texture features were extracted locally from each of these areas of interest using a LoG filter, and the relation of the features to OS and PFS was investigated using univariate (i.e., Spearman's rank correlation coefficient, log-rank test and Kaplan-Meier estimator) and multivariate analyses (i.e., random forest classifier). Three and seven features were statistically correlated with PFS and OS, respectively, with absolute correlation values between 0.32 and 0.36 and p < 0.05. Three features derived from active tumor regions only were associated with OS (p < 0.05), with hazard ratios (HR) of 2.9, 3, and 3.24, respectively. Combined features showed AUC values of 85.37 and 85.54% for predicting the PFS and OS of GBM patients, respectively, using the random forest (RF) classifier. We presented multiscale texture features to characterize the GBM regions and predict the PFS and OS. The efficiency achieved suggests that this technique can be developed into a GBM MR analysis system suitable for clinical use after a thorough validation involving more patients.

High-Throughput Quantification of Phenotype Heterogeneity Using Statistical Features

  • Chaddad, Ahmad
  • Tanougast, Camel
Advances in Bioinformatics 2015 Journal Article, cited 5 times
Website
Statistical features are widely used in radiology for tumor heterogeneity assessment using the magnetic resonance (MR) imaging technique. In this paper, feature selection based on a decision tree is examined to determine the relevant subset of glioblastoma (GBM) phenotypes in the statistical domain. To discriminate between the active tumor (vAT) and edema/invasion (vE) phenotypes, we selected the significant features using analysis of variance (ANOVA) with a p value < 0.01. Then, we implemented the decision tree to define the optimal feature subset for the phenotype classifier. Naive Bayes (NB), support vector machine (SVM), and decision tree (DT) classifiers were considered to evaluate the performance of the feature-based scheme in terms of its capability to discriminate vAT from vE. All nine features were statistically significant for classifying vAT from vE with a p value < 0.01. Feature selection based on the decision tree showed the best performance in a comparative study against the full feature set. The selected features showed that Kurtosis and Skewness achieved the highest classifier accuracy range of 58.33-75.00% and an AUC range of 73.88-92.50%. This study demonstrated the ability of statistical features to provide a quantitative, individualized measurement of glioblastoma patients and to assess phenotype progression.

Quantitative evaluation of robust skull stripping and tumor detection applied to axial MR images

  • Chaddad, Ahmad
  • Tanougast, Camel
Brain Informatics 2016 Journal Article, cited 28 times
Website

Extracted magnetic resonance texture features discriminate between phenotypes and are associated with overall survival in glioblastoma multiforme patients

  • Chaddad, Ahmad
  • Tanougast, Camel
Medical & biological engineering & computing 2016 Journal Article, cited 16 times
Website
GBM is a markedly heterogeneous brain tumor consisting of three main volumetric phenotypes identifiable on magnetic resonance imaging: necrosis (vN), active tumor (vAT), and edema/invasion (vE). The goal of this study is to identify the three glioblastoma multiforme (GBM) phenotypes using a texture-based gray-level co-occurrence matrix (GLCM) approach and determine whether the texture features of phenotypes are related to patient survival. MR imaging data in 40 GBM patients were analyzed. Phenotypes vN, vAT, and vE were segmented in a preprocessing step using 3D Slicer for rigid registration by T1-weighted imaging and corresponding fluid-attenuated inversion recovery images. The GBM phenotypes were segmented using 3D Slicer tools. Texture features were extracted from the GLCM of GBM phenotypes. Thereafter, a Kruskal-Wallis test was employed to select the significant features. Robust predictive GBM features were identified and underwent numerous classifier analyses to distinguish phenotypes. Kaplan-Meier analysis was also performed to determine the relationship, if any, between phenotype texture features and survival rate. The simulation results showed that 22 texture features were significant with a p value < 0.05. GBM phenotype discrimination based on texture features showed the best accuracy, sensitivity, and specificity of 79.31, 91.67, and 98.75%, respectively. Three texture features derived from active tumor parts (difference entropy, information measure of correlation, and inverse difference) were statistically significant in the prediction of survival, with log-rank p values of 0.001, 0.001, and 0.008, respectively. Among the 22 features examined, three texture features have the ability to predict overall survival for GBM patients, demonstrating the utility of GLCM analyses in both the diagnosis and prognosis of this patient population.

Automated lung field segmentation in CT images using mean shift clustering and geometrical features

  • Chama, Chanukya Krishna
  • Mukhopadhyay, Sudipta
  • Biswas, Prabir Kumar
  • Dhara, Ashis Kumar
  • Madaiah, Mahendra Kasuvinahally
  • Khandelwal, Niranjan
2013 Conference Proceedings, cited 8 times
Website

Using Docker to support reproducible research

  • Chamberlain, Ryan
  • Schommer, Jennifer
2014 Report, cited 30 times
Website

Residual Convolutional Neural Network for the Determination of IDH Status in Low-and High-Grade Gliomas from MR Imaging

  • Chang, Ken
  • Bai, Harrison X
  • Zhou, Hao
  • Su, Chang
  • Bi, Wenya Linda
  • Agbodza, Ena
  • Kavouridis, Vasileios K
  • Senders, Joeky T
  • Boaro, Alessandro
  • Beers, Andrew
Clinical Cancer Research 2018 Journal Article, cited 26 times
Website

Automatic assessment of glioma burden: A deep learning algorithm for fully automated volumetric and bi-dimensional measurement

  • Chang, Ken
  • Beers, Andrew L
  • Bai, Harrison X
  • Brown, James M
  • Ly, K Ina
  • Li, Xuejun
  • Senders, Joeky T
  • Kavouridis, Vasileios K
  • Boaro, Alessandro
  • Su, Chang
  • Bi, Wenya Linda
  • Rapalino, Otto
  • Liao, Weihua
  • Shen, Qin
  • Zhou, Hao
  • Xiao, Bo
  • Wang, Yinyan
  • Zhang, Paul J
  • Pinho, Marco C
  • Wen, Patrick Y
  • Batchelor, Tracy T
  • Boxerman, Jerrold L
  • Arnaout, Omar
  • Rosen, Bruce R
  • Gerstner, Elizabeth R
  • Yang, Li
  • Huang, Raymond Y
  • Kalpathy-Cramer, Jayashree
Neuro Oncol 2019 Journal Article, cited 0 times
Website
BACKGROUND: Longitudinal measurement of glioma burden with MRI is the basis for treatment response assessment. In this study, we developed a deep learning algorithm that automatically segments abnormal FLAIR hyperintensity and contrast-enhancing tumor, quantitating tumor volumes as well as the product of maximum bi-dimensional diameters according to the Response Assessment in Neuro-Oncology (RANO) criteria (AutoRANO). METHODS: Two cohorts of patients were used for this study. One consisted of 843 pre-operative MRIs from 843 patients with low- or high-grade gliomas from four institutions and the second consisted of 713 longitudinal, post-operative MRI visits from 54 patients with newly diagnosed glioblastomas (each with two pre-treatment "baseline" MRIs) from one institution. RESULTS: The automatically generated FLAIR hyperintensity volume, contrast-enhancing tumor volume, and AutoRANO were highly repeatable for the double-baseline visits, with an intraclass correlation coefficient (ICC) of 0.986, 0.991, and 0.977, respectively, on the cohort of post-operative GBM patients. Furthermore, there was high agreement between manually and automatically measured tumor volumes, with ICC values of 0.915, 0.924, and 0.965 for pre-operative FLAIR hyperintensity, post-operative FLAIR hyperintensity, and post-operative contrast-enhancing tumor volumes, respectively. Lastly, the ICC for comparing manually and automatically derived longitudinal changes in tumor burden was 0.917, 0.966, and 0.850 for FLAIR hyperintensity volume, contrast-enhancing tumor volume, and RANO measures, respectively. CONCLUSIONS: Our automated algorithm demonstrates potential utility for evaluating tumor burden in complex post-treatment settings, although further validation in multi-center clinical trials will be needed prior to widespread implementation.

Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas

  • Chang, P
  • Grinband, J
  • Weinberg, BD
  • Bardis, M
  • Khy, M
  • Cadena, G
  • Su, M-Y
  • Cha, S
  • Filippi, CG
  • Bota, D
American Journal of Neuroradiology 2018 Journal Article, cited 5 times
Website

Primer for Image Informatics in Personalized Medicine

  • Chang, Young Hwan
  • Foley, Patrick
  • Azimi, Vahid
  • Borkar, Rohan
  • Lefman, Jonathan
Procedia Engineering 2016 Journal Article, cited 0 times
Website

“Big data” and “open data”: What kind of access should researchers enjoy?

  • Chatellier, Gilles
  • Varlet, Vincent
  • Blachier-Poisson, Corinne
Thérapie 2016 Journal Article, cited 0 times

MRI prostate cancer radiomics: Assessment of effectiveness and perspectives

  • Chatzoudis, Pavlos
2018 Thesis, cited 0 times
Website

A Fast Semi-Automatic Segmentation Tool for Processing Brain Tumor Images

  • Chen, Andrew X
  • Rabadán, Raúl
2017 Book Section, cited 0 times
Website

Low-dose CT via convolutional neural network

  • Chen, Hu
  • Zhang, Yi
  • Zhang, Weihua
  • Liao, Peixi
  • Li, Ke
  • Zhou, Jiliu
  • Wang, Ge
Biomedical Optics Express 2017 Journal Article, cited 89 times
Website

Revealing Tumor Habitats from Texture Heterogeneity Analysis for Classification of Lung Cancer Malignancy and Aggressiveness

  • Cherezov, Dmitry
  • Goldgof, Dmitry
  • Hall, Lawrence
  • Gillies, Robert
  • Schabath, Matthew
  • Müller, Henning
  • Depeursinge, Adrien
Scientific Reports 2019 Journal Article, cited 0 times
Website
We propose an approach for characterizing structural heterogeneity of lung cancer nodules using Computed Tomography Texture Analysis (CTTA). Measures of heterogeneity were used to test the hypothesis that heterogeneity can be used as a predictor of nodule malignancy and patient survival. To do this, we use the National Lung Screening Trial (NLST) dataset to determine if heterogeneity can represent differences between nodules in lung cancer and nodules in non-lung cancer patients. The training set contained 253 participants and the test set 207. To discriminate cancerous from non-cancerous nodules at the time of diagnosis, a combination of heterogeneity and radiomic features was evaluated, producing the best area under the receiver operating characteristic curve (AUROC) of 0.85 and an accuracy of 81.64%. Second, we tested the hypothesis that heterogeneity can predict patient survival. We analyzed 40 patients diagnosed with lung adenocarcinoma (20 short-term and 20 long-term survival patients) using a leave-one-out cross-validation approach for performance evaluation. A combination of heterogeneity features and radiomic features produces an AUROC of 0.9 and an accuracy of 85% to discriminate long- and short-term survivors.

Computed Tomography (CT) Image Quality Enhancement via a Uniform Framework Integrating Noise Estimation and Super-Resolution Networks

  • Chi, Jianning
  • Zhang, Yifei
  • Yu, Xiaosheng
  • Wang, Ying
  • Wu, Chengdong
Sensors (Basel) 2019 Journal Article, cited 2 times
Website
Computed tomography (CT) imaging technology has been widely used to assist medical diagnosis in recent years. However, noise introduced during imaging, and data compression during storage and transmission, often degrade image quality, resulting in unreliable performance of the post-processing steps in computer-assisted diagnosis systems (CADs), such as medical image segmentation, feature extraction, and medical image classification. Since the degradation of medical images typically appears as noise and low-resolution blurring, in this paper, we propose a uniform deep convolutional neural network (DCNN) framework to handle the de-noising and super-resolution of the CT image at the same time. The framework consists of two steps: Firstly, a dense-inception network integrating an inception structure and dense skip connections is proposed to estimate the noise level. The inception structure is used to extract the noise and blurring features with respect to multiple receptive fields, while the dense skip connections can reuse those extracted features and transfer them across the network. Secondly, a modified residual-dense network combined with a joint loss is proposed to reconstruct the high-resolution image with low noise. The inception block is applied on each skip connection of the dense-residual network so that the structure features of the image are transferred through the network more than the noise and blurring features. Moreover, both the perceptual loss and the mean square error (MSE) loss are used to constrain the network, leading to better performance in the reconstruction of image edges and details. Our proposed network integrates degradation estimation, noise removal, and image super-resolution in one uniform framework to enhance medical image quality. We apply our method to The Cancer Imaging Archive (TCIA) public dataset to evaluate its ability in medical image quality enhancement. The experimental results demonstrate that the proposed method outperforms the state-of-the-art methods on de-noising and super-resolution by providing higher peak signal to noise ratio (PSNR) and structure similarity index (SSIM) values.

SVM-PUK Kernel Based MRI-brain Tumor Identification Using Texture and Gabor Wavelets

  • Chinnam, Siva
  • Sistla, Venkatramaphanikumar
  • Kolli, Venkata
Traitement du Signal 2019 Journal Article, cited 0 times
Website

Imaging phenotypes of breast cancer heterogeneity in pre-operative breast Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) scans predict 10-year recurrence

  • Chitalia, Rhea
  • Rowland, Jennifer
  • McDonald, Elizabeth S
  • Pantalone, Lauren
  • Cohen, Eric A
  • Gastounioti, Aimilia
  • Feldman, Michael
  • Schnall, Mitchell
  • Conant, Emily
  • Kontos, Despina
Clinical Cancer Research 2019 Journal Article, cited 0 times
Website

Classification of the glioma grading using radiomics analysis

  • Cho, Hwan-ho
  • Lee, Seung-hak
  • Kim, Jonghoon
  • Park, Hyunjin
PeerJ 2018 Journal Article, cited 0 times
Website

Integrative analysis of imaging and transcriptomic data of the immune landscape associated with tumor metabolism in lung adenocarcinoma: Clinical and prognostic implications

  • Choi, Hongyoon
  • Na, Kwon Joong
Theranostics 2018 Journal Article, cited 0 times
Website

Machine learning and radiomic phenotyping of lower grade gliomas: improving survival prediction

  • Choi, Yoon Seong
  • Ahn, Sung Soo
  • Chang, Jong Hee
  • Kang, Seok-Gu
  • Kim, Eui Hyun
  • Kim, Se Hoon
  • Jain, Rajan
  • Lee, Seung-Koo
Eur Radiol 2020 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: Recent studies have highlighted the importance of isocitrate dehydrogenase (IDH) mutational status in stratifying biologically distinct subgroups of gliomas. This study aimed to evaluate whether MRI-based radiomic features could improve the accuracy of survival predictions for lower grade gliomas over clinical and IDH status. MATERIALS AND METHODS: Radiomic features (n = 250) were extracted from preoperative MRI data of 296 lower grade glioma patients from our institutional (n = 205) and The Cancer Genome Atlas (TCGA)/The Cancer Imaging Archive (TCIA) (n = 91) datasets. For predicting overall survival, random survival forest models were trained with radiomic features and non-imaging prognostic factors including age, resection extent, WHO grade, and IDH status on the institutional dataset, and validated on the TCGA/TCIA dataset. The performance of the random survival forest (RSF) model and the incremental value of radiomic features were assessed by time-dependent receiver operating characteristics. RESULTS: The radiomics RSF model identified 71 radiomic features to predict overall survival, which were successfully validated on the TCGA/TCIA dataset (iAUC, 0.620; 95% CI, 0.501-0.756). Relative to the RSF model from the non-imaging prognostic parameters, the addition of radiomic features significantly improved the overall survival prediction accuracy of the random survival forest model (iAUC, 0.627 vs. 0.709; difference, 0.097; 95% CI, 0.003-0.209). CONCLUSION: Radiomic phenotyping with machine learning can improve survival prediction over clinical profile and genomic data for lower grade gliomas. KEY POINTS: * Radiomics analysis with machine learning can improve survival prediction over the non-imaging factors (clinical and molecular profiles) for lower grade gliomas, across different institutions.

Incremental Prognostic Value of ADC Histogram Analysis over MGMT Promoter Methylation Status in Patients with Glioblastoma

  • Choi, Yoon Seong
  • Ahn, Sung Soo
  • Kim, Dong Wook
  • Chang, Jong Hee
  • Kang, Seok-Gu
  • Kim, Eui Hyun
  • Kim, Se Hoon
  • Rim, Tyler Hyungtaek
  • Lee, Seung-Koo
Radiology 2016 Journal Article, cited 18 times
Website
Purpose To investigate the incremental prognostic value of apparent diffusion coefficient (ADC) histogram analysis over O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status in patients with glioblastoma and the correlation between ADC parameters and MGMT status. Materials and Methods This retrospective study was approved by the institutional review board, and informed consent was waived. A total of 112 patients with glioblastoma were divided into training (74 patients) and test (38 patients) sets. Overall survival (OS) and progression-free survival (PFS) were analyzed with ADC parameters, MGMT status, and other clinical factors. Multivariate Cox regression models with and without ADC parameters were constructed. Model performance was assessed with c index and receiver operating characteristic curve analyses for 12- and 16-month OS and 12-month PFS in the training set and validated in the test set. ADC parameters were compared according to MGMT status for the entire cohort. Results By using ADC parameters, the c indices and diagnostic accuracies for 12- and 16-month OS and 12-month PFS in the models showed significant improvement, with the exception of c indices in the models for PFS (P < .05 for all) in the training set. In the test set, the diagnostic accuracy was improved by using ADC parameters and was significant, with the 25th and 50th percentiles of ADC for 16-month OS (P = .040 and P = .047) and the 25th percentile of ADC for 12-month PFS (P = .026). No significant correlation was found between ADC parameters and MGMT status. Conclusion ADC histogram analysis had incremental prognostic value over MGMT promoter methylation status in patients with glioblastoma. © RSNA, 2016. Online supplemental material is available for this article.

ST3GAL1-associated transcriptomic program in glioblastoma tumor growth, invasion, and prognosis

  • Chong, Yuk Kien
  • Sandanaraj, Edwin
  • Koh, Lynnette WH
  • Thangaveloo, Moogaambikai
  • Tan, Melanie SY
  • Koh, Geraldene RH
  • Toh, Tan Boon
  • Lim, Grace GY
  • Holbrook, Joanna D
  • Kon, Oi Lian
  • Nadarajah, M.
  • Ng, I.
  • Ng, W. H.
  • Tan, N. S.
  • Lim, K. L.
  • Tang, C.
  • Ang, B. T.
Journal of the National Cancer Institute 2016 Journal Article, cited 16 times
Website
BACKGROUND: Cell surface sialylation is associated with tumor cell invasiveness in many cancers. Glioblastoma is the most malignant primary brain tumor and is highly infiltrative. ST3GAL1 sialyltransferase gene is amplified in a subclass of glioblastomas, and its role in tumor cell self-renewal remains unexplored. METHODS: Self-renewal of patient glioma cells was evaluated using clonogenic, viability, and invasiveness assays. ST3GAL1 was identified from differentially expressed genes in Peanut Agglutinin-stained cells and validated in REMBRANDT (n = 390) and Gravendeel (n = 276) clinical databases. Gene set enrichment analysis revealed upstream processes. TGFbeta signaling on ST3GAL1 transcription was assessed using chromatin immunoprecipitation. Transcriptome analysis of ST3GAL1 knockdown cells was done to identify downstream pathways. A constitutively active FoxM1 mutant lacking critical anaphase-promoting complex/cyclosome ([APC/C]-Cdh1) binding sites was used to evaluate ST3Gal1-mediated regulation of FoxM1 protein. Finally, the prognostic role of ST3Gal1 was determined using an orthotopic xenograft model (3 mouse groups comprising nontargeting and 2 clones of ST3GAL1 knockdown in NNI-11 [8 per group] and NNI-21 [6 per group]), and the correlation with patient clinical information. All statistical tests on patients' data were two-sided; other P values below are one-sided. RESULTS: High ST3GAL1 expression defines an invasive subfraction with self-renewal capacity; its loss of function prolongs survival in a mouse model established from mesenchymal NNI-11 (P < .001; groups of 8 in 3 arms: nontargeting, C1, and C2 clones of ST3GAL1 knockdown). ST3GAL1 transcriptomic program stratifies patient survival (hazard ratio [HR] = 2.47, 95% confidence interval [CI] = 1.72 to 3.55, REMBRANDT P = 1.92 x 10^-8; HR = 2.89, 95% CI = 1.94 to 4.30, Gravendeel P = 1.05 x 10^-11), independent of age and histology, and associates with higher tumor grade and T2 volume (P = 1.46 x 10^-4). TGFbeta signaling, elevated in mesenchymal patients, correlates with high ST3GAL1 (REMBRANDT glioma cor = 0.31, P = 2.29 x 10^-10; Gravendeel glioma cor = 0.50, P = 3.63 x 10^-20). The transcriptomic program upon ST3GAL1 knockdown enriches for mitotic cell cycle processes. FoxM1 was identified as a statistically significantly modulated gene (P = 2.25 x 10^-5) and mediates ST3Gal1 signaling via the (APC/C)-Cdh1 complex. CONCLUSIONS: The ST3GAL1-associated transcriptomic program portends poor prognosis in glioma patients and enriches for higher tumor grades of the mesenchymal molecular classification. We show that ST3Gal1-regulated self-renewal traits are crucial to the sustenance of glioblastoma multiforme growth.

Application of Artificial Neural Networks for Prognostic Modeling in Lung Cancer after Combining Radiomic and Clinical Features

  • Chufal, Kundan S.
  • Ahmad, Irfan
  • Pahuja, Anjali K.
  • Miller, Alexis A.
  • Singh, Rajpal
  • Chowdhary, Rahul L.
Asian Journal of Oncology 2019 Journal Article, cited 0 times
Website
Objective This study aimed to investigate machine learning (ML) and artificial neural networks (ANNs) in the prognostic modeling of lung cancer, utilizing high-dimensional data. Materials and Methods A computed tomography (CT) dataset of inoperable non-small cell lung carcinoma (NSCLC) patients with embedded tumor segmentation and survival status, comprising 422 patients, was selected. Radiomic data extraction was performed on the Computational Environment for Radiotherapy Research (CERR). The survival probability was first determined based on clinical features only and then by unsupervised ML methods. Supervised ANN modeling was performed by direct and hybrid modeling, which were subsequently compared. Statistical significance was set at p < 0.05. Results Survival analyses based on clinical features alone were not significant, except for gender. ML clustering performed on unselected radiomic and clinical data demonstrated a significant difference in survival (two-step cluster, median overall survival [mOS]: 30.3 vs. 17.2 m; p = 0.03; K-means cluster, mOS: 21.1 vs. 7.3 m; p < 0.001). Direct ANN modeling yielded a better overall model accuracy utilizing the multilayer perceptron (MLP) than the radial basis function (RBF; 79.2 vs. 61.4%, respectively). Hybrid modeling with MLP (after feature selection with ML) resulted in an overall model accuracy of 80%. There was no difference in model accuracy between direct and hybrid modeling (p = 0.164). Conclusion Our preliminary study supports the application of ANNs in predicting outcomes based on radiomic and clinical data.

Results of initial low-dose computed tomographic screening for lung cancer

  • Church, T. R.
  • Black, W. C.
  • Aberle, D. R.
  • Berg, C. D.
  • Clingan, K. L.
  • Duan, F.
  • Fagerstrom, R. M.
  • Gareen, I. F.
  • Gierada, D. S.
  • Jones, G. C.
  • Mahon, I.
  • Marcus, P. M.
  • Sicks, J. D.
  • Jain, A.
  • Baum, S.
The New England Journal of Medicine 2013 Journal Article, cited 529 times
Website
BACKGROUND: Lung cancer is the largest contributor to mortality from cancer. The National Lung Screening Trial (NLST) showed that screening with low-dose helical computed tomography (CT) rather than with chest radiography reduced mortality from lung cancer. We describe the screening, diagnosis, and limited treatment results from the initial round of screening in the NLST to inform and improve lung-cancer-screening programs. METHODS: At 33 U.S. centers, from August 2002 through April 2004, we enrolled asymptomatic participants, 55 to 74 years of age, with a history of at least 30 pack-years of smoking. The participants were randomly assigned to undergo annual screening, with the use of either low-dose CT or chest radiography, for 3 years. Nodules or other suspicious findings were classified as positive results. This article reports findings from the initial screening examination. RESULTS: A total of 53,439 eligible participants were randomly assigned to a study group (26,715 to low-dose CT and 26,724 to chest radiography); 26,309 participants (98.5%) and 26,035 (97.4%), respectively, underwent screening. A total of 7191 participants (27.3%) in the low-dose CT group and 2387 (9.2%) in the radiography group had a positive screening result; in the respective groups, 6369 participants (90.4%) and 2176 (92.7%) had at least one follow-up diagnostic procedure, including imaging in 5717 (81.1%) and 2010 (85.6%) and surgery in 297 (4.2%) and 121 (5.2%). Lung cancer was diagnosed in 292 participants (1.1%) in the low-dose CT group versus 190 (0.7%) in the radiography group (stage 1 in 158 vs. 70 participants and stage IIB to IV in 120 vs. 112). Sensitivity and specificity were 93.8% and 73.4% for low-dose CT and 73.5% and 91.3% for chest radiography, respectively. CONCLUSIONS: The NLST initial screening results are consistent with the existing literature on screening by means of low-dose CT and chest radiography, suggesting that a reduction in mortality from lung cancer is achievable at U.S. screening centers that have staff experienced in chest CT. (Funded by the National Cancer Institute; NLST ClinicalTrials.gov number, NCT00047385.).

Automatic detection of spiculation of pulmonary nodules in computed tomography images

  • Ciompi, F
  • Jacobs, C
  • Scholten, ET
  • van Riel, SJ
  • Wille, MMW
  • Prokop, M
  • van Ginneken, B
2015 Conference Proceedings, cited 5 times
Website

Reproducing 2D breast mammography images with 3D printed phantoms

  • Clark, Matthew
  • Ghammraoui, Bahaa
  • Badal, Andreu
2016 Conference Proceedings, cited 2 times
Website

The Quantitative Imaging Network: NCI's Historical Perspective and Planned Goals

  • Clarke, Laurence P.
  • Nordstrom, Robert J.
  • Zhang, Huiming
  • Tandon, Pushpa
  • Zhang, Yantian
  • Redmond, George
  • Farahani, Keyvan
  • Kelloff, Gary
  • Henderson, Lori
  • Shankar, Lalitha
  • Deye, James
  • Capala, Jacek
  • Jacobs, Paula
Translational oncology 2014 Journal Article, cited 0 times
Website

Using Machine Learning Applied to Radiomic Image Features for Segmenting Tumour Structures

  • Clifton, Henry
  • Vial, Alanna
  • Miller, Andrew
  • Ritz, Christian
  • Field, Matthew
  • Holloway, Lois
  • Ros, Montserrat
  • Carolan, Martin
  • Stirling, David
2019 Conference Paper, cited 0 times
Website
Lung cancer (LC) was the predicted leading cause of Australian cancer fatalities in 2018 (around 9,200 deaths). Non-Small Cell Lung Cancer (NSCLC) tumours with larger amounts of heterogeneity have been linked to a worse outcome. Medical imaging is widely used in oncology and non-invasively collects data about the whole tumour. The field of radiomics uses these medical images to extract quantitative image features and promises further understanding of the disease at the time of diagnosis, during treatment and in follow up. It is well known that manual and semi-automatic tumour segmentation methods are subject to inter-observer variability, which reduces confidence in the treatment region and extent of disease. This leads to tumour under- and over-estimation, which can impact on treatment outcome and treatment-induced morbidity. This research aims to use radiomic features centred at each pixel to segment the location of the lung tumour on Computed Tomography (CT) scans. To achieve this objective, a Decision Tree (DT) model was trained using sampled CT data from eight patients. The data consisted of 25 pixel-based texture features calculated from four Gray Level Matrices (GLMs) describing the region around each pixel. The model was assessed using an unseen patient through both a confusion matrix and interpretation of the segment. The findings showed that the model accurately (AUROC = 83.9%) predicts tumour location within the test data, concluding that pixel-based textural features likely contribute to segmenting the lung tumour. The prediction displayed a strong representation of the manually segmented Region of Interest (ROI), which is considered the ground truth for the purpose of this research.

Automated Medical Image Modality Recognition by Fusion of Visual and Text Information

  • Codella, Noel
  • Connell, Jonathan
  • Pankanti, Sharath
  • Merler, Michele
  • Smith, John R
2014 Book Section, cited 10 times
Website

Semantic Model Vector for ImageCLEF2013

  • Codella, Noel
  • Merler, Michele
2014 Report, cited 0 times
Website

NCI Workshop Report: Clinical and Computational Requirements for Correlating Imaging Phenotypes with Genomics Signatures

  • Colen, Rivka
  • Foster, Ian
  • Gatenby, Robert
  • Giger, Mary Ellen
  • Gillies, Robert
  • Gutman, David
  • Heller, Matthew
  • Jain, Rajan
  • Madabhushi, Anant
  • Madhavan, Subha
  • Napel, Sandy
  • Rao, Arvind
  • Saltz, Joel
  • Tatum, James
  • Verhaak, Roeland
  • Whitman, Gary
Translational oncology 2014 Journal Article, cited 39 times
Website

Imaging genomic mapping of an invasive MRI phenotype predicts patient outcome and metabolic dysfunction: a TCGA glioma phenotype research group project

  • Colen, Rivka R
  • Vangel, Mark
  • Wang, Jixin
  • Gutman, David A
  • Hwang, Scott N
  • Wintermark, Max
  • Jain, Rajan
  • Jilwan-Nicolas, Manal
  • Chen, James Y
  • Raghavan, Prashant
  • Holder, C. A.
  • Rubin, D.
  • Huang, E.
  • Kirby, J.
  • Freymann, J.
  • Jaffe, C. C.
  • Flanders, A.
  • TCGA Glioma Phenotype Research Group
  • Zinn, P. O.
BMC Medical Genomics 2014 Journal Article, cited 47 times
Website
BACKGROUND: Invasion of tumor cells into adjacent brain parenchyma is a major cause of treatment failure in glioblastoma. Furthermore, invasive tumors are shown to have a different genomic composition and metabolic abnormalities that allow for a more aggressive GBM phenotype and resistance to therapy. We thus seek to identify those genomic abnormalities associated with a highly aggressive and invasive GBM imaging-phenotype. METHODS: We retrospectively identified 104 treatment-naive glioblastoma patients from The Cancer Genome Atlas (TCGA) who had gene expression profiles and corresponding MR imaging available in The Cancer Imaging Archive (TCIA). The standardized VASARI feature-set criteria were used for the qualitative visual assessments of invasion. Patients were assigned to classes based on the presence (Class A) or absence (Class B) of statistically significant invasion parameters to create an invasive imaging signature; imaging genomic analysis was subsequently performed using GenePattern Comparative Marker Selection module (Broad Institute). RESULTS: Our results show that patients with a combination of deep white matter tracts and ependymal invasion (Class A) on imaging had a significant decrease in overall survival as compared to patients with absence of such invasive imaging features (Class B) (8.7 versus 18.6 months, p < 0.001). Mitochondrial dysfunction was the top canonical pathway associated with Class A gene expression signature. The MYC oncogene was predicted to be the top activation regulator in Class A. CONCLUSION: We demonstrate that MRI biomarker signatures can identify distinct GBM phenotypes associated with highly significant survival differences and specific molecular pathways. This study identifies mitochondrial dysfunction as the top canonical pathway in a very aggressive GBM phenotype. Thus, imaging-genomic analyses may prove invaluable in detecting novel targetable genomic pathways.

Glioblastoma: Imaging Genomic Mapping Reveals Sex-specific Oncogenic Associations of Cell Death

  • Colen, Rivka R
  • Wang, Jixin
  • Singh, Sanjay K
  • Gutman, David A
  • Zinn, Pascal O
Radiology 2014 Journal Article, cited 36 times
Website
PURPOSE: To identify the molecular profiles of cell death as defined by necrosis volumes at magnetic resonance (MR) imaging and uncover sex-specific molecular signatures potentially driving oncogenesis and cell death in glioblastoma (GBM). MATERIALS AND METHODS: This retrospective study was HIPAA compliant and had institutional review board approval, with waiver of the need to obtain informed consent. The molecular profiles for 99 patients (30 female patients, 69 male patients) were identified from the Cancer Genome Atlas, and quantitative MR imaging data were obtained from the Cancer Imaging Archive. Volumes of necrosis at MR imaging were extracted. Differential gene expression profiles were obtained in those patients (including male and female patients separately) with high versus low MR imaging volumes of tumor necrosis. Ingenuity Pathway Analysis was used for messenger RNA-microRNA interaction analysis. A histopathologic data set (n = 368; 144 female patients, 224 male patients) was used to validate the MR imaging findings by assessing the amount of cell death. A connectivity map was used to identify therapeutic agents potentially targeting sex-specific cell death in GBM. RESULTS: Female patients showed significantly lower volumes of necrosis at MR imaging than male patients (6821 vs 11,050 mm³, P = .03). Female patients, unlike male patients, with high volumes of necrosis at imaging had significantly shorter survival (6.5 vs 14.5 months, P = .01). Transcription factor analysis suggested that cell death in female patients with GBM is associated with MYC, while that in male patients is associated with TP53 activity. Additionally, a group of therapeutic agents that can potentially be tested to target cell death in a sex-specific manner was identified. CONCLUSION: The results of this study suggest that cell death in GBM may be driven by sex-specific molecular pathways.

Extended Modality Propagation: Image Synthesis of Pathological Cases

  • N. Cordier
  • H. Delingette
  • M. Le
  • N. Ayache
IEEE Transactions on Medical Imaging 2016 Journal Article, cited 18 times
Website

Combined Megavoltage and Contrast-Enhanced Radiotherapy as an Intrafraction Motion Management Strategy in Lung SBRT

  • Coronado-Delgado, Daniel A
  • Garnica-Garza, Hector M
Technol Cancer Res Treat 2019 Journal Article, cited 0 times
Website
Using Monte Carlo simulation and a realistic patient model, it is shown that the volume of healthy tissue irradiated at therapeutic doses can be drastically reduced using a combination of standard megavoltage and kilovoltage X-ray beams with a contrast agent previously loaded into the tumor, without the need to reduce standard treatment margins. Four-dimensional computed tomography images of 2 patients with a centrally located and a peripherally located tumor were obtained from a public database and subsequently used to plan robotic stereotactic body radiotherapy treatments. Two modalities are assumed: conventional high-energy stereotactic body radiotherapy and a treatment with contrast agent loaded in the tumor and a kilovoltage X-ray beam replacing the megavoltage beam (contrast-enhanced radiotherapy). For each patient model, 2 planning target volumes were designed: one following the recommendations from either Radiation Therapy Oncology Group (RTOG) 0813 or RTOG 0915 task group depending on the patient model and another with a 2-mm uniform margin determined solely on beam penumbra considerations. The optimized treatments with RTOG margins were imparted to the moving phantom to model the dose distribution that would be obtained as a result of intrafraction motion. Treatment plans are then compared to the plan with the 2-mm uniform margin considered to be the ideal plan. It is shown that even for treatments in which only one-fifth of the total dose is imparted via the contrast-enhanced radiotherapy modality and with the use of standard treatment margins, the resultant absorbed dose distributions are such that the volume of healthy tissue irradiated to high doses is close to what is obtained under ideal conditions.

Bayesian Kernel Models for Statistical Genetics and Cancer Genomics

  • Crawford, Lorin
2017 Thesis, cited 0 times

Predicting the ISUP grade of clear cell renal cell carcinoma with multiparametric MR and multiphase CT radiomics

  • Cui, Enming
  • Li, Zhuoyong
  • Ma, Changyi
  • Li, Qing
  • Lei, Yi
  • Lan, Yong
  • Yu, Juan
  • Zhou, Zhipeng
  • Li, Ronggang
  • Long, Wansheng
  • Lin, Fan
Eur Radiol 2020 Journal Article, cited 0 times
Website
OBJECTIVE: To investigate externally validated magnetic resonance (MR)-based and computed tomography (CT)-based machine learning (ML) models for grading clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: Patients with pathologically proven ccRCC in 2009-2018 were retrospectively included for model development and internal validation; patients from another independent institution and The Cancer Imaging Archive dataset were included for external validation. Features were extracted from T1-weighted, T2-weighted, corticomedullary-phase (CMP), and nephrographic-phase (NP) MR as well as precontrast-phase (PCP), CMP, and NP CT. CatBoost was used for ML-model investigation. The reproducibility of texture features was assessed using intraclass correlation coefficient (ICC). Accuracy (ACC) was used for ML-model performance evaluation. RESULTS: Twenty external and 440 internal cases were included. Among 368 and 276 texture features from MR and CT, 322 and 250 features with good to excellent reproducibility (ICC ≥ 0.75) were included for ML-model development. The best MR- and CT-based ML models satisfactorily distinguished high- from low-grade ccRCCs in internal (MR-ACC = 73% and CT-ACC = 79%) and external (MR-ACC = 74% and CT-ACC = 69%) validation. Compared to single-sequence or single-phase images, the classifiers based on all-sequence MR (71% to 73% in internal and 64% to 74% in external validation) and all-phase CT (77% to 79% in internal and 61% to 69% in external validation) images had significant increases in ACC. CONCLUSIONS: MR- and CT-based ML models are valuable noninvasive techniques for discriminating high- from low-grade ccRCCs, and multiparameter MR- and multiphase CT-based classifiers are potentially superior to those based on single-sequence or single-phase imaging. KEY POINTS: * Both the MR- and CT-based machine learning models are reliable predictors for differentiating high- from low-grade ccRCCs.
* ML models based on multiparameter MR sequences and multiphase CT images potentially outperform those based on single-sequence or single-phase images in ccRCC grading.

Primary lung tumor segmentation from PET–CT volumes with spatial–topological constraint

  • Cui, Hui
  • Wang, Xiuying
  • Lin, Weiran
  • Zhou, Jianlong
  • Eberl, Stefan
  • Feng, Dagan
  • Fulham, Michael
International journal of computer assisted radiology and surgery 2016 Journal Article, cited 14 times
Website

Volume of high-risk intratumoral subregions at multi-parametric MR imaging predicts overall survival and complements molecular analysis of glioblastoma

  • Cui, Yi
  • Ren, Shangjie
  • Tha, Khin Khin
  • Wu, Jia
  • Shirato, Hiroki
  • Li, Ruijiang
European Radiology 2017 Journal Article, cited 10 times
Website

Prognostic Imaging Biomarkers in Glioblastoma: Development and Independent Validation on the Basis of Multiregion and Quantitative Analysis of MR Images

  • Cui, Yi
  • Tha, Khin Khin
  • Terasaka, Shunsuke
  • Yamaguchi, Shigeru
  • Wang, Jeff
  • Kudo, Kohsuke
  • Xing, Lei
  • Shirato, Hiroki
  • Li, Ruijiang
Radiology 2015 Journal Article, cited 45 times
Website
PURPOSE: To develop and independently validate prognostic imaging biomarkers for predicting survival in patients with glioblastoma on the basis of multiregion quantitative image analysis. MATERIALS AND METHODS: This retrospective study was approved by the local institutional review board, and informed consent was waived. A total of 79 patients from two independent cohorts were included. The discovery and validation cohorts consisted of 46 and 33 patients with glioblastoma from the Cancer Imaging Archive (TCIA) and the local institution, respectively. Preoperative T1-weighted contrast material-enhanced and T2-weighted fluid-attenuation inversion recovery magnetic resonance (MR) images were analyzed. For each patient, we semiautomatically delineated the tumor and performed automated intratumor segmentation, dividing the tumor into spatially distinct subregions that demonstrate coherent intensity patterns across multiparametric MR imaging. Within each subregion and for the entire tumor, we extracted quantitative imaging features, including those that fully capture the differential contrast of multimodality MR imaging. A multivariate sparse Cox regression model was trained by using TCIA data and tested on the validation cohort. RESULTS: The optimal prognostic model identified five imaging biomarkers that quantified tumor surface area and intensity distributions of the tumor and its subregions. In the validation cohort, our prognostic model achieved a concordance index of 0.67 and significant stratification of overall survival by using the log-rank test (P = .018), which outperformed conventional prognostic factors, such as age (concordance index, 0.57; P = .389) and tumor volume (concordance index, 0.59; P = .409). CONCLUSION: The multiregion analysis presented here establishes a general strategy to effectively characterize intratumor heterogeneity manifested at multimodality imaging and has the potential to reveal useful prognostic imaging biomarkers in glioblastoma.
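The concordance index used above to compare prognostic models can be computed directly from survival times and predicted risks. A minimal Python sketch of the idea (ignoring censoring, which the study's Cox analysis would account for):

```python
from itertools import combinations

def concordance_index(times, risks):
    """Fraction of comparable patient pairs in which the patient who
    died earlier was assigned the higher predicted risk.
    Ties in risk count as half-concordant; censoring is ignored
    in this simplified sketch."""
    concordant, comparable = 0.0, 0
    for (t_i, r_i), (t_j, r_j) in combinations(zip(times, risks), 2):
        if t_i == t_j:
            continue  # equal survival times are not comparable here
        comparable += 1
        # the shorter survival time should carry the larger risk
        if (t_i < t_j and r_i > r_j) or (t_j < t_i and r_j > r_i):
            concordant += 1
        elif r_i == r_j:
            concordant += 0.5
    return concordant / comparable

# Perfectly anti-ordered risks give a c-index of 1.0
print(concordance_index([1, 2, 3, 4], [4, 3, 2, 1]))  # -> 1.0
```

A value of 0.5 corresponds to random ranking, which is why the 0.67 reported for the imaging model versus 0.57 for age is a meaningful gap.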

Tumor Transcriptome Reveals High Expression of IL-8 in Non-Small Cell Lung Cancer Patients with Low Pectoralis Muscle Area and Reduced Survival

  • Cury, Sarah Santiloni
  • de Moraes, Diogo
  • Freire, Paula Paccielli
  • de Oliveira, Grasieli
  • Marques, Douglas Venancio Pereira
  • Fernandez, Geysson Javier
  • Dal-Pai-Silva, Maeli
  • Hasimoto, Erica Nishida
  • Dos Reis, Patricia Pintor
  • Rogatto, Silvia Regina
  • Carvalho, Robson Francisco
Cancers (Basel) 2019 Journal Article, cited 1 times
Website
Cachexia is a syndrome characterized by an ongoing loss of skeletal muscle mass associated with poor patient prognosis in non-small cell lung cancer (NSCLC). However, prognostic cachexia biomarkers in NSCLC are unknown. Here, we analyzed computed tomography (CT) images and tumor transcriptome data to identify potentially secreted cachexia biomarkers (PSCB) in NSCLC patients with low-muscularity. We integrated radiomics features (pectoralis muscle, sternum, and tenth thoracic (T10) vertebra) from CT of 89 NSCLC patients, which allowed us to identify an index for screening muscularity. Next, a tumor transcriptomic-based secretome analysis from these patients (discovery set) was evaluated to identify potential cachexia biomarkers in patients with low-muscularity. The prognostic value of these biomarkers for predicting recurrence and survival outcome was confirmed using expression data from eight lung cancer datasets (validation set). Finally, C2C12 myoblasts differentiated into myotubes were used to evaluate the ability of the selected biomarker, interleukin (IL)-8, in inducing muscle cell atrophy. We identified 75 over-expressed transcripts in patients with low-muscularity, which included IL-6, CSF3, and IL-8. Also, we identified NCAM1, CNTN1, SCG2, CADM1, IL-8, NPTX1, and APOD as PSCB in the tumor secretome. These PSCB were capable of distinguishing worse and better prognosis (recurrence and survival) in NSCLC patients. IL-8 was confirmed as a predictor of worse prognosis in all validation sets. In vitro assays revealed that IL-8 promoted C2C12 myotube atrophy. Tumors from low-muscularity patients presented a set of upregulated genes encoding for secreted proteins, including pro-inflammatory cytokines that predict worse overall survival in NSCLC. Among these upregulated genes, IL-8 expression in NSCLC tissues was associated with worse prognosis, and the recombinant IL-8 was capable of triggering atrophy in C2C12 myotubes.

Algorithmic three-dimensional analysis of tumor shape in MRI improves prognosis of survival in glioblastoma: a multi-institutional study

  • Czarnek, Nicholas
  • Clark, Kal
  • Peters, Katherine B
  • Mazurowski, Maciej A
Journal of neuro-oncology 2017 Journal Article, cited 15 times
Website
In this retrospective, IRB-exempt study, we analyzed data from 68 patients diagnosed with glioblastoma (GBM) in two institutions and investigated the relationship between tumor shape, quantified using algorithmic analysis of magnetic resonance images, and survival. Each patient's Fluid Attenuated Inversion Recovery (FLAIR) abnormality and enhancing tumor were manually delineated, and tumor shape was analyzed by automatic computer algorithms. Five features were automatically extracted from the images to quantify the extent of irregularity in tumor shape in two and three dimensions. Univariate Cox proportional hazard regression analysis was performed to determine how prognostic each feature was of survival. Kaplan Meier analysis was performed to illustrate the prognostic value of each feature. To determine whether the proposed quantitative shape features have additional prognostic value compared with standard clinical features, we controlled for tumor volume, patient age, and Karnofsky Performance Score (KPS). The FLAIR-based bounding ellipsoid volume ratio (BEVR), a 3D complexity measure, was strongly prognostic of survival, with a hazard ratio of 0.36 (95% CI 0.20-0.65), and remained significant in regression analysis after controlling for other clinical factors (P = 0.0061). Three enhancing-tumor based shape features were prognostic of survival independently of clinical factors: BEVR (P = 0.0008), margin fluctuation (P = 0.0013), and angular standard deviation (P = 0.0078). Algorithmically assessed tumor shape is statistically significantly prognostic of survival for patients with GBM independently of patient age, KPS, and tumor volume. This shows promise for extending the utility of MR imaging in treatment of GBM patients.

Radiogenomics of glioblastoma: a pilot multi-institutional study to investigate a relationship between tumor shape features and tumor molecular subtype

  • Czarnek, Nicholas M
  • Clark, Kal
  • Peters, Katherine B
  • Collins, Leslie M
  • Mazurowski, Maciej A
2016 Conference Proceedings, cited 3 times
Website

Immunotherapy in Metastatic Colorectal Cancer: Could the Latest Developments Hold the Key to Improving Patient Survival?

  • Damilakis, E.
  • Mavroudis, D.
  • Sfakianaki, M.
  • Souglakos, J.
Cancers (Basel) 2020 Journal Article, cited 0 times
Website
Immunotherapy has considerably increased the number of anticancer agents in many tumor types including metastatic colorectal cancer (mCRC). Anti-PD-1 (programmed death 1) and cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) immune checkpoint inhibitors (ICI) have been shown to benefit the mCRC patients with mismatch repair deficiency (dMMR) or high microsatellite instability (MSI-H). However, ICI is not effective in mismatch repair proficient (pMMR) colorectal tumors, which constitute a large population of patients. Several clinical trials evaluating the efficacy of immunotherapy combined with chemotherapy, radiation therapy, or other agents are currently ongoing to extend the benefit of immunotherapy to pMMR mCRC cases. In dMMR patients, MSI testing through immunohistochemistry and/or polymerase chain reaction can be used to identify patients that will benefit from immunotherapy. Next-generation sequencing has the ability to detect MSI-H using a low amount of nucleic acids and its application in clinical practice is currently being explored. Preliminary data suggest that radiomics is capable of discriminating MSI from microsatellite stable mCRC and may play a role as an imaging biomarker in the future. Tumor mutational burden, neoantigen burden, tumor-infiltrating lymphocytes, immunoscore, and gastrointestinal microbiome are promising biomarkers that require further investigation and validation.

Feature Extraction In Medical Images by Using Deep Learning Approach

  • Dara, S
  • Tumma, P
  • Eluri, NR
  • Kancharla, GR
International Journal of Pure and Applied Mathematics 2018 Journal Article, cited 0 times
Website

AI-based Prognostic Imaging Biomarkers for Precision Neurooncology: the ReSPOND Consortium

  • Davatzikos, C.
  • Barnholtz-Sloan, J. S.
  • Bakas, S.
  • Colen, R.
  • Mahajan, A.
  • Quintero, C. B.
  • Font, J. C.
  • Puig, J.
  • Jain, R.
  • Sloan, A. E.
  • Badve, C.
  • Marcus, D. S.
  • Choi, Y. S.
  • Lee, S. K.
  • Chang, J. H.
  • Poisson, L. M.
  • Griffith, B.
  • Dicker, A. P.
  • Flanders, A. E.
  • Booth, T. C.
  • Rathore, S.
  • Akbari, H.
  • Sako, C.
  • Bilello, M.
  • Shukla, G.
  • Kazerooni, A. F.
  • Brem, S.
  • Lustig, R.
  • Mohan, S.
  • Bagley, S.
  • Nasrallah, M.
  • O'Rourke, D. M.
Neuro-oncology 2020 Journal Article, cited 0 times
Website

Machine learning: a useful radiological adjunct in determination of a newly diagnosed glioma’s grade and IDH status

  • De Looze, Céline
  • Beausang, Alan
  • Cryan, Jane
  • Loftus, Teresa
  • Buckley, Patrick G
  • Farrell, Michael
  • Looby, Seamus
  • Reilly, Richard
  • Brett, Francesca
  • Kearney, Hugh
Journal of neuro-oncology 2018 Journal Article, cited 0 times

Directional local ternary quantized extrema pattern: A new descriptor for biomedical image indexing and retrieval

  • Deep, G
  • Kaur, L
  • Gupta, S
Engineering Science and Technology, an International Journal 2016 Journal Article, cited 9 times
Website

Local mesh ternary patterns: a new descriptor for MRI and CT biomedical image indexing and retrieval

  • Deep, G
  • Kaur, L
  • Gupta, S
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2016 Journal Article, cited 3 times
Website

Predicting response before initiation of neoadjuvant chemotherapy in breast cancer using new methods for the analysis of dynamic contrast enhanced MRI (DCE MRI) data

  • DeGrandchamp, Joseph B
  • Whisenant, Jennifer G
  • Arlinghaus, Lori R
  • Abramson, VG
  • Yankeelov, Thomas E
  • Cárdenas-Rodríguez, Julio
2016 Conference Proceedings, cited 5 times
Website

Mesoscopic imaging of glioblastomas: Are diffusion, perfusion and spectroscopic measures influenced by the radiogenetic phenotype?

  • Demerath, Theo
  • Simon-Gabriel, Carl Philipp
  • Kellner, Elias
  • Schwarzwald, Ralf
  • Lange, Thomas
  • Heiland, Dieter Henrik
  • Reinacher, Peter
  • Staszewski, Ori
  • Mast, Hansjörg
  • Kiselev, Valerij G
  • Egger, K.
  • Urbach, H.
  • Weyerbrock, A.
  • Mader, I.
The Neuroradiology Journal 2017 Journal Article, cited 5 times
Website
The purpose of this study was to identify markers from perfusion, diffusion, and chemical shift imaging in glioblastomas (GBMs) and to correlate them with genetically determined and previously published patterns of structural magnetic resonance (MR) imaging. Twenty-six patients (mean age 60 years, 13 female) with GBM were investigated. Imaging consisted of native and contrast-enhanced 3D data, perfusion, diffusion, and spectroscopic imaging. In the presence of minor necrosis, cerebral blood volume (CBV) was higher (median ± SD, 2.23% ± 0.93) than in pronounced necrosis (1.02% ± 0.71), pcorr = 0.0003. CBV adjacent to peritumoral fluid-attenuated inversion recovery (FLAIR) hyperintensity was lower in edema (1.72% ± 0.31) than in infiltration (1.91% ± 0.35), pcorr = 0.039. Axial diffusivity adjacent to peritumoral FLAIR hyperintensity was lower in severe mass effect (1.08 × 10⁻³ mm²/s ± 0.08) than in mild mass effect (1.14 × 10⁻³ mm²/s ± 0.06), pcorr = 0.048. Myo-inositol was positively correlated with a marker for mitosis (Ki-67) in contrast-enhancing tumor, r = 0.5, pcorr = 0.0002. Changes in CBV and axial diffusivity in adjacent normal-appearing matter, even outside the FLAIR hyperintensity, may be related to angiogenesis pathways and to activated proliferation genes. The correlation between myo-inositol and Ki-67 might be attributed to its binding to cell surface receptors regulating tumorous proliferation of astrocytic cells.

Computer-aided detection of lung nodules using outer surface features

  • Demir, Önder
  • Yılmaz Çamurcu, Ali
Bio-Medical Materials and Engineering 2015 Journal Article, cited 28 times
Website
In this study, a computer-aided detection (CAD) system was developed for the detection of lung nodules in computed tomography images. The CAD system consists of four phases, including two-dimensional and three-dimensional preprocessing phases. In the feature extraction phase, four different groups of features are extracted from volume of interests: morphological features, statistical and histogram features, statistical and histogram features of outer surface, and texture features of outer surface. The support vector machine algorithm is optimized using particle swarm optimization for classification. The CAD system provides 97.37% sensitivity, 86.38% selectivity, 88.97% accuracy and 2.7 false positive per scan using three groups of classification features. After the inclusion of outer surface texture features, classification results of the CAD system reaches 98.03% sensitivity, 87.71% selectivity, 90.12% accuracy and 2.45 false positive per scan. Experimental results demonstrate that outer surface texture features of nodule candidates are useful to increase sensitivity and decrease the number of false positives in the detection of lung nodules in computed tomography images.
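The abstract describes tuning an SVM classifier with particle swarm optimization. A generic PSO sketch, not the authors' implementation, is shown below; the quadratic objective is a toy stand-in for an SVM cross-validation error surface over hypothetical (C, gamma) hyperparameters:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=60,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle is pulled toward
    its own best position (pbest) and the swarm's global best (gbest),
    with positions clipped to the search bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    best_i = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[best_i][:], pbest_val[best_i]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for an SVM cross-validation error surface over (C, gamma):
# the optimum sits at C = 2.0, gamma = 0.5.
best, err = pso_minimize(lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2,
                         bounds=[(0.0, 10.0), (0.0, 1.0)])
```

In practice `f` would train and cross-validate the SVM at each candidate hyperparameter setting, which is what makes PSO attractive here: it needs only objective evaluations, no gradients.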

Development of a nomogram combining clinical staging with 18F-FDG PET/CT image features in non-small-cell lung cancer stage I–III

  • Desseroit, Marie-Charlotte
  • Visvikis, Dimitris
  • Tixier, Florent
  • Majdoub, Mohamed
  • Perdrisot, Rémy
  • Guillevin, Rémy
  • Le Rest, Catherine Cheze
  • Hatt, Mathieu
European journal of nuclear medicine and molecular imaging 2016 Journal Article, cited 34 times
Website

Spatial habitats from multiparametric MR imaging are associated with signaling pathway activities and survival in glioblastoma

  • Dextraze, Katherine
  • Saha, Abhijoy
  • Kim, Donnie
  • Narang, Shivali
  • Lehrer, Michael
  • Rao, Anita
  • Narang, Saphal
  • Rao, Dinesh
  • Ahmed, Salmaan
  • Madhugiri, Venkatesh
Oncotarget 2017 Journal Article, cited 0 times
Website

Measurement of spiculation index in 3D for solitary pulmonary nodules in volumetric lung CT images

  • Dhara, Ashis Kumar
  • Mukhopadhyay, Sudipta
  • Alam, Naved
  • Khandelwal, Niranjan
2013 Conference Proceedings, cited 4 times
Website

3d texture analysis of solitary pulmonary nodules using co-occurrence matrix from volumetric lung CT images

  • Dhara, Ashis Kumar
  • Mukhopadhyay, Sudipta
  • Khandelwal, Niranjan
2013 Conference Proceedings, cited 7 times
Website

Deep learning in head & neck cancer outcome prediction

  • Diamant, André
  • Chatterjee, Avishek
  • Vallières, Martin
  • Shenouda, George
  • Seuntjens, Jan
Scientific Reports 2019 Journal Article, cited 0 times
Website
Traditional radiomics involves the extraction of quantitative texture features from medical images in an attempt to determine correlations with clinical endpoints. We hypothesize that convolutional neural networks (CNNs) could enhance the performance of traditional radiomics, by detecting image patterns that may not be covered by a traditional radiomic framework. We test this hypothesis by training a CNN to predict treatment outcomes of patients with head and neck squamous cell carcinoma, based solely on their pre-treatment computed tomography image. The training (194 patients) and validation sets (106 patients), which are mutually independent and include 4 institutions, come from The Cancer Imaging Archive. When compared to a traditional radiomic framework applied to the same patient cohort, our method results in an AUC of 0.88 in predicting distant metastasis. When combining our model with the previous model, the AUC improves to 0.92. Our framework yields models that are shown to explicitly recognize traditional radiomic features, be directly visualized and perform accurate outcome prediction.
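AUC values like the 0.88 and 0.92 reported here have a simple probabilistic reading: the chance that a randomly chosen positive case is ranked above a randomly chosen negative one. A small sketch using that rank-sum (Mann-Whitney) formulation:

```python
def auc(labels, scores):
    """Area under the ROC curve: fraction of (positive, negative)
    pairs in which the positive case receives the higher score,
    with ties counting 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # -> 0.75
```

Under this reading, 0.5 is chance-level ranking and 1.0 is a perfect separation of outcomes.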

Theoretical tumor edge detection technique using multiple Bragg peak decomposition in carbon ion therapy

  • Dias, Marta Filipa Ferraz
  • Collins-Fekete, Charles-Antoine
  • Baroni, Guido
  • Riboldi, Marco
  • Seco, Joao
Biomedical Physics & Engineering Express 2019 Journal Article, cited 0 times
Website

Automated segmentation refinement of small lung nodules in CT scans by local shape analysis

  • Diciotti, Stefano
  • Lombardo, Simone
  • Falchini, Massimo
  • Picozzi, Giulia
  • Mascalchi, Mario
IEEE Trans Biomed Eng 2011 Journal Article, cited 68 times
Website
One of the most important problems in the segmentation of lung nodules in CT imaging arises from possible attachments occurring between nodules and other lung structures, such as vessels or pleura. In this report, we address the problem of vessels attachments by proposing an automated correction method applied to an initial rough segmentation of the lung nodule. The method is based on a local shape analysis of the initial segmentation making use of 3-D geodesic distance map representations. The correction method has the advantage that it locally refines the nodule segmentation along recognized vessel attachments only, without modifying the nodule boundary elsewhere. The method was tested using a simple initial rough segmentation, obtained by a fixed image thresholding. The validation of the complete segmentation algorithm was carried out on small lung nodules, identified in the ITALUNG screening trial and on small nodules of the lung image database consortium (LIDC) dataset. In fully automated mode, 217/256 (84.8%) lung nodules of ITALUNG and 139/157 (88.5%) individual marks of lung nodules of LIDC were correctly outlined and an excellent reproducibility was also observed. By using an additional interactive mode, based on a controlled manual interaction, 233/256 (91.0%) lung nodules of ITALUNG and 144/157 (91.7%) individual marks of lung nodules of LIDC were overall correctly segmented. The proposed correction method could also be usefully applied to any existent nodule segmentation algorithm for improving the segmentation quality of juxta-vascular nodules.

Learning Multi-Class Segmentations From Single-Class Datasets

  • Dmitriev, Konstantin
  • Kaufman, Arie
2019 Conference Paper, cited 1 times
Website
Multi-class segmentation has recently achieved significant performance in natural images and videos. This achievement is due primarily to the public availability of large multi-class datasets. However, there are certain domains, such as biomedical images, where obtaining sufficient multi-class annotations is a laborious and often impossible task and only single-class datasets are available. While existing segmentation research in such domains use private multi-class datasets or focus on single-class segmentations, we propose a unified highly efficient framework for robust simultaneous learning of multi-class segmentations by combining single-class datasets and utilizing a novel way of conditioning a convolutional network for the purpose of segmentation. We demonstrate various ways of incorporating the conditional information, perform an extensive evaluation, and show compelling multi-class segmentation performance on biomedical images, which outperforms current state-of-the-art solutions (up to 2.7%). Unlike current solutions, which are meticulously tailored for particular single-class datasets, we utilize datasets from a variety of sources. Furthermore, we show the applicability of our method also to natural images and evaluate it on the Cityscapes dataset. We further discuss other possible applications of our proposed framework.

Long short-term memory networks predict breast cancer recurrence in analysis of consecutive MRIs acquired during the course of neoadjuvant chemotherapy

  • Drukker, Karen
  • Edwards, Alexandra
  • Papaioannou, John
  • Giger, Maryellen
  • Hahn, Horst K.
  • Mazurowski, Maciej A.
2020 Conference Paper, cited 0 times
Website
The purpose of this study was to assess long short-term memory networks in the prediction of recurrence-free survival in breast cancer patients using features extracted from MRIs acquired during the course of neoadjuvant chemotherapy. In the I-SPY1 dataset, up to 4 MRI exams were available per patient acquired at pre-treatment, early-treatment, interregimen, and pre-surgery time points. Breast cancers were automatically segmented and 8 features describing kinetic curve characteristics were extracted. We assessed performance of long short-term memory networks in the prediction of recurrence-free survival status at 2 years and at 5 years post-surgery. For these predictions, we analyzed MRIs from women who had at least 2 (or 5) years of recurrence-free follow-up or experienced recurrence or death within that timeframe: 157 women and 73 women, respectively. One approach used features extracted from all available exams and the other approach used features extracted from only exams prior to the second cycle of neoadjuvant chemotherapy. The areas under the ROC curve in the prediction of recurrence-free survival status at 2 years post-surgery were 0.80, 95% confidence interval [0.68; 0.88] and 0.75 [0.62; 0.83] for networks trained with all 4 available exams and only the ‘early’ exams, respectively. Hazard ratios at the lowest, median, and highest quartile cut-points were 6.29 [2.91; 13.62], 3.27 [1.77; 6.03], 1.65 [0.83; 3.27] and 2.56 [1.20; 5.48], 3.01 [1.61; 5.66], 2.30 [1.14; 4.67]. Long short-term memory networks were able to predict recurrence-free survival in breast cancer patients, even when analyzing only MRIs acquired ‘early on’ during neoadjuvant treatment.

Most-enhancing tumor volume by MRI radiomics predicts recurrence-free survival “early on” in neoadjuvant treatment of breast cancer

  • Drukker, Karen
  • Li, Hui
  • Antropova, Natalia
  • Edwards, Alexandra
  • Papaioannou, John
  • Giger, Maryellen L
Cancer Imaging 2018 Journal Article, cited 0 times
Website
BACKGROUND: The hypothesis of this study was that MRI-based radiomics has the ability to predict recurrence-free survival "early on" in breast cancer neoadjuvant chemotherapy. METHODS: A subset, based on availability, of the ACRIN 6657 dynamic contrast-enhanced MR images was used in which we analyzed images of all women imaged at pre-treatment baseline (141 women: 40 with a recurrence, 101 without) and all those imaged after completion of the first cycle of chemotherapy, i.e., at early treatment (143 women: 37 with a recurrence vs. 105 without). Our method was completely automated apart from manual localization of the approximate tumor center. The most enhancing tumor volume (METV) was automatically calculated for the pre-treatment and early treatment exams. Performance of METV in the task of predicting a recurrence was evaluated using ROC analysis. The association of recurrence-free survival with METV was assessed using a Cox regression model controlling for patient age, race, and hormone receptor status and evaluated by C-statistics. Kaplan-Meier analysis was used to estimate survival functions. RESULTS: The C-statistics for the association of METV with recurrence-free survival were 0.69 with 95% confidence interval of [0.58; 0.80] at pre-treatment and 0.72 [0.60; 0.84] at early treatment. The hazard ratios calculated from Kaplan-Meier curves were 2.28 [1.08; 4.61], 3.43 [1.83; 6.75], and 4.81 [2.16; 10.72] for the lowest quartile, median quartile, and upper quartile cut-points for METV at early treatment, respectively. CONCLUSION: The performance of the automatically-calculated METV rivaled that of a semi-manual model described for the ACRIN 6657 study (published C-statistic 0.72 [0.60; 0.84]), which involved the same dataset but required semi-manual delineation of the functional tumor volume (FTV) and knowledge of the pre-surgical residual cancer burden.

Local Wavelet Pattern: A New Feature Descriptor for Image Retrieval in Medical CT Databases

  • Dubey, Shiv Ram
  • Singh, Satish Kumar
  • Singh, Rajat Kumar
IEEE Trans Image Process 2015 Journal Article, cited 52 times
Website
A new image feature description based on the local wavelet pattern (LWP) is proposed in this paper to characterize the medical computer tomography (CT) images for content-based CT image retrieval. In the proposed work, the LWP is derived for each pixel of the CT image by utilizing the relationship of center pixel with the local neighboring information. In contrast to the local binary pattern that only considers the relationship between a center pixel and its neighboring pixels, the presented approach first utilizes the relationship among the neighboring pixels using local wavelet decomposition, and finally considers its relationship with the center pixel. A center pixel transformation scheme is introduced to match the range of center value with the range of local wavelet decomposed values. Moreover, the introduced local wavelet decomposition scheme is centrally symmetric and suitable for CT images. The novelty of this paper lies in the following two ways: 1) encoding local neighboring information with local wavelet decomposition and 2) computing LWP using local wavelet decomposed values and transformed center pixel values. We tested the performance of our method over three CT image databases in terms of the precision and recall. We also compared the proposed LWP descriptor with the other state-of-the-art local image descriptors, and the experimental results suggest that the proposed method outperforms other methods for CT image retrieval.

Improving PET Imaging Acquisition and Analysis With Machine Learning: A Narrative Review With Focus on Alzheimer's Disease and Oncology

  • Duffy, Ian R
  • Boyle, Amanda J
  • Vasdev, Neil
Molecular imaging 2019 Journal Article, cited 0 times

An Ad Hoc Random Initialization Deep Neural Network Architecture for Discriminating Malignant Breast Cancer Lesions in Mammographic Images

  • Duggento, Andrea
  • Aiello, Marco
  • Cavaliere, Carlo
  • Cascella, Giuseppe L
  • Cascella, Davide
  • Conte, Giovanni
  • Guerrisi, Maria
  • Toschi, Nicola
Contrast Media Mol Imaging 2019 Journal Article, cited 1 times
Website
Breast cancer is one of the most common cancers in women, with more than 1,300,000 cases and 450,000 deaths each year worldwide. In this context, recent studies showed that early breast cancer detection, along with suitable treatment, could significantly reduce breast cancer death rates in the long term. X-ray mammography is still the instrument of choice in breast cancer screening. In this context, the false-positive and false-negative rates commonly achieved by radiologists are extremely arduous to estimate and control although some authors have estimated figures of up to 20% of total diagnoses or more. The introduction of novel artificial intelligence (AI) technologies applied to the diagnosis and, possibly, prognosis of breast cancer could revolutionize the current status of the management of the breast cancer patient by assisting the radiologist in clinical image interpretation. Lately, a breakthrough in the AI field has been brought about by the introduction of deep learning techniques in general and of convolutional neural networks in particular. Such techniques require no a priori feature space definition from the operator and are able to achieve classification performances which can even surpass human experts. In this paper, we design and validate an ad hoc CNN architecture specialized in breast lesion classification from imaging data only. We explore a total of 260 model architectures in a train-validation-test split in order to propose a model selection criterion which can pose the emphasis on reducing false negatives while still retaining acceptable accuracy. We achieve an area under the receiver operating characteristic curve of 0.785 (accuracy 71.19%) on the test set, demonstrating how an ad hoc random initialization architecture can and should be fine-tuned to a specific problem, especially in biomedical applications.

Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma

  • Dunn Jr, WD
  • Aerts, HJWL
  • Cooper, LA
  • Holder, CA
  • Hwang, SN
J Neuroimaging Psychiatry Neurol 2016 Journal Article, cited 0 times
Website

Improving Brain Tumor Diagnosis Using MRI Segmentation Based on Collaboration of Beta Mixture Model and Learning Automata

  • Edalati-rad, Akram
  • Mosleh, Mohammad
Arabian Journal for Science and Engineering 2018 Journal Article, cited 0 times
Website

Automated 3-D Tissue Segmentation Via Clustering

  • Edwards, Samuel
  • Brown, Scott
  • Lee, Michael
Journal of Biomedical Engineering and Medical Imaging 2018 Journal Article, cited 0 times

Performance Analysis of Prediction Methods for Lossless Image Compression

  • Egorov, Nickolay
  • Novikov, Dmitriy
  • Gilmutdinov, Marat
2015 Book Section, cited 4 times
Website

Decision forests for learning prostate cancer probability maps from multiparametric MRI

  • Ehrenberg, Henry R
  • Cornfeld, Daniel
  • Nawaf, Cayce B
  • Sprenkle, Preston C
  • Duncan, James S
2016 Conference Proceedings, cited 2 times
Website

A Content-Based-Image-Retrieval Approach for Medical Image Repositories

  • el Rifai, Diaa
  • Maeder, Anthony
  • Liyanage, Liwan
2015 Conference Paper, cited 2 times
Website

Feature Extraction and Analysis for Lung Nodule Classification using Random Forest

  • El-Askary, Nada
  • Salem, Mohammed
  • Roushdy, Mohammed
2019 Conference Paper, cited 0 times
Website

Imaging genomics of glioblastoma: state of the art bridge between genomics and neuroradiology

  • ElBanan, Mohamed G
  • Amer, Ahmed M
  • Zinn, Pascal O
  • Colen, Rivka R
Neuroimaging Clinics of North America 2015 Journal Article, cited 29 times
Website
Glioblastoma (GBM) is the most common and most aggressive primary malignant tumor of the central nervous system. Recently, researchers concluded that the "one-size-fits-all" approach for treatment of GBM is no longer valid and research should be directed toward more personalized and patient-tailored treatment protocols. Identification of the molecular and genomic pathways underlying GBM is essential for achieving this personalized and targeted therapeutic approach. Imaging genomics represents a new era as a noninvasive surrogate for genomic and molecular profile identification. This article discusses the basics of imaging genomics of GBM, its role in treatment decision-making, and its future potential in noninvasive genomic identification.

Diffusion MRI quality control and functional diffusion map results in ACRIN 6677/RTOG 0625: a multicenter, randomized, phase II trial of bevacizumab and chemotherapy in recurrent glioblastoma

  • Ellingson, Benjamin M
  • Kim, Eunhee
  • Woodworth, Davis C
  • Marques, Helga
  • Boxerman, Jerrold L
  • Safriel, Yair
  • McKinstry, Robert C
  • Bokstein, Felix
  • Jain, Rajan
  • Chi, T Linda
  • Sorensen, A Gregory
  • Gilbert, Mark R
  • Barboriak, Daniel P
Int J Oncol 2015 Journal Article, cited 27 times
Website
Functional diffusion mapping (fDM) is a cancer imaging technique that quantifies voxelwise changes in apparent diffusion coefficient (ADC). Previous studies have shown value of fDMs in bevacizumab therapy for recurrent glioblastoma multiforme (GBM). The aim of the present study was to implement explicit criteria for diffusion MRI quality control and independently evaluate fDM performance in a multicenter clinical trial (RTOG 0625/ACRIN 6677). A total of 123 patients were enrolled in the current multicenter trial and signed institutional review board-approved informed consent at their respective institutions. MRI was acquired prior to and 8 weeks following therapy. A 5-point QC scoring system was used to evaluate DWI quality. fDM performance was evaluated according to the correlation of these metrics with PFS and OS at the first follow-up time-point. Results showed ADC variability of 7.3% in NAWM and 10.5% in CSF. A total of 68% of patients had usable DWI data and 47% of patients had high quality DWI data when also excluding patients that progressed before the first follow-up. fDM performance was improved by using only the highest quality DWI. High pre-treatment contrast enhancing tumor volume was associated with shorter PFS and OS. A high volume fraction of increasing ADC after therapy was associated with shorter PFS, while a high volume fraction of decreasing ADC was associated with shorter OS. In summary, DWI in multicenter trials are currently of limited value due to image quality. Improvements in consistency of image quality in multicenter trials are necessary for further advancement of DWI biomarkers.

A Novel Hybrid Perceptron Neural Network Algorithm for Classifying Breast MRI Tumors

  • ElNawasany, Amal M
  • Ali, Ahmed Fouad
  • Waheed, Mohamed E
2014 Book Section, cited 3 times
Website

A Computer Aided Diagnosis System for Lung Cancer Detection Using SVM

  • Emirzade, Erkan
2016 Thesis, cited 1 times
Website

4D robust optimization including uncertainties in time structures can reduce the interplay effect in proton pencil beam scanning radiation therapy

  • Engwall, Erik
  • Fredriksson, Albin
  • Glimelius, Lars
Medical physics 2018 Journal Article, cited 2 times
Website

Attention P-Net for Segmentation of Post-operative Glioblastoma in MRI

  • Enlund Åström, Isabelle
2019 Thesis, cited 0 times
Website
Segmentation of post-operative glioblastoma is important for follow-up treatment. In this thesis, Fully Convolutional Networks (FCN) are utilised together with attention modules for segmentation of post-operative glioblastoma in MR images. Attention-based modules help the FCN to focus on relevant features to improve segmentation results. Channel and spatial attention combines both the spatial context as well as the semantic information in MR images. P-Net is used as a backbone for creating an architecture with existing bottleneck attention modules and was named attention P-Net. The proposed network and competing techniques were evaluated on an Uppsala University database containing T1-weighted MR images of the brain from 12 subjects. The proposed framework shows substantial improvement over the existing techniques.

Radiology and Enterprise Medical Imaging Extensions (REMIX)

  • Erdal, Barbaros S
  • Prevedello, Luciano M
  • Qian, Songyue
  • Demirer, Mutlu
  • Little, Kevin
  • Ryu, John
  • O’Donnell, Thomas
  • White, Richard D
Journal of Digital Imaging 2017 Journal Article, cited 1 times
Website

Multisite Image Data Collection and Management Using the RSNA Image Sharing Network

  • Erickson, Bradley J
  • Fajnwaks, Patricio
  • Langer, Steve G
  • Perry, John
Translational oncology 2014 Journal Article, cited 3 times
Website
The execution of a multisite trial frequently includes image collection. The Clinical Trials Processor (CTP) makes removal of protected health information highly reliable. It also provides reliable transfer of images to a central review site. Trials using central review of imaging should consider using CTP for handling image data when a multisite trial is being designed.

New prognostic factor telomerase reverse transcriptase promotor mutation presents without MR imaging biomarkers in primary glioblastoma

  • Ersoy, Tunc F
  • Keil, Vera C
  • Hadizadeh, Dariusch R
  • Gielen, Gerrit H
  • Fimmers, Rolf
  • Waha, Andreas
  • Heidenreich, Barbara
  • Kumar, Rajiv
  • Schild, Hans H
  • Simon, Matthias
Neuroradiology 2017 Journal Article, cited 1 times
Website

Computer-aided detection of Pulmonary Nodules based on SVM in thoracic CT images

  • Eskandarian, Parinaz
  • Bagherzadeh, Jamshid
2015 Conference Proceedings, cited 12 times
Website
Computer-aided diagnosis of solitary pulmonary nodules in X-ray CT images supports the early detection of lung cancer. In this study, a computer-aided system for the detection of pulmonary nodules on CT scans, based on a support vector machine classifier, is provided for the diagnosis of solitary pulmonary nodules. In the first step, the volume of data is reduced by data mining techniques. The chest area is then divided into regions, suspicious nodules are identified, and eventually nodules are detected. In comparison with threshold-based methods, the support vector machine classifier describes the areas of the lungs more accurately. In this study, the false positive rate is reduced by combining thresholding with a support vector machine classifier. Experimental results based on data from 147 patients in the LIDC lung image database show that the proposed system is able to obtain a sensitivity of 89.9% with 3.9 false positives per scan. In comparison to previous systems, the proposed system demonstrates good performance.

Towards Fully Automatic X-Ray to CT Registration

  • Esteban, Javier
  • Grimm, Matthias
  • Unberath, Mathias
  • Zahnd, Guillaume
  • Navab, Nassir
2019 Journal Article, cited 3 times
Website
The main challenge preventing a fully-automatic X-ray to CT registration is an initialization scheme that brings the X-ray pose within the capture range of existing intensity-based registration methods. By providing such an automatic initialization, the present study introduces the first end-to-end fully-automatic registration framework. A network is first trained once on artificial X-rays to extract 2D landmarks resulting from the projection of CT-labels. A patient-specific refinement scheme is then carried out: candidate points detected from a new set of artificial X-rays are back-projected onto the patient CT and merged into a refined meaningful set of landmarks used for network re-training. This network-landmarks combination is finally exploited for intraoperative pose-initialization with a runtime of 102 ms. Evaluated on 6 pelvis anatomies (486 images in total), the mean Target Registration Error was 15.0±7.3 mm. When used to initialize the BOBYQA optimizer with normalized cross-correlation, the average (± STD) projection distance was 3.4±2.3 mm, and the registration success rate (projection distance <2.5% of the detector width) greater than 97%.

Tumour heterogeneity revealed by unsupervised decomposition of dynamic contrast-enhanced magnetic resonance imaging is associated with underlying gene expression patterns and poor survival in breast cancer patients

  • Fan, M.
  • Xia, P.
  • Liu, B.
  • Zhang, L.
  • Wang, Y.
  • Gao, X.
  • Li, L.
Breast Cancer Res 2019 Journal Article, cited 3 times
Website
BACKGROUND: Heterogeneity is a common finding within tumours. We evaluated the imaging features of tumours based on the decomposition of tumoural dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data to identify their prognostic value for breast cancer survival and to explore their biological importance. METHODS: Imaging features (n = 14), such as texture, histogram distribution and morphological features, were extracted to determine their associations with recurrence-free survival (RFS) in patients in the training cohort (n = 61) from The Cancer Imaging Archive (TCIA). The prognostic value of the features was evaluated in an independent dataset of 173 patients (i.e. the reproducibility cohort) from the TCIA I-SPY 1 TRIAL dataset. Radiogenomic analysis was performed in an additional cohort, the radiogenomic cohort (n = 87), using DCE-MRI from TCGA-BRCA and corresponding gene expression data from The Cancer Genome Atlas (TCGA). The MRI tumour area was decomposed by convex analysis of mixtures (CAM), resulting in 3 components that represent plasma input, fast-flow kinetics and slow-flow kinetics. The prognostic MRI features were associated with the gene expression module in which the pathway was analysed. Furthermore, a multigene signature for each prognostic imaging feature was built, and the prognostic value for RFS and overall survival (OS) was confirmed in an additional cohort from TCGA. RESULTS: Three image features (i.e. the maximum probability from the precontrast MR series, the median value from the second postcontrast series and the overall tumour volume) were independently correlated with RFS (p values of 0.0018, 0.0036 and 0.0032, respectively). The maximum probability feature from the fast-flow kinetics subregion was also significantly associated with RFS and OS in the reproducibility cohort. Additionally, this feature had a high correlation with the gene expression module (r = 0.59), and the pathway analysis showed that Ras signalling, a breast cancer-related pathway, was significantly enriched (corrected p value = 0.0044). Gene signatures (n = 43) associated with the maximum probability feature were assessed for associations with RFS (p = 0.035) and OS (p = 0.027) in an independent dataset containing 1010 gene expression samples. Among the 43 gene signatures, Ras signalling was also significantly enriched. CONCLUSIONS: Dynamic pattern deconvolution revealed that tumour heterogeneity was associated with poor survival and cancer-related pathways in breast cancer.

Feature fusion for lung nodule classification

  • Farag, Amal A
  • Ali, Asem
  • Elshazly, Salwa
  • Farag, Aly A
International journal of computer assisted radiology and surgery 2017 Journal Article, cited 3 times
Website

Hybrid intelligent approach for diagnosis of the lung nodule from CT images using spatial kernelized fuzzy c-means and ensemble learning

  • Farahani, Farzad Vasheghani
  • Ahmadi, Abbas
  • Zarandi, Mohammad Hossein Fazel
Mathematics and Computers in Simulation 2018 Journal Article, cited 1 times
Website

Computational Challenges and Collaborative Projects in the NCI Quantitative Imaging Network

  • Farahani, Keyvan
  • Kalpathy-Cramer, Jayashree
  • Chenevert, Thomas L
  • Rubin, Daniel L
  • Sunderland, John J
  • Nordstrom, Robert J
  • Buatti, John
  • Hylton, Nola
Tomography 2016 Journal Article, cited 2 times
Website
The Quantitative Imaging Network (QIN) of the National Cancer Institute (NCI) conducts research in development and validation of imaging tools and methods for predicting and evaluating clinical response to cancer therapy. Members of the network are involved in examining various imaging and image assessment parameters through network-wide cooperative projects. To more effectively use the cooperative power of the network in conducting computational challenges in benchmarking of tools and methods and collaborative projects in analytical assessment of imaging technologies, the QIN Challenge Task Force has developed policies and procedures to enhance the value of these activities by developing guidelines and leveraging NCI resources to help their administration and manage dissemination of results. Challenges and Collaborative Projects (CCPs) are further divided into technical and clinical CCPs. As the first NCI network to engage in CCPs, we anticipate a variety of CCPs to be conducted by QIN teams in the coming years. These will be aimed to benchmark advanced software tools for clinical decision support, explore new imaging biomarkers for therapeutic assessment, and establish consensus on a range of methods and protocols in support of the use of quantitative imaging to predict and assess response to cancer therapy.

Recurrent Attention Network for False Positive Reduction in the Detection of Pulmonary Nodules in Thoracic CT Scans

  • Farhangi, M. Mehdi
  • Petrick, Nicholas
  • Sahiner, Berkman
  • Frigui, Hichem
  • Amini, Amir A.
  • Pezeshk, Aria
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Multi-view 2-D Convolutional Neural Networks (CNNs) and 3-D CNNs have been successfully used for analyzing volumetric data in many state-of-the-art medical imaging applications. We propose an alternative modular framework that analyzes volumetric data with an approach that is analogous to radiologists' interpretation, and apply the framework to reduce false positives that are generated in Computer-Aided Detection (CADe) systems for pulmonary nodules in thoracic CT scans. METHODS: In our approach, a deep network consisting of 2-D CNNs first processes slices individually. The features extracted in this stage are then passed to a Recurrent Neural Network (RNN), thereby modeling consecutive slices as a sequence of temporal data and capturing the contextual information across all three dimensions in the volume of interest. Outputs of the RNN layer are weighed before the final fully connected layer, enabling the network to scale the importance of different slices within a volume of interest in an end-to-end training framework. RESULTS: We validated the proposed architecture on the false positive reduction track of the Lung Nodule Analysis (LUNA) challenge for pulmonary nodule detection in chest CT scans, and obtained competitive results compared to 3-D CNNs. Our results show that the proposed approach can encode the 3-D information in volumetric data effectively by achieving a sensitivity > 0.8 with just 1/8 false positives per scan. CONCLUSIONS: Our experimental results demonstrate the effectiveness of temporal analysis of volumetric images for the application of false positive reduction in chest CT scans and show that state-of-the-art 2-D architectures from the literature can be directly applied to analyzing volumetric medical data. As newer and better 2-D architectures are being developed at a much faster rate compared to 3-D architectures, our approach makes it easy to obtain state-of-the-art performance on volumetric data using new 2-D architectures.

A study of machine learning and deep learning models for solving medical imaging problems

  • Farhat, Fadi G.
2019 Thesis, cited 0 times
Website
Application of machine learning and deep learning methods on medical imaging aims to create systems that can help in the diagnosis of disease and the automation of analyzing medical images in order to facilitate treatment planning. Deep learning methods do well in image recognition, but medical images present unique challenges. The lack of large amounts of data, the image size, and the high class-imbalance in most datasets, makes training a machine learning model to recognize a particular pattern that is typically present only in case images a formidable task. Experiments are conducted to classify breast cancer images as healthy or nonhealthy, and to detect lesions in damaged brain MRI (Magnetic Resonance Imaging) scans. Random Forest, Logistic Regression and Support Vector Machine perform competitively in the classification experiments, but in general, deep neural networks beat all conventional methods. Gaussian Naïve Bayes (GNB) and the Lesion Identification with Neighborhood Data Analysis (LINDA) methods produce better lesion detection results than single path neural networks, but a multi-modal, multi-path deep neural network beats all other methods. The importance of pre-processing training data is also highlighted and demonstrated, especially for medical images, which require extensive preparation to improve classifier and detector performance. Only a more complex and deeper neural network combined with properly pre-processed data can produce the desired accuracy levels that can rival and maybe exceed those of human experts.

Signal intensity analysis of ecological defined habitat in soft tissue sarcomas to predict metastasis development

  • Farhidzadeh, Hamidreza
  • Chaudhury, Baishali
  • Scott, Jacob G
  • Goldgof, Dmitry B
  • Hall, Lawrence O
  • Gatenby, Robert A
  • Gillies, Robert J
  • Raghavan, Meera
2016 Conference Proceedings, cited 6 times
Website

Quantitative Imaging Informatics for Cancer Research

  • Fedorov, Andrey
  • Beichel, Reinhard
  • Kalpathy-Cramer, Jayashree
  • Clunie, David
  • Onken, Michael
  • Riesmeier, Jorg
  • Herz, Christian
  • Bauer, Christian
  • Beers, Andrew
  • Fillion-Robin, Jean-Christophe
  • Lasso, Andras
  • Pinter, Csaba
  • Pieper, Steve
  • Nolden, Marco
  • Maier-Hein, Klaus
  • Herrmann, Markus D
  • Saltz, Joel
  • Prior, Fred
  • Fennessy, Fiona
  • Buatti, John
  • Kikinis, Ron
JCO Clin Cancer Inform 2020 Journal Article, cited 0 times
Website
PURPOSE: We summarize Quantitative Imaging Informatics for Cancer Research (QIICR; U24 CA180918), one of the first projects funded by the National Cancer Institute (NCI) Informatics Technology for Cancer Research program. METHODS: QIICR was motivated by the 3 use cases from the NCI Quantitative Imaging Network. 3D Slicer was selected as the platform for implementation of open-source quantitative imaging (QI) tools. Digital Imaging and Communications in Medicine (DICOM) was chosen for standardization of QI analysis outputs. Support of improved integration with community repositories focused on The Cancer Imaging Archive (TCIA). Priorities included improved capabilities of the standard, toolkits and tools, reference datasets, collaborations, and training and outreach. RESULTS: Fourteen new tools to support head and neck cancer, glioblastoma, and prostate cancer QI research were introduced and downloaded over 100,000 times. DICOM was amended, with over 40 correction proposals addressing QI needs. Reference implementations of the standard in a popular toolkit and standalone tools were introduced. Eight datasets exemplifying the application of the standard and tools were contributed. An open demonstration/connectathon was organized, attracting the participation of academic groups and commercial vendors. Integration of tools with TCIA was improved by implementing programmatic communication interface and by refining best practices for QI analysis results curation. CONCLUSION: Tools, capabilities of the DICOM standard, and datasets we introduced found adoption and utility within the cancer imaging community. A collaborative approach is critical to addressing challenges in imaging informatics at the national and international levels. Numerous challenges remain in establishing and maintaining the infrastructure of analysis tools and standardized datasets for the imaging community. Ideas and technology developed by the QIICR project are contributing to the NCI Imaging Data Commons currently being developed.

DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research

  • Fedorov, Andriy
  • Clunie, David
  • Ulrich, Ethan
  • Bauer, Christian
  • Wahle, Andreas
  • Brown, Bartley
  • Onken, Michael
  • Riesmeier, Jörg
  • Pieper, Steve
  • Kikinis, Ron
  • Buatti, John
  • Beichel, Reinhard R
PeerJ 2016 Journal Article, cited 20 times
Website
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions.
Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.

A comparison of two methods for estimating DCE-MRI parameters via individual and cohort based AIFs in prostate cancer: A step towards practical implementation

  • Fedorov, Andriy
  • Fluckiger, Jacob
  • Ayers, Gregory D
  • Li, Xia
  • Gupta, Sandeep N
  • Tempany, Clare
  • Mulkern, Robert
  • Yankeelov, Thomas E
  • Fennessy, Fiona M
Magnetic Resonance Imaging 2014 Journal Article, cited 30 times
Website
Multi-parametric Magnetic Resonance Imaging, and specifically Dynamic Contrast Enhanced (DCE) MRI, play increasingly important roles in detection and staging of prostate cancer (PCa). One of the actively investigated approaches to DCE MRI analysis involves pharmacokinetic (PK) modeling to extract quantitative parameters that may be related to microvascular properties of the tissue. It is well-known that the prescribed arterial blood plasma concentration (or Arterial Input Function, AIF) input can have significant effects on the parameters estimated by PK modeling. The purpose of our study was to investigate such effects in DCE MRI data acquired in a typical clinical PCa setting. First, we investigated how the choice of a semi-automated or fully automated image-based individualized AIF (iAIF) estimation method affects the PK parameter values; and second, we examined the use of method-specific averaged AIF (cohort-based, or cAIF) as a means to attenuate the differences between the two AIF estimation methods. Two methods for automated image-based estimation of individualized (patient-specific) AIFs, one of which was previously validated for brain and the other for breast MRI, were compared. cAIFs were constructed by averaging the iAIF curves over the individual patients for each of the two methods. Pharmacokinetic analysis using the Generalized kinetic model and each of the four AIF choices (iAIF and cAIF for each of the two image-based AIF estimation approaches) was applied to derive the volume transfer rate (K(trans)) and extravascular extracellular volume fraction (ve) in the areas of prostate tumor. Differences between the parameters obtained using iAIF and cAIF for a given method (intra-method comparison) as well as inter-method differences were quantified. The study utilized DCE MRI data collected in 17 patients with histologically confirmed PCa. 
Comparison at the level of the tumor region of interest (ROI) showed that the two automated methods resulted in significantly different (p<0.05) mean estimates of ve, but not of K(trans). When cAIFs were compared, different estimates were obtained for both ve and K(trans). Intra-method comparison between the iAIF- and cAIF-driven analyses showed a lack of effect on ve, while K(trans) values were significantly different for one of the methods. Our results indicate that the choice of the algorithm used for automated image-based AIF determination can lead to significant differences in the values of the estimated PK parameters. K(trans) estimates are more sensitive to the choice between cAIF/iAIF as compared to ve, leading to potentially significant differences depending on the AIF method. These observations may have practical consequences in evaluating the PK analysis results obtained in a multi-site setting.
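The PK modeling step described above uses the generalized kinetic (Tofts) model, in which tissue contrast concentration is the convolution of the AIF with an exponential kernel governed by K(trans) and ve. The following is a minimal numerical sketch, not the authors' code; the AIF shape and parameter values are illustrative only:

```python
import numpy as np

def tofts_tissue_concentration(t, cp, ktrans, ve):
    """Ct(t) = Ktrans * integral_0^t Cp(tau) * exp(-(Ktrans/ve) * (t - tau)) dtau,
    evaluated by discrete convolution on a uniform time grid t (minutes)."""
    dt = t[1] - t[0]
    kep = ktrans / ve                       # efflux rate constant (1/min)
    kernel = np.exp(-kep * t)               # impulse response sampled on t
    ct = ktrans * np.convolve(cp, kernel)[: len(t)] * dt
    return ct

t = np.linspace(0.0, 5.0, 301)              # 5-minute acquisition
cp = 5.0 * t * np.exp(-t / 0.5)             # toy AIF (gamma-variate-like shape)
ct = tofts_tissue_concentration(t, cp, ktrans=0.25, ve=0.3)
```

Fitting K(trans) and ve would then amount to least-squares matching of a measured Ct(t) against this forward model under a chosen AIF, which is exactly where the iAIF/cAIF choice studied above enters.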

An annotated test-retest collection of prostate multiparametric MRI

  • Fedorov, Andriy
  • Schwier, Michael
  • Clunie, David
  • Herz, Christian
  • Pieper, Steve
  • Kikinis, Ron
  • Tempany, Clare
  • Fennessy, Fiona
Scientific data 2018 Journal Article, cited 0 times
Website

Somatostatin Receptor Expression on VHL-Associated Hemangioblastomas Offers Novel Therapeutic Target

  • Feldman, Michael
  • Piazza, Martin G
  • Edwards, Nancy A
  • Ray-Chaudhury, Abhik
  • Maric, Dragan
  • Merrill, Marsha J
  • Zhuang, Zhengping
  • Chittiboina, Prashant
Neurosurgery 2015 Journal Article, cited 0 times

Identifying BAP1 Mutations in Clear-Cell Renal Cell Carcinoma by CT Radiomics: Preliminary Findings

  • Feng, Zhan
  • Zhang, Lixia
  • Qi, Zhong
  • Shen, Qijun
  • Hu, Zhengyu
  • Chen, Feng
Frontiers in Oncology 2020 Journal Article, cited 0 times
Website
To evaluate the potential application of computed tomography (CT) radiomics in the prediction of BRCA1-associated protein 1 (BAP1) mutation status in patients with clear-cell renal cell carcinoma (ccRCC). In this retrospective study, clinical and CT imaging data of 54 patients were retrieved from The Cancer Genome Atlas–Kidney Renal Clear Cell Carcinoma database. Among these, 45 patients had wild-type BAP1 and nine patients had BAP1 mutation. The texture features of tumor images were extracted using the Matlab-based IBEX package. To produce class-balanced data and improve the stability of prediction, we performed data augmentation for the BAP1 mutation group during cross validation. A model to predict BAP1 mutation status was constructed using Random Forest Classification algorithms and was evaluated using leave-one-out cross-validation. The Random Forest model for predicting BAP1 mutation status had an accuracy of 0.83, a sensitivity of 0.72, a specificity of 0.87, a precision of 0.65, an AUC of 0.77, and an F-score of 0.68. CT radiomics is a potential and feasible method for predicting BAP1 mutation status in patients with ccRCC.
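The evaluation protocol described above, a Random Forest assessed by leave-one-out cross-validation on a 9-vs-45 class-imbalanced cohort, can be sketched as follows. The data here are synthetic stand-ins for the IBEX texture features, and the paper's augmentation scheme is not reproduced (class weighting is used instead as a stand-in):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(54, 10))          # 54 patients x 10 synthetic texture features
y = np.array([1] * 9 + [0] * 45)       # 9 BAP1-mutant vs 45 wild-type labels

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    # refit the model for each held-out patient
    clf = RandomForestClassifier(
        n_estimators=100, class_weight="balanced", random_state=0
    )
    clf.fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])

accuracy = (preds == y).mean()
```

With real radiomics features, accuracy, sensitivity, specificity, and AUC would be computed from `preds` (or from out-of-fold probabilities) exactly as reported in the abstract.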

HEVC optimizations for medical environments

  • Fernández, DG
  • Del Barrio, AA
  • Botella, Guillermo
  • García, Carlos
  • Meyer-Baese, Uwe
  • Meyer-Baese, Anke
2016 Conference Proceedings, cited 5 times
Website

Characterization of Pulmonary Nodules Based on Features of Margin Sharpness and Texture

  • Ferreira, José Raniery
  • Oliveira, Marcelo Costa
  • de Azevedo-Marques, Paulo Mazzoncini
Journal of Digital Imaging 2017 Journal Article, cited 1 times
Website

On the Evaluation of the Suitability of the Materials Used to 3D Print Holographic Acoustic Lenses to Correct Transcranial Focused Ultrasound Aberrations

  • Ferri, Marcelino
  • Bravo, Jose Maria
  • Redondo, Javier
  • Jimenez-Gambin, Sergio
  • Jimenez, Noe
  • Camarena, Francisco
  • Sanchez-Perez, Juan Vicente
Polymers (Basel) 2019 Journal Article, cited 2 times
Website
The correction of transcranial focused ultrasound aberrations is a relevant topic for enhancing various non-invasive medical treatments. Presently, the most widely accepted method to improve focusing is the emission through multi-element phased arrays; however, a new disruptive technology, based on 3D printed holographic acoustic lenses, has recently been proposed, overcoming the spatial limitations of phased arrays due to the submillimetric precision of the latest generation of 3D printers. This work aims to optimize this recent solution. Particularly, the preferred acoustic properties of the polymers used for printing the lenses are systematically analyzed, paying special attention to the effect of p-wave speed and its relationship to the achievable voxel size of 3D printers. Results from simulations and experiments clearly show that, given a particular voxel size, there are optimal ranges for lens thickness and p-wave speed, fairly independent of the emitted frequency, the transducer aperture, or the transducer-target distance.
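The coupling between p-wave speed, lens thickness, and printable voxel size discussed above comes from the phase delay a lens column imparts relative to water. A back-of-the-envelope sketch with assumed values (the frequency and material speed below are illustrative, not the paper's):

```python
import numpy as np

f = 500e3            # ultrasound frequency (Hz), assumed
c_water = 1480.0     # speed of sound in water (m/s)
c_lens = 2500.0      # assumed p-wave speed in the printed polymer (m/s)

def phase_delay(d):
    """Phase delay (radians) of a lens column of thickness d (m) relative to
    the same path length in water: 2*pi*f*d*(1/c_water - 1/c_lens)."""
    return 2 * np.pi * f * d * (1 / c_water - 1 / c_lens)

# Thickness needed to sweep a full 2*pi of phase across the lens:
d_2pi = 2 * np.pi / phase_delay(1.0)
```

A p-wave speed closer to that of water increases the thickness needed to span 2π, so a fixed printer voxel then quantizes the phase more finely; this is one way to see why, for a given voxel size, optimal ranges of thickness and p-wave speed exist.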

Enhanced Numerical Method for the Design of 3-D-Printed Holographic Acoustic Lenses for Aberration Correction of Single-Element Transcranial Focused Ultrasound

  • Ferri, Marcelino
  • Bravo, José M.
  • Redondo, Javier
  • Sánchez-Pérez, Juan V.
Ultrasound in Medicine & Biology 2018 Journal Article, cited 0 times
Website
The correction of transcranial focused ultrasound aberrations is a relevant issue for enhancing various non-invasive medical treatments. The emission through multi-element phased arrays has been the most widely accepted method to improve focusing in recent years; however, the number and size of transducers represent a bottleneck that limits the focusing accuracy of the technique. To overcome this limitation, a new disruptive technology, based on 3-D-printed acoustic lenses, has recently been proposed. As the submillimeter precision of the latest generation of 3-D printers has been proven to overcome the spatial limitations of phased arrays, a new challenge is to improve the accuracy of the numerical simulations required to design this type of ultrasound lens. In the study described here, we evaluated two improvements in the numerical model applied in previous works for the design of 3-D-printed lenses: (i) allowing the propagation of shear waves in the skull by means of its simulation as an isotropic solid and (ii) introduction of absorption into the set of equations that describes the dynamics of the wave in both fluid and solid media. The results obtained in the numerical simulations are evidence that the inclusion of both s-waves and absorption significantly improves focusing.

LCD-OpenPACS: sistema integrado de telerradiologia com auxílio ao diagnóstico de nódulos pulmonares em exames de tomografia computadorizada [LCD-OpenPACS: an integrated teleradiology system with computer-aided diagnosis of pulmonary nodules in computed tomography exams]

  • Firmino Filho, José Macêdo
2015 Thesis, cited 1 times
Website

Computer-aided detection (CADe) and diagnosis (CADx) system for lung cancer with likelihood of malignancy

  • Firmino, Macedo
  • Angelo, Giovani
  • Morais, Higor
  • Dantas, Marcel R
  • Valentim, Ricardo
Biomedical engineering online 2016 Journal Article, cited 63 times
Website
BACKGROUND: CADe and CADx systems for the detection and diagnosis of lung cancer have been important areas of research in recent decades. However, these areas are being worked on separately. CADe systems do not present the radiological characteristics of tumors, and CADx systems do not detect nodules and do not have good levels of automation. As a result, these systems are not yet widely used in clinical settings. METHODS: The purpose of this article is to develop a new system for the detection and diagnosis of pulmonary nodules on CT images, grouping them into a single system for the identification and characterization of nodules to improve the level of automation. The article also presents as contributions the use of the Watershed and Histogram of Oriented Gradients (HOG) techniques for distinguishing possible nodules from other structures and for feature extraction of pulmonary nodules, respectively. The diagnosis is based on the likelihood of malignancy, giving radiologists more support in decision making. A rule-based classifier and a Support Vector Machine (SVM) have been used to eliminate false positives. RESULTS: The database used in this research consisted of 420 cases obtained randomly from LIDC-IDRI. The segmentation method achieved an accuracy of 97 % and the detection system showed a sensitivity of 94.4 % with 7.04 false positives per case. Different types of nodules (isolated, juxtapleural, juxtavascular and ground-glass) with diameters between 3 mm and 30 mm have been detected. For the diagnosis of malignancy, our system presented ROC curves with areas of 0.91 for nodules highly unlikely to be malignant, 0.80 for nodules moderately unlikely to be malignant, 0.72 for nodules with indeterminate malignancy, 0.67 for nodules moderately suspicious of being malignant, and 0.83 for nodules highly suspicious of being malignant. 
CONCLUSIONS: From our preliminary results, we believe that our system is promising for clinical applications assisting radiologists in the detection and diagnosis of lung cancer.

A Radiogenomic Approach for Decoding Molecular Mechanisms Underlying Tumor Progression in Prostate Cancer

  • Fischer, Sarah
  • Tahoun, Mohamed
  • Klaan, Bastian
  • Thierfelder, Kolja M
  • Weber, Marc-Andre
  • Krause, Bernd J
  • Hakenberg, Oliver
  • Fuellen, Georg
  • Hamed, Mohamed
Cancers (Basel) 2019 Journal Article, cited 0 times
Website
Prostate cancer (PCa) is a genetically heterogeneous cancer entity that causes challenges in pre-treatment clinical evaluation, such as the correct identification of the tumor stage. Conventional clinical tests based on digital rectal examination, Prostate-Specific Antigen (PSA) levels, and Gleason score still lack accuracy for stage prediction. We hypothesize that unraveling the molecular mechanisms underlying PCa staging via integrative analysis of multi-OMICs data could significantly improve the prediction accuracy for PCa pathological stages. We present a radiogenomic approach comprising clinical, imaging, and two genomic (gene and miRNA expression) datasets for 298 PCa patients. Comprehensive analysis of gene and miRNA expression profiles for two frequent PCa stages (T2c and T3b) unraveled the molecular characteristics for each stage and the corresponding gene regulatory interaction network that may drive tumor upstaging from T2c to T3b. Furthermore, four biomarkers (ANPEP, mir-217, mir-592, mir-6715b) were found to distinguish between the two PCa stages and were highly correlated (average r = +/- 0.75) with corresponding aggressiveness-related imaging features in both tumor stages. When combined with related clinical features, these biomarkers markedly improved the prediction accuracy for the pathological stage. Our prediction model exhibits high potential to yield clinically relevant results for characterizing PCa aggressiveness.

The ASNR-ACR-RSNA Common Data Elements Project: What Will It Do for the House of Neuroradiology?

  • Flanders, AE
  • Jordan, JE
American Journal of Neuroradiology 2018 Journal Article, cited 0 times
Website

Computer-aided nodule assessment and risk yield risk management of adenocarcinoma: the future of imaging?

  • Foley, Finbar
  • Rajagopalan, Srinivasan
  • Raghunath, Sushravya M
  • Boland, Jennifer M
  • Karwoski, Ronald A
  • Maldonado, Fabien
  • Bartholmai, Brian J
  • Peikert, Tobias
2016 Conference Proceedings, cited 7 times
Website

Breast Lesion Segmentation in DCE- MRI Imaging

  • Frackiewicz, Mariusz
  • Koper, Zuzanna
  • Palus, Henryk
  • Borys, Damian
  • Psiuk-Maksymowicz, Krzysztof
2018 Conference Proceedings, cited 0 times
Website
Breast cancer is one of the most common cancers in women. Typically, the course of the disease is asymptomatic in the early stages of breast cancer. Breast imaging examinations allow early detection of the cancer, which is associated with increased chances of a complete cure. There are many breast imaging techniques, such as mammography (MM), ultrasound imaging (US), positron-emission tomography (PET), computed tomography (CT), and magnetic resonance imaging (MRI). These imaging techniques differ in terms of effectiveness, price, type of physical phenomenon, impact on the patient, and availability. In this paper, we focus on MRI imaging and compare three breast lesion segmentation algorithms that have been tested on the publicly available QIN Breast DCE-MRI database. The obtained values of the Dice and Jaccard indices indicate that the best segmentation is achieved using the k-means algorithm.
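The two overlap scores used above to rank the segmentation algorithms, Dice = 2|A∩B|/(|A|+|B|) and Jaccard = |A∩B|/|A∪B|, can be computed directly from binary masks. A small self-contained sketch (toy masks, not the paper's data):

```python
import numpy as np

def dice_jaccard(seg, ref):
    """Return (Dice, Jaccard) overlap of two binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    dice = 2.0 * inter / (seg.sum() + ref.sum())
    jaccard = inter / union
    return dice, jaccard

a = np.zeros((8, 8)); a[2:6, 2:6] = 1     # toy "segmentation" (16 pixels)
b = np.zeros((8, 8)); b[3:7, 3:7] = 1     # toy "reference" (16 pixels)
d, j = dice_jaccard(a, b)                  # d = 0.5625, j = 9/23
```

The two indices are monotonically related (j = d / (2 − d)), so they rank algorithms identically; reporting both, as the paper does, mainly aids comparison with other studies.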

A radiomics model from joint FDG-PET and MRI texture features for the prediction of lung metastases in soft-tissue sarcomas of the extremities

  • Freeman, CR
  • Skamene, SR
  • El Naqa, I
Physics in medicine and biology 2015 Journal Article, cited 199 times
Website

Supervised Machine-Learning Framework and Classifier Evaluation for Automated Three-dimensional Medical Image Segmentation based on Body MRI

  • Frischmann, Patrick
2013 Thesis, cited 0 times
Website

Automatic Detection of Lung Nodules Using 3D Deep Convolutional Neural Networks

  • Fu, Ling
  • Ma, Jingchen
  • Chen, Yizhi
  • Larsson, Rasmus
  • Zhao, Jun
Journal of Shanghai Jiaotong University (Science) 2019 Journal Article, cited 0 times
Website
Lung cancer is the leading cause of cancer deaths worldwide. Accurate early diagnosis is critical in increasing the 5-year survival rate of lung cancer, so the efficient and accurate detection of lung nodules, the potential precursors to lung cancer, is paramount. In this paper, a computer-aided lung nodule detection system using 3D deep convolutional neural networks (CNNs) is developed. The first multi-scale 11-layer 3D fully convolutional neural network (FCN) is used for screening all lung nodule candidates. Considering relative small sizes of lung nodules and limited memory, the input of the FCN consists of 3D image patches rather than of whole images. The candidates are further classified in the second CNN to get the final result. The proposed method achieves high performance in the LUNA16 challenge and demonstrates the effectiveness of using 3D deep CNNs for lung nodule detection.

A fast and scalable method for quality assurance of deformable image registration on lung CT scans using convolutional neural networks

  • Galib, Shaikat M
  • Lee, Hyoung K
  • Guy, Christopher L
  • Riblett, Matthew J
  • Hugo, Geoffrey D
Med Phys 2020 Journal Article, cited 1 times
Website
PURPOSE: To develop and evaluate a method to automatically identify and quantify deformable image registration (DIR) errors between lung computed tomography (CT) scans for quality assurance (QA) purposes. METHODS: We propose a deep learning method to flag registration errors. The method involves preparation of a dataset for machine learning model training and testing, design of a three-dimensional (3D) convolutional neural network architecture that classifies registrations into good or poor classes, and evaluation of a metric called registration error index (REI) which provides a quantitative measure of registration error. RESULTS: Our study shows that, despite having limited number of training images available (10 CT scan pairs for training and 17 CT scan pairs for testing), the method achieves 0.882 AUC-ROC on the test dataset. Furthermore, the combined standard uncertainty of the estimated REI by our model lies within +/- 0.11 (+/- 11% of true REI value), with a confidence level of approximately 68%. CONCLUSIONS: We have developed and evaluated our method using original clinical registrations without generating any synthetic/simulated data. Moreover, test data were acquired from a different environment than that of training data, so that the method was validated robustly. The results of this study showed that our algorithm performs reasonably well in challenging scenarios.

Extraction of pulmonary vessels and tumour from plain computed tomography sequence

  • Ganapathy, Sridevi
  • Ashar, Kinnari
  • Kathirvelu, D
2018 Conference Proceedings, cited 0 times
Website

Performance analysis for nonlinear tomographic data processing

  • Gang, Grace J
  • Guo, Xueqi
  • Stayman IV, J Webster
2019 Conference Proceedings, cited 0 times
Website

Simultaneous emission and attenuation reconstruction in time-of-flight PET using a reference object

  • Garcia-Perez, P.
  • Espana, S.
EJNMMI Phys 2020 Journal Article, cited 0 times
Website
BACKGROUND: Simultaneous reconstruction of emission and attenuation images in time-of-flight (TOF) positron emission tomography (PET) does not provide a unique solution. In this study, we propose to solve this limitation by including additional information given by a reference object with known attenuation placed outside the patient. Different configurations of the reference object were studied including geometry, material composition, and activity, and an optimal configuration was defined. In addition, this configuration was tested for different timing resolutions and noise levels. RESULTS: The proposed strategy was tested in 2D simulations obtained by forward projection of available PET/CT data and noise was included using Monte Carlo techniques. Obtained results suggest that the optimal configuration corresponds to a water cylinder inserted in the patient table and filled with activity. In that case, mean differences between reconstructed and true images were below 10%. However, better results can be obtained by increasing the activity of the reference object. CONCLUSION: This study shows promising results that might allow to obtain an accurate attenuation map from pure TOF-PET data without prior knowledge obtained from CT, MRI, or transmission scans.

An Improved Mammogram Classification Approach Using Back Propagation Neural Network

  • Gautam, Aman
  • Bhateja, Vikrant
  • Tiwari, Ananya
  • Satapathy, Suresh Chandra
2017 Book Section, cited 16 times
Website

A resource for the assessment of lung nodule size estimation methods: database of thoracic CT scans of an anthropomorphic phantom

  • Gavrielides, Marios A
  • Kinnard, Lisa M
  • Myers, Kyle J
  • Peregoy, Jennifer
  • Pritchard, William F
  • Zeng, Rongping
  • Esparza, Juan
  • Karanian, John
  • Petrick, Nicholas
Optics express 2010 Journal Article, cited 50 times
Website
A number of interrelated factors can affect the precision and accuracy of lung nodule size estimation. To quantify the effect of these factors, we have been conducting phantom CT studies using an anthropomorphic thoracic phantom containing a vasculature insert to which synthetic nodules were inserted or attached. Ten repeat scans were acquired on different multi-detector scanners, using several sets of acquisition and reconstruction protocols and various nodule characteristics (size, shape, density, location). This study design enables both bias and variance analysis for the nodule size estimation task. The resulting database is in the process of becoming publicly available as a resource to facilitate the assessment of lung nodule size estimation methodologies and to enable comparisons between different methods regarding measurement error. This resource complements public databases of clinical data and will contribute towards the development of procedures that will maximize the utility of CT imaging for lung cancer screening and tumor therapy evaluation.

Benefit of overlapping reconstruction for improving the quantitative assessment of CT lung nodule volume

  • Gavrielides, Marios A
  • Zeng, Rongping
  • Myers, Kyle J
  • Sahiner, Berkman
  • Petrick, Nicholas
Academic radiology 2013 Journal Article, cited 23 times
Website
RATIONALE AND OBJECTIVES: The aim of this study was to quantify the effect of overlapping reconstruction on the precision and accuracy of lung nodule volume estimates in a phantom computed tomographic (CT) study. MATERIALS AND METHODS: An anthropomorphic phantom was used with a vasculature insert on which synthetic lung nodules were attached. Repeated scans of the phantom were acquired using a 64-slice CT scanner. Overlapping and contiguous reconstructions were performed for a range of CT imaging parameters (exposure, slice thickness, pitch, reconstruction kernel) and a range of nodule characteristics (size, density). Nodule volume was estimated with a previously developed matched-filter algorithm. RESULTS: Absolute percentage bias across all nodule sizes (n = 2880) was significantly lower when overlapping reconstruction was used, with an absolute percentage bias of 6.6% (95% confidence interval [CI], 6.4-6.9), compared to 13.2% (95% CI, 12.7-13.8) for contiguous reconstruction. Overlapping reconstruction also showed a precision benefit, with a lower standard percentage error of 7.1% (95% CI, 6.9-7.2) compared with 15.3% (95% CI, 14.9-15.7) for contiguous reconstructions across all nodules. Both effects were more pronounced for the smaller, subcentimeter nodules. CONCLUSIONS: These results support the use of overlapping reconstruction to improve the quantitative assessment of nodule size with CT imaging.
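The two figures of merit reported above, percentage bias (accuracy) and percentage standard error (precision), reduce to simple statistics over repeated volume estimates against a known phantom truth. A sketch with hypothetical numbers:

```python
import numpy as np

def volume_error_metrics(estimated, true_volume):
    """Return (absolute percentage bias, percentage standard error)
    of repeated volume estimates against a known true volume."""
    pct_err = 100.0 * (np.asarray(estimated, dtype=float) - true_volume) / true_volume
    return abs(pct_err.mean()), pct_err.std(ddof=1)

est = [95.0, 102.0, 98.0, 105.0, 99.0]   # hypothetical repeated estimates (mm^3)
bias, spread = volume_error_metrics(est, true_volume=100.0)
```

In the study above these quantities were pooled over repeat scans, nodule sizes, and acquisition protocols, which is what allows the overlapping-vs-contiguous reconstruction comparison with confidence intervals.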

Automatic Segmentation of Colon in 3D CT Images and Removal of Opacified Fluid Using Cascade Feed Forward Neural Network

  • Gayathri Devi, K
  • Radhakrishnan, R
Computational and Mathematical Methods in Medicine 2015 Journal Article, cited 5 times
Website

Segmentation of colon and removal of opacified fluid for virtual colonoscopy

  • Gayathri Devi, K
  • Radhakrishnan, R
  • Rajamani, Kumar
Pattern Analysis and Applications 2017 Journal Article, cited 0 times
Website

Ultra-Fast 3D GPGPU Region Extractions for Anatomy Segmentation

  • George, Jose
  • Mysoon, N. S.
  • Antony, Nixima
2019 Conference Paper, cited 0 times
Website
Region extractions are ubiquitous in anatomy segmentation, and region growing is one such method. Starting from an initial seed point, it grows a region of interest until all valid voxels are checked, thereby producing an object segmentation. Although widely used, it is computationally expensive because of its sequential approach. In this paper, we present a parallel, high-performance alternative to region growing using GPGPU capability. The idea is to approximate the requirements of region growing within an algorithm using a parallel connected-component labeling (CCL) solution. To showcase this, we selected a typical lung segmentation problem based on region growing. On the CPU, the sequential approach consists of 3D region growing inside a mask that is created after applying a threshold. On the GPU, the parallel alternative is to apply parallel CCL and select the biggest region of interest. We evaluated our approach on 45 clinical chest CT scans from the LIDC data in the TCIA repository. With respect to the CPU, our CUDA-based GPU implementation achieved an average performance improvement of approximately 240×. The speedup is so profound that the method can even be applied to 4D lung segmentation at 6 fps.
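The "label connected components, then keep the biggest region" step described above can be illustrated on the CPU with a breadth-first labeling pass. This 2D sketch is the sequential analogue of the paper's parallel CUDA CCL, not the authors' implementation:

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """Label 4-connected components of a 2D binary mask and return a mask
    containing only the largest one (assumes at least one foreground pixel)."""
    labels = np.zeros(mask.shape, dtype=int)
    sizes, current = {}, 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                        # pixel already labeled
        current += 1
        labels[start] = current
        queue, size = deque([start]), 1
        while queue:                        # BFS flood of one component
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
                    size += 1
        sizes[current] = size
    biggest = max(sizes, key=sizes.get)
    return labels == biggest

m = np.zeros((6, 6), dtype=bool)
m[0:2, 0:2] = True          # small 4-pixel blob
m[3:6, 2:6] = True          # large 12-pixel blob
big = largest_component(m)   # keeps only the 12-pixel region
```

In the lung-segmentation use case above, the mask would come from thresholding the CT volume, the labeling would be 3D and parallelized per voxel on the GPU, and the largest component would correspond to the lung field.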

Synthetic Head and Neck and Phantom Images for Determining Deformable Image Registration Accuracy in Magnetic Resonance Imaging

  • Ger, Rachel B
  • Yang, Jinzhong
  • Ding, Yao
  • Jacobsen, Megan C
  • Cardenas, Carlos E
  • Fuller, Clifton D
  • Howell, Rebecca M
  • Li, Heng
  • Stafford, R Jason
  • Zhou, Shouhao
Medical physics 2018 Journal Article, cited 0 times
Website

Radiomics features of the primary tumor fail to improve prediction of overall survival in large cohorts of CT- and PET-imaged head and neck cancer patients

  • Ger, Rachel B
  • Zhou, Shouhao
  • Elgohari, Baher
  • Elhalawani, Hesham
  • Mackin, Dennis M
  • Meier, Joseph G
  • Nguyen, Callistus M
  • Anderson, Brian M
  • Gay, Casey
  • Ning, Jing
  • Fuller, Clifton D
  • Li, Heng
  • Howell, Rebecca M
  • Layman, Rick R
  • Mawlawi, Osama
  • Stafford, R Jason
  • Aerts, Hugo JWL
  • Court, Laurence E.
PLoS One 2019 Journal Article, cited 0 times
Website
Radiomics studies require many patients in order to power them, thus patients are often combined from different institutions and using different imaging protocols. Various studies have shown that imaging protocols affect radiomics feature values. We examined whether using data from cohorts with controlled imaging protocols improved patient outcome models. We retrospectively reviewed 726 CT and 686 PET images from head and neck cancer patients, who were divided into training or independent testing cohorts. For each patient, radiomics features with different preprocessing were calculated and two clinical variables-HPV status and tumor volume-were also included. A Cox proportional hazards model was built on the training data by using bootstrapped Lasso regression to predict overall survival. The effect of controlled imaging protocols on model performance was evaluated by subsetting the original training and independent testing cohorts to include only patients whose images were obtained using the same imaging protocol and vendor. Tumor volume, HPV status, and two radiomics covariates were selected for the CT model, resulting in an AUC of 0.72. However, volume alone produced a higher AUC, whereas adding radiomics features reduced the AUC. HPV status and one radiomics feature were selected as covariates for the PET model, resulting in an AUC of 0.59, but neither covariate was significantly associated with survival. Limiting the training and independent testing to patients with the same imaging protocol reduced the AUC for CT patients to 0.55, and no covariates were selected for PET patients. Radiomics features were not consistently associated with survival in CT or PET images of head and neck patients, even within patients with the same imaging protocol.

Glioblastoma Multiforme: Exploratory Radiogenomic Analysis by Using Quantitative Image Features

  • Gevaert, Olivier
  • Mitchell, Lex A
  • Achrol, Achal S
  • Xu, Jiajing
  • Echegaray, Sebastian
  • Steinberg, Gary K
  • Cheshier, Samuel H
  • Napel, Sandy
  • Zaharchuk, Greg
  • Plevritis, Sylvia K
Radiology 2014 Journal Article, cited 151 times
Website
Purpose: To derive quantitative image features from magnetic resonance (MR) images that characterize the radiographic phenotype of glioblastoma multiforme (GBM) lesions and to create radiogenomic maps associating these features with various molecular data. Materials and Methods: Clinical, molecular, and MR imaging data for GBMs in 55 patients were obtained from the Cancer Genome Atlas and the Cancer Imaging Archive after local ethics committee and institutional review board approval. Regions of interest (ROIs) corresponding to enhancing necrotic portions of tumor and peritumoral edema were drawn, and quantitative image features were derived from these ROIs. Robust quantitative image features were defined on the basis of an intraclass correlation coefficient of 0.6 for a digital algorithmic modification and a test-retest analysis. The robust features were visualized by using hierarchic clustering and were correlated with survival by using Cox proportional hazards modeling. Next, these robust image features were correlated with manual radiologist annotations from the Visually Accessible Rembrandt Images (VASARI) feature set and GBM molecular subgroups by using nonparametric statistical tests. A bioinformatic algorithm was used to create gene expression modules, defined as a set of coexpressed genes together with a multivariate model of cancer driver genes predictive of the module's expression pattern. Modules were correlated with robust image features by using the Spearman correlation test to create radiogenomic maps and to link robust image features with molecular pathways. Results: Eighteen image features passed the robustness analysis and were further analyzed for the three types of ROIs, for a total of 54 image features. Three enhancement features were significantly correlated with survival, 77 significant correlations were found between robust quantitative features and the VASARI feature set, and seven image features were correlated with molecular subgroups (P < .05 for all). A radiogenomics map was created to link image features with gene expression modules and allowed linkage of 56% (30 of 54) of the image features with biologic processes. Conclusion: Radiogenomic approaches in GBM have the potential to predict clinical and molecular characteristics of tumors noninvasively.

Non-small cell lung cancer: identifying prognostic imaging biomarkers by leveraging public gene expression microarray data--methods and preliminary results

  • Gevaert, O.
  • Xu, J.
  • Hoang, C. D.
  • Leung, A. N.
  • Xu, Y.
  • Quon, A.
  • Rubin, D. L.
  • Napel, S.
  • Plevritis, S. K.
Radiology 2012 Journal Article, cited 187 times
Website
PURPOSE: To identify prognostic imaging biomarkers in non-small cell lung cancer (NSCLC) by means of a radiogenomics strategy that integrates gene expression and medical images in patients for whom survival outcomes are not available by leveraging survival data in public gene expression data sets. MATERIALS AND METHODS: A radiogenomics strategy for associating image features with clusters of coexpressed genes (metagenes) was defined. First, a radiogenomics correlation map is created for a pairwise association between image features and metagenes. Next, predictive models of metagenes are built in terms of image features by using sparse linear regression. Similarly, predictive models of image features are built in terms of metagenes. Finally, the prognostic significance of the predicted image features is evaluated in a public gene expression data set with survival outcomes. This radiogenomics strategy was applied to a cohort of 26 patients with NSCLC for whom gene expression and 180 image features from computed tomography (CT) and positron emission tomography (PET)/CT were available. RESULTS: There were 243 statistically significant pairwise correlations between image features and metagenes of NSCLC. Metagenes were predicted in terms of image features with an accuracy of 59%-83%. One hundred fourteen of 180 CT image features and the PET standardized uptake value were predicted in terms of metagenes with an accuracy of 65%-86%. When the predicted image features were mapped to a public gene expression data set with survival outcomes, tumor size, edge shape, and sharpness ranked highest for prognostic significance. CONCLUSION: This radiogenomics strategy for identifying imaging biomarkers may enable a more rapid evaluation of novel imaging modalities, thereby accelerating their translation to personalized medicine.

Medical Imaging Segmentation Assessment via Bayesian Approaches to Fusion, Accuracy and Variability Estimation with Application to Head and Neck Cancer

  • Ghattas, Andrew Emile
2017 Thesis, cited 0 times
Website

Deep Learning for Low-Dose CT Denoising Using Perceptual Loss and Edge Detection Layer

  • Gholizadeh-Ansari, M.
  • Alirezaie, J.
  • Babyn, P.
J Digit Imaging 2019 Journal Article, cited 1 times
Website
Low-dose CT denoising is a challenging task that has been studied by many researchers. Some studies have used deep neural networks to improve the quality of low-dose CT images and achieved fruitful results. In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolutions, helping to capture more contextual information in fewer layers. We have also employed residual learning by creating shortcut connections to transmit image information from the early layers to later ones. To further improve the performance of the network, we have introduced a non-trainable edge detection layer that extracts edges in horizontal, vertical, and diagonal directions. Finally, we demonstrate that optimizing the network with a combination of mean-square error loss and perceptual loss preserves many structural details in the CT image. This objective function does not suffer from the over-smoothing and blurring effects caused by per-pixel loss or the grid-like artifacts resulting from perceptual loss. The experiments show that each modification improves the outcome while changing the complexity of the network only minimally.
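The dilated-convolution idea this abstract relies on (a wider receptive field from the same number of taps) can be illustrated in a few lines of NumPy. This is a generic 1D sketch, not the authors' network, and the function name is ours:

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1D convolution with gaps of (dilation - 1) samples
    between kernel taps, as in dilated (atrous) convolution."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field
    out = np.empty(len(signal) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * signal[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
taps = np.ones(3)
print(dilated_conv1d(x, taps, dilation=1))  # receptive field 3
print(dilated_conv1d(x, taps, dilation=2))  # receptive field 5, same 3 taps
```

Stacking layers with dilation rates 1, 2, 4, ... grows the receptive field exponentially while the parameter count stays linear, which is what lets a network capture more context in fewer layers.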

Brain tumor detection from MRI image: An approach

  • Ghosh, Debjyoti
  • Bandyopadhyay, Samir Kumar
IJAR 2017 Journal Article, cited 0 times
Website

Role of Imaging in the Era of Precision Medicine

  • Giardino, Angela
  • Gupta, Supriya
  • Olson, Emmi
  • Sepulveda, Karla
  • Lenchik, Leon
  • Ivanidze, Jana
  • Rakow-Penner, Rebecca
  • Patel, Midhir J
  • Subramaniam, Rathan M
  • Ganeshan, Dhakshinamoorthy
Academic radiology 2017 Journal Article, cited 12 times
Website

Automatic Multi-organ Segmentation on Abdominal CT with Dense V-networks

  • E. Gibson
  • F. Giganti
  • Y. Hu
  • E. Bonmati
  • S. Bandula
  • K. Gurusamy
  • B. Davidson
  • S. P. Pereira
  • M. J. Clarkson
  • D. C. Barratt
IEEE Transactions on Medical Imaging 2018 Journal Article, cited 14 times
Website

Towards Image-Guided Pancreas and Biliary Endoscopy: Automatic Multi-organ Segmentation on Abdominal CT with Dense Dilated Networks

  • Gibson, Eli
  • Giganti, Francesco
  • Hu, Yipeng
  • Bonmati, Ester
  • Bandula, Steve
  • Gurusamy, Kurinchi
  • Davidson, Brian R
  • Pereira, Stephen P
  • Clarkson, Matthew J
  • Barratt, Dean C
2017 Conference Proceedings, cited 14 times
Website

Quantitative CT assessment of emphysema and airways in relation to lung cancer risk

  • Gierada, David S
  • Guniganti, Preethi
  • Newman, Blake J
  • Dransfield, Mark T
  • Kvale, Paul A
  • Lynch, David A
  • Pilgram, Thomas K
Radiology 2011 Journal Article, cited 41 times
Website

Projected outcomes using different nodule sizes to define a positive CT lung cancer screening examination

  • Gierada, David S
  • Pinsky, Paul
  • Nath, Hrudaya
  • Chiles, Caroline
  • Duan, Fenghai
  • Aberle, Denise R
Journal of the National Cancer Institute 2014 Journal Article, cited 74 times
Website
Background Computed tomography (CT) screening for lung cancer has been associated with a high frequency of false-positive results because of the high prevalence of indeterminate but usually benign small pulmonary nodules. The acceptability of reducing false-positive rates and diagnostic evaluations by increasing the nodule size threshold for a positive screen depends on the projected balance between benefits and risks. Methods We examined data from the National Lung Screening Trial (NLST) to estimate screening CT performance and outcomes for scans with nodules above the 4 mm NLST threshold used to classify a CT screen as positive. Outcomes assessed included screening results, subsequent diagnostic tests performed, lung cancer histology and stage distribution, and lung cancer mortality. Sensitivity, specificity, positive predictive value, and negative predictive value were calculated for the different nodule size thresholds. All statistical tests were two-sided. Results In 64% of positive screens (11 598/18 141), the largest nodule was 7 mm or less in greatest transverse diameter. By increasing the threshold, the percentages of lung cancer diagnoses that would have been missed or delayed and false positives that would have been avoided progressively increased, for example from 1.0% and 15.8% at a 5 mm threshold to 10.5% and 65.8% at an 8 mm threshold, respectively. The projected reductions in postscreening follow-up CT scans and invasive procedures also increased as the threshold was raised. Differences across nodule sizes in lung cancer histology and stage distribution were small but statistically significant. There were no differences across nodule sizes in survival or mortality. Conclusion Raising the nodule size threshold for a positive screen would substantially reduce false-positive CT screenings and medical resource utilization with a variable impact on screening outcomes.
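The four screening measures named above follow directly from the 2x2 confusion counts at a given threshold; a minimal sketch with hypothetical counts (not the NLST figures) shows the trade-off of raising the size threshold:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-performance measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# Hypothetical counts at a low and a high nodule-size threshold:
low = screening_metrics(tp=95, fp=1500, fn=5, tn=8400)
high = screening_metrics(tp=85, fp=500, fn=15, tn=9400)

# Raising the threshold misses more cancers but avoids many false positives.
assert high["sensitivity"] < low["sensitivity"]
assert high["specificity"] > low["specificity"] and high["ppv"] > low["ppv"]
```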

Machine Learning in Medical Imaging

  • Giger, M. L.
J Am Coll Radiol 2018 Journal Article, cited 157 times
Website
Advances in both imaging and computers have synergistically led to a rapid rise in the potential use of artificial intelligence in various radiological imaging tasks, such as risk assessment, detection, diagnosis, prognosis, and therapy response, as well as in multi-omics disease discovery. A brief overview of the field is given here, allowing the reader to recognize the terminology, the various subfields, and components of machine learning, as well as the clinical potential. Radiomics, an expansion of computer-aided diagnosis, has been defined as the conversion of images to minable data. The ultimate benefit of quantitative radiomics is to (1) yield predictive image-based phenotypes of disease for precision medicine or (2) yield quantitative image-based phenotypes for data mining with other -omics for discovery (ie, imaging genomics). For deep learning in radiology to succeed, note that well-annotated large data sets are needed since deep networks are complex, computer software and hardware are evolving constantly, and subtle differences in disease states are more difficult to perceive than differences in everyday objects. In the future, machine learning in radiology is expected to have a substantial clinical impact with imaging examinations being routinely obtained in clinical practice, providing an opportunity to improve decision support in medical image interpretation. The term of note is decision support, indicating that computers will augment human decision making, making it more effective and efficient. The clinical impact of having computers in the routine clinical practice may allow radiologists to further integrate their knowledge with their clinical colleagues in other medical specialties and allow for precision medicine.

Radiomics: Images are more than pictures, they are data

  • Gillies, Robert J
  • Kinahan, Paul E
  • Hricak, Hedvig
Radiology 2015 Journal Article, cited 694 times
Website

Intuitive Error Space Exploration of Medical Image Data in Clinical Daily Routine

  • Gillmann, Christina
  • Arbeláez, Pablo
  • Peñaloza, José Tiberio Hernández
  • Hagen, Hans
  • Wischgoll, Thomas
2017 Conference Paper, cited 3 times
Website

Planning, guidance, and quality assurance of pelvic screw placement using deformable image registration

  • Goerres, J.
  • Uneri, A.
  • Jacobson, M.
  • Ramsay, B.
  • De Silva, T.
  • Ketcha, M.
  • Han, R.
  • Manbachi, A.
  • Vogt, S.
  • Kleinszig, G.
  • Wolinsky, J. P.
  • Osgood, G.
  • Siewerdsen, J. H.
Phys Med Biol 2017 Journal Article, cited 4 times
Website
Percutaneous pelvic screw placement is challenging due to narrow bone corridors surrounded by vulnerable structures and difficult visual interpretation of complex anatomical shapes in 2D x-ray projection images. To address these challenges, a system for planning, guidance, and quality assurance (QA) is presented, providing functionality analogous to surgical navigation, but based on robust 3D-2D image registration techniques using fluoroscopy images already acquired in routine workflow. Two novel aspects of the system are investigated: automatic planning of pelvic screw trajectories and the ability to account for deformation of surgical devices (K-wire deflection). Atlas-based registration is used to calculate a patient-specific plan of screw trajectories in preoperative CT. 3D-2D registration aligns the patient to CT within the projective geometry of intraoperative fluoroscopy. Deformable known-component registration (dKC-Reg) localizes the surgical device, and the combination of plan and device location is used to provide guidance and QA. A leave-one-out analysis evaluated the accuracy of automatic planning, and a cadaver experiment compared the accuracy of dKC-Reg to rigid approaches (e.g. optical tracking). Surgical plans conformed within the bone cortex by 3-4 mm for the narrowest corridor (superior pubic ramus) and >5 mm for the widest corridor (tear drop). The dKC-Reg algorithm localized the K-wire tip within 1.1 mm and 1.4 degrees and was consistently more accurate than rigid-body tracking (errors up to 9 mm). The system was shown to automatically compute reliable screw trajectories and accurately localize deformed surgical devices (K-wires). Such capability could improve guidance and QA in orthopaedic surgery, where workflow is impeded by manual planning, conventional tool trackers add complexity and cost, rigid tool assumptions are often inaccurate, and qualitative interpretation of complex anatomy from 2D projections is prone to trial-and-error with extended fluoroscopy time.

DeepCADe: A Deep Learning Architecture for the Detection of Lung Nodules in CT Scans

  • Golan, Rotem
2018 Thesis, cited 0 times
Website

Lung nodule detection in CT images using deep convolutional neural networks

  • Golan, Rotem
  • Jacob, Christian
  • Denzinger, Jörg
2016 Conference Proceedings, cited 26 times
Website

Automatic detection of pulmonary nodules in CT images by incorporating 3D tensor filtering with local image feature analysis

  • Gong, J.
  • Liu, J. Y.
  • Wang, L. J.
  • Sun, X. W.
  • Zheng, B.
  • Nie, S. D.
Physica Medica 2018 Journal Article, cited 4 times
Website

Optimal statistical incorporation of independent feature stability information into radiomics studies

  • Götz, Michael
  • Maier-Hein, Klaus H
Scientific Reports 2020 Journal Article, cited 0 times
Website

Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique

  • Greenspan, Hayit
  • van Ginneken, Bram
  • Summers, Ronald M
IEEE Transactions on Medical Imaging 2016 Journal Article, cited 395 times
Website

Imaging and clinical data archive for head and neck squamous cell carcinoma patients treated with radiotherapy

  • Grossberg, Aaron J
  • Mohamed, Abdallah SR
  • El Halawani, Hesham
  • Bennett, William C
  • Smith, Kirk E
  • Nolan, Tracy S
  • Williams, Bowman
  • Chamchod, Sasikarn
  • Heukelom, Jolien
  • Kantor, Michael E
Scientific data 2018 Journal Article, cited 0 times
Website

Imaging-genomics reveals driving pathways of MRI derived volumetric tumor phenotype features in Glioblastoma

  • Grossmann, Patrick
  • Gutman, David A
  • Dunn, William D
  • Holder, Chad A
  • Aerts, Hugo JWL
BMC cancer 2016 Journal Article, cited 21 times
Website

Defining the biological and clinical basis of radiomics: towards clinical imaging biomarkers

  • Grossmann, Patrick Benedict Hans Juan
2018 Thesis, cited 0 times
Website

Quantitative Computed Tomographic Descriptors Associate Tumor Shape Complexity and Intratumor Heterogeneity with Prognosis in Lung Adenocarcinoma

  • Grove, Olya
  • Berglund, Anders E
  • Schabath, Matthew B
  • Aerts, Hugo JWL
  • Dekker, Andre
  • Wang, Hua
  • Velazquez, Emmanuel Rios
  • Lambin, Philippe
  • Gu, Yuhua
  • Balagurunathan, Yoganand
  • Eikman, E.
  • Gatenby, Robert A
  • Eschrich, S
  • Gillies, Robert J
PLoS One 2015 Journal Article, cited 87 times
Website
Two CT features were developed to quantitatively describe lung adenocarcinomas by scoring tumor shape complexity (feature 1: convexity) and intratumor density variation (feature 2: entropy ratio) in routinely obtained diagnostic CT scans. The developed quantitative features were analyzed in two independent cohorts (cohort 1: n = 61; cohort 2: n = 47) of patients diagnosed with primary lung adenocarcinoma, retrospectively curated to include imaging and clinical data. Preoperative chest CTs were segmented semi-automatically. Segmented tumor regions were further subdivided into core and boundary sub-regions, to quantify intensity variations across the tumor. Reproducibility of the features was evaluated in an independent test-retest dataset of 32 patients. The proposed metrics showed a high degree of reproducibility in a repeated experiment (concordance, CCC ≥ 0.897; dynamic range, DR ≥ 0.92). Association with overall survival was evaluated by Cox proportional hazard regression, Kaplan-Meier survival curves, and the log-rank test. Both features were associated with overall survival (convexity: p = 0.008; entropy ratio: p = 0.04) in cohort 1 but not in cohort 2 (convexity: p = 0.7; entropy ratio: p = 0.8). In both cohorts, these features were found to be descriptive and demonstrated the link between imaging characteristics and patient survival in lung adenocarcinoma.
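One plausible reading of the entropy-ratio feature is the ratio of intensity-histogram entropies between the core and boundary sub-regions; a toy NumPy sketch under that assumption (synthetic intensities and shared histogram bins, not the authors' implementation):

```python
import numpy as np

def shannon_entropy(values, bins=16, value_range=(0.0, 200.0)):
    """Shannon entropy (bits) of an intensity histogram over fixed bins."""
    hist, _ = np.histogram(values, bins=bins, range=value_range)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Toy "tumor": a homogeneous core and a more heterogeneous boundary.
rng = np.random.default_rng(0)
core = rng.normal(100.0, 2.0, 500)       # low density variation
boundary = rng.normal(100.0, 25.0, 500)  # high density variation

entropy_ratio = shannon_entropy(core) / shannon_entropy(boundary)
print(entropy_ratio < 1.0)  # homogeneous core -> lower entropy
```

Fixing the histogram range is essential here: binning each region over its own min-max would erase the spread difference the feature is meant to capture.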

Using Deep Learning for Pulmonary Nodule Detection & Diagnosis

  • Gruetzemacher, Richard
  • Gupta, Ashish
2016 Conference Paper, cited 0 times

Smooth extrapolation of unknown anatomy via statistical shape models

  • Grupp, RB
  • Chiang, H
  • Otake, Y
  • Murphy, RJ
  • Gordon, CR
  • Armand, M
  • Taylor, RH
2015 Conference Proceedings, cited 2 times
Website

Generative Models and Feature Extraction on Patient Images and Structure Data in Radiation Therapy

  • Gruselius, Hanna
Mathematics 2018 Thesis, cited 0 times
Website

Exploit fully automatic low-level segmented PET data for training high-level deep learning algorithms for the corresponding CT data

  • Gsaxner, Christina
  • Roth, Peter M
  • Wallner, Jurgen
  • Egger, Jan
PLoS One 2019 Journal Article, cited 0 times
Website
In this study, we present an approach for fully automatic urinary bladder segmentation in CT images using artificial neural networks. Automatic medical image analysis has become an invaluable tool in the different treatment stages of diseases. Medical image segmentation plays an especially vital role, since segmentation is often the initial step in an image analysis pipeline. Since deep neural networks have made a large impact on the field of image processing in the past years, we use two different deep learning architectures to segment the urinary bladder. Both of these architectures are based on pre-trained classification networks that are adapted to perform semantic segmentation. Since deep neural networks require a large amount of training data, specifically images and corresponding ground truth labels, we furthermore propose a method to generate a suitable training data set from Positron Emission Tomography/Computed Tomography image data. This is done by applying thresholding to the Positron Emission Tomography data to obtain a ground truth and by utilizing data augmentation to enlarge the dataset. We discuss the influence of data augmentation on the segmentation results, and compare and evaluate the proposed architectures in terms of qualitative and quantitative segmentation performance. The results presented in this study allow concluding that deep neural networks are a promising approach to segmenting the urinary bladder in CT images.

Automatic lung nodule detection using multi-scale dot nodule-enhancement filter and weighted support vector machines in chest computed tomography

  • Gu, Y.
  • Lu, X.
  • Zhang, B.
  • Zhao, Y.
  • Yu, D.
  • Gao, L.
  • Cui, G.
  • Wu, L.
  • Zhou, T.
PLoS One 2019 Journal Article, cited 0 times
Website
A novel CAD scheme for automated lung nodule detection is proposed to assist radiologists with the detection of lung cancer on CT scans. The proposed scheme is composed of four major steps: (1) lung volume segmentation, (2) nodule candidate extraction and grouping, (3) false-positive reduction for the non-vessel tree group, and (4) classification for the vessel tree group. Lung segmentation is performed first. Then, 3D labeling technology is used to divide nodule candidates into two groups. For the non-vessel tree group, nodule candidates are classified as true nodules at the false-positive reduction stage if the candidates survive the rule-based classifier and are not screened out by the dot filter. For the vessel tree group, nodule candidates are extracted using the dot filter. Next, RSFS feature selection is used to select the most discriminating features for classification. Finally, WSVM with an undersampling approach is adopted to discriminate true nodules from vessel bifurcations in the vessel tree group. The proposed method was evaluated on 154 thin-slice scans with 204 nodules in the LIDC database. The proposed CAD scheme yielded a high sensitivity (87.81%) while maintaining a low false-positive rate (1.057 FPs/scan). The experimental results indicate that our method may perform better than existing methods.

Automatic Colorectal Segmentation with Convolutional Neural Network

  • Guachi, Lorena
  • Guachi, Robinson
  • Bini, Fabiano
  • Marinozzi, Franco
Computer-Aided Design and Applications 2019 Journal Article, cited 3 times
Website
This paper presents a new method for colon tissue segmentation on computed tomography images which takes advantage of deep, hierarchical learning of colon features through Convolutional Neural Networks (CNNs). The proposed method works robustly, reducing misclassified colon-tissue pixels introduced by the presence of noise, artifacts, unclear edges, and other organs or areas characterized by the same intensity value as the colon. Patch analysis is exploited to allow the classification of each center pixel as a colon-tissue or background pixel. Experimental results demonstrate that the proposed method achieves higher sensitivity and specificity than three state-of-the-art methods.

User-centered design and evaluation of interactive segmentation methods for medical images

  • Gueziri, Houssem-Eddine
2017 Thesis, cited 1 times
Website
Segmentation of medical images is a challenging task that aims to identify a particular structure present on the image. Among the existing methods involving the user at different levels, from a fully-manual to a fully-automated task, interactive segmentation methods provide assistance to the user during the task to reduce the variability in the results and allow occasional corrections of segmentation failures. Therefore, they offer a compromise between the segmentation efficiency and the accuracy of the results. It is the user who judges whether the results are satisfactory and how to correct them during the segmentation, making the process subject to human factors. Despite the strong influence of the user on the outcomes of a segmentation task, the impact of such factors has received little attention, with the literature focusing the assessment of segmentation processes on computational performance. Yet, involving the user performance in the analysis is more representative of a realistic scenario. Our goal is to explore the user behaviour in order to improve the efficiency of interactive image segmentation processes. This is achieved through three contributions. First, we developed a method which is based on a new user interaction mechanism to provide hints as to where to concentrate the computations. This significantly improves the computation efficiency without sacrificing the quality of the segmentation. The benefits of using such hints are twofold: (i) because our contribution is based on user interaction, it generalizes to a wide range of segmentation methods, and (ii) it gives comprehensive indications about where to focus the segmentation search. The latter advantage is used to achieve the second contribution. We developed an automated method based on a multi-scale strategy to: (i) reduce the user’s workload and, (ii) improve the computational time up to tenfold, allowing real-time segmentation feedback. 
Third, we investigated the effects of such computational improvements on the user's performance. We report an experiment that manipulates the delay induced by computation time while performing an interactive segmentation task. Results reveal that the influence of this delay can be significantly reduced with an appropriate interaction mechanism design. In conclusion, this project provides an effective image segmentation solution developed in compliance with user performance requirements. We validated our approach through multiple user studies that provided a step toward understanding user behaviour during interactive image segmentation.

User-guided graph reduction for fast image segmentation

  • Gueziri, Houssem-Eddine
  • McGuffin, Michael J
  • Laporte, Catherine
2015 Conference Proceedings, cited 2 times
Website

A generalized graph reduction framework for interactive segmentation of large images

  • Gueziri, Houssem-Eddine
  • McGuffin, Michael J
  • Laporte, Catherine
Computer Vision and Image Understanding 2016 Journal Article, cited 5 times
Website
The speed of graph-based segmentation approaches, such as random walker (RW) and graph cut (GC), depends strongly on image size. For high-resolution images, the time required to compute a segmentation based on user input renders interaction tedious. We propose a novel method, using an approximate contour sketched by the user, to reduce the graph before passing it on to a segmentation algorithm such as RW or GC. This enables a significantly faster feedback loop. The user first draws a rough contour of the object to segment. Then, the pixels of the image are partitioned into "layers" (corresponding to different scales) based on their distance from the contour. The thickness of these layers increases with distance to the contour according to a Fibonacci sequence. An initial segmentation result is rapidly obtained after automatically generating foreground and background labels according to a specifically selected layer; all vertices beyond this layer are eliminated, restricting the segmentation to regions near the drawn contour. Further foreground/background labels can then be added by the user to refine the segmentation. All iterations of the graph-based segmentation benefit from a reduced input graph, while maintaining full resolution near the object boundary. A user study with 16 participants was carried out for RW segmentation of a multi-modal dataset of 22 medical images, using either a standard mouse or a stylus pen to draw the contour. Results reveal that our approach significantly reduces the overall segmentation time compared with the status quo approach (p < 0.01). The study also shows that our approach works well with both input devices. Compared to super-pixel graph reduction, our approach provides full resolution accuracy at similar speed on a high-resolution benchmark image with both RW and GC segmentation methods. However, graph reduction based on super-pixels does not allow interactive correction of clustering errors. Finally, our approach can be combined with super-pixel clustering methods for further graph reduction, resulting in even faster segmentation.
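The Fibonacci-thickness layer partitioning described in this abstract can be sketched as follows; this is a loose reconstruction, not the authors' code, and `fibonacci_layer_labels` together with the toy distance values are hypothetical:

```python
import numpy as np

def fibonacci_layer_labels(dist):
    """Assign each pixel a layer index based on its distance from the
    user-drawn contour; layer thickness grows as a Fibonacci sequence."""
    # Cumulative layer boundaries: thicknesses 1, 1, 2, 3, 5, ... summed
    bounds = []
    a, b = 1, 1
    total = 0
    while total < dist.max():
        total += a
        bounds.append(total)
        a, b = b, a + b
    # Map each distance to the first boundary that contains it
    return np.searchsorted(bounds, dist)

# Toy example: distances 0..12 from a contour
d = np.arange(13)
layers = fibonacci_layer_labels(d)
```

Vertices whose layer index exceeds the selected layer would then be dropped from the graph, which is what keeps the reduced graph at full resolution only near the contour.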

Feature selection and patch-based segmentation in MRI for prostate radiotherapy

  • Guinin, M
  • Ruan, S
  • Dubray, B
  • Massoptier, L
  • Gardin, I
2016 Conference Proceedings, cited 0 times
Website

Prediction of clinical phenotypes in invasive breast carcinomas from the integration of radiomics and genomics data

  • Guo, Wentian
  • Li, Hui
  • Zhu, Yitan
  • Lan, Li
  • Yang, Shengjie
  • Drukker, Karen
  • Morris, Elizabeth
  • Burnside, Elizabeth
  • Whitman, Gary
  • Giger, Maryellen L
  • Ji, Y.
  • TCGA Breast Phenotype Research Group
Journal of Medical Imaging 2015 Journal Article, cited 57 times
Website
Genomic and radiomic imaging profiles of invasive breast carcinomas from The Cancer Genome Atlas and The Cancer Imaging Archive were integrated and a comprehensive analysis was conducted to predict clinical outcomes using the radiogenomic features. Variable selection via LASSO and logistic regression were used to select the most-predictive radiogenomic features for the clinical phenotypes, including pathological stage, lymph node metastasis, and status of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2). Cross-validation with receiver operating characteristic (ROC) analysis was performed and the area under the ROC curve (AUC) was employed as the prediction metric. Higher AUCs were obtained in the prediction of pathological stage, ER, and PR status than for lymph node metastasis and HER2 status. Overall, the prediction performances by genomics alone, radiomics alone, and combined radiogenomics features showed statistically significant correlations with clinical outcomes; however, improvement on the prediction performance by combining genomics and radiomics data was not found to be statistically significant, most likely due to the small sample size of 91 cancer cases with 38 radiomic features and 144 genomic features.
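The variable-selection pipeline summarized above (LASSO-style selection feeding a logistic-regression classifier, scored by cross-validated AUC) can be sketched with scikit-learn on synthetic stand-in data; the feature counts mirror the study, but the data, seed, and regularization strength `C` are illustrative assumptions:

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in: 91 cases, 38 "radiomic" + 144 "genomic" features
X = rng.normal(size=(91, 38 + 144))
y = (X[:, 0] + X[:, 40] + rng.normal(scale=0.5, size=91) > 0).astype(int)

# L1-penalized logistic regression performs LASSO-style variable
# selection; the downstream classifier is evaluated with ROC AUC.
model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    LogisticRegression(max_iter=1000),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```

With only 91 cases, cross-validated AUC is the natural metric here, since a single train/test split would be too noisy to compare feature sets.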

A tool for lung nodules analysis based on segmentation and morphological operation

  • Gupta, Anindya
  • Martens, Olev
  • Le Moullec, Yannick
  • Saar, Tonis
2015 Conference Proceedings, cited 4 times
Website

Brain Tumor Detection using Curvelet Transform and Support Vector Machine

  • Gupta, Bhawna
  • Tiwari, Shamik
International Journal of Computer Science and Mobile Computing 2014 Journal Article, cited 8 times
Website

Appropriate Contrast Enhancement Measures for Brain and Breast Cancer Images

  • Gupta, Suneet
  • Porwal, Rabins
International Journal of Biomedical Imaging 2016 Journal Article, cited 10 times
Website

The REMBRANDT study, a large collection of genomic data from brain cancer patients

  • Gusev, Yuriy
  • Bhuvaneshwar, Krithika
  • Song, Lei
  • Zenklusen, Jean-Claude
  • Fine, Howard
  • Madhavan, Subha
Scientific data 2018 Journal Article, cited 1 times
Website

Cancer Digital Slide Archive: an informatics resource to support integrated in silico analysis of TCGA pathology data

  • Gutman, David A
  • Cobb, Jake
  • Somanna, Dhananjaya
  • Park, Yuna
  • Wang, Fusheng
  • Kurc, Tahsin
  • Saltz, Joel H
  • Brat, Daniel J
  • Cooper, Lee A
Journal of the American Medical Informatics Association 2013 Journal Article, cited 70 times
Website
BACKGROUND: The integration and visualization of multimodal datasets is a common challenge in biomedical informatics. Several recent studies of The Cancer Genome Atlas (TCGA) data have illustrated important relationships between morphology observed in whole-slide images, outcome, and genetic events. The pairing of genomics and rich clinical descriptions with whole-slide imaging provided by TCGA presents a unique opportunity to perform these correlative studies. However, better tools are needed to integrate the vast and disparate data types. OBJECTIVE: To build an integrated web-based platform supporting whole-slide pathology image visualization and data integration. MATERIALS AND METHODS: All images and genomic data were directly obtained from the TCGA and National Cancer Institute (NCI) websites. RESULTS: The Cancer Digital Slide Archive (CDSA) produced is accessible to the public (http://cancer.digitalslidearchive.net) and currently hosts more than 20,000 whole-slide images from 22 cancer types. DISCUSSION: The capabilities of CDSA are demonstrated using TCGA datasets to integrate pathology imaging with associated clinical, genomic and MRI measurements in glioblastomas and can be extended to other tumor types. CDSA also allows URL-based sharing of whole-slide images, and has preliminary support for directly sharing regions of interest and other annotations. Images can also be selected on the basis of other metadata, such as mutational profile, patient age, and other relevant characteristics. CONCLUSIONS: With the increasing availability of whole-slide scanners, analysis of digitized pathology images will become increasingly important in linking morphologic observations with genomic and clinical endpoints.

MR Imaging Predictors of Molecular Profile and Survival: Multi-institutional Study of the TCGA Glioblastoma Data Set

  • Gutman, David A
  • Cooper, Lee A D
  • Hwang, Scott N
  • Holder, Chad A
  • Gao, Jingjing
  • Aurora, Tarun D
  • Dunn, William D Jr
  • Scarpace, Lisa
  • Mikkelsen, Tom
  • Jain, Rajan
  • Wintermark, Max
  • Jilwan, Manal
  • Raghavan, Prashant
  • Huang, Erich
  • Clifford, Robert J
  • Mongkolwat, Pattanasak
  • Kleper, Vladimir
  • Freymann, John
  • Kirby, Justin
  • Zinn, Pascal O
  • Moreno, Carlos S
  • Jaffe, Carl
  • Colen, Rivka
  • Rubin, Daniel L
  • Saltz, Joel
  • Flanders, Adam
  • Brat, Daniel J
Radiology 2013 Journal Article, cited 217 times
Website
PURPOSE: To conduct a comprehensive analysis of radiologist-made assessments of glioblastoma (GBM) tumor size and composition by using a community-developed controlled terminology of magnetic resonance (MR) imaging visual features as they relate to genetic alterations, gene expression class, and patient survival. MATERIALS AND METHODS: Because all study patients had been previously deidentified by the Cancer Genome Atlas (TCGA), a publicly available data set that contains no linkage to patient identifiers and that is HIPAA compliant, no institutional review board approval was required. Presurgical MR images of 75 patients with GBM with genetic data in the TCGA portal were rated by three neuroradiologists for size, location, and tumor morphology by using a standardized feature set. Interrater agreements were analyzed by using the Krippendorff alpha statistic and intraclass correlation coefficient. Associations between survival, tumor size, and morphology were determined by using multivariate Cox regression models; associations between imaging features and genomics were studied by using the Fisher exact test. RESULTS: Interrater analysis showed significant agreement in terms of contrast material enhancement, nonenhancement, necrosis, edema, and size variables. Contrast-enhanced tumor volume and longest axis length of tumor were strongly associated with poor survival (respectively, hazard ratio: 8.84, P = .0253, and hazard ratio: 1.02, P = .00973), even after adjusting for Karnofsky performance score (P = .0208). Proneural class GBM had significantly lower levels of contrast enhancement (P = .02) than other subtypes, while mesenchymal GBM showed lower levels of nonenhanced tumor (P < .01). 
CONCLUSION: This analysis demonstrates a method for consistent image feature annotation capable of reproducibly characterizing brain tumors; this study shows that radiologists' estimations of macroscopic imaging features can be combined with genetic alterations and gene expression subtypes to provide deeper insight to the underlying biologic properties of GBM subsets.

Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer

  • Gutman, David A
  • Dunn Jr, William D
  • Cobb, Jake
  • Stoner, Richard M
  • Kalpathy-Cramer, Jayashree
  • Erickson, Bradley
Frontiers in Neuroinformatics 2014 Journal Article, cited 12 times
Website
Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a light framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework to wrap around the REST application programming interface (API) and query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance.

Somatic mutations associated with MRI-derived volumetric features in glioblastoma

  • Gutman, David A
  • Dunn Jr, William D
  • Grossmann, Patrick
  • Cooper, Lee AD
  • Holder, Chad A
  • Ligon, Keith L
  • Alexander, Brian M
  • Aerts, Hugo JWL
Neuroradiology 2015 Journal Article, cited 45 times
Website
INTRODUCTION: MR imaging can noninvasively visualize tumor phenotype characteristics at the macroscopic level. Here, we investigated whether somatic mutations are associated with and can be predicted by MRI-derived tumor imaging features of glioblastoma (GBM). METHODS: Seventy-six GBM patients were identified from The Cancer Imaging Archive for whom preoperative T1-contrast (T1C) and T2-FLAIR MR images were available. For each tumor, a set of volumetric imaging features and their ratios were measured, including necrosis, contrast enhancing, and edema volumes. Imaging genomics analysis assessed the association of these features with mutation status of nine genes frequently altered in adult GBM. Finally, area under the curve (AUC) analysis was conducted to evaluate the predictive performance of imaging features for mutational status. RESULTS: Our results demonstrate that MR imaging features are strongly associated with mutation status. For example, TP53-mutated tumors had significantly smaller contrast enhancing and necrosis volumes (p = 0.012 and 0.017, respectively) and RB1-mutated tumors had significantly smaller edema volumes (p = 0.015) compared to wild-type tumors. MRI volumetric features were also found to significantly predict mutational status. For example, AUC analysis results indicated that TP53, RB1, NF1, EGFR, and PDGFRA mutations could each be significantly predicted by at least one imaging feature. CONCLUSION: MRI-derived volumetric features are significantly associated with and predictive of several cancer-relevant, drug-targetable DNA mutations in glioblastoma. These results may shed insight into unique growth characteristics of individual tumors at the macroscopic level resulting from molecular events as well as increase the use of noninvasive imaging in personalized medicine.

OPTIMISING DELINEATION ACCURACY OF TUMOURS IN PET FOR RADIOTHERAPY PLANNING USING BLIND DECONVOLUTION

  • Guvenis, A
  • Koc, A
Radiation Protection Dosimetry 2015 Journal Article, cited 3 times
Website
Positron emission tomography (PET) imaging has been proven to be useful in radiotherapy planning for the determination of the metabolically active regions of tumours. Delineation of tumours, however, is a difficult task in part due to high noise levels and the partial volume effects originating mainly from the low camera resolution. The goal of this work is to study the effect of blind deconvolution on tumour volume estimation accuracy for different computer-aided contouring methods. The blind deconvolution estimates the point spread function (PSF) of the imaging system in an iterative manner in a way that the likelihood of the given image being the convolution output is maximised. In this way, the PSF of the imaging system does not need to be known. Data were obtained from a NEMA NU-2 IQ-based phantom with a GE DSTE-16 PET/CT scanner. The artificial tumour diameters were 13, 17, 22, 28 and 37 mm with a target/background ratio of 4:1. The tumours were delineated before and after blind deconvolution. Student's two-tailed paired t-test showed a significant decrease in volume estimation error (p < 0.001) when blind deconvolution was used in conjunction with computer-aided delineation methods. A manual delineation confirmation demonstrated an improvement from 26 to 16 % for the artificial tumour of size 37 mm while an improvement from 57 to 15 % was noted for the small tumour of 13 mm. Therefore, it can be concluded that blind deconvolution of reconstructed PET images may be used to increase tumour delineation accuracy.

Multi-faceted computational assessment of risk and progression in oligodendroglioma implicates NOTCH and PI3K pathways

  • Halani, Sameer H
  • Yousefi, Safoora
  • Vega, Jose Velazquez
  • Rossi, Michael R
  • Zhao, Zheng
  • Amrollahi, Fatemeh
  • Holder, Chad A
  • Baxter-Stoltzfus, Amelia
  • Eschbacher, Jennifer
  • Griffith, Brent
npj Precision Oncology 2018 Journal Article, cited 0 times
Website

Vector quantization-based automatic detection of pulmonary nodules in thoracic CT images

  • Han, Hao
  • Li, Lihong
  • Han, Fangfang
  • Zhang, Hao
  • Moore, William
  • Liang, Zhengrong
2013 Conference Proceedings, cited 8 times
Website

A novel computer-aided detection system for pulmonary nodule identification in CT images

  • Han, Hao
  • Li, Lihong
  • Wang, Huafeng
  • Zhang, Hao
  • Moore, William
  • Liang, Zhengrong
2014 Conference Proceedings, cited 5 times
Website

MRI to MGMT: predicting methylation status in glioblastoma patients using convolutional recurrent neural networks

  • Han, Lichy
  • Kamdar, Maulik R.
2018 Conference Paper, cited 5 times
Website
Glioblastoma Multiforme (GBM), a malignant brain tumor, is among the most lethal of all cancers. Temozolomide is the primary chemotherapy treatment for patients diagnosed with GBM. The methylation status of the promoter or the enhancer regions of the O6-methylguanine methyltransferase (MGMT) gene may impact the efficacy and sensitivity of temozolomide, and hence may affect overall patient survival. Microscopic genetic changes may manifest as macroscopic morphological changes in the brain tumors that can be detected using magnetic resonance imaging (MRI), which can serve as noninvasive biomarkers for determining methylation of MGMT regulatory regions. In this research, we use a compendium of brain MRI scans of GBM patients collected from The Cancer Imaging Archive (TCIA) combined with methylation data from The Cancer Genome Atlas (TCGA) to predict the methylation state of the MGMT regulatory regions in these patients. Our approach relies on a bi-directional convolutional recurrent neural network architecture (CRNN) that leverages the spatial aspects of these 3-dimensional MRI scans. Our CRNN obtains an accuracy of 67% on the validation data and 62% on the test data, with precision and recall both at 67%, suggesting the existence of MRI features that may complement existing markers for GBM patient stratification and prognosis. We have additionally presented our model via a novel neural network visualization platform, which we have developed to improve interpretability of deep learning MRI-based classification models.

Predictive capabilities of statistical learning methods for lung nodule malignancy classification using diagnostic image features: an investigation using the Lung Image Database Consortium dataset

  • Hancock, Matthew C
  • Magnan, Jerry F
2017 Conference Proceedings, cited 0 times
Website

Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs in CT Scans Using Q-Deformed Entropy and Deep Learning Features

  • Hasan, Ali M.
  • Al-Jawad, Mohammed M.
  • Jalab, Hamid A.
  • Shaiba, Hadil
  • Ibrahim, Rabha W.
  • Al-Shamasneh, Ala’a R.
Entropy 2020 Journal Article, cited 0 times
Website
Many health systems over the world have collapsed due to limited capacity and a dramatic increase of suspected COVID-19 cases. What has emerged is the need for finding an efficient, quick and accurate method to mitigate the overloading of radiologists’ efforts to diagnose the suspected cases. This study presents the combination of deep learning of extracted features with the Q-deformed entropy handcrafted features for discriminating between COVID-19 coronavirus, pneumonia and healthy computed tomography (CT) lung scans. In this study, pre-processing is used to reduce the effect of intensity variations between CT slices. Then histogram thresholding is used to isolate the background of the CT lung scan. Each CT lung scan undergoes a feature extraction which involves deep learning and a Q-deformed entropy algorithm. The obtained features are classified using a long short-term memory (LSTM) neural network classifier. Subsequently, combining all extracted features significantly improves the performance of the LSTM network to precisely discriminate between COVID-19, pneumonia and healthy cases. The maximum achieved accuracy for classifying the collected dataset comprising 321 patients is 99.68%.

Descriptions and evaluations of methods for determining surface curvature in volumetric data

  • Hauenstein, Jacob D.
  • Newman, Timothy S.
Computers & Graphics 2020 Journal Article, cited 0 times
Website
Highlights:
  • Methods using convolution or fitting are often the most accurate.
  • The existing TE method is fast and accurate on noise-free data.
  • The OP method is faster than existing, similarly accurate methods on real data.
  • Even modest errors in curvature notably impact curvature-based renderings.
  • On real data, GSTH, GSTI, and OP produce the best curvature-based renderings.
Abstract: Three methods developed for determining surface curvature in volumetric data are described, including one convolution-based method, one fitting-based method, and one method that uses normal estimates to directly determine curvature. Additionally, a study of the accuracy and computational performance of these methods and prior methods is presented. The study considers synthetic data, noise-added synthetic data, and real data. Sample volume renderings using curvature-based transfer functions, where curvatures were determined with the methods, are also exhibited.

A biomarker basing on radiomics for the prediction of overall survival in non–small cell lung cancer patients

  • He, Bo
  • Zhao, Wei
  • Pi, Jiang-Yuan
  • Han, Dan
  • Jiang, Yuan-Ming
  • Zhang, Zhen-Guang
Respiratory Research 2018 Journal Article, cited 0 times
Website

Feasibility study of a multi-criteria decision-making based hierarchical model for multi-modality feature and multi-classifier fusion: Applications in medical prognosis prediction

  • He, Qiang
  • Li, Xin
  • Kim, DW Nathan
  • Jia, Xun
  • Gu, Xuejun
  • Zhen, Xin
  • Zhou, Linghong
Information Fusion 2020 Journal Article, cited 0 times
Website

Fast Super-Resolution in MRI Images Using Phase Stretch Transform, Anchored Point Regression and Zero-Data Learning

  • He, Sifeng
  • Jalali, Bahram
2019 Conference Proceedings, cited 0 times
Website
Medical imaging is fundamentally challenging due to absorption and scattering in tissues and by the need to minimize illumination of the patient with harmful radiation. Common problems are low spatial resolution, limited dynamic range and low contrast. These predicaments have fueled interest in enhancing medical images using digital post processing. In this paper, we propose and demonstrate an algorithm for real-time inference that is suitable for edge computing. Our locally adaptive learned filtering technique named Phase Stretch Anchored Regression (PhSAR) combines the Phase Stretch Transform for local features extraction in visually impaired images with clustered anchored points to represent image feature space and fast regression based learning. In contrast with the recent widely-used deep neural network for image super-resolution, our algorithm achieves significantly faster inference and less hallucination on image details and is interpretable. Tests on brain MRI images using zero-data learning reveal its robustness with explicit PSNR improvement and lower latency compared to relevant benchmarks.

A Comparison of the Efficiency of Using a Deep CNN Approach with Other Common Regression Methods for the Prediction of EGFR Expression in Glioblastoma Patients

  • Hedyehzadeh, Mohammadreza
  • Maghooli, Keivan
  • MomenGharibvand, Mohammad
  • Pistorius, Stephen
J Digit Imaging 2019 Journal Article, cited 0 times
Website
To estimate epidermal growth factor receptor (EGFR) expression level in glioblastoma (GBM) patients using radiogenomic analysis of magnetic resonance images (MRI). A comparative study using a deep convolutional neural network (CNN)-based regression, deep neural network, least absolute shrinkage and selection operator (LASSO) regression, elastic net regression, and linear regression with no regularization was carried out to estimate EGFR expression of 166 GBM patients. Except for the deep CNN case, overfitting was prevented by using feature selection, and loss values for each method were compared. The loss values in the training phase for deep CNN, deep neural network, Elastic net, LASSO, and the linear regression with no regularization were 2.90, 8.69, 7.13, 14.63, and 21.76, respectively, while in the test phase, the loss values were 5.94, 10.28, 13.61, 17.32, and 24.19 respectively. These results illustrate that the efficiency of the deep CNN approach is better than that of the other methods, including LASSO regression, a regression method known for its advantage in high-dimensional cases. A comparison between deep CNN, deep neural network, and three other common regression methods was carried out, and the efficiency of the CNN deep learning approach, in comparison with other regression models, was demonstrated.

Multiparametric MRI of prostate cancer: An update on state‐of‐the‐art techniques and their performance in detecting and localizing prostate cancer

  • Hegde, John V
  • Mulkern, Robert V
  • Panych, Lawrence P
  • Fennessy, Fiona M
  • Fedorov, Andriy
  • Maier, Stephan E
  • Tempany, Clare
Journal of Magnetic Resonance Imaging 2013 Journal Article, cited 164 times
Website

Deep Feature Learning For Soft Tissue Sarcoma Classification In MR Images Via Transfer Learning

  • Hermessi, Haithem
  • Mourali, Olfa
  • Zagrouba, Ezzeddine
Expert Systems with Applications 2018 Journal Article, cited 0 times
Website

Transfer learning with multiple convolutional neural networks for soft tissue sarcoma MRI classification

  • Hermessi, Haithem
  • Mourali, Olfa
  • Zagrouba, Ezzeddine
2019 Conference Proceedings, cited 1 times
Website

Design of a Patient-Specific Radiotherapy Treatment Target

  • Heyns, Michael
  • Breseman, Kelsey
  • Lee, Christopher
  • Bloch, B Nicholas
  • Jaffe, Carl
  • Xiang, Hong
2013 Conference Proceedings, cited 3 times
Website

Automated Muscle Segmentation from Clinical CT using Bayesian U-Net for Personalized Musculoskeletal Modeling

  • Hiasa, Yuta
  • Otake, Yoshito
  • Takao, Masaki
  • Ogawa, Takeshi
  • Sugano, Nobuhiko
  • Sato, Yoshinobu
IEEE Trans Med Imaging 2019 Journal Article, cited 2 times
Website
We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891+/-0.016 (mean+/-std) and an average symmetric surface distance (ASD) of 0.994+/-0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method, which resulted in 0.845+/-0.031 DC and 1.556+/-0.444 mm ASD. We evaluated validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in active learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
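The Monte Carlo dropout uncertainty idea described in this abstract can be sketched with NumPy on a hypothetical linear scoring layer (not the paper's Bayesian U-Net): keep dropout active across repeated forward passes and read the variance across passes as an uncertainty metric:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_scores(features, w, p=0.5, n_samples=100):
    """Monte Carlo dropout sketch: repeat a stochastic forward pass with
    random unit dropout and return the mean prediction together with the
    across-pass variance as a per-pixel uncertainty metric."""
    outs = []
    for _ in range(n_samples):
        mask = rng.random(features.shape) >= p       # drop each unit w.p. p
        outs.append((features * mask / (1.0 - p)) @ w)  # inverted dropout
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.var(axis=0)

feats = rng.normal(size=(10, 4))   # 10 "pixels", 4 features each (toy data)
w = np.ones(4)
mean_pred, uncertainty = mc_dropout_scores(feats, w)
```

In the paper's active-learning application, pixels with high variance under this kind of sampling are the ones queried for manual annotation.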

Approaches to uncovering cancer diagnostic and prognostic molecular signatures

  • Hong, Shengjun
  • Huang, Yi
  • Cao, Yaqiang
  • Chen, Xingwei
  • Han, Jing-Dong J
Molecular & Cellular Oncology 2014 Journal Article, cited 2 times
Website
The recent rapid development of high-throughput technology enables the study of molecular signatures for cancer diagnosis and prognosis at multiple levels, from genomic and epigenomic to transcriptomic. These unbiased large-scale scans provide important insights into the detection of cancer-related signatures. In addition to single-layer signatures, such as gene expression and somatic mutations, integrating data from multiple heterogeneous platforms using a systematic approach has been proven to be particularly effective for the identification of classification markers. This approach not only helps to uncover essential driver genes and pathways in the cancer network that are responsible for the mechanisms of cancer development, but will also lead us closer to the ultimate goal of personalized cancer therapy.

Renal Cancer Cell Nuclei Detection from Cytological Images Using Convolutional Neural Network for Estimating Proliferation Rate

  • Hossain, Shamim
  • Jalab, Hamid A.
  • Zulfiqar, Fariha
  • Pervin, Mahfuza
Journal of Telecommunication, Electronic and Computer Engineering 2019 Journal Article, cited 0 times
Website
Cytological images play an essential role in monitoring the progress of cancer cell mutation, and the proliferation rate of cancer cells is a prerequisite for cancer treatment. Accurately and quickly identifying the nuclei of abnormal cells and determining the correct proliferation rate is difficult, since it requires in-depth manual examination, observation, and cell counting, which are tedious and time-consuming. The proposed method starts with segmentation to separate the background and object regions using K-means clustering. Small candidate regions containing cells are then detected automatically with a support vector machine. Sets of cell regions, whether overlapping or non-overlapping, are marked with selective search according to the local distance between the nucleus and the cell boundary. The selected segmented cell features are then used to learn normal and abnormal cell nuclei separately with a region-based convolutional neural network. Finally, the proliferation rate in the invasive cancer area is calculated from the number of abnormal cells. A set of renal cancer cell cytological images was obtained from the National Cancer Institute, USA, and this data set is available for research. Quantitative evaluation compares the method's accuracy with that of other state-of-the-art cancer cell nuclei detection methods; qualitative assessment is based on human observation. The proposed method detects renal cancer cell nuclei accurately and provides an automatic proliferation rate.

A Pipeline for Lung Tumor Detection and Segmentation from CT Scans Using Dilated Convolutional Neural Networks

  • Hossain, S
  • Najeeb, S
  • Shahriyar, A
  • Abdullah, ZR
  • Haque, MA
2019 Conference Proceedings, cited 0 times
Website
Lung cancer is the most prevalent cancer worldwide with about 230,000 new cases every year. Most cases go undiagnosed until it’s too late, especially in developing countries and remote areas. Early detection is key to beating cancer. Towards this end, the work presented here proposes an automated pipeline for lung tumor detection and segmentation from 3D lung CT scans from the NSCLC-Radiomics Dataset. It also presents a new dilated hybrid-3D convolutional neural network architecture for tumor segmentation. First, a binary classifier chooses CT scan slices that may contain parts of a tumor. To segment the tumors, the selected slices are passed to the segmentation model which extracts feature maps from each 2D slice using dilated convolutions and then fuses the stacked maps through 3D convolutions - incorporating the 3D structural information present in the CT scan volume into the output. Lastly, the segmentation masks are passed through a post-processing block which cleans them up through morphological operations. The proposed segmentation model outperformed other contemporary models like LungNet and U-Net. The average and median dice coefficient on the test set for the proposed model were 65.7% and 70.39% respectively. The next best model, LungNet had dice scores of 62.67% and 66.78%.

Publishing descriptions of non-public clinical datasets: proposed guidance for researchers, repositories, editors and funding organisations

  • Hrynaszkiewicz, Iain
  • Khodiyar, Varsha
  • Hufton, Andrew L
  • Sansone, Susanna-Assunta
Research Integrity and Peer Review 2016 Journal Article, cited 8 times
Website
Sharing of experimental clinical research data usually happens between individuals or research groups rather than via public repositories, in part due to the need to protect research participant privacy. This approach to data sharing makes it difficult to connect journal articles with their underlying datasets and is often insufficient for ensuring access to data in the long term. Voluntary data sharing services such as the Yale Open Data Access (YODA) and Clinical Study Data Request (CSDR) projects have increased accessibility to clinical datasets for secondary uses while protecting patient privacy and the legitimacy of secondary analyses but these resources are generally disconnected from journal articles-where researchers typically search for reliable information to inform future research. New scholarly journal and article types dedicated to increasing accessibility of research data have emerged in recent years and, in general, journals are developing stronger links with data repositories. There is a need for increased collaboration between journals, data repositories, researchers, funders, and voluntary data sharing services to increase the visibility and reliability of clinical research. Using the journal Scientific Data as a case study, we propose and show examples of changes to the format and peer-review process for journal articles to more robustly link them to data that are only available on request. We also propose additional features for data repositories to better accommodate non-public clinical datasets, including Data Use Agreements (DUAs).

Performance of sparse-view CT reconstruction with multi-directional gradient operators

  • Hsieh, C. J.
  • Jin, S. C.
  • Chen, J. C.
  • Kuo, C. W.
  • Wang, R. T.
  • Chu, W. C.
PLoS One 2019 Journal Article, cited 0 times
Website
To further reduce the noise and artifacts in the reconstructed image of sparse-view CT, we have modified the traditional total variation (TV) methods, which only calculate the gradient variations in x and y directions, and have proposed 8- and 26-directional (the multi-directional) gradient operators for TV calculation to improve the quality of reconstructed images. Different from traditional TV methods, the proposed 8- and 26-directional gradient operators additionally consider the diagonal directions in TV calculation. The proposed method preserves more information from original tomographic data in the step of gradient transform to obtain better reconstruction image qualities. Our algorithms were tested using two-dimensional Shepp-Logan phantom and three-dimensional clinical CT images. Results were evaluated using the root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and universal quality index (UQI). All the experiment results show that the sparse-view CT images reconstructed using the proposed 8- and 26-directional gradient operators are superior to those reconstructed by traditional TV methods. Qualitative and quantitative analyses indicate that the more number of directions that the gradient operator has, the better images can be reconstructed. The 8- and 26-directional gradient operators we proposed have better capability to reduce noise and artifacts than traditional TV methods, and they are applicable to be applied to and combined with existing CT reconstruction algorithms derived from CS theory to produce better image quality in sparse-view reconstruction.
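The idea of extending TV beyond the x and y gradients can be sketched as follows. The exact weighting of the paper's 8- and 26-directional operators is not reproduced here; this is only an illustrative anisotropic TV in which forward differences along the horizontal, vertical, and both diagonal directions touch all 8 neighbours of each pixel:

```python
import numpy as np

def tv_multidir(img: np.ndarray) -> float:
    """Anisotropic total variation that also includes diagonal neighbours
    (an illustrative stand-in for an 8-directional gradient operator)."""
    tv = np.abs(np.diff(img, axis=1)).sum()           # x direction
    tv += np.abs(np.diff(img, axis=0)).sum()          # y direction
    tv += np.abs(img[1:, 1:] - img[:-1, :-1]).sum()   # main diagonal
    tv += np.abs(img[1:, :-1] - img[:-1, 1:]).sum()   # anti-diagonal
    return float(tv)
```

Such a term can be plugged into a compressed-sensing reconstruction objective in place of the classical two-direction TV; the 26-directional case is the analogous construction over a 3x3x3 voxel neighbourhood.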

Quantitative glioma grading using transformed gray-scale invariant textures of MRI

  • Hsieh, Kevin Li-Chun
  • Chen, Cheng-Yu
  • Lo, Chung-Ming
Computers in biology and medicine 2017 Journal Article, cited 8 times
Website
Background: A computer-aided diagnosis (CAD) system based on intensity-invariant magnetic resonance (MR) imaging features was proposed to grade gliomas for general application to various scanning systems and settings. Method: In total, 34 glioblastomas and 73 lower-grade gliomas comprised the image database to evaluate the proposed CAD system. For each case, the local texture on MR images was transformed into a local binary pattern (LBP) which was intensity-invariant. From the LBP, quantitative image features, including the histogram moment and textures, were extracted and combined in a logistic regression classifier to establish a malignancy prediction model. The performance was compared to conventional texture features to demonstrate the improvement. Results: The performance of the CAD system based on LBP features achieved an accuracy of 93% (100/107), a sensitivity of 97% (33/34), a negative predictive value of 99% (67/68), and an area under the receiver operating characteristic curve (Az) of 0.94, which were significantly better than the conventional texture features: an accuracy of 84% (90/107), a sensitivity of 76% (26/34), a negative predictive value of 89% (64/72), and an Az of 0.89 with respective p values of 0.0303, 0.0122, 0.0201, and 0.0334. Conclusions: More-robust texture features were extracted from MR images and combined into a significantly better CAD system for distinguishing glioblastomas from lower-grade gliomas. The proposed CAD system would be more practical in clinical use with various imaging systems and settings.
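The LBP transform that gives these features their intensity invariance can be sketched in a few lines. This basic 3x3 variant (not necessarily the exact variant used in the paper) assigns each interior pixel an 8-bit code by thresholding its neighbours against the centre; any monotonic intensity shift leaves the codes unchanged:

```python
import numpy as np

def lbp_8(image: np.ndarray) -> np.ndarray:
    """Basic 3x3 local binary pattern: pack the 8 neighbour-vs-centre
    comparisons of each interior pixel into a code in 0..255."""
    c = image[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy: image.shape[0] - 1 + dy,
                          1 + dx: image.shape[1] - 1 + dx]
        codes |= ((neighbour >= c).astype(np.uint8) << bit)
    return codes
```

Histogram moments and texture statistics computed over these codes, rather than over raw intensities, are what make the classifier robust to scanner and protocol differences.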

Computer-aided grading of gliomas based on local and global MRI features

  • Hsieh, Kevin Li-Chun
  • Lo, Chung-Ming
  • Hsiao, Chih-Jou
Computer methods and programs in biomedicine 2016 Journal Article, cited 13 times
Website
BACKGROUND AND OBJECTIVES: A computer-aided diagnosis (CAD) system based on quantitative magnetic resonance imaging (MRI) features was developed to evaluate the malignancy of diffuse gliomas, which are central nervous system tumors. METHODS: The acquired image database for the CAD performance evaluation was composed of 34 glioblastomas and 73 diffuse lower-grade gliomas. In each case, tissues enclosed in a delineated tumor area were analyzed according to their gray-scale intensities on MRI scans. Four histogram moment features describing the global gray-scale distributions of gliomas tissues and 14 textural features were used to interpret local correlations between adjacent pixel values. With a logistic regression model, the individual feature set and a combination of both feature sets were used to establish the malignancy prediction model. RESULTS: Performances of the CAD system using global, local, and the combination of both image feature sets achieved accuracies of 76%, 83%, and 88%, respectively. Compared to global features, the combined features had significantly better accuracy (p = 0.0213). With respect to the pathology results, the CAD classification obtained substantial agreement kappa = 0.698, p < 0.001. CONCLUSIONS: Numerous proposed image features were significant in distinguishing glioblastomas from lower-grade gliomas. Combining them further into a malignancy prediction model would be promising in providing diagnostic suggestions for clinical use.

Effect of a computer-aided diagnosis system on radiologists' performance in grading gliomas with MRI

  • Hsieh, Kevin Li-Chun
  • Tsai, Ruei-Je
  • Teng, Yu-Chuan
  • Lo, Chung-Ming
PLoS One 2017 Journal Article, cited 0 times
Website
The effects of a computer-aided diagnosis (CAD) system based on quantitative intensity features with magnetic resonance (MR) imaging (MRI) were evaluated by examining radiologists' performance in grading gliomas. The acquired MRI database included 71 lower-grade gliomas and 34 glioblastomas. Quantitative image features were extracted from the tumor area and combined in a CAD system to generate a prediction model. The effect of the CAD system was evaluated in a two-stage procedure. First, a radiologist performed a conventional reading. A sequential second reading was determined with a malignancy estimation by the CAD system. Each MR image was regularly read by one radiologist out of a group of three radiologists. The CAD system achieved an accuracy of 87% (91/105), a sensitivity of 79% (27/34), a specificity of 90% (64/71), and an area under the receiver operating characteristic curve (Az) of 0.89. In the evaluation, the radiologists' Az values significantly improved from 0.81, 0.87, and 0.84 to 0.90, 0.90, and 0.88 with p = 0.0011, 0.0076, and 0.0167, respectively. Based on the MR image features, the proposed CAD system not only performed well in distinguishing glioblastomas from lower-grade gliomas but also provided suggestions about glioma grading to reinforce radiologists' confidence rating.

Brain Tumor Segmentation Using Multi-Cascaded Convolutional Neural Networks and Conditional Random Field

  • Hu, Kai
  • Gan, Qinghai
  • Zhang, Yuan
  • Deng, Shuhua
  • Xiao, Fen
  • Huang, Wei
  • Cao, Chunhong
  • Gao, Xieping
IEEE Access 2019 Journal Article, cited 2 times
Website
Accurate segmentation of brain tumor is an indispensable component for cancer diagnosis and treatment. In this paper, we propose a novel brain tumor segmentation method based on multicascaded convolutional neural network (MCCNN) and fully connected conditional random fields (CRFs). The segmentation process mainly includes the following two steps. First, we design a multi-cascaded network architecture by combining the intermediate results of several connected components to take the local dependencies of labels into account and make use of multi-scale features for the coarse segmentation. Second, we apply CRFs to consider the spatial contextual information and eliminate some spurious outputs for the fine segmentation. In addition, we use image patches obtained from axial, coronal, and sagittal views to respectively train three segmentation models, and then combine them to obtain the final segmentation result. The validity of the proposed method is evaluated on three publicly available databases. The experimental results show that our method achieves competitive performance compared with the state-of-the-art approaches.

A neural network approach to lung nodule segmentation

  • Hu, Yaoxiu
  • Menon, Prahlad G
2016 Conference Proceedings, cited 1 times
Website

Development and validation of radiomic signatures of head and neck squamous cell carcinoma molecular features and subtypes

  • Huang, Chao
  • Cintra, Murilo
  • Brennan, Kevin
  • Zhou, Mu
  • Colevas, A Dimitrios
  • Fischbein, Nancy
  • Zhu, Shankuan
  • Gevaert, Olivier
EBioMedicine 2019 Journal Article, cited 1 times
Website
BACKGROUND: Radiomics-based non-invasive biomarkers are promising to facilitate the translation of therapeutically related molecular subtypes for treatment allocation of patients with head and neck squamous cell carcinoma (HNSCC). METHODS: We included 113 HNSCC patients from The Cancer Genome Atlas (TCGA-HNSCC) project. Molecular phenotypes analyzed were RNA-defined HPV status, five DNA methylation subtypes, four gene expression subtypes and five somatic gene mutations. A total of 540 quantitative image features were extracted from pre-treatment CT scans. Features were selected and used in a regularized logistic regression model to build binary classifiers for each molecular subtype. Models were evaluated using the average area under the Receiver Operator Characteristic curve (AUC) of a stratified 10-fold cross-validation procedure repeated 10 times. Next, an HPV model was trained with the TCGA-HNSCC, and tested on a Stanford cohort (N=53). FINDINGS: Our results show that quantitative image features are capable of distinguishing several molecular phenotypes. We obtained significant predictive performance for RNA-defined HPV+ (AUC=0.73), DNA methylation subtypes MethylMix HPV+ (AUC=0.79), non-CIMP-atypical (AUC=0.77) and Stem-like-Smoking (AUC=0.71), and mutation of NSD1 (AUC=0.73). We externally validated the HPV prediction model (AUC=0.76) on the Stanford cohort. When compared to clinical models, radiomic models were superior to subtypes such as NOTCH1 mutation and DNA methylation subtype non-CIMP-atypical, while they were inferior for DNA methylation subtype CIMP-atypical and NSD1 mutation. INTERPRETATION: Our study demonstrates that radiomics can potentially serve as a non-invasive tool to identify treatment-relevant subtypes of HNSCC, opening up the possibility for patient stratification, treatment allocation and inclusion in clinical trials. FUND: Dr. Gevaert reports grants from National Institute of Dental & Craniofacial Research (NIDCR) U01 DE025188, grants from National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health (NIBIB), R01 EB020527, grants from National Cancer Institute (NCI), U01 CA217851, during the conduct of the study; Dr. Huang and Dr. Zhu report grants from China Scholarship Council (Grant NO:201606320087), grants from China Medical Board Collaborating Program (Grant NO:15-216), the Cyrus Tang Foundation, and the Zhejiang University Education Foundation during the conduct of the study; Dr. Cintra reports grants from Sao Paulo State Foundation for Teaching and Research (FAPESP), during the conduct of the study.
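The AUC figures reported above can be computed directly as a Mann-Whitney statistic, without tracing an explicit ROC curve; a small library-free sketch for illustration:

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC as the Mann-Whitney statistic: the probability that a randomly
    chosen positive case is scored above a randomly chosen negative case
    (ties count as 0.5)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

In a cross-validation setup like the one described, this statistic would be computed per fold and averaged over the repeated stratified splits.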

Image Super-Resolution Algorithm Based on an Improved Sparse Autoencoder

  • Huang, Detian
  • Huang, Weiqin
  • Yuan, Zhenguo
  • Lin, Yanming
  • Zhang, Jian
  • Zheng, Lixin
Information 2018 Journal Article, cited 0 times
Website
Due to the limitations of the resolution of the imaging system and the influence of scene changes and other factors, sometimes only low-resolution images can be acquired, which cannot satisfy the practical application’s requirements. To improve the quality of low-resolution images, a novel super-resolution algorithm based on an improved sparse autoencoder is proposed. Firstly, in the training set preprocessing stage, the high- and low-resolution image training sets are constructed, respectively, by using high-frequency information of the training samples as the characterization, and then the zero-phase component analysis whitening technique is utilized to decorrelate the formed joint training set to reduce its redundancy. Secondly, a constructed sparse regularization term is added to the cost function of the traditional sparse autoencoder to further strengthen the sparseness constraint on the hidden layer. Finally, in the dictionary learning stage, the improved sparse autoencoder is adopted to achieve unsupervised dictionary learning to improve the accuracy and stability of the dictionary. Experimental results validate that the proposed algorithm outperforms the existing algorithms both in terms of the subjective visual perception and the objective evaluation indices, including the peak signal-to-noise ratio and the structural similarity measure.
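The constructed sparse regularization term is not specified in detail in this abstract. A common choice in sparse autoencoders, shown here only as an illustrative sketch, is the KL-divergence penalty that pulls each hidden unit's mean activation toward a small target rho:

```python
import numpy as np

def kl_sparsity_penalty(hidden_activations: np.ndarray, rho: float = 0.05) -> float:
    """Sum over hidden units j of KL(rho || rho_hat_j), where rho_hat_j is
    the mean activation of unit j over the batch (activations in (0, 1)).
    The penalty is zero exactly when every unit's mean activation equals rho."""
    rho_hat = np.clip(hidden_activations.mean(axis=0), 1e-8, 1 - 1e-8)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))
```

Adding such a term (weighted by a hyperparameter) to the reconstruction cost strengthens the sparseness constraint on the hidden layer, which is the mechanism the paper builds on.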

Assessment of a radiomic signature developed in a general NSCLC cohort for predicting overall survival of ALK-positive patients with different treatment types

  • Huang, Lyu
  • Chen, Jiayan
  • Hu, Weigang
  • Xu, Xinyan
  • Liu, Di
  • Wen, Junmiao
  • Lu, Jiayu
  • Cao, Jianzhao
  • Zhang, Junhua
  • Gu, Yu
  • Wang, Jiazhou
  • Fan, Min
Clinical lung cancer 2019 Journal Article, cited 0 times
Website
Objectives: To investigate the potential of a radiomic signature developed in a general NSCLC cohort for predicting the overall survival of ALK-positive patients with different treatment types. Methods: After test-retest in the RIDER dataset, 132 features (ICC>0.9) were selected in the LASSO Cox regression model with a leave-one-out cross-validation. The NSCLC Radiomics collection from TCIA was randomly divided into a training set (N=254) and a validation set (N=63) to develop a general radiomic signature for NSCLC. In our ALK+ set, 35 patients received targeted therapy and 19 patients received non-targeted therapy. The developed signature was tested later in this ALK+ set. Performance of the signature was evaluated with C-index and stratification analysis. Results: The general signature has good performance (C-index>0.6, log-rank p-value<0.05) in the NSCLC Radiomics collection. It includes five features: Geom_va_ratio, W_GLCM_LH_Std, W_GLCM_LH_DV, W_GLCM_HH_IM2 and W_his_HL_mean (Supplementary Table S2). Its accuracy of predicting overall survival in the ALK+ set achieved 0.649 (95%CI=0.640-0.658). Nonetheless, impaired performance was observed in the targeted therapy group (C-index=0.573, 95%CI=0.556-0.589) while significantly improved performance was observed in the non-targeted therapy group (C-index=0.832, 95%CI=0.832-0.852). Stratification analysis also showed that the general signature could only identify high- and low-risk patients in the non-targeted therapy group (log-rank p-value=0.00028). Conclusions: This preliminary study suggests that the applicability of a general signature to ALK-positive patients is limited. The general radiomic signature seems to be only applicable to ALK-positive patients who had received non-targeted therapy, which indicates that developing special radiomics signatures for patients treated with TKI might be necessary.
Abbreviations and acronyms: TCIA, The Cancer Imaging Archive; ALK, anaplastic lymphoma kinase; NSCLC, non-small cell lung cancer; EML4-ALK fusion, echinoderm microtubule-associated protein-like 4-anaplastic lymphoma kinase fusion; C-index, concordance index; CI, confidence interval; ICC, intra-class correlation coefficient; OS, overall survival; LASSO, least absolute shrinkage and selection operator; EGFR, epidermal growth factor receptor; TKI, tyrosine kinase inhibitor.
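The C-index used to evaluate the signature is the fraction of usable patient pairs whose predicted risks are ordered consistently with their survival times. A simplified pure-Python sketch (pairs with tied event times are skipped here for brevity; libraries such as lifelines provide a production implementation):

```python
import itertools

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among usable pairs (the earlier time is an
    observed event), the fraction where the higher predicted risk failed
    first. Ties in risk count as 0.5."""
    concordant, usable = 0.0, 0
    for (t_i, e_i, r_i), (t_j, e_j, r_j) in itertools.combinations(
            zip(times, events, risk_scores), 2):
        if t_j < t_i:  # order the pair so t_i <= t_j
            (t_i, e_i, r_i), (t_j, e_j, r_j) = (t_j, e_j, r_j), (t_i, e_i, r_i)
        if t_i == t_j or not e_i:  # not a usable pair
            continue
        usable += 1
        if r_i > r_j:
            concordant += 1.0
        elif r_i == r_j:
            concordant += 0.5
    return concordant / usable
```

A value of 0.5 corresponds to random risk ordering and 1.0 to perfect concordance, which is the sense in which C-index>0.6 in the abstract indicates predictive value.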

The Study on Data Hiding in Medical Images

  • Huang, Li-Chin
  • Tseng, Lin-Yu
  • Hwang, Min-Shiang
International Journal of Network Security 2012 Journal Article, cited 25 times
Website

A reversible data hiding method by histogram shifting in high quality medical images

  • Huang, Li-Chin
  • Tseng, Lin-Yu
  • Hwang, Min-Shiang
Journal of Systems and Software 2013 Journal Article, cited 60 times
Website

The Impact of Arterial Input Function Determination Variations on Prostate Dynamic Contrast-Enhanced Magnetic Resonance Imaging Pharmacokinetic Modeling: A Multicenter Data Analysis Challenge

  • Huang, Wei
  • Chen, Yiyi
  • Fedorov, Andriy
  • Li, Xia
  • Jajamovich, Guido H
  • Malyarenko, Dariya I
  • Aryal, Madhava P
  • LaViolette, Peter S
  • Oborski, Matthew J
  • O'Sullivan, Finbarr
Tomography: a journal for imaging research 2016 Journal Article, cited 21 times
Website

Variations of dynamic contrast-enhanced magnetic resonance imaging in evaluation of breast cancer therapy response: a multicenter data analysis challenge

  • Huang, W.
  • Li, X.
  • Chen, Y.
  • Li, X.
  • Chang, M. C.
  • Oborski, M. J.
  • Malyarenko, D. I.
  • Muzi, M.
  • Jajamovich, G. H.
  • Fedorov, A.
  • Tudorica, A.
  • Gupta, S. N.
  • Laymon, C. M.
  • Marro, K. I.
  • Dyvorne, H. A.
  • Miller, J. V.
  • Barboriak, D. P.
  • Chenevert, T. L.
  • Yankeelov, T. E.
  • Mountz, J. M.
  • Kinahan, P. E.
  • Kikinis, R.
  • Taouli, B.
  • Fennessy, F.
  • Kalpathy-Cramer, J.
2014 Journal Article, cited 60 times
Website
Pharmacokinetic analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) time-course data allows estimation of quantitative parameters such as Ktrans (rate constant for plasma/interstitium contrast agent transfer), ve (extravascular extracellular volume fraction), and vp (plasma volume fraction). A plethora of factors in DCE-MRI data acquisition and analysis can affect accuracy and precision of these parameters and, consequently, the utility of quantitative DCE-MRI for assessing therapy response. In this multicenter data analysis challenge, DCE-MRI data acquired at one center from 10 patients with breast cancer before and after the first cycle of neoadjuvant chemotherapy were shared and processed with 12 software tools based on the Tofts model (TM), extended TM, and Shutter-Speed model. Inputs of tumor region of interest definition, pre-contrast T1, and arterial input function were controlled to focus on the variations in parameter value and response prediction capability caused by differences in models and associated algorithms. Considerable parameter variations were observed, with the within-subject coefficient of variation (wCV) values for Ktrans and vp being as high as 0.59 and 0.82, respectively. Parameter agreement improved when only algorithms based on the same model were compared, e.g., the Ktrans intraclass correlation coefficient increased to as high as 0.84. Agreement in parameter percentage change was much better than that in absolute parameter value, e.g., the pairwise concordance correlation coefficient improved from 0.047 (for Ktrans) to 0.92 (for Ktrans percentage change) in comparing two TM algorithms. Nearly all algorithms provided good to excellent (univariate logistic regression c-statistic value ranging from 0.8 to 1.0) early prediction of therapy response using the metrics of mean tumor Ktrans and kep (= Ktrans/ve, intravasation rate constant) after the first therapy cycle and the corresponding percentage changes. The results suggest that the interalgorithm parameter variations are largely systematic, which are not likely to significantly affect the utility of DCE-MRI for assessment of therapy response.
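The standard Tofts model underlying the Ktrans and kep estimates expresses tissue contrast-agent concentration as a convolution of the arterial input function Cp with an exponential kernel: Ct(t) = Ktrans * integral from 0 to t of Cp(tau) * exp(-kep*(t - tau)) dtau, with kep = Ktrans/ve. A numerical sketch using trapezoidal quadrature (illustrative only; the challenge's 12 software tools each have their own fitting machinery):

```python
import numpy as np

def tofts_ct(t: np.ndarray, cp: np.ndarray, ktrans: float, ve: float) -> np.ndarray:
    """Tissue concentration Ct(t) under the standard Tofts model,
    evaluated by trapezoidal quadrature of the convolution integral."""
    kep = ktrans / ve
    ct = np.zeros_like(t, dtype=float)
    for i in range(1, len(t)):
        tau = t[:i + 1]
        y = cp[:i + 1] * np.exp(-kep * (t[i] - tau))
        ct[i] = ktrans * np.sum((y[1:] + y[:-1]) * np.diff(tau)) / 2.0
    return ct
```

Fitting Ktrans and ve is then a nonlinear least-squares problem matching this forward model to the measured time course; differences in arterial input function, T1 mapping, and optimizer are exactly the sources of inter-algorithm variation the challenge quantifies.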

Fast and Fully-Automated Detection and Segmentation of Pulmonary Nodules in Thoracic CT Scans Using Deep Convolutional Neural Networks

  • Huang, X.
  • Sun, W.
  • Tseng, T. B.
  • Li, C.
  • Qian, W.
Computerized Medical Imaging and Graphics 2019 Journal Article, cited 0 times
Website
Deep learning techniques have been extensively used in computerized pulmonary nodule analysis in recent years. Many reported studies still utilized hybrid methods for diagnosis, in which convolutional neural networks (CNNs) are used only as one part of the pipeline, and the whole system still needs either traditional image processing modules or human intervention to obtain final results. In this paper, we introduced a fast and fully-automated end-to-end system that can efficiently segment precise lung nodule contours from raw thoracic CT scans. Our proposed system has four major modules: candidate nodule detection with Faster regional-CNN (R-CNN), candidate merging, false positive (FP) reduction with CNN, and nodule segmentation with customized fully convolutional neural network (FCN). The entire system has no human interaction or database specific design. The average runtime is about 16 s per scan on a standard workstation. The nodule detection accuracy is 91.4% and 94.6% with an average of 1 and 4 false positives (FPs) per scan. The average dice coefficient of nodule segmentation compared to the groundtruth is 0.793.

A longitudinal four‐dimensional computed tomography and cone beam computed tomography dataset for image‐guided radiation therapy research in lung cancer

  • Hugo, Geoffrey D
  • Weiss, Elisabeth
  • Sleeman, William C
  • Balik, Salim
  • Keall, Paul J
  • Lu, Jun
  • Williamson, Jeffrey F
Medical physics 2017 Journal Article, cited 8 times
Website
PURPOSE: To describe in detail a dataset consisting of serial four-dimensional computed tomography (4DCT) and 4D cone beam CT (4DCBCT) images acquired during chemoradiotherapy of 20 locally advanced, nonsmall cell lung cancer patients we have collected at our institution and shared publicly with the research community. ACQUISITION AND VALIDATION METHODS: As part of an NCI-sponsored research study 82 4DCT and 507 4DCBCT images were acquired in a population of 20 locally advanced nonsmall cell lung cancer patients undergoing radiation therapy. All subjects underwent concurrent radiochemotherapy to a total dose of 59.4-70.2 Gy using daily 1.8 or 2 Gy fractions. Audio-visual biofeedback was used to minimize breathing irregularity during all fractions, including acquisition of all 4DCT and 4DCBCT acquisitions in all subjects. Target, organs at risk, and implanted fiducial markers were delineated by a physician in the 4DCT images. Image coordinate system origins between 4DCT and 4DCBCT were manipulated in such a way that the images can be used to simulate initial patient setup in the treatment position. 4DCT images were acquired on a 16-slice helical CT simulator with 10 breathing phases and 3 mm slice thickness during simulation. In 13 of the 20 subjects, 4DCTs were also acquired on the same scanner weekly during therapy. Every day, 4DCBCT images were acquired on a commercial onboard CBCT scanner. An optically tracked external surrogate was synchronized with CBCT acquisition so that each CBCT projection was time stamped with the surrogate respiratory signal through in-house software and hardware tools. Approximately 2500 projections were acquired over a period of 8-10 minutes in half-fan mode with the half bow-tie filter. Using the external surrogate, the CBCT projections were sorted into 10 breathing phases and reconstructed with an in-house FDK reconstruction algorithm. 
Errors in respiration sorting, reconstruction, and acquisition were carefully identified and corrected. DATA FORMAT AND USAGE NOTES: 4DCT and 4DCBCT images are available in DICOM format and structures through DICOM-RT RTSTRUCT format. All data are stored in the Cancer Imaging Archive (TCIA, http://www.cancerimagingarchive.net/) as collection 4D-Lung and are publicly available. DISCUSSION: Due to high temporal frequency sampling, redundant (4DCT and 4DCBCT) data at similar timepoints, oversampled 4DCBCT, and fiducial markers, this dataset can support studies in image-guided and image-guided adaptive radiotherapy, assessment of 4D voxel trajectory variability, and development and validation of new tools for image registration and motion management.

Pulmonary nodule detection on computed tomography using neuro-evolutionary scheme

  • Huidrom, Ratishchandra
  • Chanu, Yambem Jina
  • Singh, Khumanthem Manglem
Signal, Image and Video Processing 2018 Journal Article, cited 0 times
Website

Radiomics of NSCLC: Quantitative CT Image Feature Characterization and Tumor Shrinkage Prediction

  • Hunter, Luke
2013 Thesis, cited 4 times
Website

Collage CNN for Renal Cell Carcinoma Detection from CT

  • Hussain, Mohammad Arafat
  • Amir-Khalili, Alborz
  • Hamarneh, Ghassan
  • Abugharbieh, Rafeef
2017 Conference Proceedings, cited 0 times
Website

Advanced MRI Techniques in the Monitoring of Treatment of Gliomas

  • Hyare, Harpreet
  • Thust, Steffi
  • Rees, Jeremy
Current treatment options in neurology 2017 Journal Article, cited 11 times
Website

Automatic MRI Breast tumor Detection using Discrete Wavelet Transform and Support Vector Machines

  • Ibraheem, Amira Mofreh
  • Rahouma, Kamel Hussein
  • Hamed, Hesham F. A.
2019 Conference Paper, cited 0 times
Website
The human right is to live a healthy life free of serious diseases. Cancer is the most serious disease facing humans and possibly leading to death. So, a definitive solution must be done to these diseases, to eliminate them and also to protect humans from them. Breast cancer is considered being one of the dangerous types of cancers that face women in particular. Early examination should be done periodically and the diagnosis must be more sensitive and effective to preserve the women lives. There are various types of breast cancer images but magnetic resonance imaging (MRI) has become one of the important ways in breast cancer detection. In this work, a new method is done to detect the breast cancer using the MRI images that is preprocessed using a 2D Median filter. The features are extracted from the images using discrete wavelet transform (DWT). These features are reduced to 13 features. Then, support vector machine (SVM) is used to detect if there is a tumor or not. Simulation results have been accomplished using the MRI images datasets. These datasets are extracted from the standard Breast MRI database known as the “Reference Image Database to Evaluate Response (RIDER)”. The proposed method has achieved an accuracy of 98.03 % using the available MRIs database. The processing time for all processes was recorded as 0.894 seconds. The obtained results have demonstrated the superiority of the proposed system over the available ones in the literature.
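The DWT feature-extraction step can be illustrated with a one-level 2-D Haar transform. The sub-band statistics below are a generic sketch, not the paper's exact 13 selected features:

```python
import numpy as np

def haar_dwt2(img: np.ndarray):
    """One-level 2-D Haar wavelet transform: returns the approximation (LL)
    and detail (LH, HL, HH) sub-bands of an image with even dimensions."""
    a = (img[:, ::2] + img[:, 1::2]) / 2.0   # row-wise average
    d = (img[:, ::2] - img[:, 1::2]) / 2.0   # row-wise detail
    ll = (a[::2] + a[1::2]) / 2.0
    lh = (a[::2] - a[1::2]) / 2.0
    hl = (d[::2] + d[1::2]) / 2.0
    hh = (d[::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_features(img: np.ndarray) -> np.ndarray:
    """Mean and standard deviation of each sub-band as a feature vector."""
    return np.array([f(b) for b in haar_dwt2(img) for f in (np.mean, np.std)])
```

A vector like this, reduced to a handful of discriminative components, is what an SVM would consume to decide tumor versus no tumor.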

Brain tumor segmentation in multi‐spectral MRI using convolutional neural networks (CNN)

  • Iqbal, Sajid
  • Ghani, M Usman
  • Saba, Tanzila
  • Rehman, Amjad
Microscopy research and technique 2018 Journal Article, cited 8 times
Website

A rotation and translation invariant method for 3D organ image classification using deep convolutional neural networks

  • Islam, Kh Tohidul
  • Wijewickrema, Sudanthi
  • O’Leary, Stephen
PeerJ Computer Science 2019 Journal Article, cited 0 times
Website
Three-dimensional (3D) medical image classification is useful in applications such as disease diagnosis and content-based medical image retrieval. It is a challenging task due to several reasons. First, image intensity values are vastly different depending on the image modality. Second, intensity values within the same image modality may vary depending on the imaging machine and artifacts may also be introduced in the imaging process. Third, processing 3D data requires high computational power. In recent years, significant research has been conducted in the field of 3D medical image classification. However, most of these make assumptions about patient orientation and imaging direction to simplify the problem and/or work with the full 3D images. As such, they perform poorly when these assumptions are not met. In this paper, we propose a method of classification for 3D organ images that is rotation and translation invariant. To this end, we extract a representative two-dimensional (2D) slice along the plane of best symmetry from the 3D image. We then use this slice to represent the 3D image and use a 20-layer deep convolutional neural network (DCNN) to perform the classification task. We show experimentally, using multi-modal data, that our method is comparable to existing methods when the assumptions of patient orientation and viewing direction are met. Notably, it shows similarly high accuracy even when these assumptions are violated, where other methods fail. We also explore how this method can be used with other DCNN models as well as conventional classification approaches.

Magnetic resonance image features identify glioblastoma phenotypic subtypes with distinct molecular pathway activities

  • Itakura, Haruka
  • Achrol, Achal S
  • Mitchell, Lex A
  • Loya, Joshua J
  • Liu, Tiffany
  • Westbroek, Erick M
  • Feroze, Abdullah H
  • Rodriguez, Scott
  • Echegaray, Sebastian
  • Azad, Tej D
Science translational medicine 2015 Journal Article, cited 90 times
Website

NextMed, Augmented and Virtual Reality platform for 3D medical imaging visualization: Explanation of the software platform developed for 3D models visualization related with medical images using Augmented and Virtual Reality technology

  • Izard, Santiago González
  • Plaza, Óscar Alonso
  • Torres, Ramiro Sánchez
  • Méndez, Juan Antonio Juanes
  • García-Peñalvo, Francisco José
2019 Conference Proceedings, cited 0 times
Website
The visualization of radiological results with techniques more advanced than the current ones, such as Augmented Reality and Virtual Reality, represents a great advance for medical professionals, as it removes the need for them to mentally reconstruct anatomy in order to understand medical images. The problem is that applying these techniques requires segmenting the anatomical areas of interest, which currently involves human intervention. The Nextmed project is presented as a complete solution that includes DICOM image import, automatic segmentation of certain anatomical structures, 3D mesh generation of the segmented area, and a visualization engine with Augmented Reality and Virtual Reality, all built on different software platforms that have been implemented and are detailed here, including results obtained from real patients. We focus on the visualization platform, which uses both Augmented and Virtual Reality to let medical professionals work with 3D model representations of medical images in a new way, taking advantage of new technologies.

Quantitative imaging in radiation oncology: An emerging science and clinical service

  • Jaffray, DA
  • Chung, C
  • Coolens, C
  • Foltz, W
  • Keller, H
  • Menard, C
  • Milosevic, M
  • Publicover, J
  • Yeung, I
2015 Conference Proceedings, cited 9 times
Website

Prediction of Treatment Response to Neoadjuvant Chemotherapy for Breast Cancer via Early Changes in Tumor Heterogeneity Captured by DCE-MRI Registration

  • Jahani, Nariman
  • Cohen, Eric
  • Hsieh, Meng-Kang
  • Weinstein, Susan P
  • Pantalone, Lauren
  • Hylton, Nola
  • Newitt, David
  • Davatzikos, Christos
  • Kontos, Despina
Scientific Reports 2019 Journal Article, cited 0 times
Website
We analyzed DCE-MR images from 132 women with locally advanced breast cancer from the I-SPY1 trial to evaluate changes of intra-tumor heterogeneity for augmenting early prediction of pathologic complete response (pCR) and recurrence-free survival (RFS) after neoadjuvant chemotherapy (NAC). Utilizing image registration, voxel-wise changes including tumor deformations and changes in DCE-MRI kinetic features were computed to characterize heterogeneous changes within the tumor. Using five-fold cross-validation, logistic regression and Cox regression were performed to model pCR and RFS, respectively. The extracted imaging features were evaluated in augmenting established predictors, including functional tumor volume (FTV) and histopathologic and demographic factors, using the area under the curve (AUC) and the C-statistic as performance measures. The extracted voxel-wise features were also compared to analogous conventional aggregated features to evaluate the potential advantage of voxel-wise analysis. Voxel-wise features improved prediction of pCR (AUC = 0.78 (±0.03) vs 0.71 (±0.04), p < 0.05) and RFS (C-statistic = 0.76 (±0.05) vs 0.63 (±0.01), p < 0.05), while models based on analogous aggregate imaging features did not show appreciable performance changes (p > 0.05). Furthermore, all selected voxel-wise features demonstrated significant association with outcome (p < 0.05). Thus, precise measures of voxel-wise changes in tumor heterogeneity extracted from registered DCE-MRI scans can improve early prediction of neoadjuvant treatment outcomes in locally advanced breast cancer.
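The abstract above evaluates predictors by the area under the ROC curve (AUC). As a minimal, generic illustration (not code from the study), the AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one, which can be computed directly from the pairwise comparisons:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formulation.

    labels: iterable of 0/1 ground-truth values; scores: predicted scores.
    A tie between a positive and a negative score counts as 0.5.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` gives 0.75, since three of the four positive/negative pairs are ranked correctly.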

Genomic mapping and survival prediction in glioblastoma: molecular subclassification strengthened by hemodynamic imaging biomarkers

  • Jain, Rajan
  • Poisson, Laila
  • Narang, Jayant
  • Gutman, David
  • Scarpace, Lisa
  • Hwang, Scott N
  • Holder, Chad
  • Wintermark, Max
  • Colen, Rivka R
  • Kirby, Justin
Radiology 2013 Journal Article, cited 99 times
Website

Correlation of perfusion parameters with genes related to angiogenesis regulation in glioblastoma: a feasibility study

  • Jain, R
  • Poisson, L
  • Narang, J
  • Scarpace, L
  • Rosenblum, ML
  • Rempel, S
  • Mikkelsen, T
American Journal of Neuroradiology 2012 Journal Article, cited 39 times
Website

Outcome prediction in patients with glioblastoma by using imaging, clinical, and genomic biomarkers: focus on the nonenhancing component of the tumor

  • Jain, R.
  • Poisson, L. M.
  • Gutman, D.
  • Scarpace, L.
  • Hwang, S. N.
  • Holder, C. A.
  • Wintermark, M.
  • Rao, A.
  • Colen, R. R.
  • Kirby, J.
  • Freymann, J.
  • Jaffe, C. C.
  • Mikkelsen, T.
  • Flanders, A.
Radiology 2014 Journal Article, cited 86 times
Website
PURPOSE: To correlate patient survival with morphologic imaging features and hemodynamic parameters obtained from the nonenhancing region (NER) of glioblastoma (GBM), along with clinical and genomic markers. MATERIALS AND METHODS: An institutional review board waiver was obtained for this HIPAA-compliant retrospective study. Forty-five patients with GBM underwent baseline imaging with contrast material-enhanced magnetic resonance (MR) imaging and dynamic susceptibility contrast-enhanced T2*-weighted perfusion MR imaging. Molecular and clinical predictors of survival were obtained. Single and multivariable models of overall survival (OS) and progression-free survival (PFS) were explored with Kaplan-Meier estimates, Cox regression, and random survival forests. RESULTS: Worsening OS (log-rank test, P = .0103) and PFS (log-rank test, P = .0223) were associated with increasing relative cerebral blood volume of NER (rCBVNER), which was higher with deep white matter involvement (t test, P = .0482) and poor NER margin definition (t test, P = .0147). NER crossing the midline was the only morphologic feature of NER associated with poor survival (log-rank test, P = .0125). Preoperative Karnofsky performance score (KPS) and resection extent (n = 30) were clinically significant OS predictors (log-rank test, P = .0176 and P = .0038, respectively). No genomic alterations were associated with survival, except patients with high rCBVNER and wild-type epidermal growth factor receptor (EGFR) mutation had significantly poor survival (log-rank test, P = .0306; area under the receiver operating characteristic curve = 0.62). Combining resection extent with rCBVNER marginally improved prognostic ability (permutation, P = .084). Random forest models of presurgical predictors indicated rCBVNER as the top predictor; also important were KPS, age at diagnosis, and NER crossing the midline. A multivariable model containing rCBVNER, age at diagnosis, and KPS can be used to group patients with more than 1 year of difference in observed median survival (0.49-1.79 years). CONCLUSION: Patients with high rCBVNER and NER crossing the midline and those with high rCBVNER and wild-type EGFR mutation showed poor survival. In multivariable survival models, however, rCBVNER provided unique prognostic information that went above and beyond the assessment of all NER imaging features, as well as clinical and genomic features.

Integrative analysis of diffusion-weighted MRI and genomic data to inform treatment of glioblastoma

  • Jajamovich, Guido H
  • Valiathan, Chandni R
  • Cristescu, Razvan
  • Somayajula, Sangeetha
Journal of Neuro-Oncology 2016 Journal Article, cited 4 times
Website

Non-invasive tumor genotyping using radiogenomic biomarkers, a systematic review and oncology-wide pathway analysis

  • Jansen, Robin W
  • van Amstel, Paul
  • Martens, Roland M
  • Kooi, Irsan E
  • Wesseling, Pieter
  • de Langen, Adrianus J
  • Menke-Van der Houven, Catharina W
Oncotarget 2018 Journal Article, cited 0 times
Website

Deep Neural Network Based Classifier Model for Lung Cancer Diagnosis and Prediction System in Healthcare Informatics

  • Jayaraj, D.
  • Sathiamoorthy, S.
2019 Conference Paper, cited 0 times
Lung cancer is a major deadly disease that causes mortality through unmanageable cell growth. This problem has increased the importance, among physicians as well as academicians, of developing efficient diagnosis models. A novel method for automated identification of lung nodules therefore becomes essential, and it forms the motivation of this study. This paper presents a new deep learning classification model for lung cancer diagnosis. The presented model involves four main steps, namely preprocessing, feature extraction, segmentation, and classification. A particle swarm optimization (PSO) algorithm is used for segmentation, and a deep neural network (DNN) is applied for classification. The presented PSO-DNN model is tested against a set of sample lung images, and the results verified the goodness of the projected model on all the applied images.

Integrating Open Data on Cancer in Support to Tumor Growth Analysis

  • Jeanquartier, Fleur
  • Jean-Quartier, Claire
  • Schreck, Tobias
  • Cemernek, David
  • Holzinger, Andreas
2016 Conference Proceedings, cited 10 times
Website

Assessment of prostate cancer prognostic Gleason grade group using zonal-specific features extracted from biparametric MRI using a KNN classifier

  • Jensen, C.
  • Carl, J.
  • Boesen, L.
  • Langkilde, N. C.
  • Ostergaard, L. R.
J Appl Clin Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: To automatically assess the aggressiveness of prostate cancer (PCa) lesions using zonal-specific image features extracted from diffusion weighted imaging (DWI) and T2W MRI. METHODS: Region of interest was extracted from DWI (peripheral zone) and T2W MRI (transitional zone and anterior fibromuscular stroma) around the center of 112 PCa lesions from 99 patients. Image histogram and texture features, 38 in total, were used together with a k-nearest neighbor classifier to classify lesions into their respective prognostic Grade Group (GG) (proposed by the International Society of Urological Pathology 2014 consensus conference). A semi-exhaustive feature search was performed (1-6 features in each feature set) and validated using threefold stratified cross validation in a one-versus-rest classification setup. RESULTS: Classifying PCa lesions into GGs resulted in AUC of 0.87, 0.88, 0.96, 0.98, and 0.91 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5 for the peripheral zone, respectively. The results for transitional zone and anterior fibromuscular stroma were AUC of 0.85, 0.89, 0.83, 0.94, and 0.86 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5, respectively. CONCLUSION: This study showed promising results with reasonable AUC values for classification of all GG indicating that zonal-specific imaging features from DWI and T2W MRI can be used to differentiate between PCa lesions of various aggressiveness.
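The study above classifies lesions with a k-nearest-neighbor classifier. A minimal, generic k-NN sketch (the toy feature vectors and labels below are invented for illustration, not data from the study):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    train: list of (feature_vector, label) pairs; query: feature vector.
    Plain Euclidean distance is used, as is common after feature normalization.
    """
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy example: two invented texture features per lesion, two grade groups
train = [((0.1, 0.2), "GG1"), ((0.2, 0.1), "GG1"),
         ((0.9, 0.8), "GG3"), ((0.8, 0.9), "GG3")]
```

With this toy data, a query near the first cluster, such as `(0.15, 0.15)`, is voted `"GG1"` by its three nearest neighbors.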

Lung nodule detection from CT scans using 3D convolutional neural networks without candidate selection

  • Jenuwine, Natalia M
  • Mahesh, Sunny N
  • Furst, Jacob D
  • Raicu, Daniela S
2018 Conference Proceedings, cited 0 times
Website

Computer-aided nodule detection and volumetry to reduce variability between radiologists in the interpretation of lung nodules at low-dose screening CT

  • Jeon, Kyung Nyeo
  • Goo, Jin Mo
  • Lee, Chang Hyun
  • Lee, Youkyung
  • Choo, Ji Yung
  • Lee, Nyoung Keun
  • Shim, Mi-Suk
  • Lee, In Sun
  • Kim, Kwang Gi
  • Gierada, David S
Investigative Radiology 2012 Journal Article, cited 51 times
Website

CASED: Curriculum Adaptive Sampling for Extreme Data Imbalance

  • Jesson, Andrew
  • Guizard, Nicolas
  • Ghalehjegh, Sina Hamidi
  • Goblot, Damien
  • Soudan, Florian
  • Chapados, Nicolas
2017 Conference Proceedings, cited 18 times
Website
We introduce CASED, a novel curriculum sampling algorithm that facilitates the optimization of deep learning segmentation or detection models on data sets with extreme class imbalance. We evaluate the CASED learning framework on the task of lung nodule detection in chest CT. In contrast to two-stage solutions, wherein nodule candidates are first proposed by a segmentation model and refined by a second detection stage, CASED improves the training of deep nodule segmentation models (e.g. UNet) to the point where state of the art results are achieved using only a trivial detection stage. CASED improves the optimization of deep segmentation models by allowing them to first learn how to distinguish nodules from their immediate surroundings, while continuously adding a greater proportion of difficult-to-classify global context, until uniformly sampling from the empirical data distribution. Using CASED during training yields a minimalist proposal to the lung nodule detection problem that tops the LUNA16 nodule detection benchmark with an average sensitivity score of 88.35%. Furthermore, we find that models trained using CASED are robust to nodule annotation quality by showing that comparable results can be achieved when only a point and radius for each ground truth nodule are provided during training. Finally, the CASED learning framework makes no assumptions with regard to imaging modality or segmentation target and should generalize to other medical imaging problems where class imbalance is a persistent problem.
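The curriculum described above, which begins with nodule-centered patches and then mixes in a growing share of patches drawn uniformly from the empirical data distribution, can be sketched as a sampling rule. The linear annealing schedule below is an assumption for illustration, not the authors' exact schedule:

```python
import random

def curriculum_sample(positives, all_patches, step, total_steps, rng=random):
    """Draw one training patch following a CASED-style curriculum.

    Early in training (step near 0) we sample almost exclusively from the
    positive (nodule) patches; by the end (step near total_steps) we sample
    uniformly from the empirical distribution `all_patches`.
    """
    alpha = min(1.0, step / total_steps)  # fraction drawn from the full data
    if rng.random() < alpha:
        return rng.choice(all_patches)    # uniform over empirical distribution
    return rng.choice(positives)          # nodule-centred patch
```

At step 0 every draw is a positive patch; once `step` reaches `total_steps`, draws come uniformly from the whole patch pool, matching the "until uniformly sampling from the empirical data distribution" behavior described in the abstract.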

Fusion Radiomics Features from Conventional MRI Predict MGMT Promoter Methylation Status in Lower Grade Gliomas

  • Jiang, Chendan
  • Kong, Ziren
  • Liu, Sirui
  • Feng, Shi
  • Zhang, Yiwei
  • Zhu, Ruizhe
  • Chen, Wenlin
  • Wang, Yuekun
  • Lyu, Yuelei
  • You, Hui
  • Zhao, Dachun
  • Wang, Renzhi
  • Wang, Yu
  • Ma, Wenbin
  • Feng, Feng
Eur J Radiol 2019 Journal Article, cited 0 times
Website
PURPOSE: The methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter has been proven to be a prognostic and predictive biomarker for lower grade glioma (LGG). This study aims to build a radiomics model to preoperatively predict the MGMT promoter methylation status in LGG. METHOD: 122 pathology-confirmed LGG patients were retrospectively reviewed, with 87 local patients as the training dataset, and 35 from The Cancer Imaging Archive as independent validation. A total of 1702 radiomics features were extracted from three-dimensional contrast-enhanced T1 (3D-CE-T1)-weighted and T2-weighted MRI images, including 14 shape, 18 first order, 75 texture, and 744 wavelet features respectively. The radiomics features were selected with the least absolute shrinkage and selection operator algorithm, and prediction models were constructed with multiple classifiers. Models were evaluated using receiver operating characteristic (ROC) analysis. RESULTS: Five radiomics prediction models, namely, 3D-CE-T1-weighted single radiomics model, T2-weighted single radiomics model, fusion radiomics model, linear combination radiomics model, and clinical integrated model, were built. The fusion radiomics model, which was constructed from the concatenation of both series, displayed the best performance, with an accuracy of 0.849 and an area under the curve (AUC) of 0.970 (0.939-1.000) in the training dataset, and an accuracy of 0.886 and an AUC of 0.898 (0.786-1.000) in the validation dataset. Linear combination of single radiomics models and integration of clinical factors did not improve performance. CONCLUSIONS: Conventional MRI radiomics models are reliable for predicting the MGMT promoter methylation status in LGG patients. The fusion of radiomics features from different series may increase the prediction performance.

Evaluation of Feature Robustness Against Technical Parameters in CT Radiomics: Verification of Phantom Study with Patient Dataset

  • Jin, Hyeongmin
  • Kim, Jong Hyo
Journal of Signal Processing Systems 2020 Journal Article, cited 1 times
Website
Recent advances in radiomics have shown promising results in prognostic and diagnostic studies with high dimensional imaging feature analysis. However, radiomic features are known to be affected by technical parameters and feature extraction methodology. We evaluate the robustness of CT radiomic features against the technical parameters involved in CT acquisition and feature extraction procedures using a standardized phantom, and verify the feature robustness by using patient cases. The ACR phantom was scanned with two tube currents, two reconstruction kernels, and two field-of-view sizes. A total of 47 radiomic features of textures and first-order statistics were extracted on the homogeneous region from all scans. Intrinsic variability was measured to identify unstable features vulnerable to inherent CT noise and texture. A susceptibility index was defined to represent the susceptibility to the variation of a given technical parameter. Eighteen radiomic features were shown to be intrinsically unstable under the reference condition. The features were more susceptible to reconstruction kernel variation than to other sources of variation. The feature robustness evaluated on the phantom CT correlated with that evaluated on clinical CT scans. We revealed that a number of scan parameters can significantly affect the radiomic features. These characteristics should be considered in a radiomic study when different scan parameters are used in a clinical dataset.

Enhancement of Deep Learning in Image Classification Performance Using Xception with the Swish Activation Function for Colorectal Polyp Preliminary Screening

  • Jinsakul, Natinai
  • Tsai, Cheng-Fa
  • Tsai, Chia-En
  • Wu, Pensee
Mathematics 2019 Journal Article, cited 0 times
One of the leading forms of cancer is colorectal cancer (CRC), which is responsible for increasing mortality in young people. The aim of this paper is to provide an experimental modification of the Xception deep learning model with the Swish activation function, and to assess the possibility of developing a preliminary colorectal polyp screening system by training the proposed model on a colorectal topogram dataset with two and three classes. The results indicate that the proposed model can enhance the original convolutional neural network model, achieving classification accuracy of up to 98.99% for two classes and 91.48% for three classes. When testing the model on additional external images, the proposed method also improved prediction compared to the traditional method, with 99.63% accuracy for true prediction of two classes and 80.95% accuracy for true prediction of three classes.
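The Swish activation referenced above has a simple closed form, swish(x) = x · sigmoid(βx), which reduces to x · sigmoid(x) for β = 1; a minimal sketch:

```python
import math

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x).

    With beta = 1 this is the form commonly swapped in for ReLU.
    swish(0) = 0, and swish(x) approaches x for large positive x.
    """
    return x / (1.0 + math.exp(-beta * x))
```

Unlike ReLU, Swish is smooth and non-monotonic (it dips slightly below zero for small negative inputs), which is the property the paper exploits when substituting it into Xception.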

Analysis of Vestibular Labyrinthine Geometry and Variation in the Human Temporal Bone

  • Johnson Chacko, Lejo
  • Schmidbauer, Dominik T
  • Handschuh, Stephan
  • Reka, Alen
  • Fritscher, Karl D
  • Raudaschl, Patrik
  • Saba, Rami
  • Handler, Michael
  • Schier, Peter P
  • Baumgarten, Daniel
Frontiers in Neuroscience 2018 Journal Article, cited 4 times
Website

Interactive 3D Virtual Colonoscopic Navigation For Polyp Detection From CT Images

  • Joseph, Jinu
  • Kumar, Rajesh
  • Chandran, Pournami S
  • Vidya, PV
Procedia Computer Science 2017 Journal Article, cited 0 times
Website

Cloud-based NoSQL open database of pulmonary nodules for computer-aided lung cancer diagnosis and reproducible research

  • Junior, José Raniery Ferreira
  • Oliveira, Marcelo Costa
  • de Azevedo-Marques, Paulo Mazzoncini
Journal of Digital Imaging 2016 Journal Article, cited 14 times
Website

Radiographic assessment of contrast enhancement and T2/FLAIR mismatch sign in lower grade gliomas: correlation with molecular groups

  • Juratli, Tareq A
  • Tummala, Shilpa S
  • Riedl, Angelika
  • Daubner, Dirk
  • Hennig, Silke
  • Penson, Tristan
  • Zolal, Amir
  • Thiede, Christian
  • Schackert, Gabriele
  • Krex, Dietmar
Journal of Neuro-Oncology 2018 Journal Article, cited 0 times
Website

Homology-based radiomic features for prediction of the prognosis of lung cancer based on CT-based radiomics

  • Kadoya, Noriyuki
  • Tanaka, Shohei
  • Kajikawa, Tomohiro
  • Tanabe, Shunpei
  • Abe, Kota
  • Nakajima, Yujiro
  • Yamamoto, Takaya
  • Takahashi, Noriyoshi
  • Takeda, Kazuya
  • Dobashi, Suguru
  • Takeda, Ken
  • Nakane, Kazuaki
  • Jingu, Keiichi
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Radiomics is a new technique that enables noninvasive prognostic prediction by extracting features from medical images. Homology is a concept used in many branches of algebra and topology that can quantify the contact degree. In the present study, we developed homology-based radiomic features to predict the prognosis of non-small-cell lung cancer (NSCLC) patients and then evaluated the accuracy of this prediction method. METHODS: Four data sets were used: two to provide training and test data and two for the selection of robust radiomic features. All the data sets were downloaded from The Cancer Imaging Archive (TCIA). In two-dimensional cases, the Betti numbers consist of two values: b0 (zero-dimensional Betti number), which is the number of isolated components, and b1 (one-dimensional Betti number), which is the number of one-dimensional or "circular" holes. For homology-based evaluation, CT images must be converted to binarized images in which each pixel has two possible values: 0 or 1. All CT slices of the gross tumor volume were used for calculating the homology histogram. First, by changing the threshold of the CT value (range: -150 to 300 HU) for all its slices, we developed homology-based histograms for b0, b1, and b1/b0 using binarized images. All histograms were then summed, and the summed histogram was normalized by the number of slices. A total of 144 homology-based radiomic features were defined from the histogram. For comparison, 107 standard radiomic features were calculated using the standard radiomics technique. To clarify the prognostic power, the relationship between the values of the homology-based radiomic features and overall survival was evaluated using a LASSO Cox regression model and the Kaplan-Meier method. The retained features with non-zero coefficients calculated by the LASSO Cox regression model were used for fitting the regression model. Moreover, these features were then integrated into a radiomics signature. An individualized rad score was calculated from a linear combination of the selected features, which were weighted by their respective coefficients. RESULTS: When the patients in the training and test data sets were stratified into high-risk and low-risk groups according to the rad scores, the overall survival of the groups was significantly different. The C-index values for the homology-based features (rad score), standard features (rad score), and tumor size were 0.625, 0.603, and 0.607, respectively, for the training data sets and 0.689, 0.668, and 0.667 for the test data sets. This result showed that homology-based radiomic features had slightly higher prediction power than the standard radiomic features. CONCLUSIONS: Prediction performance using homology-based radiomic features was comparable to or slightly higher than that of standard radiomic features. These findings suggest that homology-based radiomic features may have great potential for improving the prognostic prediction accuracy of CT-based radiomics. It should be noted, however, that this study has some limitations.
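For intuition about the zero-dimensional Betti number b0 used above, the count of connected components in a binarized slice, here is a minimal flood-fill sketch (illustrative only, not the authors' implementation):

```python
from collections import deque

def betti_0(binary_image):
    """Count connected components (b0) of a 2D binary image.

    binary_image: list of lists of 0/1 values; 4-connectivity is assumed.
    """
    rows, cols = len(binary_image), len(binary_image[0])
    seen = [[False] * cols for _ in range(rows)]
    components = 0
    for r in range(rows):
        for c in range(cols):
            if binary_image[r][c] == 1 and not seen[r][c]:
                components += 1                 # found a new component
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                    # breadth-first flood fill
                    i, j = queue.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and binary_image[ni][nj] == 1
                                and not seen[ni][nj]):
                            seen[ni][nj] = True
                            queue.append((ni, nj))
    return components
```

Sweeping the binarization threshold over the CT value range and recording b0 (and b1) at each threshold yields the homology histograms the abstract describes.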

Multicenter CT phantoms public dataset for radiomics reproducibility tests

  • Kalendralis, Petros
  • Traverso, Alberto
  • Shi, Zhenwei
  • Zhovannik, Ivan
  • Monshouwer, Rene
  • Starmans, Martijn P A
  • Klein, Stefan
  • Pfaehler, Elisabeth
  • Boellaard, Ronald
  • Dekker, Andre
  • Wee, Leonard
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: The aim of this paper is to describe a public, open-access, computed tomography (CT) phantom image set acquired at three centers and collected especially for radiomics reproducibility research. The dataset is useful to test radiomic features reproducibility with respect to various parameters, such as acquisition settings, scanners, and reconstruction algorithms. ACQUISITION AND VALIDATION METHODS: Three phantoms were scanned in three independent institutions. Images of the following phantoms were acquired: Catphan 700 and COPDGene Phantom II (Phantom Laboratory, Greenwich, NY, USA), and the Triple modality 3D Abdominal Phantom (CIRS, Norfolk, VA, USA). Data were collected at three Dutch medical centers: MAASTRO Clinic (Maastricht, NL), Radboud University Medical Center (Nijmegen, NL), and University Medical Center Groningen (Groningen, NL) with scanners from two different manufacturers Siemens Healthcare and Philips Healthcare. The following acquisition parameters were varied in the phantom scans: slice thickness, reconstruction kernels, and tube current. DATA FORMAT AND USAGE NOTES: We made the dataset publicly available on the Dutch instance of "Extensible Neuroimaging Archive Toolkit-XNAT" (https://xnat.bmia.nl). The dataset is freely available and reusable with attribution (Creative Commons 3.0 license). POTENTIAL APPLICATIONS: Our goal was to provide a findable, open-access, annotated, and reusable CT phantom dataset for radiomics reproducibility studies. Reproducibility testing and harmonization are fundamental requirements for wide generalizability of radiomics-based clinical prediction models. It is highly desirable to include only reproducible features into models, to be more assured of external validity across hitherto unseen contexts. In this view, phantom data from different centers represent a valuable source of information to exclude CT radiomic features that may already be unstable with respect to simplified structures and tightly controlled scan settings. The intended extension of our shared dataset is to include other modalities and phantoms with more realistic lesion simulations.

Radiomics of Lung Nodules: A Multi-Institutional Study of Robustness and Agreement of Quantitative Imaging Features

  • Kalpathy-Cramer, J.
  • Mamomov, A.
  • Zhao, B.
  • Lu, L.
  • Cherezov, D.
  • Napel, S.
  • Echegaray, S.
  • Rubin, D.
  • McNitt-Gray, M.
  • Lo, P.
  • Sieren, J. C.
  • Uthoff, J.
  • Dilger, S. K.
  • Driscoll, B.
  • Yeung, I.
  • Hadjiiski, L.
  • Cha, K.
  • Balagurunathan, Y.
  • Gillies, R.
  • Goldgof, D.
Tomography: A Journal for Imaging Research 2016 Journal Article, cited 19 times
Website

A Comparison of Lung Nodule Segmentation Algorithms: Methods and Results from a Multi-institutional Study

  • Kalpathy-Cramer, Jayashree
  • Zhao, Binsheng
  • Goldgof, Dmitry
  • Gu, Yuhua
  • Wang, Xingwei
  • Yang, Hao
  • Tan, Yongqiang
  • Gillies, Robert
  • Napel, Sandy
Journal of Digital Imaging 2016 Journal Article, cited 18 times
Website

A low cost approach for brain tumor segmentation based on intensity modeling and 3D Random Walker

  • Kanas, Vasileios G
  • Zacharaki, Evangelia I
  • Davatzikos, Christos
  • Sgarbas, Kyriakos N
  • Megalooikonomou, Vasileios
Biomedical Signal Processing and Control 2015 Journal Article, cited 15 times
Website

Learning MRI-based classification models for MGMT methylation status prediction in glioblastoma

  • Kanas, Vasileios G
  • Zacharaki, Evangelia I
  • Thomas, Ginu A
  • Zinn, Pascal O
  • Megalooikonomou, Vasileios
  • Colen, Rivka R
Computer methods and programs in biomedicine 2017 Journal Article, cited 16 times
Website

Neurosense: deep sensing of full or near-full coverage head/brain scans in human magnetic resonance imaging

  • Kanber, B.
  • Ruffle, J.
  • Cardoso, J.
  • Ourselin, S.
  • Ciccarelli, O.
Neuroinformatics 2019 Journal Article, cited 0 times
Website
The application of automated algorithms to imaging requires knowledge of its content, a curatorial task, for which we ordinarily rely on the Digital Imaging and Communications in Medicine (DICOM) header as the only source of image meta-data. However, identifying brain MRI scans that have full or near-full coverage among a large number (e.g. >5000) of scans comprising both head/brain and other body parts is a time-consuming task that cannot be automated with the use of the information stored in the DICOM header attributes alone. Depending on the clinical scenario, an entire set of scans acquired in a single visit may often be labelled “BRAIN” in the DICOM field 0018,0015 (Body Part Examined), while the individual scans will often not only include brain scans with full coverage, but also others with partial brain coverage, scans of the spinal cord, and in some cases other body parts.

3D multi-view convolutional neural networks for lung nodule classification

  • Kang, Guixia
  • Liu, Kui
  • Hou, Beibei
  • Zhang, Ningbo
PLoS One 2017 Journal Article, cited 7 times
Website

Multi-Institutional Validation of Deep Learning for Pretreatment Identification of Extranodal Extension in Head and Neck Squamous Cell Carcinoma

  • Kann, B. H.
  • Hicks, D. F.
  • Payabvash, S.
  • Mahajan, A.
  • Du, J.
  • Gupta, V.
  • Park, H. S.
  • Yu, J. B.
  • Yarbrough, W. G.
  • Burtness, B. A.
  • Husain, Z. A.
  • Aneja, S.
J Clin Oncol 2020 Journal Article, cited 5 times
Website
PURPOSE: Extranodal extension (ENE) is a well-established poor prognosticator and an indication for adjuvant treatment escalation in patients with head and neck squamous cell carcinoma (HNSCC). Identification of ENE on pretreatment imaging represents a diagnostic challenge that limits its clinical utility. We previously developed a deep learning algorithm that identifies ENE on pretreatment computed tomography (CT) imaging in patients with HNSCC. We sought to validate our algorithm performance for patients from a diverse set of institutions and compare its diagnostic ability to that of expert diagnosticians. METHODS: We obtained preoperative, contrast-enhanced CT scans and corresponding pathology results from two external data sets of patients with HNSCC: an external institution and The Cancer Genome Atlas (TCGA) HNSCC imaging data. Lymph nodes were segmented and annotated as ENE-positive or ENE-negative on the basis of pathologic confirmation. Deep learning algorithm performance was evaluated and compared directly to two board-certified neuroradiologists. RESULTS: A total of 200 lymph nodes were examined in the external validation data sets. For lymph nodes from the external institution, the algorithm achieved an area under the receiver operating characteristic curve (AUC) of 0.84 (83.1% accuracy), outperforming radiologists' AUCs of 0.70 and 0.71 (P = .02 and P = .01). Similarly, for lymph nodes from the TCGA, the algorithm achieved an AUC of 0.90 (88.6% accuracy), outperforming radiologist AUCs of 0.60 and 0.82 (P < .0001 and P = .16). Radiologist diagnostic accuracy improved when receiving deep learning assistance. CONCLUSION: Deep learning successfully identified ENE on pretreatment imaging across multiple institutions, exceeding the diagnostic ability of radiologists with specialized head and neck experience. Our findings suggest that deep learning has utility in the identification of ENE in patients with HNSCC and has the potential to be integrated into clinical decision making.

Public data and open source tools for multi-assay genomic investigation of disease

  • Kannan, Lavanya
  • Ramos, Marcel
  • Re, Angela
  • El-Hachem, Nehme
  • Safikhani, Zhaleh
  • Gendoo, Deena MA
  • Davis, Sean
  • Gomez-Cabrero, David
  • Castelo, Robert
  • Hansen, Kasper D
Briefings in Bioinformatics 2015 Journal Article, cited 28 times
Website

Radiogenomic correlation for prognosis in patients with glioblastoma multiformae

  • Karnayana, Pallavi Machaiah
2013 Thesis, cited 0 times
Website

Identification of Tumor area from Brain MR Image

  • Kasım, Ömer
  • Kuzucuoğlu, Ahmet Emin
2016 Conference Proceedings, cited 1 times
Website

Mediator: A data sharing synchronization platform for heterogeneous medical image archives

  • Kathiravelu, Pradeeban
  • Sharma, Ashish
2015 Conference Proceedings, cited 4 times
Website

On-demand big data integration

  • Kathiravelu, Pradeeban
  • Sharma, Ashish
  • Galhardas, Helena
  • Van Roy, Peter
  • Veiga, Luís
Distributed and Parallel Databases 2018 Journal Article, cited 2 times
Website

“Radiotranscriptomics”: A synergy of imaging and transcriptomics in clinical assessment

  • Katrib, Amal
  • Hsu, William
  • Bui, Alex
  • Xing, Yi
Quantitative Biology 2016 Journal Article, cited 0 times

A joint intensity and edge magnitude-based multilevel thresholding algorithm for the automatic segmentation of pathological MR brain images

  • Kaur, Taranjit
  • Saini, Barjinder Singh
  • Gupta, Savita
Neural Computing and Applications 2016 Journal Article, cited 1 times
Website

ECM-CSD: An Efficient Classification Model for Cancer Stage Diagnosis in CT Lung Images Using FCM and SVM Techniques

  • Kavitha, MS
  • Shanthini, J
  • Sabitha, R
Journal of Medical Systems 2019 Journal Article, cited 0 times
Website

ECIDS-Enhanced Cancer Image Diagnosis and Segmentation Using Artificial Neural Networks and Active Contour Modelling

  • Kavitha, M. S.
  • Shanthini, J.
  • Bhavadharini, R. M.
Journal of Medical Imaging and Health Informatics 2020 Journal Article, cited 0 times
In the present decade, image processing techniques are extensively utilized in various medical image diagnoses, specifically in dealing with cancer images for early detection and treatment. The quality of the image and the accuracy are the significant factors to be considered while analyzing images for cancer diagnosis. With that note, in this paper, an Enhanced Cancer Image Diagnosis and Segmentation (ECIDS) framework has been developed for effective detection and segmentation of lung cancer cells. Initially, the computed tomography lung image (CT image) is denoised by employing a kernel-based global denoising function. Following that, the noise-free lung images are given for feature extraction. The images are further classified into normal and abnormal classes using feed-forward artificial neural network classification. The classified lung cancer images are then given for segmentation, which is performed using active contour modelling with reduced gradient. The segmented cancer images are further given for medical processing. Moreover, the framework is experimented with in MATLAB using the clinical LIDC-IDRI lung CT dataset. The results are analyzed and discussed based on performance evaluation metrics for effective classification, such as energy, entropy, correlation, and homogeneity.

Radiological Atlas for Patient Specific Model Generation

  • Kawa, Jacek
  • Juszczyk, Jan
  • Pyciński, Bartłomiej
  • Badura, Paweł
  • Pietka, Ewa
2014 Book Section, cited 11 times
Website

Supervised Dimension-Reduction Methods for Brain Tumor Image Data Analysis

  • Kawaguchi, Atsushi
2017 Book Section, cited 1 times
Website

eFis: A Fuzzy Inference Method for Predicting Malignancy of Small Pulmonary Nodules

  • Kaya, Aydın
  • Can, Ahmet Burak
2014 Book Section, cited 3 times
Website

Malignancy prediction by using characteristic-based fuzzy sets: A preliminary study

  • Kaya, Aydin
  • Can, Ahmet Burak
2015 Conference Proceedings, cited 0 times
Website

Computer-aided detection of brain tumors using image processing techniques

  • Kazdal, Seda
  • Dogan, Buket
  • Camurcu, Ali Yilmaz
2015 Conference Proceedings, cited 3 times
Website

Arterial input function and tracer kinetic model-driven network for rapid inference of kinetic maps in Dynamic Contrast-Enhanced MRI (AIF-TK-net)

  • Kettelkamp, Joseph
  • Lingala, Sajan Goud
2020 Conference Paper, cited 0 times
Website
We propose a patient-specific arterial input function (AIF) and tracer kinetic (TK) model-driven network to rapidly estimate the extended Tofts-Kety kinetic model parameters in DCE-MRI. We term our network AIF-TK-net; it maps an input comprising an image patch of the DCE time series and the patient-specific AIF to the output image patch of the TK parameters. We leverage the open-source NEURO-RIDER database of brain tumor DCE-MRI scans to train our network. Once trained, our model rapidly infers the TK maps of unseen DCE-MRI images on the order of 0.34 sec/slice for 256x256x65 time-series data on an NVIDIA GeForce GTX 1080 Ti GPU. We show its utility on high-time-resolution DCE-MRI datasets where significant variability in AIFs across patients exists. We demonstrate that the proposed AIF-TK-net considerably improves the TK parameter estimation accuracy in comparison to a network that does not utilize the patient AIF.
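The extended Tofts-Kety model that AIF-TK-net learns to invert has a simple forward form: C_t(t) = v_p C_p(t) + K_trans ∫ C_p(τ) e^{-k_ep(t−τ)} dτ. A minimal pure-Python sketch of this forward model (a standard discretization, not code from the paper; function and variable names are illustrative):

```python
import math

def extended_tofts(cp, dt, ktrans, kep, vp):
    """Forward extended Tofts-Kety model: tissue concentration from an AIF.

    cp     : sampled arterial input function C_p(t) (list of floats)
    dt     : sampling interval, in the same time units as 1/kep
    ktrans : volume transfer constant K_trans
    kep    : efflux rate constant k_ep
    vp     : plasma volume fraction v_p
    """
    ct = []
    for i in range(len(cp)):
        # discrete convolution of C_p with the exponential residue function
        conv = sum(cp[j] * math.exp(-kep * (i - j) * dt) for j in range(i + 1))
        ct.append(vp * cp[i] + ktrans * conv * dt)
    return ct
```

With K_trans = 0 and v_p = 1 the tissue curve reduces to the AIF itself, a handy sanity check for any implementation.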

Preliminary Detection and Analysis of Lung Cancer on CT images using MATLAB: A Cost-effective Alternative

  • Khan, Md Daud Hossain
  • Ahmed, Mansur
  • Bach, Christian
Journal of Biomedical Engineering and Medical Imaging 2016 Journal Article, cited 0 times

Zonal Segmentation of Prostate T2W-MRI using Atrous Convolutional Neural Network

  • Khan, Zia
  • Yahya, Norashikin
  • Alsaih, Khaled
  • Meriaudeau, Fabrice
2019 Conference Paper, cited 0 times
The number of prostate cancer cases is steadily increasing, especially with a rising ageing population. It is reported that the 5-year relative survival rate for men with stage 1 prostate cancer is almost 99%; hence, early detection will significantly improve treatment planning and increase the survival rate. Magnetic resonance imaging (MRI) is a common imaging modality for the diagnosis of prostate cancer. MRI provides good visualization of soft tissue and enables better lesion detection and staging of prostate cancer. The main challenge of prostate whole-gland segmentation is the blurry boundary between the central gland (CG) and the peripheral zone (PZ), which can complicate differential diagnosis, since there is a substantial difference in the occurrence and characteristics of cancer in the two zones. To enhance the diagnosis of the prostate gland, we implemented the DeepLabV3+ semantic segmentation approach to segment the prostate into zones. DeepLabV3+ achieved significant results in segmentation of prostate MRI by applying several parallel atrous convolutions with different rates. The CNN-based semantic segmentation approach is trained and tested on the NCI-ISBI 1.5T and 3T MRI dataset consisting of 40 patients. Performance evaluation based on the Dice similarity coefficient (DSC) of the DeepLab-based segmentation is compared with two other CNN-based semantic segmentation techniques: FCN and PSNet. Results show that prostate segmentation using DeepLabV3+ can perform better than FCN and PSNet, with average DSC of 70.3% in the PZ and 88% in the CG zone. This indicates the significant contribution made by the atrous convolution layers in producing better prostate segmentation results.

3D convolution neural networks for molecular subtype prediction in glioblastoma multiforme

  • Khened, Mahendra
  • Anand, Vikas Kumar
  • Acharya, Gagan
  • Shah, Nameeta
  • Krishnamurthi, Ganapathy
2019 Conference Proceedings, cited 0 times
Website

Prediction of 1p/19q Codeletion in Diffuse Glioma Patients Using Preoperative Multiparametric Magnetic Resonance Imaging

  • Kim, Donnie
  • Wang, Nicholas C
  • Ravikumar, Visweswaran
  • Raghuram, DR
  • Li, Jinju
  • Patel, Ankit
  • Wendt, Richard E
  • Rao, Ganesh
  • Rao, Arvind
Frontiers in computational neuroscience 2019 Journal Article, cited 0 times

Associations between gene expression profiles of invasive breast cancer and Breast Imaging Reporting and Data System MRI lexicon

  • Kim, Ga Ram
  • Ku, You Jin
  • Cho, Soon Gu
  • Kim, Sei Joong
  • Min, Byung Soh
Annals of Surgical Treatment and Research 2017 Journal Article, cited 3 times
Website

Correlation between MR Image-Based Radiomics Features and Risk Scores Associated with Gene Expression Profiles in Breast Cancer

  • Kim, Ga Ram
  • Ku, You Jin
  • Kim, Jun Ho
  • Kim, Eun-Kyung
Journal of the Korean Society of Radiology 2020 Journal Article, cited 0 times
Website

Modification of population based arterial input function to incorporate individual variation

  • Kim, Harrison
Magn Reson Imaging 2018 Journal Article, cited 2 times
Website
This technical note describes how to modify a population-based arterial input function to incorporate variation among individuals. In DCE-MRI, an arterial input function (AIF) is often distorted by the pulsatile inflow effect and noise. A population-based AIF (pAIF) has a high signal-to-noise ratio (SNR) but cannot incorporate individual variation. AIF variation is mainly induced by variation in the cardiac output and blood volume of individuals, which can be detected by the full width at half maximum (FWHM) during the first passage and the amplitude of the AIF, respectively. Thus, a pAIF scaled in time and amplitude to fit the individual AIF may serve as a high-SNR AIF incorporating individual variation. The proposed method was validated using DCE-MRI images of 18 prostate cancer patients. The root mean square error (RMSE) of the pAIF from individual AIFs was 0.88+/-0.48 mM (mean+/-SD), but it was reduced to 0.25+/-0.11 mM after pAIF modification using the proposed method (p<0.0001).
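The scaling described above can be sketched directly: stretch the pAIF time axis by the FWHM ratio, scale its amplitude by the peak ratio, and resample onto the acquisition grid. A minimal sketch (assumed function names; plain linear interpolation stands in for whatever resampling the paper used):

```python
def _interp(x, xs, ys):
    """Piecewise-linear interpolation with constant extrapolation."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            w = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + w * (ys[i] - ys[i - 1])

def rescale_paif(t, paif, s, a):
    """Scale a population AIF in time (factor s, e.g. FWHM_ind / FWHM_pop)
    and in amplitude (factor a, e.g. peak_ind / peak_pop), resampled on t."""
    return [a * _interp(ti / s, t, paif) for ti in t]
```

Fitting s and a to an individual's noisy AIF and then comparing RMSE against the unmodified pAIF would mirror the validation reported in the abstract.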

Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities

  • Kim, Incheol
  • Rajaraman, Sivaramakrishnan
  • Antani, Sameer
Diagnostics (Basel) 2019 Journal Article, cited 0 times
Website
Deep learning (DL) methods are increasingly being applied for developing reliable computer-aided detection (CADe), diagnosis (CADx), and information retrieval algorithms. However, challenges in interpreting and explaining the learned behavior of the DL models hinders their adoption and use in real-world systems. In this study, we propose a novel method called “Class-selective Relevance Mapping” (CRM) for localizing and visualizing discriminative regions of interest (ROI) within a medical image. Such visualizations offer improved explanation of the convolutional neural network (CNN)-based DL model predictions. We demonstrate CRM effectiveness in classifying medical imaging modalities toward automatically labeling them for visual information retrieval applications. The CRM is based on linear sum of incremental mean squared errors (MSE) calculated at the output layer of the CNN model. It measures both positive and negative contributions of each spatial element in the feature maps produced from the last convolution layer leading to correct classification of an input image. A series of experiments on a “multi-modality” CNN model designed for classifying seven different types of image modalities shows that the proposed method is significantly better in detecting and localizing the discriminative ROIs than other state of the art class-activation methods. Further, to visualize its effectiveness we generate “class-specific” ROI maps by averaging the CRM scores of images in each modality class, and characterize the visual explanation through their different size, shape, and location for our multi-modality CNN model that achieved over 98% performance on a dataset constructed from publicly available images.

Training of deep convolutional neural nets to extract radiomic signatures of tumors

  • Kim, J.
  • Seo, S.
  • Ashrafinia, S.
  • Rahmim, A.
  • Sossi, V.
  • Klyuzhin, I.
Journal of Nuclear Medicine 2019 Journal Article, cited 0 times
Website
Objectives: Radiomics-based analysis of FDG PET images has been shown to improve the assessment and prediction of tumor growth rate, response to treatment and other patient outcomes [1]. An alternative new approach to image analysis involves the use of convolutional neural networks (CNNs), wherein relevant image features are learned implicitly and automatically in the process of network training [2]; this is in contrast to radiomics analyses, where the features are “hand-crafted” and are explicitly computed (EC). Although CNNs represent a more general approach, it is not clear whether the implicitly learned features may, or have the ability to include radiomics features (RFs) as a subset. If this is the case, CNN-based approaches may eventually obviate the use of EC RFs. Further, the use of CNNs instead of RFs may completely eliminate the need for feature selection and tumor delineation, enabling high-throughput data analyses. Thus, our objective was to test whether CNNs can learn to act similarly to several commonly used RFs. Using a set of simulated and real FDG PET images of tumors, we train the CNNs to estimate the values of RFs from the images without the explicit computation. We then compare the values of the CNN-estimated and EC features. Methods: Using a stochastic volumetric model for tumor growth, 2000 FDG images of tumors confined to a bounding box (BB) were simulated (40x40x40 voxels, voxel size 2.0 mm), and 10 RFs (3 x morphology, 4 x intensity histogram, 3 x texture features) were computed for each image using the SERA library [3] (compliant with the Image Biomarker Standardization Initiative, IBSI [4]). A 3D CNN with 4 convolutional layers, and a total of 164 filters, was implemented in Python using the Keras library with TensorFlow backend (https://www.keras.io). The mean absolute error was the optimized loss function. 
The CNN was trained to automatically estimate the value of each of the 10 RFs for each image; 1900 images were used for training, and 100 were used for testing, to compare the CNN-estimated values to the EC feature values. We also used a secondary test set comprising 133 real tumor images, obtained from the head and neck PET/CT imaging study [5] publicly available at the Cancer Imaging Archive. The tumors were cropped to a BB, and the images were resampled to yield an image size similar to that of the simulated image set. Results: After the training procedure, on the simulated test set the CNN was able to estimate the values of most EC RFs with 10-20% error (relative to the range). In the morphology group, the errors were 3.8% for volume, 12.0% for compactness, 15.7% for flatness. In the intensity group, the errors were 13.7% for the mean, 15.4% for variance, 12.3% for skewness, and 13.1% for kurtosis. In the texture group, the error was 10.6% for GLCM contrast, 13.4% for cluster tendency, and 21.7% for angular momentum. With all features, the differences between the CNN-estimated and EC feature values were statistically insignificant (two-sample t-test), and the correlation between the feature values was highly significant (p<0.01). On the real image test set, we observed higher error rates, on the order of 20-30%; however, with all but one feature (angular momentum), there was a significant correlation between the CNN-estimated and EC features (p<0.01). Conclusions: Our results suggest that CNNs can be trained to act similarly to several widely used RFs. While the accuracy of CNN-based estimates varied between the features, in general, the CNN showed a good propensity for learning. Thus, it is likely that with more complex network architectures and training data, features can be estimated more accurately.
While a greater number of RFs need to be similarly tested in the future, these initial experiments provide first evidence that, given the sufficient quality and quantity of the training data, the CNNs indeed represent a more general approach to feature extraction, and may potentially replace radiomics-based analyses without compromising the descriptive thoroughness.

Design and evaluation of an accurate CNR-guided small region iterative restoration-based tumor segmentation scheme for PET using both simulated and real heterogeneous tumors

  • Koç, Alpaslan
  • Güveniş, Albert
Med Biol Eng Comput 2020 Journal Article, cited 0 times
Website
Tumor delineation accuracy directly affects the effectiveness of radiotherapy. This study presents a methodology that minimizes potential errors during the automated segmentation of tumors in PET images. Iterative blind deconvolution was implemented in a region of interest encompassing the tumor with the number of iterations determined from contrast-to-noise ratios. The active contour and random forest classification-based segmentation method was evaluated using three distinct image databases that included both synthetic and real heterogeneous tumors. Ground truths about tumor volumes were known precisely. The volumes of the tumors were in the range of 0.49-26.34 cm(3), 0.64-1.52 cm(3), and 40.38-203.84 cm(3) respectively. Widely available software tools, namely, MATLAB, MIPAV, and ITK-SNAP were utilized. When using the active contour method, image restoration reduced mean errors in volumes estimation from 95.85 to 3.37%, from 815.63 to 17.45%, and from 32.61 to 6.80% for the three datasets. The accuracy gains were higher using datasets that include smaller tumors for which PVE is known to be more predominant. Computation time was reduced by a factor of about 10 in the smaller deconvolution region. Contrast-to-noise ratios were improved for all tumors in all data. The presented methodology has the potential to improve delineation accuracy in particular for smaller tumors at practically feasible computational times. Graphical abstract Evaluation of accurate lesion volumes using CNR-guided and ROI-based restoration method for PET images.
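The study above determines the number of deconvolution iterations from contrast-to-noise ratios. A generic sketch of such a CNR-guided stopping rule (the deconvolution step is passed in as a callable, and the plateau tolerance is an assumption, not a value from the paper):

```python
def cnr(roi, background):
    """Contrast-to-noise ratio between ROI and background voxel samples."""
    mean = lambda xs: sum(xs) / len(xs)
    m_roi, m_bkg = mean(roi), mean(background)
    std_bkg = (sum((x - m_bkg) ** 2 for x in background) / len(background)) ** 0.5
    return abs(m_roi - m_bkg) / std_bkg

def iterate_until_cnr_plateau(image, step, split, max_iter=50, tol=1e-3):
    """Apply `step` (one deconvolution iteration) while CNR keeps improving.

    `split` maps an image to (roi_values, background_values).
    Returns (restored_image, iterations_used).
    """
    best = cnr(*split(image))
    for it in range(1, max_iter + 1):
        candidate = step(image)
        score = cnr(*split(candidate))
        if score <= best + tol:
            return image, it - 1  # improvement stalled; keep the previous image
        image, best = candidate, score
    return image, max_iter
```

Restricting `split` to a small region around the tumor also captures the paper's observation that deconvolving a small ROI rather than the whole volume cuts computation time substantially.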

Influence of segmentation margin on machine learning–based high-dimensional quantitative CT texture analysis: a reproducibility study on renal clear cell carcinomas

  • Kocak, Burak
  • Ates, Ece
  • Durmaz, Emine Sebnem
  • Ulusan, Melis Baykara
  • Kilickesmez, Ozgur
European Radiology 2019 Journal Article, cited 0 times
Website

Unenhanced CT Texture Analysis of Clear Cell Renal Cell Carcinomas: A Machine Learning-Based Study for Predicting Histopathologic Nuclear Grade

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Ates, Ece
  • Kaya, Ozlem Korkmaz
  • Kilickesmez, Ozgur
American Journal of Roentgenology 2019 Journal Article, cited 0 times
Website
OBJECTIVE: The purpose of this study is to investigate the predictive performance of machine learning (ML)-based unenhanced CT texture analysis in distinguishing low (grades I and II) and high (grades III and IV) nuclear grade clear cell renal cell carcinomas (RCCs). MATERIALS AND METHODS: For this retrospective study, 81 patients with clear cell RCC (56 high and 25 low nuclear grade) were included from a public database. Using 2D manual segmentation, 744 texture features were extracted from unenhanced CT images. Dimension reduction was done in three consecutive steps: reproducibility analysis by two radiologists, collinearity analysis, and feature selection. Models were created using artificial neural network (ANN) and binary logistic regression, with and without synthetic minority oversampling technique (SMOTE), and were validated using 10-fold cross-validation. The reference standard was histopathologic nuclear grade (low vs high). RESULTS: Dimension reduction steps yielded five texture features for the ANN and six for the logistic regression algorithm. None of clinical variables was selected. ANN alone and ANN with SMOTE correctly classified 81.5% and 70.5%, respectively, of clear cell RCCs, with AUC values of 0.714 and 0.702, respectively. The logistic regression algorithm alone and with SMOTE correctly classified 75.3% and 62.5%, respectively, of the tumors, with AUC values of 0.656 and 0.666, respectively. The ANN performed better than the logistic regression (p < 0.05). No statistically significant difference was present between the model performances created with and without SMOTE (p > 0.05). CONCLUSION: ML-based unenhanced CT texture analysis using ANN can be a promising noninvasive method in predicting the nuclear grade of clear cell RCCs.
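The SMOTE step mentioned above generates synthetic minority-class samples by interpolating between a minority sample and one of its k nearest minority neighbours. A minimal sketch (not the authors' implementation; the seed is only for reproducibility):

```python
import random

def smote(minority, n_new, k=5, seed=0):
    """Synthetic Minority Oversampling TEchnique, minimal sketch.

    minority : list of feature vectors (lists of floats)
    n_new    : number of synthetic samples to generate
    """
    rng = random.Random(seed)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (excluding x itself)
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: dist(x, m))[:k]
        nn = rng.choice(neighbours)
        gap = rng.random()
        synthetic.append([xi + gap * (ni - xi) for xi, ni in zip(x, nn)])
    return synthetic
```

Because each synthetic sample lies on a segment between two real minority samples, the oversampled set stays inside the convex hull of the minority class rather than simply duplicating points.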

Radiogenomics of lower-grade gliomas: machine learning-based MRI texture analysis for predicting 1p/19q codeletion status

  • Kocak, B.
  • Durmaz, E. S.
  • Ates, E.
  • Sel, I.
  • Turgut Gunes, S.
  • Kaya, O. K.
  • Zeynalova, A.
  • Kilickesmez, O.
Eur Radiol 2019 Journal Article, cited 0 times
Website
OBJECTIVE: To evaluate the potential value of the machine learning (ML)-based MRI texture analysis for predicting 1p/19q codeletion status of lower-grade gliomas (LGG), using various state-of-the-art ML algorithms. MATERIALS AND METHODS: For this retrospective study, 107 patients with LGG were included from a public database. Texture features were extracted from conventional T2-weighted and contrast-enhanced T1-weighted MRI images, using LIFEx software. Training and unseen validation splits were created using stratified 10-fold cross-validation technique along with minority over-sampling. Dimension reduction was done using collinearity analysis and feature selection (ReliefF). Classifications were done using adaptive boosting, k-nearest neighbours, naive Bayes, neural network, random forest, stochastic gradient descent, and support vector machine. Friedman test and pairwise post hoc analyses were used for comparison of classification performances based on the area under the curve (AUC). RESULTS: Overall, the predictive performance of the ML algorithms were statistically significantly different, chi2(6) = 26.7, p < 0.001. There was no statistically significant difference among the performance of the neural network, naive Bayes, support vector machine, random forest, and stochastic gradient descent, adjusted p > 0.05. The mean AUC and accuracy values of these five algorithms ranged from 0.769 to 0.869 and from 80.1 to 84%, respectively. The neural network had the highest mean rank with mean AUC and accuracy values of 0.869 and 83.8%, respectively. CONCLUSIONS: The ML-based MRI texture analysis might be a promising non-invasive technique for predicting the 1p/19q codeletion status of LGGs. Using this technique along with various ML algorithms, more than four-fifths of the LGGs can be correctly classified. KEY POINTS: * More than four-fifths of the lower-grade gliomas can be correctly classified with machine learning-based MRI texture analysis. 
* Satisfying classification outcomes are not limited to a single algorithm. * A few-slice-based volumetric segmentation technique would be a valid approach, providing satisfactory predictive textural information and avoiding excessive segmentation duration in clinical practice. * Feature selection is sensitive to different patient data set samples so that each sampling leads to the selection of different feature subsets, which needs to be considered in future works.

Reliability of Single-Slice–Based 2D CT Texture Analysis of Renal Masses: Influence of Intra- and Interobserver Manual Segmentation Variability on Radiomic Feature Reproducibility

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Kaya, Ozlem Korkmaz
  • Ates, Ece
  • Kilickesmez, Ozgur
AJR Am J Roentgenol 2019 Journal Article, cited 0 times
Website
OBJECTIVE. The objective of our study was to investigate the potential influence of intra- and interobserver manual segmentation variability on the reliability of single-slice-based 2D CT texture analysis of renal masses. MATERIALS AND METHODS. For this retrospective study, 30 patients with clear cell renal cell carcinoma were included from a public database. For intra- and interobserver analyses, three radiologists with varying degrees of experience segmented the tumors from unenhanced CT and corticomedullary phase contrast-enhanced CT (CECT) in different sessions. Each radiologist was blind to the image slices selected by other radiologists and him- or herself in the previous session. A total of 744 texture features were extracted from original, filtered, and transformed images. The intraclass correlation coefficient was used for reliability analysis. RESULTS. In the intraobserver analysis, the rates of features with good to excellent reliability were 84.4-92.2% for unenhanced CT and 85.5-93.1% for CECT. Considering the mean rates of unenhanced CT and CECT, having high experience resulted in better reliability rates in terms of the intraobserver analysis. In the interobserver analysis, the rates were 76.7% for unenhanced CT and 84.9% for CECT. The gray-level cooccurrence matrix and first-order feature groups yielded higher good to excellent reliability rates on both unenhanced CT and CECT. Filtered and transformed images resulted in more features with good to excellent reliability than the original images did on both unenhanced CT and CECT. CONCLUSION. Single-slice-based 2D CT texture analysis of renal masses is sensitive to intra- and interobserver manual segmentation variability. Therefore, it may lead to nonreproducible results in radiomic analysis unless a reliability analysis is considered in the workflow.
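Reliability analyses such as this rest on the intraclass correlation coefficient. As an illustration, a one-way random-effects ICC(1,1) in pure Python; the paper does not state which ICC form it used, so this particular variant is an assumption:

```python
def icc_1_1(ratings):
    """One-way random-effects ICC(1,1).

    ratings : list of rows, one per target (e.g. per lesion),
              each row holding the k observers' values for one feature.
    """
    n = len(ratings)          # number of targets
    k = len(ratings[0])       # number of observers
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # between-target and within-target mean squares
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Running such a coefficient per texture feature, then counting features above the "good" and "excellent" cutoffs, reproduces the kind of rates the abstract reports.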

Machine learning-based unenhanced CT texture analysis for predicting BAP1 mutation status of clear cell renal cell carcinomas

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Kaya, Ozlem Korkmaz
  • Kilickesmez, Ozgur
Acta Radiol 2019 Journal Article, cited 0 times
Website
BACKGROUND: BRCA1-associated protein 1 (BAP1) mutation is an unfavorable factor for overall survival in patients with clear cell renal cell carcinoma (ccRCC). Radiomics literature about BAP1 mutation lacks papers that consider the reliability of texture features in their workflow. PURPOSE: Using texture features with a high inter-observer agreement, we aimed to develop and internally validate a machine learning-based radiomic model for predicting the BAP1 mutation status of ccRCCs. MATERIALS AND METHODS: For this retrospective study, 65 ccRCCs were included from a public database. Texture features were extracted from unenhanced computed tomography (CT) images, using two-dimensional manual segmentation. Dimension reduction was done in three steps: (i) inter-observer agreement analysis; (ii) collinearity analysis; and (iii) feature selection. The machine learning classifier was random forest. The model was validated using 10-fold nested cross-validation. The reference standard was the BAP1 mutation status. RESULTS: Out of 744 features, 468 had an excellent inter-observer agreement. After the collinearity analysis, the number of features decreased to 17. Finally, the wrapper-based algorithm selected six features. Using selected features, the random forest correctly classified 84.6% of the labelled slices regarding BAP1 mutation status with an area under the receiver operating characteristic curve of 0.897. For predicting ccRCCs with BAP1 mutation, the sensitivity, specificity, and precision were 90.4%, 78.8%, and 81%, respectively. For predicting ccRCCs without BAP1 mutation, the sensitivity, specificity, and precision were 78.8%, 90.4%, and 89.1%, respectively. CONCLUSION: Machine learning-based unenhanced CT texture analysis might be a potential method for predicting the BAP1 mutation status of ccRCCs.

Creation and curation of the society of imaging informatics in Medicine Hackathon Dataset

  • Kohli, Marc
  • Morrison, James J
  • Wawira, Judy
  • Morgan, Matthew B
  • Hostetter, Jason
  • Genereaux, Brad
  • Hussain, Mohannad
  • Langer, Steve G
Journal of Digital Imaging 2018 Journal Article, cited 4 times
Website

Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy

  • Koike, Yuhei
  • Akino, Yuichi
  • Sumida, Iori
  • Shiomi, Hiroya
  • Mizuno, Hirokazu
  • Yagi, Masashi
  • Isohashi, Fumiaki
  • Seo, Yuji
  • Suzuki, Osamu
  • Ogawa, Kazuhiko
J Radiat Res 2019 Journal Article, cited 0 times
Website
The aim of this work is to generate synthetic computed tomography (sCT) images from multi-sequence magnetic resonance (MR) images using an adversarial network and to assess the feasibility of sCT-based treatment planning for brain radiotherapy. Datasets for 15 patients with glioblastoma were selected and 580 pairs of CT and MR images were used. T1-weighted, T2-weighted and fluid-attenuated inversion recovery MR sequences were combined to create a three-channel image as input data. A conditional generative adversarial network (cGAN) was trained using image patches. The image quality was evaluated using voxel-wise mean absolute errors (MAEs) of the CT number. For the dosimetric evaluation, 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans were generated using the original CT set and recalculated using the sCT images. The isocenter dose and dose-volume parameters were compared for 3D-CRT and VMAT plans, respectively. The equivalent path length was also compared. The mean MAEs for the whole body, soft tissue and bone region were 108.1 +/- 24.0, 38.9 +/- 10.7 and 366.2 +/- 62.0 Hounsfield units, respectively. The dosimetric evaluation revealed no significant difference in the isocenter dose for 3D-CRT plans. The differences in the dose received by 2% of the volume (D2%), D50% and D98% relative to the prescribed dose were <1.0%. The overall equivalent path length was shorter than that for real CT by 0.6 +/- 1.9 mm. A treatment planning study using generated sCT detected only small, clinically negligible differences. These findings demonstrated the feasibility of generating sCT images for MR-only radiotherapy from multi-sequence MR images using cGAN.

Investigating the role of model-based and model-free imaging biomarkers as early predictors of neoadjuvant breast cancer therapy outcome

  • Kontopodis, Eleftherios
  • Venianaki, Maria
  • Manikis, George C
  • Nikiforaki, Katerina
  • Salvetti, Ovidio
  • Papadaki, Efrosini
  • Papadakis, Georgios Z
  • Karantanas, Apostolos H
  • Marias, Kostas
IEEE J Biomed Health Inform 2019 Journal Article, cited 0 times
Website
Imaging biomarkers (IBs) play a critical role in the clinical management of breast cancer (BRCA) patients throughout the cancer continuum for screening, diagnosis and therapy assessment especially in the neoadjuvant setting. However, certain model-based IBs suffer from significant variability due to the complex workflows involved in their computation, whereas model-free IBs have not been properly studied regarding clinical outcome. In the present study, IBs from 35 BRCA patients who received neoadjuvant chemotherapy (NAC) were extracted from dynamic contrast enhanced MR imaging (DCE-MRI) data with two different approaches, a model-free approach based on pattern recognition (PR), and a model-based one using pharmacokinetic compartmental modeling. Our analysis found that both model-free and model-based biomarkers can predict pathological complete response (pCR) after the first cycle of NAC. Overall, 8 biomarkers predicted the treatment response after the first cycle of NAC, with statistical significance (p-value<0.05), and 3 at the baseline. The best pCR predictors at first follow-up, achieving high AUC and sensitivity and specificity more than 50%, were the hypoxic component with threshold2 (AUC 90.4%) from the PR method, and the median value of kep (AUC 73.4%) from the model-based approach. Moreover, the 80th percentile of ve achieved the highest pCR prediction at baseline with AUC 78.5%. The results suggest that model-free DCE-MRI IBs could be a more robust alternative to complex, model-based ones such as kep and favor the hypothesis that the PR image-derived hypoxic image component captures actual tumor hypoxia information able to predict BRCA NAC outcome.

Automated Segmentation of Hyperintense Regions in FLAIR MRI Using Deep Learning

  • Korfiatis, Panagiotis
  • Kline, Timothy L
  • Erickson, Bradley J
Tomography: a journal for imaging research 2016 Journal Article, cited 16 times
Website

The quest for 'diagnostically lossless' medical image compression: a comparative study of objective quality metrics for compressed medical images

  • Kowalik-Urbaniak, Ilona
  • Brunet, Dominique
  • Wang, Jiheng
  • Koff, David
  • Smolarski-Koff, Nadine
  • Vrscay, Edward R
  • Wallace, Bill
  • Wang, Zhou
2014 Conference Proceedings, cited 0 times

Usefulness of gradient tree boosting for predicting histological subtype and EGFR mutation status of non-small cell lung cancer on (18)F FDG-PET/CT

  • Koyasu, S.
  • Nishio, M.
  • Isoda, H.
  • Nakamoto, Y.
  • Togashi, K.
Ann Nucl Med 2020 Journal Article, cited 3 times
Website
OBJECTIVE: To develop and evaluate a radiomics approach for classifying histological subtypes and epidermal growth factor receptor (EGFR) mutation status in lung cancer on PET/CT images. METHODS: PET/CT images of lung cancer patients were obtained from public databases and used to establish two datasets, respectively to classify histological subtypes (156 adenocarcinomas and 32 squamous cell carcinomas) and EGFR mutation status (38 mutant and 100 wild-type samples). Seven types of imaging features were obtained from PET/CT images of lung cancer. Two types of machine learning algorithms were used to predict histological subtypes and EGFR mutation status: random forest (RF) and gradient tree boosting (XGB). The classifiers used either a single type or multiple types of imaging features. In the latter case, the optimal combination of the seven types of imaging features was selected by Bayesian optimization. Receiver operating characteristic analysis, area under the curve (AUC), and tenfold cross validation were used to assess the performance of the approach. RESULTS: In the classification of histological subtypes, the AUC values of the various classifiers were as follows: RF, single type: 0.759; XGB, single type: 0.760; RF, multiple types: 0.720; XGB, multiple types: 0.843. In the classification of EGFR mutation status, the AUC values were: RF, single type: 0.625; XGB, single type: 0.617; RF, multiple types: 0.577; XGB, multiple types: 0.659. CONCLUSIONS: The radiomics approach to PET/CT images, together with XGB and Bayesian optimization, is useful for classifying histological subtypes and EGFR mutation status in lung cancer.

Lupsix: A Cascade Framework for Lung Parenchyma Segmentation in Axial CT Images

  • Koyuncu, Hasan
International Journal of Intelligent Systems and Applications in Engineering 2018 Journal Article, cited 0 times
Website

An Efficient Pipeline for Abdomen Segmentation in CT Images

  • Koyuncu, H.
  • Ceylan, R.
  • Sivri, M.
  • Erdogan, H.
J Digit Imaging 2018 Journal Article, cited 4 times
Website
Computed tomography (CT) scans usually include some disadvantages due to the nature of the imaging procedure, and these handicaps prevent accurate abdomen segmentation. Discontinuous abdomen edges, the bed section of the CT, patient information, closeness between the edges of the abdomen and the CT, poor contrast, and a narrow histogram can be regarded as the most important handicaps that occur in abdominal CT scans. Currently, one or more handicaps can arise and prevent technicians from obtaining abdomen images through simple segmentation techniques. In other words, CT scans can include the bed section of the CT, a patient's diagnostic information, low-quality abdomen edges, low-level contrast, and a narrow histogram, all in one scan. These phenomena constitute a challenge, and an efficient pipeline that is unaffected by these handicaps is required. In addition, analyses such as segmentation, feature selection, and classification are meaningful for a real-time diagnosis system in cases where the abdomen section is directly used with a specific size. A statistical pipeline is designed in this study that is unaffected by the handicaps mentioned above. Intensity-based approaches, morphological processes, and histogram-based procedures are utilized to design an efficient structure. Performance evaluation is realized in experiments on 58 CT images (16 training, 16 test, and 26 validation) that include the abdomen and one or more disadvantage(s). The first part of the data (16 training images) is used to detect the pipeline's optimum parameters, while the second and third parts are utilized to evaluate and confirm the segmentation performance. The segmentation results are presented as the means of six performance metrics. Thus, the proposed method achieves remarkable average rates for training/test/validation of 98.95/99.36/99.57% (Jaccard), 99.47/99.67/99.79% (Dice), 100/99.91/99.91% (sensitivity), 98.47/99.23/99.85% (specificity), 99.38/99.63/99.87% (classification accuracy), and 98.98/99.45/99.66% (precision). In summary, a statistical pipeline performing the task of abdomen segmentation is achieved that is not affected by these disadvantages, and the most detailed abdomen segmentation study is performed for use before organ and tumor segmentation, feature extraction, and classification.
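The intensity-based and morphological stages of such an abdomen-masking pipeline can be sketched as follows. This is a minimal illustration on a synthetic slice, not the paper's tuned pipeline: the HU threshold, structuring element, and the idea of keeping the largest connected component (to drop the bed section) are all assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_abdomen(ct_slice, threshold=-300):
    """Rough abdomen mask: threshold, close gaps, fill holes,
    then keep only the largest connected component."""
    # Intensity-based step: separate tissue from air background.
    mask = ct_slice > threshold
    # Morphological steps: bridge discontinuous edges, fill the interior.
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))
    mask = ndimage.binary_fill_holes(mask)
    # Keep the largest component, which discards bed/table fragments.
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# Synthetic slice: air background (-1000 HU), a bright disc standing in
# for the abdomen, and a thin bright strip standing in for the bed.
img = np.full((128, 128), -1000.0)
yy, xx = np.ogrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2] = 40.0   # "abdomen"
img[120:123, :] = 100.0                                  # "bed section"
mask = segment_abdomen(img)
```

On this toy input the disc is retained while the bed strip, being a smaller disconnected component, is removed.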

Impact of internal target volume definition for pencil beam scanned proton treatment planning in the presence of respiratory motion variability for lung cancer: A proof of concept

  • Krieger, Miriam
  • Giger, Alina
  • Salomir, Rares
  • Bieri, Oliver
  • Celicanin, Zarko
  • Cattin, Philippe C
  • Lomax, Antony J
  • Weber, Damien C
  • Zhang, Ye
Radiotherapy and Oncology 2020 Journal Article, cited 0 times
Website

Medical (CT) image generation with style

  • Krishna, Arjun
  • Mueller, Klaus
2019 Conference Proceedings, cited 0 times

Performance Analysis of Denoising in MR Images with Double Density Dual Tree Complex Wavelets, Curvelets and NonSubsampled Contourlet Transforms

  • Krishnakumar, V
  • Parthiban, Latha
Annual Review & Research in Biology 2014 Journal Article, cited 0 times

Three-dimensional lung nodule segmentation and shape variance analysis to detect lung cancer with reduced false positives

  • Krishnamurthy, Senthilkumar
  • Narasimhan, Ganesh
  • Rengasamy, Umamaheswari
Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 2016 Journal Article, cited 17 times
Website

A Level Set Evolution Morphology Based Segmentation of Lung Nodules and False Nodule Elimination by 3D Centroid Shift and Frequency Domain DC Constant Analysis

  • Krishnamurthy, Senthilkumar
  • Narasimhan, Ganesh
  • Rengasamy, Umamaheswari
International Journal of u- and e- Service, Science and Technology 2016 Journal Article, cited 0 times
Website

Analysis of CT DICOM Image Segmentation for Abnormality Detection

  • Kulkarni, Rashmi
  • Bhavani, K.
International Journal of Engineering and Manufacturing 2019 Journal Article, cited 0 times
Website
Cancer is a menacing disease, and great care is required in its diagnosis. The CT modality is most often used in cancer therapy. Image processing techniques [1] can help doctors diagnose more easily and accurately. Image pre-processing [2] and segmentation methods [3] are used to extract cancerous nodules from CT images. Much research has been done on the segmentation of CT images with different algorithms, but none has reached 100% accuracy. This work proposes a model for analysing CT image segmentation with and without filtered images, and brings out the importance of pre-processing CT images.
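The filtered-versus-unfiltered comparison this entry describes can be illustrated with a minimal sketch. The choice of a 3×3 median pre-filter, Otsu thresholding, and the synthetic noisy "nodule" image are all assumptions for illustration, not the paper's actual methods or data.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(image, nbins=256):
    """Otsu's method: pick the threshold maximising between-class variance."""
    hist, edges = np.histogram(image, bins=nbins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                      # class-0 weight per cut
    w1 = 1 - w0                               # class-1 weight per cut
    cum_mean = np.cumsum(hist * centers)
    mu0 = cum_mean / np.maximum(w0, 1e-12)
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

def segment(image, pre_filter=True):
    """Threshold segmentation, optionally preceded by median filtering."""
    if pre_filter:
        image = ndimage.median_filter(image, size=3)
    return image > otsu_threshold(image)

# Synthetic "nodule": a bright disc on a dark background plus salt noise.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
yy, xx = np.ogrid[:64, :64]
truth = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
img[truth] = 1.0
img[rng.random((64, 64)) < 0.02] = 1.0        # salt noise
err_raw = np.logical_xor(segment(img, pre_filter=False), truth).sum()
err_filt = np.logical_xor(segment(img, pre_filter=True), truth).sum()
```

The median pre-filter removes isolated noise pixels before thresholding, so `err_filt` comes out smaller than `err_raw` on this toy image, which is the kind of effect the paper's comparison is after.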

Content-Based Medical Image Retrieval: A Survey of Applications to Multidimensional and Multimodality Data

  • Kumar, Ashnil
  • Kim, Jinman
  • Cai, Weidong
  • Fulham, Michael
  • Feng, Dagan
Journal of Digital Imaging 2013 Journal Article, cited 109 times
Website

A Visual Analytics Approach using the Exploration of Multi-Dimensional Feature Spaces for Content-based Medical Image Retrieval

  • Kumar, Ajit
  • Nette, Falk
  • Klein, Krystal
  • Fulham, Michael
  • Kim, Jung-Ho
2014 Journal Article, cited 13 times
Website

Discovery radiomics for pathologically-proven computed tomography lung cancer prediction

  • Kumar, Devinder
  • Chung, Audrey G
  • Shaifee, Mohammad J
  • Khalvati, Farzad
  • Haider, Masoom A
  • Wong, Alexander
2017 Conference Proceedings, cited 30 times
Website

Medical image segmentation using modified fuzzy c mean based clustering

  • Kumar, Dharmendra
  • Solanki, Anil Kumar
  • Ahlawat, Anil
  • Malhotra, Sukhnandan
2020 Conference Proceedings, cited 0 times
Website
Locating a disease area in medical images is one of the most challenging tasks in the field of image segmentation. This paper presents a new approach to image segmentation using modified fuzzy c-means (MFCM) clustering. Considering low-illumination medical images, the input image is first enhanced using the histogram equalization (HE) technique. The enhanced image is then segmented into various regions using the MFCM-based approach. Local information is employed in the objective function of MFCM to overcome the issue of noise sensitivity. After that, membership partitioning is improved by fast membership filtering. The results of the proposed scheme are found suitable in terms of various evaluation parameters in experimentation.
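For orientation, the standard fuzzy c-means algorithm that MFCM modifies can be sketched on a 1-D intensity vector. This is plain FCM, not the paper's modified objective (which adds a local-neighbourhood term and membership filtering), and the two-intensity synthetic data is an assumption.

```python
import numpy as np

def fuzzy_c_means(pixels, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means on a 1-D intensity vector.

    Alternates between updating cluster centers (membership-weighted
    means) and updating memberships from inverse distances."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, pixels.size))
    u /= u.sum(axis=0)                          # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m                             # fuzzified memberships
        centers = (um @ pixels) / um.sum(axis=1)
        dist = np.abs(pixels[None, :] - centers[:, None]) + 1e-12
        u = dist ** (-2.0 / (m - 1))            # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# Two-intensity synthetic image: dark tissue (~0.2), bright lesion (~0.8).
rng = np.random.default_rng(1)
pixels = np.concatenate([
    rng.normal(0.2, 0.02, 500),
    rng.normal(0.8, 0.02, 500),
])
centers, u = fuzzy_c_means(pixels)
labels = np.argmax(u, axis=0)                   # hard labels from memberships
```

The MFCM variant described in the abstract would add a spatial penalty to the objective so that a noisy pixel is pulled toward the cluster of its neighbours.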

Lung Nodule Classification Using Deep Features in CT Images

  • Kumar, Devinder
  • Wong, Alexander
  • Clausi, David A
2015 Conference Proceedings, cited 114 times
Website

Computer-Aided Diagnosis of Life-Threatening Diseases

  • Kumar, Pramod
  • Ambekar, Sameer
  • Roy, Subarna
  • Kunchur, Pavan
2019 Book Section, cited 0 times
According to WHO, the incidence of life-threatening diseases like cancer, diabetes, and Alzheimer’s disease is escalating globally. In the past few decades, traditional methods have been used to diagnose such diseases. These traditional methods often have limitations such as lack of accuracy, expense, and time-consuming procedures. Computer-aided diagnosis (CAD) aims to overcome these limitations by personalizing healthcare issues. Machine learning is a promising CAD method, offering effective solutions for these diseases. It is being used for early detection of cancer, diabetic retinopathy, and Alzheimer’s disease, and also to identify diseases in plants. Machine learning can increase efficiency, making the process more cost effective, with quicker delivery of results. There are several CAD algorithms (ANN, SVM, etc.) that can be used to train on the disease dataset and eventually make significant predictions. It has also been shown that CAD algorithms have the potential for diagnosis and early detection of life-threatening diseases.

Human Ether-a-Go-Go-Related-1 Gene (hERG) K+ Channel as a Prognostic Marker and Therapeutic Target for Glioblastoma

  • Kuo, John S.
  • Pointer, Kelli Briana
  • Clark, Paul A.
  • Robertson, Gail
Neurosurgery 2015 Journal Article, cited 0 times
Website

Combining Generative Models for Multifocal Glioma Segmentation and Registration

  • Kwon, Dongjin
  • Shinohara, Russell T
  • Akbari, Hamed
  • Davatzikos, Christos
2014 Book Section, cited 55 times
Website
In this paper, we propose a new method for simultaneously segmenting brain scans of glioma patients and registering these scans to a normal atlas. Performing joint segmentation and registration for brain tumors is very challenging when tumors include multifocal masses and have complex shapes with heterogeneous textures. Our approach grows tumors for each mass from multiple seed points using a tumor growth model and modifies a normal atlas into one with tumors and edema using the combined results of grown tumors. We also generate a tumor shape prior via the random walk with restart, utilizing multiple tumor seeds as initial foreground information. We then incorporate this shape prior into an EM framework which estimates the mapping between the modified atlas and the scans, posteriors for each tissue label, and the tumor growth model parameters. We apply our method to the BRATS 2013 leaderboard dataset to evaluate segmentation performance. Our method shows the best performance among all participants.
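The random walk with restart used here to turn seed points into a shape prior is a generic graph algorithm and can be sketched briefly. The toy chain graph, restart probability, and iteration count below are illustrative assumptions; in the paper the graph would be an image lattice with tumour seeds.

```python
import numpy as np

def random_walk_with_restart(A, seeds, restart=0.3, iters=100):
    """Random walk with restart: iterate p = (1-c) W p + c e, where W is
    the column-normalised adjacency matrix and e restarts at the seeds."""
    W = A / A.sum(axis=0, keepdims=True)       # column-stochastic transitions
    e = np.zeros(A.shape[0])
    e[seeds] = 1.0 / len(seeds)                # restart distribution on seeds
    p = e.copy()
    for _ in range(iters):
        p = (1 - restart) * W @ p + restart * e
    return p

# Toy chain graph 0-1-2-3-4 with the seed at node 0.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
p = random_walk_with_restart(A, seeds=[0])
```

The stationary vector `p` is a proper distribution whose mass decays with distance from the seed, which is what makes it usable as a soft foreground (shape) prior.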

Acute Tumor Transition Angle on Computed Tomography Predicts Chromosomal Instability Status of Primary Gastric Cancer: Radiogenomics Analysis from TCGA and Independent Validation

  • Lai, Ying-Chieh
  • Yeh, Ta-Sen
  • Wu, Ren-Chin
  • Tsai, Cheng-Kun
  • Yang, Lan-Yan
  • Lin, Gigin
  • Kuo, Michael D
Cancers 2019 Journal Article, cited 0 times
Website
Chromosomal instability (CIN) of gastric cancer is correlated with distinct outcomes. This study aimed to investigate the role of computed tomography (CT) imaging traits in predicting the CIN status of gastric cancer. We screened 443 patients in the Cancer Genome Atlas gastric cancer cohort to filter 40 patients with complete CT imaging and genomic data as the training cohort. CT imaging traits were subjected to logistic regression to select independent predictors for the CIN status. For the validation cohort, we prospectively enrolled 18 gastric cancer patients for CT and tumor genomic analysis. The imaging predictors were tested in the validation cohort using receiver operating characteristic (ROC) curve analysis. Thirty patients (75%) in the training cohort and 9 patients (50%) in the validation cohort had CIN subtype gastric cancers. Smaller tumor diameter (p = 0.017) and acute tumor transition angle (p = 0.045) independently predicted CIN status in the training cohort. In the validation cohort, acute tumor transition angle demonstrated the highest accuracy, sensitivity, and specificity of 88.9%, 88.9%, and 88.9%, respectively, and an area under the ROC curve of 0.89. In conclusion, this pilot study showed that acute tumor transition angle on CT images may predict the CIN status of gastric cancer.
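The ROC analysis of a single continuous imaging predictor, as performed in the validation cohort, can be sketched with the rank (Mann–Whitney) formula for AUC. The angle values below are synthetic and only illustrate the direction of the association (smaller transition angle predicting CIN); they are not the study's data.

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC of a continuous score via the Mann-Whitney U statistic:
    the probability a random positive outranks a random negative."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical transition angles: 9 CIN tumours (acute, ~60 deg) and
# 9 non-CIN tumours (obtuse, ~110 deg), mirroring the cohort sizes.
rng = np.random.default_rng(2)
angles = np.concatenate([rng.normal(60, 10, 9), rng.normal(110, 10, 9)])
labels = np.concatenate([np.ones(9), np.zeros(9)])
auc = roc_auc(-angles, labels)   # negate: smaller angle = higher CIN score
```

With well-separated synthetic groups the AUC is close to 1; on real data it would reflect the overlap between the two angle distributions.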

Textural Analysis of Tumour Imaging: A Radiomics Approach

  • Lambrecht, Joren
2017 Thesis, cited 0 times
Website

A simple texture feature for retrieval of medical images

  • Lan, Rushi
  • Zhong, Si
  • Liu, Zhenbing
  • Shi, Zhuo
  • Luo, Xiaonan
Multimedia Tools and Applications 2017 Journal Article, cited 2 times
Website

Collaborative and Reproducible Research: Goals, Challenges, and Strategies

  • Langer, S. G.
  • Shih, G.
  • Nagy, P.
  • Landman, B. A.
J Digit Imaging 2018 Journal Article, cited 1 times
Website