Two-phase multi-model automatic brain tumour diagnosis system from magnetic resonance images using convolutional neural networks

  • Abd-Ellah, Mahmoud Khaled
  • Awad, Ali Ismail
  • Khalaf, Ashraf AM
  • Hamed, Hesham FA
EURASIP Journal on Image and Video Processing 2018 Journal Article, cited 0 times
Website

Detection of Lung Nodules on Medical Images by the Use of Fractal Segmentation

  • Abdollahzadeh Rezaie, Afsaneh
  • Habiboghli, Ali
International Journal of Interactive Multimedia and Artificial Intelligence 2017 Journal Article, cited 0 times
Website

Robust Computer-Aided Detection of Pulmonary Nodules from Chest Computed Tomography

  • Abduh, Zaid
  • Wahed, Manal Abdel
  • Kadah, Yasser M
Journal of Medical Imaging and Health Informatics 2016 Journal Article, cited 5 times
Website

A generalized framework for medical image classification and recognition

  • Abedini, M
  • Codella, NCF
  • Connell, JH
  • Garnavi, R
  • Merler, M
  • Pankanti, S
  • Smith, JR
  • Syeda-Mahmood, T
IBM Journal of Research and Development 2015 Journal Article, cited 19 times
Website

Computer-aided diagnosis of clinically significant prostate cancer from MRI images using sparse autoencoder and random forest classifier

  • Abraham, Bejoy
  • Nair, Madhu S
Biocybernetics and Biomedical Engineering 2018 Journal Article, cited 0 times
Website

Computer-aided classification of prostate cancer grade groups from MRI images using texture features and stacked sparse autoencoder

  • Abraham, Bejoy
  • Nair, Madhu S
Computerized Medical Imaging and Graphics 2018 Journal Article, cited 1 times
Website

Automated grading of prostate cancer using convolutional neural network and ordinal class classifier

  • Abraham, Bejoy
  • Nair, Madhu S.
Informatics in Medicine Unlocked 2019 Journal Article, cited 0 times
Website
Prostate Cancer (PCa) is one of the most prominent cancers among men. Early diagnosis and treatment planning are significant in reducing the mortality rate due to PCa. Accurate prediction of grade is required to ensure prompt treatment for cancer. Grading of prostate cancer can be considered an ordinal class classification problem. This paper presents a novel method for the grading of prostate cancer from multiparametric magnetic resonance images using the VGG-16 Convolutional Neural Network and an Ordinal Class Classifier with J48 as the base classifier. Multiparametric magnetic resonance images of the PROSTATEx-2 2017 grand challenge dataset are employed for this work. The method achieved a moderate quadratic weighted kappa score of 0.4727 in the grading of PCa into 5 grade groups, which is higher than state-of-the-art methods. The method also achieved a positive predictive value of 0.9079 in predicting clinically significant prostate cancer.
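
For reference, the quadratic weighted kappa reported above can be computed as in the following minimal sketch using scikit-learn; the grade-group labels are invented for illustration and are not data from the paper.

```python
# Sketch: quadratic weighted kappa for ordinal grade-group predictions.
# The labels below are illustrative only, not data from the study.
from sklearn.metrics import cohen_kappa_score

y_true = [1, 2, 2, 3, 5, 4, 1, 3, 2, 5]   # reference grade groups (1-5)
y_pred = [1, 2, 3, 3, 4, 4, 2, 3, 2, 5]   # model predictions

kappa = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"quadratic weighted kappa = {kappa:.4f}")
```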

Adaptive Enhancement Technique for Cancerous Lung Nodule in Computed Tomography Images

  • AbuBaker, Ayman A
International Journal of Engineering and Technology 2016 Journal Article, cited 1 times
Website

Automated lung tumor detection and diagnosis in CT Scans using texture feature analysis and SVM

  • Adams, Tim
  • Dörpinghaus, Jens
  • Jacobs, Marc
  • Steinhage, Volker
Communication Papers of the Federated Conference on Computer Science and Information Systems 2018 Journal Article, cited 0 times
Website

Defining a Radiomic Response Phenotype: A Pilot Study using targeted therapy in NSCLC

  • Aerts, Hugo JWL
  • Grossmann, Patrick
  • Tan, Yongqiang
  • Oxnard, Geoffrey G
  • Rizvi, Naiyer
  • Schwartz, Lawrence H
  • Zhao, Binsheng
Scientific Reports 2016 Journal Article, cited 40 times
Website

Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach

  • Aerts, H. J.
  • Velazquez, E. R.
  • Leijenaar, R. T.
  • Parmar, C.
  • Grossmann, P.
  • Cavalho, S.
  • Bussink, J.
  • Monshouwer, R.
  • Haibe-Kains, B.
  • Rietveld, D.
  • Hoebers, F.
  • Rietbergen, M. M.
  • Leemans, C. R.
  • Dekker, A.
  • Quackenbush, J.
  • Gillies, R. J.
  • Lambin, P.
Nature Communications 2014 Journal Article, cited 1029 times
Website
Human cancers exhibit strong phenotypic differences that can be visualized noninvasively by medical imaging. Radiomics refers to the comprehensive quantification of tumour phenotypes by applying a large number of quantitative image features. Here we present a radiomic analysis of 440 features quantifying tumour image intensity, shape and texture, which are extracted from computed tomography data of 1,019 patients with lung or head-and-neck cancer. We find that a large number of radiomic features have prognostic power in independent data sets of lung and head-and-neck cancer patients, many of which were not identified as significant before. Radiogenomics analysis reveals that a prognostic radiomic signature, capturing intratumour heterogeneity, is associated with underlying gene-expression patterns. These data suggest that radiomics identifies a general prognostic phenotype existing in both lung and head-and-neck cancer. This may have a clinical impact as imaging is routinely used in clinical practice, providing an unprecedented opportunity to improve decision-support in cancer treatment at low cost.
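
To make the notion of quantitative image features concrete, the sketch below computes a few first-order intensity features over a tumour mask with NumPy; it is a toy illustration only, not the 440-feature radiomics pipeline used in the study, and all arrays are synthetic.

```python
# Sketch: a few first-order "radiomic" intensity features from a masked region.
# Toy stand-in for the much larger feature set described in the paper.
import numpy as np

def first_order_features(image: np.ndarray, mask: np.ndarray, bins: int = 64) -> dict:
    vals = image[mask.astype(bool)]
    hist, _ = np.histogram(vals, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "energy": float(np.sum(vals.astype(np.float64) ** 2)),
        "entropy": float(-np.sum(p * np.log2(p))),
    }

rng = np.random.default_rng(2)
ct = rng.normal(0, 30, size=(64, 64, 64))            # stand-in HU values
tumour = np.zeros_like(ct, dtype=bool)
tumour[24:40, 24:40, 24:40] = True
print(first_order_features(ct, tumour))
```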

Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach

  • Aerts, H. J.
  • Velazquez, E. R.
  • Leijenaar, R. T.
  • Parmar, C.
  • Grossmann, P.
  • Cavalho, S.
  • Bussink, J.
  • Monshouwer, R.
  • Haibe-Kains, B.
  • Rietveld, D.
  • Hoebers, F.
  • Rietbergen, M. M.
  • Leemans, C. R.
  • Dekker, A.
  • Quackenbush, J.
  • Gillies, R. J.
  • Lambin, P.
2014 Dataset, cited 1029 times
Website

An Augmentation in the Diagnostic Potency of Breast Cancer through A Deep Learning Cloud-Based AI Framework to Compute Tumor Malignancy & Risk

  • Agarwal, O
International Research Journal of Innovations in Engineering and Technology (IRJIET) 2019 Journal Article, cited 0 times
Website
This research project focuses on developing a web-based multi-platform solution for augmenting prognostic strategies to diagnose breast cancer (BC) from a variety of different tests, including histology, mammography, cytopathology, and fine-needle aspiration cytology, all in an automated fashion. The application utilizes tensor-based data representations and deep learning architectures to produce optimized models for the prediction of novel instances against each of these medical tests. The system has been designed so that all of its computation can be integrated seamlessly into a clinical setting, without disrupting a clinician’s productivity or workflow, but rather enhancing their capabilities. This software can make the diagnostic process automated, standardized, faster, and even more accurate than current benchmarks achieved by both pathologists and radiologists, which makes it invaluable from a clinical standpoint for making well-informed diagnostic decisions with nominal resources.

Automatic mass detection in mammograms using deep convolutional neural networks

  • Agarwal, Richa
  • Diaz, Oliver
  • Lladó, Xavier
  • Yap, Moi Hoon
  • Martí, Robert
Journal of Medical Imaging 2019 Journal Article, cited 0 times
Website
With recent advances in the field of deep learning, the use of convolutional neural networks (CNNs) in medical imaging has become very encouraging. The aim of our paper is to propose a patch-based CNN method for automated mass detection in full-field digital mammograms (FFDM). In addition to evaluating CNNs pretrained with the ImageNet dataset, we investigate the use of transfer learning for a particular domain adaptation. First, the CNN is trained using a large public database of digitized mammograms (CBIS-DDSM dataset), and then the model is transferred and tested onto the smaller database of digital mammograms (INbreast dataset). We evaluate three widely used CNNs (VGG16, ResNet50, InceptionV3) and show that the InceptionV3 obtains the best performance for classifying the mass and nonmass breast region for CBIS-DDSM. We further show the benefit of domain adaptation between the CBIS-DDSM (digitized) and INbreast (digital) datasets using the InceptionV3 CNN. Mass detection evaluation follows a fivefold cross-validation strategy using free-response operating characteristic curves. Results show that the transfer learning from CBIS-DDSM obtains a substantially higher performance with the best true positive rate (TPR) of 0.98 ± 0.02 at 1.67 false positives per image (FPI), compared with transfer learning from ImageNet with TPR of 0.91 ± 0.07 at 2.1 FPI. In addition, the proposed framework improves upon mass detection results described in the literature on the INbreast database, in terms of both TPR and FPI.
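
A rough sketch of the kind of patch-level transfer learning described above, using an ImageNet-pretrained InceptionV3 from Keras; the input size, classifier head, and hyperparameters are assumptions for illustration and do not reproduce the authors' configuration, and `train_ds`/`val_ds` are hypothetical data pipelines.

```python
# Sketch: fine-tuning an ImageNet-pretrained InceptionV3 as a mass / non-mass
# patch classifier. Patch size and hyperparameters are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3), pooling="avg")
base.trainable = True  # allow domain adaptation to mammography patches

model = models.Sequential([
    base,
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # mass vs. non-mass
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# train_ds / val_ds are hypothetical tf.data pipelines of labelled patches:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```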

Patient-Wise Versus Nodule-Wise Classification of Annotated Pulmonary Nodules using Pathologically Confirmed Cases

  • Aggarwal, Preeti
  • Vig, Renu
  • Sardana, HK
Journal of Computers 2013 Journal Article, cited 5 times
Website

Automatic lung segmentation in low-dose chest CT scans using convolutional deep and wide network (CDWN)

  • Agnes, S Akila
  • Anitha, J
  • Peter, J Dinesh
Neural Computing and Applications 2018 Journal Article, cited 0 times
Website

Robust Image Denoising with Multi-Column Deep Neural Networks

  • Agostinelli, Forest
  • Anderson, Michael R
  • Lee, Honglak
2013 Conference Proceedings, cited 60 times
Website

Adaptive Multi-Column Deep Neural Networks with Application to Robust Image Denoising

  • Agostinelli, Forest
  • Anderson, Michael R
  • Lee, Honglak
2013 Conference Proceedings, cited 118 times
Website

Tumor Lesion Segmentation from 3D PET Using a Machine Learning Driven Active Surface

  • Ahmadvand, Payam
  • Duggan, Nóirín
  • Bénard, François
  • Hamarneh, Ghassan
2016 Conference Proceedings, cited 4 times
Website

Increased robustness in reference region model analysis of DCE MRI using two‐step constrained approaches

  • Ahmed, Zaki
  • Levesque, Ives R
Magnetic Resonance in Medicine 2016 Journal Article, cited 1 times
Website

An extended reference region model for DCE‐MRI that accounts for plasma volume

  • Ahmed, Zaki
  • Levesque, Ives R
NMR in Biomedicine 2018 Journal Article, cited 0 times
Website

Pharmacokinetic modeling of dynamic contrast-enhanced MRI using a reference region and input function tail

  • Ahmed, Z.
  • Levesque, I. R.
Magn Reson Med 2019 Journal Article, cited 0 times
Website
PURPOSE: Quantitative analysis of dynamic contrast-enhanced MRI (DCE-MRI) requires an arterial input function (AIF) which is difficult to measure. We propose the reference region and input function tail (RRIFT) approach which uses a reference tissue and the washout portion of the AIF. METHODS: RRIFT was evaluated in simulations with 100 parameter combinations at various temporal resolutions (5-30 s) and noise levels (sigma = 0.01-0.05 mM). RRIFT was compared against the extended Tofts model (ETM) in 8 studies from patients with glioblastoma multiforme. Two versions of RRIFT were evaluated: one using measured patient-specific AIF tails, and another assuming a literature-based AIF tail. RESULTS: RRIFT estimated the transfer constant Ktrans and interstitial volume ve with median errors within 20% across all simulations. RRIFT was more accurate and precise than the ETM at temporal resolutions slower than 10 s. The percentage error of Ktrans had a median and interquartile range of -9 +/- 45% with the ETM and -2 +/- 17% with RRIFT at a temporal resolution of 30 s under noiseless conditions. RRIFT was in excellent agreement with the ETM in vivo, with concordance correlation coefficients (CCC) of 0.95 for Ktrans, 0.96 for ve, and 0.73 for the plasma volume vp using a measured AIF tail. With the literature-based AIF tail, the CCC was 0.89 for Ktrans, 0.93 for ve and 0.78 for vp. CONCLUSIONS: Quantitative DCE-MRI analysis using the input function tail and a reference tissue yields absolute kinetic parameters with the RRIFT method. This approach was viable in simulation and in vivo for temporal resolutions as low as 30 s.
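
For context, the extended Tofts model (ETM) that RRIFT is benchmarked against is conventionally written as below; this is the standard textbook form, quoted for the reader's convenience rather than taken from the paper itself.

```latex
% Extended Tofts model: tissue concentration C_t(t) given the AIF C_p(t)
C_t(t) = K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau + v_p\, C_p(t),
\qquad k_{ep} = \frac{K^{\mathrm{trans}}}{v_e}
```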

Pharmacokinetic modeling of dynamic contrast‐enhanced MRI using a reference region and input function tail

  • Ahmed, Zaki
  • Levesque, Ives R
Magnetic Resonance in Medicine 2020 Journal Article, cited 0 times
Website

Automated and Manual Quantification of Tumour Cellularity in Digital Slides for Tumour Burden Assessment

  • Akbar, S.
  • Peikari, M.
  • Salama, S.
  • Panah, A. Y.
  • Nofech-Mozes, S.
  • Martel, A. L.
Scientific Reports 2019 Journal Article, cited 3 times
Website
The residual cancer burden index is an important quantitative measure used for assessing treatment response following neoadjuvant therapy for breast cancer. It has been shown to be predictive of overall survival and is composed of two key metrics: qualitative assessment of lymph nodes and the percentage of invasive or in situ tumour cellularity (TC) in the tumour bed (TB). Currently, TC is assessed through eye-balling of routine histopathology slides, estimating the proportion of tumour cells within the TB. With the advances in production of digitized slides and increasing availability of slide scanners in pathology laboratories, there is potential to measure TC using automated algorithms with greater precision and accuracy. We describe two methods for automated TC scoring: 1) a traditional approach to image analysis development whereby we mimic the pathologists' workflow, and 2) a recent development in artificial intelligence in which features are learned automatically in deep neural networks using image data alone. We show strong agreements between automated and manual analysis of digital slides. Agreements between our trained deep neural networks and experts in this study (0.82) approach the inter-rater agreements between pathologists (0.89). We also reveal properties that are captured when we apply deep neural networks to whole-slide images, and discuss the potential of using such visualisations to improve upon TC assessment in the future.

GLISTRboost: Combining Multimodal MRI Segmentation, Registration, and Biophysical Tumor Growth Modeling with Gradient Boosting Machines for Glioma Segmentation

  • Akbari, Hamed
  • Gaonkar, Bilwaj
  • Rozycki, Martin
  • Pati, Sarthak
2016 Conference Proceedings, cited 24 times
Website

Map-Reduce based tipping point scheduler for parallel image processing

  • Akhtar, Mohammad Nishat
  • Saleh, Junita Mohamad
  • Awais, Habib
  • Bakar, Elmi Abu
Expert Systems with Applications 2019 Journal Article, cited 0 times
Website
Big Data image processing is in high demand owing to its proven success in business information systems, medical science, and social media. However, the computation of Big Data images is becoming increasingly complex, which ultimately results in complex resource management and higher task execution times. Researchers have used a combination of CPU- and GPU-based computing to cut down execution time; however, when scaling the number of compute nodes, this combination remains a challenge due to the high communication cost. To tackle this issue, the Map-Reduce framework has emerged as a viable option, as its workflow optimization can be enhanced by changing its underlying job scheduling mechanism. This paper presents a comparative study of job scheduling algorithms that could be deployed over various Big Data based image processing applications and also proposes a tipping point scheduling algorithm to optimize the workflow for job execution on multiple nodes. The proposed scheduling algorithm is evaluated by implementing a parallel image segmentation algorithm to detect lung tumors on image datasets of up to 3 GB. In terms of task execution time and throughput, the proposed tipping point scheduler performed best, followed by the Map-Reduce based Fair scheduler: it is 1.14 times better than the Map-Reduce based Fair scheduler and 1.33 times better than the Map-Reduce based FIFO scheduler. In a speedup comparison between single-node and multi-node execution, the proposed tipping point scheduler attained a speedup of 4.5× on the multi-node architecture. Keywords: Job scheduler; Workflow optimization; Map-Reduce; Tipping point scheduler; Parallel image segmentation; Lung tumor

A review of lung cancer screening and the role of computer-aided detection

  • Al Mohammad, B
  • Brennan, PC
  • Mello-Thoms, C
Clinical Radiology 2017 Journal Article, cited 23 times
Website

Radiologist performance in the detection of lung cancer using CT

  • Al Mohammad, B
  • Hillis, SL
  • Reed, W
  • Alakhras, M
  • Brennan, PC
Clinical Radiology 2019 Journal Article, cited 2 times
Website

Breast Cancer Diagnostic System Based on MR images Using KPCA-Wavelet Transform and Support Vector Machine

  • AL-Dabagh, Mustafa Zuhaer
  • AL-Mukhtar, Firas H
IJAERS 2017 Journal Article, cited 0 times
Website

Quantitative assessment of colorectal morphology: Implications for robotic colonoscopy

  • Alazmani, A
  • Hood, A
  • Jayne, D
  • Neville, A
  • Culmer, P
Medical Engineering & Physics 2016 Journal Article, cited 11 times
Website

Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing

  • AlBadawy, E. A.
  • Saha, A.
  • Mazurowski, M. A.
Med Phys 2018 Journal Article, cited 5 times
Website
BACKGROUND AND PURPOSE: Convolutional neural networks (CNNs) are commonly used for segmentation of brain tumors. In this work, we assess the effect of cross-institutional training on the performance of CNNs. METHODS: We selected 44 glioblastoma (GBM) patients from two institutions in The Cancer Imaging Archive dataset. The images were manually annotated by outlining each tumor component to form ground truth. To automatically segment the tumors in each patient, we trained three CNNs: (a) one using data for patients from the same institution as the test data, (b) one using data for the patients from the other institution and (c) one using data for the patients from both of the institutions. The performance of the trained models was evaluated using Dice similarity coefficients as well as Average Hausdorff Distance between the ground truth and automatic segmentations. The 10-fold cross-validation scheme was used to compare the performance of different approaches. RESULTS: Performance of the model significantly decreased (P < 0.0001) when it was trained on data from a different institution (dice coefficients: 0.68 +/- 0.19 and 0.59 +/- 0.19) as compared to training with data from the same institution (dice coefficients: 0.72 +/- 0.17 and 0.76 +/- 0.12). This trend persisted for segmentation of the entire tumor as well as its individual components. CONCLUSIONS: There is a very strong effect of selecting data for training on performance of CNNs in a multi-institutional setting. Determination of the reasons behind this effect requires additional comprehensive investigation.
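
The Dice similarity coefficient used above to score the automatic segmentations can be computed as in this small generic sketch (NumPy only; the random masks are placeholders, not study data).

```python
# Sketch: Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice_coefficient(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); returns 1.0 when both masks are empty."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# toy example with random masks (illustration only)
rng = np.random.default_rng(0)
mask_auto = rng.random((64, 64, 64)) > 0.7
mask_truth = rng.random((64, 64, 64)) > 0.7
print(f"DSC = {dice_coefficient(mask_auto, mask_truth):.3f}")
```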

Self-organizing Approach to Learn a Level-set Function for Object Segmentation in Complex Background Environments

  • Albalooshi, Fatema A
2015 Thesis, cited 0 times
Website

Multi-modal Multi-temporal Brain Tumor Segmentation, Growth Analysis and Texture-based Classification

  • Alberts, Esther
2019 Thesis, cited 0 times
Website
Brain tumor analysis is an active field of research, which has received a lot of attention from both the medical and the technical communities in the past decades. The purpose of this thesis is to investigate brain tumor segmentation, growth analysis and tumor classification based on multi-modal magnetic resonance (MR) image datasets of low- and high-grade glioma making use of computer vision and machine learning methodologies. Brain tumor segmentation involves the delineation of tumorous structures, such as edema, active tumor and necrotic tumor core, and healthy brain tissues, often categorized in gray matter, white matter and cerebro-spinal fluid. Deep learning frameworks have proven to be among the most accurate brain tumor segmentation techniques, performing particularly well when large accurately annotated image datasets are available. A first project is designed to build a more flexible model, which allows for intuitive semi-automated user-interaction, is less dependent on training data, and can handle missing MR modalities. The framework is based on a Bayesian network with hidden variables optimized by the expectation-maximization algorithm, and is tailored to handle non-Gaussian multivariate distributions using the concept of Gaussian copulas. To generate reliable priors for the generative probabilistic model and to spatially regularize the segmentation results, it is extended with an initialization and a post-processing module, both based on supervoxels classified by random forests. Brain tumor segmentation allows to assess tumor volumetry over time, which is important to identify disease progression (tumor regrowth) after therapy. In a second project, a dataset of temporal MR sequences is analyzed. To that end, brain tumor segmentation and brain tumor growth assessment are unified within a single framework using a conditional random field (CRF). The CRF extends over the temporal patient datasets and includes directed links with infinite weight in order to incorporate growth or shrinkage constraints. The model is shown to obtain temporally coherent tumor segmentation and aids in estimating the likelihood of disease progression after therapy. Recent studies classify brain tumors based on their genotypic parameters, which are reported to have an important impact on the prognosis and the therapy of patients. A third project is aimed to investigate whether the genetic profile of glioma can be predicted based on the MR images only, which would eliminate the need to take biopsies. A multi-modal medical image classification framework is built, classifying glioma in three genetic classes based on DNA methylation status. The framework makes use of short local image descriptors as well as deep-learned features acquired by denoising auto-encoders to generate meaningful image features. The framework is successfully validated and shown to obtain high accuracies even though the same image-based classification task is hardly possible for medical experts.

Automatic intensity windowing of mammographic images based on a perceptual metric

  • Albiol, Alberto
  • Corbi, Alberto
  • Albiol, Francisco
Medical Physics 2017 Journal Article, cited 0 times
Website

Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network

  • Aldoj, Nader
  • Lukas, Steffen
  • Dewey, Marc
  • Penzkofer, Tobias
Eur Radiol 2019 Journal Article, cited 1 times
Website
OBJECTIVE: To present a deep learning-based approach for semi-automatic prostate cancer classification based on multi-parametric magnetic resonance (MR) imaging using a 3D convolutional neural network (CNN). METHODS: Two hundred patients with a total of 318 lesions for which histological correlation was available were analyzed. A novel CNN was designed, trained, and validated using different combinations of distinct MRI sequences as input (e.g., T2-weighted, apparent diffusion coefficient (ADC), diffusion-weighted images, and K-trans) and the effect of different sequences on the network's performance was tested and discussed. The particular choice of modeling approach was justified by testing all relevant data combinations. The model was trained and validated using eightfold cross-validation. RESULTS: In terms of detection of significant prostate cancer defined by biopsy results as the reference standard, the 3D CNN achieved an area under the curve (AUC) of the receiver operating characteristics ranging from 0.89 (88.6% and 90.0% for sensitivity and specificity respectively) to 0.91 (81.2% and 90.5% for sensitivity and specificity respectively) with an average AUC of 0.897 for the ADC, DWI, and K-trans input combination. The other combinations scored less in terms of overall performance and average AUC, where the difference in performance was significant with a p value of 0.02 when using T2w and K-trans; and 0.00025 when using T2w, ADC, and DWI. Prostate cancer classification performance is thus comparable to that reported for experienced radiologists using the prostate imaging reporting and data system (PI-RADS). Lesion size and largest diameter had no effect on the network's performance. CONCLUSION: The diagnostic performance of the 3D CNN in detecting clinically significant prostate cancer is characterized by a good AUC and sensitivity and high specificity. KEY POINTS: * Prostate cancer classification using a deep learning model is feasible and it allows direct processing of MR sequences without prior lesion segmentation. * Prostate cancer classification performance as measured by AUC is comparable to that of an experienced radiologist. * Perfusion MR images (K-trans), followed by DWI and ADC, have the highest effect on the overall performance; whereas T2w images show hardly any improvement.

Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network

  • Aldoj, Nader
  • Lukas, Steffen
  • Dewey, Marc
  • Penzkofer, Tobias
European Radiology 2020 Journal Article, cited 1 times
Website

Radiogenomics in renal cell carcinoma

  • Alessandrino, Francesco
  • Shinagare, Atul B
  • Bossé, Dominick
  • Choueiri, Toni K
  • Krajewski, Katherine M
Abdominal Radiology 2018 Journal Article, cited 0 times
Website

SurfCut: Surfaces of Minimal Paths From Topological Structures

  • Algarni, Marei
  • Sundaramoorthi, Ganesh
arXiv preprint arXiv:1705.00301 2017 Journal Article, cited 0 times
Website

Robust Detection of Circles in the Vessel Contours and Application to Local Probability Density Estimation

  • Alvarez, Luis
  • González, Esther
  • Esclarín, Julio
  • Gomez, Luis
  • Alemán-Flores, Miguel
  • Trujillo, Agustín
  • Cuenca, Carmelo
  • Mazorra, Luis
  • Tahoces, Pablo G
  • Carreira, José M
2017 Book Section, cited 3 times
Website

Transferable HMM probability matrices in multi‐orientation geometric medical volumes segmentation

  • AlZu'bi, Shadi
  • AlQatawneh, Sokyna
  • ElBes, Mohammad
  • Alsmirat, Mohammad
Concurrency and Computation: Practice and Experience 2019 Journal Article, cited 0 times
Website
Acceptable error rates, reliable quality assessment, and time complexity are major open problems in image segmentation. A variety of acceleration techniques have been applied and achieve real-time results, but they remain limited in 3D. HMM is one of the best statistical techniques and has recently played a significant role. The problem associated with HMM is its time complexity, which has been addressed using different accelerators. In this research, we propose a methodology for transferring HMM matrices from one image to another, skipping the training time for the rest of the 3D volume: a single trained HMM is generated and generalized to the whole volume. The concepts behind multi‐orientation geometrical segmentation are employed here to improve the quality of HMM segmentation. Axial, sagittal, and coronal orientations have been considered individually and together to achieve accurate segmentation results in less processing time and with superior detection accuracy.
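
The idea of training one HMM and reusing its matrices across the rest of the volume could look roughly like the sketch below, written with the hmmlearn package; the raw-intensity features, number of states, and slice-wise decoding are assumptions made for illustration, not the authors' implementation.

```python
# Sketch: fit a Gaussian HMM on one 2D slice and reuse the trained matrices
# (transition/emission parameters) to label the remaining slices of a volume.
# Number of states and the raw-intensity "features" are illustrative only.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(42)
volume = rng.random((20, 128, 128))          # stand-in for a 3D scan

train_slice = volume[0].reshape(-1, 1)       # observations: one intensity feature
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(train_slice)                       # training happens only once

# transfer: decode every other slice with the already-trained model
labels = [model.predict(s.reshape(-1, 1)).reshape(128, 128) for s in volume[1:]]
print(len(labels), labels[0].shape)
```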

Imaging Biomarker Ontology (IBO): A Biomedical Ontology to Annotate and Share Imaging Biomarker Data

  • Amdouni, Emna
  • Gibaud, Bernard
Journal on Data Semantics 2018 Journal Article, cited 0 times
Website

Breast Cancer Response Prediction in Neoadjuvant Chemotherapy Treatment Based on Texture Analysis

  • Ammar, Mohammed
  • Mahmoudi, Saïd
  • Stylianos, Drisis
Procedia Computer Science 2016 Journal Article, cited 2 times
Website

Medical Image Classification Algorithm Based on Weight Initialization-Sliding Window Fusion Convolutional Neural Network

  • An, Feng-Ping
Complexity 2019 Journal Article, cited 0 times
Website
Due to the complexity of medical images, traditional medical image classification methods have been unable to meet actual application needs. In recent years, the rapid development of deep learning theory has provided a technical approach for solving medical image classification tasks. However, deep learning has the following problems in medical image classification. First, it is impossible to construct a deep learning model hierarchy for medical image properties; second, the network initialization weights of deep learning models are not well optimized. Therefore, this paper starts from the perspective of network optimization and improves the nonlinear modeling ability of the network through optimization methods. A new network weight initialization method is proposed, which alleviates the problem that existing deep learning model initialization is limited by the type of the nonlinear unit adopted and increases the potential of the neural network to handle different visual tasks. Moreover, through an in-depth study of the multicolumn convolutional neural network framework, this paper finds that the number of features and the convolution kernel size at different levels of the convolutional neural network are different. In contrast, the proposed method can construct different convolutional neural network models that adapt better to the characteristics of the medical images of interest and thus can better train the resulting heterogeneous multicolumn convolutional neural networks. Finally, using the adaptive sliding window fusion mechanism proposed in this paper, both methods jointly complete the classification task of medical images. Based on the above ideas, this paper proposes a medical classification algorithm based on a weight initialization/sliding window fusion for multilevel convolutional neural networks. The methods proposed in this study were applied to breast mass, brain tumor tissue, and medical image database classification experiments. The results show that the proposed method not only achieves a higher average accuracy than that of traditional machine learning and other deep learning methods but also is more stable and more robust.

Application of Fuzzy c-means and Neural networks to categorize tumor affected breast MR Images

  • Anand, Shruthi
  • Vinod, Viji
  • Rampure, Anand
International Journal of Applied Engineering Research 2015 Journal Article, cited 4 times
Website

Imaging Genomics in Glioblastoma Multiforme: A Predictive Tool for Patients Prognosis, Survival, and Outcome

  • Anil, Rahul
  • Colen, Rivka R
Magnetic Resonance Imaging Clinics of North America 2016 Journal Article, cited 3 times
Website

Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data

  • Anirudh, Rushil
  • Thiagarajan, Jayaraman J
  • Bremer, Timo
  • Kim, Hyojin
2016 Conference Proceedings, cited 33 times
Website

Brain tumour classification using two-tier classifier with adaptive segmentation technique

  • Anitha, V
  • Murugavalli, S
IET Computer Vision 2016 Journal Article, cited 46 times
Website

Classification of lung adenocarcinoma transcriptome subtypes from pathological images using deep convolutional networks

  • Antonio, Victor Andrew A
  • Ono, Naoaki
  • Saito, Akira
  • Sato, Tetsuo
  • Altaf-Ul-Amin, Md
  • Kanaya, Shigehiko
International Journal of Computer Assisted Radiology and Surgery 2018 Journal Article, cited 0 times
Website

Fast wavelet based image characterization for content based medical image retrieval

  • Anwar, Syed Muhammad
  • Arshad, Fozia
  • Majid, Muhammad
2017 Conference Proceedings, cited 4 times
Website

End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography

  • Ardila, D.
  • Kiraly, A. P.
  • Bharadwaj, S.
  • Choi, B.
  • Reicher, J. J.
  • Peng, L.
  • Tse, D.
  • Etemadi, M.
  • Ye, W.
  • Corrado, G.
  • Naidich, D. P.
  • Shetty, S.
Nat Med 2019 Journal Article, cited 1 times
Website
With an estimated 160,000 deaths in 2018, lung cancer is the most common cause of cancer death in the United States(1). Lung cancer screening using low-dose computed tomography has been shown to reduce mortality by 20-43% and is now included in US screening guidelines(1-6). Existing challenges include inter-grader variability and high false-positive and false-negative rates(7-10). We propose a deep learning algorithm that uses a patient's current and prior computed tomography volumes to predict the risk of lung cancer. Our model achieves a state-of-the-art performance (94.4% area under the curve) on 6,716 National Lung Cancer Screening Trial cases, and performs similarly on an independent clinical validation set of 1,139 cases. We conducted two reader studies. When prior computed tomography imaging was not available, our model outperformed all six radiologists with absolute reductions of 11% in false positives and 5% in false negatives. Where prior computed tomography imaging was available, the model performance was on-par with the same radiologists. This creates an opportunity to optimize the screening process via computer assistance and automation. While the vast majority of patients remain unscreened, we show the potential for deep learning models to increase the accuracy, consistency and adoption of lung cancer screening worldwide.

Computer-Aided Tumor Detection System Using Brain MR Images (Beyin MR Görüntülerinden Bilgisayar Destekli Tümör Teşhisi Sistemi)

  • Ari, Ali
  • Alpaslan, Nuh
  • Hanbay, Davut
Vogue 2015 Journal Article, cited 4 times
Website

Potentials of radiomics for cancer diagnosis and treatment in comparison with computer-aided diagnosis

  • Arimura, Hidetaka
  • Soufi, Mazen
  • Ninomiya, Kenta
  • Kamezawa, Hidemi
  • Yamada, Masahiro
Radiological Physics and Technology 2018 Journal Article, cited 0 times
Website
Computer-aided diagnosis (CAD) is a field that is essentially based on pattern recognition that improves the accuracy of a diagnosis made by a physician who takes into account the computer’s “opinion” derived from the quantitative analysis of radiological images. Radiomics is a field based on data science that massively and comprehensively analyzes a large number of medical images to extract a large number of phenotypic features reflecting disease traits, and explores the associations between the features and patients’ prognoses for precision medicine. According to the definitions for both, you may think that radiomics is not a paraphrase of CAD, but you may also think that these definitions are “image manipulation”. However, there are common and different features between the two fields. This review paper elaborates on these common and different features and introduces the potential of radiomics for cancer diagnosis and treatment by comparing it with CAD.

The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans

  • Armato III, Samuel G
  • McLennan, Geoffrey
  • Bidaut, Luc
  • McNitt-Gray, Michael F
  • Meyer, Charles R
  • Reeves, Anthony P
  • Zhao, Binsheng
  • Aberle, Denise R
  • Henschke, Claudia I
  • Hoffman, Eric A
Medical Physics 2011 Journal Article, cited 546 times
Website

Data From LIDC-IDRI

  • Armato III, Samuel G.
  • McLennan, Geoffrey;
  • Bidaut, Luc;
  • McNitt-Gray, Michael F.;
  • Meyer, Charles R.;
  • Reeves, Anthony P.;
  • Zhao, Binsheng;
  • Aberle, Denise R.;
  • Henschke, Claudia I.;
  • Hoffman, Eric A.;
  • Kazerooni, Ella A.;
  • MacMahon, Heber;
  • van Beek, Edwin J.R.;
  • Yankelevitz, David;
  • Biancardi, Alberto M.;
  • Bland, Peyton H.;
  • Brown, Matthew S.;
  • Engelmann, Roger M.;
  • Laderach, Gary E.;
  • Max, Daniel;
  • Pais, Richard C.;
  • Qing, David P-Y;
  • Roberts, Rachael Y.;
  • Smith, Amanda R.;
  • Starkey, Adam;
  • Batra, Poonam;
  • Caligiuri, Phillip;
  • Farooqi, Ali;
  • Gladish, Gregory W.;
  • Jude, C. Matilda;
  • Munden, Reginald F.;
  • Petkovska, Iva;
  • Quint, Leslie E.;
  • Schwartz, Lawrence H.;
  • Sundaram, Baskaran;
  • Dodd, Lori E.;
  • Fenimore, Charles;
  • Gur, David;
  • Petrick, Nicholas;
  • Freymann, John;
  • Kirby, Justin;
  • Hughes, Brian;
  • Casteele, Alessi Vande;
  • Gupte, Sangeeta;
  • Sallam, Maha;
  • Heath, Michael D.;
  • Kuhn, Michael H.;
  • Dharaiya, Ekta;
  • Burns, Richards;
  • Anand, Vikram;
  • Shreter, Uri;
  • Vastagh, Stephen;
  • Croft, Barbara Y.;
  • Clarke, Laurence P.
2015 Dataset, cited 0 times

Collaborative projects

  • Armato, S
  • McNitt-Gray, M
  • Meyer, C
  • Reeves, A
  • Clarke, L
Int J CARS 2012 Journal Article, cited 307 times
Website

SPIE-AAPM-NCI Lung Nodule Classification Challenge Dataset

  • Armato, Samuel G.
  • Drukker, Karen
2015 Dataset, cited 0 times

Special Section Guest Editorial: LUNGx Challenge for computerized lung nodule classification: reflections and lessons learned

  • Armato, Samuel G
  • Hadjiiski, Lubomir
  • Tourassi, Georgia D
  • Drukker, Karen
  • Giger, Maryellen L
  • Li, Feng
  • Redmond, George
  • Farahani, Keyvan
  • Kirby, Justin S
  • Clarke, Laurence P
Journal of Medical Imaging 2015 Journal Article, cited 20 times
Website

Discovery of pre-therapy 2-deoxy-2-18F-fluoro-D-glucose positron emission tomography-based radiomics classifiers of survival outcome in non-small-cell lung cancer patients

  • Arshad, Mubarik A
  • Thornton, Andrew
  • Lu, Haonan
  • Tam, Henry
  • Wallitt, Kathryn
  • Rodgers, Nicola
  • Scarsbrook, Andrew
  • McDermott, Garry
  • Cook, Gary J
  • Landau, David
European Journal of Nuclear Medicine and Molecular Imaging 2018 Journal Article, cited 0 times
Website

Morphological and multi-level geometrical descriptor analysis in CT and MRI volumes for automatic pancreas segmentation

  • Asaturyan, Hykoush
  • Gligorievski, Antonio
  • Villarini, Barbara
Computerized Medical Imaging and Graphics 2019 Journal Article, cited 3 times
Website
Automatic pancreas segmentation in 3D radiological scans is a critical, yet challenging task. As a prerequisite for computer-aided diagnosis (CADx) systems, accurate pancreas segmentation could generate both quantitative and qualitative information towards establishing the severity of a condition, and thus provide additional guidance for therapy planning. Since the pancreas is an organ of high inter-patient anatomical variability, previous segmentation approaches report lower quantitative accuracy scores in comparison to abdominal organs such as the liver or kidneys. This paper presents a novel approach for automatic pancreas segmentation in magnetic resonance imaging (MRI) and computed tomography (CT) scans. This method exploits 3D segmentation that, when coupled with geometrical and morphological characteristics of abdominal tissue, classifies distinct contours in tight pixel-range proximity as “pancreas” or “non-pancreas”. There are three main stages to this approach: (1) identify a major pancreas region and apply contrast enhancement to differentiate between pancreatic and surrounding tissue; (2) perform 3D segmentation via a continuous max-flow and min-cuts approach, structured forest edge detection, and a training dataset of annotated pancreata; (3) eliminate non-pancreatic contours from the resultant segmentation via morphological operations on area, structure and connectivity between distinct contours. The proposed method is evaluated on a dataset containing 82 CT image volumes, achieving a mean Dice similarity coefficient (DSC) of 79.3 ± 4.4%. Two MRI datasets containing 216 and 132 image volumes are evaluated, achieving mean DSC of 79.6 ± 5.7% and 81.6 ± 5.1% respectively. This approach is statistically stable, as reflected by lower standard deviations in comparison to state-of-the-art approaches.

Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method.

  • Astaraki, Mehdi
  • Wang, Chunliang
  • Buizza, Giulia
  • Toma-Dasu, Iuliana
  • Lazzeroni, Marta
  • Smedby, Orjan
Physica Medica 2019 Journal Article, cited 0 times
Website
PURPOSE: To explore prognostic and predictive values of a novel quantitative feature set describing intra-tumor heterogeneity in patients with lung cancer treated with concurrent and sequential chemoradiotherapy. METHODS: Longitudinal PET-CT images of 30 patients with non-small cell lung cancer were analysed. To describe tumor cell heterogeneity, the tumors were partitioned into one to ten concentric regions depending on their sizes, and, for each region, the change in average intensity between the two scans was calculated for PET and CT images separately to form the proposed feature set. To validate the prognostic value of the proposed method, radiomics analysis was performed and a combination of the proposed novel feature set and the classic radiomic features was evaluated. A feature selection algorithm was utilized to identify the optimal features, and a linear support vector machine was trained for the task of overall survival prediction in terms of area under the receiver operating characteristic curve (AUROC). RESULTS: The proposed novel feature set was found to be prognostic and even outperformed the radiomics approach with a significant difference (AUROC_SALoP = 0.90 vs. AUROC_radiomic = 0.71) when feature selection was not employed, whereas with feature selection, a combination of the novel feature set and radiomics led to the highest prognostic values. CONCLUSION: A novel feature set designed for capturing intra-tumor heterogeneity was introduced. Judging by their prognostic power, the proposed features have a promising potential for early survival prediction.
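
One plausible way to realise the concentric partitioning described above is via a distance transform of the tumour mask, as in this hedged sketch (generic NumPy/SciPy code with placeholder inputs; the number of shells is arbitrary and the code is not the authors' implementation).

```python
# Sketch: split a binary tumour mask into concentric shells and compute the
# change in mean intensity per shell between two time points.
import numpy as np
from scipy.ndimage import distance_transform_edt

def shell_intensity_change(mask, img_t0, img_t1, n_shells=4):
    """mask: binary tumour mask; img_t0/img_t1: co-registered scans."""
    dist = distance_transform_edt(mask)            # distance to tumour border
    edges = np.linspace(0, dist.max(), n_shells + 1)
    changes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = mask & (dist > lo) & (dist <= hi)
        changes.append(img_t1[shell].mean() - img_t0[shell].mean()
                       if shell.any() else np.nan)
    return changes

# toy example (illustration only)
rng = np.random.default_rng(1)
m = np.zeros((64, 64, 64), bool)
m[20:44, 20:44, 20:44] = True
print(shell_intensity_change(m, rng.random(m.shape), rng.random(m.shape)))
```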

Computer Aided Detection Scheme To Improve The Prognosis Assessment Of Early Stage Lung Cancer Patients

  • Athira, KV
  • Nithin, SS
Computer 2018 Journal Article, cited 0 times
Website

Radiogenomics of clear cell renal cell carcinoma: preliminary findings of The Cancer Genome Atlas–Renal Cell Carcinoma (TCGA–RCC) Imaging Research Group

  • Atul, B
Abdominal Imaging 2015 Journal Article, cited 47 times
Website

Analysis of dual tree M‐band wavelet transform based features for brain image classification

  • Ayalapogu, Ratna Raju
  • Pabboju, Suresh
  • Ramisetty, Rajeswara Rao
Magnetic Resonance in Medicine 2018 Journal Article, cited 1 times
Website

Analysis of Classification Methods for Diagnosis of Pulmonary Nodules in CT Images

  • Baboo, Capt Dr S Santhosh
  • Iyyapparaj, E
International Journal of Engineering Science 2017 Journal Article, cited 0 times
Website

Detection of Brain Tumour in MRI Scan Images using Tetrolet Transform and SVM Classifier

  • Babu, B Shoban
  • Varadarajan, S
Indian Journal of Science and Technology 2017 Journal Article, cited 1 times
Website

BIOMEDICAL IMAGE RETRIEVAL USING LBWP

  • Babu, Joyce Sarah
  • Mathew, Soumya
  • Simon, Rini
International Research Journal of Engineering and Technology 2017 Journal Article, cited 0 times
Website

Virtual clinical trial for task-based evaluation of a deep learning synthetic mammography algorithm

  • Badal, Andreu
  • Cha, Kenny H.
  • Divel, Sarah E.
  • Graff, Christian G.
  • Zeng, Rongping
  • Badano, Aldo
2019 Conference Proceedings, cited 0 times
Website
Image processing algorithms based on deep learning techniques are being developed for a wide range of medical applications. Processed medical images are typically evaluated with the same kind of image similarity metrics used for natural scenes, disregarding the medical task for which the images are intended. We propose a computational framework to estimate the clinical performance of image processing algorithms using virtual clinical trials. The proposed framework may provide an alternative method for regulatory evaluation of non-linear image processing algorithms. To illustrate this application of virtual clinical trials, we evaluated three algorithms to compute synthetic mammograms from digital breast tomosynthesis (DBT) scans based on convolutional neural networks previously used for denoising low dose computed tomography scans. The inputs to the networks were one or more noisy DBT projections, and the networks were trained to minimize the difference between the output and the corresponding high dose mammogram. DBT and mammography images simulated with the Monte Carlo code MC-GPU using realistic breast phantoms were used for network training and validation. The denoising algorithms were tested in a virtual clinical trial by generating 3000 synthetic mammograms from the public VICTRE dataset of simulated DBT scans. The detectability of a calcification cluster and a spiculated mass present in the images was calculated using an ensemble of 30 computational channelized Hotelling observers. The signal detectability results, which took into account anatomic and image reader variability, showed that the visibility of the mass was not affected by the post-processing algorithm, but that the resulting slight blurring of the images severely impacted the visibility of the calcification cluster. The evaluation of the algorithms using the pixel-based metrics peak signal to noise ratio and structural similarity in image patches was not able to predict the reduction in performance in the detectability of calcifications. These two metrics are computed over the whole image and do not consider any particular task, and might not be adequate to estimate the diagnostic performance of the post-processed images.
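
The pixel-based metrics that the study contrasts with task-based detectability (peak signal-to-noise ratio and structural similarity) can be computed with scikit-image as in this sketch; the images are synthetic stand-ins, not VICTRE data.

```python
# Sketch: PSNR and SSIM between a reference image and a post-processed image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(7)
reference = rng.random((256, 256))                    # stand-in "high dose" image
processed = reference + 0.05 * rng.standard_normal(reference.shape)

psnr = peak_signal_noise_ratio(reference, processed, data_range=1.0)
ssim = structural_similarity(reference, processed, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```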

Technical and Clinical Factors Affecting Success Rate of a Deep Learning Method for Pancreas Segmentation on CT

  • Bagheri, Mohammad Hadi
  • Roth, Holger
  • Kovacs, William
  • Yao, Jianhua
  • Farhadi, Faraz
  • Li, Xiaobai
  • Summers, Ronald M
Acad Radiol 2019 Journal Article, cited 0 times
Website
PURPOSE: Accurate pancreas segmentation has application in surgical planning, assessment of diabetes, and detection and analysis of pancreatic tumors. Factors that affect pancreas segmentation accuracy have not been previously reported. The purpose of this study is to identify technical and clinical factors that adversely affect the accuracy of pancreas segmentation on CT. METHOD AND MATERIALS: In this IRB and HIPAA compliant study, a deep convolutional neural network was used for pancreas segmentation in a publicly available archive of 82 portal-venous phase abdominal CT scans of 53 men and 29 women. The accuracies of the segmentations were evaluated by the Dice similarity coefficient (DSC). The DSC was then correlated with demographic and clinical data (age, gender, height, weight, body mass index), CT technical factors (image pixel size, slice thickness, presence or absence of oral contrast), and CT imaging findings (volume and attenuation of pancreas, visceral abdominal fat, and CT attenuation of the structures within a 5 mm neighborhood of the pancreas). RESULTS: The average DSC was 78% +/- 8%. Factors that were statistically significantly correlated with DSC included body mass index (r=0.34, p < 0.01), visceral abdominal fat (r=0.51, p < 0.0001), volume of the pancreas (r=0.41, p=0.001), standard deviation of CT attenuation within the pancreas (r=0.30, p=0.01), and median and average CT attenuation in the immediate neighborhood of the pancreas (r = -0.53, p < 0.0001 and r=-0.52, p < 0.0001). There were no significant correlations between the DSC and the height, gender, or mean CT attenuation of the pancreas. CONCLUSION: Increased visceral abdominal fat and accumulation of fat within or around the pancreas are major factors associated with more accurate segmentation of the pancreas. Potential applications of our findings include assessment of pancreas segmentation difficulty of a particular scan or dataset and identification of methods that work better for more challenging pancreas segmentations.
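
The correlation analysis reported above (Pearson r between DSC and per-patient factors such as visceral abdominal fat) can be reproduced in outline as follows; the arrays are placeholders generated for illustration, not the study's measurements.

```python
# Sketch: Pearson correlation between segmentation accuracy (DSC) and a
# per-patient covariate such as visceral abdominal fat. Placeholder data only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
visceral_fat = rng.normal(150.0, 40.0, size=82)            # 82 scans, arbitrary units
dsc = 0.78 + 0.0005 * (visceral_fat - 150.0) + rng.normal(0, 0.05, 82)

r, p = pearsonr(visceral_fat, dsc)
print(f"r = {r:.2f}, p = {p:.3g}")
```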

Imaging genomics in cancer research: limitations and promises

  • Bai, Harrison X
  • Lee, Ashley M
  • Yang, Li
  • Zhang, Paul
  • Davatzikos, Christos
  • Maris, John M
  • Diskin, Sharon J
The British journal of radiology 2016 Journal Article, cited 28 times
Website

BraTS Multimodal Brain Tumor Segmentation Challenge

  • Bakas, Spyridon
2017 Conference Proceedings, cited 0 times
Website

A radiogenomic dataset of non-small cell lung cancer

  • Bakr, Shaimaa
  • Gevaert, Olivier
  • Echegaray, Sebastian
  • Ayers, Kelsey
  • Zhou, Mu
  • Shafiq, Majid
  • Zheng, Hong
  • Benson, Jalen Anthony
  • Zhang, Weiruo
  • Leung, Ann NC
Scientific Data 2018 Journal Article, cited 1 times
Website

Secure telemedicine using RONI halftoned visual cryptography without pixel expansion

  • Bakshi, Arvind
  • Patel, Anoop Kumar
Journal of Information Security and Applications 2019 Journal Article, cited 0 times
Website

Test–Retest Reproducibility Analysis of Lung CT Image Features

  • Balagurunathan, Yoganand
  • Kumar, Virendra
  • Gu, Yuhua
  • Kim, Jongphil
  • Wang, Hua
  • Liu, Ying
  • Goldgof, Dmitry B
  • Hall, Lawrence O
  • Korn, Rene
  • Zhao, Binsheng
Journal of Digital Imaging 2014 Journal Article, cited 85 times
Website

Quantitative Imaging features Improve Discrimination of Malignancy in Pulmonary nodules

  • Balagurunathan, Yoganand
  • Schabath, Matthew B.
  • Wang, Hua
  • Liu, Ying
  • Gillies, Robert J.
Scientific Reports 2019 Journal Article, cited 0 times
Website
Pulmonary nodules are frequently detected radiological abnormalities in lung cancer screening. Nodules of the highest and lowest risk for cancer are often easily diagnosed by a trained radiologist, but there is still a high rate of indeterminate pulmonary nodules (IPN) of unknown risk. Here, we test the hypothesis that computer-extracted quantitative features ("radiomics") can provide improved risk assessment in the diagnostic setting. Nodules were segmented in 3D and 219 quantitative features were extracted from these volumes. Using these features, novel malignancy risk predictors were formed with various stratifications based on size, shape, and texture feature categories. We used images and data from the National Lung Screening Trial (NLST) and curated a subset of 479 participants (244 for training and 235 for testing) that included incident lung cancers and nodule-positive controls. After removing redundant and non-reproducible features, optimal linear classifiers evaluated by the area under the receiver operating characteristic (AUROC) curve were used with an exhaustive search approach to find a discriminant set of image features, which were validated in an independent test dataset. We identified several strong predictive models: using size and shape features, the highest AUROC was 0.80; using non-size-based features, the highest AUROC was 0.85; combining features from all the categories, the highest AUROC was 0.83.
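
A bare-bones version of the evaluation implied above, a linear classifier over a selected feature subset scored by AUROC on held-out cases, might look like the following sketch; the synthetic matrix stands in for the 219 radiomic features and the selected indices are arbitrary.

```python
# Sketch: linear classifier over a small feature subset, scored by AUROC.
# Synthetic data stands in for radiomic features; illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.standard_normal((479, 219))                 # cases x candidate features
y = rng.integers(0, 2, size=479)                    # 1 = incident lung cancer

selected = [0, 5, 17]                               # a candidate feature subset
X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y,
                                           test_size=0.5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUROC = {auc:.2f}")
```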

Bone-Cancer Assessment and Destruction Pattern Analysis in Long-Bone X-ray Image

  • Bandyopadhyay, Oishila
  • Biswas, Arindam
  • Bhattacharya, Bhargab B
J Digit Imaging 2018 Journal Article, cited 0 times
Website
Bone cancer originates from bone and rapidly spreads to the rest of the body affecting the patient. A quick and preliminary diagnosis of bone cancer begins with the analysis of bone X-ray or MRI image. Compared to MRI, an X-ray image provides a low-cost diagnostic tool for diagnosis and visualization of bone cancer. In this paper, a novel technique for the assessment of cancer stage and grade in long bones based on X-ray image analysis has been proposed. Cancer-affected bone images usually appear with a variation in bone texture in the affected region. A fusion of different methodologies is used for the purpose of our analysis. In the proposed approach, we extract certain features from bone X-ray images and use support vector machine (SVM) to discriminate healthy and cancerous bones. A technique based on digital geometry is deployed for localizing cancer-affected regions. Characterization of the present stage and grade of the disease and identification of the underlying bone-destruction pattern are performed using a decision tree classifier. Furthermore, the method leads to the development of a computer-aided diagnostic tool that can readily be used by paramedics and doctors. Experimental results on a number of test cases reveal satisfactory diagnostic inferences when compared with ground truth known from clinical findings.

A New Adaptive-Weighted Fusion Rule for Wavelet based PET/CT Fusion

  • Barani, R
  • Sumathi, M
International Journal of Signal Processing, Image Processing and Pattern Recognition 2016 Journal Article, cited 1 times
Website

Interreader Variability of Dynamic Contrast-enhanced MRI of Recurrent Glioblastoma: The Multicenter ACRIN 6677/RTOG 0625 Study

  • Barboriak, Daniel P
  • Zhang, Zheng
  • Desai, Pratikkumar
  • Snyder, Bradley S
  • Safriel, Yair
  • McKinstry, Robert C
  • Bokstein, Felix
  • Sorensen, Gregory
  • Gilbert, Mark R
  • Boxerman, Jerrold L
Radiology 2019 Journal Article, cited 2 times
Website
Purpose To evaluate factors contributing to interreader variation (IRV) in parameters measured at dynamic contrast material-enhanced (DCE) MRI in patients with glioblastoma who were participating in a multicenter trial. Materials and Methods A total of 18 patients (mean age, 57 years +/- 13 [standard deviation]; 10 men) who volunteered for the advanced imaging arm of ACRIN 6677, a substudy of the RTOG 0625 clinical trial for recurrent glioblastoma treatment, underwent analyzable DCE MRI at one of four centers. The 78 imaging studies were analyzed centrally to derive the volume transfer constant (K(trans)) for gadolinium between blood plasma and tissue extravascular extracellular space, fractional volume of the extracellular extravascular space (ve), and initial area under the gadolinium concentration curve (IAUGC). Two independently trained teams consisting of a neuroradiologist and a technologist segmented the enhancing tumor on three-dimensional spoiled gradient-recalled acquisition in the steady-state images. Mean and median parameter values in the enhancing tumor were extracted after registering segmentations to parameter maps. The effect of imaging time relative to treatment, map quality, imager magnet and sequence, average tumor volume, and reader variability in tumor volume on IRV was studied by using intraclass correlation coefficients (ICCs) and linear mixed models. Results Mean interreader variations (+/- standard deviation) (difference as a percentage of the mean) for mean and median IAUGC, mean and median K(trans), and median ve were 18% +/- 24, 17% +/- 23, 27% +/- 34, 16% +/- 27, and 27% +/- 34, respectively. ICCs for these metrics ranged from 0.90 to 1.0 for baseline and from 0.48 to 0.76 for posttreatment examinations. Variability in reader-derived tumor volume was significantly related to IRV for all parameters. Conclusion Differences in reader tumor segmentations are a significant source of interreader variation for all dynamic contrast-enhanced MRI parameters. (c) RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Wolf in this issue.

Pathologically-Validated Tumor Prediction Maps in MRI

  • Barrington, Alex
2019 Thesis, cited 0 times
Website
Glioblastoma (GBM) is an aggressive cancer with an average 5-year survival rate of about 5%. Following treatment with surgery, radiation, and chemotherapy, diagnosing tumor recurrence requires serial magnetic resonance imaging (MRI) scans. Infiltrative tumor cells beyond gadolinium enhancement on T1-weighted MRI are difficult to detect. This study therefore aims to improve tumor detection beyond traditional tumor margins. To accomplish this, a neural network model was trained to classify tissue samples as ‘tumor’ or ‘not tumor’. This model was then used to classify thousands of tiles from histology samples acquired at autopsy with known MRI locations on the patient’s final clinical MRI scan. This combined radiological-pathological (rad-path) dataset was then treated as a ground truth to train a second model for predicting tumor presence from MRI alone. Predictive maps were created for seven patients left out of the training steps, and tissue samples were tested to determine the model’s accuracy. The final model produced a receiver operator characteristic (ROC) area under the curve (AUC) of 0.70. This study demonstrates a new method for detecting infiltrative tumor beyond conventional radiologist defined margins based on neural networks applied to rad-path datasets in glioblastoma.

Equating quantitative emphysema measurements on different CT image reconstructions

  • Bartel, Seth T
  • Bierhals, Andrew J
  • Pilgram, Thomas K
  • Hong, Cheng
  • Schechtman, Kenneth B
  • Conradi, Susan H
  • Gierada, David S
Medical physics 2011 Journal Article, cited 15 times
Website

Removing Mixture Noise from Medical Images Using Block Matching Filtering and Low-Rank Matrix Completion

  • Barzigar, Nafise
  • Roozgard, Aminmohammad
  • Verma, Pramode K
  • Cheng, Samuel
2012 Conference Proceedings, cited 2 times
Website

Machine Intelligence for Advanced Medical Data Analysis: Manifold Learning Approach

  • Bashiri, Fereshteh Sadat
2019 Thesis, cited 0 times
Website
In the current work, linear and non-linear manifold learning techniques, specifically Principal Component Analysis (PCA) and Laplacian Eigenmaps, are studied in detail. Their applications in medical image and shape analysis are investigated. In the first contribution, a manifold learning-based multi-modal image registration technique is developed, which results in a unified intensity system through intensity transformation between the reference and sensed images. The transformation eliminates intensity variations in multi-modal medical scans and hence facilitates employing well-studied monomodal registration techniques. The method can be used for registering multi-modal images with full and partial data. Next, a manifold learning-based scale invariant global shape descriptor is introduced. The proposed descriptor benefits from the capability of Laplacian Eigenmap in dealing with high dimensional data by introducing an exponential weighting scheme. It eliminates the limitations tied to the well-known cotangent weighting scheme, namely dependency on triangular mesh representation and high intra-class quality of 3D models. In the end, a novel descriptive model for diagnostic classification of pulmonary nodules is presented. The descriptive model benefits from structural differences between benign and malignant nodules for automatic and accurate prediction of a candidate nodule. It extracts concise and discriminative features automatically from the 3D surface structure of a nodule using spectral features studied in the previous work combined with a point cloud-based deep learning network. Extensive experiments have been conducted and have shown that the proposed algorithms based on manifold learning outperform several state-of-the-art methods. Advanced computational techniques with a combination of manifold learning and deep networks can play a vital role in effective healthcare delivery by providing a framework for several fundamental tasks in image and shape processing, namely, registration, classification, and detection of features of interest.
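
For readers unfamiliar with Laplacian Eigenmaps, the sketch below embeds high-dimensional points with scikit-learn's SpectralEmbedding, which implements Laplacian Eigenmaps; the random feature matrix is a placeholder and the sketch is not taken from the thesis.

    # Minimal sketch of Laplacian Eigenmaps via scikit-learn's SpectralEmbedding.
    # The random feature matrix stands in for shape descriptors or image patches.
    import numpy as np
    from sklearn.manifold import SpectralEmbedding

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))        # 200 samples, 50-dimensional features (placeholder)

    embedding = SpectralEmbedding(n_components=2, affinity="nearest_neighbors", n_neighbors=10)
    Y = embedding.fit_transform(X)        # 200 x 2 low-dimensional coordinates
    print(Y.shape)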

Call for Data Standardization: Lessons Learned and Recommendations in an Imaging Study

  • Basu, Amrita
  • Warzel, Denise
  • Eftekhari, Aras
  • Kirby, Justin S
  • Freymann, John
  • Knable, Janice
  • Sharma, Ashish
  • Jacobs, Paula
JCO Clin Cancer Inform 2019 Journal Article, cited 0 times
Website
PURPOSE: Data sharing creates potential cost savings, supports data aggregation, and facilitates reproducibility to ensure quality research; however, data from heterogeneous systems require retrospective harmonization. This is a major hurdle for researchers who seek to leverage existing data. Efforts focused on strategies for data interoperability largely center around the use of standards but ignore the problems of competing standards and the value of existing data. Interoperability remains reliant on retrospective harmonization. Approaches to reduce this burden are needed. METHODS: The Cancer Imaging Archive (TCIA) is an example of an imaging repository that accepts data from a diversity of sources. It contains medical images from investigators worldwide and substantial nonimage data. Digital Imaging and Communications in Medicine (DICOM) standards enable querying across images, but TCIA does not enforce other standards for describing nonimage supporting data, such as treatment details and patient outcomes. In this study, we used 9 TCIA lung and brain nonimage files containing 659 fields to explore retrospective harmonization for cross-study query and aggregation. It took 329.5 hours, or 2.3 months, extended over 6 months to identify 41 overlapping fields in 3 or more files and transform 31 of them. We used the Genomic Data Commons (GDC) data elements as the target standards for harmonization. RESULTS: We characterized the issues and have developed recommendations for reducing the burden of retrospective harmonization. Once we harmonized the data, we also developed a Web tool to easily explore harmonized collections. CONCLUSION: While prospective use of standards can support interoperability, there are issues that complicate this goal. Our work recognizes and reveals retrospective harmonization issues when trying to reuse existing data and recommends national infrastructure to address these issues.

Anatomical DCE-MRI phantoms generated from glioma patient data

  • Beers, Andrew
  • Chang, Ken
  • Brown, James
  • Zhu, Xia
  • Sengupta, Dipanjan
  • Willke, Theodore L
  • Gerstner, Elizabeth
  • Rosen, Bruce
  • Kalpathy-Cramer, Jayashree
2018 Conference Proceedings, cited 0 times
Website

Multi‐site quality and variability analysis of 3D FDG PET segmentations based on phantom and clinical image data

  • Beichel, Reinhard R
  • Smith, Brian J
  • Bauer, Christian
  • Ulrich, Ethan J
  • Ahmadvand, Payam
  • Budzevich, Mikalai M
  • Gillies, Robert J
  • Goldgof, Dmitry
  • Grkovski, Milan
  • Hamarneh, Ghassan
Medical physics 2017 Journal Article, cited 7 times
Website

Data From QIN-HEADNECK

  • Beichel, R R
  • Ulrich, E J
2015 Dataset, cited 0 times

Radiogenomic analysis of hypoxia pathway is predictive of overall survival in Glioblastoma

  • Beig, Niha
  • Patel, Jay
  • Prasanna, Prateek
  • Hill, Virginia
  • Gupta, Amit
  • Correa, Ramon
  • Bera, Kaustav
  • Singh, Salendra
  • Partovi, Sasan
  • Varadan, Vinay
Scientific Reports 2018 Journal Article, cited 5 times
Website

Radiogenomic analysis of hypoxia pathway reveals computerized MRI descriptors predictive of overall survival in Glioblastoma

  • Beig, Niha
  • Patel, Jay
  • Prasanna, Prateek
  • Partovi, Sasan
  • Varadhan, Vinay
  • Madabhushi, Anant
  • Tiwari, Pallavi
2017 Conference Proceedings, cited 3 times
Website

Longitudinal fan-beam computed tomography dataset for head-and-neck squamous cell carcinoma patients

  • Bejarano, T.
  • De Ornelas-Couto, M.
  • Mihaylov, I. B.
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: To describe in detail a dataset consisting of longitudinal fan-beam computed tomography (CT) imaging to visualize anatomical changes in head-and-neck squamous cell carcinoma (HNSCC) patients throughout the radiotherapy (RT) treatment course. ACQUISITION AND VALIDATION METHODS: This dataset consists of CT images from 31 HNSCC patients who underwent volumetric modulated arc therapy (VMAT). Patients had three CT scans acquired throughout the duration of the radiation treatment course: pretreatment planning CT scans acquired a median of 13 days before treatment (range: 2-27 days), mid-treatment CT at 22 days after the start of treatment (range: 13-38 days), and post-treatment CT at 65 days after the start of treatment (range: 35-192 days). Patients received RT to a total dose of 58-70 Gy, delivered in daily fractions of 2.0-2.20 Gy over 30-35 fractions. The fan-beam CT images were acquired using a Siemens 16-slice CT scanner head protocol at 120 kV and 400 mAs. A helical scan with 1 rotation per second was used with a slice thickness of 2 mm and table increment of 1.2 mm. In addition to the imaging data, contours of anatomical structures for RT, demographic, and outcome measurements are provided. DATA FORMAT AND USAGE NOTES: The dataset with DICOM files including images, RTSTRUCT files, and RTDOSE files can be found and publicly accessed in the Cancer Imaging Archive (TCIA, http://www.cancerimagingarchive.net/) as collection Head-and-neck squamous cell carcinoma patients with CT taken during pretreatment, mid-treatment, and post-treatment (HNSCC-3DCT-RT). DISCUSSION: This is the first dataset to date in TCIA which provides a collection of multiple CT imaging studies (pretreatment, mid-treatment, and post-treatment) throughout the treatment course. The dataset can serve a wide array of research projects including (but not limited to): quantitative imaging assessment, investigation of anatomical changes with treatment progress, dosimetry of target volumes and/or normal structures due to anatomical changes occurring during treatment, investigation of RT toxicity, and concurrent chemotherapy and RT effects on head-and-neck patients.
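
The collection is distributed as DICOM image, RTSTRUCT, and RTDOSE files; the sketch below shows one possible way to inspect such files with pydicom. The file paths are hypothetical placeholders, not part of the publication or of the TCIA directory layout.

    # Minimal sketch: inspecting a CT slice and an RTSTRUCT file with pydicom.
    # Paths are hypothetical placeholders for files downloaded from TCIA.
    import pydicom

    ct = pydicom.dcmread("HNSCC-3DCT-RT/patient01/CT/slice_001.dcm")
    print(ct.SliceThickness, ct.KVP, ct.pixel_array.shape)    # e.g., 2 mm slices at 120 kV

    rtstruct = pydicom.dcmread("HNSCC-3DCT-RT/patient01/RTSTRUCT/structures.dcm")
    names = [roi.ROIName for roi in rtstruct.StructureSetROISequence]
    print(names)                                              # contoured anatomical structures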

Analysis of postprocessing steps for residue function dependent dynamic susceptibility contrast (DSC)‐MRI biomarkers and their clinical impact on glioma grading for both 1.5 and 3T

  • Bell, Laura C
  • Stokes, Ashley M
  • Quarles, C. Chad
Journal of Magnetic Resonance Imaging 2019 Journal Article, cited 0 times
Website
BACKGROUND: Dynamic susceptibility contrast (DSC)-MRI analysis pipelines differ across studies and sites, potentially confounding the clinical value and use of the derived biomarkers. PURPOSE/HYPOTHESIS: To investigate how postprocessing steps for computation of cerebral blood volume (CBV) and residue function dependent parameters (cerebral blood flow [CBF], mean transit time [MTT], capillary transit heterogeneity [CTH]) impact glioma grading. STUDY TYPE: Retrospective study from The Cancer Imaging Archive (TCIA). POPULATION: Forty-nine subjects with low- and high-grade gliomas. FIELD STRENGTH/SEQUENCE: 1.5 and 3.0T clinical systems using a single-echo echo planar imaging (EPI) acquisition. ASSESSMENT: Manual regions of interest (ROIs) were provided by TCIA and automatically segmented ROIs were generated by k-means clustering. CBV was calculated based on conventional equations. Residue function dependent biomarkers (CBF, MTT, CTH) were found by two deconvolution methods: circular discretization followed by a signal-to-noise ratio (SNR)-adapted eigenvalue thresholding (Method 1) and Volterra discretization with L-curve-based Tikhonov regularization (Method 2). STATISTICAL TESTS: Analysis of variance, receiver operating characteristics (ROC), and logistic regression tests. RESULTS: MTT alone was unable to statistically differentiate glioma grade (P > 0.139). When normalized, tumor CBF, CTH, and CBV did not differ across field strengths (P > 0.141). Biomarkers normalized to automatically segmented regions performed equally (rCTH AUROC of 0.73 compared with 0.74) or better (rCBF AUROC increases from 0.74 to 0.84; rCBV AUROC increases from 0.78 to 0.86) than manually drawn ROIs. By updating the current deconvolution steps (Method 2), rCTH can act as a classifier for glioma grade (P < 0.007), but not if processed by current conventional DSC methods (Method 1) (P > 0.577). Lastly, higher-order biomarkers (eg, rCBF and rCTH) along with rCBV increase the AUROC to 0.92 for differentiating tumor grade as compared with 0.78 and 0.86 (manual and automatic reference regions, respectively) for rCBV alone. DATA CONCLUSION: With optimized analysis pipelines, higher-order perfusion biomarkers (rCBF and rCTH) improve glioma grading as compared with CBV alone. Additionally, postprocessing steps impact thresholds needed for glioma grading. LEVEL OF EVIDENCE: 3 Technical Efficacy: Stage 2
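
The residue-function estimation discussed above is a regularized deconvolution problem; the sketch below illustrates the general idea with a plain Tikhonov-regularized least-squares solve in NumPy. It is a simplified stand-in, not the Volterra/L-curve pipeline evaluated in the study, and the curves and regularization weight are synthetic placeholders.

    # Minimal sketch of Tikhonov-regularized deconvolution: recover a residue-like
    # function r from a tissue curve c = A @ r, where A is a convolution matrix built
    # from an arterial input function (AIF). Simplified stand-in for DSC-MRI pipelines.
    import numpy as np

    def tikhonov_deconvolve(aif, tissue, dt, lam):
        n = len(aif)
        A = np.zeros((n, n))
        for i in range(n):
            A[i, : i + 1] = aif[i::-1] * dt          # lower-triangular convolution matrix
        lhs = A.T @ A + (lam ** 2) * np.eye(n)       # (A^T A + lam^2 I) r = A^T c
        return np.linalg.solve(lhs, A.T @ tissue)

    dt = 1.0                                          # seconds per sample (placeholder)
    t = np.arange(0, 60, dt)
    aif = np.exp(-((t - 15) ** 2) / 20)               # synthetic arterial input function
    true_r = np.exp(-t / 8)                           # synthetic residue function
    tissue = np.convolve(aif, true_r)[: len(t)] * dt  # forward model: tissue curve
    r_est = tikhonov_deconvolve(aif, tissue, dt=dt, lam=0.1)
    print(r_est[:5])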

Automatic Design of Window Operators for the Segmentation of the Prostate Gland in Magnetic Resonance Images

  • Benalcázar, Marco E
  • Brun, Marcel
  • Ballarin, Virginia
2015 Conference Proceedings, cited 0 times
Website

Overview of the American Society for Radiation Oncology–National Institutes of Health–American Association of Physicists in Medicine Workshop 2015: Exploring Opportunities for Radiation Oncology in the Era of Big Data

  • Benedict, Stanley H
  • Hoffman, Karen
  • Martel, Mary K
  • Abernethy, Amy P
  • Asher, Anthony L
  • Capala, Jacek
  • Chen, Ronald C
  • Chera, Bhisham
  • Couch, Jennifer
  • Deye, James
International Journal of Radiation Oncology • Biology • Physics 2016 Journal Article, cited 0 times

Segmentation of three-dimensional images with parametric active surfaces and topology changes

  • Benninghoff, Heike
  • Garcke, Harald
Journal of Scientific Computing 2017 Journal Article, cited 1 times
Website

Pulmonary nodule detection using a cascaded SVM classifier

  • Bergtholdt, Martin
  • Wiemker, Rafael
  • Klinder, Tobias
2016 Conference Proceedings, cited 9 times
Website

Deep-learning framework to detect lung abnormality – A study with chest X-Ray and lung CT scan images

  • Bhandary, Abhir
  • Prabhu, G. Ananth
  • Rajinikanth, V.
  • Thanaraj, K. Palani
  • Satapathy, Suresh Chandra
  • Robbins, David E.
  • Shasky, Charles
  • Zhang, Yu-Dong
  • Tavares, João Manuel R. S.
  • Raja, N. Sri Madhava
Pattern Recognition Letters 2020 Journal Article, cited 0 times
Website
Lung abnormalities are highly risky conditions in humans. The early diagnosis of lung abnormalities is essential to reduce the risk by enabling quick and efficient treatment. This research work aims to propose a Deep-Learning (DL) framework to examine lung pneumonia and cancer. This work proposes two different DL techniques to assess the considered problem: (i) The initial DL method, named a modified AlexNet (MAN), is proposed to classify chest X-Ray images into normal and pneumonia classes. In the MAN, the classification is implemented using a Support Vector Machine (SVM), and its performance is compared against Softmax. Further, its performance is validated with other pre-trained DL techniques, such as AlexNet, VGG16, VGG19 and ResNet50. (ii) The second DL work implements a fusion of handcrafted and learned features in the MAN to improve classification accuracy during lung cancer assessment. This work employs serial fusion and Principal Component Analysis (PCA)-based feature selection to enhance the feature vector. The performance of this DL framework is tested using benchmark lung cancer CT images of LIDC-IDRI, and a classification accuracy of 97.27% is attained.
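
The "learned plus handcrafted features, PCA, then a classifier" pattern described above can be sketched generically with scikit-learn as follows; the random arrays stand in for CNN activations and texture descriptors, and this is not the authors' implementation.

    # Minimal sketch of serial feature fusion followed by PCA and an SVM classifier.
    # Random arrays stand in for CNN-derived and handcrafted (e.g., texture) features.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    deep_feats = rng.normal(size=(300, 4096))     # e.g., penultimate-layer CNN activations
    handcrafted = rng.normal(size=(300, 60))      # e.g., texture/shape descriptors
    labels = rng.integers(0, 2, size=300)         # benign vs. malignant (placeholder)

    fused = np.hstack([deep_feats, handcrafted])  # serial (concatenation) fusion
    X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3, random_state=0)

    clf = make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))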

G-DOC Plus–an integrative bioinformatics platform for precision medicine

  • Bhuvaneshwar, Krithika
  • Belouali, Anas
  • Singh, Varun
  • Johnson, Robert M
  • Song, Lei
  • Alaoui, Adil
  • Harris, Michael A
  • Clarke, Robert
  • Weiner, Louis M
  • Gusev, Yuriy
BMC bioinformatics 2016 Journal Article, cited 14 times
Website

Artificial intelligence in cancer imaging: Clinical challenges and applications

  • Bi, Wenya Linda
  • Hosny, Ahmed
  • Schabath, Matthew B
  • Giger, Maryellen L
  • Birkbak, Nicolai J
  • Mehrtash, Alireza
  • Allison, Tavis
  • Arnaout, Omar
  • Abbosh, Christopher
  • Dunn, Ian F
CA: a cancer journal for clinicians 2019 Journal Article, cited 0 times
Website

A comparison of ground truth estimation methods

  • Biancardi, Alberto M
  • Jirapatnakul, Artit C
  • Reeves, Anthony P
International journal of computer assisted radiology and surgery 2010 Journal Article, cited 17 times
Website

Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views

  • Bier, B.
  • Goldmann, F.
  • Zaech, J. N.
  • Fotouhi, J.
  • Hegeman, R.
  • Grupp, R.
  • Armand, M.
  • Osgood, G.
  • Navab, N.
  • Maier, A.
  • Unberath, M.
Int J Comput Assist Radiol Surg 2019 Journal Article, cited 0 times
Website
Purpose Minimally invasive alternatives are now available for many complex surgeries. These approaches are enabled by the increasing availability of intra-operative image guidance. Yet, fluoroscopic X-rays suffer from projective transformation and thus cannot provide direct views onto anatomy. Surgeons could highly benefit from additional information, such as the anatomical landmark locations in the projections, to support intra-operative decision making. However, detecting landmarks is challenging since the viewing direction changes substantially between views leading to varying appearance of the same landmark. Therefore, and to the best of our knowledge, view-independent anatomical landmark detection has not been investigated yet. Methods In this work, we propose a novel approach to detect multiple anatomical landmarks in X-ray images from arbitrary viewing directions. To this end, a sequential prediction framework based on convolutional neural networks is employed to simultaneously regress all landmark locations. For training, synthetic X-rays are generated with a physically accurate forward model that allows direct application of the trained model to real X-ray images of the pelvis. View invariance is achieved via data augmentation by sampling viewing angles on a spherical segment of 120∘×90∘ . Results On synthetic data, a mean prediction error of 5.6 ± 4.5 mm is achieved. Further, we demonstrate that the trained model can be directly applied to real X-rays and show that these detections define correspondences to a respective CT volume, which allows for analytic estimation of the 11 degree of freedom projective mapping. Conclusion We present the first tool to detect anatomical landmarks in X-ray images independent of their viewing direction. Access to this information during surgery may benefit decision making and constitutes a first step toward global initialization of 2D/3D registration without the need of calibration. As such, the proposed concept has a strong prospect to facilitate and enhance applications and methods in the realm of image-guided surgery.

Performance analysis of unsupervised optimal fuzzy clustering algorithm for MRI brain tumor segmentation

  • Blessy, SA Praylin Selva
  • Sulochana, C Helen
Technology and Health Care 2014 Journal Article, cited 0 times
Website

CT Colonography: External Clinical Validation of an Algorithm for Computer-assisted Prone and Supine Registration

  • Boone, Darren J
  • Halligan, Steve
  • Roth, Holger R
  • Hampshire, Tom E
  • Helbren, Emma
  • Slabaugh, Greg G
  • McQuillan, Justine
  • McClelland, Jamie R
  • Hu, Mingxing
  • Punwani, Shonit
Radiology 2013 Journal Article, cited 5 times
Website

Solid Indeterminate Nodules with a Radiological Stability Suggesting Benignity: A Texture Analysis of Computed Tomography Images Based on the Kurtosis and Skewness of the Nodule Volume Density Histogram

  • Borguezan, Bruno Max
  • Lopes, Agnaldo José
  • Saito, Eduardo Haruo
  • Higa, Claudio
  • Silva, Aristófanes Corrêa
  • Nunes, Rodolfo Acatauassú
Pulmonary Medicine 2019 Journal Article, cited 0 times
Website
BACKGROUND: The number of incidental findings of pulmonary nodules using imaging methods to diagnose other thoracic or extrathoracic conditions has increased, suggesting the need for in-depth radiological image analyses to identify nodule type and avoid unnecessary invasive procedures. OBJECTIVES:The present study evaluated solid indeterminate nodules with a radiological stability suggesting benignity (SINRSBs) through a texture analysis of computed tomography (CT) images. METHODS: A total of 100 chest CT scans were evaluated, including 50 cases of SINRSBs and 50 cases of malignant nodules. SINRSB CT scans were performed using the same noncontrast enhanced CT protocol and equipment; the malignant nodule data were acquired from several databases. The kurtosis (KUR) and skewness (SKW) values of these tests were determined for the whole volume of each nodule, and the histograms were classified into two basic patterns: peaks or plateaus. RESULTS: The mean (MEN) KUR values of the SINRSBs and malignant nodules were 3.37 ± 3.88 and 5.88 ± 5.11, respectively. The receiver operating characteristic (ROC) curve showed that the sensitivity and specificity for distinguishing SINRSBs from malignant nodules were 65% and 66% for KUR values >6, respectively, with an area under the curve (AUC) of 0.709 (p< 0.0001). The MEN SKW values of the SINRSBs and malignant nodules were 1.73 ± 0.94 and 2.07 ± 1.01, respectively. The ROC curve showed that the sensitivity and specificity for distinguishing malignant nodules from SINRSBs were 65% and 66% for SKW values >3.1, respectively, with an AUC of 0.709 (p < 0.0001). An analysis of the peak and plateau histograms revealed sensitivity, specificity, and accuracy values of 84%, 74%, and 79%, respectively. CONCLUSION: KUR, SKW, and histogram shape can help to noninvasively diagnose SINRSBs but should not be used alone or without considering clinical data.
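
The kurtosis (KUR) and skewness (SKW) of a nodule's voxel-density distribution used above can be computed with SciPy as in the sketch below; the synthetic Hounsfield-unit values are placeholders for voxels inside a segmented nodule, and the paper's exact kurtosis convention is not assumed.

    # Minimal sketch: kurtosis (KUR) and skewness (SKW) of a nodule's voxel-density
    # distribution. The synthetic HU values are placeholders for a segmented nodule.
    import numpy as np
    from scipy.stats import kurtosis, skew

    rng = np.random.default_rng(0)
    nodule_hu = rng.normal(loc=-50, scale=120, size=5000)   # Hounsfield units (placeholder)

    kur = kurtosis(nodule_hu, fisher=False)   # Pearson kurtosis (normal distribution = 3)
    skw = skew(nodule_hu)
    print(f"KUR = {kur:.2f}, SKW = {skw:.2f}")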

Data From Head-Neck Cetuximab

  • Bosch, Walter R.
  • Straube, William L.
  • Matthews, John W.
  • Purdy, James A.
2015 Dataset, cited 0 times
This collection combines advanced molecular imaging treatment response assessment through pre- and post-treatment FDG PET/CT scans with therapy of advanced head and neck cancer, including chemo-radiation therapy with and without addition of an EGFR inhibitor molecular targeted agent (Cetuximab). The Head-Neck Cetuximab collection consists of a subset of image data from RTOG 0522/ACRIN 4500, a randomized phase III trial of radiation therapy and chemotherapy for stage III and IV head and neck carcinomas. The RTOG 0522/ACRIN 4500 protocols were activated in November 2005 and successfully completed accrual of 945 patients in 2009. As part of the RTOG 0522 trial, institutions had the option to join the RTOG 0522/ACRIN 4500 imaging study. The post-treatment FDG PET/CT scan was performed 8-9 weeks after completion of treatment, before any nodal dissection. For this reason, the data were provided through two independent channels: RTOG 0522 (CT, structures, RT doses, and RT plans, sent to the ITC) and ACRIN 4500 (quantitative PET and PET/CT, sent to ACRIN). For more information about the original aims of this trial, please see the oral abstract J Clin Oncol 29: 2011 (suppl; abstr 5500): https://meetinglibrary.asco.org/record/63118/video

DOIs for DICOM Raw Images: Enabling Science Reproducibility

  • Bourne, Philip E.
Radiology 2015 Journal Article, cited 3 times
Website

Radiogenomics of Clear Cell Renal Cell Carcinoma: Associations Between mRNA-Based Subtyping and CT Imaging Features

  • Bowen, Lan
  • Xiaojing, Li
Academic radiology 2018 Journal Article, cited 0 times
Website

Singular value decomposition using block least mean square method for image denoising and compression

  • Boyat, Ajay Kumar
  • Khare, Parth
2015 Conference Proceedings, cited 1 times
Website

Association of Peritumoral Radiomics With Tumor Biology and Pathologic Response to Preoperative Targeted Therapy for HER2 (ERBB2)-Positive Breast Cancer

  • Braman, Nathaniel
  • Prasanna, Prateek
  • Whitney, Jon
  • Singh, Salendra
  • Beig, Niha
  • Etesami, Maryam
  • Bates, David D. B.
  • Gallagher, Katherine
  • Bloch, B. Nicolas
  • Vulchi, Manasa
  • Turk, Paulette
  • Bera, Kaustav
  • Abraham, Jame
  • Sikov, William M.
  • Somlo, George
  • Harris, Lyndsay N.
  • Gilmore, Hannah
  • Plecha, Donna
  • Varadan, Vinay
  • Madabhushi, Anant
JAMA Netw Open 2019 Journal Article, cited 0 times
Website
Importance There has been significant recent interest in understanding the utility of quantitative imaging to delineate breast cancer intrinsic biological factors and therapeutic response. No clinically accepted biomarkers are as yet available for estimation of response to human epidermal growth factor receptor 2 (currently known as ERBB2, but referred to as HER2 in this study)–targeted therapy in breast cancer. Objective To determine whether imaging signatures on clinical breast magnetic resonance imaging (MRI) could noninvasively characterize HER2-positive tumor biological factors and estimate response to HER2-targeted neoadjuvant therapy. Design, Setting, and Participants In a retrospective diagnostic study encompassing 209 patients with breast cancer, textural imaging features extracted within the tumor and annular peritumoral tissue regions on MRI were examined as a means to identify increasingly granular breast cancer subgroups relevant to therapeutic approach and response. First, among a cohort of 117 patients who received an MRI prior to neoadjuvant chemotherapy (NAC) at a single institution from April 27, 2012, through September 4, 2015, imaging features that distinguished HER2+ tumors from other receptor subtypes were identified. Next, among a cohort of 42 patients with HER2+ breast cancers with available MRI and RNaseq data accumulated from a multicenter, preoperative clinical trial (BrUOG 211B), a signature of the response-associated HER2-enriched (HER2-E) molecular subtype within HER2+ tumors (n = 42) was identified. The association of this signature with pathologic complete response was explored in 2 patient cohorts from different institutions, where all patients received HER2-targeted NAC (n = 28, n = 50). Finally, the association between significant peritumoral features and lymphocyte distribution was explored in patients within the BrUOG 211B trial who had corresponding biopsy hematoxylin-eosin–stained slide images. Data analysis was conducted from January 15, 2017, to February 14, 2019. Main Outcomes and Measures Evaluation of imaging signatures by the area under the receiver operating characteristic curve (AUC) in identifying HER2+ molecular subtypes and distinguishing pathologic complete response (ypT0/is) to NAC with HER2-targeting. Results In the 209 patients included (mean [SD] age, 51.1 [11.7] years), features from the peritumoral regions better discriminated HER2-E tumors (maximum AUC, 0.85; 95% CI, 0.79-0.90; 9-12 mm from the tumor) compared with intratumoral features (AUC, 0.76; 95% CI, 0.69-0.84). A classifier combining peritumoral and intratumoral features identified the HER2-E subtype (AUC, 0.89; 95% CI, 0.84-0.93) and was significantly associated with response to HER2-targeted therapy in both validation cohorts (AUC, 0.80; 95% CI, 0.61-0.98 and AUC, 0.69; 95% CI, 0.53-0.84). Features from the 0- to 3-mm peritumoral region were significantly associated with the density of tumor-infiltrating lymphocytes (R2 = 0.57; 95% CI, 0.39-0.75; P = .002). Conclusions and Relevance A combination of peritumoral and intratumoral characteristics appears to identify intrinsic molecular subtypes of HER2+ breast cancers from imaging, offering insights into immune response within the peritumoral environment and suggesting potential benefit for treatment guidance.
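
One common way to define annular peritumoral regions such as the 0-3 mm and 3-6 mm rings analysed above is repeated binary dilation of the tumor mask, sketched below with SciPy; the mask and pixel spacing are synthetic placeholders and the study's exact region definition may differ.

    # Minimal sketch: annular peritumoral rings built by binary dilation of a tumor mask.
    import numpy as np
    from scipy.ndimage import binary_dilation

    mask = np.zeros((128, 128), dtype=bool)
    mask[54:74, 54:74] = True                 # placeholder tumor mask
    pixel_mm = 1.0                            # in-plane resolution (placeholder)

    def annulus(tumor_mask, inner_mm, outer_mm, pixel_mm):
        """Peritumoral ring between inner_mm and outer_mm outside the tumor boundary."""
        inner_it = int(round(inner_mm / pixel_mm))
        outer_it = int(round(outer_mm / pixel_mm))
        inner = binary_dilation(tumor_mask, iterations=inner_it) if inner_it > 0 else tumor_mask
        outer = binary_dilation(tumor_mask, iterations=outer_it)
        return outer & ~inner

    ring_0_3 = annulus(mask, 0, 3, pixel_mm)
    ring_3_6 = annulus(mask, 3, 6, pixel_mm)
    print(ring_0_3.sum(), ring_3_6.sum())     # pixel counts in each ring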

A volumetric technique for fossil body mass estimation applied to Australopithecus afarensis

  • Brassey, Charlotte A
  • O'Mahoney, Thomas G
  • Chamberlain, Andrew T
  • Sellers, William I
Journal of human evolution 2018 Journal Article, cited 3 times
Website

Constructing 3D-Printable CAD Models of Prostates from MR Images

  • Breseman, Kelsey
  • Lee, Christopher
  • Bloch, B Nicholas
  • Jaffe, Carl
2013 Conference Proceedings, cited 1 times
Website

An ensemble learning approach for brain cancer detection exploiting radiomic features

  • Brunese, Luca
  • Mercaldo, Francesco
  • Reginelli, Alfonso
  • Santone, Antonella
Comput Methods Programs Biomed 2019 Journal Article, cited 1 times
Website
BACKGROUND AND OBJECTIVE: Brain cancer is one of the most aggressive tumours: 70% of patients diagnosed with this malignant cancer will not survive. Early detection of brain tumours can be fundamental to increasing survival rates. Brain cancers are classified into four different grades (i.e., I, II, III and IV) according to how normal or abnormal the brain cells look. The following work aims to recognize the different brain cancer grades by analysing brain magnetic resonance images. METHODS: A method to identify the components of an ensemble learner is proposed. The ensemble learner is focused on the discrimination between different brain cancer grades using non-invasive radiomic features. The considered radiomic features belong to five different groups: First Order, Shape, Gray Level Co-occurrence Matrix, Gray Level Run Length Matrix and Gray Level Size Zone Matrix. We evaluate the features' effectiveness through hypothesis testing, decision boundaries, performance analysis and calibration plots, and thus select the best candidate classifiers for the ensemble learner. RESULTS: We evaluate the proposed method with 111,205 brain magnetic resonance images belonging to two freely available research data-sets. The results are encouraging: we obtain an accuracy of 99% for the detection of benign grade I and malignant grade II, III and IV brain cancers. CONCLUSION: The experimental results confirm that the ensemble learner designed with the proposed method outperforms the current state-of-the-art approaches in brain cancer grade detection starting from magnetic resonance images.
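
A generic ensemble of classifiers over radiomic features, in the spirit of the method above, can be sketched with scikit-learn as follows; the feature matrix is a random placeholder and the member classifiers are illustrative choices rather than the authors' selected components.

    # Minimal sketch of a soft-voting ensemble over radiomic features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 40))            # e.g., first-order/GLCM/GLRLM/GLSZM features
    y = rng.integers(0, 2, size=400)          # grade label (placeholder)

    ensemble = VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("svm", SVC(probability=True, random_state=0)),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        voting="soft",
    )
    print(cross_val_score(ensemble, X, y, cv=5).mean())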

Quantitative variations in texture analysis features dependent on MRI scanning parameters: A phantom model

  • Buch, Karen
  • Kuno, Hirofumi
  • Qureshi, Muhammad M
  • Li, Baojun
  • Sakai, Osamu
Journal of applied clinical medical physics 2018 Journal Article, cited 0 times
Website

Quantitative Imaging Biomarker Ontology (QIBO) for Knowledge Representation of Biomedical Imaging Biomarkers

  • Buckler, Andrew J.
  • Ouellette, M.
  • Danagoulian, J.
  • Wernsing, G.
  • Liu, Tiffany Ting
  • Savig, Erica
  • Suzek, Baris E.
  • Rubin, Daniel L.
  • Paik, David
Journal of Digital Imaging 2013 Journal Article, cited 17 times
Website

Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm

  • Buda, Mateusz
  • Saha, Ashirbani
  • Mazurowski, Maciej A
Computers in biology and medicine 2019 Journal Article, cited 1 times
Website
Recent analysis identified distinct genomic subtypes of lower-grade glioma tumors which are associated with shape features. In this study, we propose a fully automatic way to quantify tumor imaging characteristics using deep learning-based segmentation and test whether these characteristics are predictive of tumor genomic subtypes. We used preoperative imaging and genomic data of 110 patients from 5 institutions with lower-grade gliomas from The Cancer Genome Atlas. Based on automatic deep learning segmentations, we extracted three features which quantify two-dimensional and three-dimensional characteristics of the tumors. Genomic data for the analyzed cohort of patients consisted of previously identified genomic clusters based on IDH mutation and 1p/19q co-deletion, DNA methylation, gene expression, DNA copy number, and microRNA expression. To analyze the relationship between the imaging features and genomic clusters, we conducted the Fisher exact test for 10 hypotheses for each pair of imaging feature and genomic subtype. To account for multiple hypothesis testing, we applied a Bonferroni correction. P-values lower than 0.005 were considered statistically significant. We found the strongest association between RNASeq clusters and the bounding ellipsoid volume ratio (p < 0.0002) and between RNASeq clusters and margin fluctuation (p < 0.005). In addition, we identified associations between bounding ellipsoid volume ratio and all tested molecular subtypes (p < 0.02) as well as between angular standard deviation and RNASeq cluster (p < 0.02). In terms of automatic tumor segmentation that was used to generate the quantitative image characteristics, our deep learning algorithm achieved a mean Dice coefficient of 82% which is comparable to human performance.
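
The association testing described above (Fisher exact tests with a Bonferroni-corrected threshold of 0.005) can be sketched as follows; the contingency tables are made-up placeholders, not the study's data.

    # Minimal sketch: Fisher exact tests between a dichotomized imaging feature and
    # genomic cluster membership, with a Bonferroni-adjusted significance threshold.
    from scipy.stats import fisher_exact

    # Each 2x2 table: rows = feature high/low, columns = in cluster / not in cluster.
    tables = {
        "bounding_ellipsoid_volume_ratio": [[20, 8], [10, 25]],
        "margin_fluctuation": [[18, 12], [14, 19]],
        "angular_standard_deviation": [[15, 15], [16, 17]],
    }

    n_tests = 10                      # number of hypotheses tested
    alpha = 0.05 / n_tests            # Bonferroni-corrected threshold (0.005)

    for name, table in tables.items():
        _, p = fisher_exact(table)
        print(f"{name}: p = {p:.4f}, significant = {p < alpha}")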

Comparing nonrigid registration techniques for motion corrected MR prostate diffusion imaging

  • Buerger, C
  • Sénégas, J
  • Kabus, S
  • Carolus, H
  • Schulz, H
  • Agarwal, H
  • Turkbey, B
  • Choyke, PL
  • Renisch, S
Medical physics 2015 Journal Article, cited 4 times
Website

Using computer‐extracted image phenotypes from tumors on breast magnetic resonance imaging to predict breast cancer pathologic stage

  • Burnside, Elizabeth S
  • Drukker, Karen
  • Li, Hui
  • Bonaccio, Ermelinda
  • Zuley, Margarita
  • Ganott, Marie
  • Net, Jose M
  • Sutton, Elizabeth J
  • Brandt, Kathleen R
  • Whitman, Gary J
Cancer 2016 Journal Article, cited 28 times
Website

Medical Image Retrieval Based on Convolutional Neural Network and Supervised Hashing

  • Cai, Yiheng
  • Li, Yuanyuan
  • Qiu, Changyan
  • Ma, Jie
  • Gao, Xurong
IEEE Access 2019 Journal Article, cited 0 times
Website
In recent years, with extensive application in image retrieval and other tasks, a convolutional neural network (CNN) has achieved outstanding performance. In this paper, a new content-based medical image retrieval (CBMIR) framework using CNN and hash coding is proposed. The new framework adopts a Siamese network in which pairs of images are used as inputs, and a model is learned to make images belonging to the same class have similar features by using weight sharing and a contrastive loss function. In each branch of the network, CNN is adapted to extract features, followed by hash mapping, which is used to reduce the dimensionality of feature vectors. In the training process, a new loss function is designed to make the feature vectors more distinguishable, and a regularization term is added to encourage the real value outputs to approximate the desired binary values. In the retrieval phase, the compact binary hash code of the query image is achieved from the trained network and is subsequently compared with the hash codes of the database images. We experimented on two medical image datasets: the cancer imaging archive-computed tomography (TCIA-CT) and the vision and image analysis group/international early lung cancer action program (VIA/I-ELCAP). The results indicate that our method is superior to existing hash methods and CNN methods. Compared with the traditional hashing method, feature extraction based on CNN has advantages. The proposed algorithm combining a Siamese network with the hash method is superior to the classical CNN-based methods. The application of a new loss function can effectively improve retrieval accuracy.
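
A compact illustration of the contrastive loss commonly used to train Siamese networks is given below (one standard form, not necessarily the exact loss of this paper); the embedding vectors are random placeholders.

    # Minimal sketch of a contrastive loss: pairs from the same class are pulled
    # together, pairs from different classes are pushed apart up to a margin.
    import numpy as np

    def contrastive_loss(f1, f2, same, margin=1.0):
        """f1, f2: (n, d) embeddings; same: (n,) 1 if the pair shares a class, else 0."""
        d = np.linalg.norm(f1 - f2, axis=1)                 # Euclidean distance per pair
        loss = same * d ** 2 + (1 - same) * np.maximum(margin - d, 0.0) ** 2
        return loss.mean()

    rng = np.random.default_rng(0)
    f1, f2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
    same = rng.integers(0, 2, size=8)
    print(contrastive_loss(f1, f2, same))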

Image Area Reduction for Efficient Medical Image Retrieval

  • Camlica, Zehra
2015 Thesis, cited 0 times
Website

PARaDIM - A PHITS-based Monte Carlo tool for internal dosimetry with tetrahedral mesh computational phantoms

  • Carter, L. M.
  • Crawford, T. M.
  • Sato, T.
  • Furuta, T.
  • Choi, C.
  • Kim, C. H.
  • Brown, J. L.
  • Bolch, W. E.
  • Zanzonico, P. B.
  • Lewis, J. S.
J Nucl Med 2019 Journal Article, cited 0 times
Website
Mesh-type and voxel-based computational phantoms comprise the current state-of-the-art for internal dose assessment via Monte Carlo simulations, but excel in different aspects, with mesh-type phantoms offering advantages over their voxel counterparts in terms of their flexibility and realistic representation of detailed patient- or subject-specific anatomy. We have developed PARaDIM, a freeware application for implementing tetrahedral mesh-type phantoms in absorbed dose calculations via the Particle and Heavy Ion Transport code System (PHITS). It considers all medically relevant radionuclides including alpha, beta, gamma, positron, and Auger/conversion electron emitters, and handles calculation of mean dose to individual regions, as well as 3D dose distributions for visualization and analysis in a variety of medical imaging softwares. This work describes the development of PARaDIM, documents the measures taken to test and validate its performance, and presents examples to illustrate its uses. Methods: Human, small animal, and cell-level dose calculations were performed with PARaDIM and the results compared with those of widely accepted dosimetry programs and literature data. Several tetrahedral phantoms were developed or adapted using computer-aided modeling techniques for these comparisons. Results: For human dose calculations, agreement of PARaDIM with OLINDA 2.0 was good - within 10-20% for most organs - despite geometric differences among the phantoms tested. Agreement with MIRDcell for cell-level S-value calculations was within 5% in most cases. Conclusion: PARaDIM extends the use of Monte Carlo dose calculations to the broader community in nuclear medicine by providing a user-friendly graphical user interface for calculation setup and execution. PARaDIM leverages the enhanced anatomical realism provided by advanced computational reference phantoms or bespoke image-derived phantoms to enable improved assessments of radiation doses in a variety of radiopharmaceutical use cases, research, and preclinical development.

Selección de un algoritmo para la clasificación de Nódulos Pulmonares Solitarios

  • Castro, Arelys Rivero
  • Correa, Luis Manuel Cruz
  • Lezcano, Jeffrey Artiles
Revista Cubana de Informática Médica 2016 Journal Article, cited 0 times
Website

MRI volume changes of axillary lymph nodes as predictor of pathological complete responses to neoadjuvant chemotherapy in breast cancer

  • Cattell, Renee F.
  • Kang, James J.
  • Ren, Thomas
  • Huang, Pauline B.
  • Muttreja, Ashima
  • Dacosta, Sarah
  • Li, Haifang
  • Baer, Lea
  • Clouston, Sean
  • Palermo, Roxanne
  • Fisher, Paul
  • Bernstein, Cliff
  • Cohen, Jules A.
  • Duong, Tim Q.
Clinical Breast Cancer 2019 Journal Article, cited 0 times
Website
Introduction Longitudinal monitoring of breast tumor volume over the course of chemotherapy is informative of pathological response. This study aims to determine whether axillary lymph node (aLN) volume by MRI could augment the prediction accuracy of treatment response to neoadjuvant chemotherapy (NAC). Materials and Methods Level-2a curated data from I-SPY-1 TRIAL (2002-2006) were used. Patients had stage 2 or 3 breast cancer. MRI was acquired pre-, during and post-NAC. A subset with visible aLN on MRI was identified (N=132). Prediction of pathological complete response (PCR) was made using breast tumor volume changes, nodal volume changes, and combined breast tumor and nodal volume changes with sub-stratification with and without large lymph nodes (3mL or ∼1.79cm diameter cutoff). Receiver-operator-curve analysis was used to quantify prediction performance. Results Rate of change of aLN and breast tumor volume were informative of pathological response, with prediction being most informative early in treatment (AUC: 0.63-0.82) compared to later in treatment (AUC: 0.50-0.73). Larger aLN volume was associated with hormone receptor negativity, with the largest nodal volume for triple negative subtypes. Sub-stratification by node size improved predictive performance, with the best predictive model for large nodes having AUC of 0.82. Conclusion Axillary lymph node MRI offers clinically relevant information and has the potential to predict treatment response to neoadjuvant chemotherapy in breast cancer patients.

Highly accurate model for prediction of lung nodule malignancy with CT scans

  • Causey, Jason L
  • Zhang, Junyu
  • Ma, Shiqian
  • Jiang, Bo
  • Qualls, Jake A
  • Politte, David G
  • Prior, Fred
  • Zhang, Shuzhong
  • Huang, Xiuzhen
Scientific Reports 2018 Journal Article, cited 5 times
Website

Renal cell carcinoma: predicting RUNX3 methylation level and its consequences on survival with CT features

  • Dongzhi Cen
  • Li Xu
  • Siwei Zhang
  • Zhiguang Chen
  • Yan Huang
  • Ziqi Li
  • Bo Liang
European Radiology 2019 Journal Article, cited 0 times
Website
PURPOSE: To investigate associations between CT imaging features, RUNX3 methylation level, and survival in clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: Patients were divided into high RUNX3 methylation and low RUNX3 methylation groups according to RUNX3 methylation levels (the threshold was identified by using X-tile). The CT scanning data from 106 ccRCC patients were retrospectively analyzed. The relationship between RUNX3 methylation level and overall survival was evaluated using Kaplan-Meier analysis and Cox regression analysis (univariate and multivariate). The relationship between RUNX3 methylation level and CT features was evaluated using the chi-square test and logistic regression analysis (univariate and multivariate). RESULTS: A beta value cutoff of 0.53 was used to distinguish high methylation (N = 44) from low methylation (N = 62) tumors. Patients with lower levels of methylation had longer median overall survival (49.3 vs. 28.4 months; low vs. high, adjusted hazard ratio [HR] 4.933, 95% CI 2.054-11.852, p < 0.001). On univariate logistic regression analysis, four risk factors (margin, side, long diameter, and intratumoral vascularity) were associated with RUNX3 methylation level (all p < 0.05). Multivariate logistic regression analysis found that three risk factors (side: left vs. right, odds ratio [OR] 2.696; p = 0.024; 95% CI 1.138-6.386; margin: ill-defined vs. well-defined, OR 2.685; p = 0.038; 95% CI 1.057-6.820; and intratumoral vascularity: yes vs. no, OR 3.286; p = 0.008; 95% CI 1.367-7.898) were significant independent predictors of high methylation tumors. This model had an area under the receiver operating characteristic curve (AUC) of 0.725 (95% CI 0.623-0.827). CONCLUSIONS: Higher levels of RUNX3 methylation are associated with shorter survival in ccRCC patients, and presence of intratumoral vascularity, ill-defined margin, and left side tumor were significant independent predictors of a high methylation level of the RUNX3 gene. KEY POINTS: * RUNX3 methylation level is negatively associated with overall survival in ccRCC patients. * Presence of intratumoral vascularity, ill-defined margin, and left side tumor were significant independent predictors of high methylation level of RUNX3 gene.

Segmentation, tracking, and kinematics of lung parenchyma and lung tumors from 4D CT with application to radiation treatment planning

  • Cha, Jungwon
2018 Thesis, cited 0 times
Website
This thesis is concerned with development of techniques for efficient computerized analysis of 4-D CT data. The goal is to have a highly automated approach to segmentation of the lung boundary and lung nodules inside the lung. The determination of exact lung tumor location over space and time by image segmentation is an essential step to track thoracic malignancies. Accurate image segmentation helps clinical experts examine the anatomy and structure and determine the disease progress. Since 4-D CT provides structural and anatomical information during tidal breathing, we use the same data to also measure mechanical properties related to deformation of the lung tissue including Jacobian and strain at high resolutions and as a function of time. Radiation Treatment of patients with lung cancer can benefit from knowledge of these measures of regional ventilation. Graph-cuts techniques have been popular for image segmentation since they are able to treat highly textured data via robust global optimization, avoiding local minima in graph based optimization. The graph-cuts methods have been used to extract globally optimal boundaries from images by s/t cut, with energy function based on model-specific visual cues, and useful topological constraints. The method makes N-dimensional globally optimal segmentation possible with good computational efficiency. Even though the graph-cuts method can extract objects where there is a clear intensity difference, segmentation of organs or tumors pose a challenge. For organ segmentation, many segmentation methods using a shape prior have been proposed. However, in the case of lung tumors, the shape varies from patient to patient, and with location. In this thesis, we use a shape prior for tumors through a training step and PCA analysis based on the Active Shape Model (ASM). The method has been tested on real patient data from the Brown Cancer Center at the University of Louisville. We performed temporal B-spline deformable registration of the 4-D CT data - this yielded 3-D deformation fields between successive respiratory phases from which measures of regional lung function were determined. During the respiratory cycle, the lung volume changes and five different lobes of the lung (two in the left and three in the right lung) show different deformation yielding different strain and Jacobian maps. In this thesis, we determine the regional lung mechanics in the Lagrangian frame of reference through different respiratory phases, for example, Phase10 to 20, Phase10 to 30, Phase10 to 40, and Phase10 to 50. Single photon emission computed tomography (SPECT) lung imaging using radioactive tracers with SPECT ventilation and SPECT perfusion imaging also provides functional information. As part of an IRB-approved study therefore, we registered the max-inhale CT volume to both VSPECT and QSPECT data sets using the Demon's non-rigid registration algorithm in patient subjects. Subsequently, statistical correlation between CT ventilation images (Jacobian and strain values), with both VSPECT and QSPECT was undertaken. Through statistical analysis with the Spearman's rank correlation coefficient, we found that Jacobian values have the highest correlation with both VSPECT and QSPECT.
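
One of the regional measures mentioned above, the Jacobian determinant of a displacement field, can be computed with finite differences as in the sketch below; the displacement field is synthetic and this is not the thesis's code.

    # Minimal sketch: Jacobian determinant of a 3-D displacement field u(x), a voxel-wise
    # measure of local volume change (expansion > 1, contraction < 1) used in CT
    # ventilation analysis. The displacement field here is a synthetic placeholder.
    import numpy as np

    def jacobian_determinant(u, spacing=(1.0, 1.0, 1.0)):
        """u: displacement field of shape (3, nz, ny, nx) in the same units as spacing."""
        grads = [np.gradient(u[c], *spacing) for c in range(3)]   # grads[c][a] = d u_c / d x_a
        J = np.empty(u.shape[1:] + (3, 3))
        for c in range(3):
            for a in range(3):
                J[..., c, a] = grads[c][a]
        J += np.eye(3)                       # deformation gradient F = I + grad(u)
        return np.linalg.det(J)

    rng = np.random.default_rng(0)
    u = 0.1 * rng.normal(size=(3, 20, 20, 20))    # small random displacements (placeholder)
    jac = jacobian_determinant(u)
    print(jac.mean(), jac.min(), jac.max())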

Segmentation and tracking of lung nodules via graph‐cuts incorporating shape prior and motion from 4D CT

  • Cha, Jungwon
  • Farhangi, Mohammad Mehdi
  • Dunlap, Neal
  • Amini, Amir A
Medical physics 2018 Journal Article, cited 5 times
Website

Computational assessment of stomach tumor volume from multi-slice computerized tomography images in presence of type 2 cancer

  • Chacón, Gerardo
  • Rodríguez, Johel E
  • Bermúdez, Valmore
  • Vera, Miguel
  • Hernández, Juan Diego
  • Vargas, Sandra
  • Pardo, Aldo
  • Lameda, Carlos
  • Madriz, Delia
  • Bravo, Antonio J
F1000Research 2018 Journal Article, cited 0 times
Website

Automated feature extraction in brain tumor by magnetic resonance imaging using gaussian mixture models

  • Chaddad, Ahmad
Journal of Biomedical Imaging 2015 Journal Article, cited 29 times
Website

Phenotypic characterization of glioblastoma identified through shape descriptors

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
2016 Conference Proceedings, cited 4 times
Website

GBM heterogeneity characterization by radiomic analysis of phenotype anatomical planes

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
2016 Conference Proceedings, cited 4 times
Website

Radiomic analysis of multi-contrast brain MRI for the prediction of survival in patients with glioblastoma multiforme

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
2016 Conference Proceedings, cited 11 times
Website

Predicting survival time of lung cancer patients using radiomic analysis

  • Chaddad, Ahmad
  • Desrosiers, Christian
  • Toews, Matthew
  • Abdulkarim, Bassam
Oncotarget 2017 Journal Article, cited 4 times
Website

Multimodal Radiomic Features for the Predicting Gleason Score of Prostate Cancer

  • Chaddad, Ahmad
  • Kucharczyk, Michael
  • Niazi, Tamim
Cancers 2018 Journal Article, cited 1 times
Website

Predicting Gleason Score of Prostate Cancer Patients using Radiomic Analysis

  • Chaddad, Ahmad
  • Niazi, Tamim
  • Probst, Stephan
  • Bladou, Franck
  • Anidjar, Moris
  • Bahoric, Boris
Frontiers in Oncology 2018 Journal Article, cited 0 times
Website

Prediction of survival with multi-scale radiomic analysis in glioblastoma patients

  • Chaddad, Ahmad
  • Sabri, Siham
  • Niazi, Tamim
  • Abdulkarim, Bassam
Medical & biological engineering & computing 2018 Journal Article, cited 1 times
Website
We propose multiscale texture features based on a Laplacian-of-Gaussian (LoG) filter to predict progression-free survival (PFS) and overall survival (OS) in patients newly diagnosed with glioblastoma (GBM). Experiments use the extracted features derived from 40 patients with GBM with T1-weighted imaging (T1-WI) and Fluid-attenuated inversion recovery (FLAIR) images that were segmented manually into areas of active tumor, necrosis, and edema. Multiscale texture features were extracted locally from each of these areas of interest using a LoG filter, and the relation of features to OS and PFS was investigated using univariate (i.e., Spearman’s rank correlation coefficient, log-rank test and Kaplan-Meier estimator) and multivariate analyses (i.e., Random Forest classifier). Three and seven features were statistically correlated with PFS and OS, respectively, with absolute correlation values between 0.32 and 0.36 and p < 0.05. Three features derived from active tumor regions only were associated with OS (p < 0.05) with hazard ratios (HR) of 2.9, 3, and 3.24, respectively. Combined features showed AUC values of 85.37 and 85.54% for predicting the PFS and OS of GBM patients, respectively, using the random forest (RF) classifier. We presented multiscale texture features to characterize the GBM regions and predict the PFS and OS. The efficiency achievable suggests that this technique can be developed into a GBM MR analysis system suitable for clinical use after a thorough validation involving more patients.
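
The multiscale LoG filtering underlying these features can be sketched with SciPy as follows; the image, region of interest, and scales are synthetic placeholders rather than the study's settings.

    # Minimal sketch: multiscale Laplacian-of-Gaussian (LoG) filtering of a region of
    # interest, from which first-order texture statistics can be pooled per scale.
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    rng = np.random.default_rng(0)
    image = rng.normal(size=(64, 64, 64))          # placeholder MR volume
    roi = np.zeros_like(image, dtype=bool)
    roi[20:40, 20:40, 20:40] = True                # placeholder tumor sub-region

    for sigma in (1.0, 2.0, 4.0):                  # fine to coarse scales (in voxels)
        response = gaussian_laplace(image, sigma=sigma)
        vals = response[roi]
        print(f"sigma={sigma}: mean={vals.mean():.3f}, std={vals.std():.3f}")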

High-Throughput Quantification of Phenotype Heterogeneity Using Statistical Features

  • Chaddad, Ahmad
  • Tanougast, Camel
Advances in Bioinformatics 2015 Journal Article, cited 5 times
Website

Quantitative evaluation of robust skull stripping and tumor detection applied to axial MR images

  • Chaddad, Ahmad
  • Tanougast, Camel
Brain Informatics 2016 Journal Article, cited 28 times
Website

Extracted magnetic resonance texture features discriminate between phenotypes and are associated with overall survival in glioblastoma multiforme patients

  • Chaddad, Ahmad
  • Tanougast, Camel
Medical & biological engineering & computing 2016 Journal Article, cited 16 times
Website

Automated lung field segmentation in CT images using mean shift clustering and geometrical features

  • Chama, Chanukya Krishna
  • Mukhopadhyay, Sudipta
  • Biswas, Prabir Kumar
  • Dhara, Ashis Kumar
  • Madaiah, Mahendra Kasuvinahally
  • Khandelwal, Niranjan
2013 Conference Proceedings, cited 8 times
Website

Using Docker to support reproducible research

  • Chamberlain, Ryan
  • Schommer, Jennifer
2014 Report, cited 30 times
Website

Residual Convolutional Neural Network for the Determination of IDH Status in Low- and High-Grade Gliomas from MR Imaging

  • Chang, Ken
  • Bai, Harrison X
  • Zhou, Hao
  • Su, Chang
  • Bi, Wenya Linda
  • Agbodza, Ena
  • Kavouridis, Vasileios K
  • Senders, Joeky T
  • Boaro, Alessandro
  • Beers, Andrew
Clinical Cancer Research 2018 Journal Article, cited 26 times
Website

Automatic assessment of glioma burden: A deep learning algorithm for fully automated volumetric and bi-dimensional measurement

  • Chang, Ken
  • Beers, Andrew L
  • Bai, Harrison X
  • Brown, James M
  • Ly, K Ina
  • Li, Xuejun
  • Senders, Joeky T
  • Kavouridis, Vasileios K
  • Boaro, Alessandro
  • Su, Chang
  • Bi, Wenya Linda
  • Rapalino, Otto
  • Liao, Weihua
  • Shen, Qin
  • Zhou, Hao
  • Xiao, Bo
  • Wang, Yinyan
  • Zhang, Paul J
  • Pinho, Marco C
  • Wen, Patrick Y
  • Batchelor, Tracy T
  • Boxerman, Jerrold L
  • Arnaout, Omar
  • Rosen, Bruce R
  • Gerstner, Elizabeth R
  • Yang, Li
  • Huang, Raymond Y
  • Kalpathy-Cramer, Jayashree
Neuro Oncol 2019 Journal Article, cited 0 times
Website
BACKGROUND: Longitudinal measurement of glioma burden with MRI is the basis for treatment response assessment. In this study, we developed a deep learning algorithm that automatically segments abnormal FLAIR hyperintensity and contrast-enhancing tumor, quantitating tumor volumes as well as the product of maximum bi-dimensional diameters according to the Response Assessment in Neuro-Oncology (RANO) criteria (AutoRANO). METHODS: Two cohorts of patients were used for this study. One consisted of 843 pre-operative MRIs from 843 patients with low- or high-grade gliomas from four institutions, and the second consisted of 713 longitudinal, post-operative MRI visits from 54 patients with newly diagnosed glioblastomas (each with two pre-treatment "baseline" MRIs) from one institution. RESULTS: The automatically generated FLAIR hyperintensity volume, contrast-enhancing tumor volume, and AutoRANO were highly repeatable for the double-baseline visits, with an intraclass correlation coefficient (ICC) of 0.986, 0.991, and 0.977, respectively, on the cohort of post-operative GBM patients. Furthermore, there was high agreement between manually and automatically measured tumor volumes, with ICC values of 0.915, 0.924, and 0.965 for pre-operative FLAIR hyperintensity, post-operative FLAIR hyperintensity, and post-operative contrast-enhancing tumor volumes, respectively. Lastly, the ICC for comparing manually and automatically derived longitudinal changes in tumor burden was 0.917, 0.966, and 0.850 for FLAIR hyperintensity volume, contrast-enhancing tumor volume, and RANO measures, respectively. CONCLUSIONS: Our automated algorithm demonstrates potential utility for evaluating tumor burden in complex post-treatment settings, although further validation in multi-center clinical trials will be needed prior to widespread implementation.
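
The repeatability and agreement analysis above relies on the intraclass correlation coefficient; the sketch below implements one common variant, ICC(2,1) (two-way random effects, absolute agreement, single measurement), on synthetic volumes. The paper's exact ICC formulation is not specified here, so this particular variant is an illustrative assumption.

    # Minimal sketch: ICC(2,1) for, e.g., manual vs. automatic tumor volumes.
    # Measurements are synthetic placeholders.
    import numpy as np

    def icc_2_1(Y):
        """Y: (n_subjects, k_raters) matrix of measurements."""
        n, k = Y.shape
        grand = Y.mean()
        row_means = Y.mean(axis=1)
        col_means = Y.mean(axis=0)
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)            # between subjects
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)            # between raters
        sse = np.sum((Y - row_means[:, None] - col_means[None, :] + grand) ** 2)
        mse = sse / ((n - 1) * (k - 1))                                  # residual
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    rng = np.random.default_rng(0)
    true_vol = rng.uniform(5, 60, size=30)                               # cm^3 (placeholder)
    manual = true_vol + rng.normal(0, 2, size=30)
    automatic = true_vol + rng.normal(0, 2, size=30)
    print(f"ICC(2,1) = {icc_2_1(np.column_stack([manual, automatic])):.3f}")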

Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas

  • Chang, P
  • Grinband, J
  • Weinberg, BD
  • Bardis, M
  • Khy, M
  • Cadena, G
  • Su, M-Y
  • Cha, S
  • Filippi, CG
  • Bota, D
American Journal of Neuroradiology 2018 Journal Article, cited 5 times
Website

Primer for Image Informatics in Personalized Medicine

  • Chang, Young Hwan
  • Foley, Patrick
  • Azimi, Vahid
  • Borkar, Rohan
  • Lefman, Jonathan
Procedia Engineering 2016 Journal Article, cited 0 times
Website

“Big data” and “open data”: What kind of access should researchers enjoy?

  • Chatellier, Gilles
  • Varlet, Vincent
  • Blachier-Poisson, Corinne
Thérapie 2016 Journal Article, cited 0 times

MRI prostate cancer radiomics: Assessment of effectiveness and perspectives

  • Chatzoudis, Pavlos
2018 Thesis, cited 0 times
Website

A Fast Semi-Automatic Segmentation Tool for Processing Brain Tumor Images

  • Chen, Andrew X
  • Rabadán, Raúl
2017 Book Section, cited 0 times
Website

Low-dose CT via convolutional neural network

  • Chen, Hu
  • Zhang, Yi
  • Zhang, Weihua
  • Liao, Peixi
  • Li, Ke
  • Zhou, Jiliu
  • Wang, Ge
Biomedical Optics Express 2017 Journal Article, cited 89 times
Website

Revealing Tumor Habitats from Texture Heterogeneity Analysis for Classification of Lung Cancer Malignancy and Aggressiveness

  • Cherezov, Dmitry
  • Goldgof, Dmitry
  • Hall, Lawrence
  • Gillies, Robert
  • Schabath, Matthew
  • Müller, Henning
  • Depeursinge, Adrien
Scientific Reports 2019 Journal Article, cited 0 times
Website
We propose an approach for characterizing structural heterogeneity of lung cancer nodules using Computed Tomography Texture Analysis (CTTA). Measures of heterogeneity were used to test the hypothesis that heterogeneity can be used as predictor of nodule malignancy and patient survival. To do this, we use the National Lung Screening Trial (NLST) dataset to determine if heterogeneity can represent differences between nodules in lung cancer and nodules in non-lung cancer patients. 253 participants are in the training set and 207 participants in the test set. To discriminate cancerous from non-cancerous nodules at the time of diagnosis, a combination of heterogeneity and radiomic features were evaluated to produce the best area under receiver operating characteristic curve (AUROC) of 0.85 and accuracy 81.64%. Second, we tested the hypothesis that heterogeneity can predict patient survival. We analyzed 40 patients diagnosed with lung adenocarcinoma (20 short-term and 20 long-term survival patients) using a leave-one-out cross validation approach for performance evaluation. A combination of heterogeneity features and radiomic features produce an AUROC of 0.9 and an accuracy of 85% to discriminate long- and short-term survivors.

Computed Tomography (CT) Image Quality Enhancement via a Uniform Framework Integrating Noise Estimation and Super-Resolution Networks

  • Chi, Jianning
  • Zhang, Yifei
  • Yu, Xiaosheng
  • Wang, Ying
  • Wu, Chengdong
Sensors (Basel) 2019 Journal Article, cited 2 times
Website
Computed tomography (CT) imaging technology has been widely used to assist medical diagnosis in recent years. However, noise during the process of imaging, and data compression during the process of storage and transmission always interrupt the image quality, resulting in unreliable performance of the post-processing steps in the computer assisted diagnosis system (CADs), such as medical image segmentation, feature extraction, and medical image classification. Since the degradation of medical images typically appears as noise and low-resolution blurring, in this paper, we propose a uniform deep convolutional neural network (DCNN) framework to handle the de-noising and super-resolution of the CT image at the same time. The framework consists of two steps: Firstly, a dense-inception network integrating an inception structure and dense skip connection is proposed to estimate the noise level. The inception structure is used to extract the noise and blurring features with respect to multiple receptive fields, while the dense skip connection can reuse those extracted features and transfer them across the network. Secondly, a modified residual-dense network combined with joint loss is proposed to reconstruct the high-resolution image with low noise. The inception block is applied on each skip connection of the dense-residual network so that the structure features of the image are transferred through the network more than the noise and blurring features. Moreover, both the perceptual loss and the mean square error (MSE) loss are used to restrain the network, leading to better performance in the reconstruction of image edges and details. Our proposed network integrates the degradation estimation, noise removal, and image super-resolution in one uniform framework to enhance medical image quality. We apply our method to the Cancer Imaging Archive (TCIA) public dataset to evaluate its ability in medical image quality enhancement. The experimental results demonstrate that the proposed method outperforms the state-of-the-art methods on de-noising and super-resolution by providing higher peak signal to noise ratio (PSNR) and structure similarity index (SSIM) values.
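
The PSNR and SSIM metrics reported above can be computed with scikit-image as in the sketch below; the "clean" and "restored" slices are synthetic placeholders.

    # Minimal sketch: PSNR and SSIM for a synthetic clean/restored pair of slices.
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    rng = np.random.default_rng(0)
    clean = rng.uniform(0.0, 1.0, size=(256, 256))          # placeholder reference slice
    restored = np.clip(clean + rng.normal(0, 0.02, clean.shape), 0.0, 1.0)

    psnr = peak_signal_noise_ratio(clean, restored, data_range=1.0)
    ssim = structural_similarity(clean, restored, data_range=1.0)
    print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")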

SVM-PUK Kernel Based MRI-brain Tumor Identification Using Texture and Gabor Wavelets

  • Chinnam, Siva
  • Sistla, Venkatramaphanikumar
  • Kolli, Venkata
Traitement du Signal 2019 Journal Article, cited 0 times
Website

Imaging phenotypes of breast cancer heterogeneity in pre-operative breast Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) scans predict 10-year recurrence

  • Chitalia, Rhea
  • Rowland, Jennifer
  • McDonald, Elizabeth S
  • Pantalone, Lauren
  • Cohen, Eric A
  • Gastounioti, Aimilia
  • Feldman, Michael
  • Schnall, Mitchell
  • Conant, Emily
  • Kontos, Despina
Clinical Cancer Research 2019 Journal Article, cited 0 times
Website

Classification of the glioma grading using radiomics analysis

  • Cho, Hwan-ho
  • Lee, Seung-hak
  • Kim, Jonghoon
  • Park, Hyunjin
PeerJ 2018 Journal Article, cited 0 times
Website

Integrative analysis of imaging and transcriptomic data of the immune landscape associated with tumor metabolism in lung adenocarcinoma: Clinical and prognostic implications

  • Choi, Hongyoon
  • Na, Kwon Joong
THERANOSTICS 2018 Journal Article, cited 0 times
Website

Incremental Prognostic Value of ADC Histogram Analysis over MGMT Promoter Methylation Status in Patients with Glioblastoma

  • Choi, Yoon Seong
  • Ahn, Sung Soo
  • Kim, Dong Wook
  • Chang, Jong Hee
  • Kang, Seok-Gu
  • Kim, Eui Hyun
  • Kim, Se Hoon
  • Rim, Tyler Hyungtaek
  • Lee, Seung-Koo
Radiology 2016 Journal Article, cited 18 times
Website

ST3GAL1-associated transcriptomic program in glioblastoma tumor growth, invasion, and prognosis

  • Chong, Yuk Kien
  • Sandanaraj, Edwin
  • Koh, Lynnette WH
  • Thangaveloo, Moogaambikai
  • Tan, Melanie SY
  • Koh, Geraldene RH
  • Toh, Tan Boon
  • Lim, Grace GY
  • Holbrook, Joanna D
  • Kon, Oi Lian
Journal of the National Cancer Institute 2016 Journal Article, cited 16 times
Website

Results of initial low-dose computed tomographic screening for lung cancer

  • Church, T. R.
  • Black, W. C.
  • Aberle, D. R.
  • Berg, C. D.
  • Clingan, K. L.
  • Duan, F.
  • Fagerstrom, R. M.
  • Gareen, I. F.
  • Gierada, D. S.
  • Jones, G. C.
  • Mahon, I.
  • Marcus, P. M.
  • Sicks, J. D.
  • Jain, A.
  • Baum, S.
The New England Journal of Medicine 2013 Journal Article, cited 529 times
Website
BACKGROUND: Lung cancer is the largest contributor to mortality from cancer. The National Lung Screening Trial (NLST) showed that screening with low-dose helical computed tomography (CT) rather than with chest radiography reduced mortality from lung cancer. We describe the screening, diagnosis, and limited treatment results from the initial round of screening in the NLST to inform and improve lung-cancer-screening programs. METHODS: At 33 U.S. centers, from August 2002 through April 2004, we enrolled asymptomatic participants, 55 to 74 years of age, with a history of at least 30 pack-years of smoking. The participants were randomly assigned to undergo annual screening, with the use of either low-dose CT or chest radiography, for 3 years. Nodules or other suspicious findings were classified as positive results. This article reports findings from the initial screening examination. RESULTS: A total of 53,439 eligible participants were randomly assigned to a study group (26,715 to low-dose CT and 26,724 to chest radiography); 26,309 participants (98.5%) and 26,035 (97.4%), respectively, underwent screening. A total of 7191 participants (27.3%) in the low-dose CT group and 2387 (9.2%) in the radiography group had a positive screening result; in the respective groups, 6369 participants (90.4%) and 2176 (92.7%) had at least one follow-up diagnostic procedure, including imaging in 5717 (81.1%) and 2010 (85.6%) and surgery in 297 (4.2%) and 121 (5.2%). Lung cancer was diagnosed in 292 participants (1.1%) in the low-dose CT group versus 190 (0.7%) in the radiography group (stage 1 in 158 vs. 70 participants and stage IIB to IV in 120 vs. 112). Sensitivity and specificity were 93.8% and 73.4% for low-dose CT and 73.5% and 91.3% for chest radiography, respectively. CONCLUSIONS: The NLST initial screening results are consistent with the existing literature on screening by means of low-dose CT and chest radiography, suggesting that a reduction in mortality from lung cancer is achievable at U.S. screening centers that have staff experienced in chest CT. (Funded by the National Cancer Institute; NLST ClinicalTrials.gov number, NCT00047385.).

Automatic detection of spiculation of pulmonary nodules in computed tomography images

  • Ciompi, F
  • Jacobs, C
  • Scholten, ET
  • van Riel, SJ
  • Wille, MMW
  • Prokop, M
  • van Ginneken, B
2015 Conference Proceedings, cited 5 times
Website

Reproducing 2D breast mammography images with 3D printed phantoms

  • Clark, Matthew
  • Ghammraoui, Bahaa
  • Badal, Andreu
2016 Conference Proceedings, cited 2 times
Website

The Quantitative Imaging Network: NCI's Historical Perspective and Planned Goals

  • Clarke, Laurence P.
  • Nordstrom, Robert J.
  • Zhang, Huiming
  • Tandon, Pushpa
  • Zhang, Yantian
  • Redmond, George
  • Farahani, Keyvan
  • Kelloff, Gary
  • Henderson, Lori
  • Shankar, Lalitha
  • Deye, James
  • Capala, Jacek
  • Jacobs, Paula
Translational oncology 2014 Journal Article, cited 0 times
Website

Automated Medical Image Modality Recognition by Fusion of Visual and Text Information

  • Codella, Noel
  • Connell, Jonathan
  • Pankanti, Sharath
  • Merler, Michele
  • Smith, John R
2014 Book Section, cited 10 times
Website

Semantic Model Vector for ImageCLEF2013

  • Codella, Noel
  • Merler, Michele
2014 Report, cited 0 times
Website

NCI Workshop Report: Clinical and Computational Requirements for Correlating Imaging Phenotypes with Genomics Signatures

  • Colen, Rivka
  • Foster, Ian
  • Gatenby, Robert
  • Giger, Mary Ellen
  • Gillies, Robert
  • Gutman, David
  • Heller, Matthew
  • Jain, Rajan
  • Madabhushi, Anant
  • Madhavan, Subha
  • Napel, Sandy
  • Rao, Arvind
  • Saltz, Joel
  • Tatum, James
  • Verhaak, Roeland
  • Whitman, Gary
Translational oncology 2014 Journal Article, cited 39 times
Website

Glioblastoma: Imaging Genomic Mapping Reveals Sex-specific Oncogenic Associations of Cell Death

  • Colen, R. R.
  • Wang, J.
  • Singh, S. K.
  • Gutman, D. A.
  • Zinn, P. O.
2014 Dataset, cited 36 times
Website

Imaging genomic mapping of an invasive MRI phenotype predicts patient outcome and metabolic dysfunction: a TCGA glioma phenotype research group project

  • Colen, Rivka R
  • Vangel, Mark
  • Wang, Jixin
  • Gutman, David A
  • Hwang, Scott N
  • Wintermark, Max
  • Jain, Rajan
  • Jilwan-Nicolas, Manal
  • Chen, James Y
  • Raghavan, Prashant
BMC Medical Genomics 2014 Journal Article, cited 47 times
Website

Glioblastoma: Imaging Genomic Mapping Reveals Sex-specific Oncogenic Associations of Cell Death

  • Colen, Rivka R
  • Wang, Jixin
  • Singh, Sanjay K
  • Gutman, David A
  • Zinn, Pascal O
Radiology 2014 Journal Article, cited 36 times
Website

Extended Modality Propagation: Image Synthesis of Pathological Cases

  • N. Cordier
  • H. Delingette
  • M. Le
  • N. Ayache
IEEE Transactions on Medical Imaging 2016 Journal Article, cited 18 times
Website

Combined Megavoltage and Contrast-Enhanced Radiotherapy as an Intrafraction Motion Management Strategy in Lung SBRT

  • Coronado-Delgado, Daniel A
  • Garnica-Garza, Hector M
Technol Cancer Res Treat 2019 Journal Article, cited 0 times
Website
Using Monte Carlo simulation and a realistic patient model, it is shown that the volume of healthy tissue irradiated at therapeutic doses can be drastically reduced using a combination of standard megavoltage and kilovoltage X-ray beams with a contrast agent previously loaded into the tumor, without the need to reduce standard treatment margins. Four-dimensional computed tomography images of 2 patients with a centrally located and a peripherally located tumor were obtained from a public database and subsequently used to plan robotic stereotactic body radiotherapy treatments. Two modalities are assumed: conventional high-energy stereotactic body radiotherapy and a treatment with contrast agent loaded in the tumor and a kilovoltage X-ray beam replacing the megavoltage beam (contrast-enhanced radiotherapy). For each patient model, 2 planning target volumes were designed: one following the recommendations from either Radiation Therapy Oncology Group (RTOG) 0813 or RTOG 0915 task group depending on the patient model and another with a 2-mm uniform margin determined solely on beam penumbra considerations. The optimized treatments with RTOG margins were imparted to the moving phantom to model the dose distribution that would be obtained as a result of intrafraction motion. Treatment plans are then compared to the plan with the 2-mm uniform margin considered to be the ideal plan. It is shown that even for treatments in which only one-fifth of the total dose is imparted via the contrast-enhanced radiotherapy modality and with the use of standard treatment margins, the resultant absorbed dose distributions are such that the volume of healthy tissue irradiated to high doses is close to what is obtained under ideal conditions.

Bayesian Kernel Models for Statistical Genetics and Cancer Genomics

  • Crawford, Lorin
2017 Thesis, cited 0 times

Topological Summaries of Tumor Images Improve Prediction of Disease Free Survival in Glioblastoma Multiforme

  • Crawford, Lorin
  • Monod, Anthea
  • Chen, Andrew X
  • Mukherjee, Sayan
  • Rabadán, Raúl
arXiv preprint arXiv:1611.06818 2016 Journal Article, cited 7 times
Website

Primary lung tumor segmentation from PET–CT volumes with spatial–topological constraint

  • Cui, Hui
  • Wang, Xiuying
  • Lin, Weiran
  • Zhou, Jianlong
  • Eberl, Stefan
  • Feng, Dagan
  • Fulham, Michael
International journal of computer assisted radiology and surgery 2016 Journal Article, cited 14 times
Website

Volume of high-risk intratumoral subregions at multi-parametric MR imaging predicts overall survival and complements molecular analysis of glioblastoma

  • Cui, Yi
  • Ren, Shangjie
  • Tha, Khin Khin
  • Wu, Jia
  • Shirato, Hiroki
  • Li, Ruijiang
European Radiology 2017 Journal Article, cited 10 times
Website

Prognostic Imaging Biomarkers in Glioblastoma: Development and Independent Validation on the Basis of Multiregion and Quantitative Analysis of MR Images

  • Cui, Yi
  • Tha, Khin Khin
  • Terasaka, Shunsuke
  • Yamaguchi, Shigeru
  • Wang, Jeff
  • Kudo, Kohsuke
  • Xing, Lei
  • Shirato, Hiroki
  • Li, Ruijiang
Radiology 2015 Journal Article, cited 45 times
Website

Tumor Transcriptome Reveals High Expression of IL-8 in Non-Small Cell Lung Cancer Patients with Low Pectoralis Muscle Area and Reduced Survival

  • Cury, Sarah Santiloni
  • de Moraes, Diogo
  • Freire, Paula Paccielli
  • de Oliveira, Grasieli
  • Marques, Douglas Venancio Pereira
  • Fernandez, Geysson Javier
  • Dal-Pai-Silva, Maeli
  • Hasimoto, Erica Nishida
  • Dos Reis, Patricia Pintor
  • Rogatto, Silvia Regina
  • Carvalho, Robson Francisco
Cancers (Basel) 2019 Journal Article, cited 1 times
Website
Cachexia is a syndrome characterized by an ongoing loss of skeletal muscle mass associated with poor patient prognosis in non-small cell lung cancer (NSCLC). However, prognostic cachexia biomarkers in NSCLC are unknown. Here, we analyzed computed tomography (CT) images and tumor transcriptome data to identify potentially secreted cachexia biomarkers (PSCB) in NSCLC patients with low-muscularity. We integrated radiomics features (pectoralis muscle, sternum, and tenth thoracic (T10) vertebra) from CT of 89 NSCLC patients, which allowed us to identify an index for screening muscularity. Next, a tumor transcriptomic-based secretome analysis from these patients (discovery set) was evaluated to identify potential cachexia biomarkers in patients with low-muscularity. The prognostic value of these biomarkers for predicting recurrence and survival outcome was confirmed using expression data from eight lung cancer datasets (validation set). Finally, C2C12 myoblasts differentiated into myotubes were used to evaluate the ability of the selected biomarker, interleukin (IL)-8, in inducing muscle cell atrophy. We identified 75 over-expressed transcripts in patients with low-muscularity, which included IL-6, CSF3, and IL-8. Also, we identified NCAM1, CNTN1, SCG2, CADM1, IL-8, NPTX1, and APOD as PSCB in the tumor secretome. These PSCB were capable of distinguishing worse and better prognosis (recurrence and survival) in NSCLC patients. IL-8 was confirmed as a predictor of worse prognosis in all validation sets. In vitro assays revealed that IL-8 promoted C2C12 myotube atrophy. Tumors from low-muscularity patients presented a set of upregulated genes encoding for secreted proteins, including pro-inflammatory cytokines that predict worse overall survival in NSCLC. Among these upregulated genes, IL-8 expression in NSCLC tissues was associated with worse prognosis, and the recombinant IL-8 was capable of triggering atrophy in C2C12 myotubes.

Algorithmic three-dimensional analysis of tumor shape in MRI improves prognosis of survival in glioblastoma: a multi-institutional study

  • Czarnek, Nicholas
  • Clark, Kal
  • Peters, Katherine B
  • Mazurowski, Maciej A
Journal of neuro-oncology 2017 Journal Article, cited 15 times
Website

Radiogenomics of glioblastoma: a pilot multi-institutional study to investigate a relationship between tumor shape features and tumor molecular subtype

  • Czarnek, Nicholas M
  • Clark, Kal
  • Peters, Katherine B
  • Collins, Leslie M
  • Mazurowski, Maciej A
2016 Conference Proceedings, cited 3 times
Website

Feature Extraction In Medical Images by Using Deep Learning Approach

  • Dara, S
  • Tumma, P
  • Eluri, NR
  • Kancharla, GR
International Journal of Pure and Applied Mathematics 2018 Journal Article, cited 0 times
Website

Machine learning: a useful radiological adjunct in determination of a newly diagnosed glioma’s grade and IDH status

  • De Looze, Céline
  • Beausang, Alan
  • Cryan, Jane
  • Loftus, Teresa
  • Buckley, Patrick G
  • Farrell, Michael
  • Looby, Seamus
  • Reilly, Richard
  • Brett, Francesca
  • Kearney, Hugh
Journal of neuro-oncology 2018 Journal Article, cited 0 times

Directional local ternary quantized extrema pattern: A new descriptor for biomedical image indexing and retrieval

  • Deep, G
  • Kaur, L
  • Gupta, S
Engineering Science and Technology, an International Journal 2016 Journal Article, cited 9 times
Website

Local mesh ternary patterns: a new descriptor for MRI and CT biomedical image indexing and retrieval

  • Deep, G
  • Kaur, L
  • Gupta, S
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2016 Journal Article, cited 3 times
Website

Predicting response before initiation of neoadjuvant chemotherapy in breast cancer using new methods for the analysis of dynamic contrast enhanced MRI (DCE MRI) data

  • DeGrandchamp, Joseph B
  • Whisenant, Jennifer G
  • Arlinghaus, Lori R
  • Abramson, VG
  • Yankeelov, Thomas E
  • Cárdenas-Rodríguez, Julio
2016 Conference Proceedings, cited 5 times
Website

Mesoscopic imaging of glioblastomas: Are diffusion, perfusion and spectroscopic measures influenced by the radiogenetic phenotype?

  • Demerath, Theo
  • Simon-Gabriel, Carl Philipp
  • Kellner, Elias
  • Schwarzwald, Ralf
  • Lange, Thomas
  • Heiland, Dieter Henrik
  • Reinacher, Peter
  • Staszewski, Ori
  • Mast, Hansjörg
  • Kiselev, Valerij G
The Neuroradiology Journal 2017 Journal Article, cited 5 times
Website

Computer-aided detection of lung nodules using outer surface features

  • Demir, Önder
  • Yılmaz Çamurcu, Ali
Bio-Medical Materials and Engineering 2015 Journal Article, cited 28 times
Website

Development of a nomogram combining clinical staging with 18F-FDG PET/CT image features in non-small-cell lung cancer stage I–III

  • Desseroit, Marie-Charlotte
  • Visvikis, Dimitris
  • Tixier, Florent
  • Majdoub, Mohamed
  • Perdrisot, Rémy
  • Guillevin, Rémy
  • Le Rest, Catherine Cheze
  • Hatt, Mathieu
European journal of nuclear medicine and molecular imaging 2016 Journal Article, cited 34 times
Website

Spatial habitats from multiparametric MR imaging are associated with signaling pathway activities and survival in glioblastoma

  • Dextraze, Katherine
  • Saha, Abhijoy
  • Kim, Donnie
  • Narang, Shivali
  • Lehrer, Michael
  • Rao, Anita
  • Narang, Saphal
  • Rao, Dinesh
  • Ahmed, Salmaan
  • Madhugiri, Venkatesh
Oncotarget 2017 Journal Article, cited 0 times
Website

Measurement of spiculation index in 3D for solitary pulmonary nodules in volumetric lung CT images

  • Dhara, Ashis Kumar
  • Mukhopadhyay, Sudipta
  • Alam, Naved
  • Khandelwal, Niranjan
2013 Conference Proceedings, cited 4 times
Website

3d texture analysis of solitary pulmonary nodules using co-occurrence matrix from volumetric lung CT images

  • Dhara, Ashis Kumar
  • Mukhopadhyay, Sudipta
  • Khandelwal, Niranjan
2013 Conference Proceedings, cited 7 times
Website

Deep learning in head & neck cancer outcome prediction

  • Diamant, André
  • Chatterjee, Avishek
  • Vallières, Martin
  • Shenouda, George
  • Seuntjens, Jan
Scientific reports 2019 Journal Article, cited 0 times
Website
Traditional radiomics involves the extraction of quantitative texture features from medical images in an attempt to determine correlations with clinical endpoints. We hypothesize that convolutional neural networks (CNNs) could enhance the performance of traditional radiomics by detecting image patterns that may not be covered by a traditional radiomic framework. We test this hypothesis by training a CNN to predict treatment outcomes of patients with head and neck squamous cell carcinoma, based solely on their pre-treatment computed tomography image. The training (194 patients) and validation (106 patients) sets, which are mutually independent and include 4 institutions, come from The Cancer Imaging Archive. When compared to a traditional radiomic framework applied to the same patient cohort, our method results in an AUC of 0.88 in predicting distant metastasis. When our model is combined with the traditional radiomic model, the AUC improves to 0.92. Our framework yields models that are shown to explicitly recognize traditional radiomic features, can be directly visualized, and perform accurate outcome prediction.

Theoretical tumor edge detection technique using multiple Bragg peak decomposition in carbon ion therapy

  • Dias, Marta Filipa Ferraz
  • Collins-Fekete, Charles-Antoine
  • Baroni, Guido
  • Riboldi, Marco
  • Seco, Joao
Biomedical Physics & Engineering Express 2019 Journal Article, cited 0 times
Website

Automated segmentation refinement of small lung nodules in CT scans by local shape analysis

  • Diciotti, Stefano
  • Lombardo, Simone
  • Falchini, Massimo
  • Picozzi, Giulia
  • Mascalchi, Mario
IEEE Transactions on Biomedical Engineering 2011 Journal Article, cited 68 times
Website

Learning Multi-Class Segmentations From Single-Class Datasets

  • Dmitriev, Konstantin
  • Kaufman, Arie
2019 Conference Paper, cited 1 times
Website
Multi-class segmentation has recently achieved significant performance in natural images and videos. This achievement is due primarily to the public availability of large multi-class datasets. However, there are certain domains, such as biomedical images, where obtaining sufficient multi-class annotations is a laborious and often impossible task and only single-class datasets are available. While existing segmentation research in such domains uses private multi-class datasets or focuses on single-class segmentations, we propose a unified, highly efficient framework for robust simultaneous learning of multi-class segmentations by combining single-class datasets and utilizing a novel way of conditioning a convolutional network for the purpose of segmentation. We demonstrate various ways of incorporating the conditional information, perform an extensive evaluation, and show compelling multi-class segmentation performance on biomedical images, which outperforms current state-of-the-art solutions (up to 2.7%). Unlike current solutions, which are meticulously tailored for particular single-class datasets, we utilize datasets from a variety of sources. Furthermore, we also show the applicability of our method to natural images and evaluate it on the Cityscapes dataset. We further discuss other possible applications of our proposed framework.
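
One plausible way to condition a segmentation network on the requested class, in the spirit of the framework summarized in this abstract, is to broadcast a one-hot class vector to a spatial map and concatenate it with the image. The paper's exact conditioning mechanism may differ, so treat this as a hypothetical sketch.

```python
# Hypothetical conditioning of a segmentation network on the requested organ class:
# a one-hot class vector is broadcast spatially and concatenated to the image, so
# single-class datasets can share one set of network weights.
import torch

def condition_input(image, class_id, num_classes):
    """image: (N, C, H, W); class_id: (N,) integer organ label."""
    n, _, h, w = image.shape
    onehot = torch.zeros(n, num_classes, h, w, device=image.device)
    onehot[torch.arange(n), class_id] = 1.0           # fill the selected class plane
    return torch.cat([image, onehot], dim=1)          # (N, C + num_classes, H, W)

x = torch.randn(2, 1, 128, 128)                       # two CT slices (toy data)
cond = condition_input(x, torch.tensor([0, 3]), num_classes=4)
print(cond.shape)                                     # torch.Size([2, 5, 128, 128])
```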

Most-enhancing tumor volume by MRI radiomics predicts recurrence-free survival “early on” in neoadjuvant treatment of breast cancer

  • Drukker, Karen
  • Li, Hui
  • Antropova, Natalia
  • Edwards, Alexandra
  • Papaioannou, John
  • Giger, Maryellen L
Cancer Imaging 2018 Journal Article, cited 0 times

Local Wavelet Pattern: A New Feature Descriptor for Image Retrieval in Medical CT Databases

  • Dubey, Shiv Ram
  • Singh, Satish Kumar
  • Singh, Rajat Kumar
IEEE Transactions on Image Processing 2015 Journal Article, cited 52 times
Website

Improving PET Imaging Acquisition and Analysis With Machine Learning: A Narrative Review With Focus on Alzheimer's Disease and Oncology

  • Duffy, Ian R
  • Boyle, Amanda J
  • Vasdev, Neil
Molecular imaging 2019 Journal Article, cited 0 times

An Ad Hoc Random Initialization Deep Neural Network Architecture for Discriminating Malignant Breast Cancer Lesions in Mammographic Images

  • Duggento, Andrea
  • Aiello, Marco
  • Cavaliere, Carlo
  • Cascella, Giuseppe L
  • Cascella, Davide
  • Conte, Giovanni
  • Guerrisi, Maria
  • Toschi, Nicola
Contrast Media Mol Imaging 2019 Journal Article, cited 1 times
Website
Breast cancer is one of the most common cancers in women, with more than 1,300,000 cases and 450,000 deaths each year worldwide. In this context, recent studies showed that early breast cancer detection, along with suitable treatment, could significantly reduce breast cancer death rates in the long term. X-ray mammography is still the instrument of choice in breast cancer screening. In this context, the false-positive and false-negative rates commonly achieved by radiologists are difficult to estimate and control, although some authors have estimated figures of up to 20% of total diagnoses or more. The introduction of novel artificial intelligence (AI) technologies applied to the diagnosis and, possibly, prognosis of breast cancer could revolutionize the current management of the breast cancer patient by assisting the radiologist in clinical image interpretation. Lately, a breakthrough in the AI field has been brought about by the introduction of deep learning techniques in general and of convolutional neural networks in particular. Such techniques require no a priori feature space definition from the operator and are able to achieve classification performances which can even surpass human experts. In this paper, we design and validate an ad hoc CNN architecture specialized in breast lesion classification from imaging data only. We explore a total of 260 model architectures in a train-validation-test split in order to propose a model selection criterion which places the emphasis on reducing false negatives while still retaining acceptable accuracy. We achieve an area under the receiver operating characteristic curve of 0.785 (accuracy 71.19%) on the test set, demonstrating how an ad hoc random initialization architecture can and should be fine-tuned to a specific problem, especially in biomedical applications.
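
A minimal sketch of a selection rule of the kind described in this abstract, preferring the lowest false-negative rate among candidates that clear an accuracy floor; the floor, the candidate scores, and the tie-breaking below are assumptions, not the authors' criterion.

```python
# Illustrative selection among candidate architectures: prefer the lowest
# false-negative rate subject to a minimum validation accuracy (assumed floor).
def select_model(candidates, min_accuracy=0.70):
    """candidates: list of dicts with 'name', 'accuracy', 'false_negative_rate'."""
    eligible = [c for c in candidates if c["accuracy"] >= min_accuracy]
    if not eligible:
        raise ValueError("No candidate meets the accuracy floor")
    # Lowest FNR wins; ties broken by higher accuracy.
    return min(eligible, key=lambda c: (c["false_negative_rate"], -c["accuracy"]))

candidates = [
    {"name": "cnn_a", "accuracy": 0.74, "false_negative_rate": 0.18},
    {"name": "cnn_b", "accuracy": 0.71, "false_negative_rate": 0.12},
    {"name": "cnn_c", "accuracy": 0.69, "false_negative_rate": 0.05},  # fails the floor
]
print(select_model(candidates)["name"])   # -> cnn_b
```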

Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma

  • Dunn Jr, WD
  • Aerts, HJWL
  • Cooper, LA
  • Holder, CA
  • Hwang, SN
J Neuroimaging Psychiatry Neurol 2016 Journal Article, cited 0 times
Website

Improving Brain Tumor Diagnosis Using MRI Segmentation Based on Collaboration of Beta Mixture Model and Learning Automata

  • Edalati-rad, Akram
  • Mosleh, Mohammad
Arabian Journal for Science and Engineering 2018 Journal Article, cited 0 times
Website

Automated 3-D Tissue Segmentation Via Clustering

  • Edwards, Samuel
  • Brown, Scott
  • Lee, Michael
Journal of Biomedical Engineering and Medical Imaging 2018 Journal Article, cited 0 times

Performance Analysis of Prediction Methods for Lossless Image Compression

  • Egorov, Nickolay
  • Novikov, Dmitriy
  • Gilmutdinov, Marat
2015 Book Section, cited 4 times
Website

Decision forests for learning prostate cancer probability maps from multiparametric MRI

  • Ehrenberg, Henry R
  • Cornfeld, Daniel
  • Nawaf, Cayce B
  • Sprenkle, Preston C
  • Duncan, James S
2016 Conference Proceedings, cited 2 times
Website

A Content-Based-Image-Retrieval Approach for Medical Image Repositories

  • el Rifai, Diaa
  • Maeder, Anthony
  • Liyanage, Liwan
2015 Conference Paper, cited 2 times
Website

Feature Extraction and Analysis for Lung Nodule Classification using Random Forest

  • Nada El-Askary
  • Mohammed Salem
  • Mohammed Roushdy
2019 Conference Paper, cited 0 times
Website

Imaging genomics of glioblastoma: state of the art bridge between genomics and neuroradiology

  • ElBanan, Mohamed G
  • Amer, Ahmed M
  • Zinn, Pascal O
  • Colen, Rivka R
Neuroimaging Clinics of North America 2015 Journal Article, cited 29 times
Website

Diffusion MRI quality control and functional diffusion map results in ACRIN 6677/RTOG 0625: a multicenter, randomized, phase II trial of bevacizumab and chemotherapy in recurrent glioblastoma

  • Ellingson, Benjamin M
  • Kim, Eunhee
  • Woodworth, Davis C
  • Marques, Helga
  • Boxerman, Jerrold L
  • Safriel, Yair
  • McKinstry, Robert C
  • Bokstein, Felix
  • Jain, Rajan
  • Chi, T Linda
  • Sorensen, A Gregory
  • Gilbert, Mark R
  • Barboriak, Daniel P
Int J Oncol 2015 Journal Article, cited 27 times
Website
Functional diffusion mapping (fDM) is a cancer imaging technique that quantifies voxelwise changes in apparent diffusion coefficient (ADC). Previous studies have shown value of fDMs in bevacizumab therapy for recurrent glioblastoma multiforme (GBM). The aim of the present study was to implement explicit criteria for diffusion MRI quality control and independently evaluate fDM performance in a multicenter clinical trial (RTOG 0625/ACRIN 6677). A total of 123 patients were enrolled in the current multicenter trial and signed institutional review board-approved informed consent at their respective institutions. MRI was acquired prior to and 8 weeks following therapy. A 5-point QC scoring system was used to evaluate DWI quality. fDM performance was evaluated according to the correlation of these metrics with PFS and OS at the first follow-up time-point. Results showed ADC variability of 7.3% in NAWM and 10.5% in CSF. A total of 68% of patients had usable DWI data and 47% of patients had high quality DWI data when also excluding patients that progressed before the first follow-up. fDM performance was improved by using only the highest quality DWI. High pre-treatment contrast enhancing tumor volume was associated with shorter PFS and OS. A high volume fraction of increasing ADC after therapy was associated with shorter PFS, while a high volume fraction of decreasing ADC was associated with shorter OS. In summary, DWI in multicenter trials are currently of limited value due to image quality. Improvements in consistency of image quality in multicenter trials are necessary for further advancement of DWI biomarkers.
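
At their core, the fDM metrics referred to in this abstract reduce to voxelwise ADC differences within the tumor classified against a threshold; a simplified numpy sketch on toy data is given below. The threshold and masks are hypothetical, not the trial's analysis code.

```python
# Simplified functional diffusion map (fDM) style summary: fraction of tumor
# voxels whose ADC increased or decreased beyond a threshold between scans.
import numpy as np

def fdm_fractions(adc_pre, adc_post, tumor_mask, threshold=0.4e-3):
    """ADC maps in mm^2/s; tumor_mask is boolean; the threshold is illustrative."""
    delta = adc_post[tumor_mask] - adc_pre[tumor_mask]
    n = delta.size
    return {
        "increasing": np.count_nonzero(delta > threshold) / n,
        "decreasing": np.count_nonzero(delta < -threshold) / n,
    }

rng = np.random.default_rng(1)
pre = rng.uniform(0.6e-3, 1.6e-3, size=(64, 64, 32))      # toy pre-treatment ADC map
post = pre + rng.normal(0, 0.3e-3, size=pre.shape)        # toy follow-up ADC map
mask = np.zeros_like(pre, dtype=bool)
mask[20:40, 20:40, 10:20] = True                          # toy tumor mask
print(fdm_fractions(pre, post, mask))
```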

A Novel Hybrid Perceptron Neural Network Algorithm for Classifying Breast MRI Tumors

  • ElNawasany, Amal M
  • Ali, Ahmed Fouad
  • Waheed, Mohamed E
2014 Book Section, cited 3 times
Website

A COMPUTER AIDED DIAGNOSIS SYSTEM FOR LUNG CANCER DETECTION USING SVM

  • EMİRZADE, ERKAN
2016 Thesis, cited 1 times
Website

4D robust optimization including uncertainties in time structures can reduce the interplay effect in proton pencil beam scanning radiation therapy

  • Engwall, Erik
  • Fredriksson, Albin
  • Glimelius, Lars
Medical physics 2018 Journal Article, cited 2 times
Website

Attention P-Net for Segmentation of Post-operative Glioblastoma in MRI

  • Enlund Åström, Isabelle
2019 Thesis, cited 0 times
Website
Segmentation of post-operative glioblastoma is important for follow-up treatment. In this thesis, Fully Convolutional Networks (FCNs) are utilised together with attention modules for segmentation of post-operative glioblastoma in MR images. Attention-based modules help the FCN focus on relevant features to improve segmentation results. Channel and spatial attention combine the spatial context and the semantic information in MR images. P-Net is used as a backbone for creating an architecture with existing bottleneck attention modules; the resulting network is named attention P-Net. The proposed network and competing techniques were evaluated on an Uppsala University database containing T1-weighted MR images of the brain from 12 subjects. The proposed framework shows substantial improvement over the existing techniques.
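
A compact channel-plus-spatial attention gate of the kind the thesis refers to (bottleneck attention in the style of BAM/CBAM) might look as follows in PyTorch; the layer sizes and gating form are illustrative assumptions, not the thesis implementation.

```python
# Illustrative channel + spatial attention gate that can be dropped into an FCN.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(       # squeeze-and-excite style channel gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        self.spatial_conv = nn.Conv2d(channels, 1, kernel_size=7, padding=3)

    def forward(self, x):
        channel_att = torch.sigmoid(self.channel_mlp(x))    # (N, C, 1, 1)
        spatial_att = torch.sigmoid(self.spatial_conv(x))   # (N, 1, H, W)
        return x * channel_att * spatial_att                # re-weighted features

feat = torch.randn(1, 32, 64, 64)
print(ChannelSpatialAttention(32)(feat).shape)   # torch.Size([1, 32, 64, 64])
```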

Radiology and Enterprise Medical Imaging Extensions (REMIX)

  • Erdal, Barbaros S
  • Prevedello, Luciano M
  • Qian, Songyue
  • Demirer, Mutlu
  • Little, Kevin
  • Ryu, John
  • O’Donnell, Thomas
  • White, Richard D
Journal of Digital Imaging 2017 Journal Article, cited 1 times
Website

Multisite Image Data Collection and Management Using the RSNA Image Sharing Network

  • Erickson, Bradley J
  • Fajnwaks, Patricio
  • Langer, Steve G
  • Perry, John
Translational oncology 2014 Journal Article, cited 3 times
Website

New prognostic factor telomerase reverse transcriptase promotor mutation presents without MR imaging biomarkers in primary glioblastoma

  • Ersoy, Tunc F
  • Keil, Vera C
  • Hadizadeh, Dariusch R
  • Gielen, Gerrit H
  • Fimmers, Rolf
  • Waha, Andreas
  • Heidenreich, Barbara
  • Kumar, Rajiv
  • Schild, Hans H
  • Simon, Matthias
Neuroradiology 2017 Journal Article, cited 1 times
Website

Adaptive texture energy measure method

  • Ertugrul, Omer Faruk
arXiv preprint arXiv:1406.7075 2014 Journal Article, cited 14 times
Website

Computer-aided detection of Pulmonary Nodules based on SVM in thoracic CT images

  • Eskandarian, Parinaz
  • Bagherzadeh, Jamshid
2015 Conference Proceedings, cited 12 times
Website

Feature fusion for lung nodule classification

  • Farag, Amal A
  • Ali, Asem
  • Elshazly, Salwa
  • Farag, Aly A
International journal of computer assisted radiology and surgery 2017 Journal Article, cited 3 times
Website

Hybrid intelligent approach for diagnosis of the lung nodule from CT images using spatial kernelized fuzzy c-means and ensemble learning

  • Farahani, Farzad Vasheghani
  • Ahmadi, Abbas
  • Zarandi, Mohammad Hossein Fazel
Mathematics and Computers in Simulation 2018 Journal Article, cited 1 times
Website

Computational Challenges and Collaborative Projects in the NCI Quantitative Imaging Network

  • Farahani, Keyvan
  • Kalpathy-Cramer, Jayashree
  • Chenevert, Thomas L
  • Rubin, Daniel L
  • Sunderland, John J
  • Nordstrom, Robert J
  • Buatti, John
  • Hylton, Nola
Tomography: a journal for imaging research 2016 Journal Article, cited 2 times
Website

A study of machine learning and deep learning models for solving medical imaging problems

  • Farhat, Fadi G.
2019 Thesis, cited 0 times
Website
The application of machine learning and deep learning methods to medical imaging aims to create systems that can help in the diagnosis of disease and automate the analysis of medical images in order to facilitate treatment planning. Deep learning methods do well in image recognition, but medical images present unique challenges. The lack of large amounts of data, the image size, and the high class imbalance in most datasets make it a formidable task to train a machine learning model to recognize a particular pattern that is typically present only in case images. Experiments are conducted to classify breast cancer images as healthy or non-healthy, and to detect lesions in damaged brain MRI (Magnetic Resonance Imaging) scans. Random Forest, Logistic Regression and Support Vector Machine perform competitively in the classification experiments, but in general, deep neural networks beat all conventional methods. Gaussian Naïve Bayes (GNB) and the Lesion Identification with Neighborhood Data Analysis (LINDA) methods produce better lesion detection results than single-path neural networks, but a multi-modal, multi-path deep neural network beats all other methods. The importance of pre-processing training data is also highlighted and demonstrated, especially for medical images, which require extensive preparation to improve classifier and detector performance. Only a more complex and deeper neural network combined with properly pre-processed data can produce the desired accuracy levels that can rival and maybe exceed those of human experts.

Signal intensity analysis of ecological defined habitat in soft tissue sarcomas to predict metastasis development

  • Farhidzadeh, Hamidreza
  • Chaudhury, Baishali
  • Scott, Jacob G
  • Goldgof, Dmitry B
  • Hall, Lawrence O
  • Gatenby, Robert A
  • Gillies, Robert J
  • Raghavan, Meera
2016 Conference Proceedings, cited 6 times
Website

DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research

  • Fedorov, Andriy
  • Clunie, David
  • Ulrich, Ethan
  • Bauer, Christian
  • Wahle, Andreas
  • Brown, Bartley
  • Onken, Michael
  • Riesmeier, Jörg
  • Pieper, Steve
  • Kikinis, Ron
PeerJ 2016 Journal Article, cited 20 times
Website

A comparison of two methods for estimating DCE-MRI parameters via individual and cohort based AIFs in prostate cancer: A step towards practical implementation

  • Fedorov, Andriy
  • Fluckiger, Jacob
  • Ayers, Gregory D
  • Li, Xia
  • Gupta, Sandeep N
  • Tempany, Clare
  • Mulkern, Robert
  • Yankeelov, Thomas E
  • Fennessy, Fiona M
Magnetic Resonance Imaging 2014 Journal Article, cited 30 times
Website

An annotated test-retest collection of prostate multiparametric MRI

  • Fedorov, Andriy
  • Schwier, Michael
  • Clunie, David
  • Herz, Christian
  • Pieper, Steve
  • Kikinis, Ron
  • Tempany, Clare
  • Fennessy, Fiona
Scientific data 2018 Journal Article, cited 0 times
Website

Somatostatin Receptor Expression on VHL-Associated Hemangioblastomas Offers Novel Therapeutic Target

  • Feldman, Michael
  • Piazza, Martin G
  • Edwards, Nancy A
  • Ray-Chaudhury, Abhik
  • Maric, Dragan
  • Merrill, Marsha J
  • Zhuang, Zhengping
  • Chittiboina, Prashant
Neurosurgery 2015 Journal Article, cited 0 times

HEVC optimizations for medical environments

  • Fernández, DG
  • Del Barrio, AA
  • Botella, Guillermo
  • García, Carlos
  • Meyer-Baese, Uwe
  • Meyer-Baese, Anke
2016 Conference Proceedings, cited 5 times
Website

Characterization of Pulmonary Nodules Based on Features of Margin Sharpness and Texture

  • Ferreira, José Raniery
  • Oliveira, Marcelo Costa
  • de Azevedo-Marques, Paulo Mazzoncini
Journal of Digital Imaging 2017 Journal Article, cited 1 times
Website

On the Evaluation of the Suitability of the Materials Used to 3D Print Holographic Acoustic Lenses to Correct Transcranial Focused Ultrasound Aberrations

  • Ferri, Marcelino
  • Bravo, Jose Maria
  • Redondo, Javier
  • Jimenez-Gambin, Sergio
  • Jimenez, Noe
  • Camarena, Francisco
  • Sanchez-Perez, Juan Vicente
Polymers (Basel) 2019 Journal Article, cited 2 times
Website
The correction of transcranial focused ultrasound aberrations is a relevant topic for enhancing various non-invasive medical treatments. Presently, the most widely accepted method to improve focusing is the emission through multi-element phased arrays; however, a new disruptive technology, based on 3D printed holographic acoustic lenses, has recently been proposed, overcoming the spatial limitations of phased arrays due to the submillimetric precision of the latest generation of 3D printers. This work aims to optimize this recent solution. Particularly, the preferred acoustic properties of the polymers used for printing the lenses are systematically analyzed, paying special attention to the effect of p-wave speed and its relationship to the achievable voxel size of 3D printers. Results from simulations and experiments clearly show that, given a particular voxel size, there are optimal ranges for lens thickness and p-wave speed, fairly independent of the emitted frequency, the transducer aperture, or the transducer-target distance.

Enhanced Numerical Method for the Design of 3-D-Printed Holographic Acoustic Lenses for Aberration Correction of Single-Element Transcranial Focused Ultrasound

  • Marcelino Ferri
  • José M. Bravo
  • Javier Redondo
  • Juan V. Sánchez-Pérez
Ultrasound in Medicine & Biology 2018 Journal Article, cited 0 times
Website
The correction of transcranial focused ultrasound aberrations is a relevant issue for enhancing various non-invasive medical treatments. The emission through multi-element phased arrays has been the most widely accepted method to improve focusing in recent years; however, the number and size of transducers represent a bottleneck that limits the focusing accuracy of the technique. To overcome this limitation, a new disruptive technology, based on 3-D-printed acoustic lenses, has recently been proposed. As the submillimeter precision of the latest generation of 3-D printers has been proven to overcome the spatial limitations of phased arrays, a new challenge is to improve the accuracy of the numerical simulations required to design this type of ultrasound lens. In the study described here, we evaluated two improvements in the numerical model applied in previous works for the design of 3-D-printed lenses: (i) allowing the propagation of shear waves in the skull by means of its simulation as an isotropic solid and (ii) introduction of absorption into the set of equations that describes the dynamics of the wave in both fluid and solid media. The results obtained in the numerical simulations are evidence that the inclusion of both s-waves and absorption significantly improves focusing.

LCD-OpenPACS: sistema integrado de telerradiologia com auxílio ao diagnóstico de nódulos pulmonares em exames de tomografia computadorizada [LCD-OpenPACS: an integrated teleradiology system with computer-aided diagnosis of pulmonary nodules in computed tomography examinations]

  • Firmino Filho, José Macêdo
2015 Thesis, cited 1 times
Website

Computer-aided detection (CADe) and diagnosis (CADx) system for lung cancer with likelihood of malignancy

  • Firmino, Macedo
  • Angelo, Giovani
  • Morais, Higor
  • Dantas, Marcel R
  • Valentim, Ricardo
Biomedical engineering online 2016 Journal Article, cited 63 times
Website

A Radiogenomic Approach for Decoding Molecular Mechanisms Underlying Tumor Progression in Prostate Cancer

  • Fischer, Sarah
  • Tahoun, Mohamed
  • Klaan, Bastian
  • Thierfelder, Kolja M
  • Weber, Marc-Andre
  • Krause, Bernd J
  • Hakenberg, Oliver
  • Fuellen, Georg
  • Hamed, Mohamed
Cancers (Basel) 2019 Journal Article, cited 0 times
Website
Prostate cancer (PCa) is a genetically heterogeneous cancer entity that causes challenges in pre-treatment clinical evaluation, such as the correct identification of the tumor stage. Conventional clinical tests based on digital rectal examination, Prostate-Specific Antigen (PSA) levels, and Gleason score still lack accuracy for stage prediction. We hypothesize that unraveling the molecular mechanisms underlying PCa staging via integrative analysis of multi-OMICs data could significantly improve the prediction accuracy for PCa pathological stages. We present a radiogenomic approach comprising clinical, imaging, and two genomic (gene and miRNA expression) datasets for 298 PCa patients. Comprehensive analysis of gene and miRNA expression profiles for two frequent PCa stages (T2c and T3b) unraveled the molecular characteristics for each stage and the corresponding gene regulatory interaction network that may drive tumor upstaging from T2c to T3b. Furthermore, four biomarkers (ANPEP, mir-217, mir-592, mir-6715b) were found to distinguish between the two PCa stages and were highly correlated (average r = +/- 0.75) with corresponding aggressiveness-related imaging features in both tumor stages. When combined with related clinical features, these biomarkers markedly improved the prediction accuracy for the pathological stage. Our prediction model exhibits high potential to yield clinically relevant results for characterizing PCa aggressiveness.

The ASNR-ACR-RSNA Common Data Elements Project: What Will It Do for the House of Neuroradiology?

  • Flanders, AE
  • Jordan, JE
American Journal of Neuroradiology 2018 Journal Article, cited 0 times
Website

Computer-aided nodule assessment and risk yield risk management of adenocarcinoma: the future of imaging?

  • Foley, Finbar
  • Rajagopalan, Srinivasan
  • Raghunath, Sushravya M
  • Boland, Jennifer M
  • Karwoski, Ronald A
  • Maldonado, Fabien
  • Bartholmai, Brian J
  • Peikert, Tobias
2016 Conference Proceedings, cited 7 times
Website

A radiomics model from joint FDG-PET and MRI texture features for the prediction of lung metastases in soft-tissue sarcomas of the extremities

  • Freeman, CR
  • Skamene, SR
  • El Naqa, I
Physics in medicine and biology 2015 Journal Article, cited 199 times
Website

Supervised Machine-Learning Framework and Classifier Evaluation for Automated Three-dimensional Medical Image Segmentation based on Body MRI

  • Frischmann, Patrick
2013 Thesis, cited 0 times
Website

Automatic Detection of Lung Nodules Using 3D Deep Convolutional Neural Networks

  • Fu, Ling
  • Ma, Jingchen
  • Chen, Yizhi
  • Larsson, Rasmus
  • Zhao, Jun
Journal of Shanghai Jiaotong University (Science) 2019 Journal Article, cited 0 times
Website
Lung cancer is the leading cause of cancer deaths worldwide. Accurate early diagnosis is critical in increasing the 5-year survival rate of lung cancer, so the efficient and accurate detection of lung nodules, the potential precursors to lung cancer, is paramount. In this paper, a computer-aided lung nodule detection system using 3D deep convolutional neural networks (CNNs) is developed. First, a multi-scale 11-layer 3D fully convolutional network (FCN) is used to screen for all lung nodule candidates. Considering the relatively small size of lung nodules and limited memory, the input of the FCN consists of 3D image patches rather than whole images. The candidates are then classified by a second CNN to obtain the final result. The proposed method achieves high performance in the LUNA16 challenge and demonstrates the effectiveness of using 3D deep CNNs for lung nodule detection.
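
The patch-based input described in this abstract can be illustrated with a simple 3D patch extractor around candidate voxel coordinates; the patch size, padding strategy, and function names below are assumptions for illustration only.

```python
# Illustrative extraction of fixed-size 3D patches around candidate voxels,
# the kind of input the abstract describes feeding to the 3D FCN.
import numpy as np

def extract_patch(volume, center, size=(32, 32, 32)):
    """Zero-pad at borders so every candidate yields a full-size patch."""
    half = [s // 2 for s in size]
    padded = np.pad(volume, [(h, h) for h in half], mode="constant")
    z, y, x = (c + h for c, h in zip(center, half))   # shift indices into padded volume
    return padded[z - half[0]:z + half[0],
                  y - half[1]:y + half[1],
                  x - half[2]:x + half[2]]

ct = np.random.randn(120, 256, 256).astype(np.float32)   # toy CT volume
patch = extract_patch(ct, center=(60, 100, 140))
print(patch.shape)   # (32, 32, 32)
```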

Extraction of pulmonary vessels and tumour from plain computed tomography sequence

  • Ganapathy, Sridevi
  • Ashar, Kinnari
  • Kathirvelu, D
2018 Conference Proceedings, cited 0 times
Website

Performance analysis for nonlinear tomographic data processing

  • Gang, Grace J
  • Guo, Xueqi
  • Stayman IV, J Webster
2019 Conference Proceedings, cited 0 times
Website

An Improved Mammogram Classification Approach Using Back Propagation Neural Network

  • Gautam, Aman
  • Bhateja, Vikrant
  • Tiwari, Ananya
  • Satapathy, Suresh Chandra
2017 Book Section, cited 16 times
Website

A resource for the assessment of lung nodule size estimation methods: database of thoracic CT scans of an anthropomorphic phantom

  • Gavrielides, Marios A
  • Kinnard, Lisa M
  • Myers, Kyle J
  • Peregoy, Jennifer
  • Pritchard, William F
  • Zeng, Rongping
  • Esparza, Juan
  • Karanian, John
  • Petrick, Nicholas
Optics express 2010 Journal Article, cited 50 times
Website

Benefit of overlapping reconstruction for improving the quantitative assessment of CT lung nodule volume

  • Gavrielides, Marios A
  • Zeng, Rongping
  • Myers, Kyle J
  • Sahiner, Berkman
  • Petrick, Nicholas
Academic radiology 2013 Journal Article, cited 23 times
Website

Automatic Segmentation of Colon in 3D CT Images and Removal of Opacified Fluid Using Cascade Feed Forward Neural Network

  • Gayathri Devi, K
  • Radhakrishnan, R
Computational and Mathematical Methods in Medicine 2015 Journal Article, cited 5 times
Website

Segmentation of colon and removal of opacified fluid for virtual colonoscopy

  • Gayathri, Devi K
  • Radhakrishnan, R
  • Rajamani, Kumar
Pattern Analysis and Applications 2017 Journal Article, cited 0 times
Website

Synthetic Head and Neck and Phantom Images for Determining Deformable Image Registration Accuracy in Magnetic Resonance Imaging

  • Ger, Rachel B
  • Yang, Jinzhong
  • Ding, Yao
  • Jacobsen, Megan C
  • Cardenas, Carlos E
  • Fuller, Clifton D
  • Howell, Rebecca M
  • Li, Heng
  • Stafford, R Jason
  • Zhou, Shouhao
Medical physics 2018 Journal Article, cited 0 times
Website

Radiomics features of the primary tumor fail to improve prediction of overall survival in large cohorts of CT- and PET-imaged head and neck cancer patients

  • Ger, Rachel B
  • Zhou, Shouhao
  • Elgohari, Baher
  • Elhalawani, Hesham
  • Mackin, Dennis M
  • Meier, Joseph G
  • Nguyen, Callistus M
  • Anderson, Brian M
  • Gay, Casey
  • Ning, Jing
  • Fuller, Clifton D
  • Li, Heng
  • Howell, Rebecca M
  • Layman, Rick R
  • Mawlawi, Osama
  • Stafford, R Jason
  • Aerts, Hugo JWL
  • Court, Laurence E.
PLoS One 2019 Journal Article, cited 0 times
Website
Radiomics studies require many patients in order to be adequately powered, so patients imaged at different institutions with different imaging protocols are often combined. Various studies have shown that imaging protocols affect radiomics feature values. We examined whether using data from cohorts with controlled imaging protocols improved patient outcome models. We retrospectively reviewed 726 CT and 686 PET images from head and neck cancer patients, who were divided into training or independent testing cohorts. For each patient, radiomics features with different preprocessing were calculated, and two clinical variables (HPV status and tumor volume) were also included. A Cox proportional hazards model was built on the training data by using bootstrapped Lasso regression to predict overall survival. The effect of controlled imaging protocols on model performance was evaluated by subsetting the original training and independent testing cohorts to include only patients whose images were obtained using the same imaging protocol and vendor. Tumor volume, HPV status, and two radiomics covariates were selected for the CT model, resulting in an AUC of 0.72. However, volume alone produced a higher AUC, whereas adding radiomics features reduced the AUC. HPV status and one radiomics feature were selected as covariates for the PET model, resulting in an AUC of 0.59, but neither covariate was significantly associated with survival. Limiting the training and independent testing to patients with the same imaging protocol reduced the AUC for CT patients to 0.55, and no covariates were selected for PET patients. Radiomics features were not consistently associated with survival in CT or PET images of head and neck patients, even within patients with the same imaging protocol.
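
A minimal sketch of the modelling pipeline outlined in this abstract: bootstrapped Lasso for covariate selection followed by a Cox proportional hazards fit, using scikit-learn and lifelines on placeholder data. The penalty strength, the 50% selection-frequency cutoff, and the use of a plain Lasso on survival time (ignoring censoring) are simplifying assumptions, not the authors' method.

```python
# Illustrative bootstrapped-Lasso covariate selection followed by a Cox PH fit.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 20
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"feat{i}" for i in range(p)])
time = rng.exponential(scale=np.exp(X["feat0"].to_numpy() + 3.0))  # toy survival times
event = rng.integers(0, 2, size=n)                                 # 1 = event observed

# Bootstrap the Lasso and keep features selected in >= 50% of resamples
# (simplification: plain Lasso on survival time, censoring ignored).
counts = np.zeros(p)
for _ in range(100):
    idx = rng.integers(0, n, size=n)
    counts += Lasso(alpha=1.0).fit(X.iloc[idx], time[idx]).coef_ != 0
selected = list(X.columns[counts >= 50])

df = X[selected].copy()
df["T"], df["E"] = time, event
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
cph.print_summary()
```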

Glioblastoma multiforme: exploratory radiogenomic analysis by using quantitative image features

  • Gevaert, O.
  • Mitchell, L. A.
  • Achrol, A. S.
  • Xu, J.
  • Echegaray, S.
  • Steinberg, G. K.
  • Cheshier, S. H.
  • Napel, S.
  • Zaharchuk, G.
  • Plevritis, S. K.
2014 Dataset, cited 151 times
Website

Glioblastoma Multiforme: Exploratory Radiogenomic Analysis by Using Quantitative Image Features

  • Gevaert, Olivier
  • Mitchell, Lex A
  • Achrol, Achal S
  • Xu, Jiajing
  • Echegaray, Sebastian
  • Steinberg, Gary K
  • Cheshier, Samuel H
  • Napel, Sandy
  • Zaharchuk, Greg
  • Plevritis, Sylvia K
Radiology 2014 Journal Article, cited 151 times
Website

Non-small cell lung cancer: identifying prognostic imaging biomarkers by leveraging public gene expression microarray data--methods and preliminary results

  • Gevaert, O.
  • Xu, J.
  • Hoang, C. D.
  • Leung, A. N.
  • Xu, Y.
  • Quon, A.
  • Rubin, D. L.
  • Napel, S.
  • Plevritis, S. K.
Radiology 2012 Journal Article, cited 187 times
Website
PURPOSE: To identify prognostic imaging biomarkers in non-small cell lung cancer (NSCLC) by means of a radiogenomics strategy that integrates gene expression and medical images in patients for whom survival outcomes are not available by leveraging survival data in public gene expression data sets. MATERIALS AND METHODS: A radiogenomics strategy for associating image features with clusters of coexpressed genes (metagenes) was defined. First, a radiogenomics correlation map is created for a pairwise association between image features and metagenes. Next, predictive models of metagenes are built in terms of image features by using sparse linear regression. Similarly, predictive models of image features are built in terms of metagenes. Finally, the prognostic significance of the predicted image features are evaluated in a public gene expression data set with survival outcomes. This radiogenomics strategy was applied to a cohort of 26 patients with NSCLC for whom gene expression and 180 image features from computed tomography (CT) and positron emission tomography (PET)/CT were available. RESULTS: There were 243 statistically significant pairwise correlations between image features and metagenes of NSCLC. Metagenes were predicted in terms of image features with an accuracy of 59%-83%. One hundred fourteen of 180 CT image features and the PET standardized uptake value were predicted in terms of metagenes with an accuracy of 65%-86%. When the predicted image features were mapped to a public gene expression data set with survival outcomes, tumor size, edge shape, and sharpness ranked highest for prognostic significance. CONCLUSION: This radiogenomics strategy for identifying imaging biomarkers may enable a more rapid evaluation of novel imaging modalities, thereby accelerating their translation to personalized medicine.
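
The first step of the radiogenomics strategy described in this abstract, a pairwise correlation map between image features and metagenes, could be sketched as follows; the choice of Spearman correlation, the uncorrected significance threshold, and the placeholder data are assumptions.

```python
# Illustrative pairwise image-feature x metagene correlation map.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients, n_features, n_metagenes = 26, 180, 10
image_features = rng.normal(size=(n_patients, n_features))   # placeholder features
metagenes = rng.normal(size=(n_patients, n_metagenes))        # placeholder metagenes

corr = np.zeros((n_features, n_metagenes))
pval = np.zeros_like(corr)
for i in range(n_features):
    for j in range(n_metagenes):
        corr[i, j], pval[i, j] = stats.spearmanr(image_features[:, i], metagenes[:, j])

significant = pval < 0.05          # uncorrected threshold, purely illustrative
print("significant feature-metagene pairs:", int(significant.sum()))
```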

Non-small cell lung cancer: identifying prognostic imaging biomarkers by leveraging public gene expression microarray data--methods and preliminary results.

  • Gevaert, O.
  • Xu, J.
  • Hoang, C. D.
  • Leung, A. N.
  • Xu, Y.
  • Quon, A.
  • Rubin, D. L.
  • Napel, S.
  • Plevritis, S. K.
2014 Dataset, cited 187 times
Website

Medical Imaging Segmentation Assessment via Bayesian Approaches to Fusion, Accuracy and Variability Estimation with Application to Head and Neck Cancer

  • Ghattas, Andrew Emile
2017 Thesis, cited 0 times
Website

Deep Learning for Low-Dose CT Denoising Using Perceptual Loss and Edge Detection Layer

  • Gholizadeh-Ansari, M.
  • Alirezaie, J.
  • Babyn, P.
J Digit Imaging 2019 Journal Article, cited 1 times
Website
Low-dose CT denoising is a challenging task that has been studied by many researchers. Some studies have used deep neural networks to improve the quality of low-dose CT images and achieved fruitful results. In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolutions, helping to capture more contextual information in fewer layers. Also, we have employed residual learning by creating shortcut connections to transmit image information from the early layers to later ones. To further improve the performance of the network, we have introduced a non-trainable edge detection layer that extracts edges in horizontal, vertical, and diagonal directions. Finally, we demonstrate that optimizing the network by a combination of mean-square error loss and perceptual loss preserves many structural details in the CT image. This objective function does not suffer from the over-smoothing and blurring effects caused by per-pixel loss or the grid-like artifacts resulting from perceptual loss. The experiments show that each modification to the network improves the outcome while changing the complexity of the network only minimally.
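
The non-trainable edge detection layer mentioned in this abstract can be illustrated with fixed directional kernels in PyTorch; Sobel-style filters in four directions are an assumption, since the authors' exact kernels are not given here.

```python
# Illustrative fixed (non-trainable) edge-detection layer with horizontal,
# vertical, and two diagonal kernels applied to a single-channel CT slice.
import torch
import torch.nn as nn

class EdgeDetectionLayer(nn.Module):
    def __init__(self):
        super().__init__()
        h = torch.tensor([[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]])   # horizontal
        v = h.t()                                                          # vertical
        d1 = torch.tensor([[0., 1., 2.], [-1., 0., 1.], [-2., -1., 0.]])   # diagonal
        d2 = torch.flip(d1, dims=[1])                                      # anti-diagonal
        kernels = torch.stack([h, v, d1, d2]).unsqueeze(1)                 # (4, 1, 3, 3)
        self.conv = nn.Conv2d(1, 4, kernel_size=3, padding=1, bias=False)
        self.conv.weight = nn.Parameter(kernels, requires_grad=False)      # frozen

    def forward(self, x):
        return self.conv(x)

edges = EdgeDetectionLayer()(torch.randn(1, 1, 64, 64))
print(edges.shape)   # torch.Size([1, 4, 64, 64])
```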

Brain tumor detection from MRI image: An approach

  • Ghosh, Debjyoti
  • Bandyopadhyay, Samir Kumar
IJAR 2017 Journal Article, cited 0 times
Website

Role of Imaging in the Era of Precision Medicine

  • Giardino, Angela
  • Gupta, Supriya
  • Olson, Emmi
  • Sepulveda, Karla
  • Lenchik, Leon
  • Ivanidze, Jana
  • Rakow-Penner, Rebecca
  • Patel, Midhir J
  • Subramaniam, Rathan M
  • Ganeshan, Dhakshinamoorthy
Academic radiology 2017 Journal Article, cited 12 times
Website

Automatic Multi-organ Segmentation on Abdominal CT with Dense V-networks

  • E. Gibson
  • F. Giganti
  • Y. Hu
  • E. Bonmati
  • S. Bandula
  • K. Gurusamy
  • B. Davidson
  • S. P. Pereira
  • M. J. Clarkson
  • D. C. Barratt
IEEE Transactions on Medical Imaging 2018 Journal Article, cited 14 times
Website

Towards Image-Guided Pancreas and Biliary Endoscopy: Automatic Multi-organ Segmentation on Abdominal CT with Dense Dilated Networks

  • Gibson, Eli
  • Giganti, Francesco
  • Hu, Yipeng
  • Bonmati, Ester
  • Bandula, Steve
  • Gurusamy, Kurinchi
  • Davidson, Brian R
  • Pereira, Stephen P
  • Clarkson, Matthew J
  • Barratt, Dean C
2017 Conference Proceedings, cited 14 times
Website

Quantitative CT assessment of emphysema and airways in relation to lung cancer risk

  • Gierada, David S
  • Guniganti, Preethi
  • Newman, Blake J
  • Dransfield, Mark T
  • Kvale, Paul A
  • Lynch, David A
  • Pilgram, Thomas K
Radiology 2011 Journal Article, cited 41 times
Website

Projected outcomes using different nodule sizes to define a positive CT lung cancer screening examination

  • Gierada, David S
  • Pinsky, Paul
  • Nath, Hrudaya
  • Chiles, Caroline
  • Duan, Fenghai
  • Aberle, Denise R
Journal of the National Cancer Institute 2014 Journal Article, cited 74 times
Website

Machine Learning in Medical Imaging

  • Giger, M. L.
J Am Coll Radiol 2018 Journal Article, cited 157 times
Website
Advances in both imaging and computers have synergistically led to a rapid rise in the potential use of artificial intelligence in various radiological imaging tasks, such as risk assessment, detection, diagnosis, prognosis, and therapy response, as well as in multi-omics disease discovery. A brief overview of the field is given here, allowing the reader to recognize the terminology, the various subfields, and components of machine learning, as well as the clinical potential. Radiomics, an expansion of computer-aided diagnosis, has been defined as the conversion of images to minable data. The ultimate benefit of quantitative radiomics is to (1) yield predictive image-based phenotypes of disease for precision medicine or (2) yield quantitative image-based phenotypes for data mining with other -omics for discovery (ie, imaging genomics). For deep learning in radiology to succeed, note that well-annotated large data sets are needed since deep networks are complex, computer software and hardware are evolving constantly, and subtle differences in disease states are more difficult to perceive than differences in everyday objects. In the future, machine learning in radiology is expected to have a substantial clinical impact with imaging examinations being routinely obtained in clinical practice, providing an opportunity to improve decision support in medical image interpretation. The term of note is decision support, indicating that computers will augment human decision making, making it more effective and efficient. The clinical impact of having computers in the routine clinical practice may allow radiologists to further integrate their knowledge with their clinical colleagues in other medical specialties and allow for precision medicine.

Radiomics: Images are more than pictures, they are data

  • Gillies, Robert J
  • Kinahan, Paul E
  • Hricak, Hedvig
Radiology 2015 Journal Article, cited 694 times
Website

Intuitive Error Space Exploration of Medical Image Data in Clinical Daily Routine

  • Gillmann, Christina
  • Arbeláez, Pablo
  • Peñaloza, José Tiberio Hernández
  • Hagen, Hans
  • Wischgoll, Thomas
2017 Conference Paper, cited 3 times
Website

DeepCADe: A Deep Learning Architecture for the Detection of Lung Nodules in CT Scans

  • Golan, Rotem
2018 Thesis, cited 0 times
Website

Lung nodule detection in CT images using deep convolutional neural networks

  • Golan, Rotem
  • Jacob, Christian
  • Denzinger, Jörg
2016 Conference Proceedings, cited 26 times
Website

Automatic detection of pulmonary nodules in CT images by incorporating 3D tensor filtering with local image feature analysis

  • Gong, J.
  • Liu, J. Y.
  • Wang, L. J.
  • Sun, X. W.
  • Zheng, B.
  • Nie, S. D.
Physica Medica 2018 Journal Article, cited 4 times
Website

Optimal Statistical incorporation of independent feature Stability information into Radiomics Studies

  • Götz, Michael
  • Maier-Hein, Klaus H
Scientific reports 2020 Journal Article, cited 0 times
Website

Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique

  • Greenspan, Hayit
  • van Ginneken, Bram
  • Summers, Ronald M
IEEE Transactions on Medical Imaging 2016 Journal Article, cited 395 times
Website

Imaging and clinical data archive for head and neck squamous cell carcinoma patients treated with radiotherapy

  • Grossberg, Aaron J
  • Mohamed, Abdallah SR
  • El Halawani, Hesham
  • Bennett, William C
  • Smith, Kirk E
  • Nolan, Tracy S
  • Williams, Bowman
  • Chamchod, Sasikarn
  • Heukelom, Jolien
  • Kantor, Michael E
Scientific data 2018 Journal Article, cited 0 times
Website

Imaging-genomics reveals driving pathways of MRI derived volumetric tumor phenotype features in Glioblastoma

  • Grossmann, Patrick
  • Gutman, David A
  • Dunn, William D
  • Holder, Chad A
  • Aerts, Hugo JWL
BMC cancer 2016 Journal Article, cited 21 times
Website

Defining the biological and clinical basis of radiomics: towards clinical imaging biomarkers

  • Grossmann, Patrick Benedict Hans Juan
2018 Thesis, cited 0 times
Website

Data from: Quantitative computed tomographic descriptors associate tumor shape complexity and intratumor heterogeneity with prognosis in lung adenocarcinoma

  • Grove O,
  • Berglund AE,
  • Schabath MB,
  • Aerts HJ,
  • Dekker A,
  • Wang H,
  • Velazquez ER,
  • Lambin P,
  • Gu Y,
  • Balagurunathan Y,
  • Eikman E,
  • Gatenby RA,
  • Eschrich S,
  • Gillies RJ.
2015 Dataset, cited 87 times
Website

Quantitative Computed Tomographic Descriptors Associate Tumor Shape Complexity and Intratumor Heterogeneity with Prognosis in Lung Adenocarcinoma

  • Grove, Olya
  • Berglund, Anders E
  • Schabath, Matthew B
  • Aerts, Hugo JWL
  • Dekker, Andre
  • Wang, Hua
  • Velazquez, Emmanuel Rios
  • Lambin, Philippe
  • Gu, Yuhua
  • Balagurunathan, Yoganand
PLoS One 2015 Journal Article, cited 87 times
Website

Using Deep Learning for Pulmonary Nodule Detection & Diagnosis

  • Gruetzemacher, Richard
  • Gupta, Ashish
2016 Conference Paper, cited 0 times

Smooth extrapolation of unknown anatomy via statistical shape models

  • Grupp, RB
  • Chiang, H
  • Otake, Y
  • Murphy, RJ
  • Gordon, CR
  • Armand, M
  • Taylor, RH
2015 Conference Proceedings, cited 2 times
Website

Generative Models and Feature Extraction on Patient Images and Structure Data in Radiation Therapy

  • Gruselius, Hanna
Mathematics 2018 Thesis, cited 0 times
Website

Exploit fully automatic low-level segmented PET data for training high-level deep learning algorithms for the corresponding CT data

  • Gsaxner, Christina
  • Roth, Peter M
  • Wallner, Jurgen
  • Egger, Jan
PLoS One 2019 Journal Article, cited 0 times
Website
We present an approach for fully automatic urinary bladder segmentation in CT images with artificial neural networks in this study. Automatic medical image analysis has become an invaluable tool in the different treatment stages of diseases. Especially medical image segmentation plays a vital role, since segmentation is often the initial step in an image analysis pipeline. Since deep neural networks have made a large impact on the field of image processing in the past years, we use two different deep learning architectures to segment the urinary bladder. Both of these architectures are based on pre-trained classification networks that are adapted to perform semantic segmentation. Since deep neural networks require a large amount of training data, specifically images and corresponding ground truth labels, we furthermore propose a method to generate such a suitable training data set from Positron Emission Tomography/Computed Tomography image data. This is done by applying thresholding to the Positron Emission Tomography data for obtaining a ground truth and by utilizing data augmentation to enlarge the dataset. In this study, we discuss the influence of data augmentation on the segmentation results, and compare and evaluate the proposed architectures in terms of qualitative and quantitative segmentation performance. The results presented in this study allow concluding that deep neural networks can be considered a promising approach to segment the urinary bladder in CT images.
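
A minimal sketch of the label-generation step described above — thresholding the co-registered PET volume to obtain a bladder mask for the corresponding CT, then augmenting the training pairs. The uptake threshold, array names and augmentation choices below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy import ndimage

def pet_to_labels(pet_volume, threshold_fraction=0.4):
    """Derive a binary ground-truth mask from a co-registered PET volume.

    A voxel is labelled foreground when its uptake exceeds a fixed fraction of
    the volume maximum (threshold_fraction is an illustrative value). Only the
    largest connected component is kept to suppress spurious uptake.
    """
    mask = pet_volume > threshold_fraction * pet_volume.max()
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

def augment(ct_slice, label_slice, rng):
    """Simple augmentation: random flip and small rotation applied jointly."""
    if rng.random() < 0.5:
        ct_slice, label_slice = np.fliplr(ct_slice), np.fliplr(label_slice)
    angle = rng.uniform(-10, 10)
    ct_slice = ndimage.rotate(ct_slice, angle, reshape=False, order=1)
    label_slice = ndimage.rotate(label_slice.astype(float), angle,
                                 reshape=False, order=0) > 0.5
    return ct_slice, label_slice

rng = np.random.default_rng(0)
pet = rng.random((4, 32, 32))                  # toy PET volume
mask = pet_to_labels(pet)
ct_aug, mask_aug = augment(rng.normal(size=(32, 32)), mask[0], rng)
print(mask.sum(), mask_aug.shape)
```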

Automatic lung nodule detection using multi-scale dot nodule-enhancement filter and weighted support vector machines in chest computed tomography

  • Gu, Y.
  • Lu, X.
  • Zhang, B.
  • Zhao, Y.
  • Yu, D.
  • Gao, L.
  • Cui, G.
  • Wu, L.
  • Zhou, T.
PLoS One 2019 Journal Article, cited 0 times
Website
A novel CAD scheme for automated lung nodule detection is proposed to assist radiologists with the detection of lung cancer on CT scans. The proposed scheme is composed of four major steps: (1) lung volume segmentation, (2) nodule candidate extraction and grouping, (3) false positives reduction for the non-vessel tree group, and (4) classification for the vessel tree group. Lung segmentation is performed first. Then, 3D labeling technology is used to divide nodule candidates into two groups. For the non-vessel tree group, nodule candidates are classified as true nodules at the false positive reduction stage if the candidates survive the rule-based classifier and are not screened out by the dot filter. For the vessel tree group, nodule candidates are extracted using dot filter. Next, RSFS feature selection is used to select the most discriminating features for classification. Finally, WSVM with an undersampling approach is adopted to discriminate true nodules from vessel bifurcations in vessel tree group. The proposed method was evaluated on 154 thin-slice scans with 204 nodules in the LIDC database. The performance of the proposed CAD scheme yielded a high sensitivity (87.81%) while maintaining a low false rate (1.057 FPs/scan). The experimental results indicate the performance of our method may be better than the existing methods.
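
For the final classification step, a weighted SVM combined with undersampling of the majority class can be sketched with scikit-learn as below. The synthetic feature matrix, the undersampling ratio and the use of class_weight="balanced" are illustrative assumptions standing in for the paper's RSFS-selected features and WSVM.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def undersample(X, y, rng, ratio=1.0):
    """Randomly drop majority-class (vessel) samples so that
    n_majority ~= ratio * n_minority. The ratio is an illustrative choice."""
    pos = np.flatnonzero(y == 1)          # true nodules (minority class)
    neg = np.flatnonzero(y == 0)          # vessel bifurcations (majority class)
    keep_neg = rng.choice(neg, size=min(len(neg), int(ratio * len(pos))),
                          replace=False)
    idx = np.concatenate([pos, keep_neg])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))            # stand-in candidate features
y = (rng.random(500) < 0.1).astype(int)   # 1 = nodule, 0 = vessel

X_bal, y_bal = undersample(X, y, rng)
# class_weight="balanced" re-weights the hinge loss, approximating a weighted SVM.
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", class_weight="balanced", probability=True))
clf.fit(X_bal, y_bal)
print(clf.predict_proba(X[:5])[:, 1])
```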

Automatic Colorectal Segmentation with Convolutional Neural Network

  • Guachi, Lorena
  • Guachi, Robinson
  • Bini, Fabiano
  • Marinozzi, Franco
Computer-Aided Design and Applications 2019 Journal Article, cited 3 times
Website
This paper presents a new method for colon tissue segmentation in Computed Tomography images that takes advantage of deep, hierarchical learning of colon features through Convolutional Neural Networks (CNN). The proposed method robustly reduces misclassified colon-tissue pixels introduced by noise, artifacts, unclear edges, and other organs or areas with the same intensity value as the colon. Patch analysis allows the classification of each center pixel as a colon-tissue or background pixel. Experimental results demonstrate that the proposed method achieves higher sensitivity and specificity than three state-of-the-art methods.

User-centered design and evaluation of interactive segmentation methods for medical images

  • Gueziri, Houssem-Eddine
2017 Thesis, cited 1 times
Website
Segmentation of medical images is a challenging task that aims to identify a particular structure present on the image. Among the existing methods involving the user at different levels, from a fully-manual to a fully-automated task, interactive segmentation methods provide assistance to the user during the task to reduce the variability in the results and allow occasional corrections of segmentation failures. Therefore, they offer a compromise between the segmentation efficiency and the accuracy of the results. It is the user who judges whether the results are satisfactory and how to correct them during the segmentation, making the process subject to human factors. Despite the strong influence of the user on the outcomes of a segmentation task, the impact of such factors has received little attention, with the literature focusing the assessment of segmentation processes on computational performance. Yet, involving the user performance in the analysis is more representative of a realistic scenario. Our goal is to explore the user behaviour in order to improve the efficiency of interactive image segmentation processes. This is achieved through three contributions. First, we developed a method which is based on a new user interaction mechanism to provide hints as to where to concentrate the computations. This significantly improves the computation efficiency without sacrificing the quality of the segmentation. The benefits of using such hints are twofold: (i) because our contribution is based on user interaction, it generalizes to a wide range of segmentation methods, and (ii) it gives comprehensive indications about where to focus the segmentation search. The latter advantage is used to achieve the second contribution. We developed an automated method based on a multi-scale strategy to: (i) reduce the user's workload and, (ii) improve the computational time up to tenfold, allowing real-time segmentation feedback. Third, we have investigated the effects of such improvements in computations on the user's performance. We report an experiment that manipulates the delay induced by the computation time while performing an interactive segmentation task. Results reveal that the influence of this delay can be significantly reduced with an appropriate interaction mechanism design. In conclusion, this project provides an effective image segmentation solution that has been developed in compliance with user performance requirements. We validated our approach through multiple user studies that provided a step forward into understanding the user behaviour during interactive image segmentation.

User-guided graph reduction for fast image segmentation

  • Gueziri, Houssem-Eddine
  • McGuffin, Michael J
  • Laporte, Catherine
2015 Conference Proceedings, cited 2 times
Website

A generalized graph reduction framework for interactive segmentation of large images

  • Gueziri, Houssem-Eddine
  • McGuffin, Michael J
  • Laporte, Catherine
Computer Vision and Image Understanding 2016 Journal Article, cited 5 times
Website

Feature selection and patch-based segmentation in MRI for prostate radiotherapy

  • Guinin, M
  • Ruan, S
  • Dubray, B
  • Massoptier, L
  • Gardin, I
2016 Conference Proceedings, cited 0 times
Website

Prediction of clinical phenotypes in invasive breast carcinomas from the integration of radiomics and genomics data

  • Guo, Wentian
  • Li, Hui
  • Zhu, Yitan
  • Lan, Li
  • Yang, Shengjie
  • Drukker, Karen
  • Morris, Elizabeth
  • Burnside, Elizabeth
  • Whitman, Gary
  • Giger, Maryellen L
Journal of Medical Imaging 2015 Journal Article, cited 57 times
Website

A tool for lung nodules analysis based on segmentation and morphological operation

  • Gupta, Anindya
  • Martens, Olev
  • Le Moullec, Yannick
  • Saar, Tonis
2015 Conference Proceedings, cited 4 times
Website

Brain Tumor Detection using Curvelet Transform and Support Vector Machine

  • Gupta, Bhawna
  • Tiwari, Shamik
International Journal of Computer Science and Mobile Computing 2014 Journal Article, cited 8 times
Website

Appropriate Contrast Enhancement Measures for Brain and Breast Cancer Images

  • Gupta, Suneet
  • Porwal, Rabins
International Journal of Biomedical Imaging 2016 Journal Article, cited 10 times
Website

The REMBRANDT study, a large collection of genomic data from brain cancer patients

  • Gusev, Yuriy
  • Bhuvaneshwar, Krithika
  • Song, Lei
  • Zenklusen, Jean-Claude
  • Fine, Howard
  • Madhavan, Subha
Scientific data 2018 Journal Article, cited 1 times
Website

MR Imaging Predictors of Molecular Profile and Survival: Multi-institutional Study of the TCGA Glioblastoma Data Set

  • Gutman DA,
  • Cooper LA,
  • Hwang SN,
  • Holder CA,
  • Gao J,
  • Aurora TD,
  • Dunn WD Jr,
  • Scarpace L,
  • Mikkelsen T,
  • Jain R,
  • Wintermark M,
  • Jilwan M,
  • Raghavan P,
  • Huang E,
  • Clifford RJ,
  • Mongkolwat P,
  • Kleper V,
  • Freymann J,
  • Kirby J,
  • Zinn PO,
  • Moreno CS,
  • Jaffe C,
  • Colen R,
  • Rubin DL,
  • Saltz J,
  • Flanders A,
  • Brat DJ
2014 Dataset, cited 217 times
Website

Cancer Digital Slide Archive: an informatics resource to support integrated in silico analysis of TCGA pathology data

  • Gutman, David A
  • Cobb, Jake
  • Somanna, Dhananjaya
  • Park, Yuna
  • Wang, Fusheng
  • Kurc, Tahsin
  • Saltz, Joel H
  • Brat, Daniel J
  • Cooper, Lee AD
  • Kong, Jun
Journal of the American Medical Informatics Association 2013 Journal Article, cited 70 times
Website

MR Imaging Predictors of Molecular Profile and Survival: Multi-institutional Study of the TCGA Glioblastoma Data Set

  • Gutman, David A
  • Cooper, Lee AD
  • Hwang, Scott N
  • Holder, Chad A
  • Gao, JingJing
  • Aurora, Tarun D
  • Dunn, William D
  • Scarpace, Lisa
  • Mikkelsen, Tom
  • Jain, Rajan
Radiology 2013 Journal Article, cited 217 times
Website

Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer

  • Gutman, David A
  • Dunn Jr, William D
  • Cobb, Jake
  • Stoner, Richard M
  • Kalpathy-Cramer, Jayashree
  • Erickson, Bradley
Frontiers in Neuroinformatics 2014 Journal Article, cited 12 times
Website

Somatic mutations associated with MRI-derived volumetric features in glioblastoma

  • Gutman, David A
  • Dunn Jr, William D
  • Grossmann, Patrick
  • Cooper, Lee AD
  • Holder, Chad A
  • Ligon, Keith L
  • Alexander, Brian M
  • Aerts, Hugo JWL
Neuroradiology 2015 Journal Article, cited 45 times
Website

Optimising delineation accuracy of tumours in PET for radiotherapy planning using blind deconvolution

  • Guvenis, A
  • Koc, A
Radiation Protection Dosimetry 2015 Journal Article, cited 3 times
Website

Multi-faceted computational assessment of risk and progression in oligodendroglioma implicates NOTCH and PI3K pathways

  • Halani, Sameer H
  • Yousefi, Safoora
  • Vega, Jose Velazquez
  • Rossi, Michael R
  • Zhao, Zheng
  • Amrollahi, Fatemeh
  • Holder, Chad A
  • Baxter-Stoltzfus, Amelia
  • Eschbacher, Jennifer
  • Griffith, Brent
NPJ precision oncology 2018 Journal Article, cited 0 times
Website

Vector quantization-based automatic detection of pulmonary nodules in thoracic CT images

  • Han, Hao
  • Li, Lihong
  • Han, Fangfang
  • Zhang, Hao
  • Moore, William
  • Liang, Zhengrong
2013 Conference Proceedings, cited 8 times
Website

A novel computer-aided detection system for pulmonary nodule identification in CT images

  • Han, Hao
  • Li, Lihong
  • Wang, Huafeng
  • Zhang, Hao
  • Moore, William
  • Liang, Zhengrong
2014 Conference Proceedings, cited 5 times
Website

MRI to MGMT: predicting methylation status in glioblastoma patients using convolutional recurrent neural networks

  • Han, Lichy
  • Kamdar, Maulik R.
2018 Conference Paper, cited 5 times
Website
Glioblastoma Multiforme (GBM), a malignant brain tumor, is among the most lethal of all cancers. Temozolomide is the primary chemotherapy treatment for patients diagnosed with GBM. The methylation status of the promoter or the enhancer regions of the O6-methylguanine methyltransferase (MGMT) gene may impact the efficacy and sensitivity of temozolomide, and hence may affect overall patient survival. Microscopic genetic changes may manifest as macroscopic morphological changes in the brain tumors that can be detected using magnetic resonance imaging (MRI), which can serve as noninvasive biomarkers for determining methylation of MGMT regulatory regions. In this research, we use a compendium of brain MRI scans of GBM patients collected from The Cancer Imaging Archive (TCIA) combined with methylation data from The Cancer Genome Atlas (TCGA) to predict the methylation state of the MGMT regulatory regions in these patients. Our approach relies on a bi-directional convolutional recurrent neural network architecture (CRNN) that leverages the spatial aspects of these 3-dimensional MRI scans. Our CRNN obtains an accuracy of 67% on the validation data and 62% on the test data, with precision and recall both at 67%, suggesting the existence of MRI features that may complement existing markers for GBM patient stratification and prognosis. We have additionally presented our model via a novel neural network visualization platform, which we have developed to improve interpretability of deep learning MRI-based classification models.

Predictive capabilities of statistical learning methods for lung nodule malignancy classification using diagnostic image features: an investigation using the Lung Image Database Consortium dataset

  • Hancock, Matthew C
  • Magnan, Jerry F
2017 Conference Proceedings, cited 0 times
Website

Descriptions and evaluations of methods for determining surface curvature in volumetric data

  • Hauenstein, Jacob D.
  • Newman, Timothy S.
Computers & Graphics 2020 Journal Article, cited 0 times
Website
Highlights:
  • Methods using convolution or fitting are often the most accurate.
  • The existing TE method is fast and accurate on noise-free data.
  • The OP method is faster than existing, similarly accurate methods on real data.
  • Even modest errors in curvature notably impact curvature-based renderings.
  • On real data, GSTH, GSTI, and OP produce the best curvature-based renderings.
Abstract: Three methods developed for determining surface curvature in volumetric data are described, including one convolution-based method, one fitting-based method, and one method that uses normal estimates to directly determine curvature. Additionally, a study of the accuracy and computational performance of these methods and prior methods is presented. The study considers synthetic data, noise-added synthetic data, and real data. Sample volume renderings using curvature-based transfer functions, where curvatures were determined with the methods, are also exhibited.

A biomarker basing on radiomics for the prediction of overall survival in non–small cell lung cancer patients

  • He, Bo
  • Zhao, Wei
  • Pi, Jiang-Yuan
  • Han, Dan
  • Jiang, Yuan-Ming
  • Zhang, Zhen-Guang
Respiratory research 2018 Journal Article, cited 0 times
Website

Feasibility study of a multi-criteria decision-making based hierarchical model for multi-modality feature and multi-classifier fusion: Applications in medical prognosis prediction

  • He, Qiang
  • Li, Xin
  • Kim, DW Nathan
  • Jia, Xun
  • Gu, Xuejun
  • Zhen, Xin
  • Zhou, Linghong
Information Fusion 2020 Journal Article, cited 0 times
Website

Fast Super-Resolution in MRI Images Using Phase Stretch Transform, Anchored Point Regression and Zero-Data Learning

  • He, Sifeng
  • Jalali, Bahram
2019 Conference Proceedings, cited 0 times
Website
Medical imaging is fundamentally challenging due to absorption and scattering in tissues and by the need to minimize illumination of the patient with harmful radiation. Common problems are low spatial resolution, limited dynamic range and low contrast. These predicaments have fueled interest in enhancing medical images using digital post processing. In this paper, we propose and demonstrate an algorithm for real-time inference that is suitable for edge computing. Our locally adaptive learned filtering technique named Phase Stretch Anchored Regression (PhSAR) combines the Phase Stretch Transform for local features extraction in visually impaired images with clustered anchored points to represent image feature space and fast regression based learning. In contrast with the recent widely-used deep neural network for image super-resolution, our algorithm achieves significantly faster inference and less hallucination on image details and is interpretable. Tests on brain MRI images using zero-data learning reveal its robustness with explicit PSNR improvement and lower latency compared to relevant benchmarks.

A Comparison of the Efficiency of Using a Deep CNN Approach with Other Common Regression Methods for the Prediction of EGFR Expression in Glioblastoma Patients

  • Hedyehzadeh, Mohammadreza
  • Maghooli, Keivan
  • MomenGharibvand, Mohammad
  • Pistorius, Stephen
J Digit Imaging 2019 Journal Article, cited 0 times
Website
To estimate epidermal growth factor receptor (EGFR) expression level in glioblastoma (GBM) patients using radiogenomic analysis of magnetic resonance images (MRI). A comparative study using a deep convolutional neural network (CNN)-based regression, deep neural network, least absolute shrinkage and selection operator (LASSO) regression, elastic net regression, and linear regression with no regularization was carried out to estimate EGFR expression of 166 GBM patients. Except for the deep CNN case, overfitting was prevented by using feature selection, and loss values for each method were compared. The loss values in the training phase for deep CNN, deep neural network, elastic net, LASSO, and the linear regression with no regularization were 2.90, 8.69, 7.13, 14.63, and 21.76, respectively, while in the test phase, the loss values were 5.94, 10.28, 13.61, 17.32, and 24.19, respectively. These results illustrate that the efficiency of the deep CNN approach is better than that of the other methods, including LASSO regression, which is a regression method known for its advantage in high-dimension cases. A comparison between deep CNN, deep neural network, and three other common regression methods was carried out, and the efficiency of the CNN deep learning approach, in comparison with other regression models, was demonstrated.

Multiparametric MRI of prostate cancer: An update on state‐of‐the‐art techniques and their performance in detecting and localizing prostate cancer

  • Hegde, John V
  • Mulkern, Robert V
  • Panych, Lawrence P
  • Fennessy, Fiona M
  • Fedorov, Andriy
  • Maier, Stephan E
  • Tempany, Clare
Journal of Magnetic Resonance Imaging 2013 Journal Article, cited 164 times
Website

Deep Feature Learning For Soft Tissue Sarcoma Classification In MR Images Via Transfer Learning

  • Hermessi, Haithem
  • Mourali, Olfa
  • Zagrouba, Ezzeddine
Expert Systems with Applications 2018 Journal Article, cited 0 times
Website

Transfer learning with multiple convolutional neural networks for soft tissue sarcoma MRI classification

  • Hermessi, Haithem
  • Mourali, Olfa
  • Zagrouba, Ezzeddine
2019 Conference Proceedings, cited 1 times
Website

Quantitative Radiology: Applications to Oncology

  • Herskovits, Edward H
Emerging Applications of Molecular Imaging to Oncology 2014 Journal Article, cited 1 times
Website

Design of a Patient-Specific Radiotherapy Treatment Target

  • Heyns, Michael
  • Breseman, Kelsey
  • Lee, Christopher
  • Bloch, B Nicholas
  • Jaffe, Carl
  • Xiang, Hong
2013 Conference Proceedings, cited 3 times
Website

Automated Muscle Segmentation from Clinical CT using Bayesian U-Net for Personalized Musculoskeletal Modeling

  • Hiasa, Yuta
  • Otake, Yoshito
  • Takao, Masaki
  • Ogawa, Takeshi
  • Sugano, Nobuhiko
  • Sato, Yoshinobu
IEEE Trans Med Imaging 2019 Journal Article, cited 2 times
Website
We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891+/-0.016 (mean+/-std) and an average symmetric surface distance (ASD) of 0.994+/-0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method, which resulted in 0.845+/-0.031 DC and 1.556+/-0.444 mm ASD. We evaluated the validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in active learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
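
The uncertainty metric referred to above comes from Monte Carlo dropout: dropout layers are kept stochastic at inference time and the per-voxel predictive distribution is estimated from repeated forward passes. The PyTorch sketch below illustrates only that inference loop, with a toy network standing in for the paper's Bayesian U-Net.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Stand-in for the Bayesian U-Net: any network containing dropout layers."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.5),                      # sampled at test time too
            nn.Conv2d(16, n_classes, 1),
        )
    def forward(self, x):
        return self.features(x)

@torch.no_grad()
def mc_dropout_predict(model, image, n_samples=20):
    """Run n_samples stochastic passes; return mean softmax and an entropy map."""
    model.eval()
    for m in model.modules():                  # re-enable only the dropout layers
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()
    probs = torch.stack([torch.softmax(model(image), dim=1)
                         for _ in range(n_samples)]).mean(dim=0)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)   # uncertainty map
    return probs, entropy

model = TinySegNet()
img = torch.randn(1, 1, 64, 64)                # dummy CT slice
probs, uncertainty = mc_dropout_predict(model, img)
print(probs.shape, uncertainty.shape)          # (1, 3, 64, 64), (1, 64, 64)
```

High-entropy voxels flag likely segmentation failures, which is how the query-pixel selection for active learning can be driven.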

Approaches to uncovering cancer diagnostic and prognostic molecular signatures

  • Hong, Shengjun
  • Huang, Yi
  • Cao, Yaqiang
  • Chen, Xingwei
  • Han, Jing-Dong J
Molecular & Cellular Oncology 2014 Journal Article, cited 2 times
Website

Renal Cancer Cell Nuclei Detection from Cytological Images Using Convolutional Neural Network for Estimating Proliferation Rate

  • Hossain, Shamim
  • Jalab, Hamid A.
  • Zulfiqar, Fariha
  • Pervin, Mahfuza
Journal of Telecommunication, Electronic and Computer Engineering 2019 Journal Article, cited 0 times
Website
Cytological images play an essential role in monitoring the progress of cancer cell mutation, and the proliferation rate of the cancer cells is a prerequisite for cancer treatment. Accurately and quickly identifying the nuclei of abnormal cells and determining the correct proliferation rate is difficult, since it requires in-depth manual examination, observation and cell counting, which are tedious and time-consuming. The proposed method starts with segmentation, separating the background and object regions with K-means clustering. Small candidate regions containing cells are then detected automatically based on the output of a support vector machine. Sets of cell regions, whether overlapping or non-overlapping, are marked with selective search according to the local distance between the nucleus and the cell boundary. The selected segmented cell features are then used to learn normal and abnormal cell nuclei separately with a regional convolutional neural network. Finally, the proliferation rate in the invasive cancer area is calculated from the number of abnormal cells. A set of renal cancer cell cytological images was taken from the National Cancer Institute, USA, and this data set is available for research. Quantitative evaluation compares the accuracy of this method with that of other state-of-the-art cancer cell nuclei detection methods, and qualitative assessment is based on human observation. The proposed method detects renal cancer cell nuclei accurately and provides an automatic proliferation rate.

A Pipeline for Lung Tumor Detection and Segmentation from CT Scans Using Dilated Convolutional Neural Networks

  • Hossain, S
  • Najeeb, S
  • Shahriyar, A
  • Abdullah, ZR
  • Haque, MA
2019 Conference Proceedings, cited 0 times
Website
Lung cancer is the most prevalent cancer worldwide with about 230,000 new cases every year. Most cases go undiagnosed until it’s too late, especially in developing countries and remote areas. Early detection is key to beating cancer. Towards this end, the work presented here proposes an automated pipeline for lung tumor detection and segmentation from 3D lung CT scans from the NSCLC-Radiomics Dataset. It also presents a new dilated hybrid-3D convolutional neural network architecture for tumor segmentation. First, a binary classifier chooses CT scan slices that may contain parts of a tumor. To segment the tumors, the selected slices are passed to the segmentation model which extracts feature maps from each 2D slice using dilated convolutions and then fuses the stacked maps through 3D convolutions - incorporating the 3D structural information present in the CT scan volume into the output. Lastly, the segmentation masks are passed through a post-processing block which cleans them up through morphological operations. The proposed segmentation model outperformed other contemporary models like LungNet and U-Net. The average and median dice coefficient on the test set for the proposed model were 65.7% and 70.39% respectively. The next best model, LungNet had dice scores of 62.67% and 66.78%.
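
The hybrid design described above — dilated 2D convolutions applied per slice, followed by 3D convolutions that fuse the stacked feature maps — can be illustrated with the PyTorch sketch below. The class name, layer widths, dilation rates and depth are placeholders, not the published architecture.

```python
import torch
import torch.nn as nn

class DilatedHybrid3D(nn.Module):
    """Per-slice dilated 2D feature extraction followed by 3D fusion."""
    def __init__(self, in_ch=1, feat=16):
        super().__init__()
        self.slice_encoder = nn.Sequential(             # shared across slices
            nn.Conv2d(in_ch, feat, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.fuse3d = nn.Sequential(                     # mixes information across slices
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, 1, 1),                       # per-voxel tumour logit
        )

    def forward(self, volume):                           # volume: (B, 1, D, H, W)
        b, c, d, h, w = volume.shape
        slices = volume.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        feats = self.slice_encoder(slices)               # (B*D, F, H, W)
        feats = feats.reshape(b, d, -1, h, w).permute(0, 2, 1, 3, 4)
        return self.fuse3d(feats)                        # (B, 1, D, H, W)

net = DilatedHybrid3D()
print(net(torch.randn(1, 1, 8, 64, 64)).shape)           # torch.Size([1, 1, 8, 64, 64])
```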

Publishing descriptions of non-public clinical datasets: proposed guidance for researchers, repositories, editors and funding organisations

  • Hrynaszkiewicz, Iain
  • Khodiyar, Varsha
  • Hufton, Andrew L
  • Sansone, Susanna-Assunta
Research Integrity and Peer Review 2016 Journal Article, cited 8 times
Website

Performance of sparse-view CT reconstruction with multi-directional gradient operators

  • Hsieh, C. J.
  • Jin, S. C.
  • Chen, J. C.
  • Kuo, C. W.
  • Wang, R. T.
  • Chu, W. C.
PLoS One 2019 Journal Article, cited 0 times
Website
To further reduce the noise and artifacts in the reconstructed image of sparse-view CT, we have modified the traditional total variation (TV) methods, which only calculate the gradient variations in the x and y directions, and have proposed 8- and 26-directional (multi-directional) gradient operators for TV calculation to improve the quality of reconstructed images. Different from traditional TV methods, the proposed 8- and 26-directional gradient operators additionally consider the diagonal directions in the TV calculation. The proposed method preserves more information from the original tomographic data in the gradient-transform step and so obtains better reconstructed image quality. Our algorithms were tested using the two-dimensional Shepp-Logan phantom and three-dimensional clinical CT images. Results were evaluated using the root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and universal quality index (UQI). All the experimental results show that the sparse-view CT images reconstructed using the proposed 8- and 26-directional gradient operators are superior to those reconstructed by traditional TV methods. Qualitative and quantitative analyses indicate that the more directions the gradient operator has, the better the reconstructed images. The 8- and 26-directional gradient operators we proposed reduce noise and artifacts better than traditional TV methods, and they can be applied to and combined with existing CT reconstruction algorithms derived from CS theory to produce better image quality in sparse-view reconstruction.
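
A minimal numpy sketch of an 8-directional TV term for a 2D image is given below: the diagonal finite differences are added to the usual horizontal and vertical ones (the 26-directional operator is the analogous 3D construction). The per-direction weighting is an illustrative choice, not necessarily the authors'.

```python
import numpy as np

def tv_8_directional(img):
    """Anisotropic TV using 8 directional finite differences.

    Each neighbour pair is counted once (4 unique directions); diagonal steps
    are scaled by 1/sqrt(2) so that every direction measures change per unit
    distance. The normalisation is illustrative, not taken from the paper.
    """
    shifts = [((0, 1), 1.0),              # horizontal
              ((1, 0), 1.0),              # vertical
              ((1, 1), 1 / np.sqrt(2)),   # main diagonal
              ((1, -1), 1 / np.sqrt(2))]  # anti-diagonal
    tv = 0.0
    for (dy, dx), weight in shifts:
        diff = img - np.roll(img, shift=(dy, dx), axis=(0, 1))
        tv += weight * np.abs(diff).sum()
    return tv

phantom = np.zeros((64, 64))
phantom[20:44, 20:44] = 1.0               # simple piecewise-constant test image
print(tv_8_directional(phantom))
```

In an iterative CS-style reconstruction this term would replace the usual two-direction TV penalty while the data-fidelity term stays unchanged.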

Quantitative glioma grading using transformed gray-scale invariant textures of MRI

  • Hsieh, Kevin Li-Chun
  • Chen, Cheng-Yu
  • Lo, Chung-Ming
Computers in biology and medicine 2017 Journal Article, cited 8 times
Website

Computer-aided grading of gliomas based on local and global MRI features

  • Hsieh, Kevin Li-Chun
  • Lo, Chung-Ming
  • Hsiao, Chih-Jou
Computer methods and programs in biomedicine 2017 Journal Article, cited 13 times
Website

Effect of a computer-aided diagnosis system on radiologists' performance in grading gliomas with MRI

  • Hsieh, Kevin Li-Chun
  • Tsai, Ruei-Je
  • Teng, Yu-Chuan
  • Lo, Chung-Ming
PLoS One 2017 Journal Article, cited 0 times

Brain Tumor Segmentation Using Multi-Cascaded Convolutional Neural Networks and Conditional Random Field

  • Hu, Kai
  • Gan, Qinghai
  • Zhang, Yuan
  • Deng, Shuhua
  • Xiao, Fen
  • Huang, Wei
  • Cao, Chunhong
  • Gao, Xieping
IEEE Access 2019 Journal Article, cited 2 times
Website
Accurate segmentation of brain tumor is an indispensable component for cancer diagnosis and treatment. In this paper, we propose a novel brain tumor segmentation method based on multicascaded convolutional neural network (MCCNN) and fully connected conditional random fields (CRFs). The segmentation process mainly includes the following two steps. First, we design a multi-cascaded network architecture by combining the intermediate results of several connected components to take the local dependencies of labels into account and make use of multi-scale features for the coarse segmentation. Second, we apply CRFs to consider the spatial contextual information and eliminate some spurious outputs for the fine segmentation. In addition, we use image patches obtained from axial, coronal, and sagittal views to respectively train three segmentation models, and then combine them to obtain the final segmentation result. The validity of the proposed method is evaluated on three publicly available databases. The experimental results show that our method achieves competitive performance compared with the state-of-the-art approaches.

A neural network approach to lung nodule segmentation

  • Hu, Yaoxiu
  • Menon, Prahlad G
2016 Conference Proceedings, cited 1 times
Website

Development and validation of radiomic signatures of head and neck squamous cell carcinoma molecular features and subtypes

  • Huang, Chao
  • Cintra, Murilo
  • Brennan, Kevin
  • Zhou, Mu
  • Colevas, A Dimitrios
  • Fischbein, Nancy
  • Zhu, Shankuan
  • Gevaert, Olivier
EBioMedicine 2019 Journal Article, cited 1 times
Website
BACKGROUND: Radiomics-based non-invasive biomarkers are promising to facilitate the translation of therapeutically related molecular subtypes for treatment allocation of patients with head and neck squamous cell carcinoma (HNSCC). METHODS: We included 113 HNSCC patients from The Cancer Genome Atlas (TCGA-HNSCC) project. Molecular phenotypes analyzed were RNA-defined HPV status, five DNA methylation subtypes, four gene expression subtypes and five somatic gene mutations. A total of 540 quantitative image features were extracted from pre-treatment CT scans. Features were selected and used in a regularized logistic regression model to build binary classifiers for each molecular subtype. Models were evaluated using the average area under the Receiver Operator Characteristic curve (AUC) of a stratified 10-fold cross-validation procedure repeated 10 times. Next, an HPV model was trained with the TCGA-HNSCC, and tested on a Stanford cohort (N=53). FINDINGS: Our results show that quantitative image features are capable of distinguishing several molecular phenotypes. We obtained significant predictive performance for RNA-defined HPV+ (AUC=0.73), DNA methylation subtypes MethylMix HPV+ (AUC=0.79), non-CIMP-atypical (AUC=0.77) and Stem-like-Smoking (AUC=0.71), and mutation of NSD1 (AUC=0.73). We externally validated the HPV prediction model (AUC=0.76) on the Stanford cohort. When compared to clinical models, radiomic models were superior to subtypes such as NOTCH1 mutation and DNA methylation subtype non-CIMP-atypical while were inferior for DNA methylation subtype CIMP-atypical and NSD1 mutation. INTERPRETATION: Our study demonstrates that radiomics can potentially serve as a non-invasive tool to identify treatment-relevant subtypes of HNSCC, opening up the possibility for patient stratification, treatment allocation and inclusion in clinical trials. FUND: Dr. Gevaert reports grants from National Institute of Dental & Craniofacial Research (NIDCR) U01 DE025188, grants from National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health (NIBIB), R01 EB020527, grants from National Cancer Institute (NCI), U01 CA217851, during the conduct of the study; Dr. Huang and Dr. Zhu report grants from China Scholarship Council (Grant NO:201606320087), grants from China Medical Board Collaborating Program (Grant NO:15-216), the Cyrus Tang Foundation, and the Zhejiang University Education Foundation during the conduct of the study; Dr. Cintra reports grants from Sao Paulo State Foundation for Teaching and Research (FAPESP), during the conduct of the study.
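
The classifier construction reported here — a regularized logistic regression per molecular subtype, scored by the average AUC of a stratified 10-fold cross-validation repeated 10 times — maps onto standard scikit-learn components. Below is a minimal sketch with synthetic stand-in data; the L1 penalty and its strength are assumptions, since the paper only states that the regression was regularized.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(113, 540))                  # 540 CT radiomic features per patient
y = (rng.random(113) < 0.3).astype(int)          # e.g. RNA-defined HPV status (synthetic)

# An L1-penalised logistic regression doubles as feature selector and classifier.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=2000),
)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
aucs = cross_val_score(model, X, y, scoring="roc_auc", cv=cv)
print(f"mean AUC = {aucs.mean():.2f} +/- {aucs.std():.2f}")
```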

Image Super-Resolution Algorithm Based on an Improved Sparse Autoencoder

  • Huang, Detian
  • Huang, Weiqin
  • Yuan, Zhenguo
  • Lin, Yanming
  • Zhang, Jian
  • Zheng, Lixin
Information 2018 Journal Article, cited 0 times
Website

Assessment of a radiomic signature developed in a general NSCLC cohort for predicting overall survival of ALK-positive patients with different treatment types

  • Huang, Lyu
  • Chen, Jiayan
  • Hu, Weigang
  • Xu, Xinyan
  • Liu, Di
  • Wen, Junmiao
  • Lu, Jiayu
  • Cao, Jianzhao
  • Zhang, Junhua
  • Gu, Yu
  • Wang, Jiazhou
  • Fan, Min
Clinical lung cancer 2019 Journal Article, cited 0 times
Website
Objectives: To investigate the potential of a radiomic signature developed in a general NSCLC cohort for predicting the overall survival of ALK-positive patients with different treatment types. Methods: After test-retest in the RIDER dataset, 132 features (ICC>0.9) were selected in the LASSO Cox regression model with a leave-one-out cross-validation. The NSCLC Radiomics collection from TCIA was randomly divided into a training set (N=254) and a validation set (N=63) to develop a general radiomic signature for NSCLC. In our ALK+ set, 35 patients received targeted therapy and 19 patients received non-targeted therapy. The developed signature was tested later in this ALK+ set. Performance of the signature was evaluated with C-index and stratification analysis. Results: The general signature has good performance (C-index>0.6, log-rank p-value<0.05) in the NSCLC Radiomics collection. It includes five features: Geom_va_ratio, W_GLCM_LH_Std, W_GLCM_LH_DV, W_GLCM_HH_IM2 and W_his_HL_mean (Supplementary Table S2). Its accuracy of predicting overall survival in the ALK+ set achieved 0.649 (95%CI=0.640-0.658). Nonetheless, impaired performance was observed in the targeted therapy group (C-index=0.573, 95%CI=0.556-0.589) while significantly improved performance was observed in the non-targeted therapy group (C-index=0.832, 95%CI=0.832-0.852). Stratification analysis also showed that the general signature could only identify high- and low-risk patients in the non-targeted therapy group (log-rank p-value=0.00028). Conclusions: This preliminary study suggests that the applicability of a general signature to ALK-positive patients is limited. The general radiomic signature seems to be only applicable to ALK-positive patients who had received non-targeted therapy, which indicates that developing special radiomics signatures for patients treated with TKI might be necessary. Abbreviations: TCIA, The Cancer Imaging Archive; ALK, anaplastic lymphoma kinase; NSCLC, non-small cell lung cancer; EML4-ALK fusion, echinoderm microtubule-associated protein like 4-anaplastic lymphoma kinase fusion; C-index, concordance index; CI, confidence interval; ICC, intra-class correlation coefficient; OS, overall survival; LASSO, least absolute shrinkage and selection operator; EGFR, epidermal growth factor receptor; TKI, tyrosine-kinase inhibitor.
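
The signature-building recipe above — keep only test-retest-stable features (ICC > 0.9), fit a LASSO-penalised Cox model, and judge it by the concordance index — can be sketched with lifelines as follows. The synthetic data, the ICC values and the penalty strength are illustrative assumptions, and a recent lifelines release is assumed for the l1_ratio option.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
n, p = 254, 30                                     # training cohort, candidate features
X = pd.DataFrame(rng.normal(size=(n, p)),
                 columns=[f"feat_{i}" for i in range(p)])

# Step 1: keep only features that were stable in test-retest scans (ICC > 0.9).
icc = rng.uniform(0.5, 1.0, size=p)                # stand-in for RIDER test-retest ICCs
stable = X.columns[icc > 0.9]

# Step 2: LASSO Cox regression (l1_ratio=1 gives a pure L1 penalty).
df = X[stable].copy()
df["time"] = rng.exponential(scale=500, size=n)    # overall survival in days (synthetic)
df["event"] = (rng.random(n) < 0.7).astype(int)    # 1 = death observed

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="event")

# Step 3: concordance index of the fitted risk scores (higher risk = shorter survival).
risk = cph.predict_partial_hazard(df)
print("C-index:", concordance_index(df["time"], -risk, df["event"]))
```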

The Study on Data Hiding in Medical Images

  • Huang, Li-Chin
  • Tseng, Lin-Yu
  • Hwang, Min-Shiang
International Journal of Network Security 2012 Journal Article, cited 25 times
Website

A reversible data hiding method by histogram shifting in high quality medical images

  • Huang, Li-Chin
  • Tseng, Lin-Yu
  • Hwang, Min-Shiang
Journal of Systems and Software 2013 Journal Article, cited 60 times
Website

The Impact of Arterial Input Function Determination Variations on Prostate Dynamic Contrast-Enhanced Magnetic Resonance Imaging Pharmacokinetic Modeling: A Multicenter Data Analysis Challenge

  • Huang, Wei
  • Chen, Yiyi
  • Fedorov, Andriy
  • Li, Xia
  • Jajamovich, Guido H
  • Malyarenko, Dariya I
  • Aryal, Madhava P
  • LaViolette, Peter S
  • Oborski, Matthew J
  • O'Sullivan, Finbarr
Tomography: a journal for imaging research 2016 Journal Article, cited 21 times
Website

Variations of dynamic contrast-enhanced magnetic resonance imaging in evaluation of breast cancer therapy response: a multicenter data analysis challenge

  • Huang, W.
  • Li, X.
  • Chen, Y.
  • Li, X.
  • Chang, M. C.
  • Oborski, M. J.
  • Malyarenko, D. I.
  • Muzi, M.
  • Jajamovich, G. H.
  • Fedorov, A.
  • Tudorica, A.
  • Gupta, S. N.
  • Laymon, C. M.
  • Marro, K. I.
  • Dyvorne, H. A.
  • Miller, J. V.
  • Barbodiak, D. P.
  • Chenevert, T. L.
  • Yankeelov, T. E.
  • Mountz, J. M.
  • Kinahan, P. E.
  • Kikinis, R.
  • Taouli, B.
  • Fennessy, F.
  • Kalpathy-Cramer, J.
2014 Journal Article, cited 60 times
Website
Pharmacokinetic analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) time-course data allows estimation of quantitative parameters such as Ktrans (rate constant for plasma/interstitium contrast agent transfer), ve (extravascular extracellular volume fraction), and vp (plasma volume fraction). A plethora of factors in DCE-MRI data acquisition and analysis can affect accuracy and precision of these parameters and, consequently, the utility of quantitative DCE-MRI for assessing therapy response. In this multicenter data analysis challenge, DCE-MRI data acquired at one center from 10 patients with breast cancer before and after the first cycle of neoadjuvant chemotherapy were shared and processed with 12 software tools based on the Tofts model (TM), extended TM, and Shutter-Speed model. Inputs of tumor region of interest definition, pre-contrast T1, and arterial input function were controlled to focus on the variations in parameter value and response prediction capability caused by differences in models and associated algorithms. Considerable parameter variations were observed with the within-subject coefficient of variation (wCV) values for Ktrans and vp being as high as 0.59 and 0.82, respectively. Parameter agreement improved when only algorithms based on the same model were compared, e.g., the Ktrans intraclass correlation coefficient increased to as high as 0.84. Agreement in parameter percentage change was much better than that in absolute parameter value, e.g., the pairwise concordance correlation coefficient improved from 0.047 (for Ktrans) to 0.92 (for Ktrans percentage change) in comparing two TM algorithms. Nearly all algorithms provided good to excellent (univariate logistic regression c-statistic value ranging from 0.8 to 1.0) early prediction of therapy response using the metrics of mean tumor Ktrans and kep (= Ktrans/ve, intravasation rate constant) after the first therapy cycle and the corresponding percentage changes. The results suggest that the interalgorithm parameter variations are largely systematic, which are not likely to significantly affect the utility of DCE-MRI for assessment of therapy response.
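
For reference, the standard Tofts model underlying most of the compared tools expresses the tissue concentration curve as Ktrans times the convolution of the arterial input function with exp(-kep*t), where kep = Ktrans/ve. The sketch below forward-simulates that model and refits the parameters with least squares; the arterial input function, sampling grid and parameter values are synthetic stand-ins, not data from the challenge.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 6, 181)                       # minutes, ~2 s sampling
aif = 5.0 * (t ** 2) * np.exp(-t / 0.35)         # toy arterial input function Cp(t)

def tofts(t, ktrans, ve):
    """Standard Tofts model: Ct(t) = Ktrans * int_0^t Cp(u) exp(-kep (t-u)) du,
    with kep = Ktrans / ve, discretised as a Riemann sum."""
    kep = ktrans / ve
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(aif, kernel)[: len(t)] * dt

# Simulate a noisy tissue curve and recover the parameters.
truth = dict(ktrans=0.25, ve=0.40)               # /min, unitless
ct = tofts(t, **truth) + np.random.default_rng(0).normal(0, 0.01, size=t.size)
(ktrans_hat, ve_hat), _ = curve_fit(tofts, t, ct, p0=[0.1, 0.2],
                                    bounds=([1e-4, 1e-3], [2.0, 1.0]))
print(f"Ktrans ~ {ktrans_hat:.3f} /min, ve ~ {ve_hat:.3f}, "
      f"kep ~ {ktrans_hat / ve_hat:.3f} /min")
```

Differences between tools largely come down to how this inverse problem is solved (discretisation, optimizer, bounds) and which model variant is used, which is why percentage changes agree better than absolute values.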

Variations of dynamic contrast-enhanced magnetic resonance imaging in evaluation of breast cancer therapy response: a multicenter data analysis challenge

  • Huang, W.
  • Li, X.
  • Chen, Y.
  • Li, X.
  • Chang, M. C.
  • Oborski, M. J.
  • Malyarenko, D. I.
  • Muzi, M.
  • Jajamovich, G. H.
  • Fedorov, A.
  • Tudorica, A.
  • Gupta, S. N.
  • Laymon, C. M.
  • Marro, K. I.
  • Dyvorne, H. A.
  • Miller, J. V.
  • Barbodiak, D. P.
  • Chenevert, T. L.
  • Yankeelov, T. E.
  • Mountz, J. M.
  • Kinahan, P. E.
  • Kikinis, R.
  • Taouli, B.
  • Fennessy, F.
  • Kalpathy-Cramer, J.
2014 Dataset, cited 60 times
Website

Fast and Fully-Automated Detection and Segmentation of Pulmonary Nodules in Thoracic CT Scans Using Deep Convolutional Neural Networks

  • Huang, X.
  • Sun, W.
  • Tseng, T. B.
  • Li, C.
  • Qian, W.
Computerized Medical Imaging and Graphics 2019 Journal Article, cited 0 times
Website
Deep learning techniques have been extensively used in computerized pulmonary nodule analysis in recent years. Many reported studies still utilized hybrid methods for diagnosis, in which convolutional neural networks (CNNs) are used only as one part of the pipeline, and the whole system still needs either traditional image processing modules or human intervention to obtain final results. In this paper, we introduced a fast and fully-automated end-to-end system that can efficiently segment precise lung nodule contours from raw thoracic CT scans. Our proposed system has four major modules: candidate nodule detection with Faster regional-CNN (R-CNN), candidate merging, false positive (FP) reduction with a CNN, and nodule segmentation with a customized fully convolutional neural network (FCN). The entire system has no human interaction or database-specific design. The average runtime is about 16 s per scan on a standard workstation. The nodule detection accuracy is 91.4% and 94.6% with an average of 1 and 4 false positives (FPs) per scan. The average Dice coefficient of nodule segmentation compared to the ground truth is 0.793.

A longitudinal four‐dimensional computed tomography and cone beam computed tomography dataset for image‐guided radiation therapy research in lung cancer

  • Hugo, Geoffrey D
  • Weiss, Elisabeth
  • Sleeman, William C
  • Balik, Salim
  • Keall, Paul J
  • Lu, Jun
  • Williamson, Jeffrey F
Medical physics 2017 Journal Article, cited 8 times
Website
PURPOSE: To describe in detail a dataset consisting of serial four-dimensional computed tomography (4DCT) and 4D cone beam CT (4DCBCT) images acquired during chemoradiotherapy of 20 locally advanced, nonsmall cell lung cancer patients we have collected at our institution and shared publicly with the research community. ACQUISITION AND VALIDATION METHODS: As part of an NCI-sponsored research study 82 4DCT and 507 4DCBCT images were acquired in a population of 20 locally advanced nonsmall cell lung cancer patients undergoing radiation therapy. All subjects underwent concurrent radiochemotherapy to a total dose of 59.4-70.2 Gy using daily 1.8 or 2 Gy fractions. Audio-visual biofeedback was used to minimize breathing irregularity during all fractions, including acquisition of all 4DCT and 4DCBCT acquisitions in all subjects. Target, organs at risk, and implanted fiducial markers were delineated by a physician in the 4DCT images. Image coordinate system origins between 4DCT and 4DCBCT were manipulated in such a way that the images can be used to simulate initial patient setup in the treatment position. 4DCT images were acquired on a 16-slice helical CT simulator with 10 breathing phases and 3 mm slice thickness during simulation. In 13 of the 20 subjects, 4DCTs were also acquired on the same scanner weekly during therapy. Every day, 4DCBCT images were acquired on a commercial onboard CBCT scanner. An optically tracked external surrogate was synchronized with CBCT acquisition so that each CBCT projection was time stamped with the surrogate respiratory signal through in-house software and hardware tools. Approximately 2500 projections were acquired over a period of 8-10 minutes in half-fan mode with the half bow-tie filter. Using the external surrogate, the CBCT projections were sorted into 10 breathing phases and reconstructed with an in-house FDK reconstruction algorithm. Errors in respiration sorting, reconstruction, and acquisition were carefully identified and corrected. DATA FORMAT AND USAGE NOTES: 4DCT and 4DCBCT images are available in DICOM format and structures through DICOM-RT RTSTRUCT format. All data are stored in the Cancer Imaging Archive (TCIA, http://www.cancerimagingarchive.net/) as collection 4D-Lung and are publicly available. DISCUSSION: Due to high temporal frequency sampling, redundant (4DCT and 4DCBCT) data at similar timepoints, oversampled 4DCBCT, and fiducial markers, this dataset can support studies in image-guided and image-guided adaptive radiotherapy, assessment of 4D voxel trajectory variability, and development and validation of new tools for image registration and motion management.

Pulmonary nodule detection on computed tomography using neuro-evolutionary scheme

  • Huidrom, Ratishchandra
  • Chanu, Yambem Jina
  • Singh, Khumanthem Manglem
Signal, Image and Video Processing 2018 Journal Article, cited 0 times
Website

Radiomics of NSCLC: Quantitative CT Image Feature Characterization and Tumor Shrinkage Prediction

  • Hunter, Luke
2013 Thesis, cited 4 times
Website

Collage CNN for Renal Cell Carcinoma Detection from CT

  • Hussain, Mohammad Arafat
  • Amir-Khalili, Alborz
  • Hamarneh, Ghassan
  • Abugharbieh, Rafeef
2017 Conference Proceedings, cited 0 times
Website

Advanced MRI Techniques in the Monitoring of Treatment of Gliomas

  • Hyare, Harpreet
  • Thust, Steffi
  • Rees, Jeremy
Current treatment options in neurology 2017 Journal Article, cited 11 times
Website

Automatic MRI Breast tumor Detection using Discrete Wavelet Transform and Support Vector Machines

  • Ibraheem, Amira Mofreh
  • Rahouma, Kamel Hussein
  • Hamed, Hesham F. A.
2019 Conference Paper, cited 0 times
Website
Cancer is among the most serious diseases facing humans, and breast cancer in particular is one of the most dangerous cancers facing women, so periodic early examination and sensitive, effective diagnosis are essential to preserve women's lives. Among the various types of breast imaging, magnetic resonance imaging (MRI) has become one of the important modalities for breast cancer detection. In this work, a new method is proposed to detect breast cancer in MRI images that are preprocessed with a 2D median filter. Features are extracted from the images using the discrete wavelet transform (DWT) and reduced to 13 features, and a support vector machine (SVM) is then used to decide whether a tumor is present. Simulation results were obtained on MRI datasets extracted from the standard breast MRI database known as the "Reference Image Database to Evaluate Response (RIDER)". The proposed method achieved an accuracy of 98.03% on the available MRI database, with a processing time of 0.894 seconds for all steps. The obtained results demonstrate the superiority of the proposed system over existing ones in the literature.
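
The processing chain summarised above (2D median filtering, a discrete wavelet transform, a small feature vector, and an SVM decision) can be sketched with PyWavelets and scikit-learn as below. The wavelet, the statistics per sub-band and the synthetic data are assumptions; the paper only states that 13 features were retained.

```python
import numpy as np
import pywt
from scipy.ndimage import median_filter
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dwt_features(image):
    """Denoise with a 2D median filter, then summarise the DWT sub-bands.

    Three statistics per sub-band plus the filtered-image energy give a
    13-dimensional vector, mirroring the feature count reported above
    (the exact features used in the paper are not specified).
    """
    smoothed = median_filter(image, size=3)
    cA, (cH, cV, cD) = pywt.dwt2(smoothed, "db4")      # single-level 2D DWT
    feats = []
    for band in (cA, cH, cV, cD):
        feats.extend([band.mean(), band.std(), np.abs(band).sum()])
    feats.append(float(np.sum(smoothed ** 2)))
    return np.array(feats)

rng = np.random.default_rng(0)
images = rng.normal(size=(40, 64, 64))                 # stand-in MRI slices
labels = rng.integers(0, 2, size=40)                   # 1 = tumour present

X = np.stack([dwt_features(im) for im in images])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```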

Brain tumor segmentation in multi‐spectral MRI using convolutional neural networks (CNN)

  • Iqbal, Sajid
  • Ghani, M Usman
  • Saba, Tanzila
  • Rehman, Amjad
Microscopy research and technique 2018 Journal Article, cited 8 times
Website

A rotation and translation invariant method for 3D organ image classification using deep convolutional neural networks

  • Islam, Kh Tohidul
  • Wijewickrema, Sudanthi
  • O’Leary, Stephen
PeerJ Computer Science 2019 Journal Article, cited 0 times
Website
Three-dimensional (3D) medical image classification is useful in applications such as disease diagnosis and content-based medical image retrieval. It is a challenging task due to several reasons. First, image intensity values are vastly different depending on the image modality. Second, intensity values within the same image modality may vary depending on the imaging machine and artifacts may also be introduced in the imaging process. Third, processing 3D data requires high computational power. In recent years, significant research has been conducted in the field of 3D medical image classification. However, most of these make assumptions about patient orientation and imaging direction to simplify the problem and/or work with the full 3D images. As such, they perform poorly when these assumptions are not met. In this paper, we propose a method of classification for 3D organ images that is rotation and translation invariant. To this end, we extract a representative two-dimensional (2D) slice along the plane of best symmetry from the 3D image. We then use this slice to represent the 3D image and use a 20-layer deep convolutional neural network (DCNN) to perform the classification task. We show experimentally, using multi-modal data, that our method is comparable to existing methods when the assumptions of patient orientation and viewing direction are met. Notably, it shows similarly high accuracy even when these assumptions are violated, where other methods fail. We also explore how this method can be used with other DCNN models as well as conventional classification approaches.

Magnetic resonance image features identify glioblastoma phenotypic subtypes with distinct molecular pathway activities

  • Itakura, Haruka
  • Achrol, Achal S
  • Mitchell, Lex A
  • Loya, Joshua J
  • Liu, Tiffany
  • Westbroek, Erick M
  • Feroze, Abdullah H
  • Rodriguez, Scott
  • Echegaray, Sebastian
  • Azad, Tej D
Science translational medicine 2015 Journal Article, cited 90 times
Website

Quantitative imaging in radiation oncology: An emerging science and clinical service

  • Jaffray, DA
  • Chung, C
  • Coolens, C
  • Foltz, W
  • Keller, H
  • Menard, C
  • Milosevic, M
  • Publicover, J
  • Yeung, I
2015 Conference Proceedings, cited 9 times
Website

Prediction of Treatment Response to Neoadjuvant Chemotherapy for Breast Cancer via Early Changes in Tumor Heterogeneity Captured by DCE-MRI Registration

  • Jahani, Nariman
  • Cohen, Eric
  • Hsieh, Meng-Kang
  • Weinstein, Susan P
  • Pantalone, Lauren
  • Hylton, Nola
  • Newitt, David
  • Davatzikos, Christos
  • Kontos, Despina
Scientific Reports 2019 Journal Article, cited 0 times
Website
We analyzed DCE-MR images from 132 women with locally advanced breast cancer from the I-SPY1 trial to evaluate changes of intra-tumor heterogeneity for augmenting early prediction of pathologic complete response (pCR) and recurrence-free survival (RFS) after neoadjuvant chemotherapy (NAC). Utilizing image registration, voxel-wise changes including tumor deformations and changes in DCE-MRI kinetic features were computed to characterize heterogeneous changes within the tumor. Using five-fold cross-validation, logistic regression and Cox regression were performed to model pCR and RFS, respectively. The extracted imaging features were evaluated in augmenting established predictors, including functional tumor volume (FTV) and histopathologic and demographic factors, using the area under the curve (AUC) and the C-statistic as performance measures. The extracted voxel-wise features were also compared to analogous conventional aggregated features to evaluate the potential advantage of voxel-wise analysis. Voxel-wise features improved prediction of pCR (AUC = 0.78 (±0.03) vs 0.71 (±0.04), p < 0.05) and RFS (C-statistic = 0.76 (±0.05) vs 0.63 (±0.01), p < 0.05), while models based on analogous aggregate imaging features did not show appreciable performance changes (p > 0.05). Furthermore, all selected voxel-wise features demonstrated significant association with outcome (p < 0.05). Thus, precise measures of voxel-wise changes in tumor heterogeneity extracted from registered DCE-MRI scans can improve early prediction of neoadjuvant treatment outcomes in locally advanced breast cancer.

Genomic mapping and survival prediction in glioblastoma: molecular subclassification strengthened by hemodynamic imaging biomarkers

  • Jain, Rajan
  • Poisson, Laila
  • Narang, Jayant
  • Gutman, David
  • Scarpace, Lisa
  • Hwang, Scott N
  • Holder, Chad
  • Wintermark, Max
  • Colen, Rivka R
  • Kirby, Justin
Radiology 2013 Journal Article, cited 99 times
Website

Correlation of perfusion parameters with genes related to angiogenesis regulation in glioblastoma: a feasibility study

  • Jain, R
  • Poisson, L
  • Narang, J
  • Scarpace, L
  • Rosenblum, ML
  • Rempel, S
  • Mikkelsen, T
American Journal of Neuroradiology 2012 Journal Article, cited 39 times
Website

Outcome prediction in patients with glioblastoma by using imaging, clinical, and genomic biomarkers: focus on the nonenhancing component of the tumor

  • Jain, R.
  • Poisson, L. M.
  • Gutman, D.
  • Scarpace, L.
  • Hwang, S. N.
  • Holder, C. A.
  • Wintermark, M.
  • Rao, A.
  • Colen, R. R.
  • Kirby, J.
  • Freymann, J.
  • Jaffe, C. C.
  • Mikkelsen, T.
  • Flanders, A.
Radiology 2014 Journal Article, cited 86 times
Website
PURPOSE: To correlate patient survival with morphologic imaging features and hemodynamic parameters obtained from the nonenhancing region (NER) of glioblastoma (GBM), along with clinical and genomic markers. MATERIALS AND METHODS: An institutional review board waiver was obtained for this HIPAA-compliant retrospective study. Forty-five patients with GBM underwent baseline imaging with contrast material-enhanced magnetic resonance (MR) imaging and dynamic susceptibility contrast-enhanced T2*-weighted perfusion MR imaging. Molecular and clinical predictors of survival were obtained. Single and multivariable models of overall survival (OS) and progression-free survival (PFS) were explored with Kaplan-Meier estimates, Cox regression, and random survival forests. RESULTS: Worsening OS (log-rank test, P = .0103) and PFS (log-rank test, P = .0223) were associated with increasing relative cerebral blood volume of NER (rCBVNER), which was higher with deep white matter involvement (t test, P = .0482) and poor NER margin definition (t test, P = .0147). NER crossing the midline was the only morphologic feature of NER associated with poor survival (log-rank test, P = .0125). Preoperative Karnofsky performance score (KPS) and resection extent (n = 30) were clinically significant OS predictors (log-rank test, P = .0176 and P = .0038, respectively). No genomic alterations were associated with survival, except patients with high rCBVNER and wild-type epidermal growth factor receptor (EGFR) mutation had significantly poor survival (log-rank test, P = .0306; area under the receiver operating characteristic curve = 0.62). Combining resection extent with rCBVNER marginally improved prognostic ability (permutation, P = .084). Random forest models of presurgical predictors indicated rCBVNER as the top predictor; also important were KPS, age at diagnosis, and NER crossing the midline. A multivariable model containing rCBVNER, age at diagnosis, and KPS can be used to group patients with more than 1 year of difference in observed median survival (0.49-1.79 years). CONCLUSION: Patients with high rCBVNER and NER crossing the midline and those with high rCBVNER and wild-type EGFR mutation showed poor survival. In multivariable survival models, however, rCBVNER provided unique prognostic information that went above and beyond the assessment of all NER imaging features, as well as clinical and genomic features.

Outcome Prediction in Patients with Glioblastoma by Using Imaging, Clinical, and Genomic Biomarkers: Focus on the Nonenhancing Component of the Tumor

  • Jain, R.
  • Poisson, L. M.
  • Gutman, D.
  • Scarpace, L.
  • Hwang, S. N.
  • Holder, C. A.
  • Wintermark, M.
  • Rao, A.
  • Colen, R. R.
  • Kirby, J.
  • Freymann, J.
  • Jaffe, C. C.
  • Mikkelsen, T.
  • Flanders, A.
2014 Dataset, cited 86 times
Website

Integrative analysis of diffusion-weighted MRI and genomic data to inform treatment of glioblastoma

  • Jajamovich, Guido H
  • Valiathan, Chandni R
  • Cristescu, Razvan
  • Somayajula, Sangeetha
Journal of Neuro-Oncology 2016 Journal Article, cited 4 times
Website

Non-invasive tumor genotyping using radiogenomic biomarkers, a systematic review and oncology-wide pathway analysis

  • Jansen, Robin W
  • van Amstel, Paul
  • Martens, Roland M
  • Kooi, Irsan E
  • Wesseling, Pieter
  • de Langen, Adrianus J
  • Menke-Van der Houven, Catharina W
Oncotarget 2018 Journal Article, cited 0 times
Website

Deep Neural Network Based Classifier Model for Lung Cancer Diagnosis and Prediction System in Healthcare Informatics

  • Jayaraj, D.
  • Sathiamoorthy, S.
2019 Conference Paper, cited 0 times
Lung cancer is a major deadly disease that causes mortality through uncontrolled cell growth. This problem has increased the need, among both physicians and researchers, for efficient diagnosis models. A novel method for automated identification of lung nodules is therefore essential, and it forms the motivation of this study. This paper presents a new deep learning classification model for lung cancer diagnosis. The presented model involves four main steps, namely preprocessing, feature extraction, segmentation, and classification. A particle swarm optimization (PSO) algorithm is used for segmentation and a deep neural network (DNN) is applied for classification. The presented PSO-DNN model is tested on a set of sample lung images, and the results verify the effectiveness of the proposed model on all of the applied images.

Integrating Open Data on Cancer in Support to Tumor Growth Analysis

  • Jeanquartier, Fleur
  • Jean-Quartier, Claire
  • Schreck, Tobias
  • Cemernek, David
  • Holzinger, Andreas
2016 Conference Proceedings, cited 10 times
Website

Assessment of prostate cancer prognostic Gleason grade group using zonal-specific features extracted from biparametric MRI using a KNN classifier

  • Jensen, C.
  • Carl, J.
  • Boesen, L.
  • Langkilde, N. C.
  • Ostergaard, L. R.
J Appl Clin Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: To automatically assess the aggressiveness of prostate cancer (PCa) lesions using zonal-specific image features extracted from diffusion weighted imaging (DWI) and T2W MRI. METHODS: Region of interest was extracted from DWI (peripheral zone) and T2W MRI (transitional zone and anterior fibromuscular stroma) around the center of 112 PCa lesions from 99 patients. Image histogram and texture features, 38 in total, were used together with a k-nearest neighbor classifier to classify lesions into their respective prognostic Grade Group (GG) (proposed by the International Society of Urological Pathology 2014 consensus conference). A semi-exhaustive feature search was performed (1-6 features in each feature set) and validated using threefold stratified cross validation in a one-versus-rest classification setup. RESULTS: Classifying PCa lesions into GGs resulted in AUC of 0.87, 0.88, 0.96, 0.98, and 0.91 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5 for the peripheral zone, respectively. The results for transitional zone and anterior fibromuscular stroma were AUC of 0.85, 0.89, 0.83, 0.94, and 0.86 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5, respectively. CONCLUSION: This study showed promising results with reasonable AUC values for classification of all GG indicating that zonal-specific imaging features from DWI and T2W MRI can be used to differentiate between PCa lesions of various aggressiveness.
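
A small sketch of the k-nearest-neighbour, one-versus-rest grading setup described above, assuming precomputed histogram/texture features per lesion and threefold stratified cross-validation; the arrays are random placeholders and the neighbour count is an assumption:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(112, 38))     # placeholder zonal image features per lesion
y = rng.integers(0, 4, size=112)   # placeholder grade-group labels

cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=1)
proba = cross_val_predict(KNeighborsClassifier(n_neighbors=5), X, y, cv=cv, method="predict_proba")
print("one-vs-rest AUC:", roc_auc_score(y, proba, multi_class="ovr"))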

Lung nodule detection from CT scans using 3D convolutional neural networks without candidate selection

  • Jenuwine, Natalia M
  • Mahesh, Sunny N
  • Furst, Jacob D
  • Raicu, Daniela S
2018 Conference Proceedings, cited 0 times
Website

Computer-aided nodule detection and volumetry to reduce variability between radiologists in the interpretation of lung nodules at low-dose screening CT

  • Jeon, Kyung Nyeo
  • Goo, Jin Mo
  • Lee, Chang Hyun
  • Lee, Youkyung
  • Choo, Ji Yung
  • Lee, Nyoung Keun
  • Shim, Mi-Suk
  • Lee, In Sun
  • Kim, Kwang Gi
  • Gierada, David S
Investigative Radiology 2012 Journal Article, cited 51 times
Website

Fusion Radiomics Features from Conventional MRI Predict MGMT Promoter Methylation Status in Lower Grade Gliomas

  • Jiang, Chendan
  • Kong, Ziren
  • Liu, Sirui
  • Feng, Shi
  • Zhang, Yiwei
  • Zhu, Ruizhe
  • Chen, Wenlin
  • Wang, Yuekun
  • Lyu, Yuelei
  • You, Hui
  • Zhao, Dachun
  • Wang, Renzhi
  • Wang, Yu
  • Ma, Wenbin
  • Feng, Feng
Eur J Radiol 2019 Journal Article, cited 0 times
Website
PURPOSE: The methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter has been proven to be a prognostic and predictive biomarker for lower grade glioma (LGG). This study aims to build a radiomics model to preoperatively predict the MGMT promoter methylation status in LGG. METHOD: 122 pathology-confirmed LGG patients were retrospectively reviewed, with 87 local patients as the training dataset, and 35 from The Cancer Imaging Archive as independent validation. A total of 1702 radiomics features were extracted from three-dimensional contrast-enhanced T1 (3D-CE-T1)-weighted and T2-weighted MRI images, including 14 shape, 18 first order, 75 texture, and 744 wavelet features respectively. The radiomics features were selected with the least absolute shrinkage and selection operator algorithm, and prediction models were constructed with multiple classifiers. Models were evaluated using receiver operating characteristic (ROC) analysis. RESULTS: Five radiomics prediction models, namely, 3D-CE-T1-weighted single radiomics model, T2-weighted single radiomics model, fusion radiomics model, linear combination radiomics model, and clinical integrated model, were built. The fusion radiomics model, which was constructed from the concatenation of features from both series, displayed the best performance, with an accuracy of 0.849 and an area under the curve (AUC) of 0.970 (0.939-1.000) in the training dataset, and an accuracy of 0.886 and an AUC of 0.898 (0.786-1.000) in the validation dataset. Linear combination of the single radiomics models and integration of clinical factors did not improve performance. CONCLUSIONS: Conventional MRI radiomics models are reliable for predicting the MGMT promoter methylation status in LGG patients. The fusion of radiomics features from different series may increase the prediction performance.
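
A brief sketch of LASSO-style feature selection in the spirit of the workflow above, using an L1-penalized logistic regression on a standardized radiomic feature matrix; the data and penalty strength are placeholders, not the study's actual pipeline:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(87, 1702))    # placeholder radiomic features (training cohort size from the abstract)
y = rng.integers(0, 2, size=87)    # placeholder MGMT promoter methylation labels

model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
model.fit(X, y)
print("features retained by the L1 penalty:", int(np.sum(model[-1].coef_ != 0)))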

Enhancement of Deep Learning in Image Classification Performance Using Xception with the Swish Activation Function for Colorectal Polyp Preliminary Screening

  • Jinsakul, Natinai
  • Tsai, Cheng-Fa
  • Tsai, Chia-En
  • Wu, Pensee
Mathematics 2019 Journal Article, cited 0 times
One of the leading forms of cancer is colorectal cancer (CRC), which is responsible for increasing mortality in young people. The aim of this paper is to provide an experimental modification of the Xception deep learning model with the Swish activation function and to assess the possibility of developing a preliminary colorectal polyp screening system by training the proposed model on a colorectal topogram dataset with two and three classes. The results indicate that the proposed model can enhance the original convolutional neural network model, achieving classification accuracy of up to 98.99% for two classes and 91.48% for three classes. When tested on external images, the proposed method also improved prediction compared to the traditional method, with 99.63% accuracy for true prediction of two classes and 80.95% accuracy for true prediction of three classes.

Analysis of Vestibular Labyrinthine Geometry and Variation in the Human Temporal Bone

  • Johnson Chacko, Lejo
  • Schmidbauer, Dominik T
  • Handschuh, Stephan
  • Reka, Alen
  • Fritscher, Karl D
  • Raudaschl, Patrik
  • Saba, Rami
  • Handler, Michael
  • Schier, Peter P
  • Baumgarten, Daniel
Frontiers in Neuroscience 2018 Journal Article, cited 4 times
Website

Interactive 3D Virtual Colonoscopic Navigation For Polyp Detection From CT Images

  • Joseph, Jinu
  • Kumar, Rajesh
  • Chandran, Pournami S
  • Vidya, PV
Procedia Computer Science 2017 Journal Article, cited 0 times
Website

Cloud-based NoSQL open database of pulmonary nodules for computer-aided lung cancer diagnosis and reproducible research

  • Junior, José Raniery Ferreira
  • Oliveira, Marcelo Costa
  • de Azevedo-Marques, Paulo Mazzoncini
Journal of Digital Imaging 2016 Journal Article, cited 14 times
Website

Radiographic assessment of contrast enhancement and T2/FLAIR mismatch sign in lower grade gliomas: correlation with molecular groups

  • Juratli, Tareq A
  • Tummala, Shilpa S
  • Riedl, Angelika
  • Daubner, Dirk
  • Hennig, Silke
  • Penson, Tristan
  • Zolal, Amir
  • Thiede, Christian
  • Schackert, Gabriele
  • Krex, Dietmar
Journal of Neuro-Oncology 2018 Journal Article, cited 0 times
Website

Multicenter CT phantoms public dataset for radiomics reproducibility tests

  • Kalendralis, Petros
  • Traverso, Alberto
  • Shi, Zhenwei
  • Zhovannik, Ivan
  • Monshouwer, Rene
  • Starmans, Martijn P A
  • Klein, Stefan
  • Pfaehler, Elisabeth
  • Boellaard, Ronald
  • Dekker, Andre
  • Wee, Leonard
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: The aim of this paper is to describe a public, open-access, computed tomography (CT) phantom image set acquired at three centers and collected especially for radiomics reproducibility research. The dataset is useful to test radiomic feature reproducibility with respect to various parameters, such as acquisition settings, scanners, and reconstruction algorithms. ACQUISITION AND VALIDATION METHODS: Three phantoms were scanned in three independent institutions. Images of the following phantoms were acquired: Catphan 700 and COPDGene Phantom II (Phantom Laboratory, Greenwich, NY, USA), and the Triple modality 3D Abdominal Phantom (CIRS, Norfolk, VA, USA). Data were collected at three Dutch medical centers: MAASTRO Clinic (Maastricht, NL), Radboud University Medical Center (Nijmegen, NL), and University Medical Center Groningen (Groningen, NL), with scanners from two different manufacturers, Siemens Healthcare and Philips Healthcare. The following acquisition parameters were varied in the phantom scans: slice thickness, reconstruction kernels, and tube current. DATA FORMAT AND USAGE NOTES: We made the dataset publicly available on the Dutch instance of "Extensible Neuroimaging Archive Toolkit-XNAT" (https://xnat.bmia.nl). The dataset is freely available and reusable with attribution (Creative Commons 3.0 license). POTENTIAL APPLICATIONS: Our goal was to provide a findable, open-access, annotated, and reusable CT phantom dataset for radiomics reproducibility studies. Reproducibility testing and harmonization are fundamental requirements for wide generalizability of radiomics-based clinical prediction models. It is highly desirable to include only reproducible features in models, to be more assured of external validity across hitherto unseen contexts. In this view, phantom data from different centers represent a valuable source of information to exclude CT radiomic features that may already be unstable with respect to simplified structures and tightly controlled scan settings. The intended extension of our shared dataset is to include other modalities and phantoms with more realistic lesion simulations.

Radiomics of Lung Nodules: A Multi-Institutional Study of Robustness and Agreement of Quantitative Imaging Features

  • Kalpathy-Cramer, J.
  • Mamomov, A.
  • Zhao, B.
  • Lu, L.
  • Cherezov, D.
  • Napel, S.
  • Echegaray, S.
  • Rubin, D.
  • McNitt-Gray, M.
  • Lo, P.
  • Sieren, J. C.
  • Uthoff, J.
  • Dilger, S. K.
  • Driscoll, B.
  • Yeung, I.
  • Hadjiiski, L.
  • Cha, K.
  • Balagurunathan, Y.
  • Gillies, R.
  • Goldgof, D.
Tomography: a journal for imaging research 2016 Journal Article, cited 19 times
Website

QIN multi-site collection of Lung CT data with Nodule Segmentations

  • Kalpathy-Cramer, Jayashree
  • Napel, Sandy
  • Goldgof, Dmitry B
  • Zhao, Binsheng
2015 Dataset, cited 0 times

A Comparison of Lung Nodule Segmentation Algorithms: Methods and Results from a Multi-institutional Study

  • Kalpathy-Cramer, Jayashree
  • Zhao, Binsheng
  • Goldgof, Dmitry
  • Gu, Yuhua
  • Wang, Xingwei
  • Yang, Hao
  • Tan, Yongqiang
  • Gillies, Robert
  • Napel, Sandy
Journal of Digital Imaging 2016 Journal Article, cited 18 times
Website

A low cost approach for brain tumor segmentation based on intensity modeling and 3D Random Walker

  • Kanas, Vasileios G
  • Zacharaki, Evangelia I
  • Davatzikos, Christos
  • Sgarbas, Kyriakos N
  • Megalooikonomou, Vasileios
Biomedical Signal Processing and Control 2015 Journal Article, cited 15 times
Website

Learning MRI-based classification models for MGMT methylation status prediction in glioblastoma

  • Kanas, Vasileios G
  • Zacharaki, Evangelia I
  • Thomas, Ginu A
  • Zinn, Pascal O
  • Megalooikonomou, Vasileios
  • Colen, Rivka R
Computer Methods and Programs in Biomedicine 2017 Journal Article, cited 16 times
Website

Neurosense: deep sensing of full or near-full coverage head/brain scans in human magnetic resonance imaging

  • Kanber, B.
  • Ruffle, J.
  • Cardoso, J.
  • Ourselin, S.
  • Ciccarelli, O.
Neuroinformatics 2019 Journal Article, cited 0 times
Website
The application of automated algorithms to imaging requires knowledge of its content, a curatorial task, for which we ordinarily rely on the Digital Imaging and Communications in Medicine (DICOM) header as the only source of image meta-data. However, identifying brain MRI scans that have full or near-full coverage among a large number (e.g. >5000) of scans comprising both head/brain and other body parts is a time-consuming task that cannot be automated with the use of the information stored in the DICOM header attributes alone. Depending on the clinical scenario, an entire set of scans acquired in a single visit may often be labelled “BRAIN” in the DICOM field 0018,0015 (Body Part Examined), while the individual scans will often not only include brain scans with full coverage, but also others with partial brain coverage, scans of the spinal cord, and in some cases other body parts.
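
For context, the DICOM attribute mentioned above can be read with pydicom as sketched here (the file path is hypothetical); the point of the paper is precisely that this header value alone is not enough to establish brain coverage:

import pydicom

# Hypothetical file path; (0018,0015) "Body Part Examined" is often just "BRAIN"
# for an entire visit, regardless of the actual coverage of each series.
ds = pydicom.dcmread("series_0001.dcm", stop_before_pixels=True)
print(ds.get("BodyPartExamined", "UNKNOWN"), "|", ds.get("SeriesDescription", ""))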

3D multi-view convolutional neural networks for lung nodule classification

  • Kang, Guixia
  • Liu, Kui
  • Hou, Beibei
  • Zhang, Ningbo
PLoS One 2017 Journal Article, cited 7 times
Website

Public data and open source tools for multi-assay genomic investigation of disease

  • Kannan, Lavanya
  • Ramos, Marcel
  • Re, Angela
  • El-Hachem, Nehme
  • Safikhani, Zhaleh
  • Gendoo, Deena MA
  • Davis, Sean
  • Gomez-Cabrero, David
  • Castelo, Robert
  • Hansen, Kasper D
Briefings in Bioinformatics 2015 Journal Article, cited 28 times
Website

Radiogenomic correlation for prognosis in patients with glioblastoma multiformae

  • Karnayana, Pallavi Machaiah
2013 Thesis, cited 0 times
Website

Identification of Tumor area from Brain MR Image

  • Kasım, Ömer
  • Kuzucuoğlu, Ahmet Emin
2016 Conference Proceedings, cited 1 times
Website

Mediator: A data sharing synchronization platform for heterogeneous medical image archives

  • Kathiravelu, Pradeeban
  • Sharma, Ashish
2015 Conference Proceedings, cited 4 times
Website

On-demand big data integration

  • Kathiravelu, Pradeeban
  • Sharma, Ashish
  • Galhardas, Helena
  • Van Roy, Peter
  • Veiga, Luís
Distributed and Parallel Databases 2018 Journal Article, cited 2 times
Website

“Radiotranscriptomics”: A synergy of imaging and transcriptomics in clinical assessment

  • Katrib, Amal
  • Hsu, William
  • Bui, Alex
  • Xing, Yi
Quantitative Biology 2016 Journal Article, cited 0 times

A joint intensity and edge magnitude-based multilevel thresholding algorithm for the automatic segmentation of pathological MR brain images

  • Kaur, Taranjit
  • Saini, Barjinder Singh
  • Gupta, Savita
Neural Computing and Applications 2016 Journal Article, cited 1 times
Website

ECM-CSD: An Efficient Classification Model for Cancer Stage Diagnosis in CT Lung Images Using FCM and SVM Techniques

  • Kavitha, MS
  • Shanthini, J
  • Sabitha, R
Journal of Medical Systems 2019 Journal Article, cited 0 times
Website

ECIDS-Enhanced Cancer Image Diagnosis and Segmentation Using Artificial Neural Networks and Active Contour Modelling

  • Kavitha, M. S.
  • Shanthini, J.
  • Bhavadharini, R. M.
Journal of Medical Imaging and Health Informatics 2020 Journal Article, cited 0 times
Image processing techniques are extensively utilized in medical image diagnosis, particularly for the early detection and treatment of cancer. Image quality and accuracy are the significant factors to be considered while analyzing images for cancer diagnosis. With that in mind, this paper develops an Enhanced Cancer Image Diagnosis and Segmentation (ECIDS) framework for effective detection and segmentation of lung cancer cells. Initially, the computed tomography (CT) lung image is denoised using a kernel-based global denoising function. The noise-free lung images are then passed to feature extraction, and the images are classified into normal and abnormal classes using feed-forward artificial neural network classification. The classified lung cancer images are subsequently segmented using active contour modelling with reduced gradient, and the segmented cancer images are passed on for further medical processing. The framework is evaluated in MATLAB using the clinical LIDC-IDRI lung CT dataset, and the results are analyzed and discussed with respect to performance evaluation metrics such as energy, entropy, correlation, and homogeneity, which contribute to effective classification.

Radiological Atlas for Patient Specific Model Generation

  • Kawa, Jacek
  • Juszczyk, Jan
  • Pyciński, Bartłomiej
  • Badura, Paweł
  • Pietka, Ewa
2014 Book Section, cited 11 times
Website

Supervised Dimension-Reduction Methods for Brain Tumor Image Data Analysis

  • Kawaguchi, Atsushi
2017 Book Section, cited 1 times
Website

eFis: A Fuzzy Inference Method for Predicting Malignancy of Small Pulmonary Nodules

  • Kaya, Aydın
  • Can, Ahmet Burak
2014 Book Section, cited 3 times
Website

Malignancy prediction by using characteristic-based fuzzy sets: A preliminary study

  • Kaya, Aydin
  • Can, Ahmet Burak
2015 Conference Proceedings, cited 0 times
Website

Computer-aided detection of brain tumors using image processing techniques

  • Kazdal, Seda
  • Dogan, Buket
  • Camurcu, Ali Yilmaz
2015 Conference Proceedings, cited 3 times
Website

Preliminary Detection and Analysis of Lung Cancer on CT images using MATLAB: A Cost-effective Alternative

  • Khan, Md Daud Hossain
  • Ahmed, Mansur
  • Bach, Christian
Journal of Biomedical Engineering and Medical Imaging 2016 Journal Article, cited 0 times

Zonal Segmentation of Prostate T2W-MRI using Atrous Convolutional Neural Network

  • Khan, Zia
  • Yahya, Norashikin
  • Alsaih, Khaled
  • Meriaudeau, Fabrice
2019 Conference Paper, cited 0 times
The number of prostate cancer cases is steadily increasing, especially with the growing ageing population. The reported 5-year relative survival rate for men with stage 1 prostate cancer is almost 99%; hence, early detection will significantly improve treatment planning and increase the survival rate. Magnetic resonance imaging (MRI) is a common imaging modality for the diagnosis of prostate cancer. MRI provides good visualization of soft tissue and enables better lesion detection and staging of prostate cancer. The main challenge of whole-gland prostate segmentation is the blurry boundary between the central gland (CG) and the peripheral zone (PZ), which complicates differential diagnosis, since the occurrence and characteristics of cancer differ substantially between the two zones. To enhance the diagnosis of the prostate gland, we implemented the DeepLabV3+ semantic segmentation approach to segment the prostate into zones. DeepLabV3+ achieves strong segmentation results on prostate MRI by applying several parallel atrous convolutions with different rates. The CNN-based semantic segmentation approach is trained and tested on the NCI-ISBI 1.5T and 3T MRI dataset consisting of 40 patients. Performance evaluation based on the Dice similarity coefficient (DSC) of the DeepLab-based segmentation is compared with two other CNN-based semantic segmentation techniques: FCN and PSNet. Results show that prostate segmentation using DeepLabV3+ performs better than FCN and PSNet, with average DSC of 70.3% in the PZ and 88% in the CG. This indicates the significant contribution made by the atrous convolution layers in producing better prostate segmentation results.

3D convolution neural networks for molecular subtype prediction in glioblastoma multiforme

  • Khened, Mahendra
  • Anand, Vikas Kumar
  • Acharya, Gagan
  • Shah, Nameeta
  • Krishnamurthi, Ganapathy
2019 Conference Proceedings, cited 0 times
Website

Prediction of 1p/19q Codeletion in Diffuse Glioma Patients Using Preoperative Multiparametric Magnetic Resonance Imaging

  • Kim, Donnie
  • Wang, Nicholas C
  • Ravikumar, Visweswaran
  • Raghuram, DR
  • Li, Jinju
  • Patel, Ankit
  • Wendt, Richard E
  • Rao, Ganesh
  • Rao, Arvind
Frontiers in Computational Neuroscience 2019 Journal Article, cited 0 times

Associations between gene expression profiles of invasive breast cancer and Breast Imaging Reporting and Data System MRI lexicon

  • Kim, Ga Ram
  • Ku, You Jin
  • Cho, Soon Gu
  • Kim, Sei Joong
  • Min, Byung Soh
Annals of Surgical Treatment and Research 2017 Journal Article, cited 3 times
Website

Modification of population based arterial input function to incorporate individual variation

  • Kim, Harrison
Magnetic Resonance Imaging 2018 Journal Article, cited 2 times
Website

Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities

  • Kim, Incheol
  • Rajaraman, Sivaramakrishnan
  • Antani, Sameer
Diagnostics (Basel) 2019 Journal Article, cited 0 times
Website
Deep learning (DL) methods are increasingly being applied for developing reliable computer-aided detection (CADe), diagnosis (CADx), and information retrieval algorithms. However, challenges in interpreting and explaining the learned behavior of the DL models hinders their adoption and use in real-world systems. In this study, we propose a novel method called “Class-selective Relevance Mapping” (CRM) for localizing and visualizing discriminative regions of interest (ROI) within a medical image. Such visualizations offer improved explanation of the convolutional neural network (CNN)-based DL model predictions. We demonstrate CRM effectiveness in classifying medical imaging modalities toward automatically labeling them for visual information retrieval applications. The CRM is based on linear sum of incremental mean squared errors (MSE) calculated at the output layer of the CNN model. It measures both positive and negative contributions of each spatial element in the feature maps produced from the last convolution layer leading to correct classification of an input image. A series of experiments on a “multi-modality” CNN model designed for classifying seven different types of image modalities shows that the proposed method is significantly better in detecting and localizing the discriminative ROIs than other state of the art class-activation methods. Further, to visualize its effectiveness we generate “class-specific” ROI maps by averaging the CRM scores of images in each modality class, and characterize the visual explanation through their different size, shape, and location for our multi-modality CNN model that achieved over 98% performance on a dataset constructed from publicly available images.

Training of deep convolutional neural nets to extract radiomic signatures of tumors

  • Kim, J.
  • Seo, S.
  • Ashrafinia, S.
  • Rahmim, A.
  • Sossi, V.
  • Klyuzhin, I.
Journal of Nuclear Medicine 2019 Journal Article, cited 0 times
Website
Objectives: Radiomics-based analysis of FDG PET images has been shown to improve the assessment and prediction of tumor growth rate, response to treatment and other patient outcomes [1]. An alternative new approach to image analysis involves the use of convolutional neural networks (CNNs), wherein relevant image features are learned implicitly and automatically in the process of network training [2]; this is in contrast to radiomics analyses, where the features are “hand-crafted” and are explicitly computed (EC). Although CNNs represent a more general approach, it is not clear whether the implicitly learned features may include, or have the ability to include, radiomics features (RFs) as a subset. If this is the case, CNN-based approaches may eventually obviate the use of EC RFs. Further, the use of CNNs instead of RFs may completely eliminate the need for feature selection and tumor delineation, enabling high-throughput data analyses. Thus, our objective was to test whether CNNs can learn to act similarly to several commonly used RFs. Using a set of simulated and real FDG PET images of tumors, we train the CNNs to estimate the values of RFs from the images without the explicit computation. We then compare the values of the CNN-estimated and EC features. Methods: Using a stochastic volumetric model for tumor growth, 2000 FDG images of tumors confined to a bounding box (BB) were simulated (40x40x40 voxels, voxel size 2.0 mm), and 10 RFs (3 x morphology, 4 x intensity histogram, 3 x texture features) were computed for each image using the SERA library [3] (compliant with the Image Biomarker Standardization Initiative, IBSI [4]). A 3D CNN with 4 convolutional layers, and a total of 164 filters, was implemented in Python using the Keras library with TensorFlow backend (https://www.keras.io). The mean absolute error was the optimized loss function. The CNN was trained to automatically estimate the values of each of the 10 RFs for each image; 1900 images were used for training, and 100 were used for testing, to compare the CNN-estimated values to the EC feature values. We also used a secondary test set comprising 133 real tumor images, obtained from the head and neck PET/CT imaging study [5] publicly available at the Cancer Imaging Archive. The tumors were cropped to a BB, and the images were resampled to yield an image size similar to the simulated image set. Results: After the training procedure, on the simulated test set the CNN was able to estimate the values of most EC RFs with 10-20% error (relative to the range). In the morphology group, the errors were 3.8% for volume, 12.0% for compactness, and 15.7% for flatness. In the intensity group, the errors were 13.7% for the mean, 15.4% for variance, 12.3% for skewness, and 13.1% for kurtosis. In the texture group, the error was 10.6% for GLCM contrast, 13.4% for cluster tendency, and 21.7% for angular momentum. With all features, the differences between the CNN-estimated and EC feature values were statistically insignificant (two-sample t-test), and the correlation between the feature values was highly significant (p<0.01). On the real image test set, we observed higher error rates, on the order of 20-30%; however, with all but one feature (angular momentum), there was a significant correlation between the CNN-estimated and EC features (p<0.01). Conclusions: Our results suggest that CNNs can be trained to act similarly to several widely used RFs. While the accuracy of CNN-based estimates varied between the features, in general, the CNN showed a good propensity for learning. Thus, it is likely that with more complex network architectures and training data, features can be estimated more accurately. While a greater number of RFs need to be similarly tested in the future, these initial experiments provide first evidence that, given sufficient quality and quantity of training data, CNNs indeed represent a more general approach to feature extraction, and may potentially replace radiomics-based analyses without compromising descriptive thoroughness.
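
A minimal Keras sketch in the spirit of the setup described above: a small 3D CNN regressing a single radiomic-feature value from a 40x40x40 patch with a mean-absolute-error loss. The architecture and layer sizes are illustrative assumptions, not the authors' exact network:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(40, 40, 40, 1)),          # one-channel 40x40x40 tumor patch
    layers.Conv3D(16, 3, activation="relu"),
    layers.MaxPooling3D(2),
    layers.Conv3D(32, 3, activation="relu"),
    layers.MaxPooling3D(2),
    layers.GlobalAveragePooling3D(),
    layers.Dense(1),                              # regressed radiomic-feature value
])
model.compile(optimizer="adam", loss="mean_absolute_error")
# model.fit(train_patches, train_feature_values, validation_data=...)  # hypothetical arrays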

Influence of segmentation margin on machine learning–based high-dimensional quantitative CT texture analysis: a reproducibility study on renal clear cell carcinomas

  • Kocak, Burak
  • Ates, Ece
  • Durmaz, Emine Sebnem
  • Ulusan, Melis Baykara
  • Kilickesmez, Ozgur
European Radiology 2019 Journal Article, cited 0 times
Website

Unenhanced CT Texture Analysis of Clear Cell Renal Cell Carcinomas: A Machine Learning-Based Study for Predicting Histopathologic Nuclear Grade

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Ates, Ece
  • Kaya, Ozlem Korkmaz
  • Kilickesmez, Ozgur
American Journal of Roentgenology 2019 Journal Article, cited 0 times
Website
OBJECTIVE: The purpose of this study is to investigate the predictive performance of machine learning (ML)-based unenhanced CT texture analysis in distinguishing low (grades I and II) and high (grades III and IV) nuclear grade clear cell renal cell carcinomas (RCCs). MATERIALS AND METHODS: For this retrospective study, 81 patients with clear cell RCC (56 high and 25 low nuclear grade) were included from a public database. Using 2D manual segmentation, 744 texture features were extracted from unenhanced CT images. Dimension reduction was done in three consecutive steps: reproducibility analysis by two radiologists, collinearity analysis, and feature selection. Models were created using artificial neural network (ANN) and binary logistic regression, with and without synthetic minority oversampling technique (SMOTE), and were validated using 10-fold cross-validation. The reference standard was histopathologic nuclear grade (low vs high). RESULTS: Dimension reduction steps yielded five texture features for the ANN and six for the logistic regression algorithm. None of clinical variables was selected. ANN alone and ANN with SMOTE correctly classified 81.5% and 70.5%, respectively, of clear cell RCCs, with AUC values of 0.714 and 0.702, respectively. The logistic regression algorithm alone and with SMOTE correctly classified 75.3% and 62.5%, respectively, of the tumors, with AUC values of 0.656 and 0.666, respectively. The ANN performed better than the logistic regression (p < 0.05). No statistically significant difference was present between the model performances created with and without SMOTE (p > 0.05). CONCLUSION: ML-based unenhanced CT texture analysis using ANN can be a promising noninvasive method in predicting the nuclear grade of clear cell RCCs.
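
A compact sketch of the cross-validated ANN-with-SMOTE comparison described above, using imbalanced-learn so that oversampling is applied only inside each training fold; the feature matrix and class sizes are placeholders taken loosely from the abstract:

import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(81, 5))                                   # placeholder selected texture features
y = np.r_[np.zeros(25, dtype=int), np.ones(56, dtype=int)]     # low vs high nuclear grade counts

pipe = Pipeline([("smote", SMOTE(random_state=0)),
                 ("ann", MLPClassifier(max_iter=2000, random_state=0))])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print("AUC:", cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc").mean())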

Radiogenomics of lower-grade gliomas: machine learning-based MRI texture analysis for predicting 1p/19q codeletion status

  • Kocak, B.
  • Durmaz, E. S.
  • Ates, E.
  • Sel, I.
  • Turgut Gunes, S.
  • Kaya, O. K.
  • Zeynalova, A.
  • Kilickesmez, O.
Eur Radiol 2019 Journal Article, cited 0 times
Website
OBJECTIVE: To evaluate the potential value of the machine learning (ML)-based MRI texture analysis for predicting 1p/19q codeletion status of lower-grade gliomas (LGG), using various state-of-the-art ML algorithms. MATERIALS AND METHODS: For this retrospective study, 107 patients with LGG were included from a public database. Texture features were extracted from conventional T2-weighted and contrast-enhanced T1-weighted MRI images, using LIFEx software. Training and unseen validation splits were created using stratified 10-fold cross-validation technique along with minority over-sampling. Dimension reduction was done using collinearity analysis and feature selection (ReliefF). Classifications were done using adaptive boosting, k-nearest neighbours, naive Bayes, neural network, random forest, stochastic gradient descent, and support vector machine. Friedman test and pairwise post hoc analyses were used for comparison of classification performances based on the area under the curve (AUC). RESULTS: Overall, the predictive performance of the ML algorithms were statistically significantly different, chi2(6) = 26.7, p < 0.001. There was no statistically significant difference among the performance of the neural network, naive Bayes, support vector machine, random forest, and stochastic gradient descent, adjusted p > 0.05. The mean AUC and accuracy values of these five algorithms ranged from 0.769 to 0.869 and from 80.1 to 84%, respectively. The neural network had the highest mean rank with mean AUC and accuracy values of 0.869 and 83.8%, respectively. CONCLUSIONS: The ML-based MRI texture analysis might be a promising non-invasive technique for predicting the 1p/19q codeletion status of LGGs. Using this technique along with various ML algorithms, more than four-fifths of the LGGs can be correctly classified. KEY POINTS: * More than four-fifths of the lower-grade gliomas can be correctly classified with machine learning-based MRI texture analysis. Satisfying classification outcomes are not limited to a single algorithm. * A few-slice-based volumetric segmentation technique would be a valid approach, providing satisfactory predictive textural information and avoiding excessive segmentation duration in clinical practice. * Feature selection is sensitive to different patient data set samples so that each sampling leads to the selection of different feature subsets, which needs to be considered in future works.

Reliability of Single-Slice–Based 2D CT Texture Analysis of Renal Masses: Influence of Intra- and Interobserver Manual Segmentation Variability on Radiomic Feature Reproducibility

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Kaya, Ozlem Korkmaz
  • Ates, Ece
  • Kilickesmez, Ozgur
AJR Am J Roentgenol 2019 Journal Article, cited 0 times
Website
OBJECTIVE. The objective of our study was to investigate the potential influence of intra- and interobserver manual segmentation variability on the reliability of single-slice-based 2D CT texture analysis of renal masses. MATERIALS AND METHODS. For this retrospective study, 30 patients with clear cell renal cell carcinoma were included from a public database. For intra- and interobserver analyses, three radiologists with varying degrees of experience segmented the tumors from unenhanced CT and corticomedullary phase contrast-enhanced CT (CECT) in different sessions. Each radiologist was blind to the image slices selected by other radiologists and him- or herself in the previous session. A total of 744 texture features were extracted from original, filtered, and transformed images. The intraclass correlation coefficient was used for reliability analysis. RESULTS. In the intraobserver analysis, the rates of features with good to excellent reliability were 84.4-92.2% for unenhanced CT and 85.5-93.1% for CECT. Considering the mean rates of unenhanced CT and CECT, having high experience resulted in better reliability rates in terms of the intraobserver analysis. In the interobserver analysis, the rates were 76.7% for unenhanced CT and 84.9% for CECT. The gray-level cooccurrence matrix and first-order feature groups yielded higher good to excellent reliability rates on both unenhanced CT and CECT. Filtered and transformed images resulted in more features with good to excellent reliability than the original images did on both unenhanced CT and CECT. CONCLUSION. Single-slice-based 2D CT texture analysis of renal masses is sensitive to intra- and interobserver manual segmentation variability. Therefore, it may lead to nonreproducible results in radiomic analysis unless a reliability analysis is considered in the workflow.

Machine learning-based unenhanced CT texture analysis for predicting BAP1 mutation status of clear cell renal cell carcinomas

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Kaya, Ozlem Korkmaz
  • Kilickesmez, Ozgur
Acta Radiol 2019 Journal Article, cited 0 times
Website
BACKGROUND: BRCA1-associated protein 1 (BAP1) mutation is an unfavorable factor for overall survival in patients with clear cell renal cell carcinoma (ccRCC). Radiomics literature about BAP1 mutation lacks papers that consider the reliability of texture features in their workflow. PURPOSE: Using texture features with a high inter-observer agreement, we aimed to develop and internally validate a machine learning-based radiomic model for predicting the BAP1 mutation status of ccRCCs. MATERIALS AND METHODS: For this retrospective study, 65 ccRCCs were included from a public database. Texture features were extracted from unenhanced computed tomography (CT) images, using two-dimensional manual segmentation. Dimension reduction was done in three steps: (i) inter-observer agreement analysis; (ii) collinearity analysis; and (iii) feature selection. The machine learning classifier was random forest. The model was validated using 10-fold nested cross-validation. The reference standard was the BAP1 mutation status. RESULTS: Out of 744 features, 468 had an excellent inter-observer agreement. After the collinearity analysis, the number of features decreased to 17. Finally, the wrapper-based algorithm selected six features. Using selected features, the random forest correctly classified 84.6% of the labelled slices regarding BAP1 mutation status with an area under the receiver operating characteristic curve of 0.897. For predicting ccRCCs with BAP1 mutation, the sensitivity, specificity, and precision were 90.4%, 78.8%, and 81%, respectively. For predicting ccRCCs without BAP1 mutation, the sensitivity, specificity, and precision were 78.8%, 90.4%, and 89.1%, respectively. CONCLUSION: Machine learning-based unenhanced CT texture analysis might be a potential method for predicting the BAP1 mutation status of ccRCCs.

A Probabilistic U-Net for Segmentation of Ambiguous Images

  • Kohl, Simon AA
  • Romera-Paredes, Bernardino
  • Meyer, Clemens
  • De Fauw, Jeffrey
  • Ledsam, Joseph R
  • Maier-Hein, Klaus H
  • Eslami, SM
  • Rezende, Danilo Jimenez
  • Ronneberger, Olaf
arXiv preprint arXiv:1806.05034 2018 Journal Article, cited 3 times
Website

Creation and Curation of the Society of Imaging Informatics in Medicine Hackathon Dataset

  • Kohli, Marc
  • Morrison, James J
  • Wawira, Judy
  • Morgan, Matthew B
  • Hostetter, Jason
  • Genereaux, Brad
  • Hussain, Mohannad
  • Langer, Steve G
Journal of Digital Imaging 2017 Journal Article, cited 4 times
Website

Creation and Curation of the Society of Imaging Informatics in Medicine Hackathon Dataset

  • Kohli, Marc
  • Morrison, James J
  • Wawira, Judy
  • Morgan, Matthew B
  • Hostetter, Jason
  • Genereaux, Brad
  • Hussain, Mohannad
  • Langer, Steve G
Journal of Digital Imaging 2018 Journal Article, cited 4 times
Website

Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy

  • Koike, Yuhei
  • Akino, Yuichi
  • Sumida, Iori
  • Shiomi, Hiroya
  • Mizuno, Hirokazu
  • Yagi, Masashi
  • Isohashi, Fumiaki
  • Seo, Yuji
  • Suzuki, Osamu
  • Ogawa, Kazuhiko
J Radiat Res 2019 Journal Article, cited 0 times
Website
The aim of this work is to generate synthetic computed tomography (sCT) images from multi-sequence magnetic resonance (MR) images using an adversarial network and to assess the feasibility of sCT-based treatment planning for brain radiotherapy. Datasets for 15 patients with glioblastoma were selected and 580 pairs of CT and MR images were used. T1-weighted, T2-weighted and fluid-attenuated inversion recovery MR sequences were combined to create a three-channel image as input data. A conditional generative adversarial network (cGAN) was trained using image patches. The image quality was evaluated using voxel-wise mean absolute errors (MAEs) of the CT number. For the dosimetric evaluation, 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans were generated using the original CT set and recalculated using the sCT images. The isocenter dose and dose-volume parameters were compared for 3D-CRT and VMAT plans, respectively. The equivalent path length was also compared. The mean MAEs for the whole body, soft tissue, and bone regions were 108.1 +/- 24.0, 38.9 +/- 10.7 and 366.2 +/- 62.0 Hounsfield units, respectively. The dosimetric evaluation revealed no significant difference in the isocenter dose for 3D-CRT plans. The differences in the dose received by 2% of the volume (D2%), D50% and D98% relative to the prescribed dose were <1.0%. The overall equivalent path length was shorter than that for real CT by 0.6 +/- 1.9 mm. A treatment planning study using generated sCT detected only small, clinically negligible differences. These findings demonstrated the feasibility of generating sCT images for MR-only radiotherapy from multi-sequence MR images using cGAN.

Investigating the role of model-based and model-free imaging biomarkers as early predictors of neoadjuvant breast cancer therapy outcome

  • Kontopodis, Eleftherios
  • Venianaki, Maria
  • Manikis, George C
  • Nikiforaki, Katerina
  • Salvetti, Ovidio
  • Papadaki, Efrosini
  • Papadakis, Georgios Z
  • Karantanas, Apostolos H
  • Marias, Kostas
IEEE J Biomed Health Inform 2019 Journal Article, cited 0 times
Website
Imaging biomarkers (IBs) play a critical role in the clinical management of breast cancer (BRCA) patients throughout the cancer continuum for screening, diagnosis and therapy assessment especially in the neoadjuvant setting. However, certain model-based IBs suffer from significant variability due to the complex workflows involved in their computation, whereas model-free IBs have not been properly studied regarding clinical outcome. In the present study, IBs from 35 BRCA patients who received neoadjuvant chemotherapy (NAC) were extracted from dynamic contrast enhanced MR imaging (DCE-MRI) data with two different approaches, a model-free approach based on pattern recognition (PR), and a model-based one using pharmacokinetic compartmental modeling. Our analysis found that both model-free and model-based biomarkers can predict pathological complete response (pCR) after the first cycle of NAC. Overall, 8 biomarkers predicted the treatment response after the first cycle of NAC, with statistical significance (p-value<0.05), and 3 at the baseline. The best pCR predictors at first follow-up, achieving high AUC and sensitivity and specificity more than 50%, were the hypoxic component with threshold2 (AUC 90.4%) from the PR method, and the median value of kep (AUC 73.4%) from the model-based approach. Moreover, the 80th percentile of ve achieved the highest pCR prediction at baseline with AUC 78.5%. The results suggest that model-free DCE-MRI IBs could be a more robust alternative to complex, model-based ones such as kep and favor the hypothesis that the PR image-derived hypoxic image component captures actual tumor hypoxia information able to predict BRCA NAC outcome.
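
For reference, the model-based parameters discussed above (Ktrans, kep, ve) are related through the standard Tofts pharmacokinetic model, reproduced here as general background rather than taken from the paper itself:

C_t(t) = K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}\,(t-\tau)}\, d\tau, \qquad k_{ep} = \frac{K^{\mathrm{trans}}}{v_e}

where C_t is the tissue contrast-agent concentration and C_p is the arterial input function.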

Automated Segmentation of Hyperintense Regions in FLAIR MRI Using Deep Learning

  • Korfiatis, Panagiotis
  • Kline, Timothy L
  • Erickson, Bradley J
Tomography: a journal for imaging research 2016 Journal Article, cited 16 times
Website

Radiomics in Brain Tumors: An Emerging Technique for Characterization of Tumor Environment

  • Kotrotsou, Aikaterini
  • Zinn, Pascal O
  • Colen, Rivka R
Magnetic Resonance Imaging Clinics of North America 2016 Journal Article, cited 20 times
Website

The quest for 'diagnostically lossless' medical image compression: a comparative study of objective quality metrics for compressed medical images

  • Kowalik-Urbaniak, Ilona
  • Brunet, Dominique
  • Wang, Jiheng
  • Koff, David
  • Smolarski-Koff, Nadine
  • Vrscay, Edward R
  • Wallace, Bill
  • Wang, Zhou
2014 Conference Proceedings, cited 0 times

Lupsix: A Cascade Framework for Lung Parenchyma Segmentation in Axial CT Images

  • Koyuncu, Hasan
International Journal of Intelligent Systems and Applications in Engineering 2018 Journal Article, cited 0 times
Website

Impact of internal target volume definition for pencil beam scanned proton treatment planning in the presence of respiratory motion variability for lung cancer: A proof of concept

  • Krieger, Miriam
  • Giger, Alina
  • Salomir, Rares
  • Bieri, Oliver
  • Celicanin, Zarko
  • Cattin, Philippe C
  • Lomax, Antony J
  • Weber, Damien C
  • Zhang, Ye
Radiotherapy and Oncology 2020 Journal Article, cited 0 times
Website

Medical (CT) image generation with style

  • Krishna, Arjun
  • Mueller, Klaus
2019 Conference Proceedings, cited 0 times

Performance Analysis of Denoising in MR Images with Double Density Dual Tree Complex Wavelets, Curvelets and NonSubsampled Contourlet Transforms

  • Krishnakumar, V
  • Parthiban, Latha
Annual Review & Research in Biology 2014 Journal Article, cited 0 times

Three-dimensional lung nodule segmentation and shape variance analysis to detect lung cancer with reduced false positives

  • Krishnamurthy, Senthilkumar
  • Narasimhan, Ganesh
  • Rengasamy, Umamaheswari
Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 2016 Journal Article, cited 17 times
Website

An Level Set Evolution Morphology Based Segmentation of Lung Nodules and False Nodule Elimination by 3D Centroid Shift and Frequency Domain DC Constant Analysis

  • Krishnamurthy, Senthilkumar
  • Narasimhan, Ganesh
  • Rengasamy, Umamaheswari
International Journal of u- and e- Service, Science and Technology 2016 Journal Article, cited 0 times
Website

Content-Based Medical Image Retrieval: A Survey of Applications to Multidimensional and Multimodality Data

  • Kumar, Ashnil
  • Kim, Jinman
  • Cai, Weidong
  • Fulham, Michael
  • Feng, Dagan
Journal of Digital Imaging 2013 Journal Article, cited 109 times
Website

A Visual Analytics Approach using the Exploration of Multi-Dimensional Feature Spaces for Content-based Medical Image Retrieval

  • Kumar, Ajit
  • Nette, Falk
  • Klein, Krystal
  • Fulham, Michael
  • Kim, Jung-Ho
2014 Journal Article, cited 13 times
Website

Discovery radiomics for pathologically-proven computed tomography lung cancer prediction

  • Kumar, Devinder
  • Chung, Audrey G
  • Shaifee, Mohammad J
  • Khalvati, Farzad
  • Haider, Masoom A
  • Wong, Alexander
2017 Conference Proceedings, cited 30 times
Website

Lung Nodule Classification Using Deep Features in CT Images

  • Kumar, Devinder
  • Wong, Alexander
  • Clausi, David A
2015 Conference Proceedings, cited 114 times
Website

Computer-Aided Diagnosis of Life-Threatening Diseases

  • Kumar, Pramod
  • Ambekar, Sameer
  • Roy, Subarna
  • Kunchur, Pavan
2019 Book Section, cited 0 times
According to WHO, the incidence of life-threatening diseases like cancer, diabetes, and Alzheimer’s disease is escalating globally. In the past few decades, traditional methods have been used to diagnose such diseases. These traditional methods often have limitations such as lack of accuracy, expense, and time-consuming procedures. Computer-aided diagnosis (CAD) aims to overcome these limitations by personalizing healthcare issues. Machine learning is a promising CAD method, offering effective solutions for these diseases. It is being used for early detection of cancer, diabetic retinopathy, as well as Alzheimer’s disease, and also to identify diseases in plants. Machine learning can increase efficiency, making the process more cost effective, with quicker delivery of results. Several CAD algorithms (ANN, SVM, etc.) can be trained on disease datasets and eventually make significant predictions. It has also been shown that CAD algorithms have potential for the diagnosis and early detection of life-threatening diseases.

Human Ether-a-Go-Go-Related-1 Gene (hERG) K+ Channel as a Prognostic Marker and Therapeutic Target for Glioblastoma

  • Kuo, John S.
  • Pointer, Kelli Briana
  • Clark, Paul A.
  • Robertson, Gail
Neurosurgery 2015 Journal Article, cited 0 times
Website

Combining Generative Models for Multifocal Glioma Segmentation and Registration

  • Kwon, Dongjin
  • Shinohara, Russell T
  • Akbari, Hamed
  • Davatzikos, Christos
2014 Book Section, cited 55 times
Website

Glioma Segmentation with Cascaded Unet

  • Lachinov, Dmitry
  • Vasiliev, Evgeny
  • Turlapov, Vadim
arXiv preprint arXiv:1810.04008 2018 Journal Article, cited 0 times
Website

Acute Tumor Transition Angle on Computed Tomography Predicts Chromosomal Instability Status of Primary Gastric Cancer: Radiogenomics Analysis from TCGA and Independent Validation

  • Lai, Ying-Chieh
  • Yeh, Ta-Sen
  • Wu, Ren-Chin
  • Tsai, Cheng-Kun
  • Yang, Lan-Yan
  • Lin, Gigin
  • Kuo, Michael D
Cancers 2019 Journal Article, cited 0 times
Website
Chromosomal instability (CIN) of gastric cancer is correlated with distinct outcomes. This study aimed to investigate the role of computed tomography (CT) imaging traits in predicting the CIN status of gastric cancer. We screened 443 patients in the Cancer Genome Atlas gastric cancer cohort to filter 40 patients with complete CT imaging and genomic data as the training cohort. CT imaging traits were subjected to logistic regression to select independent predictors for the CIN status. For the validation cohort, we prospectively enrolled 18 gastric cancer patients for CT and tumor genomic analysis. The imaging predictors were tested in the validation cohort using receiver operating characteristic curve (ROC) analysis. Thirty patients (75%) in the training cohort and 9 patients (50%) in the validation cohort had CIN subtype gastric cancers. Smaller tumor diameter (p = 0.017) and acute tumor transition angle (p = 0.045) independently predicted CIN status in the training cohort. In the validation cohort, acute tumor transition angle demonstrated the highest accuracy, sensitivity, and specificity of 88.9%, 88.9%, and 88.9%, respectively, and an area under the ROC curve of 0.89. In conclusion, this pilot study showed that acute tumor transition angle on CT images may predict the CIN status of gastric cancer.

Textural Analysis of Tumour Imaging: A Radiomics Approach

  • Lambrecht, Joren
2017 Thesis, cited 0 times
Website

A simple texture feature for retrieval of medical images

  • Lan, Rushi
  • Zhong, Si
  • Liu, Zhenbing
  • Shi, Zhuo
  • Luo, Xiaonan
Multimedia Tools and Applications 2017 Journal Article, cited 2 times
Website

Collaborative and Reproducible Research: Goals, Challenges, and Strategies

  • Langer, S. G.
  • Shih, G.
  • Nagy, P.
  • Landman, B. A.
J Digit Imaging 2018 Journal Article, cited 1 times
Website
Combining imaging biomarkers with genomic and clinical phenotype data is the foundation of precision medicine research efforts. Yet, biomedical imaging research requires unique infrastructure compared with principally text-driven clinical electronic medical record (EMR) data. The issues are related to the binary nature of the file format and transport mechanism for medical images as well as the post-processing image segmentation and registration needed to combine anatomical and physiological imaging data sources. The SIIM Machine Learning Committee was formed to analyze the gaps and challenges surrounding research into machine learning in medical imaging and to find ways to mitigate these issues. At the 2017 annual meeting, a whiteboard session was held to rank the most pressing issues and develop strategies to meet them. The results, and further reflections, are summarized in this paper.

A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop

  • Langlotz, Curtis P
  • Allen, Bibb
  • Erickson, Bradley J
  • Kalpathy-Cramer, Jayashree
  • Bigelow, Keith
  • Cook, Tessa S
  • Flanders, Adam E
  • Lungren, Matthew P
  • Mendelson, David S
  • Rudie, Jeffrey D
  • Wang, Ge
  • Kandarpa, Krishna
Radiology 2019 Journal Article, cited 1 times
Website
Imaging research laboratories are rapidly creating machine learning systems that achieve expert human performance using open-source methods and tools. These artificial intelligence systems are being developed to improve medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection, computer-aided classification, and radiogenomics. In August 2018, a meeting was held in Bethesda, Maryland, at the National Institutes of Health to discuss the current state of the art and knowledge gaps and to develop a roadmap for future research initiatives. Key research priorities include: 1, new image reconstruction methods that efficiently produce images suitable for human interpretation from source data; 2, automated image labeling and annotation methods, including information extraction from the imaging report, electronic phenotyping, and prospective structured image reporting; 3, new machine learning methods for clinical imaging data, such as tailored, pretrained model architectures, and federated machine learning methods; 4, machine learning methods that can explain the advice they provide to human users (so-called explainable artificial intelligence); and 5, validated methods for image de-identification and data sharing to facilitate wide availability of clinical imaging data sets. This research roadmap is intended to identify and prioritize these needs for academic research laboratories, funding agencies, professional societies, and industry.

A Deep Learning-Based Radiomics Model for Prediction of Survival in Glioblastoma Multiforme

  • Lao, Jiangwei
  • Chen, Yinsheng
  • Li, Zhi-Cheng
  • Li, Qihua
  • Zhang, Ji
  • Liu, Jing
  • Zhai, Guangtao
Scientific Reports 2017 Journal Article, cited 32 times
Website
Traditional radiomics models mainly rely on explicitly-designed handcrafted features from medical images. This paper aimed to investigate if deep features extracted via transfer learning can generate radiomics signatures for prediction of overall survival (OS) in patients with Glioblastoma Multiforme (GBM). This study comprised a discovery data set of 75 patients and an independent validation data set of 37 patients. A total of 1403 handcrafted features and 98304 deep features were extracted from preoperative multi-modality MR images. After feature selection, a six-deep-feature signature was constructed by using the least absolute shrinkage and selection operator (LASSO) Cox regression model. A radiomics nomogram was further presented by combining the signature and clinical risk factors such as age and Karnofsky Performance Score. Compared with traditional risk factors, the proposed signature achieved better performance for prediction of OS (C-index = 0.710, 95% CI: 0.588, 0.932) and significant stratification of patients into prognostically distinct groups (P < 0.001, HR = 5.128, 95% CI: 2.029, 12.960). The combined model achieved improved predictive performance (C-index = 0.739). Our study demonstrates that transfer learning-based deep features are able to generate prognostic imaging signature for OS prediction and patient stratification for GBM, indicating the potential of deep imaging feature-based biomarker in preoperative care of GBM patients.
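
The LASSO Cox step described above can be sketched generically as follows, assuming the scikit-survival package and using a synthetic feature matrix in place of the deep features; this illustrates the general technique, not the authors' code.

```python
# Minimal sketch of a LASSO-penalized Cox model selecting a sparse signature
# from many candidate features (synthetic data; not the authors' pipeline).
import numpy as np
from sksurv.linear_model import CoxnetSurvivalAnalysis
from sksurv.util import Surv

rng = np.random.default_rng(0)
n_patients, n_features = 75, 200                     # stand-ins for the discovery cohort
X = rng.normal(size=(n_patients, n_features))
time = rng.exponential(scale=400, size=n_patients)   # survival time (days, synthetic)
event = rng.integers(0, 2, size=n_patients).astype(bool)
y = Surv.from_arrays(event=event, time=time)

# l1_ratio=1.0 gives a pure LASSO penalty; the alpha path controls sparsity.
model = CoxnetSurvivalAnalysis(l1_ratio=1.0, alpha_min_ratio=0.01)
model.fit(X, y)

# Non-zero coefficients at the smallest alpha on the path,
# i.e. the features retained in the signature.
coefs = model.coef_[:, -1]
print("selected features:", np.flatnonzero(coefs).size)
```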

A Hybrid End-to-End Approach Integrating Conditional Random Fields into CNNs for Prostate Cancer Detection on MRI

  • Lapa, Paulo
  • Castelli, Mauro
  • Gonçalves, Ivo
  • Sala, Evis
  • Rundo, Leonardo
Applied Sciences 2020 Journal Article, cited 0 times

Semantic learning machine improves the CNN-Based detection of prostate cancer in non-contrast-enhanced MRI

  • Lapa, Paulo
  • Gonçalves, Ivo
  • Rundo, Leonardo
  • Castelli, Mauro
2019 Conference Proceedings, cited 0 times
Website
Considering that Prostate Cancer (PCa) is the most frequently diagnosed tumor in Western men, considerable attention has been devoted to computer-assisted PCa detection approaches. However, this task still represents an open research question. In clinical practice, multiparametric Magnetic Resonance Imaging (MRI) is becoming the most used modality, aiming at defining biomarkers for PCa. In recent years, deep learning techniques have boosted performance in prostate MR image analysis and classification. This work explores the use of the Semantic Learning Machine (SLM) neuroevolution algorithm to replace the backpropagation algorithm commonly used in the last fully-connected layers of Convolutional Neural Networks (CNNs). We analyzed the non-contrast-enhanced multispectral MRI sequences included in the PROSTATEx dataset, namely: T2-weighted, Proton Density weighted, and Diffusion Weighted Imaging. The experimental results show that the SLM significantly outperforms XmasNet, a state-of-the-art CNN. In particular, with respect to XmasNet, the SLM achieves higher classification accuracy (without pre-training the underlying CNN or relying on backpropagation) as well as a speed-up of one order of magnitude.

4DCT imaging to assess radiomics feature stability: An investigation for thoracic cancers

  • Larue, Ruben THM
  • Van De Voorde, Lien
  • van Timmeren, Janna E
  • Leijenaar, Ralph TH
  • Berbée, Maaike
  • Sosef, Meindert N
  • Schreurs, Wendy MJ
  • van Elmpt, Wouter
  • Lambin, Philippe
Radiotherapy and Oncology 2017 Journal Article, cited 7 times
Website

Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans

  • Lassen, BC
  • Jacobs, C
  • Kuhnigk, JM
  • van Ginneken, B
  • van Rikxoort, EM
Physics in medicine and biology 2015 Journal Article, cited 25 times
Website

Discrimination of Benign and Malignant Suspicious Breast Tumors Based on Semi-Quantitative DCE-MRI Parameters Employing Support Vector Machine

  • Lavasani, Saeedeh Navaei
  • Kazerooni, Anahita Fathi
  • Rad, Hamidreza Saligheh
  • Gity, Masoumeh
Frontiers in Biomedical Technologies 2015 Journal Article, cited 4 times
Website

Automatic Prostate Cancer Segmentation Using Kinetic Analysis in Dynamic Contrast-Enhanced MRI

  • Lavasani, S Navaei
  • Mostaar, A
  • Ashtiyani, M
Journal of Biomedical Physics and Engineering 2017 Journal Article, cited 0 times
Website

Automatic Prostate Cancer Segmentation Using Kinetic Analysis in Dynamic Contrast-Enhanced MRI

  • Lavasani, S Navaei
  • Mostaar, A
  • Ashtiyani, M
Journal of Biomedical Physics & Engineering 2018 Journal Article, cited 0 times
Website

Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network

  • Le, Trong-Ngoc
  • Huynh, Hieu Trung
BioMed Research International 2016 Journal Article, cited 5 times
Website

Automatic GPU memory management for large neural models in TensorFlow

  • Le, Tung D.
  • Imai, Haruki
  • Negishi, Yasushi
  • Kawachiya, Kiyokuni
2019 Conference Proceedings, cited 0 times
Website
Deep learning models are becoming larger and will not fit in the limited memory of accelerators such as GPUs for training. Though many methods have been proposed to solve this problem, they are rather ad-hoc in nature and difficult to extend and integrate with other techniques. In this paper, we tackle the problem in a formal way to provide a strong foundation for supporting large models. We propose a method of formally rewriting the computational graph of a model where swap-out and swap-in operations are inserted to temporarily store intermediate results on CPU memory. By introducing a categorized topological ordering for simulating graph execution, the memory consumption of a model can be easily analyzed by using operation distances in the ordering. As a result, the problem of fitting a large model into a memory-limited accelerator is reduced to the problem of reducing operation distances in a categorized topological ordering. We then show how to formally derive swap-out and swap-in operations from an existing graph and present rules to optimize the graph. Finally, we propose a simulation-based auto-tuning to automatically find suitable graph-rewriting parameters for the best performance. We developed a module in TensorFlow, called LMS, by which we successfully trained ResNet-50 with a 4.9x larger mini-batch size and 3D U-Net with a 5.6x larger image resolution.
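
The core idea, measuring operation distances in a topological ordering and flagging long-lived tensors for swap-out/swap-in, can be illustrated with a toy graph; the sketch below uses Python's standard-library graphlib (3.9+) and is only a conceptual illustration, not the LMS implementation.

```python
# Toy illustration of the paper's idea: compute a topological ordering of a
# computation graph, measure how many operations lie between a tensor's
# producer and its consumer, and flag long-lived tensors as candidates for
# swap-out to CPU memory (and swap-in just before the consumer runs).
from graphlib import TopologicalSorter

# Hypothetical tiny graph: op -> set of ops it depends on.
deps = {
    "conv1": set(), "relu1": {"conv1"}, "conv2": {"relu1"},
    "relu2": {"conv2"}, "loss": {"relu2", "conv1"},   # 'loss' reuses conv1's output
}
order = list(TopologicalSorter(deps).static_order())
position = {op: i for i, op in enumerate(order)}

SWAP_DISTANCE = 2   # threshold on operation distance (illustrative)
for consumer, producers in deps.items():
    for producer in producers:
        distance = position[consumer] - position[producer]
        if distance > SWAP_DISTANCE:
            print(f"swap-out after {producer}, swap-in before {consumer} "
                  f"(distance {distance})")
```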

A Three-Dimensional-Printed Patient-Specific Phantom for External Beam Radiation Therapy of Prostate Cancer

  • Lee, Christopher L
  • Dietrich, Max C
  • Desai, Uma G
  • Das, Ankur
  • Yu, Suhong
  • Xiang, Hong F
  • Jaffe, C Carl
  • Hirsch, Ariel E
  • Bloch, B Nicolas
Journal of Engineering and Science in Medical Diagnostics and Therapy 2018 Journal Article, cited 0 times
Website

High quality imaging from sparsely sampled computed tomography data with deep learning and wavelet transform in various domains

  • Lee, Donghoong
  • Choi, Sunghoon
  • Kim, Hee‐Joung
Medical physics 2018 Journal Article, cited 0 times
Website

Restoration of Full Data from Sparse Data in Low-Dose Chest Digital Tomosynthesis Using Deep Convolutional Neural Networks

  • Lee, Donghoon
  • Kim, Hee-Joung
Journal of Digital Imaging 2018 Journal Article, cited 0 times
Website

Comparison of novel multi-level Otsu (MO-PET) and conventional PET segmentation methods for measuring FDG metabolic tumor volume in patients with soft tissue sarcoma

  • Lee, Inki
  • Im, Hyung-Jun
  • Solaiyappan, Meiyappan
  • Cho, Steve Y
EJNMMI physics 2017 Journal Article, cited 0 times
Website

Volumetric and Voxel-Wise Analysis of Dominant Intraprostatic Lesions on Multiparametric MRI

  • Lee, Joon
  • Carver, Eric
  • Feldman, Aharon
  • Pantelic, Milan V
  • Elshaikh, Mohamed
  • Wen, Ning
Front Oncol 2019 Journal Article, cited 0 times
Website
Introduction: Multiparametric MR imaging (mpMRI) has shown promising results in the diagnosis and localization of prostate cancer. Furthermore, mpMRI may play an important role in identifying the dominant intraprostatic lesion (DIL) for radiotherapy boost. We sought to investigate the level of correlation between dominant tumor foci contoured on various mpMRI sequences. Methods: mpMRI data from 90 patients with MR-guided biopsy-proven prostate cancer were obtained from the SPIE-AAPM-NCI Prostate MR Classification Challenge. Each case consisted of T2-weighted (T2W), apparent diffusion coefficient (ADC), and K(trans) images computed from dynamic contrast-enhanced sequences. All image sets were rigidly co-registered, and the dominant tumor foci were identified and contoured for each MRI sequence. Hausdorff distance (HD), mean distance to agreement (MDA), and Dice and Jaccard coefficients were calculated between the contours for each pair of MRI sequences (i.e., T2 vs. ADC, T2 vs. K(trans), and ADC vs. K(trans)). The voxel-wise Spearman correlation was also obtained between these image pairs. Results: The DILs were located in the anterior fibromuscular stroma, central zone, peripheral zone, and transition zone in 35.2, 5.6, 32.4, and 25.4% of patients, respectively. Gleason grade groups 1-5 represented 29.6, 40.8, 15.5, and 14.1% of the study population, respectively (with grade groups 4 and 5 analyzed together). The mean contour volumes for the T2W images, and the ADC and K(trans) maps were 2.14 +/- 2.1, 2.22 +/- 2.2, and 1.84 +/- 1.5 mL, respectively. K(trans) values were indistinguishable between cancerous regions and the rest of the prostatic regions for 19 patients. The Dice coefficient and Jaccard index were 0.74 +/- 0.13, 0.60 +/- 0.15 for T2W-ADC and 0.61 +/- 0.16, 0.46 +/- 0.16 for T2W-K(trans). The voxel-based Spearman correlations were 0.20 +/- 0.20 for T2W-ADC and 0.13 +/- 0.25 for T2W-K(trans). Conclusions: The DIL contoured on T2W images had a high level of agreement with those contoured on ADC maps, but there was little to no quantitative correlation of these results with tumor location and Gleason grade group. Technical hurdles are yet to be solved for precision radiotherapy to target the DILs based on physiological imaging. A Boolean sum volume (BSV) incorporating all available MR sequences may be reasonable in delineating the DIL boost volume.
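
The agreement metrics named in this abstract (Dice, Jaccard, and Hausdorff distance) can be computed from binary contour masks as in the following sketch; the masks are synthetic and the implementation is a generic illustration, not the authors' pipeline.

```python
# Sketch of the overlap/distance metrics used to compare lesion contours drawn
# on different MR sequences (synthetic masks; not the authors' code).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_jaccard(a: np.ndarray, b: np.ndarray):
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()
    return dice, jaccard

# Two hypothetical 2D lesion masks (e.g. a T2W vs. an ADC contour on one slice).
t2w = np.zeros((64, 64), dtype=bool); t2w[20:40, 20:40] = True
adc = np.zeros((64, 64), dtype=bool); adc[24:44, 22:42] = True

dice, jac = dice_jaccard(t2w, adc)
pts_t2w = np.argwhere(t2w)          # voxel coordinates of each mask
pts_adc = np.argwhere(adc)
hd = max(directed_hausdorff(pts_t2w, pts_adc)[0],
         directed_hausdorff(pts_adc, pts_t2w)[0])   # symmetric Hausdorff distance
print(f"Dice={dice:.2f}  Jaccard={jac:.2f}  Hausdorff={hd:.1f} voxels")
```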

Prognostic value and molecular correlates of a CT image-based quantitative pleural contact index in early stage NSCLC

  • Lee, Juheon
  • Cui, Yi
  • Sun, Xiaoli
  • Li, Bailiang
  • Wu, Jia
  • Li, Dengwang
  • Gensheimer, Michael F
  • Loo, Billy W
  • Diehn, Maximilian
  • Li, Ruijiang
European Radiology 2017 Journal Article, cited 3 times
Website

Prognostic value and molecular correlates of a CT image-based quantitative pleural contact index in early stage NSCLC

  • Lee, Juheon
  • Cui, Yi
  • Sun, Xiaoli
  • Li, Bailiang
  • Wu, Jia
  • Li, Dengwang
  • Gensheimer, Michael F
  • Loo, Billy W
  • Diehn, Maximilian
  • Li, Ruijiang
European Radiology 2018 Journal Article, cited 3 times
Website

Texture feature ratios from relative CBV maps of perfusion MRI are associated with patient survival in glioblastoma

  • Lee, J
  • Jain, R
  • Khalil, K
  • Griffith, B
  • Bosca, R
  • Rao, G
  • Rao, A
American Journal of Neuroradiology 2016 Journal Article, cited 27 times
Website

Spatial Habitat Features Derived from Multiparametric Magnetic Resonance Imaging Data Are Associated with Molecular Subtype and 12-Month Survival Status in Glioblastoma Multiforme

  • Lee, Joonsang
  • Narang, Shivali
  • Martinez, Juan
  • Rao, Ganesh
  • Rao, Arvind
PLoS One 2015 Journal Article, cited 14 times
Website

Spatial Habitat Features Derived from Multiparametric Magnetic Resonance Imaging Data from Glioblastoma Multiforme cases

  • Lee, Joonsang
  • Narang, Shivali
  • Martinez, Juan
  • Rao, Ganesh
  • Rao, Arvind
2015 Dataset, cited 0 times
Description: This dataset pertains to 74 cases from the GBM dataset on which spatial pattern analysis was performed. Spatial Habitat Features derived from Multiparametric Magnetic Resonance Imaging data are associated with Molecular Subtype and 12-month Survival Status in Glioblastoma Multiforme. Publication Citation: Lee, J., Narang, S., Martinez, J., Rao, G., & Rao, A. (2015, September 14). Spatial Habitat Features Derived from Multiparametric Magnetic Resonance Imaging Data Are Associated with Molecular Subtype and 12-Month Survival Status in Glioblastoma Multiforme. (T. Jiang, Ed.) PLOS ONE. Public Library of Science (PLoS). http://doi.org/10.1371/journal.pone.0136557

Associating spatial diversity features of radiologically defined tumor habitats with epidermal growth factor receptor driver status and 12-month survival in glioblastoma: methods and preliminary investigation

  • Lee, Joonsang
  • Narang, Shivali
  • Martinez, Juan J
  • Rao, Ganesh
  • Rao, Arvind
Journal of Medical Imaging 2015 Journal Article, cited 15 times
Website

Spatiotemporal genomic architecture informs precision oncology in glioblastoma

  • Lee, Jin-Ku
  • Wang, Jiguang
  • Sa, Jason K.
  • Ladewig, Erik
  • Lee, Hae-Ock
  • Lee, In-Hee
  • Kang, Hyun Ju
  • Rosenbloom, Daniel S.
  • Camara, Pablo G.
  • Liu, Zhaoqi
  • van Nieuwenhuizen, Patrick
  • Jung, Sang Won
  • Choi, Seung Won
  • Kim, Junhyung
  • Chen, Andrew
  • Kim, Kyu-Tae
  • Shin, Sang
  • Seo, Yun Jee
  • Oh, Jin-Mi
  • Shin, Yong Jae
  • Park, Chul-Kee
  • Kong, Doo-Sik
  • Seol, Ho Jun
  • Blumberg, Andrew
  • Lee, Jung-Il
  • Iavarone, Antonio
  • Park, Woong-Yang
  • Rabadan, Raul
  • Nam, Do-Hyun
Nat Genet 2017 Journal Article, cited 45 times
Website

Quality of Radiomic Features in Glioblastoma Multiforme: Impact of Semi-Automated Tumor Segmentation Software

  • Lee, Myungeun
  • Woo, Boyeong
  • Kuo, Michael D
  • Jamshidi, Neema
  • Kim, Jong Hyo
Korean journal of radiology 2017 Journal Article, cited 7 times
Website

High-dimensional regression analysis links magnetic resonance imaging features and protein expression and signaling pathway alterations in breast invasive carcinoma

  • Lehrer, M.
  • Bhadra, A.
  • Aithala, S.
  • Ravikumar, V.
  • Zheng, Y.
  • Dogan, B.
  • Bonaccio, E.
  • Burnside, E. S.
  • Morris, E.
  • Sutton, E.
  • Whitman, G. J.
  • Net, J.
  • Brandt, K.
  • Ganott, M.
  • Zuley, M.
  • Rao, A.
  • Tcga Breast Phenotype Research Group
Oncoscience 2018 Journal Article, cited 0 times
Website
Background: Imaging features derived from MRI scans can be used for not only breast cancer detection and measuring disease extent, but can also determine gene expression and patient outcomes. The relationships between imaging features, gene/protein expression, and response to therapy hold potential to guide personalized medicine. We aim to characterize the relationship between radiologist-annotated tumor phenotypic features (based on MRI) and the underlying biological processes (based on proteomic profiling) in the tumor. Methods: Multiple-response regression of the image-derived, radiologist-scored features with reverse-phase protein array expression levels generated association coefficients for each combination of image-feature and protein in the RPPA dataset. Significantly-associated proteins for features were analyzed with Ingenuity Pathway Analysis software. Hierarchical clustering of the results of the pathway analysis determined which features were most strongly correlated with pathway activity and cellular functions. Results: Each of the twenty-nine imaging features was found to have a set of significantly correlated molecules, associated biological functions, and pathways. Conclusions: We interrogated the pathway alterations represented by the protein expression associated with each imaging feature. Our study demonstrates the relationships between biological processes (via proteomic measurements) and MRI features within breast tumors.

Multiple-response regression analysis links magnetic resonance imaging features to de-regulated protein expression and pathway activity in lower grade glioma

  • Lehrer, Michael
  • Bhadra, Anindya
  • Ravikumar, Visweswaran
  • Chen, James Y
  • Wintermark, Max
  • Hwang, Scott N
  • Holder, Chad A
  • Huang, Erich P
  • Fevrier-Sullivan, Brenda
  • Freymann, John B
Oncoscience 2017 Journal Article, cited 1 times
Website

Automated Segmentation of Prostate MR Images Using Prior Knowledge Enhanced Random Walker

  • Li, Ang
  • Li, Changyang
  • Wang, Xiuying
  • Eberl, Stefan
  • Feng, David Dagan
  • Fulham, Michael
2013 Conference Proceedings, cited 9 times
Website

Low-Dose CT streak artifacts removal using deep residual neural network

  • Li, Heyi
  • Mueller, Klaus
2017 Conference Proceedings, cited 6 times
Website

MR Imaging Radiomics Signatures for Predicting the Risk of Breast Cancer Recurrence as Given by Research Versions of MammaPrint, Oncotype DX, and PAM50 Gene Assays

  • Li, Hui
  • Zhu, Yitan
  • Burnside, Elizabeth S
  • Drukker, Karen
  • Hoadley, Katherine A
  • Fan, Cheng
  • Conzen, Suzanne D
  • Whitman, Gary J
  • Sutton, Elizabeth J
  • Net, Jose M
Radiology 2016 Journal Article, cited 103 times
Website

Quantitative MRI radiomics in the prediction of molecular classifications of breast cancer subtypes in the TCGA/TCIA data set

  • Li, Hui
  • Zhu, Yitan
  • Burnside, Elizabeth S
  • Huang, Erich
  • Drukker, Karen
  • Hoadley, Katherine A
  • Fan, Cheng
  • Conzen, Suzanne D
  • Zuley, Margarita
  • Net, Jose M
npj Breast Cancer 2016 Journal Article, cited 63 times
Website

Evaluating the performance of a deep learning‐based computer‐aided diagnosis (DL‐CAD) system for detecting and characterizing lung nodules: Comparison with the performance of double reading by radiologists

  • Li, Li
  • Liu, Zhou
  • Huang, Hua
  • Lin, Meng
  • Luo, Dehong
Thoracic cancer 2018 Journal Article, cited 0 times
Website

Patient-specific biomechanical model as whole-body CT image registration tool

  • Li, Mao
  • Miller, Karol
  • Joldes, Grand Roman
  • Doyle, Barry
  • Garlapati, Revanth Reddy
  • Kikinis, Ron
  • Wittek, Adam
Medical Image Analysis 2015 Journal Article, cited 15 times
Website

Biomechanical model for computing deformations for whole‐body image registration: A meshless approach

  • Li, Mao
  • Miller, Karol
  • Joldes, Grand Roman
  • Kikinis, Ron
  • Wittek, Adam
International Journal for Numerical Methods in Biomedical Engineering 2016 Journal Article, cited 13 times
Website

A Fully-Automatic Multiparametric Radiomics Model: Towards Reproducible and Prognostic Imaging Signature for Prediction of Overall Survival in Glioblastoma Multiforme

  • Li, Qihua
  • Bai, Hongmin
  • Chen, Yinsheng
  • Sun, Qiuchang
  • Liu, Lei
  • Zhou, Sijie
  • Wang, Guoliang
  • Liang, Chaofeng
  • Li, Zhi-Cheng
Scientific Reports 2017 Journal Article, cited 9 times
Website

Comparison Between Radiological Semantic Features and Lung-RADS in Predicting Malignancy of Screen-Detected Lung Nodules in the National Lung Screening Trial

  • Li, Qian
  • Balagurunathan, Yoganand
  • Liu, Ying
  • Qi, Jin
  • Schabath, Matthew B
  • Ye, Zhaoxiang
  • Gillies, Robert J
Clinical lung cancer 2017 Journal Article, cited 3 times
Website

Data From QIN-Breast

  • Li, Xia
  • Abramson, Richard G.
  • Arlinghaus, Lori R.
  • Chakravarthy, Anuradha B.
  • Abramson, Vandana G.
  • Sanders, Melinda
  • Yankeelov, Thomas E.
2016 Dataset, cited 0 times
Website

Genotype prediction of ATRX mutation in lower-grade gliomas using an MRI radiomics signature

  • Li, Y.
  • Liu, X.
  • Qian, Z.
  • Sun, Z.
  • Xu, K.
  • Wang, K.
  • Fan, X.
  • Zhang, Z.
  • Li, S.
  • Wang, Y.
  • Jiang, T.
Eur Radiol 2018 Journal Article, cited 2 times
Website
OBJECTIVES: To predict ATRX mutation status in patients with lower-grade gliomas using radiomic analysis. METHODS: Cancer Genome Atlas (TCGA) patients with lower-grade gliomas were randomly allocated into training (n = 63) and validation (n = 32) sets. An independent external-validation set (n = 91) was built based on the Chinese Genome Atlas (CGGA) database. After feature extraction, an ATRX-related signature was constructed. Subsequently, the radiomic signature was combined with a support vector machine to predict ATRX mutation status in training, validation and external-validation sets. Predictive performance was assessed by receiver operating characteristic curve analysis. Correlations between the selected features were also evaluated. RESULTS: Nine radiomic features were screened as an ATRX-associated radiomic signature of lower-grade gliomas based on the LASSO regression model. All nine radiomic features were texture-associated (e.g. sum average and variance). The predictive efficiencies measured by the area under the curve were 94.0 %, 92.5 % and 72.5 % in the training, validation and external-validation sets, respectively. The overall correlations between the nine radiomic features were low in both TCGA and CGGA databases. CONCLUSIONS: Using radiomic analysis, we achieved efficient prediction of ATRX genotype in lower-grade gliomas, and our model was effective in two independent databases. KEY POINTS: * ATRX in lower-grade gliomas could be predicted using radiomic analysis. * The LASSO regression algorithm and SVM performed well in radiomic analysis. * Nine radiomic features were screened as an ATRX-predictive radiomic signature. * The machine-learning model for ATRX-prediction was validated by an independent database.
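
A generic version of the pipeline described above, L1-penalized feature screening followed by an SVM and ROC evaluation, might look like the sketch below; it substitutes an L1 logistic regression for the paper's LASSO model and uses random data, so it only illustrates the technique.

```python
# Illustrative pipeline in the spirit of the abstract (synthetic data): screen
# radiomic features with an L1-penalized model, then classify ATRX status with
# an SVM and report the area under the ROC curve.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(95, 300))         # 95 patients, 300 texture features (toy)
y = rng.integers(0, 2, size=95)        # ATRX mutant vs. wild-type (toy labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

l1_screen = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0))
clf = make_pipeline(StandardScaler(), l1_screen, SVC(probability=True))
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```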

Large-scale retrieval for medical image analytics: A comprehensive review

  • Li, Zhongyu
  • Zhang, Xiaofan
  • Müller, Henning
  • Zhang, Shaoting
Medical Image Analysis 2018 Journal Article, cited 23 times
Website

Multiregional radiomics profiling from multiparametric MRI: Identifying an imaging predictor of IDH1 mutation status in glioblastoma

  • Li, Zhi‐Cheng
  • Bai, Hongmin
  • Sun, Qiuchang
  • Zhao, Yuanshen
  • Lv, Yanchun
  • Zhou, Jian
  • Liang, Chaofeng
  • Chen, Yinsheng
  • Liang, Dong
  • Zheng, Hairong
Cancer medicine 2018 Journal Article, cited 0 times
Website

Evaluate the Malignancy of Pulmonary Nodules Using the 3D Deep Leaky Noisy-or Network

  • Liao, Fangzhou
  • Liang, Ming
  • Li, Zhe
  • Hu, Xiaolin
  • Song, Sen
arXiv preprint arXiv:1711.08324 2017 Journal Article, cited 15 times
Website

Promises and challenges for the implementation of computational medical imaging (radiomics) in oncology

  • Limkin, EJ
  • Sun, R
  • Dercle, L
  • Zacharaki, EI
  • Robert, C
  • Reuzé, S
  • Schernberg, A
  • Paragios, N
  • Deutsch, E
  • Ferté, C
Annals of Oncology 2017 Journal Article, cited 49 times
Website

High-resolution anatomic correlation of cyclic motor patterns in the human colon: Evidence of a rectosigmoid brake

  • Lin, Anthony Y
  • Du, Peng
  • Dinning, Philip G
  • Arkwright, John W
  • Kamp, Jozef P
  • Cheng, Leo K
  • Bissett, Ian P
  • O'Grady, Gregory
American Journal of Physiology-Gastrointestinal and Liver Physiology 2017 Journal Article, cited 12 times
Website

A radiogenomics signature for predicting the clinical outcome of bladder urothelial carcinoma

  • Lin, Peng
  • Wen, Dong-Yue
  • Chen, Ling
  • Li, Xin
  • Li, Sheng-Hua
  • Yan, Hai-Biao
  • He, Rong-Quan
  • Chen, Gang
  • He, Yun
  • Yang, Hong
Eur Radiol 2019 Journal Article, cited 0 times
Website
OBJECTIVES: To determine the integrative value of contrast-enhanced computed tomography (CECT), transcriptomics data and clinicopathological data for predicting the survival of bladder urothelial carcinoma (BLCA) patients. METHODS: RNA sequencing data, radiomics features and clinical parameters of 62 BLCA patients were included in the study. Then, prognostic signatures based on radiomics features and gene expression profile were constructed by using least absolute shrinkage and selection operator (LASSO) Cox analysis. A multi-omics nomogram was developed by integrating radiomics, transcriptomics and clinicopathological data. More importantly, radiomics risk score-related genes were identified via weighted correlation network analysis and submitted to functional enrichment analysis. RESULTS: The radiomics and transcriptomics signatures significantly stratified BLCA patients into high- and low-risk groups in terms of the progression-free interval (PFI). The two risk models remained independent prognostic factors in multivariate analyses after adjusting for clinical parameters. A nomogram was developed and showed an excellent predictive ability for the PFI in BLCA patients. Functional enrichment analysis suggested that the radiomics signature we developed could reflect the angiogenesis status of BLCA patients. CONCLUSIONS: The integrative nomogram incorporated CECT radiomics, transcriptomics and clinical features improved the PFI prediction in BLCA patients and is a feasible and practical reference for oncological precision medicine. KEY POINTS: * Our radiomics and transcriptomics models are proved robust for survival prediction in bladder urothelial carcinoma patients. * A multi-omics nomogram model which integrates radiomics, transcriptomics and clinical features for prediction of progression-free interval in bladder urothelial carcinoma is established. * Molecular functional enrichment analysis is used to reveal the potential molecular function of radiomics signature.

Normalized Euclidean Super-Pixels for Medical Image Segmentation

  • Liu, Feihong
  • Feng, Jun
  • Su, Wenhuo
  • Lv, Zhaohui
  • Xiao, Fang
  • Qiu, Shi
2017 Conference Proceedings, cited 0 times
Website

The Current Role of Image Compression Standards in Medical Imaging

  • Liu, Feng
  • Hernandez-Cabronero, Miguel
  • Sanchez, Victor
  • Marcellin, Michael W
  • Bilgin, Ali
Information 2017 Journal Article, cited 4 times
Website

Multi-subtype classification model for non-small cell lung cancer based on radiomics: SLS model

  • Liu, J.
  • Cui, J.
  • Liu, F.
  • Yuan, Y.
  • Guo, F.
  • Zhang, G.
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: Histological subtypes of non-small cell lung cancer (NSCLC) are crucial for systematic treatment decisions. However, the current studies which used non-invasive radiomic methods to classify NSCLC histology subtypes mainly focused on two main subtypes: squamous cell carcinoma (SCC) and adenocarcinoma (ADC), while multi-subtype classifications that included the other two subtypes of NSCLC: large cell carcinoma (LCC) and not otherwise specified (NOS), were very few in the previous studies. The aim of this work is to establish a multi-subtype classification model for the four main subtypes of NSCLC and improve the classification performance and generalization ability compared with previous studies. METHODS: In this work, we extracted 1029 features from regions of interest in computed tomography (CT) images of 349 patients from two different datasets using radiomic methods. Based on 'three-in-one' concept, we proposed a model called SLS wrapping three algorithms, synthetic minority oversampling technique, l2,1-norm minimization, and support vector machines, into one hybrid technique to classify the four main subtypes of NSCLC: SCC, ADC, LCC and NOS, which could cover the whole range of NSCLC. RESULTS: We analyzed the 247 features obtained by dimension reduction, and found that the extracted features from three methods: first order statistics, gray level co-occurrence matrix, and gray level size zone matrix, were more conducive to the classification of NSCLC subtypes. The proposed SLS model achieved an average classification accuracy of 0.89 on the training set (95% confidence interval [CI]: 0.846 to 0.912) and a classification accuracy of 0.86 on the test set (95% CI: 0.779 to 0.941). CONCLUSIONS: The experiment results showed that the subtypes of NSCLC could be well classified by radiomic method. Our SLS model can accurately classify and diagnose the four subtypes of NSCLC based on CT images, and thus it has the potential to be used in the clinical practice to provide valuable information for lung cancer treatment and further promote the personalized medicine. This article is protected by copyright. All rights reserved.
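
A hedged sketch of two of the three components wrapped in the SLS model (SMOTE oversampling and an SVM classifier) is shown below; the l2,1-norm feature selection is replaced by a simple variance filter and the data are synthetic, so this is not the authors' implementation.

```python
# Class balancing with SMOTE followed by an SVM for four-class NSCLC subtype
# classification (toy, randomly generated radiomic features).
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import VarianceThreshold
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(349, 1029))                       # radiomic feature matrix (toy)
y = rng.choice(["SCC", "ADC", "LCC", "NOS"],           # four NSCLC subtypes
               size=349, p=[0.4, 0.4, 0.1, 0.1])       # deliberately imbalanced
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Crude stand-in for the paper's l2,1-norm feature selection step.
selector = VarianceThreshold(0.5).fit(X_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# Oversample minority subtypes, then train an RBF-kernel SVM.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr_sel, y_tr)
svm = SVC(kernel="rbf").fit(X_bal, y_bal)
print("test accuracy:", accuracy_score(y_te, svm.predict(X_te_sel)))
```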

Computational Identification of Tumor Anatomic Location Associated with Survival in 2 Large Cohorts of Human Primary Glioblastomas

  • Liu, TT
  • Achrol, AS
  • Mitchell, LA
  • Du, WA
  • Loya, JJ
  • Rodriguez, SA
  • Feroze, A
  • Westbroek, EM
  • Yeom, KW
  • Stuart, JM
American Journal of Neuroradiology 2016 Journal Article, cited 6 times
Website

Magnetic resonance perfusion image features uncover an angiogenic subgroup of glioblastoma patients with poor survival and better response to antiangiogenic treatment

  • Liu, Tiffany T.
  • Achrol, Achal S.
  • Mitchell, Lex A.
  • Rodriguez, Scott A.
  • Feroze, Abdullah
  • Michael Iv
  • Kim, Christine
  • Chaudhary, Navjot
  • Gevaert, Olivier
  • Stuart, Josh M.
  • Harsh, Griffith R.
  • Chang, Steven D.
  • Rubin, Daniel L.
Neuro-oncology 2016 Journal Article, cited 15 times
Website
Background. In previous clinical trials, antiangiogenic therapies such as bevacizumab did not show efficacy in patients with newly diagnosed glioblastoma (GBM). This may be a result of the heterogeneity of GBM, which has a variety of imaging-based phenotypes and gene expression patterns. In this study, we sought to identify a phenotypic subtype of GBM patients who have distinct tumor-image features and molecular activities and who may benefit from antiangiogenic therapies.Methods. Quantitative image features characterizing subregions of tumors and the whole tumor were extracted from preoperative and pretherapy perfusion magnetic resonance (MR) images of 117 GBM patients in 2 independent cohorts. Unsupervised consensus clustering was performed to identify robust clusters of GBM in each cohort. Cox survival and gene set enrichment analyses were conducted to characterize the clinical significance and molecular pathway activities of the clusters. The differential treatment efficacy of antiangiogenic therapy between the clusters was evaluated.Results. A subgroup of patients with elevated perfusion features was identified and was significantly associated with poor patient survival after accounting for other clinical covariates (P values <.01; hazard ratios > 3) consistently found in both cohorts. Angiogenesis and hypoxia pathways were enriched in this subgroup of patients, suggesting the potential efficacy of antiangiogenic therapy. Patients of the angiogenic subgroups pooled from both cohorts, who had chemotherapy information available, had significantly longer survival when treated with antiangiogenic therapy (log-rank P=.022).Conclusions. Our findings suggest that an angiogenic subtype of GBM patients may benefit from antiangiogenic therapy with improved overall survival.

A CADe system for nodule detection in thoracic CT images based on artificial neural network

  • Liu, Xinglong
  • Hou, Fei
  • Qin, Hong
  • Hao, Aimin
Science China Information Sciences 2017 Journal Article, cited 11 times
Website

A radiomic signature as a non-invasive predictor of progression-free survival in patients with lower-grade gliomas

  • Liu, Xing
  • Li, Yiming
  • Qian, Zenghui
  • Sun, Zhiyan
  • Xu, Kaibin
  • Wang, Kai
  • Liu, Shuai
  • Fan, Xing
  • Li, Shaowu
  • Zhang, Zhong
NeuroImage: Clinical 2018 Journal Article, cited 0 times
Website

Molecular profiles of tumor contrast enhancement: A radiogenomic analysis in anaplastic gliomas

  • Liu, Xing
  • Li, Yiming
  • Sun, Zhiyan
  • Li, Shaowu
  • Wang, Kai
  • Fan, Xing
  • Liu, Yuqing
  • Wang, Lei
  • Wang, Yinyan
  • Jiang, Tao
Cancer medicine 2018 Journal Article, cited 0 times
Website

A Genetic Polymorphism in CTLA-4 Is Associated with Overall Survival in Sunitinib-Treated Patients with Clear Cell Metastatic Renal Cell Carcinoma

  • Liu, X.
  • Swen, J. J.
  • Diekstra, M. H. M.
  • Boven, E.
  • Castellano, D.
  • Gelderblom, H.
  • Mathijssen, R. H. J.
  • Vermeulen, S. H.
  • Oosterwijk, E.
  • Junker, K.
  • Roessler, M.
  • Alexiusdottir, K.
  • Sverrisdottir, A.
  • Radu, M. T.
  • Ambert, V.
  • Eisen, T.
  • Warren, A.
  • Rodriguez-Antona, C.
  • Garcia-Donas, J.
  • Bohringer, S.
  • Koudijs, K. K. M.
  • Kiemeney, Lalm
  • Rini, B. I.
  • Guchelaar, H. J.
Clin Cancer Res 2018 Journal Article, cited 0 times
Website
Purpose: The survival of patients with clear cell metastatic renal cell carcinoma (cc-mRCC) has improved substantially since the introduction of tyrosine kinase inhibitors (TKI). With the fact that TKIs interact with immune responses, we investigated whether polymorphisms of genes involved in immune checkpoints are related to the clinical outcome of cc-mRCC patients treated with sunitinib as first TKI.Experimental Design: Twenty-seven single-nucleotide polymorphisms (SNP) in CD274 (PD-L1), PDCD1 (PD-1), and CTLA-4 were tested for a possible association with progression-free survival (PFS) and overall survival (OS) in a discovery cohort of 550 sunitinib-treated cc-mRCC patients. SNPs with a significant association (P < 0.05) were tested in an independent validation cohort of 138 sunitinib-treated cc-mRCC patients. Finally, data of the discovery and validation cohort were pooled for meta-analysis.Results:CTLA-4 rs231775 and CD274 rs7866740 showed significant associations with OS in the discovery cohort after correction for age, gender, and Heng prognostic risk group [HR, 0.84; 95% confidence interval (CI), 0.72-0.98; P = 0.028, and HR, 0.73; 95% CI, 0.54-0.99; P = 0.047, respectively]. In the validation cohort, the associations of both SNPs with OS did not meet the significance threshold of P < 0.05. After meta-analysis, CTLA-4 rs231775 showed a significant association with OS (HR, 0.83; 95% CI, 0.72-0.95; P = 0.008). Patients with the GG genotype had longer OS (35.1 months) compared with patients with an AG (30.3 months) or AA genotype (24.3 months). No significant associations with PFS were found.Conclusions: The G-allele of rs231775 in the CTLA-4 gene is associated with an improved OS in sunitinib-treated cc-mRCC patients and could potentially be used as a prognostic biomarker. Clin Cancer Res; 1-7. (c)2018 AACR.

Cross-Modality Knowledge Transfer for Prostate Segmentation from CT Scans

  • Liu, Yucheng
  • Khosravan, Naji
  • Liu, Yulin
  • Stember, Joseph
  • Shoag, Jonathan
  • Bagci, Ulas
  • Jambawalikar, Sachin
2019 Book Section, cited 0 times

Relationship between Glioblastoma Heterogeneity and Survival Time: An MR Imaging Texture Analysis

  • Liu, Y
  • Xu, X
  • Yin, L
  • Zhang, X
  • Li, L
  • Lu, H
American Journal of Neuroradiology 2017 Journal Article, cited 8 times
Website

Conventional MR-based Preoperative Nomograms for Prediction of IDH/1p19q Subtype in Low-Grade Glioma

  • Liu, Zhenyin
  • Zhang, Tao
  • Jiang, Hua
  • Xu, Wenchan
  • Zhang, Jing
Academic radiology 2018 Journal Article, cited 0 times
Website

Detecting Lung Abnormalities From X-rays Using an Improved SSL Algorithm

  • Livieris, Ioannis
  • Kanavos, Andreas
  • Pintelas, Panagiotis
Electronic Notes in Theoretical Computer Science 2019 Journal Article, cited 0 times

A Weighted Voting Ensemble Self-Labeled Algorithm for the Detection of Lung Abnormalities from X-Rays

  • Livieris, Ioannis
  • Kanavos, Andreas
  • Tampakas, Vassilis
  • Pintelas, Panagiotis
Algorithms 2019 Journal Article, cited 0 times
Website
During the last decades, intensive efforts have been devoted to the extraction of useful knowledge from large volumes of medical data employing advanced machine learning and data mining techniques. Advances in digital chest radiography have enabled research and medical centers to accumulate large repositories of classified (labeled) images and mostly of unclassified (unlabeled) images from human experts. Machine learning methods such as semi-supervised learning algorithms have been proposed as a new direction to address the problem of shortage of available labeled data, by exploiting the explicit classification information of labeled data with the information hidden in the unlabeled data. In the present work, we propose a new ensemble semi-supervised learning algorithm for the classification of lung abnormalities from chest X-rays based on a new weighted voting scheme. The proposed algorithm assigns a vector of weights on each component classifier of the ensemble based on its accuracy on each class. Our numerical experiments illustrate the efficiency of the proposed ensemble methodology against other state-of-the-art classification methods.
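
The weighted voting scheme described here, one weight per classifier per class based on that classifier's per-class accuracy, can be sketched as follows; the self-labeled (semi-supervised) training loop is omitted and the data are toy, so this only illustrates the voting rule.

```python
# Per-class-accuracy weighted voting across a small ensemble (toy data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

members = [LogisticRegression(max_iter=1000), GaussianNB(), DecisionTreeClassifier()]
weights = []                                   # one weight vector per classifier
for clf in members:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_tr)
    # Weight of a classifier for class k = its accuracy on class k (training set).
    weights.append(np.array([np.mean(pred[y_tr == k] == k) for k in np.unique(y_tr)]))

def weighted_vote(x):
    scores = np.zeros(2)
    for clf, w in zip(members, weights):
        k = clf.predict(x.reshape(1, -1))[0]
        scores[k] += w[k]                      # each member adds its class-specific weight
    return np.argmax(scores)

acc = np.mean([weighted_vote(x) == t for x, t in zip(X_te, y_te)])
print("ensemble accuracy:", acc)
```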

JOURNAL CLUB: Computer-Aided Detection of Lung Nodules on CT With a Computerized Pulmonary Vessel Suppressed Function

  • Lo, ShihChung B
  • Freedman, Matthew T
  • Gillis, Laura B
  • White, Charles S
  • Mun, Seong K
American Journal of Roentgenology 2018 Journal Article, cited 4 times
Website

Effect of Imaging Parameter Thresholds on MRI Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer Subtypes

  • Lo, Wei-Ching
  • Li, Wen
  • Jones, Ella F
  • Newitt, David C
  • Kornak, John
  • Wilmes, Lisa J
  • Esserman, Laura J
  • Hylton, Nola M
PLoS One 2016 Journal Article, cited 7 times
Website

Brain tumor segmentation using morphological processing and the discrete wavelet transform

  • Lojzim, Joshua Michael
  • Fries, Marcus
Journal of Young Investigators 2017 Journal Article, cited 0 times
Website

Machine Learning-Based Radiomics for Molecular Subtyping of Gliomas

  • Lu, Chia-Feng
  • Hsu, Fei-Ting
  • Hsieh, Kevin Li-Chun
  • Kao, Yu-Chieh Jill
  • Cheng, Sho-Jen
  • Hsu, Justin Bo-Kai
  • Tsai, Ping-Huei
  • Chen, Ray-Jade
  • Huang, Chao-Ching
  • Yen, Yun
Clinical Cancer Research 2018 Journal Article, cited 1 times
Website

A mathematical-descriptor of tumor-mesoscopic-structure from computed-tomography images annotates prognostic- and molecular-phenotypes of epithelial ovarian cancer

  • Lu, Haonan
  • Arshad, Mubarik
  • Thornton, Andrew
  • Avesani, Giacomo
  • Cunnea, Paula
  • Curry, Ed
  • Kanavati, Fahdi
  • Liang, Jack
  • Nixon, Katherine
  • Williams, Sophie T.
  • Hassan, Mona Ali
  • Bowtell, David D. L.
  • Gabra, Hani
  • Fotopoulou, Christina
  • Rockall, Andrea
  • Aboagye, Eric O.
Nature Communications 2019 Journal Article, cited 0 times
Website
The five-year survival rate of epithelial ovarian cancer (EOC) is approximately 35-40% despite maximal treatment efforts, highlighting a need for stratification biomarkers for personalized treatment. Here we extract 657 quantitative mathematical descriptors from the preoperative CT images of 364 EOC patients at their initial presentation. Using machine learning, we derive a non-invasive summary-statistic of the primary ovarian tumor based on 4 descriptors, which we name "Radiomic Prognostic Vector" (RPV). RPV reliably identifies the 5% of patients with median overall survival less than 2 years, significantly improves established prognostic methods, and is validated in two independent, multi-center cohorts. Furthermore, genetic, transcriptomic and proteomic analysis from two independent datasets elucidate that stromal phenotype and DNA damage response pathways are activated in RPV-stratified tumors. RPV and its associated analysis platform could be exploited to guide personalized therapy of EOC and is potentially transferrable to other cancer types.

Study on Prognosis Factors of Non-Small Cell Lung Cancer Based on CT Image Features

  • Lu, Xiaoteng
  • Gong, Jing
  • Nie, Shengdong
Journal of Medical Imaging and Health Informatics 2019 Journal Article, cited 0 times
This study aims to investigate prognostic factors of non-small cell lung cancer (NSCLC) based on CT image features and to develop a new quantitative image-feature prognosis approach using CT images. First, lung tumors were segmented and image features were extracted. Second, univariate survival analysis was performed with the Kaplan-Meier method, and multivariate analysis was carried out with a Cox regression model. Third, the SMOTE algorithm was used to balance the feature data. Finally, classifiers based on WEKA were established to test the prognostic ability of the independent prognostic factors. Univariate analysis showed that six features had a significant influence on patients' prognosis. After multivariate analysis, angular second moment, srhge, and volume were significantly related to the survival of NSCLC patients (P < 0.05). According to the classifier results, these three features predicted NSCLC outcome well; the best classification accuracy was 78.4%. The results of our study suggest that angular second moment, srhge, and volume are potentially strong independent prognostic factors for NSCLC.
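
The survival-analysis steps named in this abstract (Kaplan-Meier univariate analysis and Cox multivariate analysis) can be reproduced generically with the lifelines package, as in the sketch below; the feature values and endpoints are synthetic and the feature names are used only as placeholders.

```python
# Generic Kaplan-Meier + Cox workflow on synthetic data (not the authors' code).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "time_months": rng.exponential(24, size=120),
    "event": rng.integers(0, 2, size=120),
    "angular_second_moment": rng.normal(size=120),   # placeholder CT texture features
    "srhge": rng.normal(size=120),
    "volume": rng.lognormal(size=120),
})

# Univariate: Kaplan-Meier curves split at one feature's median, compared by log-rank test.
high = df["volume"] > df["volume"].median()
kmf = KaplanMeierFitter().fit(df.loc[high, "time_months"], df.loc[high, "event"])
print(logrank_test(df.loc[high, "time_months"], df.loc[~high, "time_months"],
                   df.loc[high, "event"], df.loc[~high, "event"]).p_value)

# Multivariate: Cox proportional-hazards regression over the candidate features.
cph = CoxPHFitter().fit(df, duration_col="time_months", event_col="event")
cph.print_summary()
```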

Evolutionary image simplification for lung nodule classification with convolutional neural networks

  • Lückehe, Daniel
  • von Voigt, Gabriele
International journal of computer assisted radiology and surgery 2018 Journal Article, cited 0 times
Website

vPSNR: a visualization-aware image fidelity metric tailored for diagnostic imaging

  • Lundström, Claes
International journal of computer assisted radiology and surgery 2013 Journal Article, cited 0 times
Website

Automatic lung nodule classification with radiomics approach

  • Ma, Jingchen
  • Wang, Qian
  • Ren, Yacheng
  • Hu, Haibo
  • Zhao, Jun
2016 Conference Proceedings, cited 10 times
Website

Opportunities and challenges to utilization of quantitative imaging: Report of the AAPM practical big data workshop

  • Mackie, Thomas R
  • Jackson, Edward F
  • Giger, Maryellen
Medical physics 2018 Journal Article, cited 1 times
Website

Harmonizing the pixel size in retrospective computed tomography radiomics studies

  • Mackin, Dennis
  • Fave, Xenia
  • Zhang, Lifei
  • Yang, Jinzhong
  • Jones, A Kyle
  • Ng, Chaan S
PLoS One 2017 Journal Article, cited 19 times
Website

Automatic Classification of Normal and Cancer Lung CT Images Using Multiscale AM-FM Features

  • Magdy, Eman
  • Zayed, Nourhan
  • Fakhr, Mahmoud
International Journal of Biomedical Imaging 2015 Journal Article, cited 6 times
Website

Lung Cancer Detection using CT Scan Images

  • Makaju, Suren
  • Prasad, PWC
  • Alsadoon, Abeer
  • Singh, AK
  • Elchouemi, A
Procedia Computer Science 2018 Journal Article, cited 5 times
Website

Measurement of smaller colon polyp in CT colonography images using morphological image processing

  • Manjunath, KN
  • Siddalingaswamy, PC
  • Prabhu, GK
International journal of computer assisted radiology and surgery 2017 Journal Article, cited 1 times
Website

Tumor Growth in the Brain: Complexity and Fractality

  • Martín-Landrove, Miguel
  • Brú, Antonio
  • Rueda-Toicen, Antonio
  • Torres-Hoyos, Francisco
2016 Book Section, cited 1 times
Website

Can Planning Images Reduce Scatter in Follow-Up Cone-Beam CT?

  • Mason, Jonathan
  • Perelli, Alessandro
  • Nailon, William
  • Davies, Mike
arXiv preprint arXiv:1703.07179 2017 Journal Article, cited 2 times
Website

Quantitative cone-beam computed tomography reconstruction for radiotherapy planning

  • Mason, Jonathan Hugh
2018 Thesis, cited 0 times
Website

Computer-Assisted Decision Support System in Pulmonary Cancer Detection and Stage Classification on CT Images

  • Masood, Anum
  • Sheng, Bin
  • Li, Ping
  • Hou, Xuhong
  • Wei, Xiaoer
  • Qin, Jing
  • Feng, Dagan
Journal of biomedical informatics 2018 Journal Article, cited 10 times
Website

Bone suppression for chest X-ray image using a convolutional neural filter

  • Matsubara, N.
  • Teramoto, A.
  • Saito, K.
  • Fujita, H.
Australas Phys Eng Sci Med 2019 Journal Article, cited 0 times
Website
Chest X-rays are used for mass screening for the early detection of lung cancer. However, lung nodules are often overlooked because of bones overlapping the lung fields. Bone suppression techniques based on artificial intelligence have been developed to solve this problem. However, bone suppression accuracy needs improvement. In this study, we propose a convolutional neural filter (CNF) for bone suppression based on a convolutional neural network, which is frequently used in the medical field and has excellent performance in image processing. The CNF outputs a value for the bone component of the target pixel from the pixel values in the neighborhood of the target pixel. By processing all positions in the input image, a bone-extracted image is generated. Finally, the bone-suppressed image is obtained by subtracting the bone-extracted image from the original chest X-ray image. Bone suppression was most accurate when using a CNF with six convolutional layers, yielding bone suppression of 89.2%. In addition, abnormalities, if present, were effectively imaged by suppressing only bone components and maintaining soft tissue. These results suggest that the chances of missing abnormalities may be reduced by using the proposed method. The proposed method is useful for bone suppression in chest X-ray images.
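
A minimal Keras sketch of a convolutional filter that predicts a bone image and subtracts it from the input radiograph is given below; the layer sizes, channel counts, and placeholder data are assumptions drawn only loosely from the abstract, not the authors' architecture.

```python
# Sketch: small CNN estimating the bone component of a chest radiograph, with
# the bone-suppressed image obtained by subtraction (illustrative only).
import numpy as np
import tensorflow as tf

def build_cnf(n_layers: int = 6) -> tf.keras.Model:
    inp = tf.keras.Input(shape=(None, None, 1))               # grayscale chest X-ray
    x = inp
    for _ in range(n_layers - 1):
        x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    bone = tf.keras.layers.Conv2D(1, 3, padding="same")(x)    # estimated bone image
    return tf.keras.Model(inp, bone)

model = build_cnf()
model.compile(optimizer="adam", loss="mse")

# Training would pair chest X-rays with bone images (e.g. from dual-energy data);
# here a random array stands in for an input radiograph.
xray = np.random.rand(1, 256, 256, 1).astype("float32")
bone_estimate = model.predict(xray)
bone_suppressed = xray - bone_estimate                        # soft-tissue image
```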

Automated Classification of Lung Diseases in Computed Tomography Images Using a Wavelet Based Convolutional Neural Network

  • Matsuyama, Eri
  • Tsai, Du-Yih
Journal of Biomedical Science and Engineering 2018 Journal Article, cited 0 times
Website

[18F] FDG Positron Emission Tomography (PET) Tumor and Penumbra Imaging Features Predict Recurrence in Non-Small Cell Lung Cancer

  • Mattonen, Sarah A.
  • Davidzon, Guido A.
  • Bakr, Shaimaa
  • Echegaray, Sebastian
  • Leung, Ann N. C.
  • Vasanawala, Minal
  • Horng, George
  • Napel, Sandy
  • Nair, Viswam S.
Tomography (Ann Arbor, Mich.) 2019 Journal Article, cited 0 times
Website
We identified computational imaging features on 18F-fluorodeoxyglucose positron emission tomography (PET) that predict recurrence/progression in non-small cell lung cancer (NSCLC). We retrospectively identified 291 patients with NSCLC from 2 prospectively acquired cohorts (training, n = 145; validation, n = 146). We contoured the metabolic tumor volume (MTV) on all pretreatment PET images and added a 3-dimensional penumbra region that extended outward 1 cm from the tumor surface. We generated 512 radiomics features, selected 435 features based on robustness to contour variations, and then applied randomized sparse regression (LASSO) to identify features that predicted time to recurrence in the training cohort. We built Cox proportional hazards models in the training cohort and independently evaluated the models in the validation cohort. Two features including stage and a MTV plus penumbra texture feature were selected by LASSO. Both features were significant univariate predictors, with stage being the best predictor (hazard ratio [HR] = 2.15 [95% confidence interval (CI): 1.56-2.95], P < .001). However, adding the MTV plus penumbra texture feature to stage significantly improved prediction (P = .006). This multivariate model was a significant predictor of time to recurrence in the training cohort (concordance = 0.74 [95% CI: 0.66-0.81], P < .001) that was validated in a separate validation cohort (concordance = 0.74 [95% CI: 0.67-0.81], P < .001). A combined radiomics and clinical model improved NSCLC recurrence prediction. FDG PET radiomic features may be useful biomarkers for lung cancer prognosis and add clinical utility for risk stratification.

Bone Marrow and Tumor Radiomics at (18)F-FDG PET/CT: Impact on Outcome Prediction in Non-Small Cell Lung Cancer

  • Mattonen, Sarah A
  • Davidzon, Guido A
  • Benson, Jalen
  • Leung, Ann N C
  • Vasanawala, Minal
  • Horng, George
  • Shrager, Joseph B
  • Napel, Sandy
  • Nair, Viswam S.
Radiology 2019 Journal Article, cited 0 times
Website
Background Primary tumor maximum standardized uptake value is a prognostic marker for non-small cell lung cancer. In the setting of malignancy, bone marrow activity from fluorine 18-fluorodeoxyglucose (FDG) PET may be informative for clinical risk stratification. Purpose To determine whether integrating FDG PET radiomic features of the primary tumor, tumor penumbra, and bone marrow identifies lung cancer disease-free survival more accurately than clinical features alone. Materials and Methods Patients were retrospectively analyzed from two distinct cohorts collected between 2008 and 2016. Each tumor, its surrounding penumbra, and bone marrow from the L3-L5 vertebral bodies was contoured on pretreatment FDG PET/CT images. There were 156 bone marrow and 512 tumor and penumbra radiomic features computed from the PET series. Randomized sparse Cox regression by least absolute shrinkage and selection operator identified features that predicted disease-free survival in the training cohort. Cox proportional hazards models were built and locked in the training cohort, then evaluated in an independent cohort for temporal validation. Results There were 227 patients analyzed; 136 for training (mean age, 69 years +/- 9 [standard deviation]; 101 men) and 91 for temporal validation (mean age, 72 years +/- 10; 91 men). The top clinical model included stage; adding tumor region features alone improved outcome prediction (log likelihood, -158 vs -152; P = .007). Adding bone marrow features continued to improve performance (log likelihood, -158 vs -145; P = .001). The top model integrated stage, two bone marrow texture features, one tumor with penumbra texture feature, and two penumbra texture features (concordance, 0.78; 95% confidence interval: 0.70, 0.85; P < .001). This fully integrated model was a predictor of poor outcome in the independent cohort (concordance, 0.72; 95% confidence interval: 0.64, 0.80; P < .001) and a binary score stratified patients into high and low risk of poor outcome (P < .001). Conclusion A model that includes pretreatment fluorine 18-fluorodeoxyglucose PET texture features from the primary tumor, tumor penumbra, and bone marrow predicts disease-free survival of patients with non-small cell lung cancer more accurately than clinical features alone. (c) RSNA, 2019 Online supplemental material is available for this article.

“One Stop Shop” for Prostate Cancer Staging using Imaging Biomarkers and Spatially Registered Multi-Parametric MRI

  • Mayer, Rulon
2020 Patent, cited 0 times
Website

Pilot study for supervised target detection applied to spatially registered multiparametric MRI in order to non-invasively score prostate cancer

  • Mayer, Rulon
  • Simone, Charles B
  • Skinner, William
  • Turkbey, Baris
  • Choykey, Peter
Computers in biology and medicine 2018 Journal Article, cited 0 times
Website

Radiogenomics of lower-grade glioma: algorithmically-assessed tumor shape is associated with tumor genomic subtypes and patient outcomes in a multi-institutional study with The Cancer Genome Atlas data

  • Mazurowski, Maciej A
  • Clark, Kal
  • Czarnek, Nicholas M
  • Shamsesfandabadi, Parisa
  • Peters, Katherine B
  • Saha, Ashirbani
Journal of neuro-oncology 2017 Journal Article, cited 8 times
Website

Predicting outcomes in glioblastoma patients using computerized analysis of tumor shape: preliminary data

  • Mazurowski, Maciej A
  • Czarnek, Nicholas M
  • Collins, Leslie M
  • Peters, Katherine B
  • Clark, Kal L
2016 Conference Proceedings, cited 6 times
Website

Imaging descriptors improve the predictive power of survival models for glioblastoma patients

  • Mazurowski, Maciej Andrzej
  • Desjardins, Annick
  • Malof, Jordan Milton
Neuro-oncology 2013 Journal Article, cited 62 times
Website

Radiogenomic Analysis of Breast Cancer: Luminal B Molecular Subtype Is Associated with Enhancement Dynamics at MR Imaging

  • Mazurowski, Maciej A
  • Zhang, Jing
  • Grimm, Lars J
  • Yoon, Sora C
  • Silber, James I
Radiology 2014 Journal Article, cited 88 times
Website

Radiogenomic Analysis of Breast Cancer: Luminal B Molecular Subtype Is Associated with Enhancement Dynamics at MR Imaging

  • Mazurowski, Maciej A
  • Zhang, Jing
  • Grimm, Lars J
  • Yoon, Sora C
  • Silber, James I
2014 Dataset, cited 88 times
Website

Computer-extracted MR imaging features are associated with survival in glioblastoma patients

  • Mazurowski, Maciej A
  • Zhang, Jing
  • Peters, Katherine B
  • Hobbs, Hasan
Journal of neuro-oncology 2014 Journal Article, cited 33 times
Website

Quantitative Multiparametric MRI Features and PTEN Expression of Peripheral Zone Prostate Cancer: A Pilot Study

  • McCann, Stephanie M
  • Jiang, Yulei
  • Fan, Xiaobing
  • Wang, Jianing
  • Antic, Tatjana
  • Prior, Fred
  • VanderWeele, David
  • Oto, Aytekin
American Journal of Roentgenology 2016 Journal Article, cited 11 times
Website

Equipment to Address Infrastructure and Human Resource Challenges for Radiotherapy in Low-Resource Settings

  • McCarroll, Rachel
2018 Thesis, cited 0 times
Website

Determining the variability of lesion size measurements from ct patient data sets acquired under “no change” conditions

  • McNitt-Gray, Michael F
  • Kim, Grace Hyun
  • Zhao, Binsheng
  • Schwartz, Lawrence H
  • Clunie, David
  • Cohen, Kristin
  • Petrick, Nicholas
  • Fenimore, Charles
  • Lu, ZQ John
  • Buckler, Andrew J
Translational oncology 2015 Journal Article, cited 0 times

Content-Based Image Retrieval System for Pulmonary Nodules Using Optimal Feature Sets and Class Membership-Based Retrieval

  • Mehre, Shrikant A
  • Dhara, Ashis Kumar
  • Garg, Mandeep
  • Kalra, Naveen
  • Khandelwal, Niranjan
  • Mukhopadhyay, Sudipta
Journal of Digital Imaging 2018 Journal Article, cited 0 times
Website

Bolus arrival time and its effect on tissue characterization with dynamic contrast-enhanced magnetic resonance imaging

  • Mehrtash, Alireza
  • Gupta, Sandeep N
  • Shanbhag, Dattesh
  • Miller, James V
  • Kapur, Tina
  • Fennessy, Fiona M
  • Kikinis, Ron
  • Fedorov, Andriy
Journal of Medical Imaging 2016 Journal Article, cited 6 times
Website

Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

  • Meier, Raphael
  • Knecht, Urspeter
  • Loosli, Tina
  • Bauer, Stefan
  • Slotboom, Johannes
  • Wiest, Roland
  • Reyes, Mauricio
Scientific Reports 2016 Journal Article, cited 26 times
Website

Database Acquisition for the Lung Cancer Computer Aided Diagnostic Systems

  • Meldo, Anna
  • Utkin, Lev
  • Lukashin, Aleksey
  • Muliukha, Vladimir
  • Zaborovsky, Vladimir
2019 Conference Paper, cited 0 times
Website
Most computer aided diagnostic (CAD) systems based on deep learning algorithms are similar in terms of their data processing stages. The main typical stages are training data acquisition, pre-processing, segmentation, and classification. Homogeneity of the training dataset structure and its completeness are very important for minimizing inaccuracies in the development of CAD systems. The main difficulties in medical training data acquisition concern heterogeneity and incompleteness. Another problem is the lack of a sufficiently large amount of data for training the deep neural networks that form the basis of CAD systems. In order to overcome these problems in lung cancer CAD systems, a new dataset acquisition methodology is proposed, using as an example the database called LIRA, which has been applied to training the intelligent lung cancer CAD system Dr. AIzimov. One of the important peculiarities of the LIRA dataset is the morphological confirmation of diseases. Another peculiarity is taking into account and including “atypical” cases from the point of view of radiographic features. The database development is carried out in interdisciplinary collaboration between radiologists and the data scientists developing the CAD system.

Comparison of Automatic Seed Generation Methods for Breast Tumor Detection Using Region Growing Technique

  • Melouah, Ahlem
2015 Book Section, cited 7 times
Website

More accurate and efficient segmentation of organs‐at‐risk in radiotherapy with Convolutional Neural Networks Cascades

  • Men, Kuo
  • Geng, Huaizhi
  • Cheng, Chingyun
  • Zhong, Haoyu
  • Huang, Mi
  • Fan, Yong
  • Plastaras, John P
  • Lin, Alexander
  • Xiao, Ying
Medical physics 2018 Journal Article, cited 0 times
Website

Segmentation of Pulmonary Nodules in Computed Tomography Using a Regression Neural Network Approach and its Application to the Lung Image Database Consortium and Image Database Resource Initiative Dataset

  • Messay, Temesguen
  • Hardie, Russell C
  • Tuinstra, Timothy R
2014 Dataset, cited 55 times
Website

Segmentation of Pulmonary Nodules in Computed Tomography Using a Regression Neural Network Approach and its Application to the Lung Image Database Consortium and Image Database Resource Initiative Dataset

  • Messay, Temesguen
  • Hardie, Russell C
  • Tuinstra, Timothy R
Medical Image Analysis 2015 Journal Article, cited 55 times
Website

Phase I trial of preoperative chemoradiation plus sorafenib for high-risk extremity soft tissue sarcomas with dynamic contrast-enhanced MRI correlates

  • Meyer, Janelle M
  • Perlewitz, Kelly S
  • Hayden, James B
  • Doung, Yee-Cheen
  • Hung, Arthur Y
  • Vetto, John T
  • Pommier, Rodney F
  • Mansoor, Atiya
  • Beckett, Brooke R
  • Tudorica, Alina
Clinical Cancer Research 2013 Journal Article, cited 41 times
Website

Detection of Lung Cancer Nodule on CT scan Images by using Region Growing Method

  • Mhetre, Rajani R
  • Sache, Rukhsana G
International Journal of Current Trends in Engineering & Research 2016 Journal Article, cited 0 times
Website

Transcription elongation factors represent in vivo cancer dependencies in glioblastoma

  • Miller, Tyler E
  • Liau, Brian B
  • Wallace, Lisa C
  • Morton, Andrew R
  • Xie, Qi
  • Dixit, Deobrat
  • Factor, Daniel C
  • Kim, Leo JY
  • Morrow, James J
  • Wu, Qiulian
Nature 2017 Journal Article, cited 41 times
Website

Volumetric brain tumour detection from MRI using visual saliency

  • Mitra, Somosmita
  • Banerjee, Subhashis
  • Hayashi, Yoichi
PLoS One 2017 Journal Article, cited 2 times
Website

Image Fusion Based Lung Nodule Detection Using Structural Similarity and Max Rule

  • Mohana, P
  • Venkatesan, P
International Journal of Advances in Signal and Image Sciences 2019 Journal Article, cited 0 times
Website

Automated AJCC staging of non-small cell lung cancer (NSCLC) using deep convolutional neural network (CNN) and recurrent neural network (RNN)

  • Moitra, Dipanjan
  • Mandal, Rakesh Kr
Health Inf Sci Syst 2019 Journal Article, cited 0 times
Website
Purpose: A large proportion of lung cancers are of the non-small cell lung cancer (NSCLC) type. Both treatment planning and patient prognosis depend greatly on factors like AJCC staging, which is an abstraction over TNM staging. Many significant efforts have so far been made towards automated staging of NSCLC, but a groundbreaking application of deep neural networks (DNNs) is yet to be observed in this domain of study. A DNN is capable of achieving a higher level of accuracy than traditional artificial neural networks (ANNs) because it uses the deeper layers of a convolutional neural network (CNN). The objective of the present study is to propose a simple yet fast CNN model combined with a recurrent neural network (RNN) for automated AJCC staging of NSCLC and to compare the outcome with a few standard machine learning algorithms as well as a few similar studies. Methods: The NSCLC Radiogenomics collection from The Cancer Imaging Archive (TCIA) was considered for the study. The tumor images were refined and filtered by resizing, enhancement, de-noising, etc. The initial image-processing phase was followed by texture-based image segmentation. The segmented images were fed into a hybrid feature detection and extraction model comprising two sequential phases: maximally stable extremal regions (MSER) and speeded-up robust features (SURF). After prolonged experimentation, the desired CNN-RNN model was derived and the extracted features were fed into the model. Results: The proposed CNN-RNN model outperformed almost all of the other machine learning algorithms under consideration. Its accuracy remained steadily higher than that reported in other contemporary studies. Conclusion: The proposed CNN-RNN model performed commendably during the study. Further studies may be carried out to refine the model and develop an improved auxiliary decision support system for oncologists and radiologists.
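
To make the CNN-RNN idea above concrete, here is a minimal PyTorch sketch in which a small CNN encodes each CT slice and an LSTM aggregates the per-slice features into a staging prediction. The layer sizes, the four-stage output, and the input shape are illustrative assumptions, not the authors' architecture.

    # Minimal CNN-RNN sketch (not the authors' architecture): a small CNN encodes
    # each CT slice, an LSTM aggregates the per-slice features, and a linear head
    # predicts an AJCC stage group. All shapes and hyperparameters are assumed.
    import torch
    import torch.nn as nn

    class SliceCNN(nn.Module):
        def __init__(self, feat_dim=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(32, feat_dim)

        def forward(self, x):                 # x: (batch, 1, H, W)
            return self.fc(self.conv(x).flatten(1))

    class CNNRNNStager(nn.Module):
        def __init__(self, feat_dim=64, hidden=64, n_stages=4):
            super().__init__()
            self.encoder = SliceCNN(feat_dim)
            self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_stages)

        def forward(self, volume):            # volume: (batch, n_slices, 1, H, W)
            b, s = volume.shape[:2]
            feats = self.encoder(volume.flatten(0, 1)).view(b, s, -1)
            _, (h_n, _) = self.rnn(feats)
            return self.head(h_n[-1])         # logits over assumed stage groups

    logits = CNNRNNStager()(torch.randn(2, 8, 1, 64, 64))
    print(logits.shape)                       # torch.Size([2, 4])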

Automated grading of non-small cell lung cancer by fuzzy rough nearest neighbour method

  • Moitra, Dipanjan
  • Mandal, Rakesh Kr
Network Modeling Analysis in Health Informatics and Bioinformatics 2019 Journal Article, cited 0 times
Lung cancer is one of the most lethal diseases across the world. Most lung cancers belong to the category of non-small cell lung cancer (NSCLC). Many studies have so far been carried out to avoid the hazards and bias of manual classification of NSCLC tumors. A few of these studies were aimed at automated nodal staging using standard machine learning algorithms. Many others tried to classify tumors as either benign or malignant. None of these studies considered the pathological grading of NSCLC. Automated grading may perfectly depict the dissimilarity between normal tissue and cancer-affected tissue. Such automation may save patients from undergoing a painful biopsy and may also help radiologists or oncologists in grading the tumor or lesion correctly. The present study aims at the automated grading of NSCLC tumors using the fuzzy rough nearest neighbour (FRNN) method. The dataset was extracted from The Cancer Imaging Archive and comprised PET/CT images of NSCLC tumors from 211 patients. The features from accelerated segment test (FAST) and histogram of oriented gradients (HOG) methods were used to detect and extract features from the segmented images. Gray level co-occurrence matrix (GLCM) features were also considered in the study. The features, along with the clinical grading information, were fed into four machine learning algorithms: FRNN, logistic regression, multi-layer perceptron, and support vector machine. The results were thoroughly compared in the light of various evaluation metrics. The confusion matrix was found to be balanced, and the outcome was more cost-effective for FRNN. Results were also compared with various other leading studies carried out earlier in this field. The proposed FRNN model performed satisfactorily during the experiment. Further exploration of FRNN may be very helpful for radiologists and oncologists in planning the treatment for NSCLC. More varieties of cancer may be considered in similar future studies.
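
As a rough illustration of the feature pipeline sketched in this abstract (FAST keypoint detection, HOG descriptors, and GLCM texture statistics), a minimal Python example follows. It is only a sketch under assumed settings: the synthetic input slice, the FAST threshold, and the HOG/GLCM parameters are placeholders, not values taken from the paper.

    # Illustrative sketch only (not the authors' code): FAST keypoints, HOG
    # descriptors and GLCM texture statistics on one 2D slice. Requires OpenCV
    # and scikit-image >= 0.19 (older releases name the GLCM functions
    # greycomatrix/greycoprops). All parameters are arbitrary placeholders.
    import cv2
    import numpy as np
    from skimage.feature import hog, graycomatrix, graycoprops

    # Stand-in for a segmented tumor slice (8-bit grayscale)
    slice_8bit = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)

    # FAST corner detection (feature detection step)
    fast = cv2.FastFeatureDetector_create(threshold=25)
    keypoints = fast.detect(slice_8bit, None)

    # HOG descriptor of the slice (feature extraction step)
    hog_vector = hog(slice_8bit, orientations=9, pixels_per_cell=(16, 16),
                     cells_per_block=(2, 2), feature_vector=True)

    # GLCM texture statistics (contrast, homogeneity, energy, correlation)
    glcm = graycomatrix(slice_8bit, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_features = [graycoprops(glcm, prop).mean()
                     for prop in ("contrast", "homogeneity", "energy", "correlation")]

    feature_vector = np.concatenate([[len(keypoints)], glcm_features, hog_vector])
    print(feature_vector.shape)  # one row of the matrix fed to the classifiers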

Informatics in Radiology: An Open-Source and Open-Access Cancer Biomedical Informatics Grid Annotation and Image Markup Template Builder

  • Mongkolwat, Pattanasak
  • Channin, David S
  • Kleper, Vladimir
  • Rubin, Daniel L
Radiographics 2012 Journal Article, cited 15 times
Website

CNN models discriminating between pulmonary micro-nodules and non-nodules from CT images

  • Monkam, Patrice
  • Qi, Shouliang
  • Xu, Mingjie
  • Han, Fangfang
  • Zhao, Xinzhuo
  • Qian, Wei
Biomedical engineering online 2018 Journal Article, cited 1 times
Website

Evaluation of TP53/PIK3CA mutations using texture and morphology analysis on breast MRI

  • Moon, W. K.
  • Chen, H. H.
  • Shin, S. U.
  • Han, W.
  • Chang, R. F.
Magn Reson Imaging 2019 Journal Article, cited 0 times
Website
PURPOSE: Somatic mutations in the TP53 and PIK3CA genes, the two most frequent genetic alterations in breast cancer, are associated with prognosis and therapeutic response. This study predicted the presence of TP53 and PIK3CA mutations in breast cancer by using texture and morphology analyses on breast MRI. MATERIALS AND METHODS: A total of 107 breast cancers (dataset A) from The Cancer Imaging Archive (TCIA), consisting of 40 cancers with TP53 mutation and 67 without, and 35 cancers with PIK3CA mutation and 72 without, together with 122 breast cancers (dataset B) from Seoul National University Hospital, containing 54 cancers with TP53 mutation and 68 without, were used in this study. First, the tumor area was segmented by a region growing method. Subsequently, gray level co-occurrence matrix (GLCM) texture features were extracted after ranklet transform, and a series of features including compactness, margin, and an ellipsoid fitting model were used to describe the morphological characteristics of tumors. Lastly, logistic regression was used to identify the presence of TP53 and PIK3CA mutations. Classification performance was evaluated by accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Taking into account the trade-off between sensitivity and specificity, overall performance was evaluated using receiver operating characteristic (ROC) curve analysis. RESULTS: The GLCM texture features based on ranklet transform were more capable of recognizing TP53 and PIK3CA mutations than the morphological features, and the difference was statistically significant for the TP53 mutation. The area under the ROC curve (AUC) for TP53 mutation reached 0.78 for dataset A and 0.81 for dataset B. For PIK3CA mutation, the AUC of the ranklet texture features was 0.70. CONCLUSION: Texture analysis of the segmented tumor on breast MRI based on ranklet transform shows potential for recognizing the presence of TP53 and PIK3CA mutations.
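
The classification and evaluation step described above (texture and morphology features fed to logistic regression, judged by ROC analysis) can be sketched with scikit-learn as follows; the feature matrix and labels are synthetic stand-ins, not the TCIA or hospital data.

    # Illustrative only: logistic regression on a precomputed feature matrix with
    # cross-validated ROC-AUC, mirroring the classification/evaluation step in
    # the abstract. X and y below are synthetic stand-ins, not study data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(107, 20))          # e.g. ranklet-GLCM + morphology features
    y = rng.integers(0, 2, size=107)        # 1 = mutation present, 0 = absent

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    print("AUC:", roc_auc_score(y, scores))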

Impact of image preprocessing methods on reproducibility of radiomic features in multimodal magnetic resonance imaging in glioblastoma

  • Moradmand, Hajar
  • Aghamiri, Seyed Mahmoud Reza
  • Ghaderi, Reza
J Appl Clin Med Phys 2019 Journal Article, cited 0 times
Website
To investigate the effect of image preprocessing, with respect to intensity inhomogeneity correction and noise filtering, on the robustness and reproducibility of radiomics features extracted from Glioblastoma (GBM) tumors in multimodal MR images (mMRI). In this study, 1461 radiomics features were extracted for each patient from GBM subregions (i.e., edema, necrosis, enhancement, and tumor) of mMRI (i.e., FLAIR, T1, T1C, and T2) volumes for five preprocessing combinations (116 880 radiomics features in total). The robustness and reproducibility of the radiomics features were assessed under four comparisons: (a) baseline versus modified bias field; (b) baseline versus modified bias field followed by noise filtering; (c) baseline versus modified noise; and (d) baseline versus modified noise followed by bias field correction. The concordance correlation coefficient (CCC), dynamic range (DR), and interclass correlation coefficient (ICC) were used as metrics. Shape features and, subsequently, local binary pattern (LBP) filtered images were highly stable and reproducible against bias field correction and noise filtering in all measurements. Across all MRI modalities, necrosis regions (NC: n ~449/1461, 30%) had the highest number of highly robust features, with CCC and DR >= 0.9, in comparison with edema (ED: n ~296/1461, 20%), enhanced (EN: n ~281/1461, 19%) and active-tumor (TM: n ~254/1461, 17%) regions. Furthermore, the percentage of highly reproducible features with ICC >= 0.9 was higher after bias field correction (23.2%) and after bias field correction followed by noise filtering (22.4%) than after noise smoothing or noise smoothing followed by bias field correction. These preliminary findings imply that preprocessing sequences can have a significant impact on the robustness and reproducibility of mMRI-based radiomics features, and that identification of generalizable and consistent preprocessing algorithms is a pivotal step before bringing radiomics biomarkers into the clinic for GBM patients.
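
One of the robustness metrics used here, Lin's concordance correlation coefficient, has the closed form CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2); a small NumPy helper (an illustrative sketch, not the authors' implementation) is shown below.

    # Lin's concordance correlation coefficient (CCC), the robustness metric the
    # abstract refers to. Illustrative helper, not the authors' code; the example
    # feature values are made up.
    import numpy as np

    def concordance_cc(x, y):
        x, y = np.asarray(x, float), np.asarray(y, float)
        cov = np.mean((x - x.mean()) * (y - y.mean()))
        return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    # Example: one radiomic feature measured before and after bias field correction
    baseline = np.array([1.00, 1.20, 0.90, 1.40, 1.10])
    corrected = np.array([1.05, 1.18, 0.92, 1.35, 1.12])
    print(concordance_cc(baseline, corrected))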

Deep Learning For Brain Tumor Segmentation

  • Moreno Lopez, Marc
2017 Thesis, cited 393 times
Website

Using Computer-extracted Image Phenotypes from Tumors on Breast MRI to Predict Stage

  • Morris, Elizabeth
  • Burnside, Elizabeth
  • Whitman, Gary
  • Zuley, Margarita
  • Bonaccio, Ermelinda
  • Ganott, Marie
  • Giger, Maryellen L.
2014 Dataset, cited 29 times
Website

Optimization Methods for Medical Image Super Resolution Reconstruction

  • Moustafa, Marwa
  • Ebied, Hala M
  • Helmy, Ashraf
  • Nazamy, Taymoor M
  • Tolba, Mohamed F
2016 Book Section, cited 0 times
Website

Forschungsanwendungen in der digitalen Radiologie [Research applications in digital radiology]

  • Müller, H
  • Hanbury, A
Der Radiologe 2016 Journal Article, cited 1 times
Website

Tumor metabolic features identified by FDG PET correlates with gene networks of immune cell microenvironment in head and neck cancer

  • Na, Kwon Joong
  • Choi, Hongyoon
Journal of Nuclear Medicine 2017 Journal Article, cited 1 times
Website

Tumor Metabolic Features Identified by 18F-FDG PET Correlate with Gene Networks of Immune Cell Microenvironment in Head and Neck Cancer

  • Na, Kwon Joong
  • Choi, Hongyoon
Journal of Nuclear Medicine 2018 Journal Article, cited 4 times
Website

Automated Brain Lesion Detection and Segmentation Using Magnetic Resonance Images

  • Nabizadeh, Nooshin
2015 Thesis, cited 10 times
Website

Brain tumors detection and segmentation in MR images: Gabor wavelet vs. statistical features

  • Nabizadeh, Nooshin
  • Kubat, Miroslav
Computers & Electrical Engineering 2015 Journal Article, cited 85 times
Website

Automatic tumor segmentation in single-spectral MRI using a texture-based and contour-based algorithm

  • Nabizadeh, Nooshin
  • Kubat, Miroslav
Expert Systems with Applications 2017 Journal Article, cited 8 times
Website

Advanced 3D printed model of middle cerebral artery aneurysms for neurosurgery simulation

  • Nagassa, Ruth G
  • McMenamin, Paul G
  • Adams, Justin W
  • Quayle, Michelle R
  • Rosenfeld, Jeffrey V
3D Print Med 2019 Journal Article, cited 0 times
Website
BACKGROUND: Neurosurgical residents are finding it more difficult to obtain experience as the primary operator in aneurysm surgery. The present study aimed to replicate patient-derived cranial anatomy, pathology and human tissue properties relevant to cerebral aneurysm intervention through 3D printing and 3D print-driven casting techniques. The final simulator was designed to provide accurate simulation of a human head with a middle cerebral artery (MCA) aneurysm. METHODS: This study utilized living human and cadaver-derived medical imaging data including CT angiography and MRI scans. Computer-aided design (CAD) models and pre-existing computational 3D models were also incorporated in the development of the simulator. The design was based on including anatomical components vital to the surgery of MCA aneurysms while focusing on reproducibility, adaptability and functionality of the simulator. Various methods of 3D printing were utilized for the direct development of anatomical replicas and moulds for casting components that optimized the bio-mimicry and mechanical properties of human tissues. Synthetic materials including various types of silicone and ballistics gelatin were cast in these moulds. A novel technique utilizing water-soluble wax and silicone was used to establish hollow patient-derived cerebrovascular models. RESULTS: A patient-derived 3D aneurysm model was constructed for a MCA aneurysm. Multiple cerebral aneurysm models, patient-derived and CAD, were replicated as hollow high-fidelity models. The final assembled simulator integrated six anatomical components relevant to the treatment of cerebral aneurysms of the Circle of Willis in the left cerebral hemisphere. These included models of the cerebral vasculature, cranial nerves, brain, meninges, skull and skin. The cerebral circulation was modeled through the patient-derived vasculature within the brain model. Linear and volumetric measurements of specific physical modular components were repeated, averaged and compared to the original 3D meshes generated from the medical imaging data. Calculation of the concordance correlation coefficient (rhoc: 90.2%-99.0%) and percentage difference (</=0.4%) confirmed the accuracy of the models. CONCLUSIONS: A multi-disciplinary approach involving 3D printing and casting techniques was used to successfully construct a multi-component cerebral aneurysm surgery simulator. Further study is planned to demonstrate the educational value of the proposed simulator for neurosurgery residents.

Quantitative and Qualitative Evaluation of Convolutional Neural Networks with a Deeper U-Net for Sparse-View Computed Tomography Reconstruction

  • Nakai, H.
  • Nishio, M.
  • Yamashita, R.
  • Ono, A.
  • Nakao, K. K.
  • Fujimoto, K.
  • Togashi, K.
Acad Radiol 2019 Journal Article, cited 0 times
Website
Rationale and Objectives: To evaluate the utility of a convolutional neural network (CNN) with an increased number of contracting and expanding paths of U-net for sparse-view CT reconstruction. Materials and Methods: This study used 60 anonymized chest CT cases from a public database called "The Cancer Imaging Archive". Eight thousand images from 40 cases were used for training. Eight hundred images and 80 images from another 20 cases were used for quantitative and qualitative evaluation, respectively. Sparse-view CT images subsampled by a factor of 20 were simulated, and two CNNs were trained to create denoised images from the sparse-view CT. A CNN based on U-net with residual learning with four contracting and expanding paths (the preceding CNN) was compared with another CNN with eight contracting and expanding paths (the proposed CNN), both quantitatively (peak signal-to-noise ratio, structural similarity index) and qualitatively (scores given by two radiologists for anatomical visibility, artifact and noise, and overall image quality), using the Wilcoxon signed-rank test. Nodule and emphysema appearance were also evaluated qualitatively. Results: The proposed CNN was significantly better than the preceding CNN both quantitatively and qualitatively (overall image quality interquartile range, 3.0-3.5 versus 1.0-1.0 for the preceding CNN; p < 0.001). However, only 2 of 22 cases used for emphysematous evaluation (two CNNs for each of 11 cases with emphysema) had an average score of >= 2 (on a 3-point scale). Conclusion: Increasing the number of contracting and expanding paths may be useful for sparse-view CT reconstruction with a CNN. However, the poor reproducibility of emphysema appearance should also be noted.
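
The two quantitative metrics used in this evaluation, peak signal-to-noise ratio and the structural similarity index, are available directly in scikit-image; the brief sketch below runs on synthetic arrays standing in for the reference and CNN-reconstructed slices, not the study's images.

    # Illustrative sketch of the PSNR/SSIM evaluation described in the abstract,
    # using scikit-image on synthetic data rather than the study's CT images.
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    reference = np.random.default_rng(0).random((512, 512))   # stand-in full-view slice
    denoised = reference + 0.01 * np.random.default_rng(1).standard_normal((512, 512))

    print("PSNR [dB]:", peak_signal_noise_ratio(reference, denoised, data_range=1.0))
    print("SSIM:", structural_similarity(reference, denoised, data_range=1.0))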

Prediction of malignant glioma grades using contrast-enhanced T1-weighted and T2-weighted magnetic resonance images based on a radiomic analysis

  • Nakamoto, Takahiro
  • Takahashi, Wataru
  • Haga, Akihiro
  • Takahashi, Satoshi
  • Kiryu, Shigeru
  • Nawa, Kanabu
  • Ohta, Takeshi
  • Ozaki, Sho
  • Nozawa, Yuki
  • Tanaka, Shota
  • Mukasa, Akitake
  • Nakagawa, Keiichi
Scientific Reports 2019 Journal Article, cited 0 times
Website
We conducted a feasibility study to predict malignant glioma grades via radiomic analysis using contrast-enhanced T1-weighted magnetic resonance images (CE-T1WIs) and T2-weighted magnetic resonance images (T2WIs). We proposed a framework and applied it to CE-T1WIs and T2WIs (with tumor region data) acquired preoperatively from 157 patients with malignant glioma (grade III: 55, grade IV: 102) as the primary dataset and 67 patients with malignant glioma (grade III: 22, grade IV: 45) as the validation dataset. Radiomic features such as size/shape, intensity, histogram, and texture features were extracted from the tumor regions on the CE-T1WIs and T2WIs. The Wilcoxon-Mann-Whitney (WMW) test and least absolute shrinkage and selection operator logistic regression (LASSO-LR) were employed to select the radiomic features. Various machine learning (ML) algorithms were used to construct prediction models for the malignant glioma grades using the selected radiomic features. Leave-one-out cross-validation (LOOCV) was implemented to evaluate the performance of the prediction models in the primary dataset. The selected radiomic features for all folds in the LOOCV of the primary dataset were used to perform an independent validation. As evaluation indices, accuracies, sensitivities, specificities, and values for the area under receiver operating characteristic curve (or simply the area under the curve (AUC)) for all prediction models were calculated. The mean AUC value for all prediction models constructed by the ML algorithms in the LOOCV of the primary dataset was 0.902 +/- 0.024 (95% CI (confidence interval), 0.873-0.932). In the independent validation, the mean AUC value for all prediction models was 0.747 +/- 0.034 (95% CI, 0.705-0.790). The results of this study suggest that the malignant glioma grades could be sufficiently and easily predicted by preparing the CE-T1WIs, T2WIs, and tumor delineations for each patient. Our proposed framework may be an effective tool for preoperatively grading malignant gliomas.
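
A compact scikit-learn sketch of the validation scheme outlined above, an L1-penalised (LASSO) logistic regression evaluated with leave-one-out cross-validation and AUC, is given below; the feature matrix, labels, and regularisation strength are placeholders rather than the study's radiomic data.

    # Illustrative sketch: L1-penalised (LASSO) logistic regression evaluated with
    # leave-one-out cross-validation and AUC, as outlined in the abstract.
    # Synthetic data; not the study's radiomic features or settings.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(157, 50))              # e.g. shape/intensity/texture features
    y = rng.integers(0, 2, size=157)            # 0 = grade III, 1 = grade IV

    model = make_pipeline(StandardScaler(),
                          LogisticRegression(penalty="l1", solver="liblinear", C=0.5))

    probs = np.empty(len(y))
    for train_idx, test_idx in LeaveOneOut().split(X):
        model.fit(X[train_idx], y[train_idx])
        probs[test_idx] = model.predict_proba(X[test_idx])[:, 1]

    print("LOOCV AUC:", roc_auc_score(y, probs))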

Automatic rectum limit detection by anatomical markers correlation

  • Namías, R
  • D’Amato, JP
  • Del Fresno, M
  • Vénere, M
Computerized Medical Imaging and Graphics 2014 Journal Article, cited 1 times
Website

Tumor image-derived texture features are associated with CD3 T-cell infiltration status in glioblastoma

  • Narang, Shivali
  • Kim, Donnie
  • Aithala, Sathvik
  • Heimberger, Amy B
  • Ahmed, Salmaan
  • Rao, Dinesh
  • Rao, Ganesh
  • Rao, Arvind
Oncotarget 2017 Journal Article, cited 1 times
Website

Performance analysis of a computer-aided detection system for lung nodules in CT at different slice thicknesses

  • Narayanan, B. N.
  • Hardie, R. C.
  • Kebede, T. M.
J Med Imaging (Bellingham) 2018 Journal Article, cited 2 times
Website
We study the performance of a computer-aided detection (CAD) system for lung nodules in computed tomography (CT) as a function of slice thickness. In addition, we propose and compare three different training methodologies for utilizing nonhomogeneous thickness training data (i.e., composed of cases with different slice thicknesses). These methods are (1) aggregate training using the entire suite of data at their native thickness, (2) homogeneous subset training that uses only the subset of training data that matches each testing case, and (3) resampling all training and testing cases to a common thickness. We believe this study has important implications for how CT is acquired, processed, and stored. We make use of 192 CT cases acquired at a thickness of 1.25 mm and 283 cases at 2.5 mm. These data are from the publicly available Lung Nodule Analysis 2016 dataset. In our study, CAD performance at 2.5 mm is comparable with that at 1.25 mm and is much better than at higher thicknesses. Also, resampling all training and testing cases to 2.5 mm provides the best performance among the three training methods compared in terms of accuracy, memory consumption, and computational time.
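
The third training strategy compared in this study, resampling every case to a common slice thickness, can be sketched with scipy as shown below; the volume and the 1.25 mm to 2.5 mm spacings are illustrative placeholders.

    # Illustrative sketch of resampling a CT volume along the slice axis to a
    # common thickness (the third training strategy in the abstract). The input
    # volume and spacings are synthetic placeholders.
    import numpy as np
    from scipy.ndimage import zoom

    volume = np.random.default_rng(0).random((180, 512, 512))   # (slices, rows, cols)
    native_thickness_mm = 1.25
    target_thickness_mm = 2.5

    factor = native_thickness_mm / target_thickness_mm           # 0.5: halve slice count
    resampled = zoom(volume, (factor, 1.0, 1.0), order=1)        # linear interpolation
    print(volume.shape, "->", resampled.shape)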

Image Processing and Classification Techniques for Early Detection of Lung Cancer for Preventive Health Care: A Survey

  • Naresh, Prashant
  • Shettar, Rajashree
Int. J. of Recent Trends in Engineering & Technology 2014 Journal Article, cited 6 times
Website

Reduced lung-cancer mortality with low-dose computed tomographic screening

  • National Lung Screening Trial Research Team
New England Journal of Medicine 2011 Journal Article, cited 4992 times
Website

The national lung screening trial: overview and study design

  • National Lung Screening Trial Research Team
Radiology 2011 Journal Article, cited 760 times
Website

Security of Multi-frame DICOM Images Using XOR Encryption Approach

  • Natsheh, QN
  • Li, B
  • Gale, AG
Procedia Computer Science 2016 Journal Article, cited 4 times
Website

Automatic Classification of Brain MRI Images Using SVM and Neural Network Classifiers

  • Natteshan, NVS
  • Jothi, J Angel Arul
2015 Book Section, cited 8 times
Website

Discrimination of Benign and Malignant Suspicious Breast Tumors Based on Semi-Quantitative DCE-MRI Parameters Employing Support Vector Machine

  • Navaei-Lavasani, Saeedeh
  • Fathi-Kazerooni, Anahita
  • Saligheh-Rad, Hamidreza
  • Gity, Masoumeh
Frontiers in Biomedical Technologies 2015 Journal Article, cited 4 times
Website

Big biomedical image processing hardware acceleration: A case study for K-means and image filtering

  • Neshatpour, Katayoun
  • Koohi, Arezou
  • Farahmand, Farnoud
  • Joshi, Rajiv
  • Rafatirad, Setareh
  • Sasan, Avesta
  • Homayoun, Houman
IEEE International Symposium on Circuits and Systems (ISCAS) 2016 Conference Proceedings, cited 7 times
Website

Multisite concordance of apparent diffusion coefficient measurements across the NCI Quantitative Imaging Network

  • Newitt, David C
  • Malyarenko, Dariya
  • Chenevert, Thomas L
  • Quarles, C Chad
  • Bell, Laura
  • Fedorov, Andriy
  • Fennessy, Fiona
  • Jacobs, Michael A
  • Solaiyappan, Meiyappan
  • Hectors, Stefanie
  • Taouli, B.
  • Muzi, M.
  • Kinahan, P. E.
  • Schmainda, K. M.
  • Prah, M. A.
  • Taber, E. N.
  • Kroenke, C.
  • Huang, W.
  • Arlinghaus, L.
  • Yankeelov, T. E.
  • Cao, Y.
  • Aryal, M.
  • Yen, Y.-F.
  • Kalpathy-Cramer, J.
  • Shukla-Dave, A.
  • Fung, M.
  • Liang, J.
  • Boss, M.
  • Hylton, N.
Journal of Medical Imaging 2017 Journal Article, cited 6 times
Website

Synergy of Sex Differences in Visceral Fat Measured with CT and Tumor Metabolism Helps Predict Overall Survival in Patients with Renal Cell Carcinoma

  • Nguyen, Gerard K
  • Mellnick, Vincent M
  • Yim, Aldrin Kay-Yuen
  • Salter, Amber
  • Ippolito, Joseph E
Radiology 2018 Journal Article, cited 1 times
Website

Pulmonary nodule classification with deep residual networks

  • Nibali, Aiden
  • He, Zhen
  • Wollersheim, Dennis
International journal of computer assisted radiology and surgery 2017 Journal Article, cited 19 times
Website
Lung cancer has the highest death rate among all cancers in the USA. In this work we focus on improving the ability of computer-aided diagnosis (CAD) systems to predict the malignancy of nodules from cropped CT images of lung nodules.

Addition of MR imaging features and genetic biomarkers strengthens glioblastoma survival prediction in TCGA patients

  • Nicolasjilwan, Manal
  • Hu, Ying
  • Yan, Chunhua
  • Meerzaman, Daoud
  • Holder, Chad A
  • Gutman, David
  • Jain, Rajan
  • Colen, Rivka
  • Rubin, Daniel L
  • Zinn, Pascal O
Journal of Neuroradiology 2014 Journal Article, cited 49 times
Website

Efficient Colorization of Medical Imaging based on Colour Transfer Method

  • Nida, Nudrat
  • Khan, Muhammad Usman Ghani
Proceedings of the Pakistan Academy of Sciences: B. Life and Environmental Sciences 2016 Journal Article, cited 0 times
Website

A Framework for Automatic Colorization of Medical Imaging

  • Nida, Nudrat
  • Sharif, Muhammad
  • Khan, Muhammad Usman Ghani
  • Yasmin, Mussarat
  • Fernandes, Steven Lawrence
IIOABJ 2016 Journal Article, cited 3 times
Website

Homological radiomics analysis for prognostic prediction in lung cancer patients

  • Ninomiya, Kenta
  • Arimura, Hidetaka
Physica Medica 2020 Journal Article, cited 0 times
Website

Computer-aided Diagnosis for Lung Cancer: Usefulness of Nodule Heterogeneity

  • Nishio, Mizuho
  • Nagashima, Chihiro
Academic radiology 2017 Journal Article, cited 12 times
Website

Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization

  • Nishio, Mizuho
  • Nishizawa, Mitsuo
  • Sugiyama, Osamu
  • Kojima, Ryosuke
  • Yakami, Masahiro
  • Kuroda, Tomohiro
  • Togashi, Kaori
PLoS One 2018 Journal Article, cited 3 times
Website

Segmentation of lung from CT using various active contour models

  • Nithila, Ezhil E
  • Kumar, SS
Biomedical Signal Processing and Control 2018 Journal Article, cited 0 times
Website

Image descriptors in radiology images: a systematic review

  • Nogueira, Mariana A
  • Abreu, Pedro Henriques
  • Martins, Pedro
  • Machado, Penousal
  • Duarte, Hugo
  • Santos, João
Artificial Intelligence Review 2016 Journal Article, cited 8 times
Website

Modified fast adaptive scatter kernel superposition (mfASKS) correction and its dosimetric impact on CBCT‐based proton therapy dose calculation

  • Nomura, Yusuke
  • Xu, Qiong
  • Peng, Hao
  • Takao, Seishin
  • Shimizu, Shinichi
  • Xing, Lei
  • Shirato, Hiroki
Medical physics 2020 Journal Article, cited 0 times
Website

Projection-domain scatter correction for cone beam computed tomography using a residual convolutional neural network

  • Nomura, Yusuke
  • Xu, Qiong
  • Shirato, Hiroki
  • Shimizu, Shinichi
  • Xing, Lei
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: Scatter is a major factor degrading the image quality of cone beam computed tomography (CBCT). Conventional scatter correction strategies require handcrafted analytical models with ad hoc assumptions, which often leads to less accurate scatter removal. This study aims to develop an effective scatter correction method using a residual convolutional neural network (CNN). METHODS: A U-net based 25-layer CNN was constructed for CBCT scatter correction. The establishment of the model consists of three steps: model training, validation, and testing. For model training, a total of 1800 pairs of x-ray projection and the corresponding scatter-only distribution in nonanthropomorphic phantoms taken in full-fan scan were generated using Monte Carlo simulation of a CBCT scanner installed with a proton therapy system. An end-to-end CNN training was implemented with two major loss functions for 100 epochs with a mini-batch size of 10. Image rotations and flips were randomly applied to augment the training datasets during training. For validation, 200 projections of a digital head phantom were collected. The proposed CNN-based method was compared to a conventional projection-domain scatter correction method named fast adaptive scatter kernel superposition (fASKS) method using 360 projections of an anthropomorphic head phantom. Two different loss functions were applied for the same CNN to evaluate the impact of loss functions on the final results. Furthermore, the CNN model trained with full-fan projections was fine-tuned for scatter correction in half-fan scan by using transfer learning with additional 360 half-fan projection pairs of nonanthropomorphic phantoms. The tuned-CNN model for half-fan scan was compared with the fASKS method as well as the CNN-based method without the fine-tuning using additional lung phantom projections. RESULTS: The CNN-based method provides projections with significantly reduced scatter and CBCT images with more accurate Hounsfield Units (HUs) than that of the fASKS-based method. Root mean squared error of the CNN-corrected projections was improved to 0.0862 compared to 0.278 for uncorrected projections or 0.117 for the fASKS-corrected projections. The CNN-corrected reconstruction provided better HU quantification, especially in regions near the air or bone interfaces. All four image quality measures, which include mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM), indicated that the CNN-corrected images were significantly better than that of the fASKS-corrected images. Moreover, the proposed transfer learning technique made it possible for the CNN model trained with full-fan projections to be applicable to remove scatters in half-fan projections after fine-tuning with only a small number of additional half-fan training datasets. SSIM value of the tuned-CNN-corrected images was 0.9993 compared to 0.9984 for the non-tuned-CNN-corrected images or 0.9990 for the fASKS-corrected images. Finally, the CNN-based method is computationally efficient - the correction time for the 360 projections only took less than 5 s in the reported experiments on a PC (4.20 GHz Intel Core-i7 CPU) with a single NVIDIA GTX 1070 GPU. CONCLUSIONS: The proposed deep learning-based method provides an effective tool for CBCT scatter correction and holds significant value for quantitative imaging and image-guided radiation therapy.
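
A heavily simplified sketch of projection-domain scatter estimation with a small convolutional network is shown below; it is not the 25-layer U-net with residual learning described in the abstract, and the shapes, depth, and synthetic Monte-Carlo targets are assumptions made only for illustration.

    # Heavily simplified sketch of projection-domain scatter estimation with a
    # small plain CNN; not the 25-layer U-net described in the abstract.
    # Shapes, depth and the training data are illustrative placeholders.
    import torch
    import torch.nn as nn

    class ScatterNet(nn.Module):
        def __init__(self, ch=32, depth=5):
            super().__init__()
            layers = [nn.Conv2d(1, ch, 3, padding=1), nn.ReLU()]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU()]
            layers += [nn.Conv2d(ch, 1, 3, padding=1)]
            self.net = nn.Sequential(*layers)

        def forward(self, projection):
            # Predict the scatter-only distribution for an input projection.
            return self.net(projection)

    model = ScatterNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # One synthetic training step: projections and (stand-in) Monte-Carlo scatter targets
    projections = torch.rand(4, 1, 256, 256)
    scatter_targets = torch.rand(4, 1, 256, 256)

    optimizer.zero_grad()
    loss = loss_fn(model(projections), scatter_targets)
    loss.backward()
    optimizer.step()

    # Scatter correction = measured projection minus the predicted scatter
    corrected = projections - model(projections).detach()
    print(corrected.shape)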

Medical Image Retrieval Using Vector Quantization and Fuzzy S-tree

  • Nowaková, Jana
  • Prílepok, Michal
  • Snášel, Václav
Journal of Medical Systems 2017 Journal Article, cited 33 times
Website

Autocorrection of lung boundary on 3D CT lung cancer images

  • Nurfauzi, R.
  • Nugroho, H. A.
  • Ardiyanto, I.
  • Frannita, E. L.
Journal of King Saud University - Computer and Information Sciences 2019 Journal Article, cited 0 times
Website
Lung cancer in men has the highest mortality rate among all types of cancer. Juxta-pleural and juxta-vascular nodules are the most common nodules located on the lung surface. A computer-aided detection (CADe) system is effective for assisting radiologists in diagnosing lung nodules. However, the lung segmentation step requires sophisticated methods when juxta-pleural and juxta-vascular nodules are present. Fast computational time and low error in covering nodule areas are the aims of this study. The proposed method consists of five stages, namely ground truth (GT) extraction, data preparation, tracheal extraction, separation of lung fusion, and lung border correction. The data consist of 57 3D CT lung cancer images taken from a selected subset of the LIDC-IDRI dataset. The nodule areas are defined as the outer areas labeled by four radiologists. The proposed method achieves the fastest computational time of 0.32 s per slice, 60 times faster than conventional adaptive border marching (ABM). Moreover, it produces a nodule under-segmentation value as low as 14.6%. This indicates that the proposed method has the potential to be embedded in a lung CADe system to cover juxta-pleural and juxta-vascular nodule areas in lung segmentation.

Memory-efficient 3D connected component labeling with parallel computing

  • Ohira, Norihiro
Signal, Image and Video Processing 2017 Journal Article, cited 0 times
Website

Development of Clinically-Informed 3D Tumor Models for Microwave Imaging Applications

  • Oliveira, Barbara
  • O'Halloran, Martin
  • Conceicao, Raquel
  • Glavin, Martin
  • Jones, Edward
IEEE Antennas and Wireless Propagation Letters 2016 Journal Article, cited 8 times
Website

Uma Proposta Para Utilização De Workflows Científicos Para A Definição De Pipelines Para A Recuperação De Imagens Médicas Por Conteúdo Em Um Ambiente Distribuído [A proposal for using scientific workflows to define pipelines for content-based medical image retrieval in a distributed environment]

  • Oliveira, Luis Fernando Milano
2016 Thesis, cited 1 times
Website

Image segmentation on GPGPUs: a cellular automata-based approach

  • Olmedo, Irving
  • Perez, Yessika Guerra
  • Johnson, James F
  • Raut, Lakshman
  • Hoe, David HK
2013 Conference Proceedings, cited 0 times
Website

A Neuro-Fuzzy Based System for the Classification of Cells as Cancerous or Non-Cancerous

  • Omotosho, Adebayo
  • Oluwatobi, Asani Emmanuel
  • Oluwaseun, Ogundokun Roseline
  • Chukwuka, Ananti Emmanuel
  • Adekanmi, Adegun
International Journal of Medical Research & Health Sciences 2018 Journal Article, cited 0 times
Website

Application of Sparse-Coding Super-Resolution to 16-Bit DICOM Images for Improving the Image Resolution in MRI

  • Ota, Junko
  • Umehara, Kensuke
  • Ishimaru, Naoki
  • Ishida, Takayuki
Open Journal of Medical Imaging 2017 Journal Article, cited 1 times
Website

Automatic localization of target vertebrae in spine surgery using fast CT-to-fluoroscopy (3D-2D) image registration

  • Otake, Y
  • Schafer, S
  • Stayman, JW
  • Zbijewski, W
  • Kleinszig, G
  • Graumann, R
  • Khanna, AJ
  • Siewerdsen, JH
2012 Conference Proceedings, cited 8 times
Website

Medical image retrieval using hybrid wavelet network classifier

  • Othman, Sufri
  • Jemai, Olfa
  • Zaied, Mourad
  • Ben Amar, Chokri
2014 Conference Proceedings, cited 3 times
Website

Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence

  • Owais, Muhammad
  • Arsalan, Muhammad
  • Choi, Jiho
  • Park, Kang Ryoung
J Clin Med 2019 Journal Article, cited 0 times
Website
Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. However, this problem can be resolved by exploring similar cases in a previous medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. A medical doctor now usually refers to various types of imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound of various organs, for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for a CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance on massive collections of multimodal databases. Although there are a few previous studies on the use of deep features for classification, the number of classes is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various types of imaging modalities using an artificial intelligence technique, namely an enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1-score of our method are 81.51% and 82.42%, respectively, which are higher than those of the previous CBMIR method (accuracy of 69.71% and F1-score of 69.63%).
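
The retrieval side of such a CBMIR system can be sketched as nearest-neighbour search over deep features from a ResNet backbone; the plain torchvision ResNet-18, the preprocessing, and the synthetic images below are assumptions made for illustration and are not the paper's enhanced ResNet.

    # Illustrative CBMIR sketch: cosine-similarity retrieval over features from a
    # torchvision ResNet-18 backbone. This is not the paper's "enhanced ResNet";
    # the backbone, preprocessing and images are placeholder assumptions.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    # weights=None keeps the example self-contained; in practice pretrained
    # weights (e.g. models.ResNet18_Weights.DEFAULT) would normally be loaded.
    backbone = models.resnet18(weights=None)
    backbone.fc = torch.nn.Identity()        # keep the 512-d pooled feature
    backbone.eval()

    @torch.no_grad()
    def embed(batch):                        # batch: (N, 3, 224, 224), normalised images
        return F.normalize(backbone(batch), dim=1)

    database = embed(torch.rand(100, 3, 224, 224))   # stand-in archive of modality images
    query = embed(torch.rand(1, 3, 224, 224))

    similarity = query @ database.T                  # cosine similarity (unit vectors)
    top5 = similarity.topk(5).indices.squeeze(0)
    print("Most similar archive items:", top5.tolist())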

Brain tumor detection based on Convolutional Neural Network with neutrosophic expert maximum fuzzy sure entropy

  • Özyurt, Fatih
  • Sert, Eser
  • Avci, Engin
  • Dogantekin, Esin
Measurement 2019 Journal Article, cited 0 times