Early prediction of neoadjuvant chemotherapy response by exploiting a transfer learning approach on breast DCE-MRIs

2021
Vol 11 (1)
Author(s): Maria Colomba Comes, Annarita Fanizzi, Samantha Bove, Vittorio Didonna, Sergio Diotaiuti, ...

Abstract: Dynamic contrast-enhanced MR imaging plays a crucial role in evaluating the effectiveness of neoadjuvant chemotherapy (NAC), even from its early stages, through the prediction of the final pathological complete response (pCR). In this study, we proposed a transfer learning approach to predict whether a patient achieved pCR or did not (non-pCR) by exploiting, separately or in combination, pre-treatment and early-treatment exams from the I-SPY1 TRIAL public database. First, low-level features, i.e., features related to the local structure of the image, were automatically extracted by a pre-trained convolutional neural network (CNN), thereby avoiding manual feature extraction. Next, an optimal set of the most stable features was selected and used to design an SVM classifier. A first subset of patients, called the fine-tuning dataset (30 pCR; 78 non-pCR), was used to perform the optimal choice of features. A second subset, not involved in the feature selection process, was employed as an independent test set (7 pCR; 19 non-pCR) to validate the model. By combining the optimal features extracted from both pre-treatment and early-treatment exams with clinical features, i.e., ER, PgR, HER2 and molecular subtype, accuracies of 91.4% and 92.3% and AUC values of 0.93 and 0.90 were obtained on the fine-tuning dataset and the independent test set, respectively. Overall, the low-level CNN features play an important role in the early evaluation of NAC efficacy by predicting pCR. The proposed model represents a first effort towards the development of a clinical support tool for the early prediction of pCR to NAC.
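The general recipe described above (a pre-trained CNN used as a fixed low-level feature extractor, followed by feature selection and an SVM classifier) can be sketched as follows. This is a minimal illustration, not the authors' code: the ResNet-18 backbone, the choice of early layers, the SelectKBest stand-in for the stability-based selection, and the placeholder data are all assumptions.

import torch
import torchvision.models as models
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
# Keep only the early convolutional blocks to capture local, low-level image structure.
low_level = torch.nn.Sequential(*list(backbone.children())[:5],
                                torch.nn.AdaptiveAvgPool2d(1))
low_level.eval()

def extract_features(batch):
    # batch: (N, 3, 224, 224) tensor of MRI slices replicated to 3 channels
    with torch.no_grad():
        return low_level(batch).flatten(1).numpy()

# Placeholder data standing in for pre-/early-treatment exams and pCR labels.
X_train = extract_features(torch.randn(108, 3, 224, 224))
y_train = torch.randint(0, 2, (108,)).numpy()

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=50),   # stand-in for the stability-based selection
                    SVC(kernel="linear", probability=True))
clf.fit(X_train, y_train)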

Cancers
2021
Vol 13 (10)
pp. 2298
Author(s): Maria Colomba Comes, Daniele La Forgia, Vittorio Didonna, Annarita Fanizzi, Francesco Giotta, ...

Cancer treatment planning benefits from an accurate early prediction of treatment efficacy. The goal of this study is to give an early prediction of three-year Breast Cancer Recurrence (BCR) for patients who underwent neoadjuvant chemotherapy. We addressed the task from a new perspective based on transfer learning applied to pre-treatment and early-treatment DCE-MRI scans. Firstly, low-level features were automatically extracted from MR images using a pre-trained Convolutional Neural Network (CNN) architecture, without human intervention. Subsequently, the prediction model was built with an optimal subset of CNN features and evaluated on two sets of patients from the I-SPY1 TRIAL and BREAST-MRI-NACT-Pilot public databases: a fine-tuning dataset (70 non-recurrent and 26 recurrent cases), which was primarily used to find the optimal subset of CNN features, and an independent test set (45 non-recurrent and 17 recurrent cases), whose patients had not been involved in the feature selection process. The best results were achieved when the optimal CNN features were augmented by four clinical variables (age, ER, PgR, HER2+), reaching an accuracy of 91.7% and 85.2%, a sensitivity of 80.8% and 84.6%, a specificity of 95.7% and 85.4%, and an AUC value of 0.93 and 0.83 on the fine-tuning dataset and the independent test set, respectively. Overall, the CNN features extracted from pre-treatment and early-treatment exams proved to be strong predictors of BCR.
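The feature-level fusion step, in which the optimal CNN features are augmented with the four clinical variables before classification, might look like the sketch below; the classifier choice, array names, and data are illustrative placeholders, not the study's pipeline.

import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

cnn_feats = np.random.rand(96, 50)      # optimal CNN feature subset (placeholder)
clinical = np.random.rand(96, 4)        # age, ER, PgR, HER2+ (placeholder)
y = np.random.randint(0, 2, 96)         # 3-year recurrence labels (placeholder)

X = np.hstack([cnn_feats, clinical])    # simple feature-level fusion
model = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
model.fit(X, y)
print("training AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))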


2021
Vol 11 (1)
Author(s): Simon Tam, Mounir Boukadoum, Alexandre Campeau-Lecours, Benoit Gosselin

Abstract: Myoelectric hand prostheses offer a way for upper-limb amputees to recover gesture and prehensile abilities, easing rehabilitation and daily-life activities. However, studies with prosthesis users found that a lack of intuitiveness and ease of use in the human-machine control interface is among the main factors driving the low user acceptance of these devices. This paper proposes a highly intuitive, responsive, and reliable real-time myoelectric hand prosthesis control strategy, with an emphasis on the demonstration and reporting of real-time evaluation metrics. The presented solution leverages surface high-density electromyography (HD-EMG) and a convolutional neural network (CNN) to adapt itself to each user's specific voluntary muscle contraction patterns. Furthermore, a transfer learning approach is presented to drastically reduce the training time and allow for easy installation and calibration. The CNN-based gesture recognition system was evaluated in real time with a group of 12 able-bodied users. A real-time test over 6 classes/grip modes resulted in mean and median positive predictive values (PPV) of 93.43% and 100%, respectively. Each gesture state is instantly accessible from any other state, with no mode switching required, for increased responsiveness and natural, seamless control. The system outputs a correct prediction with less than 116 ms of latency. A PPV of 100% was attained in many trials and is realistically achievable on a consistent basis with user practice and/or a thresholded majority-vote inference. Using transfer learning, these results are achievable after a sensor installation, data recording, and network training/fine-tuning routine taking less than 10 min to complete, an 89.4% reduction in setup time compared with the traditional, non-transfer-learning approach.
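A rough sketch of the per-user calibration idea described above: a gesture CNN trained on previous users is reused, its convolutional base is frozen, and only the classification head is fine-tuned on a short recording from the new user, with a simple majority vote available over a window of predictions at inference time. The layer sizes, 6-class head, and file name are assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(          # convolutional base shared across users
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16, n_classes))

    def forward(self, x):                       # x: (N, 1, H, W) HD-EMG activation maps
        return self.head(self.features(x))

model = GestureCNN()
# model.load_state_dict(torch.load("pretrained_users.pt"))  # weights learned on prior users (hypothetical file)
for p in model.features.parameters():           # freeze the transferred base
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)  # fine-tune only the head

def majority_vote(logits_window):
    # Majority vote over a short window of per-frame predictions.
    preds = logits_window.argmax(dim=1)
    return torch.mode(preds).values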


Healthcare
2020
Vol 8 (3)
pp. 272
Author(s): Khajamoinuddin Syed, William Sleeman, Michael Hagan, Jatinder Palta, Rishabh Kapoor, ...

The Radiotherapy Incident Reporting and Analysis System (RIRAS) receives incident reports from Radiation Oncology facilities across the US Veterans Health Affairs (VHA) enterprise and Virginia Commonwealth University (VCU). In this work, we propose a computational pipeline for the analysis of radiation oncology incident reports. Our pipeline uses machine learning (ML) and natural language processing (NLP) based methods to predict the severity of the incidents reported in the RIRAS platform from the textual description of the reported incidents. Incidents in RIRAS are reviewed by a radiation oncology subject matter expert (SME), who initially triages some incidents based on the salient elements in the incident report. To automate the triage process, we used data from the VHA treatment centers and the VCU radiation oncology department. We used NLP combined with traditional ML algorithms, including a support vector machine (SVM) with a linear kernel, and compared this against a transfer learning approach based on the universal language model fine-tuning (ULMFiT) algorithm. In RIRAS, severities are divided into four categories: A, B, C, and D, with A being the most severe and D the least. In this work, we built models to predict High (A & B) vs. Low (C & D) severity instead of all four categories. Models were evaluated with macro-averaged precision, recall, and F1-score. The traditional ML approach (linear SVM) did well on the VHA dataset with a 0.78 F1-score but performed poorly on the VCU dataset with a 0.50 F1-score. The transfer learning approach did well on both datasets, with a 0.81 F1-score on the VHA dataset and a 0.68 F1-score on the VCU dataset. Overall, our methods show promise in automating the triage and severity determination process from radiotherapy incident reports.
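A minimal sketch of the traditional-ML baseline described above, i.e., TF-IDF text features with a linear SVM scored by macro-averaged F1. The example reports and preprocessing choices are placeholders, not RIRAS data or the authors' exact configuration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder incident descriptions; 1 = High severity (A/B), 0 = Low severity (C/D).
reports = ["wrong isocenter shift detected before treatment delivery",
           "minor documentation error with no dosimetric impact"]
labels = [1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1), LinearSVC())
clf.fit(reports, labels)
print("macro F1:", f1_score(labels, clf.predict(reports), average="macro"))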


2021
Author(s): Geoffrey F. Schau, Hassan Ghani, Erik A. Burlingame, Guillaume Thibault, Joe W. Gray, ...

Abstract: Accurate diagnosis of metastatic cancer is essential for prescribing optimal control strategies to halt the further spread of metastasizing disease. While pathological inspection aided by immunohistochemistry staining provides a valuable gold standard for clinical diagnostics, deep learning methods have emerged as powerful tools for identifying features of whole-slide histology relevant to a tumor's metastatic origin. Although deep learning models require significant training data to learn effectively, transfer learning paradigms circumvent limited training data by first training a model on related data prior to fine-tuning on smaller datasets of interest. In this work we propose a transfer learning approach that trains a convolutional neural network to infer the metastatic origin of tumor tissue from whole-slide images of hematoxylin and eosin (H&E) stained tissue sections, and we illustrate the advantages of pre-training the network on whole-slide images of primary tumor morphology. We further characterize the statistical dissimilarity between primary and metastatic tumors of various indications on patch-level images to highlight limitations of our indication-specific transfer learning approach. Using a primary-to-metastatic transfer learning approach, we achieved a mean class-specific area under the receiver operating characteristic curve (AUROC) of 0.779, which outperformed comparable models trained only on images of primary tumors (mean AUROC of 0.691) or only on images of metastatic tumors (mean AUROC of 0.675), supporting the use of large-scale primary tumor imaging data in developing computer vision models to characterize the metastatic origin of tumor lesions.
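The primary-to-metastatic transfer strategy can be sketched as a two-stage fine-tuning recipe: train a patch classifier on the large primary-tumor dataset first, then fine-tune on the smaller metastatic dataset. The ResNet-34 backbone, the number of candidate origins, and the omitted data loaders are assumptions.

import torch
import torchvision.models as models

n_origins = 5                                   # number of candidate tissue origins (placeholder)
net = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
net.fc = torch.nn.Linear(net.fc.in_features, n_origins)

# Stage 1: train on primary-tumor patches (large dataset), updating the full network.
opt = torch.optim.SGD(net.parameters(), lr=1e-2, momentum=0.9)
# ... training loop over primary_loader (hypothetical DataLoader) ...

# Stage 2: fine-tune on metastatic patches (small dataset) with a lower learning rate.
opt = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
# ... training loop over metastatic_loader (hypothetical DataLoader) ...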


2018
Vol 2018
pp. 1-16
Author(s): Lamyaa Sadouk, Taoufiq Gadi, El Hassan Essoufi

Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized by persistent difficulties, including repetitive patterns of behavior known as stereotypical motor movements (SMMs). So far, several techniques have been implemented to track and identify SMMs. In this context, we propose a deep learning approach for SMM recognition, namely convolutional neural networks (CNNs) in the time and frequency domains. To address intra-subject SMM variability, we propose a robust CNN model for SMM detection within subjects, whose parameters are set according to a proper analysis of SMM signals, thereby outperforming state-of-the-art SMM classification works. To address inter-subject variability, we propose a global, fast, and lightweight framework for SMM detection across subjects, which combines a knowledge transfer technique with an SVM classifier, thereby resolving the real-life clinical issue of the lack of labelled SMM samples for each test subject. We further show that applying transfer learning across domains, instead of transfer learning within the same domain, also generalizes to the SMM target domain, thus alleviating the problem of the scarcity of labelled SMM samples in general.
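A sketch of the cross-subject framework outlined above: a CNN trained on source subjects serves as a fixed feature extractor, and a lightweight SVM is fit on the few labelled samples of the test subject. The 1-D architecture, window length, and placeholder data are assumptions, not the paper's exact setup.

import torch
import torch.nn as nn
from sklearn.svm import SVC

class AccelCNN(nn.Module):
    # Simple 1-D CNN over tri-axial accelerometer windows.
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(3, 16, 5), nn.ReLU(),
                                  nn.Conv1d(16, 32, 5), nn.ReLU(),
                                  nn.AdaptiveAvgPool1d(1), nn.Flatten())

    def forward(self, x):                       # x: (N, 3, window_len)
        return self.conv(x)

extractor = AccelCNN().eval()                   # weights assumed transferred from source subjects
with torch.no_grad():
    feats = extractor(torch.randn(40, 3, 128)).numpy()   # placeholder windows
labels = [0, 1] * 20                            # 1 = SMM episode, 0 = other movement (placeholder)
svm = SVC(kernel="rbf").fit(feats, labels)      # lightweight per-subject classifier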


Author(s): Zhengxu Yu, Zhongming Jin, Long Wei, Jishun Guo, Jianqiang Huang, ...

Model fine-tuning is a widely used transfer learning approach in person Re-identification (ReID) applications, in which a pre-trained feature extraction model is fine-tuned on the target scenario instead of training a model from scratch. It is challenging due to the significant variations inside the target scenario, e.g., different camera viewpoints, illumination changes, and occlusion. These variations result in a gap between the distribution of each mini-batch and the distribution of the whole dataset when using mini-batch training. In this paper, we study model fine-tuning from the perspective of aggregating and utilizing the global information of the dataset during mini-batch training. Specifically, we introduce a novel network structure called the Batch-related Convolutional Cell (BConv-Cell), which progressively collects the global information of the dataset into a latent state and uses this latent state to rectify the extracted features. Based on BConv-Cells, we further propose the Progressive Transfer Learning (PTL) method to facilitate the model fine-tuning process by jointly training the BConv-Cells and the pre-trained ReID model. Empirical experiments show that our proposal greatly improves the performance of the ReID model on the MSMT17, Market-1501, CUHK03, and DukeMTMC-reID datasets. The code will be released at https://github.com/ZJULearning/PTL.
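A loose, simplified sketch of the idea behind a batch-related cell: a latent state carried across mini-batches accumulates global information about the dataset and is used to rectify the features of the current mini-batch. This illustrates the concept only and is not the paper's BConv-Cell; the gating scheme and update rule are assumptions.

import torch
import torch.nn as nn

class BatchStateCell(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(2 * channels, channels, 1)
        self.register_buffer("state", torch.zeros(1, channels, 1, 1))

    def forward(self, feat):                    # feat: (N, C, H, W) mini-batch features
        batch_summary = feat.mean(dim=(0, 2, 3), keepdim=True)
        # Progressively accumulate global information across mini-batches.
        self.state = 0.9 * self.state + 0.1 * batch_summary.detach()
        mix = torch.cat([feat, self.state.expand_as(feat)], dim=1)
        return feat * torch.sigmoid(self.gate(mix))   # rectified feature map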


Sensors
2019
Vol 19 (22)
pp. 4850
Author(s): Carlos S. Pereira, Raul Morais, Manuel J. C. S. Reis

Frequently, vineyards in the Douro Region present multiple grape varieties per parcel and even per row. An automatic algorithm for grape variety identification was proposed as an integrated software component that can be applied, for example, in a robotic harvesting system. However, some issues and constraints in its development were highlighted, namely images captured in a natural environment, a low volume of images, high similarity among images of different grape varieties, leaf senescence, and significant changes in grapevine leaf and bunch images across harvest seasons, mainly due to adverse climatic conditions, diseases, and the presence of pesticides. In this paper, the performance of transfer learning and fine-tuning techniques based on the AlexNet architecture was evaluated when applied to the identification of grape varieties. Two natural vineyard image datasets were captured in different geographical locations and harvest seasons. To generate different datasets for training and classification, several image processing methods, including a proposed four-corners-in-one image warping algorithm, were used. The AlexNet-based transfer learning scheme, trained on the image dataset pre-processed with the four-corners-in-one method, achieved a test accuracy of 77.30%. Applying this classifier model, an accuracy of 89.75% was reached on the popular Flavia leaf dataset. The results obtained by the proposed approach are promising and encouraging in helping Douro wine growers with the automatic task of identifying grape varieties.
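AlexNet-based transfer learning of the kind evaluated here typically replaces the final layer with a new classifier for the target classes and freezes, or only lightly fine-tunes, the pretrained convolutional base. The sketch below follows that general recipe; the number of grape varieties is a placeholder assumption.

import torch
import torchvision.models as models

n_varieties = 6                                  # placeholder class count
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier[6] = torch.nn.Linear(4096, n_varieties)   # new output layer for grape varieties

# Fine-tune only the classifier; the convolutional features are transferred as-is.
for p in alexnet.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, alexnet.parameters()), lr=1e-3, momentum=0.9)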

