Knee Implant Identification by Fine-Tuning Deep Learning Models

Author(s):  
Sukkrit Sharma ◽  
Vineet Batta ◽  
Malathy Chidambaranathan ◽  
Prabhakaran Mathialagan ◽  
Gayathri Mani ◽  
...  
2021 ◽  
Author(s):  
Noor Ahmad ◽  
Muhammad Aminu ◽  
Mohd Halim Mohd Noor

Deep learning approaches have attracted considerable attention for the automatic detection of Covid-19, and transfer learning is the most common approach. However, the majority of pre-trained models are trained on color images, which can cause inefficiencies when fine-tuning them on Covid-19 images, which are often grayscale. To address this issue, we propose a deep learning architecture called CovidNet, which requires a relatively small number of parameters. CovidNet accepts grayscale images as input and is suitable for training with a limited dataset. Experimental results show that CovidNet outperforms other state-of-the-art deep learning models for Covid-19 detection.
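The color-vs-grayscale mismatch described above can also be handled by collapsing a color-pretrained first convolution layer to a single input channel. A minimal NumPy sketch of the idea (the function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def collapse_rgb_conv_to_gray(w_rgb):
    """Sum a colour-pretrained first-layer kernel over its RGB axis.

    w_rgb: (out_channels, 3, k, k). For a grayscale image replicated
    across three channels, the summed kernel produces exactly the same
    response while accepting a single input channel.
    """
    return w_rgb.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
w_rgb = rng.standard_normal((64, 3, 7, 7))
w_gray = collapse_rgb_conv_to_gray(w_rgb)

# Equivalence check on one receptive field: a gray patch replicated to RGB
# under the original kernel matches the gray patch under the summed kernel.
patch = rng.standard_normal((7, 7))
resp_rgb = (w_rgb * np.stack([patch] * 3)).sum(axis=(1, 2, 3))
resp_gray = (w_gray * patch).sum(axis=(1, 2, 3))
```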


Stroke ◽  
2021 ◽  
Vol 52 (Suppl_1) ◽  
Author(s):  
Yannan Yu ◽  
Soren Christensen ◽  
Yuan Xie ◽  
Enhao Gong ◽  
Maarten G Lansberg ◽  
...  

Objective: Ischemic core prediction from CT perfusion (CTP) remains inaccurate compared with the gold standard, diffusion-weighted imaging (DWI). We evaluated whether a deep learning model trained to predict the DWI lesion from MR perfusion (MRP) could facilitate ischemic core prediction on CTP. Method: Using the multi-center CRISP cohort of acute ischemic stroke patients with CTP before thrombectomy, we included patients with major reperfusion (TICI score ≥2b), adequate image quality, and follow-up MRI at 3-7 days. Perfusion parameters, including Tmax, mean transit time, cerebral blood flow (CBF), and cerebral blood volume, were reconstructed by RAPID software. Core lab experts outlined the stroke lesion on the follow-up MRI. A model previously trained in a separate group of patients, which used MRP parameters as input and the RAPID ischemic core on DWI as ground truth, was used as a starting point. We fine-tuned this model using CTP parameters as input and follow-up MRI as ground truth. Another model was trained from scratch with only CTP data. 5-fold cross-validation was used. Performance of the models was compared with the ischemic core (rCBF≤30%) from RAPID software for identifying the presence of a large infarct (volume >70 or >100 ml). Results: 94 patients in the CRISP trial met the inclusion criteria (mean age 67±15 years, 52% male, median baseline NIHSS 18, median 90-day mRS 2). Without fine-tuning, the MRI model had an agreement of 73% for infarcts >70 ml and 69% for >100 ml; the MRI model fine-tuned on CT improved the agreement to 77% and 73%; the CT model trained from scratch had agreements of 73% and 71%. All of the deep learning models outperformed the rCBF segmentation from RAPID, which had agreements of 51% and 64%. See Table and Figure. Conclusions: It is feasible to apply an MRP-based deep learning model to CT. Fine-tuning with CTP data further improves the predictions. All deep learning models predict the stroke lesion after major recanalization better than thresholding approaches based on rCBF.
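The rCBF≤30% thresholding baseline that the deep models are compared against reduces to a relative-threshold voxel count. A simplified illustration (not RAPID's actual implementation; function names and the voxel size are assumed):

```python
import numpy as np

def thresholded_core_volume_ml(cbf, healthy_mean_cbf, voxel_vol_ml,
                               rel_thresh=0.30):
    # Voxels whose CBF falls at or below 30% of the contralateral
    # (healthy) mean are counted as ischemic core.
    core_mask = cbf <= rel_thresh * healthy_mean_cbf
    return float(core_mask.sum()) * voxel_vol_ml

# Synthetic map: 1000 voxels of 0.008 ml each, 200 severely hypoperfused.
cbf = np.full(1000, 50.0)
cbf[:200] = 10.0  # 10 <= 0.30 * 50, so these count as core
volume = thresholded_core_volume_ml(cbf, healthy_mean_cbf=50.0,
                                    voxel_vol_ml=0.008)
large_infarct = volume > 70  # the abstract's >70 ml criterion
```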


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Hyunseob Kim ◽  
Jeongcheol Lee ◽  
Sunil Ahn ◽  
Jongsuk Ruth Lee

Deep learning has brought dramatic developments in molecular property prediction, which is crucial in the field of drug discovery, using various representations such as fingerprints, SMILES, and graphs. In particular, SMILES is used in various deep learning models via character-based approaches. However, SMILES has a limitation in that it is hard to reflect chemical properties. In this paper, we propose a new self-supervised method to learn SMILES and the chemical contexts of molecules simultaneously when pre-training the Transformer. The key to our model is learning structures via adjacency matrix embedding and learning logic that can infer descriptors via Quantitative Estimation of Drug-likeness prediction during pre-training. As a result, our method improves the generalization of the data and achieves the best average performance on benchmark downstream tasks. Moreover, we develop a web-based fine-tuning service to utilize our model on various tasks.
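One simple way to feed a molecular adjacency matrix into a Transformer, shown here purely to illustrate the general idea (the paper's exact embedding scheme may differ), is as an additive attention bias toward bonded atom pairs:

```python
import numpy as np

def attention_with_adjacency_bias(q, k, v, adj, bias=1.0):
    """Single-head scaled dot-product attention with an additive bond bias.

    q, k, v: (n_atoms, d) projections; adj: (n_atoms, n_atoms) 0/1
    molecular adjacency matrix. Bonded pairs get their attention logits
    raised by `bias`, so structure influences the mixing weights.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d) + bias * adj
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

# Propane (C-C-C): a 3-atom chain, so atoms 0-1 and 1-2 are bonded.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
q = k = v = rng.standard_normal((3, 8))
out, weights = attention_with_adjacency_bias(q, k, v, adj)
```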


2021 ◽  
Vol 27 (2) ◽  
Author(s):  
Caisse Amisse ◽  
Mario Ernesto Jijón-Palma ◽  
Jorge Antonio Silva Centeno

Author(s):  
Ahmad Heidary-Sharifabad ◽  
Mohsen Sardari Zarchi ◽  
Sima Emadi ◽  
Gholamreza Zarei

The Chenopodiaceae species are ecologically and financially important and play a significant role in biodiversity around the world. Biodiversity protection is critical for the survival and sustainability of each ecosystem, and since recognizing plant species in their natural habitats is the first step in protecting plant diversity, automatic species classification in the wild would greatly help species analysis and, consequently, biodiversity protection on Earth. Computer vision approaches can be used for automatic species analysis, and modern computer vision approaches are based on deep learning techniques. A standard dataset is essential for training a deep learning algorithm. Hence, the main goal of this research is to provide a standard dataset of Chenopodiaceae images. This dataset, called ACHENY, contains 27030 images of 30 Chenopodiaceae species in their natural habitats. The other goal of this study is to investigate the applicability of the ACHENY dataset with deep learning models. Therefore, two novel deep learning models based on the ACHENY dataset are introduced: first, a lightweight deep model trained from scratch and designed to be agile and fast; second, a model based on the EfficientNet-B1 architecture, pre-trained on ImageNet and fine-tuned on ACHENY. The experimental results show that the two proposed models can perform fine-grained recognition of Chenopodiaceae species with promising accuracy. To evaluate our models, their performance was compared with the well-known VGG-16 model after fine-tuning it on ACHENY. Both VGG-16 and our first model achieved about 80% accuracy, while VGG-16 is about 16× larger than the first model. Our second model has an accuracy of about 90% and outperforms the other models; its number of parameters is about 5× that of the first model but still about one-third of the VGG-16 parameters.
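The reported size ratios are roughly self-consistent, which a little arithmetic confirms (the 138M VGG-16 parameter count is the commonly cited ImageNet-configuration figure, assumed here rather than taken from the abstract):

```python
# VGG-16 has roughly 138M parameters (standard ImageNet configuration).
vgg16 = 138e6
first_model = vgg16 / 16        # abstract: VGG-16 is ~16x larger
second_model = 5 * first_model  # abstract: ~5x the first model's parameters

# The abstract also says the second model is about one-third of VGG-16;
# 5/16 of VGG-16 is within ~6% of 1/3 of VGG-16, so the claims agree.
rel_err = abs(second_model - vgg16 / 3) / (vgg16 / 3)
```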


2020 ◽  
Vol 15 ◽  
Author(s):  
Fareed Ahmad ◽  
Amjad Farooq ◽  
Muhammad Usman Ghani Khan

Background: Bacterial pathogens are deadly for animals and humans. The ease of their dissemination, coupled with their high capacity for illness and death in infected individuals, makes them a threat to society. Objective: Due to the high similarity among genera and species of pathogens, it is sometimes difficult for microbiologists to differentiate between them. Their automatic classification using deep learning models can help produce reliable and accurate outcomes. Method: Deep learning models, namely AlexNet, GoogleNet, ResNet101, and InceptionV3, are used with numerous variations, including training the model from scratch, fine-tuning without pre-trained weights, fine-tuning with the weights of the initial layers frozen, fine-tuning with the weights of all layers adjusted, and augmenting the dataset by random translation and reflection. Moreover, as the dataset is small, fine-tuning and data augmentation strategies are applied to avoid overfitting and produce a generalized model. A merged feature vector is produced from the two best-performing models, and accuracy is calculated by the xgboost algorithm on the feature vector using cross-validation. Results: Fine-tuned models with augmentation produced the best results. Of these, the two best-performing deep models (ResNet101 and InceptionV3), selected for feature fusion, produced a similar validation accuracy of 95.83% with losses of 0.0213 and 0.1066, and testing accuracies of 97.92% and 93.75%, respectively. The proposed model used xgboost to attain a classification accuracy of 98.17% with 35-fold cross-validation. Conclusion: Automatic classification using these models can help experts correctly identify pathogens. Consequently, it can help control epidemics and thereby minimize the socio-economic impact on the community.
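The feature-fusion step, concatenating per-sample deep features from the two best backbones before the boosted-tree classifier, reduces to a column-wise concatenation. A minimal sketch with assumed dimensions (ResNet101 and InceptionV3 both commonly emit 2048-d pooled features; the sample count here is made up):

```python
import numpy as np

def fuse_features(feat_a, feat_b):
    # Concatenate per-sample feature vectors from two CNN backbones into
    # one merged vector for a downstream classifier (e.g. xgboost).
    assert feat_a.shape[0] == feat_b.shape[0], "same number of samples"
    return np.concatenate([feat_a, feat_b], axis=1)

rng = np.random.default_rng(2)
resnet_feats = rng.standard_normal((48, 2048))      # 48 images, ResNet101
inception_feats = rng.standard_normal((48, 2048))   # same 48, InceptionV3
merged = fuse_features(resnet_feats, inception_feats)
```

The merged matrix would then be passed to the boosted-tree classifier in place of either backbone's features alone.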

