Image Recognition of Coal and Coal Gangue Using a Convolutional Neural Network and Transfer Learning

Energies, 2019, Vol. 12 (9), pp. 1735
Author(s): Yuanyuan Pu, Derek B. Apel, Alicja Szmigiel, Jie Chen

Recognizing and distinguishing coal and gangue are essential in engineering applications such as coal-fired power plants. This paper employed a convolutional neural network (CNN) to recognize coal and gangue images and help segregate the two materials. A typical workflow for CNN image recognition is presented, along with a strategy for updating the model parameters. Building on the powerful pre-trained image recognition model VGG16, transfer learning was introduced to construct a custom CNN model, avoiding the massive number of trainable parameters and the limited computing power associated with building a brand-new model from scratch. A database of 240 coal and gangue images was collected, comprising 100 training images and 20 validation images for each material. A recognition accuracy of 82.5% was obtained on the validation images, demonstrating a decent performance of the model. According to the analysis of parameter updating during training, the principal constraint on achieving higher recognition accuracy was a shortage of training samples. The model was also used to identify photos of washing plant stockpiles, which verified its capability to handle field pictures. The CNN combined with transfer learning can provide fast and robust coal/gangue distinction without demanding data or equipment requirements. This method will show even brighter prospects in engineering if the target image database (such as the coal and gangue images in this study) can be further enlarged.
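As an illustration of the transfer-learning setup described in this abstract, the sketch below freezes a pre-trained VGG16 convolutional base and trains only a small binary head for coal vs. gangue. It is a minimal PyTorch example, not the authors' code: the image size, folder layout (data/train and data/val with coal/ and gangue/ subfolders), learning rate, and epoch count are assumptions.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained VGG16 convolutional base; freeze it so only the new head learns.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 2)  # two classes: coal, gangue
model = model.to(device)

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Assumed layout: data/train/{coal,gangue}/ and data/val/{coal,gangue}/
train_ds = datasets.ImageFolder("data/train", transform=tfm)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

With only 200 training images, freezing the convolutional base in this way keeps the number of trainable parameters small, which is the main point of the transfer-learning strategy the paper describes.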

2021, Vol. 290, pp. 02020
Author(s): Boyu Zhang, Xiao Wang, Shudong Li, Jinghua Yang

Underwater shipwreck side-scan sonar samples are currently scarce and difficult to label, and with such small sample sizes the image recognition accuracy of a convolutional neural network model is low. In this study, we proposed an image recognition method for shipwreck side-scan sonar that combines transfer learning with deep learning. Without transfer learning, shipwreck sonar samples were used to train the network directly, and the results were saved as the control group. For transfer learning, weakly correlated data were first used to train the network, the learned parameters were transferred to a new network, and the shipwreck sonar data were then used for training. The same steps were repeated using strongly correlated data. Experiments were carried out on the LeNet-5, AlexNet, GoogLeNet, ResNet, and VGG networks. Without transfer learning, the highest accuracy was obtained on the ResNet network (86.27%). Using weakly correlated data for transfer training, the highest accuracy was on the VGG network (92.16%). Using strongly correlated data for transfer training, the highest accuracy was also on the VGG network (98.04%). In all network architectures, transfer learning improved the correct recognition rate of the convolutional neural network models. The experiments show that transfer learning combined with deep learning improves the accuracy and generalization of convolutional neural networks in the case of small sample sizes.
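The two-stage procedure described in this abstract (pre-train on correlated data, copy the weights into a fresh network, then fine-tune on the small sonar set) can be sketched as below. This is a hedged illustration, not the authors' code: the VGG16 backbone, class counts, epoch numbers, and the random placeholder datasets are all assumptions; in practice the placeholder loaders would be replaced by the correlated and shipwreck sonar image sets.

import torch
import torch.nn as nn
from torchvision import models

def make_net(num_classes):
    net = models.vgg16(weights=None)                 # same architecture in both stages
    net.classifier[6] = nn.Linear(4096, num_classes)
    return net

def train(net, loader, epochs, lr=1e-4):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    net.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(net(x), y).backward()
            opt.step()
    return net

def placeholder_loader(n, num_classes):
    # Random stand-in data so the sketch is self-contained; replace with real images.
    x = torch.randn(n, 3, 224, 224)
    y = torch.randint(0, num_classes, (n,))
    return torch.utils.data.DataLoader(torch.utils.data.TensorDataset(x, y), batch_size=4)

correlated_loader = placeholder_loader(16, 10)   # weakly or strongly correlated data
sonar_loader = placeholder_loader(8, 2)          # small shipwreck sonar sample

# Stage 1: pre-train on the correlated dataset.
source_net = train(make_net(num_classes=10), correlated_loader, epochs=1)

# Stage 2: transfer the learned weights to a new network (the classifier head is
# re-initialised for the sonar classes, so its weights are skipped), then fine-tune.
target_net = make_net(num_classes=2)
state = {k: v for k, v in source_net.state_dict().items() if not k.startswith("classifier.6")}
target_net.load_state_dict(state, strict=False)
target_net = train(target_net, sonar_loader, epochs=1)

Repeating stage 1 with weakly and then strongly correlated data, as in the experiments above, only changes which dataset feeds the first call to train.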


2021, Vol. 2021, pp. 1-16
Author(s): Mingyu Gao, Peng Song, Fei Wang, Junyan Liu, Andreas Mandelis, ...

Rapidly identifying wood defects from optical images with deep learning methods can effectively improve wood utilization. Traditional neural network techniques have not yet been employed for wood defect detection owing to long training times, low recognition accuracy, and the non-automatic extraction of defect image features. In this work, a model for wood knot defect detection, called ReSENet-18, that combines deep learning and transfer learning is proposed. The “squeeze-and-excitation” (SE) module is first embedded into the “residual basic block” structure to construct an “SE-Basic-Block” module. This allows features to be re-weighted in the channel dimension and fused at multiple scales with the original features. In addition, the fully connected layer is replaced with global average pooling, which effectively reduces the number of model parameters. The experimental results show that the accuracy reaches 99.02% while the training time is also reduced. This demonstrates that the proposed deep convolutional neural network based on ReSENet-18, combined with transfer learning, can improve the accuracy of defect recognition and has potential application in the detection of wood knot defects.
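The core architectural idea in this abstract, an SE module embedded in a residual basic block plus a global-average-pooling head in place of the fully connected layer, can be sketched as follows. This is a minimal PyTorch illustration rather than the published ReSENet-18: the channel count, reduction ratio, and head size are assumptions.

import torch
import torch.nn as nn

class SEModule(nn.Module):
    # Squeeze-and-excitation: re-weight channels using their global statistics.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))            # squeeze: global average pool per channel
        w = self.fc(w).view(b, c, 1, 1)   # excitation: per-channel weights in (0, 1)
        return x * w                      # re-scale the feature maps

class SEBasicBlock(nn.Module):
    # ResNet basic block with the SE module applied to its residual branch.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.se = SEModule(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.se(self.bn2(self.conv2(out)))
        return self.relu(out + x)          # fuse re-weighted features with the identity

# Global average pooling replaces the large fully connected layer, so the
# classifier head needs only channels-to-classes parameters.
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2))

x = torch.randn(1, 64, 56, 56)             # dummy feature map
print(head(SEBasicBlock(64)(x)).shape)     # torch.Size([1, 2])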


2020, Vol. 10 (1)
Author(s): Young-Gon Kim, Sungchul Kim, Cristina Eunbee Cho, In Hye Song, Hee Jin Lee, ...

Fast and accurate confirmation of metastasis on frozen tissue sections from intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen-section datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve the performance of a convolutional neural network (CNN)-based classification model on our frozen-section dataset (N = 297) from Asan Medical Center (AMC). Of the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios (2, 4, 8, 20, 40, and 100%). The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation set. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to assess their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. The feasibility of transfer learning to enhance model performance was thus validated for frozen-section datasets with limited numbers of samples.
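The comparison in this abstract hinges on three different weight initialisations for the same patch classifier. The sketch below shows how such variants could be constructed in PyTorch; the ResNet-18 backbone, the checkpoint file name, and the two-class head are assumptions rather than the study's exact architecture, and the CAMELYON16 checkpoint is a hypothetical file produced by pre-training on that dataset.

import torch
import torch.nn as nn
from torchvision import models

def build_patch_classifier(init, num_classes=2, camelyon_ckpt="camelyon16_pretrained.pth"):
    if init == "scratch":
        net = models.resnet18(weights=None)                          # random initialisation
    elif init == "imagenet":
        net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    elif init == "camelyon16":
        net = models.resnet18(weights=None)
        # Hypothetical checkpoint from a model pre-trained on CAMELYON16 patches.
        state = torch.load(camelyon_ckpt, map_location="cpu")
        net.load_state_dict(state, strict=False)
    else:
        raise ValueError(init)
    net.fc = nn.Linear(net.fc.in_features, num_classes)              # tumour vs. normal patch
    return net

# Each variant would then be fine-tuned on the frozen-section patches with an
# identical training loop, so that only the initial weights differ.
model = build_patch_classifier(init="imagenet")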

