Research of Water Body Turbidity Classification Model for Aquiculture Based on Transfer Learning

2021 ◽  
Vol 1757 (1) ◽  
pp. 012004
Author(s):  
Jianhua Zheng ◽  
Gaolin Yang ◽  
Yanxuan Huang ◽  
Leian Liu ◽  
Guihuang Hong ◽  
...  
2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Abstract: Fast and accurate confirmation of metastasis on the frozen tissue section of intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve performance of the convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation set. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were compared to assess their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. These patch- and slide-level results validate the feasibility of transfer learning for enhancing model performance on frozen-section datasets with limited numbers of samples.
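As a rough illustration of the three initializations compared above (scratch, ImageNet, and CAMELYON16-pretrained), the sketch below builds a patch classifier in PyTorch. The ResNet-18 backbone and the checkpoint file name are assumptions for illustration only; the abstract does not specify the exact CNN architecture or weight format.

```python
# Minimal sketch: one patch classifier, three initialization strategies.
import torch
import torch.nn as nn
from torchvision import models

def build_patch_classifier(init: str) -> nn.Module:
    """init is 'scratch', 'imagenet', or a path to a CAMELYON16-pretrained checkpoint."""
    backbone = models.resnet18(weights="IMAGENET1K_V1" if init == "imagenet" else None)
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # tumor vs. normal patch
    if init not in ("scratch", "imagenet"):
        # Transfer learning: start from weights previously fine-tuned on CAMELYON16 patches
        # (hypothetical checkpoint saved from the same architecture).
        backbone.load_state_dict(torch.load(init, map_location="cpu"))
    return backbone

# Fine-tune each initialization on the small frozen-section patch set and compare AUCs, e.g.:
# for init in ["scratch", "imagenet", "camelyon16_pretrained.pt"]:
#     model = build_patch_classifier(init)
#     ...train with nn.CrossEntropyLoss and a standard optimizer...
```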


2021 ◽  
Vol 65 (1) ◽  
pp. 11-22
Author(s):  
Mengyao Lu ◽  
Shuwen Jiang ◽  
Cong Wang ◽  
Dong Chen ◽  
Tian’en Chen

Highlights:
- A classification model for the front and back sides of tobacco leaves was developed for application in industry.
- A tobacco leaf grading method that combines a CNN with double-branch integration was proposed.
- The A-ResNet network was proposed and compared with other classic CNN networks.
- The grading accuracy of eight different grades was 91.30% and the testing time was 82.180 ms, showing a relatively high classification accuracy and efficiency.

Abstract: Flue-cured tobacco leaf grading is a key step in the production and processing of Chinese-style cigarette raw materials, directly affecting cigarette blend and quality stability. At present, manual grading of tobacco leaves is dominant in China, resulting in unsatisfactory grading quality and consuming considerable material and financial resources. In this study, for fast, accurate, and non-destructive tobacco leaf grading, 2,791 flue-cured tobacco leaves of eight different grades in south Anhui Province, China, were chosen as the study sample, and a tobacco leaf grading method that combines convolutional neural networks and double-branch integration was proposed. First, a classification model for the front and back sides of tobacco leaves was trained by transfer learning. Second, two processing methods (equal-scaled resizing and cropping) were used to obtain global images and local patches from the front sides of tobacco leaves. A global image-based tobacco leaf grading model was then developed using the proposed A-ResNet-65 network, and a local patch-based tobacco leaf grading model was developed using the ResNet-34 network. These two networks were compared with classic deep learning networks, such as VGGNet, GoogLeNet-V3, and ResNet. Finally, the grading results of the two grading models were integrated to realize tobacco leaf grading. The tobacco leaf classification accuracy of the final model, for eight different grades, was 91.30%, and grading of a single tobacco leaf required 82.180 ms. The proposed method achieved a relatively high grading accuracy and efficiency. It provides a method for industrial implementation of the tobacco leaf grading and offers a new approach for the quality grading of other agricultural products.

Keywords: Convolutional neural network, Deep learning, Image classification, Transfer learning, Tobacco leaf grading
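The double-branch integration described above can be sketched as two CNN branches, one over the resized global leaf image and one over a local patch, whose grade probabilities are combined. The stand-in torchvision ResNets and the score-averaging fusion below are assumptions; the abstract does not give the A-ResNet-65 architecture or the exact integration rule.

```python
# Minimal sketch of double-branch grade integration for eight tobacco leaf grades.
import torch
import torch.nn as nn
from torchvision import models

NUM_GRADES = 8  # eight flue-cured tobacco leaf grades

class DoubleBranchGrader(nn.Module):
    def __init__(self):
        super().__init__()
        # Global branch: whole-leaf image after equal-scaled resizing.
        self.global_branch = models.resnet50(weights=None)
        self.global_branch.fc = nn.Linear(self.global_branch.fc.in_features, NUM_GRADES)
        # Local branch: cropped patch from the front side of the leaf.
        self.local_branch = models.resnet34(weights=None)
        self.local_branch.fc = nn.Linear(self.local_branch.fc.in_features, NUM_GRADES)

    def forward(self, global_img, local_patch):
        g = torch.softmax(self.global_branch(global_img), dim=1)
        l = torch.softmax(self.local_branch(local_patch), dim=1)
        return (g + l) / 2  # integrate the two branches' grade probabilities

# grader = DoubleBranchGrader()
# probs = grader(global_batch, patch_batch)   # shape: (batch, 8)
# predicted_grade = probs.argmax(dim=1)
```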


2021 ◽  
Vol 234 ◽  
pp. 00064
Author(s):  
Anass Barodi ◽  
Abderrahim Bajit ◽  
Mohammed Benbrahim ◽  
Ahmed Tamtaoui

This paper presents a study on the realization of an Artificial Intelligence system for the intelligent recognition of traffic road signs, and also demonstrates the performance of transfer learning for object classification in general. Training systems on aspects of the human visual system (HVS), so that they support or reproduce the same decisions, helps build robust and efficient systems. Such systems help mitigate environmental risks, for example weather conditions such as cloudy or rainy weather that obscure the view of signs, but the main objective is road safety: avoiding dangerous road risks, such as accidents due to non-compliance with traffic rules, for both vehicles and passengers. However, simply collecting road sign images from different places does not solve the problem; an intelligent system for classifying road signs is needed to improve the safety of people in its environment. This study proposes a traffic road sign classification system that extracts visual characteristics with a Convolutional Neural Network (CNN) classification model. The model assigns a class to the image of a road sign using the most efficiently optimized classifier. Its effectiveness is then evaluated according to several criteria, using the confusion matrix and the classification report, with an in-depth analysis of the results obtained on images taken from urban scenes. The results obtained by the system are encouraging compared with systems developed in the scientific literature, for example the Advanced Driver Assistance Systems (ADAS) of the automotive sector.
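The evaluation step mentioned above (confusion matrix and classification report) can be reproduced with scikit-learn, as in the short sketch below; the sign classes and predictions are placeholders, and in practice y_true and y_pred would come from running the trained CNN on the test images.

```python
# Minimal sketch of per-class evaluation with a confusion matrix and classification report.
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

y_true = np.array([0, 0, 1, 2, 2, 1])           # ground-truth sign classes (placeholder)
y_pred = np.array([0, 1, 1, 2, 2, 1])           # classes predicted by the model (placeholder)
class_names = ["stop", "speed_limit", "yield"]  # placeholder sign class names

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=class_names))
```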


Author(s):  
Na Wu ◽  
Fei Liu ◽  
Fanjia Meng ◽  
Mu Li ◽  
Chu Zhang ◽  
...  

Rapid variety classification of crop seeds is important for breeders screening seeds with specific traits and for market regulators detecting seed purity. However, collecting high-quality, large-scale samples can incur high costs, making it difficult to build an accurate classification model. This study aimed to explore a rapid and accurate method for variety classification of different crop seeds under sample-limited conditions based on hyperspectral imaging (HSI) and deep transfer learning. Three deep neural networks with typical structures were designed based on a sample-rich pea dataset. Having obtained the highest accuracy of 99.57%, VGG-MODEL was transferred to classify four target datasets (rice, oat, wheat, and cotton) with limited samples. The deep transferred model achieved accuracies of 95%, 99%, 80.8%, and 83.86% on the four datasets, respectively. Using training sets of different sizes, the deep transferred model consistently achieved higher performance than traditional methods. Visualization of the deep features and classification results confirmed the portability of the shared features of seed spectra, providing an interpretable method for rapid and accurate variety classification of crop seeds. The overall results showed the great superiority of HSI combined with deep transfer learning for seed detection under sample-limited conditions. This study provides a new idea for facilitating crop germplasm screening under sample scarcity and for detecting other qualities of crop seeds under sample-limited conditions based on HSI.
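A minimal sketch of the deep transfer step described above follows, assuming a hypothetical VGG-style 1-D spectral CNN pre-trained on the pea dataset; the layer layout, spectral length, checkpoint name, frozen layers, and number of target varieties are illustrative assumptions rather than the paper's exact design.

```python
# Minimal sketch: freeze shared spectral features learned on pea, retrain the head on a
# small target dataset (e.g. rice varieties).
import torch
import torch.nn as nn

class SpectraVGG(nn.Module):
    """VGG-like 1-D CNN over seed reflectance spectra (stacked conv blocks + linear head)."""
    def __init__(self, n_bands=200, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (n_bands // 4), n_classes)

    def forward(self, x):              # x: (batch, 1, n_bands)
        return self.classifier(self.features(x).flatten(1))

model = SpectraVGG(n_classes=4)
# model.load_state_dict(torch.load("pea_vgg.pt"))            # hypothetical pea checkpoint
for p in model.features.parameters():
    p.requires_grad = False                                   # keep shared spectral features
model.classifier = nn.Linear(model.classifier.in_features, 6) # e.g. 6 target seed varieties
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)  # fine-tune head only
```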


Author(s):  
Kevin William Gunawan ◽  
Alam Ahmad Hidayat ◽  
Tjeng Wawan Cenggoro ◽  
Bens Pardamean ◽  
...  

2021 ◽  
Vol 11 ◽  
Author(s):  
Runping Hou ◽  
Xiaoyang Li ◽  
Junfeng Xiong ◽  
Tianle Shen ◽  
Wen Yu ◽  
...  

Background: For stage IV patients harboring EGFR mutations, there is a differential response to first-line TKI treatment. We constructed three-dimensional convolutional neural networks (CNNs) with deep transfer learning to stratify patients into subgroups with different response and progression risks.

Materials and Methods: From 2013 to 2017, 339 patients with EGFR mutations receiving first-line TKI treatment were included. Progression-free survival (PFS) time and progression patterns were confirmed by routine follow-up and restaging examinations. Patients were divided into two subgroups according to the median PFS (<=9 months, >9 months). We developed a PFS prediction model and a progression pattern classification model using transfer learning from a 3D CNN pre-trained for EGFR mutation classification. Clinical features were fused with the 3D CNN to build the final hybrid prediction model. Performance was quantified using the area under the receiver operating characteristic curve (AUC), and model performance was compared by AUCs with the DeLong test.

Results: The PFS prediction CNN showed an AUC of 0.744 (95% CI, 0.645–0.843) in the independent validation set, and the hybrid model of CNNs and clinical features showed an AUC of 0.771 (95% CI, 0.676–0.866); both are significantly better than the clinical features-based model (AUC, 0.624; P<0.01). The progression pattern prediction model showed an AUC of 0.762 (95% CI, 0.643–0.882) and the hybrid model with clinical features showed an AUC of 0.794 (95% CI, 0.681–0.908), which can provide complementary information to the clinical features-based model (AUC, 0.710; 95% CI, 0.582–0.839).

Conclusion: The CNN exhibits potential ability to stratify progression status in patients with EGFR mutations treated with first-line TKI, which might help inform clinical decisions.
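The hybrid idea above (deep features from a 3D CNN fused with clinical variables before the final PFS-risk prediction) can be sketched as below; the backbone depth, feature width, number of clinical variables, and fusion by simple concatenation are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch: fuse 3-D CNN imaging features with clinical features for PFS stratification.
import torch
import torch.nn as nn

class HybridPFSModel(nn.Module):
    def __init__(self, n_clinical=8):
        super().__init__()
        # 3-D CNN over a CT volume patch; in the paper this part is transferred from a
        # pre-trained EGFR-mutation classification network.
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + n_clinical, 32), nn.ReLU(),
            nn.Linear(32, 1),  # logit for PFS <= 9 months vs. > 9 months
        )

    def forward(self, volume, clinical):
        deep = self.cnn(volume).flatten(1)          # (batch, 32) imaging features
        fused = torch.cat([deep, clinical], dim=1)  # concatenate with clinical variables
        return self.head(fused)

# model = HybridPFSModel()
# logits = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 8))
```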


2019 ◽  
Vol 9 (13) ◽  
pp. 2725 ◽  
Author(s):  
Zhang Xiao ◽  
Yu Tan ◽  
Xingxing Liu ◽  
Shenghui Yang

The classification of plug seedlings is important in the replanting process. This paper proposes a classification method for plug seedlings based on transfer learning. First, by extracting and converting to grayscale the region of interest of the acquired image, a regional grayscale cumulative distribution curve is obtained, and the number of peak points on the curve is calculated to identify the plug tray specification. Second, a transfer learning method based on convolutional neural networks is used to construct the classification model for plug seedlings. According to the growth characteristics of the seedlings, 2,286 seedling samples were collected at the two-leaf and one-heart stage to train the model. Finally, the image of the region of interest is divided into cell images according to the plug tray specification, and the cell images are fed into the classification model, thereby classifying qualified seedlings, unqualified seedlings, and empty cells (missing seedlings). In testing, the tray specification identification method achieved an average accuracy of 100% for the three specifications (50 cells, 72 cells, 105 cells) on 20-day and 25-day pepper seedlings. Seedling classification models based on transfer learning with four different convolutional neural networks (AlexNet, Inception-v3, ResNet-18, VGG16) were constructed and tested. The VGG16-based classification model achieved the best classification accuracy, 95.50%, while the AlexNet-based classification model had the shortest training time, 6 min 8 s. This research has theoretical reference value for intelligent replanting classification work.
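One possible reading of the tray-specification step is sketched below: accumulate gray levels along one axis of the region of interest, count the peaks of the resulting curve, and map the peak count to a tray layout. Treating the curve as a column-wise gray-level profile and the count-to-specification mapping are assumptions for illustration only.

```python
# Minimal sketch: identify the plug tray specification from peaks in a grayscale profile.
import numpy as np
from scipy.signal import find_peaks

def identify_tray_spec(gray_roi: np.ndarray) -> str:
    """gray_roi: 2-D grayscale image of the plug tray region of interest."""
    profile = gray_roi.sum(axis=0).astype(float)              # accumulate gray values per column
    profile = (profile - profile.min()) / (np.ptp(profile) + 1e-9)
    peaks, _ = find_peaks(profile, distance=20, prominence=0.05)
    # Map the detected column count to the known tray specifications (illustrative mapping:
    # 50-cell = 5 columns, 72-cell = 6 columns, 105-cell = 7 columns).
    spec_by_cols = {5: "50-cell", 6: "72-cell", 7: "105-cell"}
    return spec_by_cols.get(len(peaks), "unknown")

# spec = identify_tray_spec(gray_roi)   # gray_roi from any image library's grayscale conversion
```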

