Improving Historical Data Discovery in Weather Radar Image Data Sets Using Transfer Learning

Author(s): Steven Ryan Gooch, V. Chandrasekar

Complexity, 2021, Vol. 2021, pp. 1-9
Author(s): Weibin Chen, Zhiyang Gu, Zhimin Liu, Yaoyao Fu, Zhipeng Ye, ...

Thyroid nodule is a clinical disorder with a high incidence rate, with a large number of cases detected every year globally. Early assessment of whether a thyroid nodule is benign or malignant using ultrasound imaging is of great importance in the diagnosis of thyroid cancer. Although B-mode ultrasound can be used to find the presence of a nodule in the thyroid, there is no existing method for accurate and automatic diagnosis from the ultrasound image. In this pursuit, the present study developed an ultrasound diagnosis method for the accurate and efficient identification of thyroid nodules, based on transfer learning and a deep convolutional neural network. Initially, a Total Variation- (TV-) based self-adaptive image restoration method was adopted to preprocess the thyroid ultrasound images and remove the border and marks. After data augmentation of the training set, transfer learning with the pretrained GoogLeNet convolutional neural network was performed to extract image features. Finally, joint training and secondary transfer learning were performed to improve the classification accuracy, using thyroid images from open-source data sets and thyroid images collected from local hospitals. The GoogLeNet model was established for the experiments on thyroid ultrasound image data sets. Compared with networks built on LeNet5, VGG16, and the original GoogLeNet, the improved GoogLeNet model achieved higher accuracy for nodule classification. The joint training of different data sets and the secondary transfer learning further improved its accuracy. Experiments on medical image data sets of various types of diseased and normal thyroids showed that the classification and diagnosis accuracy of this method was 96.04%, indicating significant clinical application value.
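As a rough illustration of the two-stage ("secondary") transfer learning described above, the following PyTorch/torchvision sketch fine-tunes an ImageNet-pretrained GoogLeNet first on an open-source thyroid set and then on locally collected images. The paths, class count, hyperparameters, and preprocessing are placeholder assumptions and do not reproduce the paper's TV-based restoration or training schedule.

```python
# Hypothetical sketch of two-stage ("secondary") transfer learning with GoogLeNet.
# Assumes a recent PyTorch/torchvision; paths, class count, and hyperparameters are
# placeholders, and the paper's TV-based restoration step is not reproduced here.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

def make_loader(root, train=True):
    # Standard ImageNet-style preprocessing; the paper's restoration and
    # augmentation pipeline would replace or extend this step.
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    return DataLoader(datasets.ImageFolder(root, tfm), batch_size=32, shuffle=train)

def fine_tune(model, loader, epochs=5, lr=1e-4):
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# Start from an ImageNet-pretrained GoogLeNet and replace the classifier head
# (2 classes: benign vs. malignant nodule).
model = models.googlenet(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, 2)

# Stage 1: first transfer on open-source thyroid ultrasound images (placeholder path).
model = fine_tune(model, make_loader("data/thyroid_open_source"))

# Stage 2: secondary transfer on locally collected hospital images, reusing the
# parameters adapted in stage 1.
model = fine_tune(model, make_loader("data/thyroid_local_hospital"))
```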


Sensors, 2021, Vol. 21 (14), pp. 4845
Author(s): Jingbo Li, Changchun Li, Shuaipeng Fei, Chunyan Ma, Weinan Chen, ...

The number of wheat ears is an essential indicator for wheat production and yield estimation, but counting wheat ears accurately requires expensive manual labor and time. Moreover, wheat ears carry relatively little distinguishing information and their color is similar to the background, which makes obtaining the required counts challenging. In this paper, the performance of Faster regions with convolutional neural networks (Faster R-CNN) and RetinaNet in predicting the number of wheat ears at different growth stages and under different conditions is investigated. The results show that, using the Global WHEAT dataset for recognition, the RetinaNet method and the Faster R-CNN method achieve average accuracies of 0.82 and 0.72, respectively, with the RetinaNet method obtaining the higher recognition accuracy. Secondly, using the collected image data for recognition, the R2 of RetinaNet and Faster R-CNN after transfer learning is 0.9722 and 0.8702, respectively, indicating that the recognition accuracy of the RetinaNet method is higher across different data sets. We also tested wheat ears at both the filling and maturity stages; the proposed method proved to be very robust (R2 above 0.90). This study provides technical support and a reference for automatic wheat ear recognition and yield estimation.
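The counting step itself can be illustrated with a short, hedged sketch: a detector returns boxes with confidence scores, and the ear count is the number of boxes above a threshold. The example below uses torchvision's COCO-pretrained Faster R-CNN as a stand-in for the fine-tuned detectors in the paper; the image path and score threshold are placeholders.

```python
# Hypothetical sketch: counting wheat ears as the number of confident detections
# from an object detector. Uses torchvision's COCO-pretrained Faster R-CNN as a
# stand-in; the paper fine-tunes Faster R-CNN / RetinaNet on wheat-ear data.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def count_ears(image_path, score_threshold=0.5):
    # The detector expects a list of [C, H, W] float tensors scaled to [0, 1].
    img = transforms.ToTensor()(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([img])[0]  # dict with "boxes", "labels", "scores"
    keep = pred["scores"] >= score_threshold
    return int(keep.sum())  # predicted ear count for this image

# Example (placeholder path): per-image counts like this could be regressed against
# manual counts to obtain R2 values of the kind reported in the paper.
# print(count_ears("plots/wheat_filling_stage_001.jpg"))
```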


2021, Vol. 7 (s2)
Author(s): Alexander Bergs

Abstract: This paper focuses on the micro-analysis of historical data, which allows us to investigate language use across the lifetime of individual speakers. Certain concepts, such as social network analysis or communities of practice, put individual speakers and their social embeddedness and dynamicity at the center of attention. This means that intra-speaker variation can be described and analyzed in quite some detail in certain historical data sets. The paper presents some exemplary empirical analyses of the diachronic linguistic behavior of individual speakers/writers in fifteenth- to seventeenth-century England. It discusses the social factors that influence this behavior, with an emphasis on the methodological and theoretical challenges and opportunities when investigating intra-speaker variation and change.


2021, Vol. 18 (1), pp. 172988142199334
Author(s): Guangchao Zhang, Junrong Liu

With consumers' urgent demand for diversified automobile styling, a simple, efficient, and intelligent styling analysis and modeling method is an urgent problem in current automobile design. The purpose of this article is to analyze the styling preferences and trends of the current automobile market in a timely manner, which can assist original equipment manufacturers in designing new models and strengthening their brand family identity. Intelligent rapid modeling shortens the current design cycle, so that rapid product iteration can secure an active position in the automotive market. Aiming at the family analysis of the automobile front face, an image database for front-face styling analysis was created. The database includes two data sets, with and without vehicle badges, covering front-face images of most models from 22 domestic mainstream brands. This article then adopts computer-vision image classification methods to conduct car brand classification training on the database: based on ResNet-8 and other model architectures, classifiers are trained on the badge and badge-free subsets. Finally, based on the shape coefficients, a 3D wireframe model and a curved-surface model are obtained. The experimental results show that the 3D surface model can be obtained from a single image taken at any angle, which greatly shortens the modeling period by 92%.
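As a hedged sketch of the brand-classification step only (the 3D reconstruction is not covered here), the code below trains one classifier per subset, with and without vehicle badges. ResNet-8 is not a stock torchvision architecture, so ResNet-18 stands in for it; the directory paths, the 22-brand class count, and the hyperparameters are illustrative assumptions.

```python
# Hedged sketch of the brand-classification step: one classifier per subset
# (front faces with badges, front faces without badges). ResNet-18 stands in for
# the paper's ResNet-8; paths, class count, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def train_brand_classifier(root, num_brands=22, epochs=10):
    loader = DataLoader(datasets.ImageFolder(root, tfm), batch_size=64, shuffle=True)
    model = models.resnet18(weights=None)  # could also start from ImageNet weights
    model.fc = nn.Linear(model.fc.in_features, num_brands)
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# One model per subset, mirroring the database's badge / no-badge split.
with_badge_model = train_brand_classifier("data/front_faces_with_badges")
no_badge_model = train_brand_classifier("data/front_faces_without_badges")
```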


Author(s): Jianping Ju, Hong Zheng, Xiaohang Xu, Zhongyuan Guo, Zhaohui Zheng, ...

Abstract: Although convolutional neural networks have achieved success in the field of image classification, there are still challenges in agricultural product quality sorting, such as machine vision-based jujube defect detection. The performance of jujube defect detection mainly depends on the feature extraction and the classifier used. Due to the diversity of jujube materials and the variability of the testing environment, traditional methods of manually extracting features often fail to meet the requirements of practical application. In this paper, a jujube sorting model for small data sets, based on a convolutional neural network and transfer learning, is proposed to meet the actual demands of jujube defect detection. Firstly, the original images collected from an actual jujube sorting production line were pre-processed, and the data were augmented to establish a data set of five categories of jujube defects. The original CNN model was then improved by embedding an SE module and by using the triplet loss function and the center loss function in place of the softmax loss function. Finally, a model pre-trained on the ImageNet image data set was trained on the jujube defect data set, so that the parameters of the pre-trained model could fit the parameter distribution of the jujube defect images; this distribution was transferred to the jujube defect data set to complete the transfer of the model and realize the detection and classification of jujube defects. The classification results are visualized with heatmaps, and classification accuracy and confusion matrices are analyzed against the comparison models. The experimental results show that the SE-ResNet50-CL model optimizes the fine-grained classification problem of jujube defect recognition, with a test accuracy of 94.15%. The model has good stability and high recognition accuracy in complex environments.
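A minimal squeeze-and-excitation (SE) block of the kind embedded in SE-ResNet50 is sketched below. The reduction ratio of 16 and the placement inside each bottleneck are conventional choices and are assumptions here, not details taken from the paper.

```python
# Minimal squeeze-and-excitation (SE) block, as conventionally inserted into
# ResNet-50 bottlenecks to form "SE-ResNet50". Reduction ratio 16 is an assumption.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: global spatial average
        self.fc = nn.Sequential(                   # excitation: per-channel gate
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                               # reweight feature-map channels

# Usage: out = SEBlock(256)(torch.randn(8, 256, 56, 56))
```

In the paper's model, this channel reweighting is combined with replacing the softmax loss by a triplet loss plus center loss, which pulls same-class embeddings together and pushes different-class embeddings apart for the fine-grained defect categories.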


Author(s): Daniel Overhoff, Peter Kohlmann, Alex Frydrychowicz, Sergios Gatidis, Christian Loewe, ...

Purpose The DRG-ÖRG IRP (Deutsche Röntgengesellschaft-Österreichische Röntgengesellschaft international radiomics platform) is a web-/cloud-based radiomics platform based on a public-private partnership. It offers the possibility of data sharing, annotation, validation, and certification in the fields of artificial intelligence, radiomics analysis, and integrated diagnostics. In a first proof-of-concept study, automated myocardial segmentation and automated myocardial late gadolinium enhancement (LGE) detection using radiomic image features are evaluated for myocarditis data sets.

Materials and Methods The DRG-ÖRG IRP can be used to create quality-assured, structured image data in combination with clinical data and subsequent integrated data analysis. It is characterized by the following performance criteria: the possibility of using multicentric networked data, automatically calculated quality parameters, processing of annotation tasks, contour recognition using conventional and artificial intelligence methods, and the possibility of targeted integration of algorithms. In a first study, a neural network pre-trained on cardiac CINE data sets was evaluated for segmentation of PSIR data sets. In a second step, radiomic features were applied for segmental detection of LGE in the same data sets, which were provided multicentrically via the IRP.

Results First results show the advantages of this platform-based approach: data transparency, reliability, broad involvement of all members, continuous evolution, as well as validation and certification. In the proof-of-concept study, the neural network achieved a Dice coefficient of 0.813 compared with the expert's segmentation of the myocardium. In segment-based myocardial LGE detection, the AUC was 0.73, and 0.79 after exclusion of segments with uncertain annotation. The evaluation and provision of the data take place on the IRP, taking into account the FAT (fairness, accountability, transparency) and FAIR (findable, accessible, interoperable, reusable) criteria.

Conclusion It could be shown that the DRG-ÖRG IRP can serve as a crystallization point for the generation of further individual and joint projects. The execution of quantitative analyses with artificial intelligence methods is greatly facilitated by the platform approach of the DRG-ÖRG IRP, since pre-trained neural networks can be integrated and scientific groups can be networked. In a first proof-of-concept study on automated segmentation of the myocardium and automated myocardial LGE detection, these advantages were successfully applied. Our study shows that with the DRG-ÖRG IRP, strategic goals can be implemented in an interdisciplinary way, that concrete proof-of-concept examples can be demonstrated, and that a large number of individual and joint projects can be realized in a participatory way involving all groups.
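The Dice coefficient reported above (0.813) is a standard overlap measure between the network's binary myocardium mask and the expert's mask. A generic sketch of its computation follows; this is not the platform's actual evaluation code.

```python
# Generic Dice coefficient between two binary segmentation masks; a small sketch,
# not the DRG-ÖRG IRP's evaluation code. Inputs are assumed to be same-shaped arrays.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Example with toy 2D masks:
# a = np.zeros((128, 128)); a[30:90, 30:90] = 1
# b = np.zeros((128, 128)); b[40:100, 40:100] = 1
# print(dice_coefficient(a, b))
```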


2021, pp. 1-13
Author(s): Hailin Liu, Fangqing Gu, Zixian Lin

Transfer learning methods exploit similarities between different data sets to improve performance on the target task by transferring knowledge from source tasks to the target task. "What to transfer" is a main research issue in transfer learning. Existing transfer learning methods generally need to determine the shared parameters by integrating human knowledge. However, in many real applications, it is not known beforehand which parameters can be shared. A transfer learning model is essentially a special multi-objective optimization problem. Consequently, this paper proposes a novel auto-sharing parameter technique for transfer learning based on multi-objective optimization and solves the optimization problem with a multi-swarm particle swarm optimizer. Each task objective is simultaneously optimized by a sub-swarm. The current best particle from the sub-swarm of the target task is used to guide the search of the particles of the source tasks, and vice versa. The target task and source tasks are jointly solved by sharing the information of the best particles, which works as an inductive bias. Experiments on several synthetic data sets and two real-world data sets, a school data set and a landmine data set, show that the proposed algorithm is effective.
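A toy sketch of the cross-swarm guidance idea follows: one sub-swarm per task, with the best particle of the other task's sub-swarm entering the velocity update as an extra attraction term that acts as the shared inductive bias. The objective functions, swarm sizes, and coefficients are illustrative placeholders rather than the paper's exact algorithm or settings.

```python
# Toy multi-swarm PSO sketch: one sub-swarm per task; each sub-swarm's velocity update
# is additionally attracted to the other task's best particle (shared inductive bias).
# Objectives, swarm sizes, and coefficients are placeholders, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10

def target_task(w):  # placeholder objective standing in for the target task's loss
    return np.sum((w - 1.0) ** 2)

def source_task(w):  # placeholder objective standing in for a source task's loss
    return np.sum((w - 1.2) ** 2)

def init_swarm(n):
    return rng.normal(size=(n, DIM)), np.zeros((n, DIM))

def step(pos, vel, pbest, own_gbest, other_gbest, w=0.7, c1=1.5, c2=1.5, c3=0.5):
    r1, r2, r3 = rng.random((3, *pos.shape))
    vel = (w * vel
           + c1 * r1 * (pbest - pos)          # cognitive term
           + c2 * r2 * (own_gbest - pos)      # own sub-swarm's best
           + c3 * r3 * (other_gbest - pos))   # guidance from the other task's best
    return pos + vel, vel

def run(iters=200, n=20):
    swarms = {}
    for name, f in (("target", target_task), ("source", source_task)):
        pos, vel = init_swarm(n)
        fit = np.array([f(p) for p in pos])
        swarms[name] = dict(f=f, pos=pos, vel=vel, pbest=pos.copy(),
                            pbest_fit=fit, gbest=pos[fit.argmin()].copy())
    for _ in range(iters):
        for name, other in (("target", "source"), ("source", "target")):
            s = swarms[name]
            s["pos"], s["vel"] = step(s["pos"], s["vel"], s["pbest"],
                                      s["gbest"], swarms[other]["gbest"])
            fit = np.array([s["f"](p) for p in s["pos"]])
            improved = fit < s["pbest_fit"]
            s["pbest"][improved] = s["pos"][improved]
            s["pbest_fit"][improved] = fit[improved]
            s["gbest"] = s["pbest"][s["pbest_fit"].argmin()].copy()
    return swarms["target"]["gbest"]

print(run()[:3])  # parameters found for the target task
```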

