Experimental Evaluation of PSO Based Transfer Learning Method for Meteorological Visibility Estimation

Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 828
Author(s):  
Wai Lun Lo ◽  
Henry Shu Hung Chung ◽  
Hong Fu

Estimation of meteorological visibility from image characteristics is a challenging problem in meteorological parameter estimation. Meteorological visibility indicates weather transparency, an indicator that is important for transport safety. This paper summarizes the outcomes of an experimental evaluation of a Particle Swarm Optimization (PSO) based transfer learning method for meteorological visibility estimation. The paper proposes a modified transfer learning approach for visibility estimation that uses PSO feature selection. Image data are collected at a fixed location with a fixed viewing angle. The database images went through a pre-processing step of gray-averaging to provide information on static landmark objects for automatic extraction of effective regions from the images. Effective regions are extracted from the image database, and image features are then extracted using a neural network. A subset of the image features is selected by PSO to obtain the image feature vector for each effective sub-region. The image feature vectors are then used to estimate the visibilities of the images using multiple Support Vector Regression (SVR) models. Experimental results show that the proposed method achieves an accuracy of more than 90% for visibility estimation and is effective and robust.
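A minimal sketch of PSO-based feature selection wrapped around SVR, with synthetic features standing in for the CNN features of the effective regions (the particle counts, PSO coefficients, and data are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in for per-subregion CNN features and visibility labels.
X = rng.normal(size=(120, 20))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=120)  # only 5 features matter

def fitness(mask):
    """Negative CV error of an SVR trained on the selected feature subset."""
    if not mask.any():
        return -np.inf
    return cross_val_score(SVR(), X[:, mask], y, cv=3,
                           scoring="neg_mean_squared_error").mean()

# Minimal binary PSO: positions in [0, 1], thresholded at 0.5 to get masks.
n_particles, n_iters = 10, 15
pos = rng.random((n_particles, X.shape[1]))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p > 0.5) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, X.shape[1]))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fit = np.array([fitness(p > 0.5) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

selected = gbest > 0.5
print("selected feature indices:", np.flatnonzero(selected))
```

The selected mask would then define the feature vector passed to the per-subregion SVR models.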

2021 ◽  
Vol 11 (3) ◽  
pp. 997
Author(s):  
Jiaping Li ◽  
Wai Lun Lo ◽  
Hong Fu ◽  
Henry Shu Hung Chung

Meteorological visibility is an important meteorological observation indicator that measures weather transparency, which matters for transport safety. Estimating visibility accurately from image characteristics is a challenging problem. This paper proposes a transfer learning method for meteorological visibility estimation based on image feature fusion. Unlike existing methods, the proposed method estimates visibility from data processing and feature extraction in selected subregions of the whole image, and therefore has a lower computation load and higher efficiency. All database images were first gray-averaged for the selection of effective subregions and feature extraction. Effective subregions are extracted around static landmark objects, which provide useful information for visibility estimation. Four different feature extraction networks (DenseNet, ResNet50, Vgg16, and Vgg19) were used for feature extraction in the subregions. The features extracted by the neural networks were then imported into the proposed support vector regression (SVR) model, which derives the estimated visibilities of the subregions. Finally, based on a weighted fusion of the visibility estimates from the subregion models, an overall visibility was estimated for the whole image. Experimental results show that the visibility estimation accuracy is more than 90%, and that the method estimates image visibility with high robustness and effectiveness.
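The final fusion step can be illustrated with inverse-error weighting, one common way to weight subregion models by their validation error (the numbers are invented; the paper derives its weights from its own error analysis):

```python
import numpy as np

# Hypothetical per-subregion visibility estimates (km) and validation errors.
estimates = np.array([9.8, 10.4, 9.5, 11.0])
errors = np.array([0.5, 0.8, 0.4, 1.5])  # lower error -> more trustworthy

weights = 1.0 / errors
weights /= weights.sum()          # normalize to sum to 1
fused = float(weights @ estimates)  # weighted fusion of subregion estimates
print("fused visibility: %.2f km" % fused)
```

The subregion with the smallest validation error (0.4) dominates, pulling the fused estimate toward 9.5 km.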


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Tsun-Kuo Lin

This paper develops a principal component analysis (PCA)-integrated algorithm for feature identification in manufacturing, based on an adaptive PCA-based scheme for identifying image features in vision-based inspection. PCA is a commonly used statistical method for pattern recognition tasks, but an effective PCA-based approach for identifying suitable image features in manufacturing has yet to be developed. Unsuitable image features tend to yield poor results when used in conventional visual inspection. Furthermore, research has revealed that the use of unsuitable or redundant features can degrade the performance of object detection. To address these problems, the adaptive PCA-based algorithm developed in this study identifies suitable image features using a support vector machine (SVM) model for inspecting various object images; this approach addresses the inherent detection problem that occurs when the extraction contains challenging image features in manufacturing processes. Experimental results indicated that the proposed algorithm can adaptively select appropriate image features. The algorithm combines image feature extraction with PCA/SVM classification to detect patterns in manufacturing, achieves high-performance detection, and outperforms existing methods.
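One way to read the adaptive idea is as a search over PCA dimensionalities scored by an SVM. This sketch uses scikit-learn's digits dataset as a stand-in for inspection images and is an illustrative simplification, not the paper's actual algorithm:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Digits images as a stand-in for manufacturing inspection images.
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Try several PCA dimensionalities and keep the best-performing one,
# loosely mirroring the adaptive feature-identification idea.
best = max(
    ((n, make_pipeline(PCA(n_components=n), SVC()).fit(Xtr, ytr).score(Xte, yte))
     for n in (5, 15, 30)),
    key=lambda t: t[1],
)
print("best n_components:", best[0], "accuracy: %.3f" % best[1])
```

In a real inspection setting, the candidate dimensionalities and the scoring data would come from the target manufacturing task rather than a benchmark dataset.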


2021 ◽  
Author(s):  
Yulong Wang ◽  
Xiaofeng Liao ◽  
Dewen Qiao ◽  
Jiahui Wu

With the rapid development of modern medical science and technology, medical image classification has become an increasingly challenging problem. In most traditional classification methods, however, image feature extraction is difficult and classifier accuracy needs to be improved. This paper therefore proposes a high-accuracy medical image classification method based on deep learning, called hybrid CQ-SVM. Specifically, we combine the advantages of a convolutional neural network (CNN) and a support vector machine (SVM) and integrate them into a novel hybrid model. In our scheme, a quantum-behaved particle swarm optimization (QPSO) algorithm is adopted to set the SVM parameters automatically, solving the SVM parameter-setting problem: the CNN works as a trainable feature extractor, and the SVM optimized by QPSO serves as a trainable classifier. The method can automatically extract features from original medical images and generate predictions. Experimental results show that the method extracts better medical image features and achieves higher classification accuracy.
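The QPSO-for-SVM-parameters idea can be sketched independently of the CNN stage (which would normally supply the features). Everything below — the digits dataset standing in for medical images, the particle counts, and the search ranges — is an illustrative assumption, and the update rule is the standard QPSO one rather than necessarily the paper's exact variant:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X, y = load_digits(return_X_y=True)
X, y = X[:400], y[:400]  # small subset to keep the search fast

def fitness(p):
    """CV accuracy of an SVM whose (C, gamma) come from particle p (log10 space)."""
    C, gamma = 10.0 ** p
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Minimal quantum-behaved PSO (QPSO): no velocity term; particles jump
# around attractors with a shrinking contraction-expansion coefficient.
n, dim, iters = 6, 2, 8
lo, hi = np.array([-1.0, -5.0]), np.array([3.0, -1.0])  # log10(C), log10(gamma)
pos = rng.uniform(lo, hi, size=(n, dim))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for t in range(iters):
    beta = 1.0 - 0.5 * t / iters        # contraction-expansion coefficient
    mbest = pbest.mean(axis=0)          # mean of all personal bests
    phi, u = rng.random((2, n, dim))
    attractor = phi * pbest + (1 - phi) * gbest
    sign = np.where(rng.random((n, dim)) < 0.5, 1.0, -1.0)
    pos = np.clip(attractor + sign * beta * np.abs(mbest - pos) * np.log(1 / u),
                  lo, hi)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best log10(C), log10(gamma):", gbest, "CV acc: %.3f" % pbest_fit.max())
```

In the full hybrid model, the raw pixel inputs here would be replaced by CNN feature vectors.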


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7251
Author(s):  
Hong Zeng ◽  
Jiaming Zhang ◽  
Wael Zakaria ◽  
Fabio Babiloni ◽  
Borghini Gianluca ◽  
...  

Electroencephalogram (EEG) is an effective indicator for detecting driver fatigue. Owing to significant differences in EEG signals across subjects and the difficulty of collecting sufficient EEG samples during driving, detecting fatigue across subjects using EEG signals remains a challenge. EasyTL is a transfer-learning model that has demonstrated strong performance in image recognition but has not yet been applied to cross-subject EEG-based tasks. In this paper, we propose an improved EasyTL-based classifier, InstanceEasyTL, to perform EEG-based analysis for cross-subject fatigue mental-state detection. Experimental results show that InstanceEasyTL not only requires less EEG data but also achieves better accuracy and robustness than EasyTL, as well as existing machine-learning models such as the Support Vector Machine (SVM), Transfer Component Analysis (TCA), Geodesic Flow Kernel (GFK), and Domain-Adversarial Neural Networks (DANN).


2020 ◽  
Author(s):  
Jing Li ◽  
Xinfang Li ◽  
Yuwen Ning

With the advent of the 5G era, the development of massive-data learning algorithms, and in-depth research on neural networks, deep learning methods are widely used in image recognition tasks. However, methods for efficiently identifying and classifying Internet of Things (IoT) images are still lacking. This paper develops an IoT image recognition system based on deep learning: it uses convolutional neural networks (CNN) to construct image recognition algorithms, and uses principal component analysis (PCA) and linear discriminant analysis (LDA) to extract image features. The effectiveness of the PCA and LDA image recognition methods is verified through experiments, and the best image recognition result is obtained when the image feature dimension is 25. The main classifier used for image recognition in the IoT is the support vector machine (SVM); the SVM and CNN are trained on the database used in this paper, their effectiveness for image recognition is checked, and the trained classifiers are then used for image recognition. A secondary-classification IoT image recognition method based on CNN and SVM is found to improve the accuracy of image recognition. The secondary classification method combines the characteristics of the SVM and CNN image recognition methods, and experiments verify that it provides an effective improvement in accuracy.
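A minimal sketch of the PCA-versus-LDA comparison in front of an SVM, using scikit-learn's digits dataset as a stand-in for IoT images (the paper reports 25 dimensions as best; LDA on 10 classes caps at 9 components, so 9 is used for both methods here):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Compare PCA and LDA as feature extractors in front of an SVM classifier.
results = {}
for name, reducer in [("PCA", PCA(n_components=9)),
                      ("LDA", LinearDiscriminantAnalysis(n_components=9))]:
    Ftr = reducer.fit_transform(Xtr, ytr)  # LDA is supervised; PCA ignores y
    Fte = reducer.transform(Xte)
    results[name] = SVC().fit(Ftr, ytr).score(Fte, yte)

print(results)
```

LDA uses the class labels to find discriminative directions, while PCA only maximizes variance, which is why the two can give noticeably different accuracies at the same dimensionality.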


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1911
Author(s):  
Kai Xie ◽  
Chao Wang ◽  
Peng Wang

Ontology plays a critical role in knowledge engineering and knowledge graphs (KGs). However, building an ontology is still a nontrivial task. Ontology learning aims to generate domain ontologies from various kinds of resources using natural language processing and machine learning techniques. One major challenge of ontology learning is reducing the labeling work for new domains. This paper proposes an ontology learning method based on transfer learning, namely TF-Mnt, which aims to learn knowledge for new domains that have limited labeled data. The paper selects Web data as the learning source and defines various features that utilize abundant textual information and heterogeneous semi-structured information. A new transfer learning model, TF-Mnt, is then proposed, and its parameter estimation is also addressed. Although the feature distributions of the two domains differ, TF-Mnt can measure their relevance by calculating the correlation coefficient. Moreover, TF-Mnt can efficiently transfer knowledge from the source domain to the target domain while avoiding negative transfer. Experiments on real-world datasets show that TF-Mnt achieves promising learning performance for new domains despite the small number of labels, by learning knowledge from a suitable existing domain that is selected automatically.
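The correlation-based domain-relevance check can be illustrated with Pearson's coefficient on hypothetical aggregate feature statistics (the vectors and the 0.5 threshold are invented for illustration; the paper's actual statistic and decision rule may differ):

```python
import numpy as np

# Hypothetical mean feature activations for a source and a target domain.
source = np.array([0.9, 0.1, 0.4, 0.7, 0.3])
target = np.array([0.8, 0.2, 0.5, 0.6, 0.4])

r = np.corrcoef(source, target)[0, 1]  # Pearson correlation coefficient
print("domain relevance: %.3f" % r)

# Transfer only when the domains are sufficiently correlated,
# guarding against negative transfer from an unrelated source domain.
transfer = r > 0.5
```

A highly correlated source domain is then preferred as the transfer source, while low-correlation domains are skipped to avoid negative transfer.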


Content-based image retrieval (CBIR) uses different feature descriptors for image search and retrieval. For image retrieval from huge image repositories, the query image features are extracted and compared with the contents of a feature repository. The best-matching image is found and retrieved from the database. This matching is based on the distance calculated between the feature vector of the query image and the extracted feature vectors of the images in the database. Various distance measures are used for comparing image feature vectors. This paper compares a set of distance measures using a set of features used for CBIR. The city-block distance measure gives the best results for CBIR.
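A toy sketch of the distance-based matching step, comparing city-block (L1) and Euclidean (L2) distances over a made-up three-image feature repository:

```python
import numpy as np

def cityblock(a, b):
    """City-block (L1 / Manhattan) distance between feature vectors."""
    return np.abs(a - b).sum()

def euclidean(a, b):
    """Euclidean (L2) distance between feature vectors."""
    return np.sqrt(((a - b) ** 2).sum())

# Toy feature repository: each row is one image's feature vector.
repo = np.array([[0.1, 0.9, 0.3],
                 [0.8, 0.2, 0.5],
                 [0.3, 0.7, 0.6]])
query = np.array([0.15, 0.85, 0.35])

best = {}
for name, dist in [("city-block", cityblock), ("euclidean", euclidean)]:
    dists = [dist(query, row) for row in repo]
    best[name] = int(np.argmin(dists))  # index of the best-matching image

print(best)
```

With real descriptors the two measures can rank images differently, which is exactly the behavior the paper's comparison quantifies.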


2019 ◽  
Vol 36 (10) ◽  
pp. 1945-1956
Author(s):  
Qian Li ◽  
Shaoen Tang ◽  
Xuan Peng ◽  
Qiang Ma

Atmospheric visibility is an important element of meteorological observation. With existing methods, defining image features that reflect visibility accurately and comprehensively is difficult. This paper proposes a visibility detection method based on transfer learning using deep convolutional neural networks (DCNN) that addresses the lack of sufficiently large labeled visibility datasets. In the proposed method, each image is first divided into several subregions, which are encoded to extract visual features using a pretrained no-reference image quality assessment neural network. A support vector regression model is then trained to map the extracted features to visibility. The fusion weight of each subregion is evaluated according to the error analysis of the regression model. Finally, the neural network is fine-tuned to better fit the visibility detection problem, in turn using the current detection results. Experimental results demonstrate that the detection accuracy of the proposed method exceeds 90% and satisfies the requirements of daily observation applications.


Algorithms ◽  
2019 ◽  
Vol 12 (12) ◽  
pp. 271 ◽  
Author(s):  
Yuntian Feng ◽  
Guoliang Wang ◽  
Zhipeng Liu ◽  
Runming Feng ◽  
Xiang Chen ◽  
...  

To address the difficulty of dealing with unknown radar emitters during radar emitter identification, we propose an unknown radar emitter identification method based on semi-supervised and transfer learning. First, we construct a support vector machine (SVM) model based on transfer learning, using the information from labeled samples in the source domain for training in the target domain; this addresses the problem that the training and testing data do not satisfy the same-distribution hypothesis. Then, we design a semi-supervised co-training algorithm that uses the information in unlabeled samples to enhance training, addressing the problem that insufficient labeled data results in inadequate training of the classifier. Finally, we combine the transfer learning method with the semi-supervised learning method for the unknown radar emitter identification task. Simulation experiments show that the proposed method can effectively identify unknown radar emitters and maintains high identification accuracy within a certain measurement error range.
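The co-training idea can be sketched with two SVM "views" over synthetic data; all sizes, the confidence-based selection rule, and the feature split are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           random_state=0)
Xtr, ytr, Xte, yte = X[:400], y[:400], X[400:], y[400:]
v1, v2 = Xtr[:, :10], Xtr[:, 10:]  # two feature "views" for co-training

labeled = np.zeros(len(ytr), bool)
labeled[rng.choice(len(ytr), 40, replace=False)] = True  # few labeled samples
pseudo = ytr.copy()  # pseudo-label array; only labeled entries are trusted

# Co-training: each view's SVM pseudo-labels the unlabeled samples it is
# most confident about, and those samples join the shared training pool.
for _ in range(5):
    for view in (v1, v2):
        clf = SVC(probability=True).fit(view[labeled], pseudo[labeled])
        unl = np.flatnonzero(~labeled)
        if unl.size == 0:
            break
        proba = clf.predict_proba(view[unl])
        top = np.argsort(proba.max(axis=1))[-10:]  # 10 most confident samples
        pick = unl[top]
        pseudo[pick] = clf.classes_[proba[top].argmax(axis=1)]
        labeled[pick] = True

acc = SVC().fit(Xtr[labeled], pseudo[labeled]).score(Xte, yte)
print("test accuracy: %.3f" % acc)
```

Each view compensates for the other's blind spots, which is why co-training can out-train a single classifier fit on the 40 original labels alone.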


2020 ◽  
Vol 2020 ◽  
pp. 1-20
Author(s):  
Jaya H. Dewan ◽  
Sudeep D. Thepade

Billions of multimedia files are created and shared on the web, mainly on social media websites. The explosive increase in multimedia data, especially images and videos, has created the problem of searching and retrieving relevant data from archive collections. In the last few decades, the complexity of image data has increased exponentially. Text-based image retrieval techniques do not meet users' needs because of the gap between image contents and the text annotations associated with an image. Various methods have been proposed in recent years to tackle this semantic gap and retrieve images similar to the user's query. Image retrieval based on image contents has attracted many researchers, as it uses visual content such as color, texture, and shape features. Low-level image features represent the image contents as feature vectors. The query image feature vector is compared with the feature vectors of the dataset images to retrieve similar images. The main aim of this article is to appraise the image retrieval methods based on feature extraction, description, and matching that have been presented in the last 10–15 years, covering low-level feature contents and local features, and to propose promising future research directions.

