An Adversarial Generative Network for Crop Classification from Remote Sensing Timeseries Images

2020 ◽  
Vol 13 (1) ◽  
pp. 65
Author(s):  
Jingtao Li ◽  
Yonglin Shen ◽  
Chao Yang

Due to the increasing demand for monitoring crop conditions and food production, identifying crops from remote sensing images is a challenging and meaningful task. State-of-the-art crop classification models are mostly built on supervised classifiers such as support vector machines (SVM), convolutional neural networks (CNN), and long short-term memory (LSTM) neural networks. Meanwhile, as an unsupervised generative model, the generative adversarial network (GAN) is rarely used for classification tasks in agricultural applications. In this work, we propose a new method that combines GAN, CNN, and LSTM models to classify corn and soybeans from remote sensing time-series images, in which the GAN's discriminator is used as the final classifier. The method remains feasible when training samples are scarce, and it fully exploits the spectral, spatial, and phenological features of crops from satellite data. Classification experiments were conducted on corn, soybeans, and other crops. To verify the effectiveness of the proposed method, comparisons with SVM, SegNet, CNN, and LSTM models and their combinations were also conducted. The results show that our method achieved the best classification results, with a Kappa coefficient of 0.7933 and an overall accuracy of 0.86. Experiments in other study areas also demonstrate the extensibility of the proposed method.
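The reported Kappa coefficient and overall accuracy can both be derived from the classification confusion matrix. A minimal sketch (the 3-class matrix below is hypothetical, not the paper's data):

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    where cm[i, j] counts pixels of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1.0 - pe)

# Hypothetical 3-class matrix: corn, soybeans, other
cm = [[80, 10, 10],
      [8, 85, 7],
      [12, 9, 79]]
oa, kappa = overall_accuracy_and_kappa(cm)
```

Kappa discounts the agreement expected by chance, which is why it is lower than the raw overall accuracy.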

Electronics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 721 ◽  
Author(s):  
Barath Narayanan Narayanan ◽  
Venkata Salini Priyamvada Davuluru

With the advancement of technology, there is a growing need to classify malware programs that could potentially harm any computer system and/or smaller devices. In this research, an ensemble classification system comprising convolutional and recurrent neural networks is proposed to distinguish malware programs. Microsoft's Malware Classification Challenge (BIG 2015) dataset with nine distinct classes is utilized for this study. This dataset contains an assembly file and a compiled file for each malware program. Compiled files are visualized as images and classified using Convolutional Neural Networks (CNNs). Assembly files consist of machine-language opcodes that are converted into sequences and distinguished among classes using Long Short-Term Memory (LSTM) networks. In addition, features are extracted from these architectures (CNNs and LSTM) and classified using a support vector machine or logistic regression. An accuracy of 97.2% is achieved using the LSTM network for distinguishing assembly files, 99.4% using the CNN architecture for classifying compiled files, and an overall accuracy of 99.8% using the proposed ensemble approach, thereby setting a new benchmark. An independent and automated classification system for assembly and/or compiled files gives anti-malware industry experts the flexibility to choose the type of system best suited to their available computational resources.
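Visualizing a compiled file as an image is commonly done by reinterpreting its raw bytes as grayscale pixel rows, as in the broader malware-as-image literature; the abstract does not specify the exact conversion, so the row width and sample bytes below are assumptions:

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 64) -> np.ndarray:
    """Reshape raw bytes into a 2-D grayscale image (one byte per pixel),
    zero-padding the last row to a full width."""
    arr = np.frombuffer(data, dtype=np.uint8)
    height = -(-len(arr) // width)  # ceiling division
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[:len(arr)] = arr
    return padded.reshape(height, width)

# Illustrative input: a repeated 4-byte pattern standing in for a binary
img = bytes_to_image(b"\x4d\x5a\x90\x00" * 300, width=64)
```

The resulting 2-D array can be fed directly to a CNN, since structurally similar binaries tend to produce visually similar byte textures.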


2020 ◽  
Vol 12 (3) ◽  
pp. 408
Author(s):  
Małgorzata Krówczyńska ◽  
Edwin Raczko ◽  
Natalia Staniszewska ◽  
Ewa Wilk

Due to the pathogenic nature of asbestos, a statutory ban on asbestos-containing products has been in place in Poland since 1997. In order to protect human health and the environment, it is crucial to estimate the quantity of asbestos–cement products in use; it has been estimated that about 90% of them are roof coverings. Different methods are used to estimate the amount of asbestos–cement products, such as the use of indicators, field inventories, remote sensing data, and multi- and hyperspectral images; the latter are used for relatively small areas. Other methods are being sought for the reliable estimation of the quantity of asbestos-containing products, as well as their spatial distribution. The objective of this paper is to present the use of convolutional neural networks for the identification of asbestos–cement roofing on aerial photographs in natural color (RGB) and color infrared (CIR) compositions. The study was conducted for the Chęciny commune. Aerial photographs with a spatial resolution of 25 cm in RGB and CIR compositions were used, and field studies were conducted to verify the data and to develop a database for training Convolutional Neural Networks (CNNs). Network training was carried out using the TensorFlow and R-Keras libraries in the R programming environment. The classification used a convolutional neural network consisting of two convolutional blocks, a spatial dropout layer, and two blocks of fully connected perceptrons. Asbestos–cement roofing was classified with a producer's accuracy of 89% and an overall accuracy of 87% and 89%, depending on the image composition used. Previous attempts at identifying asbestos–cement roofing have focused primarily on hyperspectral data and multispectral imagery, usually employing classification algorithms such as Spectral Angle Mapper, Support Vector Machine, object classification, Spectral Feature Fitting, and decision trees. Studies by other researchers showed that low spectral resolution only allowed a rough classification of roofing materials. The use of one coherent method would allow data comparison between regions. Determining the amount of asbestos–cement products in use is important for assessing environmental exposure to asbestos fibres, determining patterns of disease, and ultimately modelling potential solutions to counteract threats.
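Producer's accuracy (per-class recall, read along the rows of the confusion matrix) differs from user's accuracy (precision, read along the columns) and from overall accuracy. A small sketch with a hypothetical two-class matrix (asbestos–cement vs. other roofing):

```python
import numpy as np

def producers_and_users_accuracy(cm):
    """Per-class producer's accuracy (row-wise) and user's accuracy
    (column-wise) from a confusion matrix where cm[i, j] counts
    reference class i mapped to predicted class j."""
    cm = np.asarray(cm, dtype=float)
    producers = np.diag(cm) / cm.sum(axis=1)  # recall per reference class
    users = np.diag(cm) / cm.sum(axis=0)      # precision per mapped class
    return producers, users

# Hypothetical 2-class matrix: asbestos-cement roofing, other roofing
cm = [[89, 11],
      [13, 87]]
prod, use = producers_and_users_accuracy(cm)
```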


Author(s):  
Lian-Zhi Huo ◽  
Ping Tang

Remote sensing (RS) technology provides essential data for monitoring the Earth. To fully utilize the data, image classification is often needed to convert data into information. The success of image classification methods greatly depends on the quality and quantity of training samples. To select more informative training samples, this paper proposes a new active learning (AL) technique for the classification of RS images based on graph theory. A new diversity criterion is proposed based on geometrical features of the support vector machine (SVM) outputs. The diversity selection procedure is converted into the densest k-subgraph (DkS) maximization problem in graph theory, which is solved by a greedy algorithm. The proposed technique is compared with competing methods adopted in the RS community. Experimental tests are performed on very high resolution (VHR) multispectral and hyperspectral images. The results demonstrate that the proposed technique leads to comparable or even better classification accuracies with respect to competing methods on the two datasets.
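The abstract does not detail its greedy algorithm; a standard greedy heuristic for the densest k-subgraph problem is min-degree peeling, sketched here on a plain adjacency matrix (the graph below is illustrative, not the paper's diversity graph):

```python
import numpy as np

def greedy_densest_k_subgraph(adj, k):
    """Min-degree peeling heuristic for the densest k-subgraph problem:
    repeatedly drop the vertex of minimum degree in the remaining
    subgraph until only k vertices are left."""
    adj = np.asarray(adj, dtype=float)
    alive = list(range(len(adj)))
    while len(alive) > k:
        sub = adj[np.ix_(alive, alive)]
        alive.pop(int(np.argmin(sub.sum(axis=1))))  # drop min-degree vertex
    return alive

# Illustrative graph: a triangle {0, 1, 2} with a pendant path 0-3-4
adj = np.array([[0, 1, 1, 1, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 0, 0],
                [1, 0, 0, 0, 1],
                [0, 0, 0, 1, 0]])
dense_nodes = greedy_densest_k_subgraph(adj, k=3)
```

The peeling heuristic keeps the triangle, the densest 3-vertex subgraph; in the AL setting, edge weights would encode pairwise diversity between candidate samples.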


2021 ◽  
Vol 27 (4) ◽  
pp. 230-245
Author(s):  
Chih-Chiang Wei

Strong wind during extreme weather conditions (e.g., strong winds during typhoons) is one of the natural factors that cause the collapse of frame-type scaffolds used in façade work. This study developed an alert system for determining whether a scaffold structure can withstand the stress of the wind force. Conceptually, the scaffold-collapse warning system developed in this study contains three modules. The first module establishes wind velocity prediction models; this study employed various deep learning and machine learning techniques, namely deep neural networks, long short-term memory neural networks, support vector regression, random forests, and k-nearest neighbors. The second module analyzes the wind force on the scaffolds. The third module evaluates whether the scaffold will collapse. The study area was Taichung City, Taiwan, and meteorological data were collected from ground stations from 2012 to 2019. Results revealed that the system successfully predicted possible scaffold collapse 1 to 6 h in advance and effectively issued timely warnings. Overall, the warning system can provide practical warning information about the potential destruction of scaffolds to construction teams, helping them reduce the risk of damage.
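The second and third modules can be illustrated with the standard dynamic-pressure formula q = ½ρv²; the drag coefficient, structural capacity, and forecast speeds below are illustrative assumptions, not values from the study:

```python
RHO_AIR = 1.225  # air density, kg/m^3 at sea level

def wind_pressure(v):
    """Dynamic wind pressure q = 0.5 * rho * v**2 in Pa, v in m/s."""
    return 0.5 * RHO_AIR * v ** 2

def collapse_alert(hourly_speeds, capacity_pa, drag_coeff=1.2):
    """Return the first forecast hour (1-based, within the 1-6 h horizon)
    whose wind pressure exceeds the assumed structural capacity, or None."""
    for hour, v in enumerate(hourly_speeds, start=1):
        if drag_coeff * wind_pressure(v) > capacity_pa:
            return hour
    return None

# Illustrative 6-hour wind speed forecast (m/s) during a typhoon
alert_hour = collapse_alert([10, 20, 35, 45, 50, 48], capacity_pa=900.0)
```

In the paper's pipeline, the hourly speeds would come from the module-one prediction models rather than being given directly.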


Author(s):  
M. Rußwurm ◽  
M. Körner

Land cover classification (LCC) is a central and wide field of research in earth observation and has already put forth a variety of classification techniques. Many approaches are based on classification techniques considering observations at certain points in time. However, some land cover classes, such as crops, change their spectral characteristics due to environmental influences and thus cannot be monitored effectively with classical mono-temporal approaches. Nevertheless, these temporal observations should be utilized to benefit the classification process. After extensive research has been conducted on modeling temporal dynamics by spectro-temporal profiles using vegetation indices, we propose a deep learning approach to utilize these temporal characteristics for classification tasks. In this work, we show how long short-term memory (LSTM) neural networks can be employed for crop identification purposes with SENTINEL 2A observations from large study areas and label information provided by local authorities. We compare these temporal neural network models, i.e., LSTM and recurrent neural network (RNN) models, with a classical non-temporal convolutional neural network (CNN) model and an additional support vector machine (SVM) baseline. With our rather straightforward LSTM variant, we exceeded state-of-the-art classification performance, thus opening promising potential for further research.
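The spectro-temporal profiles mentioned above are typically vegetation-index sequences; for example, a per-pixel NDVI profile over the acquisition dates forms the kind of input sequence an LSTM would consume (the reflectance values below are hypothetical):

```python
import numpy as np

def ndvi_profile(nir, red):
    """Per-date NDVI = (NIR - red) / (NIR + red): one spectro-temporal
    profile per pixel across the image time series."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Hypothetical reflectances for one crop pixel over four acquisition dates
nir = [0.30, 0.45, 0.60, 0.40]
red = [0.20, 0.15, 0.10, 0.20]
profile = ndvi_profile(nir, red)
```

The NDVI rise and fall over the season is exactly the phenological signal that mono-temporal classifiers cannot exploit.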


2019 ◽  
Vol 9 (8) ◽  
pp. 1687 ◽  
Author(s):  
Huafeng Qin ◽  
Peng Wang

Finger-vein biometrics has been extensively investigated for personal verification. One challenge is that finger-vein acquisition is affected by many factors, resulting in many ambiguous regions in the finger-vein image where the separability between vein and background is poor. Despite recent advances in finger-vein pattern segmentation, current solutions still lack the robustness to extract finger-vein features from raw images because they do not take into account the complex spatial dependencies of the vein pattern. This paper proposes a deep learning model to extract vein features by combining Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models. Firstly, we automatically assign labels based on a combination of state-of-the-art handcrafted finger-vein image segmentation techniques, and generate various sequences for each labeled pixel along different directions. Secondly, several Stacked Convolutional Neural Network and Long Short-Term Memory (SCNN-LSTM) models are independently trained on the resulting sequences. The outputs of the various SCNN-LSTMs form a complementary and over-complete representation and are jointly fed into a Probabilistic Support Vector Machine (P-SVM) to predict the probability of each pixel being foreground (i.e., a vein pixel) given the sequences centered on it. Thirdly, we propose a supervised encoding scheme to extract the binary vein texture; a threshold is automatically computed by maximizing the separation between the inter-class and intra-class distances. In our approach, the CNN learns robust features for vein texture representation and the LSTM captures the complex spatial dependencies of vein patterns, so the pixels in any region of a test image can be classified effectively. In addition, supervised information is employed to encode the vein patterns, so the resulting encoded images contain more discriminating features. Experimental results on a public finger-vein database show that the proposed approach significantly improves finger-vein verification accuracy.
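The abstract does not spell out its threshold rule beyond maximizing class separation; Otsu's method is one standard instance of choosing a cut that maximizes between-class variance, sketched here on a probability map (the bimodal data is synthetic, and this is an assumed stand-in for the paper's exact criterion):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Choose the threshold in [0, 1] maximizing the between-class
    variance, i.e. the separation between the two classes of values."""
    hist, edges = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = 0.0, -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()     # class weights below/above cut
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0  # class means
        m1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2         # between-class variance
        if var > best_var:
            best_var, best_t = var, edges[i]
    return best_t

# Synthetic bimodal P-SVM probability map: background near 0.2, veins near 0.8
probs = np.concatenate([np.full(500, 0.2), np.full(500, 0.8)])
t = otsu_threshold(probs)
```

Thresholding the per-pixel foreground probabilities at `t` yields the binary vein texture.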


2020 ◽  
Vol 10 (17) ◽  
pp. 5792 ◽  
Author(s):  
Biserka Petrovska ◽  
Tatjana Atanasova-Pacemska ◽  
Roberto Corizzo ◽  
Paolo Mignone ◽  
Petre Lameski ◽  
...  

Remote Sensing (RS) image classification has recently attracted great attention for its application in different tasks, including environmental monitoring, battlefield surveillance, and geospatial object detection. Best practices for these tasks often involve transfer learning from pre-trained Convolutional Neural Networks (CNNs). A common approach in the literature is to employ CNNs for feature extraction and subsequently train classifiers on the extracted features. In this paper, we propose the adoption of transfer learning by fine-tuning pre-trained CNNs for end-to-end aerial image classification. Our approach performs feature extraction from the fine-tuned neural networks and remote sensing image classification with a Support Vector Machine (SVM) model with linear and Radial Basis Function (RBF) kernels. To tune the learning rate hyperparameter, we employ a linear decay learning rate scheduler as well as cyclical learning rates. Moreover, in order to mitigate the overfitting problem of pre-trained models, we apply label smoothing regularization. For the fine-tuning and feature extraction process, we adopt the inception-based CNNs Inception-v3 and Xception, as well as the residual-based networks ResNet50 and DenseNet121. We present extensive experiments on two real-world remote sensing image datasets, AID and NWPU-RESISC45. The results show that the proposed method achieves classification accuracy of up to 98%, outperforming other state-of-the-art methods.
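Label smoothing regularization replaces each one-hot target with a mixture of the one-hot vector and the uniform distribution, discouraging over-confident predictions. A minimal sketch (ε = 0.1 is a typical value, not necessarily the one used in the paper):

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: (1 - eps) * one_hot + eps / num_classes,
    pulling each target slightly toward the uniform distribution."""
    one_hot = np.asarray(one_hot, dtype=float)
    return (1.0 - eps) * one_hot + eps / one_hot.shape[-1]

# Four-class one-hot target smoothed with eps = 0.1
y = smooth_labels([0.0, 1.0, 0.0, 0.0])
```

Training against the smoothed targets with cross-entropy keeps the network's logits from growing unboundedly, which helps mitigate overfitting when fine-tuning large pre-trained models.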


2018 ◽  
Vol 10 (2) ◽  
pp. 75 ◽  
Author(s):  
Shunping Ji ◽  
Chi Zhang ◽  
Anjian Xu ◽  
Yun Shi ◽  
Yulin Duan
