Urban management image classification approach based on deep learning

2021 ◽  
Vol 13 (5) ◽  
pp. 347-360
Author(s):  
Qinqing Kang ◽  
Xiong Ding

Based on the case images in the smart city management system (hereinafter "Smart City Management"), this paper exploits deep learning's ability to learn image features on its own and proposes an improved deep convolutional neural network algorithm that classifies case images quickly and accurately, completing the automatic classification of cases in the city management system. ZCA (zero-phase component analysis) whitening is used to reduce the correlation between image data features; an eight-layer convolutional neural network model is built to classify the whitened images; rectified linear units (ReLU) are used in the convolutional layers to accelerate training; and dropout is applied in the pooling layer to prevent overfitting. The back-propagation (BP) algorithm is used for optimization in the network fine-tuning stage, improving the robustness of the algorithm. Using this method, a binary classification experiment was conducted on two types of case images, road traffic and city appearance environment. The accuracy reached 97.5% and the F1-score reached 0.98, exceeding the performance of LSVM (Lagrangian support vector machine), SAE (sparse autoencoder), and a traditional CNN (convolutional neural network). A four-class experiment on four types of cases, electric vehicles, littering, illegal parking of motor vehicles, and mess around garbage bins, reached an accuracy of 90.5% and an F1-score of 0.91, again exceeding LSVM, SAE, the traditional CNN, and other methods.
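The ZCA-whitening preprocessing step the paper applies before the CNN can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation; the `eps` regularizer and array shapes are assumptions:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten a batch of flattened images.

    X: (n_samples, n_features) array. Returns data of the same shape
    whose features are decorrelated with roughly unit variance, while
    staying as close as possible to the original images (zero-phase).
    """
    X = X - X.mean(axis=0)                         # zero-center each feature
    cov = np.cov(X, rowvar=False)                  # feature covariance matrix
    U, S, _ = np.linalg.svd(cov)                   # eigendecomposition
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T  # zero-phase whitening matrix
    return X @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))      # stand-in for flattened image patches
Xw = zca_whiten(X)
cov_w = np.cov(Xw, rowvar=False)    # close to the identity after whitening
```

Unlike PCA whitening, the extra rotation back through `U.T` keeps whitened patches visually similar to the inputs, which is why ZCA is the usual choice before a convolutional network.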

Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 742
Author(s):  
Canh Nguyen ◽  
Vasit Sagan ◽  
Matthew Maimaitiyiming ◽  
Maitiniyazi Maimaitijiang ◽  
Sourav Bhadra ◽  
...  

Early detection of grapevine viral diseases is critical for early intervention to prevent the disease from spreading to the entire vineyard. Hyperspectral remote sensing can potentially detect and quantify viral diseases in a nondestructive manner. This study utilized hyperspectral imagery at the plant level to identify and classify grapevines inoculated with the newly discovered DNA virus grapevine vein-clearing virus (GVCV) at the early asymptomatic stages. An experiment was set up at a test site at South Farm Research Center, Columbia, MO, USA (38.92° N, 92.28° W), with two grapevine groups, healthy and GVCV-infected, while other conditions were controlled. Images of each vine were captured by a SPECIM IQ 400–1000 nm hyperspectral sensor (Oulu, Finland). Hyperspectral images were calibrated and preprocessed to retain only grapevine pixels. A statistical approach was employed to discriminate the reflectance spectra of healthy and GVCV-infected vines. Disease-centric vegetation indices (VIs) were established and explored in terms of their importance to classification power. Pixel-wise (spectral features) classification was performed in parallel with image-wise (joint spatial–spectral features) classification within a framework involving deep learning architectures and traditional machine learning.
The results showed that: (1) the discriminative wavelength regions included the 900–940 nm range in the near-infrared (NIR) region in vines 30 days after sowing (DAS) and the entire visual (VIS) region of 400–700 nm in vines 90 DAS; (2) the normalized pheophytization index (NPQI), fluorescence ratio index 1 (FRI1), plant senescence reflectance index (PSRI), anthocyanin index (AntGitelson), and water stress and canopy temperature (WSCT) measures were the most discriminative indices; (3) the support vector machine (SVM) was effective in VI-wise classification with smaller feature spaces, while the random forest (RF) classifier performed better in pixel-wise and image-wise classification with larger feature spaces; and (4) the automated 3D convolutional neural network (3D-CNN) feature extractor provided promising results over the 2D convolutional neural network (2D-CNN) in learning features from hyperspectral data cubes with a limited number of samples.


2021 ◽  
Vol 16 ◽  
Author(s):  
Farida Alaaeldin Mostafa ◽  
Yasmine Mohamed Afify ◽  
Rasha Mohamed Ismail ◽  
Nagwa Lotfy Badr

Background: Protein sequence analysis helps in the prediction of protein functions. As the number of proteins increases, analyzing them and studying the similarity between them poses a challenge to bioinformaticians. Most existing protein analysis methods use the Support Vector Machine. Deep learning has received little attention in protein analysis, and little work has focused on protein disease classification. Objective: The contribution of this paper is a deep learning approach that classifies protein diseases based on protein descriptors. Methods: Different protein descriptors are used and decomposed into modified feature descriptors. Uniquely, we introduce a Convolutional Neural Network model to learn and classify protein diseases. The modified feature descriptors are fed to the Convolutional Neural Network model on a dataset of 1563 protein sequences classified into 3 disease classes: AIDS, tumor suppressor, and proto-oncogene. Results: The use of the modified feature descriptors shows a significant increase in the performance of the Convolutional Neural Network model over the Support Vector Machine with different kernel functions. One modified feature descriptor improved by 19.8%, 27.9%, 17.6%, 21.5%, 17.3%, and 22% on the evaluation metrics Area Under the Curve, Matthews Correlation Coefficient, Accuracy, F1-score, Recall, and Precision, respectively. Conclusion: The results show that the predictions with the proposed modified feature descriptors significantly surpass those of the Support Vector Machine model.
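The Matthews Correlation Coefficient cited in the results can be computed directly from the confusion counts; below is a minimal numpy sketch for the binary case (the three-class dataset in the paper would use a generalized multi-class form):

```python
import numpy as np

def matthews_corrcoef_binary(y_true, y_pred):
    """Matthews Correlation Coefficient for binary labels (0/1).

    MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)),
    ranging from -1 (total disagreement) to +1 (perfect prediction),
    and robust to class imbalance, unlike plain accuracy.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

mcc = matthews_corrcoef_binary([1, 0, 1, 1, 0], [1, 0, 1, 1, 0])  # perfect -> 1.0
```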


2018 ◽  
Vol 7 (11) ◽  
pp. 418 ◽  
Author(s):  
Tian Jiang ◽  
Xiangnan Liu ◽  
Ling Wu

Accurate and timely information about rice planting areas is essential for crop yield estimation, global climate change studies, and agricultural resource management. In this study, we present a novel pixel-level classification approach that uses a convolutional neural network (CNN) model to extract features from the enhanced vegetation index (EVI) time-series curve for classification. The goal is to explore the practicability of deep learning techniques for rice recognition in complex landscape regions, where rice is easily confused with its surroundings, using mid-resolution remote sensing images. A transfer learning strategy is utilized to fine-tune a pre-trained CNN model and obtain the temporal features of the EVI curve. The support vector machine (SVM), a traditional machine learning approach, is also implemented in the experiment. Finally, we evaluate the accuracy of the two models. Results show that our model performs better than the SVM, with overall accuracies of 93.60% and 91.05%, respectively. Therefore, this technique is appropriate for estimating rice planting areas in southern China on the basis of a pre-trained CNN model using time-series data, and more opportunities and potential can be found for crop classification with remote sensing and deep learning techniques in future studies.
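The EVI values that form the input time-series curve are computed from surface reflectance bands; here is a minimal numpy sketch using the standard MODIS/Landsat coefficients (the toy reflectance values are invented for illustration):

```python
import numpy as np

def evi(nir, red, blue):
    """Enhanced Vegetation Index from surface-reflectance bands (0-1 scale).

    EVI = 2.5 * (NIR - Red) / (NIR + 6*Red - 7.5*Blue + 1)
    The blue-band term corrects for aerosol influences; the +1 term
    adjusts for the canopy background.
    """
    nir, red, blue = (np.asarray(b, dtype=float) for b in (nir, red, blue))
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# a toy per-pixel EVI time series across 8 acquisition dates: the kind of
# 1-D curve that would be fed to the CNN for pixel-level classification
nir  = np.array([0.30, 0.35, 0.45, 0.55, 0.60, 0.50, 0.40, 0.30])
red  = np.array([0.10, 0.09, 0.07, 0.05, 0.04, 0.06, 0.08, 0.10])
blue = np.array([0.05] * 8)
curve = evi(nir, red, blue)   # peaks mid-season as the canopy closes
```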


2021 ◽  
Vol 9 ◽  
Author(s):  
Ashwini K ◽  
P. M. Durai Raj Vincent ◽  
Kathiravan Srinivasan ◽  
Chuan-Yu Chang

Neonatal infants communicate with us through cries, and the cry signals have distinct patterns depending on the purpose of the cry. For audio signals, preprocessing, feature extraction, and feature selection require expert attention and considerable effort. Deep learning techniques extract and select the most important features automatically, but require an enormous amount of data for effective classification. This work discriminates neonatal cries into pain, hunger, and sleepiness. The neonatal cry audio signals are transformed into spectrogram images using the short-time Fourier transform (STFT), and a deep convolutional neural network (DCNN) takes the spectrogram images as input. The features obtained from the convolutional neural network are passed to a support vector machine (SVM) classifier, which classifies the neonatal cries. This work combines the advantages of machine learning and deep learning techniques to get the best results even with a moderate number of data samples. The experimental results show that CNN-based feature extraction with an SVM classifier provides promising results. Comparing the SVM kernel techniques, namely radial basis function (RBF), linear, and polynomial, SVM-RBF provides the highest accuracy: the kernel-based infant cry classification system achieves 88.89% accuracy.
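The STFT spectrogram step can be illustrated with a hand-rolled numpy version. The frame length, hop size, and sampling rate below are assumptions for the sketch, not the paper's settings:

```python
import numpy as np

def stft_spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform.

    The signal is sliced into overlapping Hann-windowed frames; each frame
    is mapped to the frequency domain with a real FFT. The resulting 2-D
    array (frames x frequency bins) is the 'image' a DCNN would consume.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))   # (n_frames, frame_len//2 + 1)

fs = 8000                                   # assumed sampling rate, Hz
t = np.arange(fs) / fs                      # 1 s of samples
tone = np.sin(2 * np.pi * 440 * t)          # 440 Hz test tone
spec = stft_spectrogram(tone)
peak_bin = spec.mean(axis=0).argmax()       # strongest frequency bin
peak_hz = peak_bin * fs / 256               # bin index -> Hz
```

For a pure 440 Hz tone the energy concentrates in the bin nearest 440 Hz, confirming the time-frequency layout the CNN sees.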


Author(s):  
Pramod Sekharan Nair ◽  
Tsrity Asefa Berihu ◽  
Varun Kumar

Gangrene is one of the deadliest diseases on the globe, caused by a lack of blood supply to body parts or by infection. It most often affects body parts such as the fingers, limbs, and toes, but there are many cases involving muscles and organs. In this paper, gangrene disease classification is performed on given high-resolution images. A convolutional neural network (CNN) is used for feature extraction on the disease images. The first layer of the convolutional neural network captures elementary image features such as dots, edges, and blobs. The intermediate (hidden) layers extract detailed image features such as shape, brightness, contrast, and color. Finally, the CNN-extracted features are given to a Support Vector Machine to classify the gangrene disease. The experimental results show that the approach adopted in this study performs well and is acceptable.
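What the first convolutional layer does when it "captures edges" can be illustrated with a plain 2-D convolution against a hand-picked edge kernel. This is a toy sketch: a trained CNN learns such kernels from data rather than using fixed ones:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 2-D convolution with 'valid' padding: the basic
    sliding-window operation a CNN's first layer applies to an image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# a vertical-edge (Sobel-like) kernel: the kind of elementary feature
# detector the first convolutional layer tends to learn
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# toy image: dark left half, bright right half -> one vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0
response = conv2d_valid(img, sobel_x)
edge_col = np.abs(response).sum(axis=0).argmax()  # response peaks at the edge
```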


2021 ◽  
Vol 13 (19) ◽  
pp. 3953
Author(s):  
Patrick Clifton Gray ◽  
Diego F. Chamorro ◽  
Justin T. Ridge ◽  
Hannah Rae Kerner ◽  
Emily A. Ury ◽  
...  

The ability to accurately classify land cover in periods before appropriate training and validation data exist is a critical step towards understanding subtle long-term impacts of climate change. These trends cannot be properly understood and distinguished from individual disturbance events or decadal cycles using only a decade or less of data. Understanding these long-term changes in low-lying coastal areas, home to a huge proportion of the global population, is of particular importance. Relatively simple deep learning models that extract representative spatiotemporal patterns can lead to major improvements in temporal generalizability. To provide insight into major changes in low-lying coastal areas, our study (1) developed a recurrent convolutional neural network that incorporates spectral, spatial, and temporal contexts for predicting land cover class, (2) evaluated this model across time and space and compared it to conventional Random Forest and Support Vector Machine methods as well as other deep learning approaches, and (3) applied this model to classify land cover across 20 years of Landsat 5 data in the low-lying coastal plain of North Carolina, USA. We observed striking changes related to sea level rise that support smaller-scale evidence of agricultural land and forests transitioning into wetlands and “ghost forests”. This work demonstrates that recurrent convolutional neural networks should be considered when a model is needed that can generalize across time, and that they can help uncover important trends necessary for understanding and responding to climate change in vulnerable coastal regions.


2021 ◽  
Author(s):  
Ewerthon Dyego de Araújo Batista ◽  
Wellington Candeia de Araújo ◽  
Romeryto Vieira Lira ◽  
Laryssa Izabel de Araújo Batista

Dengue is a public health problem in Brazil, and cases of the disease are rising again in Paraíba. The Paraíba epidemiological bulletin released in August 2021 reports a 53% increase in cases compared with the previous year. Machine Learning (ML) and Deep Learning techniques are being used as tools to predict the disease and support the fight against it. Using the techniques Random Forest (RF), Support Vector Regression (SVR), Multilayer Perceptron (MLP), Long Short-Term Memory (LSTM), and Convolutional Neural Network (CNN), this article presents a system capable of forecasting dengue-related hospitalizations for the cities of Bayeux, Cabedelo, João Pessoa, and Santa Rita. The system produced forecasts for Bayeux with an error rate of 0.5290; for Cabedelo the error was 0.92742, for João Pessoa 9.55288, and for Santa Rita 0.74551.


Author(s):  
MUHAMMAD EFAN ABDULFATTAH ◽  
LEDYA NOVAMIZANTI ◽  
SYAMSUL RIZAL

Disasters in Indonesia are dominated by hydrometeorological disasters, which cause large-scale damage. Through mapping, comprehensive handling can be carried out to support analysis and subsequent action. An Unmanned Aerial Vehicle (UAV) can be used as an aerial mapping tool. However, when the camera or image-processing devices do not meet specifications, the results are less informative. This research proposes Super Resolution for aerial imagery based on a Convolutional Neural Network (CNN) with the DCSCN model. The model consists of a Feature Extraction Network for extracting image features and a Reconstruction Network for reconstructing the image. DCSCN's performance is compared with the Super Resolution CNN (SRCNN). Experiments were carried out on the Set5 dataset with scale factors of 2, 3, and 4. The SRCNN produced PSNR/SSIM values of 36.66 dB / 0.9542, 32.75 dB / 0.9090, and 30.49 dB / 0.8628, respectively. DCSCN's performance increased to 37.61 dB / 0.9588, 33.86 dB / 0.9225, and 31.48 dB / 0.8851.
Keywords: aerial imagery, deep learning, super resolution
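The PSNR metric used to compare SRCNN and DCSCN is straightforward to compute from the mean squared error between a reference image and its reconstruction; a minimal numpy sketch (the toy images here are synthetic):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images.

    PSNR = 10 * log10(peak^2 / MSE). Higher values mean the
    super-resolved image is closer to the ground-truth reference.
    """
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(reconstructed, dtype=float)
    mse = np.mean((ref - rec) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(32, 32)).astype(float)   # toy ground truth
noisy = ref + rng.normal(0, 4.0, size=ref.shape)          # small reconstruction error
value = psnr(ref, noisy)   # roughly 36 dB for noise with std 4 on an 8-bit scale
```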


2020 ◽  
Vol 13 (5) ◽  
pp. 2219-2239 ◽  
Author(s):  
Georgios Touloupas ◽  
Annika Lauber ◽  
Jan Henneberger ◽  
Alexander Beck ◽  
Aurélien Lucchi

Abstract. During typical field campaigns, millions of cloud particle images are captured with imaging probes. Our interest lies in classifying these particles in order to compute the statistics needed for understanding clouds. Given the large volume of collected data, this raises the need for an automated classification approach. Traditional classification methods that require extracting features manually (e.g., decision trees and support vector machines) show reasonable performance when trained and tested on data coming from a single dataset. However, they often have difficulties in generalizing to test sets coming from other datasets where the distribution of the features might be significantly different. In practice, we found that with those methods each new holographic-imager dataset requires hand-labeling a huge amount of data. Convolutional neural networks have the potential to overcome this problem due to their ability to learn complex nonlinear models directly from the images instead of pre-engineered features, as well as by relying on powerful regularization techniques. We show empirically that a convolutional neural network trained on cloud particles from holographic imagers generalizes well to unseen datasets. Moreover, fine-tuning the same network with a small number (256) of training images improves the classification accuracy. Thus, automated classification with a convolutional neural network not only reduces the hand-labeling effort for new datasets but is also no longer the main error source for the classification of small particles.


2018 ◽  
Vol 10 (11) ◽  
pp. 1840 ◽  
Author(s):  
Meng Zhang ◽  
Hui Lin ◽  
Guangxing Wang ◽  
Hua Sun ◽  
Jing Fu

Rice is one of the world’s major staple foods, especially in China. Highly accurate monitoring of rice-producing land is, therefore, crucial for assessing food supplies and productivity. Recently, the deep-learning convolutional neural network (CNN) has achieved considerable success in remote-sensing data analysis. A CNN-based paddy-rice mapping method using multitemporal Landsat 8, phenology data, and land-surface temperature (LST) was developed in this study. First, the spatial–temporal adaptive reflectance fusion model (STARFM) was used to blend the moderate-resolution imaging spectroradiometer (MODIS) and Landsat data to obtain multitemporal Landsat-like data. Subsequently, a threshold method was applied to derive the phenological variables from the Landsat-like normalized difference vegetation index (NDVI) time series. Then, a generalized single-channel algorithm was employed to derive LST from Landsat 8. Finally, multitemporal Landsat 8 spectral images, combined with phenology and LST data, were employed to extract paddy-rice information using a patch-based deep-learning CNN algorithm. The results show that the proposed method achieved an overall accuracy of 97.06% and a Kappa coefficient of 0.91, which are 6.43% and 0.07 higher than those of the support vector machine method, and 7.68% and 0.09 higher than those of the random forest method, respectively. Moreover, the Landsat-derived rice area is strongly correlated (R2 = 0.9945) with government statistical data, demonstrating that the proposed method has potential for large-scale paddy-rice mapping using moderate spatial resolution images.
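The overall accuracy and Kappa coefficient reported above both derive from the classification's confusion matrix; a minimal numpy sketch (the example matrix is invented, not the paper's data):

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's Kappa from a confusion matrix.

    Kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed accuracy
    (trace / total) and p_e the accuracy expected by chance, computed
    from the row and column marginals.
    """
    cm = np.asarray(confusion, dtype=float)
    total = cm.sum()
    p_o = np.trace(cm) / total
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    return p_o, (p_o - p_e) / (1.0 - p_e)

# toy 2-class confusion matrix (rows: reference, columns: predicted)
cm = [[90, 10],
      [5, 95]]
acc, kappa = overall_accuracy_and_kappa(cm)   # 0.925 accuracy, 0.85 Kappa
```

Kappa discounts agreement that would occur by chance, which is why it is the standard companion to overall accuracy in remote-sensing accuracy assessment.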

