Application of Deep Learning in the Processing of the Aerospace System's Multispectral Images

Author(s):  
Heorhii Kuchuk ◽  
Andrii Podorozhniak ◽  
Daria Hlavcheva ◽  
Vladyslav Yaloveha

This chapter applies deep learning neural networks to the processing of aerospace-system multispectral images. Convolutional and Capsule Neural Networks were used to process multispectral images from the Landsat 8 satellite, which were first transformed using the NDVI, NDWI, and PSRI spectral indices. The authors' approach was applied to the Camp Fire wildfire (California, USA). The deep learning neural networks are used to solve the problem of detecting fire-hazardous forest areas, and the results of the Convolutional and Capsule Neural Networks are compared. The theory of deep learning neural networks, the theory of multispectral image recognition, and methods of mathematical statistics were used.
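As an illustration of the index-based pre-processing described above, the following sketch computes NDVI, NDWI, and PSRI feature layers from Landsat 8 reflectance bands with NumPy. The band mapping (B2 = blue, B3 = green, B4 = red, B5 = NIR) and the PSRI adaptation are common conventions assumed here, not the authors' exact code.

```python
# A minimal sketch (not the authors' code): NDVI, NDWI and PSRI feature layers
# from Landsat 8 surface-reflectance bands. Band-to-index mapping is an
# assumption: B2 = blue, B3 = green, B4 = red, B5 = NIR.
import numpy as np

def spectral_indices(b2, b3, b4, b5, eps=1e-6):
    """Return an H x W x 3 stack of NDVI, NDWI, PSRI for equally shaped bands."""
    ndvi = (b5 - b4) / (b5 + b4 + eps)          # vegetation vigour
    ndwi = (b3 - b5) / (b3 + b5 + eps)          # open water / moisture
    psri = (b4 - b2) / (b5 + eps)               # plant senescence (dry, fire-prone canopy)
    return np.stack([ndvi, ndwi, psri], axis=-1)

# Example: random reflectance patches stand in for a real Landsat 8 scene.
bands = [np.random.rand(64, 64).astype(np.float32) for _ in range(4)]
features = spectral_indices(*bands)
print(features.shape)  # (64, 64, 3) feature cube fed to the CNN / CapsNet
```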

2021 ◽  
Vol 13 (24) ◽  
pp. 5054
Author(s):  
Blessing Kavhu ◽  
Zama Eric Mashimbye ◽  
Linda Luvuno

Accurate land use and cover data are essential for effective land-use planning, hydrological modeling, and policy development. Since the Okavango Delta is a transboundary Ramsar site, managing natural resources within the Okavango Basin is undoubtedly a complex issue. It is often difficult to accurately map land use and cover using remote sensing in heterogeneous landscapes. This study investigates the combined value of climate-based regionalization and the integration of spectral bands with spectral indices to enhance the accuracy of multi-temporal land use/cover classification using deep learning and machine learning approaches. Two experiments were set up, the first entailing the integration of spectral bands with spectral indices and the second involving the combined integration of spectral indices and climate-based regionalization based on Köppen–Geiger climate zones. Landsat 5 TM and Landsat 8 OLI images, machine learning classifiers (random forest and extreme gradient boosting), and deep learning classifiers (neural network and deep neural network) were used in this study. Supervised classification using a total of 5140 samples was conducted for the years 1996, 2004, 2013, and 2020. Average overall accuracy and Kappa coefficients were used to validate the results. The study found that the integration of spectral bands with indices improves the accuracy of land use/cover classification using machine learning and deep learning. Post-feature-selection combinations yield higher accuracies than combinations of bands and indices. A combined integration of spectral indices with bands and climate-based regionalization did not consistently improve the accuracy of land use/cover classification for all the classifiers (p < 0.05). However, post-feature-selection combinations and climate-based regionalization significantly improved the accuracy for all classifiers investigated in this study. The findings of this study will improve the reliability of land use/cover monitoring in complex, heterogeneous transboundary basins (TDBs).
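A minimal sketch of the "bands plus indices" idea is shown below: per-pixel spectral bands are stacked with derived indices and classified with a random forest, then scored by overall accuracy and the Kappa coefficient. The band order, index choices, and class labels are illustrative assumptions, not the study's exact configuration.

```python
# Hedged sketch: integrate spectral bands with derived indices and classify
# land cover with a random forest; validate with overall accuracy and Kappa.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
n_pixels = 5140                               # sample count matching the study
bands = rng.random((n_pixels, 6))             # assumed order: blue, green, red, NIR, SWIR1, SWIR2
ndvi = (bands[:, 3] - bands[:, 2]) / (bands[:, 3] + bands[:, 2] + 1e-6)
ndwi = (bands[:, 1] - bands[:, 3]) / (bands[:, 1] + bands[:, 3] + 1e-6)
X = np.column_stack([bands, ndvi, ndwi])      # integrated bands + indices
y = rng.integers(0, 6, n_pixels)              # illustrative land-cover classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("overall accuracy:", accuracy_score(y_te, pred))
print("kappa:", cohen_kappa_score(y_te, pred))
```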


2021 ◽  
Vol 26 (1) ◽  
pp. 200-215
Author(s):  
Muhammad Alam ◽  
Jian-Feng Wang ◽  
Cong Guangpei ◽  
LV Yunrong ◽  
Yuanfang Chen

Abstract: In recent years, the success of deep learning in natural scene image processing has boosted its application to the analysis of remote sensing images. In this paper, we apply Convolutional Neural Networks (CNN) to the semantic segmentation of remote sensing images. We adapt the encoder–decoder CNN architectures SegNet (with index pooling) and U-Net to make them suitable for multi-target semantic segmentation of remote sensing images. The results show that these two models have their own advantages and disadvantages in the segmentation of different objects. In addition, we propose an integrated algorithm that combines the two models. Experimental results show that the presented integrated algorithm can exploit the advantages of both models for multi-target segmentation and achieve better segmentation than either model alone.
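One plausible way to integrate the two models, in the spirit of the algorithm described above, is a per-pixel fusion of their class-probability maps; the sketch below averages the SegNet-style and U-Net-style softmax outputs and takes the argmax. The models here are stand-ins, and the paper's exact fusion rule may differ.

```python
# Hedged sketch: fuse two segmentation models' per-pixel class probabilities.
import numpy as np

def fuse_probabilities(prob_segnet, prob_unet, weights=(0.5, 0.5)):
    """prob_* are H x W x C softmax maps; returns the fused label map."""
    fused = weights[0] * prob_segnet + weights[1] * prob_unet
    return np.argmax(fused, axis=-1)

h, w, n_classes = 256, 256, 5
p1 = np.random.dirichlet(np.ones(n_classes), size=(h, w))   # stand-in SegNet output
p2 = np.random.dirichlet(np.ones(n_classes), size=(h, w))   # stand-in U-Net output
labels = fuse_probabilities(p1, p2)
print(labels.shape)  # (256, 256) fused multi-target label map
```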


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2852
Author(s):  
Parvathaneni Naga Srinivasu ◽  
Jalluri Gnana SivaSai ◽  
Muhammad Fazal Ijaz ◽  
Akash Kumar Bhoi ◽  
Wonjoon Kim ◽  
...  

Deep learning models are efficient at learning the features that assist in understanding complex patterns precisely. This study proposes a computerized process for classifying skin disease through a deep-learning pipeline based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved efficient, delivering better accuracy while remaining suitable for lightweight computational devices, and the proposed model is effective at maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progression of diseased growth. Performance was compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), a Convolutional Neural Network (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture expanded with a few changes. The HAM10000 dataset is used, and the proposed method outperforms the other methods with more than 85% accuracy. It recognizes the affected region considerably faster, with almost 2× fewer computations than the conventional MobileNet model, resulting in minimal computational effort. Furthermore, a mobile application is designed for instant and appropriate action; it helps the patient and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
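A hedged Keras sketch of a MobileNet V2 + LSTM pipeline of the kind described is given below: MobileNet V2 extracts spatial features, which are reshaped into a sequence and aggregated by an LSTM before the softmax head. The input size, sequence layout, and class count are assumptions rather than the paper's exact specification.

```python
# Hedged sketch: MobileNet V2 feature extractor followed by an LSTM head.
import tensorflow as tf
from tensorflow.keras import layers, models

n_classes = 7  # HAM10000 has seven lesion categories

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)

inputs = layers.Input(shape=(224, 224, 3))
x = base(inputs)                         # 7 x 7 x 1280 feature map
x = layers.Reshape((49, 1280))(x)        # treat spatial positions as a sequence
x = layers.LSTM(128)(x)                  # sequence aggregation of spatial features
outputs = layers.Dense(n_classes, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```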


2021 ◽  
Author(s):  
Andrew Bennett ◽  
Bart Nijssen

Machine learning (ML), and particularly deep learning (DL), has shown dramatic successes in geophysical research in recent years. However, these models are primarily geared towards better predictive capabilities and are generally treated as black-box models, limiting researchers' ability to interpret and understand how their predictions are made. As these models are incorporated into larger models and pushed into more areas, it will be important to build methods that allow us to reason about how they operate. This has implications for scientific discovery and for ensuring that these models are robust and reliable for their respective applications. Recent work in explainable artificial intelligence (XAI) has been used to interpret and explain the behavior of machine-learned models.

Here, we apply new tools from the field of XAI to provide physical interpretations of a system that couples a deep-learning-based parameterization for turbulent heat fluxes to a process-based hydrologic model. To develop this coupling, we trained a neural network to predict turbulent heat fluxes using FluxNet data from a large number of hydroclimatically diverse sites. This neural network is coupled to the SUMMA hydrologic model, taking model-derived states as additional inputs to improve predictions. We have shown that this coupled system provides highly accurate simulations of turbulent heat fluxes at 30-minute timesteps, accurately predicts the long-term observed water balance, and reproduces other signatures such as the phase lag with shortwave radiation. Because of these features, the coupled system appears to be learning physically accurate relationships between inputs and outputs.

We probe the relative importance of the input features used to make predictions during wet and dry conditions to better understand what the neural network has learned. Further, we conduct controlled experiments to understand how the neural networks are able to learn to regionalize between different hydroclimates. By understanding how these neural networks make their predictions, as well as how they learn to make predictions, we can gain scientific insights and use them to further improve our models of the Earth system.
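As one common XAI probe of feature importance, the sketch below implements permutation importance: each input feature is shuffled in turn and the increase in prediction error is recorded. The model and features are placeholders, not the actual SUMMA-coupled network.

```python
# Hedged sketch: permutation feature importance for a regression model.
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, rng=None):
    """Return the mean increase in MSE when each column of X is permuted."""
    rng = rng or np.random.default_rng(0)
    base_mse = np.mean((predict(X) - y) ** 2)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])                     # break the j-th feature
            increases.append(np.mean((predict(Xp) - y) ** 2) - base_mse)
        scores[j] = np.mean(increases)
    return scores

# Toy usage: a linear "model" in which the second feature matters most.
X = np.random.rand(1000, 3)
y = 0.2 * X[:, 0] + 2.0 * X[:, 1]
print(permutation_importance(lambda a: 0.2 * a[:, 0] + 2.0 * a[:, 1], X, y))
```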


Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1365
Author(s):  
Bogdan Muşat ◽  
Răzvan Andonie

Convolutional neural networks utilize a hierarchy of neural network layers. The statistical aspects of information concentration in successive layers can give insight into the feature abstraction process. We analyze the saliency maps of these layers from the perspective of semiotics, the study of signs and sign-using behavior. In computational semiotics, this aggregation operation (known as superization) is accompanied by a decrease in spatial entropy: signs are aggregated into supersigns. Using spatial entropy, we compute the information content of the saliency maps and study the superization processes that take place between successive layers of the network. In our experiments, we visualize the superization process and show how the obtained knowledge can be used to explain the neural decision model. In addition, we attempt to optimize the architecture of the neural model employing a semiotic greedy technique. To the best of our knowledge, this is the first application of computational semiotics to the analysis and interpretation of deep neural networks.
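A minimal sketch of an entropy measurement in this spirit treats a normalized saliency map as a probability distribution over pixels and computes its Shannon entropy; lower entropy in a deeper layer would indicate aggregation of signs into supersigns. The paper's precise spatial-entropy definition may differ.

```python
# Hedged sketch: Shannon entropy of a saliency map viewed as a distribution.
import numpy as np

def saliency_entropy(saliency, eps=1e-12):
    """Shannon entropy (bits) of a non-negative saliency map."""
    p = saliency.ravel().astype(np.float64)
    p = p / (p.sum() + eps)
    return float(-(p * np.log2(p + eps)).sum())

layer_shallow = np.random.rand(56, 56)                            # diffuse saliency
layer_deep = np.zeros((56, 56)); layer_deep[20:30, 20:30] = 1.0   # concentrated saliency
print(saliency_entropy(layer_shallow), saliency_entropy(layer_deep))  # entropy decreases
```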


2021 ◽  
pp. 1-17
Author(s):  
Hania H. Farag ◽  
Lamiaa A. A. Said ◽  
Mohamed R. M. Rizk ◽  
Magdy Abd ElAzim Ahmed

COVID-19 has been considered a global pandemic. Recently, researchers have been using deep learning networks for diagnosing medical diseases. Some of this research focuses on optimizing deep learning neural networks to enhance network accuracy. Optimizing a Convolutional Neural Network involves testing various networks obtained by manually configuring their hyperparameters and then implementing the configuration with the highest accuracy. Each time a different database is used, a different combination of hyperparameters is required. This paper introduces two COVID-19 diagnosing systems, using both a Residual Network and an Xception Network optimized by random search, with the aim of finding optimal models that give better diagnosis rates for COVID-19. The proposed systems showed that hyperparameter tuning for the ResNet and the Xception Net using random search optimization gives more accurate results than other techniques, with accuracies of 99.27536% and 100%, respectively. We conclude that hyperparameter tuning using random search optimization, for either the tuned Residual Network or the tuned Xception Network, yields better accuracies than other techniques for diagnosing COVID-19.
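The random-search procedure can be sketched as follows: sample hyperparameter configurations at random, train and validate each one, and keep the best. The search space and the train_and_evaluate stub are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch: random search over hyperparameter configurations.
import random

search_space = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [16, 32, 64],
    "dropout": [0.2, 0.3, 0.5],
    "dense_units": [128, 256, 512],
}

def train_and_evaluate(config):
    """Placeholder: build ResNet/Xception with `config`, train, return validation accuracy."""
    return random.random()

best_config, best_acc = None, -1.0
for _ in range(20):                                   # 20 random trials
    config = {k: random.choice(v) for k, v in search_space.items()}
    acc = train_and_evaluate(config)
    if acc > best_acc:
        best_config, best_acc = config, acc

print("best configuration:", best_config, "validation accuracy:", best_acc)
```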


2020 ◽  
Vol 9 (1) ◽  
pp. 7-10
Author(s):  
Hendry Fonda

ABSTRACT Riau batik has been known since the 18th century and was used by royalty. Riau batik is made using a stamp mixed with dye and then printed onto fabric, usually silk. As it developed, and in comparison with Javanese batik, Riau batik has been accepted by the public only slowly. Convolutional Neural Networks (CNN) combine artificial neural networks with deep learning methods. A CNN consists of one or more convolutional layers, often with a subsampling layer, followed by one or more fully connected layers as in a standard neural network. In the process, the CNN is trained and tested on Riau batik so that a collection of batik models, classified by the characteristics of Riau batik, can be obtained and images can be identified as Riau batik or non-Riau batik. Classification using CNN distinguishes Riau batik from non-Riau batik with an accuracy of 65%. The 65% accuracy is due largely to the fact that many motifs are shared between Riau batik and other batik, the difference lying in the absorbed colors of Riau batik. Keywords: Batik; Batik Riau; CNN; Image; Deep Learning
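A hedged sketch of a small binary CNN of the kind described, with convolutional and subsampling (pooling) layers followed by fully connected layers, is shown below. The image size and layer widths are assumptions.

```python
# Hedged sketch: small binary CNN for Riau batik vs. non-Riau batik images.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),                    # subsampling layer
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),      # fully connected "standard" part
    layers.Dense(1, activation="sigmoid"),    # Riau batik vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```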


Author(s):  
Tahani Aljohani ◽  
Alexandra I. Cristea

Massive Open Online Courses (MOOCs) have become universal learning resources, and the COVID-19 pandemic is rendering these platforms even more necessary. In this paper, we seek to improve Learner Profiling (LP), i.e., estimating the demographic characteristics of learners on MOOC platforms. We focus on examining models that show promise elsewhere but were never examined in the LP area (deep learning models) based on effective textual representations. As an LP characteristic, we first predict the employment status of learners. We compare sequential and parallel ensemble deep learning architectures based on Convolutional Neural Networks and Recurrent Neural Networks, obtaining an average high accuracy of 96.3% for our best method. Next, we predict the gender of learners based on syntactic knowledge from the text. We compare different tree-structured Long Short-Term Memory models (as state-of-the-art candidates) and provide our novel version of a bi-directional composition function for existing architectures. In addition, we evaluate 18 different combinations of word-level and sentence-level encoding functions. Based on these results, we show that our bi-directional model outperforms all other models, and the highest accuracy among our models is achieved by the combination of a FeedForward Neural Network and the Stack-augmented Parser-Interpreter Neural Network (82.60% prediction accuracy). We argue that the prediction models we recommend for both demographic characteristics examined in this study can achieve high accuracy. This is also the first time a sound methodological approach toward improving accuracy for learner demographics classification on MOOCs has been proposed.
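A parallel CNN + RNN ensemble for text classification, of the kind compared above, might be sketched in Keras as follows: a Conv1D branch and a bidirectional LSTM branch read the same embedded sequence, and their features are concatenated before the prediction head. Vocabulary size, sequence length, and class count are illustrative assumptions.

```python
# Hedged sketch: parallel CNN + RNN ensemble over embedded text.
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, seq_len, n_classes = 20000, 200, 2        # e.g. employed vs. not employed

tokens = layers.Input(shape=(seq_len,), dtype="int32")
embedded = layers.Embedding(vocab_size, 128)(tokens)

cnn_branch = layers.GlobalMaxPooling1D()(layers.Conv1D(64, 5, activation="relu")(embedded))
rnn_branch = layers.Bidirectional(layers.LSTM(64))(embedded)

merged = layers.Concatenate()([cnn_branch, rnn_branch])  # parallel ensemble of branches
outputs = layers.Dense(n_classes, activation="softmax")(merged)

model = models.Model(tokens, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```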


Author(s):  
Elena Morotti ◽  
Davide Evangelista ◽  
Elena Loli Piccolomini

Deep learning is producing tools of great interest for inverse imaging applications. In this work, we consider a medical imaging reconstruction task from subsampled measurements, an active research field in which Convolutional Neural Networks have already revealed their great potential. However, the commonly used architectures are very deep and, hence, prone to overfitting and infeasible for clinical usage. Inspired by ideas from the green-AI literature, we propose a shallow neural network to perform efficient learned post-processing on images roughly reconstructed by the filtered backprojection algorithm. The results obtained on images from the training set and on unseen images, using both the non-expensive network and the widely used very deep ResUNet, show that the proposed network computes images of comparable or higher quality in about one fourth of the time.
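A minimal sketch of a shallow learned post-processing network is given below: a few convolutional layers produce a residual correction that is added back to the filtered backprojection (FBP) image. The depth and filter counts are assumptions consistent with the green-AI motivation, not the paper's exact architecture.

```python
# Hedged sketch: shallow residual post-processing of an FBP reconstruction.
import tensorflow as tf
from tensorflow.keras import layers, models

fbp_input = layers.Input(shape=(256, 256, 1))         # coarse FBP reconstruction
x = layers.Conv2D(32, 3, padding="same", activation="relu")(fbp_input)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
correction = layers.Conv2D(1, 3, padding="same")(x)   # learned artifact correction
refined = layers.Add()([fbp_input, correction])       # residual post-processing

model = models.Model(fbp_input, refined)
model.compile(optimizer="adam", loss="mse")
model.summary()
```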


2021 ◽  
Author(s):  
Wael Alnahari

Abstract In this paper, I propose an iris recognition system using deep learning via convolutional neural networks (CNN). Although CNNs are typically trained for machine learning tasks, recognition here is achieved by building a non-trained CNN with multiple layers. The main objective of the code is to identify each test picture's category (i.e., the person's name) with a high accuracy rate, after having extracted enough features from training pictures of the same category, which are obtained from a dataset that I added to the code. I used the IITD iris database, which includes 10 iris pictures for each of 223 people.

