When Convolutional Neural Networks Meet Remote Sensing Data for Fire Detection

2021 ◽  
Vol 1914 (1) ◽  
pp. 012002
Author(s):  
Ziwen Li ◽  
Yuehuan Wang ◽  
Shuo Liang
2021 ◽  
Author(s):  
Rajagopal T K P ◽  
Sakthi G ◽  
Prakash J

Abstract Hyperspectral remote-sensing image classification is a widely used method for scene analysis of high-spatial-resolution remote sensing data, and classification is a critical task in remote sensing processing. Because different materials reflect differently in particular spectral bands, traditional pixel-wise classifiers identify and classify materials on the basis of their spectral curves (pixels). Owing to the high dimensionality of high-spatial-resolution remote sensing data and the limited number of labelled samples, such imagery suffers from the Hughes phenomenon, which poses a serious problem. To overcome this small-sample problem, several learning methods, such as the Support Vector Machine (SVM) and other kernel-based methods, were recently introduced for remote sensing image classification and have shown good performance. In this work, an SVM with a Radial Basis Function (RBF) kernel is proposed, alongside a feature-learning approach to hyperspectral image classification based on Convolutional Neural Networks (CNNs). Experimental results on several hyperspectral image datasets show that the proposed method achieves better classification performance than traditional methods such as the SVM with an RBF kernel, as well as conventional deep-learning-based methods (CNN).
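The pixel-wise SVM-with-RBF-kernel classifier described in the abstract above can be sketched as follows. This is a minimal illustration using scikit-learn; the synthetic "spectral curves" and the hyperparameters (`C`, `gamma`) are assumptions for demonstration, not the authors' actual dataset or setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for hyperspectral pixels: 300 samples, 100 spectral bands,
# three "materials" whose spectral curves differ by a class-specific offset.
n_per_class, n_bands = 100, 100
X = np.vstack([
    rng.normal(loc=c * 0.5, scale=1.0, size=(n_per_class, n_bands))
    for c in range(3)
])
y = np.repeat(np.arange(3), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Standardize the bands, then fit an RBF-kernel SVM as a pixel-wise classifier.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)

accuracy = clf.score(scaler.transform(X_test), y_test)
print(f"Test accuracy: {accuracy:.2f}")
```

Kernel methods such as this handle the small-sample, high-dimensional regime mentioned above because the decision boundary depends only on support vectors, not on estimating statistics in all 100 band dimensions.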


2010 ◽  
Vol 15 (2) ◽  
pp. 221-224 ◽  
Author(s):  
Takashi Yamaguchi ◽  
Kazuya Kishida ◽  
Eiji Nunohiro ◽  
Jong Geol Park ◽  
Kenneth J. Mackin ◽  
...  

Author(s):  
L. Liebel ◽  
M. Körner

In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for <i>single-image super resolution</i> are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations (e.g., segmentation or feature extraction) can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, making use of <i>deep learning</i> techniques such as <i>convolutional neural networks</i> (CNNs), can successfully be applied to remote sensing data. With a huge amount of training data available, <i>end-to-end learning</i> is reasonably easy to apply and can achieve results unattainable with conventional handcrafted algorithms. <br><br> We trained our CNN on a specifically designed, domain-specific dataset in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, and to conventional interpolation methods.
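The kind of CNN-based single-image super resolution described above can be sketched in pure numpy. The sketch below wires up a three-layer SRCNN-style pipeline (feature extraction, non-linear mapping, reconstruction) with the classic 9-1-5 filter sizes; the weights are random and untrained, and the input patch is synthetic, so this only illustrates the data flow, not the trained model from the paper.

```python
import numpy as np

def conv2d(x, w):
    """'Valid' 2D convolution: x is (C_in, H, W), w is (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    h, wd = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(c_in):
            for di in range(k):
                for dj in range(k):
                    out[o] += w[o, i, di, dj] * x[i, di:di + h, dj:dj + wd]
    return out

rng = np.random.default_rng(0)

# SRCNN-style 9-1-5 architecture with 64 and 32 filters (random, untrained).
w1 = rng.normal(0, 0.01, (64, 1, 9, 9))
w2 = rng.normal(0, 0.01, (32, 64, 1, 1))
w3 = rng.normal(0, 0.01, (1, 32, 5, 5))

# A bicubically upscaled low-resolution band would go here; we use random data.
patch = rng.random((1, 33, 33))

h1 = np.maximum(conv2d(patch, w1), 0)   # feature extraction + ReLU
h2 = np.maximum(conv2d(h1, w2), 0)      # non-linear mapping + ReLU
sr = conv2d(h2, w3)                     # reconstruction

print(sr.shape)
```

In an end-to-end setting the weights are learned by minimizing a reconstruction loss between `sr` and the high-resolution reference patch, which is what allows such networks to outperform interpolation.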


2021 ◽  
Vol 13 (18) ◽  
pp. 3727
Author(s):  
Benoit Vozel ◽  
Vladimir Lukin ◽  
Joan Serra-Sagristà

A huge amount of remote sensing data is acquired each day and transferred to image processing centers and/or to customers. Due to different limitations, compression has to be applied on-board and/or on the ground. This Special Issue collects 15 papers dealing with remote sensing data compression, introducing solutions for both lossless and lossy compression, analyzing the impact of compression on different processes, investigating the suitability of neural networks for compression, and researching low-complexity hardware and software approaches that deliver competitive coding performance.
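The lossless side of on-board compression discussed in this Special Issue often combines a simple predictor with an entropy coder. The following sketch, using only numpy and zlib on a synthetic 12-bit band, illustrates why: decorrelating the scene with a horizontal-difference predictor before DEFLATE typically improves the ratio. The scene model and noise level are assumptions for demonstration.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 12-bit sensor band stored as uint16: smooth scene + sensor noise.
rows = np.linspace(0, 4, 256)
scene = 500 * (np.sin(rows)[:, None] + np.cos(rows)[None, :]) + 1500
band = (scene + rng.normal(0, 2, scene.shape)).astype(np.uint16)

raw = band.tobytes()

# Lossless: DEFLATE on the raw samples vs. on horizontal differences
# (a simple predictor that decorrelates the smooth scene first).
direct = zlib.compress(raw, level=9)
residual = np.diff(band.astype(np.int32), axis=1,
                   prepend=band[:, :1].astype(np.int32))
predicted = zlib.compress(residual.astype(np.int16).tobytes(), level=9)

ratio_direct = len(raw) / len(direct)
ratio_predicted = len(raw) / len(predicted)
print(f"direct: {ratio_direct:.2f}x, with prediction: {ratio_predicted:.2f}x")
```

Production on-board coders (e.g., the CCSDS lossless standards) use the same predict-then-encode structure, just with more sophisticated predictors and coders than this sketch.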


2020 ◽  
Vol 44 (5) ◽  
pp. 763-771
Author(s):  
A.V. Kuznetsov ◽  
M.V. Gashnikov

We investigate image retouching algorithms for generating forged Earth remote sensing data. We provide an overview of existing neural network solutions in the field of generation and inpainting of remote sensing images. To retouch Earth remote sensing data, we use image inpainting algorithms based on convolutional neural networks and generative adversarial networks. We pay special attention to a generative network with a separate contour prediction block that consists of two series-connected generative adversarial subnets: the first subnet inpaints the contours of the image within the retouched area, and the second subnet uses the inpainted contours to generate the resulting retouched area. As a basis for comparison, we use exemplar-based image inpainting algorithms. We carry out computational experiments to study the effectiveness of these algorithms when retouching natural remote sensing data of various types, and perform a comparative analysis of their quality depending on the type, shape, and size of the retouched objects and areas. We give qualitative and quantitative characteristics of the efficiency of the studied image inpainting algorithms when retouching Earth remote sensing data, and experimentally demonstrate the advantage of generative adversarial networks in constructing forged remote sensing data.
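To make the inpainting task above concrete, here is a minimal non-neural baseline: harmonic (diffusion-based) inpainting, which fills the retouched area by iterative neighbour averaging while holding known pixels fixed. The synthetic gradient scene and mask are assumptions for illustration; the GAN approaches studied in the paper replace this smooth fill with learned texture and contour synthesis.

```python
import numpy as np

def inpaint_diffusion(image, mask, n_iter=500):
    """Fill masked pixels by iterative 4-neighbour averaging (harmonic
    inpainting). mask is True where pixels are unknown; known pixels stay fixed."""
    filled = image.copy()
    filled[mask] = image[~mask].mean()  # crude initialization
    for _ in range(n_iter):
        p = np.pad(filled, 1, mode="edge")
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        filled[mask] = avg[mask]        # update only the unknown region
    return filled

# Synthetic remote-sensing band: a smooth gradient scene.
y, x = np.mgrid[0:64, 0:64]
scene = (x + y) / 126.0

# Retouch a 10x10 square (e.g., to hide an object), then restore it.
mask = np.zeros_like(scene, dtype=bool)
mask[20:30, 20:30] = True
damaged = scene.copy()
damaged[mask] = 0.0

restored = inpaint_diffusion(damaged, mask)
err = np.abs(restored[mask] - scene[mask]).max()
print(f"max abs error in the retouched area: {err:.3f}")
```

Diffusion recovers smooth regions well (a linear gradient is harmonic, so the fill converges to the true values) but blurs texture and edges, which is exactly the gap that the contour-predicting GAN architecture described above is designed to close.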

