Early forecasting of tsunami inundation from tsunami and geodetic observation data with convolutional neural networks

2021
Vol 12 (1)
Author(s):
Fumiyasu Makinoshima
Yusuke Oishi
Takashi Yamazaki
Takashi Furumura
Fumihiko Imamura

Abstract
Rapid and accurate hazard forecasting is important for prompt evacuations and reducing casualties during natural disasters. In the decade since the 2011 Tohoku tsunami, various tsunami forecasting methods using real-time data have been proposed. However, rapid and accurate tsunami inundation forecasting in coastal areas remains challenging. Here, we propose a tsunami forecasting approach using convolutional neural networks (CNNs) for early warning. Numerical tsunami forecasting experiments for Tohoku demonstrated excellent performance with average maximum tsunami amplitude and tsunami arrival time forecasting errors of ~0.4 m and ~48 s, respectively, for 1,000 unknown synthetic tsunami scenarios. Our forecasting approach required only 0.004 s on average using a single CPU node. Moreover, the CNN trained on only synthetic tsunami scenarios provided reasonable inundation forecasts using actual observation data from the 2011 event, even with noisy inputs. These results verify the feasibility of AI-enabled tsunami forecasting for providing rapid and accurate early warnings.
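As a rough illustration of the kind of CNN regressor this abstract describes, the sketch below maps multi-station observation time series to per-site maximum amplitude and arrival time. All names and shapes (station count, window length, number of forecast points) are illustrative assumptions, not the architecture of Makinoshima et al.

```python
# Minimal sketch: a 1D CNN regressor from multi-station observation
# time series to per-site tsunami forecasts. Shapes are assumptions.
import torch
import torch.nn as nn

class TsunamiForecastCNN(nn.Module):
    def __init__(self, n_stations=50, n_sites=100):
        super().__init__()
        # Treat each observation station as an input channel of a 1D CNN,
        # so convolutions pick up waveform features shared across stations.
        self.features = nn.Sequential(
            nn.Conv1d(n_stations, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Two regression heads: max amplitude and arrival time per site.
        self.amplitude = nn.Linear(128, n_sites)
        self.arrival = nn.Linear(128, n_sites)

    def forward(self, x):            # x: (batch, n_stations, n_timesteps)
        h = self.features(x).squeeze(-1)
        return self.amplitude(h), self.arrival(h)

model = TsunamiForecastCNN()
waveforms = torch.randn(8, 50, 360)  # e.g. 30 min of 5 s samples (assumed)
amp, t_arr = model(waveforms)
print(amp.shape, t_arr.shape)        # torch.Size([8, 100]) each
```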

2019
Vol 9 (1)
Author(s):
Teja Kattenborn
Jana Eichel
Fabian Ewald Fassnacht

Abstract
Recent technological advances in remote sensing sensors and platforms, such as high-resolution satellite imagers or unmanned aerial vehicles (UAVs), facilitate the availability of fine-grained earth observation data. Such data reveal vegetation canopies in high spatial detail. Efficient methods are needed to fully harness this unprecedented source of information for vegetation mapping. Deep learning algorithms such as Convolutional Neural Networks (CNN) are currently paving new avenues in the field of image analysis and computer vision. Using multiple datasets, we test a CNN-based segmentation approach (U-net) in combination with training data directly derived from visual interpretation of UAV-based high-resolution RGB imagery for fine-grained mapping of vegetation species and communities. We demonstrate that this approach indeed accurately segments and maps vegetation species and communities (at least 84% accuracy). The fact that we only used RGB imagery suggests that plant identification at very high spatial resolutions is facilitated through spatial patterns rather than spectral information. Accordingly, the presented approach is compatible with low-cost UAV systems that are easy to operate and thus applicable to a wide range of users.
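For readers who want the shape of the approach, here is a minimal U-net-style encoder-decoder sketch for RGB segmentation. Depth, channel widths, and the class count are placeholder assumptions rather than the paper's exact configuration.

```python
# Minimal U-net-style sketch for per-pixel segmentation of RGB imagery.
# A single skip connection stands in for the full U-net architecture.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class MiniUNet(nn.Module):
    def __init__(self, n_classes=5):          # class count is an assumption
        super().__init__()
        self.enc1 = block(3, 32)
        self.enc2 = block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)               # 64 = 32 skip + 32 upsampled
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):                      # x: (batch, 3, H, W)
        s1 = self.enc1(x)                      # skip-connection features
        s2 = self.enc2(self.pool(s1))
        d = self.dec(torch.cat([self.up(s2), s1], dim=1))
        return self.head(d)                    # per-pixel class logits

model = MiniUNet()
rgb = torch.randn(2, 3, 128, 128)
print(model(rgb).shape)                        # torch.Size([2, 5, 128, 128])
```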


2021
Vol 2021
pp. 1-11
Author(s):
Haibin Chang
Ying Cui

Images are used in ever greater volumes across many industries, so retrieving useful images from large collections has become a pressing problem. Convolutional neural networks (CNNs) have achieved good results on certain image classification tasks, but problems such as weak classification ability, low accuracy, and slow convergence remain. This article presents an image classification algorithm (ICA) based on multilabel learning with an improved convolutional neural network, covering the image classification process, the convolutional network algorithm, and the multilabel learning algorithm, along with ideas for further improvement. The results show that the improved CNN reaches an average maximum classification accuracy of 90.63%, improving the efficiency of image classification, and achieves a peak accuracy of 91.47% on the CIFAR-10 dataset, well above that of a traditional CNN.
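Multilabel learning of the kind discussed above is conventionally implemented with one independent sigmoid output per label trained with binary cross-entropy, rather than a single softmax. The sketch below shows that setup; the backbone and label count are assumptions, not the paper's improved network.

```python
# Sketch of a multilabel CNN: independent per-label logits trained
# with binary cross-entropy, so each image can carry several labels.
import torch
import torch.nn as nn

class MultiLabelCNN(nn.Module):
    def __init__(self, n_labels=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_labels)     # one logit per label

    def forward(self, x):
        return self.head(self.backbone(x))

model = MultiLabelCNN()
images = torch.randn(4, 3, 32, 32)              # CIFAR-10-sized inputs
targets = torch.randint(0, 2, (4, 10)).float()  # multi-hot label vectors
loss = nn.BCEWithLogitsLoss()(model(images), targets)
loss.backward()
probs = torch.sigmoid(model(images))            # independent per-label scores
```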


2021
Author(s):
Dario Spiller
Luigi Ansalone
Nicolas Longépé
James Wheeler
Pierre Philippe Mathieu

Over the last few years, wildfires have become more severe and destructive, with extreme consequences for local and global ecosystems. Fire detection and accurate monitoring of risk areas are therefore becoming increasingly important. Satellite remote sensing offers unique opportunities for mapping, monitoring, and analysing the evolution of wildfires, providing helpful contributions to counteract dangerous situations.

Among the different remote sensing technologies, hyperspectral (HS) imagery offers unparalleled capabilities in support of fire detection. In this study, HS images from the Italian satellite PRISMA (PRecursore IperSpettrale della Missione Applicativa) will be used. The PRISMA satellite, launched on 22 March 2019, carries a hyperspectral and panchromatic payload able to acquire images with worldwide coverage. The hyperspectral camera works in the spectral range of 0.4–2.5 µm, with 66 and 173 channels in the VNIR (Visible and Near InfraRed) and SWIR (Short-Wave InfraRed) regions, respectively. The average spectral resolution is less than 10 nm over the entire range with an accuracy of ±0.1 nm, while the ground sampling distance of PRISMA images is about 5 m for the panchromatic camera and about 30 m for the hyperspectral camera.

This work will investigate how PRISMA HS images can support fire detection and related crisis management. To this aim, deep learning methodologies will be investigated, such as 1D convolutional neural networks to perform spectral analysis of the data or 3D convolutional neural networks to perform spatial and spectral analyses at the same time. Semantic segmentation of the input HS data will be discussed, in which an output image with metadata is associated with each pixel of the input image. The overall goal of this work is to highlight how PRISMA hyperspectral data can contribute to remote sensing and Earth-observation data analysis for natural hazard and risk studies, focusing especially on wildfires, also considering the benefits with respect to standard multispectral imagery or previous hyperspectral sensors such as Hyperion.

The contributions of this work to the state of the art are the following:

- Demonstrating the advantages of using PRISMA HS data over multispectral data.
- Discussing the potential of deep learning methodologies based on 1D and 3D convolutional neural networks to capture spectral (and, in the 3D case, spatial) dependencies, which is crucial when dealing with HS images.
- Discussing the possibility and benefit of integrating HS-based approaches into future monitoring systems for wildfire alerts and disasters.
- Discussing the opportunity to design and develop future HS remote sensing missions specifically dedicated to fire detection with on-board analysis.

To conclude, this work will raise awareness of the potential of PRISMA HS data for disaster monitoring, with a specialized focus on wildfires.
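As a concrete illustration of the 1D-CNN idea mentioned above, the sketch below classifies each pixel from its spectrum alone. PRISMA's 66 VNIR plus 173 SWIR channels give 239 bands; the layer sizes and the binary fire/no-fire output are assumptions for illustration only.

```python
# Sketch of a 1D CNN operating on the spectral axis: each pixel's
# 239-band spectrum is treated as a 1D signal and classified on its own.
import torch
import torch.nn as nn

class SpectralCNN1D(nn.Module):
    def __init__(self, n_bands=239, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, spectra):        # spectra: (n_pixels, 1, n_bands)
        return self.net(spectra)

model = SpectralCNN1D()
pixels = torch.randn(1024, 1, 239)     # a batch of individual pixel spectra
logits = model(pixels)                 # fire / no-fire logits per pixel
```

A 3D-CNN variant would instead convolve over spatial patches and bands jointly (nn.Conv3d), trading per-pixel simplicity for spatial context.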


2021
Author(s):
James George Clifford Ball
Katerina Petrova
David Coomes
Seth Flaxman

1. Tropical forests are subject to diverse deforestation pressures, but their conservation is essential to achieve global climate goals. Predicting the location of deforestation is challenging due to the complexity of the natural and human systems involved, but accurate and timely forecasts could enable effective planning and on-the-ground enforcement practices to curb deforestation rates. New computer vision technologies based on deep learning can be applied to the increasing volume of Earth observation data to generate novel insights and make predictions with unprecedented accuracy.
2. Here, we demonstrate the ability of deep convolutional neural networks to learn spatiotemporal patterns of deforestation from a limited set of freely available global data layers, including multispectral satellite imagery, the Hansen maps of historic deforestation (2001-2020) and the ALOS JAXA digital surface model, to forecast future deforestation (2021). We designed four original deep learning model architectures, based on 2D Convolutional Neural Networks (2DCNN), 3D Convolutional Neural Networks (3DCNN), and Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNN), to produce spatial maps that indicate the risk of each forested pixel (~30 m) in the landscape becoming deforested within the next year. They were trained and tested on data from two ~80,000 km² tropical forest regions in the Southern Peruvian Amazon.
3. We found that the networks could predict the likely location of future deforestation to a high degree of accuracy. Our best-performing model, a 3DCNN, had the highest pixel-wise accuracy (80-90%) when trained on 2014-2019 data and validated on 2020 deforestation. Visual examination of the forecasts indicated that the 3DCNN could automatically discern the drivers of forest loss from the input data. For example, pixels around new access routes (e.g. roads) were assigned high risk, whereas this was not the case for recent, concentrated natural loss events (e.g. remote landslides).
4. CNNs can harness limited time-series data to predict near-future deforestation patterns, an important step in using the growing volume of satellite remote sensing data to curb global deforestation. The modelling framework can be readily applied to any tropical forest location and used by governments and conservation organisations to prevent deforestation and plan protected areas.
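A hedged sketch of the 3DCNN idea follows: kernels convolve over time as well as space, so temporal trends in the input stack inform a per-pixel risk map. The number of input layers, years of history, and patch size are illustrative assumptions, not the authors' exact design.

```python
# Sketch: a 3D CNN over a (layers x years x H x W) stack that outputs
# a per-pixel deforestation risk map for the following year.
import torch
import torch.nn as nn

class DeforestationRisk3DCNN(nn.Module):
    def __init__(self, n_layers=6):
        super().__init__()
        self.net = nn.Sequential(
            # kernels span (time, y, x), so temporal trends are learned
            nn.Conv3d(n_layers, 16, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(),
        )
        # 1x1 convolution turns pooled features into a single risk channel
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):          # x: (batch, n_layers, n_years, H, W)
        h = self.net(x)            # (batch, 32, n_years - 2, H, W)
        h = h.mean(dim=2)          # pool over time -> (batch, 32, H, W)
        return torch.sigmoid(self.head(h))  # per-pixel risk in [0, 1]

model = DeforestationRisk3DCNN()
stack = torch.randn(2, 6, 6, 64, 64)   # 6 layers x 6 years x 64x64 pixels
risk = model(stack)                    # (2, 1, 64, 64) risk map
```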


2020
Vol 2020 (10)
pp. 28-1-28-7
Author(s):
Kazuki Endo
Masayuki Tanaka
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have proposed deep convolutional neural networks that combine an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which we use a degraded image together with an additional degradation parameter for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameters is also incorporated when the degradation parameters of the input images are unknown. The experimental results showed that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
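A minimal sketch of the two-input design described above: image features are concatenated with the (known or estimated) degradation parameter before classification. Layer sizes and the single-scalar-parameter setup are illustrative assumptions.

```python
# Sketch: a classifier whose convolutional image features are fused
# with a scalar degradation parameter (e.g. noise level) before the head.
import torch
import torch.nn as nn

class DegradationAwareClassifier(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # the classifier sees image features plus the degradation parameter
        self.classifier = nn.Sequential(
            nn.Linear(64 + 1, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, image, degradation):
        # degradation: (batch, 1) scalar per image, known or estimated
        h = self.features(image)
        return self.classifier(torch.cat([h, degradation], dim=1))

model = DegradationAwareClassifier()
images = torch.randn(4, 3, 32, 32)
sigma = torch.rand(4, 1)               # assumed scalar degradation level
logits = model(images, sigma)
```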

