Application of Visible/Near-Infrared Hyperspectral Imaging with Convolutional Neural Networks to Phenotype Aboveground Parts to Detect Cabbage Plasmodiophora brassicae (clubroot)

2022 ◽  
pp. 104040
Author(s):  
Lei Feng ◽  
Baohua Wu ◽  
Shuangshuang Chen ◽  
Chu Zhang ◽  
Yong He
The Analyst ◽  
2019 ◽  
Vol 144 (21) ◽  
pp. 6438-6446
Author(s):  
Hideaki Kanayama ◽  
Te Ma ◽  
Satoru Tsuchikawa ◽  
Tetsuya Inagaki

From the viewpoints of combating illegal logging and examining wood properties, there is growing demand for a wood species identification system.


2020 ◽  
Vol 40 (16) ◽  
pp. 1610001
Author(s):  
唐超影 Tang Chaoying ◽  
浦世亮 Pu Shiliang ◽  
叶鹏钊 Ye Pengzhao ◽  
肖飞 Xiao Fei ◽  
冯华君 Feng Huajun

2021 ◽  
Author(s):  
Dario Spiller ◽  
Luigi Ansalone ◽  
Nicolas Longépé ◽  
James Wheeler ◽  
Pierre Philippe Mathieu

<p>Over the last few years, wildfires have become more severe and destructive, with extreme consequences for local and global ecosystems. Fire detection and accurate monitoring of risk areas are becoming increasingly important. Satellite remote sensing offers unique opportunities for mapping, monitoring, and analysing the evolution of wildfires, providing helpful contributions to counteract dangerous situations.</p><p>Among the different remote sensing technologies, hyperspectral (HS) imagery offers unmatched capabilities in support of fire detection. In this study, HS images from the Italian satellite PRISMA (PRecursore IperSpettrale della Missione Applicativa) will be used. The PRISMA satellite, launched on 22 March 2019, carries a hyperspectral and panchromatic payload able to acquire images with worldwide coverage. The hyperspectral camera works in the spectral range of 0.4–2.5 µm, with 66 and 173 channels in the VNIR (Visible and Near InfraRed) and SWIR (Short-Wave InfraRed) regions, respectively. The average spectral resolution is less than 10 nm over the entire range, with an accuracy of ±0.1 nm, while the ground sampling distance of PRISMA images is about 5 m and 30 m for the panchromatic and hyperspectral cameras, respectively.</p><p>This work will investigate how PRISMA HS images can be used to support fire detection and related crisis management. To this aim, deep learning methodologies will be investigated, such as 1D convolutional neural networks to perform spectral analysis of the data, or 3D convolutional neural networks to perform spatial and spectral analyses at the same time. Semantic segmentation of the input HS data will be discussed, in which a class label is associated with each pixel of the input image.
The overall goal of this work is to highlight how PRISMA hyperspectral data can contribute to remote sensing and Earth-observation data analysis for natural hazard and risk studies, focusing especially on wildfires, also considering the benefits with respect to standard multi-spectral imagery or previous hyperspectral sensors such as Hyperion.</p><p>The contributions of this work to the state of the art are the following:</p><ul><li>Demonstrating the advantages of using PRISMA HS data over multi-spectral data.</li> <li>Discussing the potential of deep learning methodologies based on 1D and 3D convolutional neural networks to capture spectral (and, in the 3D case, spatial) dependencies, which is crucial when dealing with HS images.</li> <li>Discussing the possibility and benefits of integrating HS-based approaches into future monitoring systems for wildfire alerts and disasters.</li> <li>Discussing the opportunity to design and develop future HS remote sensing missions specifically dedicated to fire detection with on-board analysis.</li> </ul><p>To conclude, this work will raise awareness of the potential of PRISMA HS data for disaster monitoring, with a special focus on wildfires.</p>
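To make the 1D-CNN idea concrete, the following is a minimal sketch of the spectral 1-D convolution building block such networks rely on, written in plain Python with synthetic data. The band count (66 VNIR + 173 SWIR = 239) matches the PRISMA description above, but the spectrum values and the hand-set kernel are purely illustrative; in a real network the kernel weights are learned.

```python
# Minimal sketch: one 1-D convolution layer applied to a single
# PRISMA-like pixel spectrum (239 bands: 66 VNIR + 173 SWIR).
# Synthetic data; kernel weights would normally be learned.

def conv1d(spectrum, kernel, stride=1):
    """Valid 1-D convolution (cross-correlation) of a spectrum with a kernel."""
    k = len(kernel)
    return [
        sum(spectrum[i + j] * kernel[j] for j in range(k))
        for i in range(0, len(spectrum) - k + 1, stride)
    ]

def relu(values):
    """Rectified linear activation, applied element-wise."""
    return [max(0.0, v) for v in values]

# Synthetic 239-band reflectance spectrum for one pixel (linear ramp).
spectrum = [0.1 + 0.001 * b for b in range(239)]

# A 5-tap antisymmetric kernel: responds to local spectral gradients,
# e.g. absorption edges that help separate burned from unburned cover.
kernel = [-1.0, -0.5, 0.0, 0.5, 1.0]

features = relu(conv1d(spectrum, kernel))
print(len(features))  # 239 - 5 + 1 = 235 feature values
```

A full 1D CNN would stack several such convolution/activation layers followed by a classifier head; a 3D CNN applies the same idea jointly over the two spatial axes and the spectral axis.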


Sensors ◽  
2018 ◽  
Vol 18 (6) ◽  
pp. 1944 ◽  
Author(s):  
Lei Feng ◽  
Susu Zhu ◽  
Fucheng Lin ◽  
Zhenzhu Su ◽  
Kangpei Yuan ◽  
...  

2019 ◽  
Vol 11 (22) ◽  
pp. 2635 ◽  
Author(s):  
Massimiliano Gargiulo ◽  
Antonio Mazza ◽  
Raffaele Gaetano ◽  
Giuseppe Ruello ◽  
Giuseppe Scarpa

Images provided by the ESA Sentinel-2 mission are rapidly becoming the main source of information for the entire remote sensing community, thanks to their unprecedented combination of spatial, spectral and temporal resolution, as well as their associated open-access policy. Due to a sensor design trade-off, images are acquired (and delivered) at different spatial resolutions (10, 20 and 60 m) according to specific sets of wavelengths, with only the four visible and near-infrared bands provided at the highest resolution (10 m). Although this is not a limiting factor in general, many emerging applications would benefit from a resolution enhancement of the 20 m bands, motivating the development of specific super-resolution methods. In this work, we propose to leverage Convolutional Neural Networks (CNNs) to provide a fast, scalable method for the single-sensor fusion of Sentinel-2 (S2) data, whose aim is to provide a 10 m super-resolution of the original 20 m bands. Experimental results demonstrate that the proposed solution achieves better performance than most state-of-the-art methods, including other deep-learning-based ones, with a considerable saving in computational burden.
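As background for the super-resolution task described above, the sketch below shows the plain 2× bilinear upsampling step that maps a 20 m Sentinel-2 band onto the 10 m grid; CNN-based fusion methods typically start from (or implicitly learn) such an interpolation and then restore the missing high-frequency detail using the native 10 m bands. The tiny 4×4 input is synthetic and for illustration only.

```python
# Minimal sketch: 2x bilinear upsampling of a 20 m band to the 10 m grid.
# CNN-based super-resolution goes beyond this by learning to add back
# high-frequency detail; here we only show the baseline interpolation.

def bilinear_upsample_2x(band):
    """Upsample a 2-D list-of-lists by a factor of 2 with bilinear weights."""
    h, w = len(band), len(band[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        for x in range(2 * w):
            # Map the output pixel centre back to input coordinates,
            # clamping at the borders (edge replication).
            sy = min(max((y + 0.5) / 2 - 0.5, 0.0), h - 1)
            sx = min(max((x + 0.5) / 2 - 0.5, 0.0), w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            out[y][x] = (band[y0][x0] * (1 - fy) * (1 - fx)
                         + band[y0][x1] * (1 - fy) * fx
                         + band[y1][x0] * fy * (1 - fx)
                         + band[y1][x1] * fy * fx)
    return out

# Synthetic 4x4 "20 m" band with a simple ramp of values 0..15.
band_20m = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
band_10m = bilinear_upsample_2x(band_20m)
print(len(band_10m), len(band_10m[0]))  # 8 8
```

The fusion network's job is precisely what this interpolation cannot do: inject spatially plausible detail from the co-registered 10 m bands into the upsampled 20 m bands.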

