visible imagery
Recently Published Documents

TOTAL DOCUMENTS: 68 (five years: 11)
H-INDEX: 15 (five years: 2)
Geosciences, 2022, Vol. 12 (1), pp. 40
Author(s): Christine Simurda, Lori A. Magruder, Jonathan Markel, James B. Garvin, Daniel A. Slayback

Submarine volcanism in shallow waters (<100 m), particularly in remote settings, is difficult to monitor quantitatively, and in the rare cases where islands form, their rapid erosion is challenging to characterize. However, these newly erupted volcanic islands become observable to airborne and/or satellite remote sensing instruments. NASA’s ICESat-2 satellite laser altimeter, combined with visible imagery (optical and microwave), provides a novel method of evaluating the elevation characteristics of newly emerged volcanoes and their subaerial eruption products. Niijima Fukutoku-Okanoba (NFO) is a submarine volcano 1300 km south of Tokyo (Ogasawara Archipelago, Japan) that periodically breaches the ocean surface to create new islands that are subsequently eroded. The recent eruption in August 2021 offers a rare opportunity to investigate this island evolution using high-resolution satellite datasets together with geodetic-quality ICESat-2 altimetry. Landsat-8 and Planet imagery provide a qualitative analysis of the exposed volcanic deposits, while ICESat-2 products provide the elevation profiles necessary to quantify the physical surface structures. This investigation demonstrates an innovative application of ICESat-2 data for evaluating newly emerged islands and shows how combining satellite remote sensing modalities (visible and lidar) to investigate these short-lived volcanic features can improve our understanding of volcanic island systems in ways not previously possible.
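ICESat-2's contribution here is the along-track elevation profile; a minimal sketch of how such a profile can be turned into one quantitative measure (the subaerial cross-sectional area of an island, via the trapezoidal rule) is shown below. The distances and heights are purely illustrative, not values from the study.

```python
# Hypothetical sketch: integrate an along-track elevation profile
# (the kind of product ICESat-2 provides) to estimate the subaerial
# cross-sectional area of a newly emerged island.
def cross_section_area(dist, height):
    """dist: along-track distances (m); height: heights above sea level (m)."""
    # trapezoidal rule over consecutive profile points
    return sum((height[i] + height[i + 1]) / 2 * (dist[i + 1] - dist[i])
               for i in range(len(dist) - 1))
```

Repeating the calculation on profiles from successive passes would give a first-order estimate of erosion between acquisitions.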


Aerospace, 2022, Vol. 9 (1), pp. 31
Author(s): Farhad Samadzadegan, Farzaneh Dadrass Javan, Farnaz Ashtari Mahini, Mehrnaz Gholamshahi

Drones are becoming increasingly popular not only for recreational purposes but also in a variety of applications in engineering, disaster management, logistics, airport security, and other fields. Alongside these useful applications, their potential use in malicious activities has raised alarming concerns about physical infrastructure security, safety, and surveillance at airports. In recent years, there have been many reports of the unauthorized use of various types of drones at airports and of disruptions to airline operations. To address this problem, this study proposes a novel deep learning-based method for the efficient detection and recognition of two types of drones and of birds. Evaluation of the proposed approach on the prepared image dataset demonstrates better efficiency than existing detection systems in the literature. Because drones are often confused with birds due to their physical and behavioral similarity, the proposed method not only detects the presence or absence of drones in an area but also recognizes and distinguishes between the two drone types, as well as distinguishing them from birds. The dataset used to train the network consists of 10,000 visible images containing two types of drones (multirotors and helicopters) as well as birds. The proposed deep learning method detects and recognizes the two drone types and distinguishes them from birds with an accuracy of 83%, mAP of 84%, and IoU of 81%. Average recall, average accuracy, and average F1-score over the three classes were 84%, 83%, and 83%, respectively.
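The paper reports detection quality as IoU, which for axis-aligned bounding boxes is the standard intersection-over-union ratio. A minimal reference implementation (not the paper's code) is:

```python
# Intersection-over-Union for axis-aligned boxes given as (x1, y1, x2, y2).
def iou(a, b):
    # intersection rectangle
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # union = sum of areas minus intersection
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5; mAP then averages precision over recall levels and classes.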


Author(s): A. Collin, D. James, A. Mury, M. Letard, B. Guillot

Abstract. Infrared (IR) imagery provides information beyond the visible (red-green-blue, RGB) about vegetation, soil, water, minerals, and temperature, and has become essential for various disciplines, such as geology, hydrology, ecology, archeology, meteorology, and geography. The integration of IR sensors, ranging from near-IR (NIR) through mid-IR to thermal-IR, is standard on Earth Observation satellites but not on unmanned airborne vehicles (UAV). Given the hyperspatial and hypertemporal characteristics of UAV surveys, it is relevant to benefit from the IR waveband in addition to the visible imagery for mapping purposes. This paper proposes to predict NIR reflectance from RGB digital-number predictors collected with a consumer-grade UAV over a structurally and compositionally complex coastal area. A dataset of 15,000 samples, distributed into calibration, validation, and test subsets across 15 representative coastal habitats, was used to build and compare the performance of standard least squares, decision tree, boosted tree, bootstrap forest, and fully connected neural network (NN) models. The NN family surpassed the other four, and the best NN model (R2 = 0.67) used two hidden layers, each with five nodes of hyperbolic tangent and five nodes of Gaussian activation functions. This perceptron produced a spatially explicit NIR reflectance model free of the artifacts introduced by flight constraints. At the habitat scale, sedimentary and dry-vegetation environments were predicted satisfactorily (R2 > 0.6), in contrast to healthy vegetation (R2 < 0.2). These findings will be useful for scientists and managers tasked with hyperspatial and hypertemporal mapping.
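Of the five model families compared, standard least squares is the simplest to sketch. The following fits NIR reflectance from RGB digital numbers on synthetic data (the survey data and the winning mixed tanh/Gaussian network are not reproduced here; the linear mixing coefficients below are invented for illustration):

```python
import numpy as np

# Least-squares baseline: NIR reflectance ~ linear function of RGB digital numbers.
rng = np.random.default_rng(0)
rgb = rng.uniform(0, 255, size=(1000, 3))          # synthetic RGB digital numbers
nir = 0.002 * rgb[:, 0] - 0.001 * rgb[:, 1] + 0.003 * rgb[:, 2] + 0.1  # assumed mix

X = np.column_stack([rgb, np.ones(len(rgb))])      # add intercept column
coef, *_ = np.linalg.lstsq(X, nir, rcond=None)     # solve min ||X @ coef - nir||
pred = X @ coef
r2 = 1 - np.sum((nir - pred) ** 2) / np.sum((nir - np.mean(nir)) ** 2)
```

On real reflectance data the relationship is nonlinear, which is why the paper's NN family (R2 = 0.67 at best) outperformed this kind of baseline.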


2021
Author(s): Benoît Tournadre, Benoît Gschwind, Yves-Marie Saint-Drenan, Philippe Blanc

Abstract. We develop a new way to retrieve the cloud index from a large variety of satellite instruments sensitive to reflected solar radiation, carried on both geostationary and non-geostationary platforms. The cloud index is a widely used proxy for the effective cloud transmissivity, also called the clear-sky index. This study is part of the development of the Heliosat-V method for estimating downwelling solar irradiance at the surface of the Earth (DSSI) from satellite imagery. To achieve this versatility, the method uses simulations from a fast radiative transfer model to estimate overcast (cloudy) and clear-sky (cloud-free) satellite scenes of the Earth’s reflectance. The simulations account for the anisotropy of reflectance caused by both the surface and the atmosphere, and are adapted to the spectral sensitivity of the sensor. The anisotropy of ground reflectance is described by a bidirectional reflectance distribution function model and external satellite-derived data. An implementation of the method is applied to visible imagery from a Meteosat Second Generation satellite for 11 locations where high-quality in situ measurements of DSSI are available from the Baseline Surface Radiation Network. Results from our preliminary implementation of Heliosat-V show a correlation coefficient with ground-based measurements reaching 0.948 for 15-minute means of DSSI, similar to operational, corrected satellite-based data products (0.950 for HelioClim3 version 5 and 0.937 for the CAMS Radiation Service).
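The exact Heliosat-V formulation is not given in this abstract, but in the Heliosat family the cloud index is conventionally derived by placing the observed reflectance between the simulated clear-sky and overcast reference reflectances. A minimal sketch, assuming that convention:

```python
def cloud_index(rho, rho_clear, rho_overcast):
    """Heliosat-style cloud index: 0 for a cloud-free scene, 1 for fully overcast.
    rho is the observed reflectance; the clear-sky and overcast references are
    the radiative-transfer simulations described in the abstract."""
    n = (rho - rho_clear) / (rho_overcast - rho_clear)
    return min(max(n, 0.0), 1.0)  # clip to the physical range [0, 1]

def clear_sky_index(n):
    # Common first-order approximation: effective cloud transmissivity ~ 1 - n.
    return 1.0 - n
```

The clear-sky index then scales a clear-sky irradiance model to yield the DSSI estimate that the paper validates against BSRN ground measurements.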


2021, Vol. 13 (2), pp. 301
Author(s): Jinjin Li, Shi Qiu, Yu Zhang, Benyong Yang, Caixia Gao, ...

The Day–Night Band (DNB) imaging sensor of the Visible Infrared Imaging Radiometer Suite (VIIRS) adds nighttime monitoring capability to the Suomi National Polar-Orbiting Partnership and National Oceanic and Atmospheric Administration 20 weather satellites, launched in 2011 and 2017, respectively. Nighttime visible imagery has already found diverse applications, but image quality is often unsatisfactory. In this study, variations in observed top-of-atmosphere (TOA) reflectance were examined in terms of nighttime bidirectional effects. The Antarctica Dome C ground site was selected for its high uniformity. First, the variation of reflectance was characterized as a function of viewing zenith angle, lunar zenith angle, and relative lunar azimuth angle, using DNB data from 2012 to 2020 and Miller–Turner 2009 simulations. The variations were found to be strongly anisotropic, indicating the presence of bidirectional effects. Based on this finding, three popular bidirectional reflectance distribution function (BRDF) models were then evaluated for their effectiveness in correcting these effects in the nighttime images: the observed VIIRS DNB radiance was compared with radiance simulated from each of the three BRDF models under the same geometry. Compared with the RossThick-LiSparseReciprocal (RossLi) and Hudson models, the Warren model has a higher correlation coefficient (0.9899–0.9945) and a lower root-mean-square error (0.0383–0.0487). Moreover, the RossLi and Hudson models may have similar effects in describing the nighttime TOA over Dome C. These findings are potentially useful for evaluating the radiometric calibration stability and consistency of nighttime satellite sensors.
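The two agreement metrics used to rank the BRDF models, Pearson correlation and root-mean-square error between observed and simulated radiances, can be sketched as follows (toy inputs; the actual DNB radiances are not reproduced):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rmse(x, y):
    """Root-mean-square error between observed and simulated values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))
```

Applied to each BRDF model's simulated radiances against the DNB observations, a higher r and lower RMSE indicate a better fit, which is how the Warren model was selected over RossLi and Hudson.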


Author(s): F. Dadras Javan, M. Savadkouhi

Abstract. In the last few years, Unmanned Aerial Vehicles (UAVs) have been used frequently to acquire high-resolution photogrammetric images and, in turn, to produce Digital Surface Models (DSMs) and orthophotos for topography and surface-processing applications. Thermal imaging sensors are mostly used for interpretation and monitoring purposes because of their lower geometric resolution, yet thermal mapping is becoming more important in civil applications since thermal sensors work in conditions where visible cameras cannot, such as fog and nighttime. However, the low geometric quality and resolution of thermal images is the main drawback facing 3D thermal modelling. This study offers a solution to this problem by generating a thermal 3D model with higher spatial resolution through the integration of thermal and visible point clouds. This integration yields a more accurate, denser thermal point cloud and DEM suitable for 3D thermal modelling. The main steps of this study are: generating the thermal and RGB point clouds separately, registering them at coarse and fine levels, and finally transferring thermal information to the high-resolution RGB point cloud by interpolation. Experimental results are presented as a mesh with more faces (by a factor of 23), yielding a higher-resolution textured mesh with thermal information.
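The final interpolation step can be sketched as a nearest-neighbour transfer of thermal values onto the denser RGB cloud. This is a simplification that assumes the two clouds are already registered, and the point data below are synthetic; the paper's actual interpolation scheme may differ:

```python
# Transfer each dense RGB point's thermal value from its nearest thermal point.
def transfer_thermal(rgb_pts, thermal_pts, thermal_vals):
    """rgb_pts, thermal_pts: lists of (x, y, z); thermal_vals: one value per
    thermal point. Returns one interpolated thermal value per RGB point."""
    out = []
    for p in rgb_pts:
        # brute-force nearest neighbour; a KD-tree scales better for real clouds
        d2 = [(p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2
              for q in thermal_pts]
        out.append(thermal_vals[d2.index(min(d2))])
    return out
```

The enriched RGB cloud keeps its high geometric density while every point carries a thermal attribute, which is what makes the higher-resolution textured thermal mesh possible.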


Image fusion is the process of combining an image sequence of the same scene into a single image for better human perception and targeting. The thermal energy captured from prominent objects under poor lighting conditions and the visible information that yields spatial detail need to be fused to improve the performance of surveillance systems. In this paper, we present a fusion technique that helps surveillance systems detect targets when the background and the targets are the same color. A nonparametric, segmentation-based weight-map computation technique is proposed to extract target details from infrared (IR) imagery. The optimal threshold, based on local features, is selected automatically for target detection. The extracted salient target information is then blended into the visible image without introducing distortions. The main advantage of the new technique is that it is based on a single-scale binary map (SSBM) fusion approach: binary weight maps are computed for fusing separable IR targets with visible imagery. An extension to IR and visible color image fusion is also suggested for target localization. Several simulation results on different data sets are demonstrated to support the validity of the proposed technique.
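A minimal sketch of the single-scale binary-map idea, with a fixed threshold standing in for the paper's automatically selected, local-feature-based threshold:

```python
# Binary-map fusion: pixels where the IR image exceeds the threshold are taken
# from IR (target regions); all other pixels keep the visible-image detail.
def fuse(ir, vis, thresh):
    """ir, vis: equal-sized 2D grids of pixel intensities; returns fused grid."""
    return [[ir[r][c] if ir[r][c] > thresh else vis[r][c]
             for c in range(len(ir[0]))]
            for r in range(len(ir))]
```

The binary weight map here is implicit in the `ir[r][c] > thresh` test; hot targets survive into the fused image even when they match the background color in the visible band.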

