Advanced Ultraviolet Radiation and Ozone Retrieval for Applications—Surface Ultraviolet Radiation Products

Atmosphere ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 324
Author(s):  
Antti Lipponen ◽  
Simone Ceccherini ◽  
Ugo Cortesi ◽  
Marco Gai ◽  
Arno Keppens ◽  
...  

AURORA (Advanced Ultraviolet Radiation and Ozone Retrieval for Applications) is a three-year project supported by the European Union within the framework of its H2020 Call (EO-2-2015) for “Stimulating wider research use of Copernicus Sentinel Data”. The project addresses key scientific issues relevant to the synergistic exploitation of data acquired in different spectral ranges by different instruments on board the atmospheric Sentinels. A novel approach, based on the assimilation of geosynchronous equatorial orbit (GEO) and low Earth orbit (LEO) fused products obtained by applying an innovative algorithm to Sentinel-4 (S-4) and Sentinel-5 (S-5) synthetic data, is adopted to assess the quality of the unique ozone vertical profile obtained in a context simulating the operational environment. Priority is then given to the lower atmosphere, with the calculation of tropospheric columns and ultraviolet (UV) surface radiation from the resulting ozone vertical distribution. Here we provide details on the surface UV algorithm of AURORA. Both the UV index (UVI) and the UV-A irradiance are derived from synthetic satellite measurements, which in turn are based on atmospheric scenarios from the MERRA-2 (Modern-Era Retrospective analysis for Research and Applications, Version 2) re-analysis. The UV algorithm is implemented in a software tool integrated into the technological infrastructure developed within AURORA for managing the synthetic data and supporting the data processing. This work was closely linked to the application-oriented activities of the project, aimed at improving the performance and functionality of a downstream application for personal UV dosimetry based on satellite data. The use of synthetic measurements from MERRA-2 also gives us a “ground truth” against which to evaluate the performance of our UV model with varying inputs. In this study we both describe the UV algorithm itself and assess the influence that changes in ozone profiles, due to the fusion and assimilation, can have on surface UV levels.
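
As an illustration of the kind of computation involved, the sketch below shows one common way to obtain a UV index from surface spectral irradiance: weight the spectrum with the erythemal action spectrum and scale the integral by 40 m²/W. This is a minimal sketch, not the AURORA processor; the piecewise weighting uses the commonly quoted McKinlay–Diffey parameterization, and the input spectrum in the usage example is purely hypothetical.

```python
import numpy as np

def erythemal_weight(wavelength_nm):
    """Approximate erythemal action spectrum (McKinlay-Diffey piecewise form)."""
    w = np.asarray(wavelength_nm, dtype=float)
    return np.where(w <= 298.0, 1.0,
           np.where(w <= 328.0, 10.0 ** (0.094 * (298.0 - w)),
                    10.0 ** (0.015 * (140.0 - w))))

def uv_index(wavelength_nm, spectral_irradiance):
    """UV index = 40 m^2/W times the erythemally weighted irradiance.

    spectral_irradiance: surface spectral irradiance in W m^-2 nm^-1,
    e.g. the output of a radiative transfer run for a given ozone profile.
    """
    weighted = spectral_irradiance * erythemal_weight(wavelength_nm)
    return 40.0 * np.trapz(weighted, wavelength_nm)

# Illustrative use with a made-up spectrum (not real data):
wl = np.arange(286.0, 400.0, 0.5)
irr = 1.0e-3 * np.exp(-((400.0 - wl) / 60.0) ** 2)   # hypothetical spectrum
print(f"UVI = {uv_index(wl, irr):.2f}")
```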

Atmosphere ◽  
2018 ◽  
Vol 9 (11) ◽  
pp. 454 ◽  
Author(s):  
Ugo Cortesi ◽  
Simone Ceccherini ◽  
Samuele Del Bianco ◽  
Marco Gai ◽  
Cecilia Tirelli ◽  
...  

With the launch of the Sentinel-5 Precursor (S-5P, lifted off on 13 October 2017) and of the Sentinel-4 (S-4) and Sentinel-5 (S-5) operational missions (from 2021 and 2023 onwards, respectively) of the ESA/EU Copernicus program, a massive amount of atmospheric composition data of unprecedented quality will become available from geostationary (GEO) and low Earth orbit (LEO) observations. Enhanced observational capabilities are expected to foster deeper insight than ever before into key issues relevant for air quality, stratospheric ozone, solar radiation, and climate. A major potential strength of the Sentinel observations lies in the exploitation of complementary information that originates from simultaneous and independent satellite measurements of the same air mass. The core purpose of the AURORA (Advanced Ultraviolet Radiation and Ozone Retrieval for Applications) project is to investigate this exploitation through a novel approach for merging data acquired in different spectral regions on board the GEO and LEO platforms. A data processing chain is implemented and tested on synthetic observations. A new fusion algorithm combines the ultraviolet, visible, and thermal infrared ozone products into S-4 and S-5(P) fused profiles. These fused products are then ingested into state-of-the-art data assimilation systems to obtain a unique ozone profile in both analysis and forecast modes. A comparative evaluation and validation of the assimilation of fused products versus the assimilation of the operational products will seek to demonstrate the improvements achieved by the proposed approach. This contribution provides a first general overview of the project, and discusses both the challenges of developing a technological infrastructure for implementing the AURORA concept and the potential for applications of AURORA-derived products, such as tropospheric ozone and UV surface radiation, in sectors such as air quality monitoring and health.
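
In its simplest form, the fusion step can be pictured as an inverse-covariance weighted combination of two retrieved profiles on a common vertical grid. The sketch below is only that simplified picture, assuming unbiased and independent retrievals; a full fusion algorithm such as the one developed in the project additionally has to account for the retrievals' averaging kernels and a priori information.

```python
import numpy as np

def fuse_profiles(x1, S1, x2, S2):
    """Inverse-covariance (maximum-likelihood) combination of two retrievals.

    x1, x2: retrieved ozone profiles on a common vertical grid
    S1, S2: their error covariance matrices
    Returns the fused profile and its covariance.
    """
    S1_inv = np.linalg.inv(S1)
    S2_inv = np.linalg.inv(S2)
    S_f = np.linalg.inv(S1_inv + S2_inv)          # fused covariance
    x_f = S_f @ (S1_inv @ x1 + S2_inv @ x2)       # fused profile
    return x_f, S_f
```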


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
João Lobo ◽  
Rui Henriques ◽  
Sara C. Madeira

Abstract Background Three-way data have been gaining popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with the state of the art is paramount. These comparisons are usually performed using real data, without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of further providing the ground truth (the triclustering solution) as output. Results G-Tric can replicate real-world datasets and create new ones that match researchers’ needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters. Conclusions Triclustering evaluation using G-Tric provides the possibility to combine both intrinsic and extrinsic metrics when comparing solutions, producing more reliable analyses. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric’s potential to advance the triclustering state of the art by easing the process of evaluating the quality of new triclustering approaches.
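
To make the idea of a planted tricluster concrete, the toy sketch below (not G-Tric itself) builds a noisy three-way array and plants a single constant-valued tricluster whose indices serve as the ground-truth solution; G-Tric generalizes this to configurable patterns, structures, overlapping, and quality settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Background: 100 observations x 50 features x 8 contexts of standard normal noise
data = rng.normal(size=(100, 50, 8))

# Plant one roughly constant-valued tricluster on a chosen subspace
obs_idx = rng.choice(100, size=15, replace=False)
feat_idx = rng.choice(50, size=6, replace=False)
ctx_idx = rng.choice(8, size=3, replace=False)
data[np.ix_(obs_idx, feat_idx, ctx_idx)] = 4.0 + rng.normal(scale=0.2, size=(15, 6, 3))

# The planted indices are the ground-truth triclustering solution
solution = {"observations": obs_idx, "features": feat_idx, "contexts": ctx_idx}
```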


2021 ◽  
Vol 13 (7) ◽  
pp. 1238
Author(s):  
Jere Kaivosoja ◽  
Juho Hautsalo ◽  
Jaakko Heikkinen ◽  
Lea Hiltunen ◽  
Pentti Ruuttunen ◽  
...  

The development of UAV (unmanned aerial vehicle) imaging technologies for precision farming applications is rapid, and new studies are published frequently. In cases where measurements are based on aerial imaging, there is a need for ground truth or reference data in order to develop reliable applications. However, in several precision farming use cases, such as pest, weed, and disease detection, the reference data can be subjective or relatively difficult to capture. Furthermore, the collection of reference data is usually laborious and time consuming. It also appears difficult to develop generalisable solutions for these areas. This review studies previous research related to pest, weed, and disease detection and mapping using UAV imaging in the precision farming context, with a focus on the applied reference measurement techniques. The majority of the reviewed studies utilised subjective visual observations of UAV images, and only a few applied in situ measurements. The conclusion of the review is that there is a lack of quantitative and repeatable reference measurement solutions in the areas of mapping pests, weeds, and diseases. In addition, the results that the studies present should be interpreted in light of the references applied. A future option could be the use of synthetic data as a reference.
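
As a concrete example of what a quantitative, repeatable reference enables, the sketch below scores a UAV-derived detection map against a reference map at the pixel (or plot) level; the boolean-mask representation and function names are illustrative assumptions, not a protocol taken from the reviewed studies.

```python
import numpy as np

def detection_scores(predicted, reference):
    """Precision and recall of a UAV-derived weed/pest/disease map against a
    quantitative reference map (both boolean arrays of equal shape)."""
    tp = np.logical_and(predicted, reference).sum()
    fp = np.logical_and(predicted, ~reference).sum()
    fn = np.logical_and(~predicted, reference).sum()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```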


2020 ◽  
Vol 12 (13) ◽  
pp. 2137 ◽  
Author(s):  
Ilinca-Valentina Stoica ◽  
Marina Vîrghileanu ◽  
Daniela Zamfir ◽  
Bogdan-Andrei Mihai ◽  
Ionuț Săvulescu

Monitoring uncontained built-up area expansion remains a complex challenge for the development and implementation of a sustainable planning system. In this regard, proper planning requires accurate monitoring tools and up-to-date information on rapid territorial transformations. The purpose of the study was to assess built-up area expansion by comparing two freely available and widely used datasets, Corine Land Cover and Landsat, to each other as well as to the ground truth, with the goal of identifying the most cost-effective and reliable tool. The analysis was based on the largest post-socialist city in the European Union, the capital of Romania, Bucharest, and its neighboring Ilfov County, from 1990 to 2018. This study represents a new approach to measuring the process of urban expansion, offering insights into the strengths and limitations of the two datasets through a multi-level territorial perspective. The results point out discrepancies between the datasets, both at the macro-scale level and at the level of individual administrative units. At the macro-scale level, despite the noticeable differences, the two datasets revealed the spatiotemporal magnitude of the expansion of the built-up area and can be a useful tool for supporting the decision-making process. At the smaller territorial scale, detailed comparative analyses of five case studies were conducted, indicating that, if either dataset is used alone, the limitations of the information that can be derived from it lead to inaccuracies, significantly limiting its potential use in the development of enforceable urban planning regulation.
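
A simplified sketch of the kind of comparison involved is given below: built-up expansion is computed from binary built-up masks at two dates, and the agreement between two datasets is measured pixel by pixel. It deliberately ignores, among other things, the different minimum mapping units of Corine Land Cover and Landsat-derived maps, so it is an illustrative assumption rather than the study's actual workflow.

```python
import numpy as np

def builtup_expansion_km2(mask_t0, mask_t1, pixel_area_m2):
    """Built-up expansion between two dates from binary built-up masks."""
    new_builtup = np.logical_and(mask_t1, ~mask_t0)
    return new_builtup.sum() * pixel_area_m2 / 1.0e6   # km^2

def dataset_agreement(mask_a, mask_b):
    """Share of pixels on which two built-up datasets agree (e.g. CLC vs Landsat)."""
    return float(np.mean(mask_a == mask_b))
```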


2013 ◽  
Vol 2013 ◽  
pp. 1-15 ◽  
Author(s):  
Tomi Kauppi ◽  
Joni-Kristian Kämäräinen ◽  
Lasse Lensu ◽  
Valentina Kalesnykiene ◽  
Iiris Sorri ◽  
...  

We address the performance evaluation practices for developing medical image analysis methods, in particular how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, the execution of thorough method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation, which helps to collect the class label, spatial span, and expert confidence for lesions, and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions.
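
One simple way to combine manual segmentations from multiple experts is a confidence-weighted vote over binary lesion masks, as sketched below; this is an illustrative assumption, not necessarily the combination rule used for DiaRetDB1.

```python
import numpy as np

def fuse_expert_masks(masks, confidences, threshold=0.5):
    """Combine binary lesion masks from several experts into one reference mask.

    masks: array of shape (n_experts, H, W) with values in {0, 1}
    confidences: per-expert confidence weights in [0, 1]
    A pixel is kept when the confidence-weighted vote exceeds the threshold.
    """
    masks = np.asarray(masks, dtype=float)
    w = np.asarray(confidences, dtype=float)
    vote = np.tensordot(w, masks, axes=1) / w.sum()   # weighted vote per pixel
    return vote >= threshold
```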


Author(s):  
Wei Sun ◽  
Ethan Stoop ◽  
Scott S. Washburn

Florida’s interstate rest areas are heavily utilized by commercial trucks for overnight parking. Many of these rest areas regularly experience 100% utilization of available commercial truck parking spaces during the evening and early-morning hours. Being able to communicate the availability of commercial truck parking spaces to drivers in advance of arriving at a rest area would reduce unnecessary stops at full rest areas as well as driver anxiety. In order to do this, it is critical to implement a vehicle detection technology that correctly reflects the parking status of the rest area. The objective of this project was to evaluate three different wireless in-pavement vehicle detection technologies as applied to commercial truck parking at interstate rest areas. This paper mainly focuses on the following aspects: (a) accuracy of the vehicle detection in parking spaces, (b) installation, setup, and maintenance of the vehicle detection technology, and (c) truck parking trends at the rest area study site. The final project report includes a more detailed summary of the evaluation. The research team recorded video of the rest areas as the ground-truth data and developed a software tool to compare the video data with the parking sensor data. Two accuracy tests (event accuracy and occupancy accuracy) were conducted to evaluate each sensor’s ability to reflect the status of each parking space correctly. Overall, it was found that all three technologies performed well, with accuracy rates of 95% or better for both tests. This result suggests that, for implementation, pricing and/or maintenance issues may be more significant factors in the choice of technology.
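
The two accuracy measures can be sketched as follows, assuming per-space occupancy time series and timestamped arrival/departure events; the exact matching rules and tolerances used in the project may differ.

```python
import numpy as np

def occupancy_accuracy(sensor_states, truth_states):
    """Fraction of time steps on which the sensor matches the video ground truth.

    Both inputs: boolean arrays of shape (n_spaces, n_time_steps), True = occupied.
    """
    return float(np.mean(sensor_states == truth_states))

def event_accuracy(sensor_events, truth_events, tolerance_s=60.0):
    """Share of ground-truth arrival/departure events matched by a sensor event
    within a time tolerance (seconds); inputs are event timestamps in seconds."""
    sensor_events = np.asarray(sensor_events, dtype=float)
    matched = sum(np.any(np.abs(sensor_events - t) <= tolerance_s) for t in truth_events)
    return matched / len(truth_events) if len(truth_events) else 1.0
```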


Symmetry ◽  
2019 ◽  
Vol 11 (2) ◽  
pp. 227
Author(s):  
Eckart Michaelsen ◽  
Stéphane Vujasinovic

Representative input data are a necessary requirement for the assessment of machine-vision systems. For symmetry-seeing machines in particular, such imagery should provide symmetries as well as asymmetric clutter. Moreover, there must be reliable ground truth with the data. It should be possible to estimate the recognition performance and the computational efforts by providing different grades of difficulty and complexity. Recent competitions used real imagery labeled by human subjects with appropriate ground truth. The paper at hand proposes to use synthetic data instead. Such data contain symmetry, clutter, and nothing else. This is preferable because interference with other perceptive capabilities, such as object recognition, or prior knowledge, can be avoided. The data are given sparsely, i.e., as sets of primitive objects. However, images can be generated from them, so that the same data can also be fed into machines requiring dense input, such as multilayered perceptrons. Sparse representations are preferred, because the author’s own system requires such data, and in this way, any influence of the primitive extraction method is excluded. The presented format allows hierarchies of symmetries. This is important because hierarchy constitutes a natural and dominant part in symmetry-seeing. The paper reports some experiments using the author’s Gestalt algebra system as symmetry-seeing machine. Additionally included is a comparative test run with the state-of-the-art symmetry-seeing deep learning convolutional perceptron of the PSU. The computational efforts and recognition performance are assessed.
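
A toy version of such a dataset can be produced as sketched below: a sparse set of point primitives with one mirror symmetry plus asymmetric clutter, together with the corresponding ground truth, optionally rasterized for machines that require dense input. The sizes, ranges, and rasterization are illustrative assumptions, not the proposed format.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse primitives: points mirrored about the vertical axis x = 0, plus clutter
half = rng.uniform(low=(0.1, -1.0), high=(1.0, 1.0), size=(20, 2))
symmetric = np.vstack([half, half * np.array([-1.0, 1.0])])   # mirror in x
clutter = rng.uniform(-1.0, 1.0, size=(30, 2))
primitives = np.vstack([symmetric, clutter])
ground_truth = {"axis": "x = 0", "symmetric_points": symmetric}

# Dense view for pixel-based machines: rasterize the primitives into an image
img = np.zeros((128, 128))
cols = ((primitives[:, 0] + 1.0) / 2.0 * 127).astype(int)
rows = ((1.0 - (primitives[:, 1] + 1.0) / 2.0) * 127).astype(int)
img[rows, cols] = 1.0
```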


Author(s):  
Kévin Lamy ◽  
Marion Ranaivombola ◽  
Hassan Bencherif ◽  
Thierry Portafaix ◽  
Mohamed Abdoulwahab Toihir ◽  
...  

As part of the UV-Indien project, a station for measuring ultraviolet radiation and the cloud fraction was installed in December 2019 in Moroni, the capital of the Comoros, situated on the west coast of the island of Ngazidja. A ground measurement campaign was also carried out on 12 January 2020 during the ascent of Mount Karthala, located in the center of the island of Ngazidja. In addition, satellite estimates (Ozone Monitoring Instrument and TROPOspheric Monitoring Instrument) and model outputs (Copernicus Atmospheric Monitoring Service and Tropospheric Ultraviolet Model) were combined for this same region. On the one hand, these different measurements and estimates make it possible to quantify, evaluate, and monitor the health risk linked to exposure to ultraviolet radiation in this region and, on the other, they help to understand how cloud cover influences the variability of UV-radiation on the ground. The measurements of the Ozone Monitoring Instrument onboard the EOS-AURA satellite, being the longest timeseries of ultraviolet measurements available in this region, make it possible to quantify the meteorological conditions in Moroni and to show that more than 80% of the ultraviolet indices are classified as high, and that 60% of these are classified as extreme. The cloud cover measured in Moroni by an All Sky Camera was used to distinguish between the cases of UV index measurements taken under clear or cloudy sky conditions. The ground-based measurements thus made it possible to describe the variability of the diurnal cycle of the UV index and the influence of cloud cover on this parameter. They also permitted the satellite measurements and the results of the simulations to be validated. In clear sky conditions, a relative difference of between 6 and 11% was obtained between satellite or model estimates and ground measurements. The ultraviolet index measurement campaign on Mount Karthala showed maximum one-minute standard erythemal doses at 0.3 J·m−2 and very high daily cumulative erythemal doses, at more than 80 J·m−2. These very high levels are also observed throughout the year and all skin phototypes can exceed the daily erythemal dose threshold, at more than 20 J·m−2.
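
For reference, a cumulative erythemal dose can be obtained from a UV-index time series as sketched below, using the standard relations UVI = 40 m²/W × erythemal irradiance and 1 SED = 100 J·m⁻²; the sampling step and function name are assumptions for illustration.

```python
import numpy as np

def cumulative_erythemal_dose(uvi_series, step_seconds=60.0):
    """Cumulative erythemal dose from a UV-index time series.

    UVI = 40 m^2/W x erythemal irradiance, so irradiance = UVI / 40 (W m^-2).
    One standard erythemal dose (SED) corresponds to 100 J m^-2.
    Returns (dose in J m^-2, dose in SED).
    """
    irradiance = np.asarray(uvi_series, dtype=float) / 40.0   # W m^-2
    dose = float(np.sum(irradiance) * step_seconds)            # J m^-2
    return dose, dose / 100.0
```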


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3784 ◽  
Author(s):  
Jameel Malik ◽  
Ahmed Elhayek ◽  
Didier Stricker

Hand shape and pose recovery is essential for many computer vision applications, such as the animation of a personalized hand mesh in a virtual environment. Although there are many hand pose estimation methods, only a few deep learning based algorithms target 3D hand shape and pose from a single RGB or depth image. Jointly estimating hand shape and pose is very challenging because none of the existing real benchmarks provides ground truth hand shape. For this reason, we propose a novel weakly-supervised approach for 3D hand shape and pose recovery (named WHSP-Net) from a single depth image by learning shapes from unlabeled real data and labeled synthetic data. To this end, we propose a novel framework which consists of three novel components. The first is a Convolutional Neural Network (CNN) based deep network which produces 3D joint positions from learned 3D bone vectors using a new layer. The second is a novel shape decoder that recovers a dense 3D hand mesh from sparse joints. The third is a novel depth synthesizer which reconstructs a 2D depth image from the 3D hand mesh. The whole pipeline is fine-tuned in an end-to-end manner. We demonstrate that our approach recovers reasonable hand shapes from real-world datasets as well as from a live depth camera stream in real time. Our algorithm outperforms state-of-the-art methods that output more than the joint positions and shows competitive performance on the 3D pose estimation task.
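
The idea of deriving joint positions from bone vectors can be illustrated by accumulating each bone vector along a kinematic tree, as in the minimal sketch below; this is not the WHSP-Net layer itself, and the parent ordering is an assumption.

```python
import numpy as np

def joints_from_bones(bone_vectors, parents, root=(0.0, 0.0, 0.0)):
    """Accumulate 3D bone vectors along a kinematic tree to obtain joint positions.

    bone_vectors: (n_joints, 3) array, the vector from each joint's parent to the joint
    parents: parent index per joint, -1 for the root; parents assumed to precede children
    """
    n = len(parents)
    joints = np.zeros((n, 3))
    for j in range(n):
        if parents[j] < 0:
            joints[j] = np.asarray(root, dtype=float)
        else:
            joints[j] = joints[parents[j]] + bone_vectors[j]
    return joints
```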


2020 ◽  
Vol 21 (S1) ◽  
Author(s):  
Daniel Ruiz-Perez ◽  
Haibin Guan ◽  
Purnima Madhivanan ◽  
Kalai Mathee ◽  
Giri Narasimhan

Abstract Background Partial Least-Squares Discriminant Analysis (PLS-DA) is a popular machine learning tool that is gaining increasing attention as a useful feature selector and classifier. In an effort to understand its strengths and weaknesses, we performed a series of experiments with synthetic data and compared its performance to that of its close relative, from which it was originally derived, namely Principal Component Analysis (PCA). Results We demonstrate that even though PCA ignores the information regarding the class labels of the samples, this unsupervised tool can be remarkably effective as a feature selector. In some cases, it outperforms PLS-DA, which is made aware of the class labels in its input. Our experiments range from looking at the signal-to-noise ratio in the feature selection task to considering many practical distributions and models encountered when analyzing bioinformatics and clinical data. Other methods were also evaluated. Finally, we analyzed an interesting data set from 396 vaginal microbiome samples where the ground truth for the feature selection was available. All the 3D figures shown in this paper, as well as the supplementary ones, can be viewed interactively at http://biorg.cs.fiu.edu/plsda. Conclusions Our results highlighted the strengths and weaknesses of PLS-DA in comparison with PCA for different underlying data models.
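
A minimal version of such an experiment, using scikit-learn and ranking features by the first-component PCA loadings and PLS-DA x-weights (an illustrative choice, not necessarily the paper's protocol), could look like this:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, p, informative = 200, 50, 5

# Synthetic two-class data: only the first `informative` features carry class signal
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, p))
X[:, :informative] += 2.0 * y[:, None]

# Rank features by PCA loadings (unsupervised) and PLS-DA weights (supervised)
pca_rank = np.argsort(-np.abs(PCA(n_components=2).fit(X).components_[0]))
pls = PLSRegression(n_components=2).fit(X, y.astype(float))
pls_rank = np.argsort(-np.abs(pls.x_weights_[:, 0]))

print("PCA    top features:", pca_rank[:informative])
print("PLS-DA top features:", pls_rank[:informative])
```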

