Adaptive Spatio-Temporal Video Noise Filtering for High Quality Applications

Author(s):  
Sitaram Bhagavathy ◽  
Joan Llach

2021 ◽
Vol 2094 (4) ◽  
pp. 042004
Author(s):  
Ya A Ivakin ◽  
E G Semenova ◽  
A G Ruchev ◽  
M S Smirnova

Abstract. Geochronological tracking has gained wide recognition as a scientific and methodological tool and an effective information technology for qualimetric research, serving to ensure high quality of transport services and transportation efficiency and to analyse cases in which the population's needs for spatially remote services are insufficiently met. On the basis of geochronological tracking, a procedure has been developed for the statistical verification of research hypotheses about stable trends in the quality of various spatio-temporal processes. The reliability and validity of accepting a particular hypothesis in a qualimetric study depend on the representativeness of the volume of initial data on geographical movements, considered as a sample from the general population. This article analyses this dependence and develops an algorithm for assessing the stability (significance) of such trends.
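The verification procedure itself is not spelled out in the abstract. As a purely illustrative sketch, not the authors' algorithm, a hypothesis about a stable trend in a spatio-temporal quality series can be tested with a non-parametric Mann–Kendall test; note how the attainable significance depends directly on the sample size n, mirroring the representativeness argument above. All names and thresholds below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series, alpha=0.05):
    """Non-parametric Mann-Kendall test for a monotonic trend.

    Returns (trend_detected, p_value). The power of the test grows with
    the sample size n, i.e. with the representativeness of the sample of
    geographical-movement data.
    """
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S statistic: signs of all pairwise forward differences
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    # Variance of S under the no-trend null hypothesis (ties ignored for brevity)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value
    return p < alpha, p

# Example: monthly quality-of-service scores sampled along geochronological tracks
rng = np.random.default_rng(0)
scores = 0.02 * np.arange(48) + rng.normal(0, 0.5, 48)
print(mann_kendall(scores))
```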


1989 ◽  
Author(s):  
A. K. Katsaggelos ◽  
J. N. Driessen ◽  
S. N. Efstratiadis ◽  
R. L. Lagendijk

2019 ◽  
Vol 1 ◽  
pp. 1-2
Author(s):  
Magnus Heitzler ◽  
Lorenz Hurni

Abstract. Thoroughly prepared historical map data can facilitate research in a wide range of domains, including ecology and hydrology (e.g., for preservation and renaturation), urban planning and architecture (e.g., to analyse settlement development), geology and insurance (e.g., to derive indicators of past natural hazards to estimate future events), and even linguistics (e.g., to explore the evolution of toponyms). Research groups in Switzerland have invested large amounts of time and money to manually derive features (e.g., pixel-based segmentations, vectorizations) from historical maps such as the Dufour Map Series (1845–1865) or the Siegfried Map Series (1872–1949). The results of these efforts typically cover limited areas of the respective map series and are tailored to specific research questions.

Recent research in automated data extraction from historical maps shows that Deep Learning (DL) methods based on Artificial Neural Networks (ANN) might significantly reduce this manual workload (Uhl et al. (2017), Heitzler et al. (2018)). Yet, efficiently exploiting DL methods to provide high-quality features requires detailed knowledge of the underlying mathematical concepts and software libraries, high-performance hardware to train models in a timely manner, and sufficient amounts of data.

Hence, a new initiative at the Institute of Cartography and Geoinformation (IKG) at ETH Zurich aims to establish a hub to systematically bundle the efforts of the many Swiss institutes working with historical map data and to provide the computational capabilities to efficiently extract the desired features from the vast collection of Swiss historical maps. This is primarily achieved by providing a spatial data infrastructure (SDI), which integrates a geoportal with a DL environment (see Figure 1).

The SDI builds on top of the geoportal geodata4edu.ch (G4E), which was established to give Swiss academic institutions access to federal and cantonal geodata. G4E inherently supports the integration and exploration of spatio-temporal data via an easy-to-use web interface and common web services, and hence is an ideal choice for sharing historical map data. The DL environment is realized using state-of-the-art software libraries (e.g., TensorFlow, Keras) and suitable hardware (e.g., NVIDIA GPUs). Existing project data generated by the Swiss scientific community serve as the initial set to train a DL model for a specific thematic layer; if such data do not exist, they are generated manually. Combining these data with georeferenced sheets of the corresponding map series allows the DL system to learn how to obtain the expected results from an input map sheet. In the common case where an actual vectorization of a thematic layer is required, two steps are taken: first, the underlying ANN architecture yields a segmentation of the map sheet that determines which pixels are part of the feature type of interest (e.g., using a fully convolutional architecture such as U-Net (Ronneberger et al. (2015))); second, the resulting segmentations are vectorized using GIS algorithms (e.g., methods as described in Hori & Okazaki (1992)). These vectorizations undergo a quality check and may be published directly in G4E if the quality is considered high enough. In addition, the results may be manually corrected. A corrected dataset may have greater value for the scientific community but can be time-consuming to create. However, it also has the advantage of serving as additional training data for the DL system. This may lead to a positive feedback loop, which allows the ANN to gradually improve its predictions, which in turn improves the vectorization results and hence reduces the correction workload. Figure 2 shows automatically generated vectorizations of building footprints after two such iterations. Special emphasis was put on enforcing perpendicularity without requiring human intervention. At the time of writing, such building polygons have been generated for all Siegfried map sheets.

It is worth emphasizing that demonstrating the ability to generate high-quality features for single thematic layers at a large scale, and making them easily available to the scientific community, is a key aspect of establishing a hub for sharing historical map data. Research groups are more willing to share their data if they see that the coverage of the data they produce might be multiplied, and if they realize that other groups are providing their data as well. Apart from the benefits for research groups using such data, such an environment also facilitates the development of new methods to derive features from historical maps (e.g., for extraction, generalization). The current focus lies on the systematic preparation of all thematic layers of the main Swiss map series. Afterwards, the aim is to place greater emphasis on the fusion of the extracted layers. In the long term, these efforts will lead to a comprehensive spatio-temporal database of high scientific value for the Swiss scientific community.
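As a rough illustration of the two-step segment-then-vectorize pipeline described above (the concrete architecture and tooling of the IKG system are not given here), the following sketch pairs a minimal U-Net-style Keras segmenter with rasterio-based polygon extraction. TensorFlow/Keras and the U-Net reference come from the abstract; rasterio, the function names, and all layer sizes are assumptions, and the perpendicularity enforcement mentioned in the text is omitted.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from rasterio.features import shapes
from rasterio.transform import from_origin

def tiny_unet(size=256, channels=3):
    """Minimal U-Net-style segmenter: a single down/up level for brevity."""
    inp = layers.Input((size, size, channels))
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)                       # encoder: downsample
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D()(c2)                       # decoder: upsample
    m1 = layers.Concatenate()([u1, c1])                  # skip connection
    out = layers.Conv2D(1, 1, activation="sigmoid")(m1)  # per-pixel probability
    return tf.keras.Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(map_tiles, feature_masks, ...)  # trained on existing project data

def vectorize(mask, transform):
    """Step 2: turn a binary segmentation raster into GeoJSON-like polygons."""
    return [geom for geom, val in shapes(mask.astype(np.uint8), transform=transform)
            if val == 1]

# Usage with a dummy mask and a georeferencing transform (values hypothetical)
mask = np.zeros((256, 256), dtype=np.uint8)
mask[100:140, 60:90] = 1  # one "building"
print(vectorize(mask, from_origin(2600000, 1200000, 1.25, 1.25)))
```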


Author(s):  
S. Aigner ◽  
M. Körner

Abstract. We introduce a new encoder-decoder GAN model, FutureGAN, that predicts future frames of a video sequence conditioned on a sequence of past frames. During training, the networks receive only the raw pixel values as input, without relying on additional constraints or dataset-specific conditions. To capture both the spatial and temporal components of a video sequence, spatio-temporal 3d convolutions are used in all encoder and decoder modules. Further, we utilize concepts of the existing progressively growing GAN (PGGAN), which achieves high-quality results in generating high-resolution single images; the FutureGAN model extends this concept to the complex task of video prediction. We conducted experiments on three different datasets: MovingMNIST, KTH Action, and Cityscapes. Our results show that, for all three datasets, the model effectively learned representations that transform the information of an input sequence into a plausible future sequence. The main advantage of the FutureGAN framework is that it is applicable to various datasets without additional changes, whilst achieving stable results that are competitive with the state of the art in video prediction. The code to reproduce the results of this paper is publicly available at https://github.com/TUM-LMF/FutureGAN.
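For orientation, here is a minimal fixed-resolution PyTorch sketch of an encoder-decoder generator built from spatio-temporal 3d convolutions, the core idea named in the abstract. The actual FutureGAN grows both networks progressively PGGAN-style and trains against a discriminator, so the class name, layer counts, and sizes below are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class Future3DGenerator(nn.Module):
    """Encoder-decoder over (batch, channels, time, height, width) tensors.

    A fixed-resolution toy stand-in for the 3d-convolutional encoder-decoder
    idea; the real FutureGAN grows both networks progressively (PGGAN-style).
    """
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            # kernels span time and space; stride halves only the spatial dims
            nn.Conv3d(3, ch, kernel_size=(3, 4, 4), stride=(1, 2, 2), padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(ch, ch * 2, kernel_size=(3, 4, 4), stride=(1, 2, 2), padding=1),
            nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(ch * 2, ch, kernel_size=(3, 4, 4), stride=(1, 2, 2), padding=1),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose3d(ch, 3, kernel_size=(3, 4, 4), stride=(1, 2, 2), padding=1),
            nn.Tanh(),  # raw pixel values, normalized to [-1, 1]
        )

    def forward(self, past):                      # past: (B, 3, T, H, W)
        return self.decoder(self.encoder(past))   # predicted future frames

gen = Future3DGenerator()
past = torch.randn(2, 3, 6, 64, 64)  # two clips of six 64x64 RGB past frames
print(gen(past).shape)               # torch.Size([2, 3, 6, 64, 64])
```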


Author(s):  
RASTISLAV LUKAC ◽  
PAVOL GALAJDA ◽  
ALENA GALAJDOVA

This paper focuses on impulsive noise filtering and outlier rejection in gray-scale images. The proposed method combines neural networks, lower-upper-middle (LUM) smoothers, and adaptive switching operations to produce a high-quality enhanced image. Extensive experimentation reported in this paper indicates that the proposed method is robust, achieves an excellent balance between noise suppression and signal-detail preservation, and outperforms several well-known filters both subjectively and objectively.
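The paper's neural-network detector and adaptive switching scheme are not reproduced here, but the LUM smoother itself is simple enough to sketch: with the N window samples sorted, the center pixel is clipped between the k-th smallest and k-th largest samples, so k=1 leaves the pixel unchanged and k=(N+1)/2 reduces to the plain median filter. Function names and defaults below are illustrative, a minimal NumPy sketch rather than the authors' filter.

```python
import numpy as np

def lum_smooth(img, k=2, w=3):
    """LUM smoother: clip each pixel to [r_(k), r_(N-k+1)] of its w x w window.

    k=1 is the identity filter; k=(w*w+1)//2 is the plain median filter,
    so k trades detail preservation against smoothing strength.
    """
    pad = w // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    n = w * w
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = np.sort(padded[i:i + w, j:j + w].ravel())
            lower, upper = window[k - 1], window[n - k]   # r_(k), r_(N-k+1)
            out[i, j] = np.clip(img[i, j], lower, upper)  # = med(lower, x*, upper)
    return out

# A single impulse on a smooth ramp is rejected while the ramp itself survives
img = np.tile(np.arange(8, dtype=float), (8, 1))
img[4, 4] = 255.0  # impulsive outlier
print(lum_smooth(img)[4, 4])  # 5.0, clipped back toward its neighborhood
```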


2018 ◽  
Vol 15 ◽  
pp. 31-37 ◽  
Author(s):  
Uwe Pfeifroth ◽  
Jedrzej S. Bojanowski ◽  
Nicolas Clerbaux ◽  
Veronica Manara ◽  
Arturo Sanchez-Lorenzo ◽  
...  

Abstract. Solar radiation is the main driver of the Earth's climate. Measuring solar radiation and analysing its interaction with clouds are essential for understanding the climate system. The EUMETSAT Satellite Application Facility on Climate Monitoring (CM SAF) generates satellite-based, high-quality climate data records, with a focus on the energy balance and water cycle. Here, several of these data records are analysed in a common framework to assess the consistency in trends and spatio-temporal variability of surface solar radiation, top-of-atmosphere reflected solar radiation, and cloud fraction. This multi-parameter analysis focuses on Europe and covers the period from 1992 to 2015. A high correlation between these three variables is found over Europe. The climate data records are overall consistent, revealing an increase in surface solar radiation and a decrease in top-of-atmosphere reflected radiation; these trends are confirmed by negative trends in cloud cover. This consistency documents the high quality and stability of the CM SAF climate data records, which are mostly derived independently of each other. The results of this study indicate that one of the main reasons for the positive trend in surface solar radiation since the 1990s is a decrease in cloud cover, even if an aerosol contribution cannot be completely ruled out.
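The consistency check described above can be caricatured in a few lines: fit linear trends to annual series of the three variables and inspect their signs and cross-correlations. The synthetic series below merely stand in for the CM SAF records, which this sketch does not access; trend magnitudes and noise levels are invented.

```python
import numpy as np

years = np.arange(1992, 2016)
rng = np.random.default_rng(1)

# Synthetic annual anomalies standing in for the three CM SAF variables
cloud_fraction = -0.15 * (years - 1992) + rng.normal(0, 0.8, years.size)
surface_solar  = -0.60 * cloud_fraction + rng.normal(0, 0.5, years.size)
toa_reflected  =  0.50 * cloud_fraction + rng.normal(0, 0.5, years.size)

for name, series in [("cloud fraction", cloud_fraction),
                     ("surface solar radiation", surface_solar),
                     ("TOA reflected radiation", toa_reflected)]:
    slope = np.polyfit(years, series, 1)[0]  # linear trend per year
    print(f"{name}: trend = {slope:+.3f} / yr")

# Consistency: surface solar radiation should anti-correlate with both
# cloud fraction and TOA reflected radiation
print(np.corrcoef([surface_solar, toa_reflected, cloud_fraction]).round(2))
```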

