temporal averaging
Recently Published Documents

TOTAL DOCUMENTS: 143 (five years: 31)
H-INDEX: 20 (five years: 1)

2022 · Vol 29 (1)
Author(s): Fucheng Yu, Feixiang Wang, Ke Li, Guohao Du, Biao Deng, ...

Rodents are used extensively as animal models for the preclinical investigation of microvascular-related diseases. However, motion artifacts in currently available imaging methods preclude real-time observation of microvessels in vivo. In this paper, a pixel temporal averaging (PTA) method that enables real-time imaging of microvessels in the mouse brain in vivo is described. Experiments using live mice demonstrated that PTA efficiently eliminated motion artifacts and random noise, resulting in significant improvements in the contrast-to-noise ratio. The time needed for image reconstruction using PTA on an ordinary computer was 250 ms, highlighting the capability of the PTA method for real-time angiography. In addition, experiments performed with less than one-quarter of the photon flux used in conventional angiography verified that motion artifacts and random noise were still suppressed and microvessels were successfully identified using PTA, whereas conventional temporal subtraction and averaging methods were ineffective. Experiments performed with an X-ray tube verified that the PTA method can also be applied to microvessel imaging of the mouse brain using a laboratory X-ray source. In conclusion, the proposed PTA method may facilitate the real-time investigation of cerebral microvascular-related diseases using small animal models.
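
The abstract does not spell out the full PTA pipeline, but the core idea of pixel-wise temporal averaging over a stack of frames can be sketched as follows; the frame-stack layout, window length and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pixel_temporal_average(frames: np.ndarray, window: int = 8) -> np.ndarray:
    """Average each pixel over a sliding temporal window to suppress
    random noise and motion artifacts.

    frames : array of shape (T, H, W), a stack of angiographic frames
             (illustrative assumption; the paper's exact pipeline may differ).
    window : number of consecutive frames averaged per output frame.
    Returns an array of shape (T - window + 1, H, W).
    """
    T = frames.shape[0]
    return np.stack([frames[t:t + window].mean(axis=0)
                     for t in range(T - window + 1)])

# Averaging 8 frames reduces uncorrelated pixel noise by roughly 1/sqrt(8),
# which is one way a temporal average can raise the contrast-to-noise ratio.
```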


2021 · Vol 12 (1) · pp. 76
Author(s): Ju-Ho Kim, Hye-Jin Shim, Jee-Weon Jung, Ha-Jin Yu

The majority of recent speaker verification tasks are studied under open-set evaluation scenarios that reflect real-world conditions. The characteristics of these tasks imply that generalization towards unseen speakers is a critical capability. This study therefore aims to improve the generalization of the system in order to enhance speaker verification performance. To achieve this goal, we propose a novel speaker verification system trained with a supervised learning method based on the mean teacher framework. The mean teacher network is a temporal average of the deep neural network parameters, which can produce more accurate and stable representations than the fixed weights obtained at the end of training, and it is conventionally used for semi-supervised learning. Leveraging the success of the mean teacher framework in many studies, the proposed supervised learning method exploits the mean teacher network as an auxiliary model for better training of the main model, the student network. By learning the reliable intermediate representations derived from the mean teacher network as well as the one-hot speaker labels, the student network is encouraged to explore more discriminative embedding spaces. The experimental results demonstrate that the proposed method reduces the equal error rate by a relative 11.61% compared to the baseline system.
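
As a rough illustration of the temporal averaging that the mean teacher framework performs on network parameters, here is a minimal sketch of an exponential moving average (EMA) update; the parameter dictionaries, function name and decay value are assumptions for illustration, not the authors' code.

```python
from typing import Dict
import numpy as np

def ema_update(teacher: Dict[str, np.ndarray],
               student: Dict[str, np.ndarray],
               decay: float = 0.999) -> None:
    """Update teacher parameters as a temporal (exponential moving) average
    of the student parameters, as in the mean teacher framework."""
    for name, w_student in student.items():
        teacher[name] = decay * teacher[name] + (1.0 - decay) * w_student

# Sketch of use: after each optimizer step on the student, call
# ema_update(teacher_params, student_params). The teacher then provides the
# smoothed intermediate representations that the student is trained to match,
# alongside the one-hot speaker labels.
```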


2021 · Vol 28 (3) · pp. 371-378
Author(s): Achim Wirth, Bertrand Chapron

Abstract. Ocean dynamics is predominantly driven by the shear stress between the atmospheric winds and ocean currents. The mechanical power input to the ocean fluctuates in space and time, and the atmospheric wind sometimes decelerates the ocean currents. Building on 24 years of global satellite observations, the input of mechanical power to the ocean is analysed. A fluctuation theorem (FT) holds when the logarithm of the ratio between the occurrence of positive and negative events, of a certain magnitude of the power input, is a linear function of this magnitude and of the averaging period. The flux of mechanical power to the ocean shows evidence of an FT for regions within the recirculation area of the subtropical gyre, but not over the extensions of the western boundary currents. An FT puts a strong constraint on the temporal distribution of fluctuations of the power input, connects variables obtained with different lengths of temporal averaging, guides temporal down- and up-scaling, and constrains the episodes of improbable events.
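
In symbols, the fluctuation theorem described verbally above can be written as follows, where P_tau denotes the power input averaged over a period tau and sigma is an empirical slope; the notation is ours, chosen only to match the statement in the abstract.

```latex
% Fluctuation theorem for the temporally averaged power input P_tau:
% the log-ratio of the probabilities of positive and negative events of
% magnitude p grows linearly with both p and the averaging period tau.
\[
\ln \frac{\Pr\!\left[ P_\tau = +p \right]}{\Pr\!\left[ P_\tau = -p \right]}
\;=\; \sigma \, \tau \, p
\]
```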


Author(s): Seth Avram Schweitzer, Edwin Alfred Cowen

In recent years, field-scale applications of image-based velocimetry methods, often referred to as large-scale particle image velocimetry (LSPIV), have been increasingly deployed. These velocimetry measurements have several advantages: they allow high-resolution, non-contact measurement of the surface velocity over a large two-dimensional area, from which the bulk flow can be inferred. However, visible-light LSPIV methods can have significant limitations. The water surface often lacks natural features that can be tracked in the visible and generally requires seeding with tracer particles, which raises concerns about how faithfully the tracers follow the flow and introduces the challenge of achieving sufficient and uniform seeding density, particularly in regions with appreciable accelerations, such as turbulent flow. In LSPIV, image collection is generally limited to daylight hours and can suffer from non-uniform illumination across the camera's field of view. Because of these issues, LSPIV often requires spatio-temporal averaging; as a result, it can generally extract the mean, but not the instantaneous, velocity field, and it is therefore often not a suitable tool for calculating turbulence metrics of the flow.
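
Why spatio-temporal averaging limits the method to mean-flow estimates can be made explicit with the standard Reynolds decomposition; this is a textbook identity, not something specific to the study above.

```latex
% Decompose the instantaneous surface velocity into a temporal mean and a
% fluctuation; turbulence metrics are built from the fluctuating part,
% which temporal averaging removes.
\[
u(\mathbf{x}, t) = \overline{u}(\mathbf{x}) + u'(\mathbf{x}, t),
\qquad
\overline{u}(\mathbf{x}) = \frac{1}{T}\int_0^{T} u(\mathbf{x}, t)\,\mathrm{d}t,
\]
% e.g. the velocity variance \overline{u'^2} requires the instantaneous field.
```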


2021 · Vol 9 (6) · pp. 664
Author(s): Hui Chen, Shaofeng Li, Jinbao Song, Hailun He

This study aimed to highlight a general lack of clarity regarding the scale of the temporal averaging implicit in Ekman-type models. Under the assumption of time- and depth-dependent eddy viscosity, we present an analytical Fourier-series solution for a wave-modified Ekman model. The depth dependence of the eddy viscosity is based on the K-Profile Parameterization (KPP) scheme. The solution reproduces the major characteristics of diurnal variation in ocean velocity and shear. Results show that the time variability of the eddy viscosity leads to an enhanced mean current near the surface and a decrease in the effective eddy viscosity, which results in intensified near-surface shear and gives rise to a low-level jet. Rectification is dominated by the strength of diurnal mixing and is partly due to the nonlinear depth dependence of the eddy viscosity.
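
For reference, the steady, constant-viscosity limit of an Ekman-type model has the familiar closed-form spiral below; the paper's Fourier-series solution generalizes this to time- and depth-dependent KPP eddy viscosity with wave modification. The symbols (A_z, f, D_E) are the usual ones, not notation taken from the paper.

```latex
% Classical Ekman balance for the complex velocity W = u + iv with constant
% eddy viscosity A_z and Coriolis parameter f (steady, horizontally uniform):
\[
i f W = A_z \frac{\partial^2 W}{\partial z^2},
\qquad
W(z) = W_0 \, e^{(1+i) z / D_E}, \quad z \le 0,
\qquad
D_E = \sqrt{\frac{2 A_z}{f}},
\]
% with W_0 fixed by the surface wind-stress condition
% \rho A_z \, \partial W / \partial z |_{z=0} = \tau^x + i \tau^y .
```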


A slew of motion detection methods have been proposed in recent years. Background subtraction is complicated by constraints such as changes in illumination, shadows, cluttered backgrounds, scene changes, and the differing speeds of hand and body gestures in dance. One of the most basic methods for background subtraction is temporal averaging. In this paper we examine a new adaptive temporal averaging approach, in which an adaptive temporal averaging technique is used to identify moving objects in video sequences. Depending on the speed of motion, it is combined with a Gaussian distribution model, which performs the background subtraction by classifying active pixels as background or foreground. The update rate of the background model is made adaptive and is determined by the pixel difference. Our aim is to improve the method's F-measure by making it more adaptable to various scene scenarios. Experimental results are presented and evaluated, and the quality parameters of the proposed method are compared with those of the original method.
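
A minimal sketch of adaptive temporal-averaging background subtraction, assuming grayscale frames as NumPy arrays; the learning-rate rule and threshold below are illustrative stand-ins for the paper's adaptive update, not its actual parameters.

```python
import numpy as np

def update_background(background: np.ndarray,
                      frame: np.ndarray,
                      base_alpha: float = 0.05,
                      fg_threshold: float = 25.0):
    """One step of adaptive temporal-averaging background subtraction.

    background : running temporal average of past frames (float image).
    frame      : current grayscale frame (float image, same shape).
    Returns (foreground_mask, updated_background).
    """
    diff = np.abs(frame - background)
    foreground = diff > fg_threshold

    # Adaptive update rate: pixels that look like background are blended in
    # faster than pixels flagged as foreground (illustrative rule; the paper
    # derives its rate from the pixel difference).
    alpha = np.where(foreground, base_alpha * 0.1, base_alpha)
    background = (1.0 - alpha) * background + alpha * frame
    return foreground, background
```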


Author(s): Noah Bolohan, Victor LeBlanc, Frithjof Lutscher

In ecological communities, the behaviour of individuals and the interaction between species may change between seasons, yet this seasonal variation is often not represented explicitly in mathematical models. As global change is predicted to alter season length and other climatic aspects, such seasonal variation needs to be included in models in order to make reasonable predictions for community dynamics. The resulting mathematical descriptions are nonautonomous models with a large number of parameters, and are therefore challenging to analyze. We present a model for two predators and one prey, whereby one predator switches hunting behaviour to seasonally include alternative prey when available. We use a combination of temporal averaging and invasion analysis to derive simplified models and determine the behaviour of the system, in particular to gain insight into conditions under which the two predators can coexist in a changing climate. We compare our results with numerical simulations of the temporally varying model.
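
The temporal-averaging step can be illustrated on a single seasonally switching interaction rate; the symbols below are generic placeholders, not the model's actual parameters.

```latex
% If a predator's attack rate on the alternative prey switches with season,
%   a(t) = a_w during the winter fraction rho of the year,
%   a(t) = a_s during the summer fraction 1 - rho,
% the averaged (autonomous) approximation replaces a(t) by its seasonal mean
\[
\bar{a} \;=\; \frac{1}{T}\int_0^{T} a(t)\,\mathrm{d}t
\;=\; \rho\, a_w + (1-\rho)\, a_s .
\]
```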


2021
Author(s): Jane Yook, Lysha Lee, Simone Vossel, Ralph Weidner, Hinze Hogendoorn

In the flash-lag effect (FLE), a flash in spatiotemporal alignment with a moving object is often misperceived as lagging behind the moving object. One proposed explanation for the illusion is based on predictive motion extrapolation of trajectories. In this interpretation, observers require an estimate of the object's velocity to anticipate future positions, implying that the FLE depends on a neural representation of perceived velocity. By contrast, alternative models of the FLE based on differential latencies or temporal averaging should not rely on such a representation of velocity. Here, we test the extrapolation account by investigating whether the FLE is sensitive to illusory changes in perceived speed while physical speed is held constant. This was tested using rotational wedge stimuli with variable noise texture (Experiment 1) and luminance contrast (Experiment 2). We show that for both manipulations, differences in perceived speed corresponded to differences in the FLE: dynamic (versus static) noise and low-contrast (versus high-contrast) stimuli led to increases in perceived speed and in FLE magnitude. These effects were consistent across different textures and were not due to low-level factors. Our results support the idea that the FLE depends on a neural representation of velocity, which is consistent with mechanisms of motion extrapolation. Hence, the faster the perceived speed, the larger the extrapolation and the stronger the flash-lag effect.


2021 · Vol 13 (3) · pp. 359
Author(s): Maria Gavrouzou, Nikolaos Hatzianastassiou, Antonis Gkikas, Marios-Bruno Korras-Carraca, Nikolaos Mihalopoulos

A satellite-based algorithm is developed and used to determine the presence of dust aerosols on a global scale. The algorithm takes as input aerosol optical properties from the MOderate Resolution Imaging Spectroradiometer (MODIS)-Aqua Collection 6.1 and Ozone Monitoring Instrument (OMI)-Aura version v003 (OMAER-UV) datasets and identifies the existence of dust aerosols in the atmosphere by applying to these optical properties specific thresholds that ensure the coarse size and the absorptivity of dust aerosols. The utilized aerosol optical properties are the multiwavelength aerosol optical depth (AOD), the Aerosol Absorption Index (AI) and the Ångström Exponent (a). The algorithm operates on a daily basis at 1° × 1° latitude-longitude spatial resolution for the period 2005–2019 and computes the absolute and relative frequency of dust occurrence. The monthly and annual mean frequencies are calculated at the pixel level for each year of the study period, enabling the study of the seasonal as well as the inter-annual variation of dust aerosol occurrence across the globe. Temporal averaging is also applied to the annual values in order to estimate 15-year climatological mean values. In addition to the temporal averaging, spatial averaging is applied for the entire globe as well as for specific regions of interest, namely the great global deserts and areas of desert dust export. According to the algorithm results, the highest frequencies of dust occurrence (up to 160 days/year) are observed primarily over the western part of North Africa (Sahara) and the broader area of Bodélé, and secondarily over the Asian Taklamakan desert (140 days/year). For most of the study regions, the maximum frequencies appear in boreal spring and/or summer and the minimum ones in winter or autumn. A clear seasonality of global dust is revealed, with the lowest frequencies in November–December and the highest in June. Finally, an increasing trend of the global frequency of dust occurrence from 2005 to 2019, equal to 56.2%, is also found. Such an increasing trend is observed over all study regions except for the North Middle East, where a slight decreasing trend (−2.4%) is found.
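
A schematic of the daily flagging and frequency bookkeeping described above, with hypothetical threshold values and variable names; the paper's actual thresholds on AOD, AI and the Ångström exponent are not stated in the abstract.

```python
import numpy as np

def flag_dust(aod: np.ndarray, ai: np.ndarray, angstrom: np.ndarray,
              aod_min: float = 0.2, ai_min: float = 1.0,
              angstrom_max: float = 0.7) -> np.ndarray:
    """Flag 1° x 1° cells as dusty for one day.

    The thresholds are hypothetical placeholders: a minimum AOD (enough
    aerosol load), a minimum absorbing aerosol index (absorptivity), and a
    maximum Angstrom exponent (coarse particles).
    """
    return (aod >= aod_min) & (ai >= ai_min) & (angstrom <= angstrom_max)

def dust_frequency(daily_flags: np.ndarray) -> np.ndarray:
    """Absolute frequency of dust occurrence per cell (days/year) from a
    (days, lat, lon) boolean stack; annual maps can then be averaged over
    2005-2019 for the climatological mean."""
    return daily_flags.sum(axis=0)
```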


2020
Author(s): Nick Schutgens, Oleg Dubovik, Otto Hasekamp, Omar Torres, Hiren Jethva, ...

Abstract. Global measurements of absorptive aerosol optical depth (AAOD) are scarce and mostly provided by the ground-based network AERONET (AErosol RObotic NETwork). In recent years, several satellite products of AAOD have appeared. This study's primary aim is to establish the usefulness of these datasets for AEROCOM (AEROsol Comparisons between Observations and Models) model evaluation, with a focus on the years 2006, 2008 and 2010. The satellite products are super-observations consisting of 1° × 1° × 30 min aggregated retrievals. The study consists of two parts: 1) an assessment of the satellite datasets; 2) their application to the evaluation of AEROCOM models. The current paper describes the first part and details an evaluation against the sparse AERONET network as well as a global intercomparison of the satellite datasets, with a focus on how minimum AOD (Aerosol Optical Depth) thresholds and temporal averaging may improve agreement. All satellite datasets are shown to have reasonable skill for AAOD (3 out of 4 datasets show correlations with AERONET in excess of 0.6) but less skill for SSA (Single Scattering Albedo; only 1 out of 4 datasets shows correlations with AERONET in excess of 0.6). In comparison, satellite AOD shows correlations from 0.72 to 0.88 against the same AERONET dataset. We show that both the performance against AERONET and the agreement between satellite datasets for SSA improve significantly at higher AOD. Temporal averaging also improves the agreement between satellite datasets. Nevertheless, multi-annual averages still show systematic differences, even at high AOD. In particular, we show that the two POLDER products have an apparent systematic SSA difference over land of about 0.04, independent of AOD. Identifying the cause of this bias offers the possibility of substantially improving current datasets. We also provide evidence suggesting that evaluation against AERONET observations leads to an underestimate of the true biases in satellite SSA. In the second part of this study we show that, notwithstanding these biases in satellite AAOD and SSA, the datasets allow a meaningful evaluation of AEROCOM models.
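
A sketch of the kind of screening described above when comparing super-observations with AERONET: keep only collocated samples above a minimum AOD and average them in time before computing statistics. The arrays, block length and threshold are assumptions for illustration, not the study's actual processing.

```python
import numpy as np

def screened_correlation(sat_ssa: np.ndarray, aeronet_ssa: np.ndarray,
                         aod: np.ndarray, aod_min: float = 0.3,
                         block: int = 30) -> float:
    """Correlate satellite and AERONET SSA after (1) dropping collocated
    samples with AOD below aod_min and (2) averaging the remaining samples
    in non-overlapping temporal blocks of `block` samples."""
    keep = aod >= aod_min
    s, a = sat_ssa[keep], aeronet_ssa[keep]
    n = (len(s) // block) * block  # trim to a whole number of blocks
    s = s[:n].reshape(-1, block).mean(axis=1)
    a = a[:n].reshape(-1, block).mean(axis=1)
    return float(np.corrcoef(s, a)[0, 1])
```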

