Correlation Plenoptic Imaging: An Overview

2018 · Vol 8 (10) · pp. 1958
Author(s):  
Francesco Di Lena ◽  
Francesco Pepe ◽  
Augusto Garuccio ◽  
Milena D’Angelo

Plenoptic imaging (PI) enables refocusing, depth-of-field (DOF) extension and 3D visualization, thanks to its ability to reconstruct the path of light rays from the lens to the image. However, in state-of-the-art plenoptic devices, these advantages come at the expense of image resolution, which always remains well above the diffraction limit defined by the lens numerical aperture (NA). To overcome this limitation, we have proposed exploiting the spatio-temporal correlations of light, modifying the ghost imaging scheme to endow it with plenoptic properties. This approach, named Correlation Plenoptic Imaging (CPI), pushes both resolution and DOF to the fundamental limit imposed by wave optics. In this paper, we review methods to perform CPI both with chaotic light and with entangled photon pairs, and present both simulations and a proof-of-principle experimental demonstration of CPI.
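For chaotic light, the intensity correlations underlying CPI can be summarized by the standard Siegert relation; the notation below (sensor coordinates $\boldsymbol{\rho}_a$, $\boldsymbol{\rho}_b$ and mutual coherence $\Gamma$) is illustrative and is not taken from the paper itself:

```latex
% Second-order intensity correlation of chaotic light (Siegert relation);
% the plenoptic information exploited by CPI sits in the |\Gamma|^2 term.
G^{(2)}(\boldsymbol{\rho}_a, \boldsymbol{\rho}_b)
  = \langle I_a(\boldsymbol{\rho}_a)\, I_b(\boldsymbol{\rho}_b) \rangle
  = \langle I_a(\boldsymbol{\rho}_a) \rangle\, \langle I_b(\boldsymbol{\rho}_b) \rangle
    + \bigl| \Gamma(\boldsymbol{\rho}_a, \boldsymbol{\rho}_b) \bigr|^2
```

The first term is a featureless product of mean intensities; the spatial and directional information accessed by correlation measurements resides entirely in the $|\Gamma|^2$ term.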

Author(s):  
Ali Zonoozi ◽  
Jung-jae Kim ◽  
Xiao-Li Li ◽  
Gao Cong

Time-series forecasting in geo-spatial domains has important applications, including urban planning, traffic management and behavioral analysis. We observed recurring periodic patterns in some spatio-temporal data, which were not considered explicitly by previous non-linear works. To address this gap, we propose a novel Periodic-CRN (PCRN) method, which adapts a convolutional recurrent network (CRN) to accurately capture spatial and temporal correlations, learns and incorporates explicit periodic representations, and can be optimized with multi-step-ahead prediction. We show that PCRN consistently outperforms state-of-the-art methods for crowd density prediction across two taxi datasets from Beijing and Singapore.
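As a rough illustration of the idea behind an explicit periodic representation (not the authors' convolutional recurrent architecture), one can maintain a per-slot periodic dictionary updated online and blend it with the latest observation; the period, grid size, gate value, and synthetic data below are all assumptions made for the sketch:

```python
import numpy as np

# Toy sketch: a per-slot "periodic dictionary" over a daily cycle, updated
# with an exponential moving average (EMA), blended with the latest frame
# when forecasting. All sizes and constants here are illustrative.
P = 24          # assumed daily period (24 time slots)
H, W = 4, 4     # toy spatial grid of crowd densities

rng = np.random.default_rng(0)
t_axis = np.arange(P * 30)                       # 30 "days" of history
base = np.sin(2 * np.pi * t_axis / P)            # daily periodic signal
series = base[:, None, None] + 0.1 * rng.standard_normal((len(t_axis), H, W))

periodic = np.zeros((P, H, W))                   # one learned map per slot
alpha = 0.1                                      # EMA update rate
for t, frame in enumerate(series):
    slot = t % P
    periodic[slot] = (1 - alpha) * periodic[slot] + alpha * frame

def forecast(t_next, last_frame, gate=0.8):
    """Blend the learned periodic map with the most recent observation."""
    return gate * periodic[t_next % P] + (1 - gate) * last_frame

pred = forecast(len(t_axis), series[-1])
truth = np.sin(2 * np.pi * len(t_axis) / P)      # noise-free next value
err = np.abs(pred - truth).mean()
```

In PCRN itself the periodic representations are learned jointly with the recurrent network rather than via a fixed EMA; the sketch only shows why an explicit per-slot memory captures recurring patterns that a purely recurrent state can forget.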


2018 · Vol 14 (12) · pp. 1915-1960
Author(s):  
Rudolf Brázdil ◽  
Andrea Kiss ◽  
Jürg Luterbacher ◽  
David J. Nash ◽  
Ladislava Řezníčková

Abstract. The use of documentary evidence to investigate past climatic trends and events has become a recognised approach in recent decades. This contribution presents the state of the art in its application to droughts. The range of documentary evidence is very wide, including general annals, chronicles, memoirs and diaries kept by missionaries, travellers and those specifically interested in the weather; records kept by administrators tasked with keeping accounts and other financial and economic records; legal-administrative evidence; religious sources; letters; songs; newspapers and journals; pictographic evidence; chronograms; epigraphic evidence; early instrumental observations; society commentaries; and compilations and books. These are available from many parts of the world. This variety of documentary information is evaluated with respect to the reconstruction of hydroclimatic conditions (precipitation, drought frequency and drought indices). Documentary-based drought reconstructions are then addressed in terms of long-term spatio-temporal fluctuations, major drought events, relationships with external forcing and large-scale climate drivers, socio-economic impacts and human responses. Documentary-based drought series are also considered from the viewpoint of spatio-temporal variability for certain continents, and their employment together with hydroclimate reconstructions from other proxies (in particular tree rings) is discussed. Finally, conclusions are drawn, and challenges for the future use of documentary evidence in the study of droughts are presented.


2021 · Vol 15 (6) · pp. 1-21
Author(s):  
Huandong Wang ◽  
Yong Li ◽  
Mu Du ◽  
Zhenhui Li ◽  
Depeng Jin

Both app developers and service providers have strong motivations to understand when and where certain apps are used. However, this has been a challenging problem due to the highly skewed and noisy nature of app usage data. Moreover, existing studies regard apps as independent items and thus fail to capture the hidden semantics in app usage traces. In this article, we propose App2Vec, a powerful representation learning model that learns semantic embeddings of apps with consideration of spatio-temporal context. Based on the obtained semantic embeddings, we develop a probabilistic model, built on a Bayesian mixture model and the Dirichlet process, to capture when, where, and what semantics of apps are used and to predict future usage. We evaluate our model on two different app usage datasets involving over 1.7 million users and 2,000+ apps. Evaluation results show that our proposed App2Vec algorithm outperforms state-of-the-art algorithms in app usage prediction with a performance gap of over 17.0%.
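As a hedged sketch of the general idea (the session gap, window size, and trace below are invented for illustration and are not the authors' pipeline), skip-gram-style training pairs can be extracted from a timestamped usage trace before any embedding is learned:

```python
# Illustrative only: building (target, context) app pairs from a usage trace,
# the typical input to a skip-gram embedding model. The trace, session gap,
# and window size are made-up examples.
trace = [
    # (timestamp_minutes, location_cell, app)
    (0,  "cell_A", "maps"),
    (2,  "cell_A", "taxi"),
    (5,  "cell_A", "payment"),
    (60, "cell_B", "news"),
    (62, "cell_B", "social"),
]

def sessions(trace, gap=30):
    """Split a trace into sessions wherever the time gap exceeds `gap` minutes."""
    out, cur = [], [trace[0]]
    for prev, rec in zip(trace, trace[1:]):
        if rec[0] - prev[0] > gap:
            out.append(cur)
            cur = []
        cur.append(rec)
    out.append(cur)
    return out

def skipgram_pairs(session, window=2):
    """Yield (target_app, context_app) pairs within a sliding window."""
    apps = [app for _, _, app in session]
    for i, target in enumerate(apps):
        for j in range(max(0, i - window), min(len(apps), i + window + 1)):
            if j != i:
                yield target, apps[j]

pairs = [p for s in sessions(trace) for p in skipgram_pairs(s)]
```

In App2Vec proper, spatio-temporal context informs the embedding itself; here location and timestamps are used only to delimit sessions, which is a simplification.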


Author(s):  
Roman Rotermund ◽  
Jan Regelsberger ◽  
Katharina Osterhage ◽  
Jens Aberle ◽  
Jörg Flitsch

Abstract Background In previous reports on experiences with an exoscope, this new technology was not found to be applicable to transsphenoidal pituitary surgery. As a specialized center for pituitary surgery, we used a 4K 3D video microscope (Orbeye, Olympus) to evaluate the system for transsphenoidal pituitary surgery in comparison to conventional microscopy. Method We report on 296 cases performed with the Orbeye at a single institution. An observational study was conducted with standardized subjective evaluation by the surgeons after each procedure. An objective measurement was added to compare the exoscopic and microscopic methods, involving surgery time and the initial postoperative remission rate in matched cohorts. Results The patients presented with a wide range of pathologies. No serious events or minor complications occurred that were attributable to the use of the 4K 3D exoscope. There was no need to switch back to the microscope in any of the cases. Compared to our microsurgically operated collective, there was no significant difference regarding duration of surgery, complications, or extent of resection. The surgeons rated the Orbeye as beneficial with regard to instrument size, positioning, surgeon’s ergonomics, learning curve, image resolution, and high magnification. Conclusions The Orbeye exoscope offers optical and digital zoom options as well as 4K image resolution and 3D visualization, resulting in better depth perception and flexibility in comparison to the microscope. Split-screen mode offers the complementary benefit of the endoscope, which may expand the lateral view but has to be evaluated in comparison to endoscopic transsphenoidal procedures as a next step.


2021
Author(s):  
Lonni Besançon ◽  
Anders Ynnerman ◽  
Daniel F. Keefe ◽  
Lingyun Yu ◽  
Tobias Isenberg

2015 · Vol 778 · pp. 216-252
Author(s):  
C. D. Pokora ◽  
J. J. McGuirk

Stereoscopic three-component particle image velocimetry (3C-PIV) measurements have been made in a turbulent round jet to investigate the spatio-temporal correlations that are the origin of aerodynamic noise. Restricting attention to subsonic, isothermal jets, measurements were taken in a water flow experiment where, for the same Reynolds number and nozzle size, the shortest time scale of the dynamically important turbulent structures is more than an order of magnitude greater than in equivalent airflow experiments, greatly facilitating time-resolved PIV measurements. Results obtained (for a jet nozzle diameter and velocity of 40 mm and $1~\text{m}~\text{s}^{-1}$, giving $\mathit{Re}=4\times 10^{4}$) show that, on the basis of both single-point statistics and two-point quantities (correlation functions, integral length scales), the present incompressible flow data are in excellent agreement with published compressible, subsonic airflow measurements. The 3C-PIV data are first compared to higher-spatial-resolution 2C-PIV data and observed to be in good agreement, although some deterioration in quality for higher-order correlations caused by high-frequency noise in the 3C-PIV data is noted. A filter method to correct for this is proposed, based on proper orthogonal decomposition (POD) of the 3C-PIV data. The corrected data are then used to construct correlation maps at the second- and fourth-order level for all velocity components. The present data are in accordance with existing hot-wire measurements, but provide significantly more detailed information on correlation components than has previously been available. The measured relative magnitudes of various components of the two-point fourth-order turbulence correlation coefficient ($R_{ij,kl}$) – the fundamental building block for free shear flow aerodynamic noise sources – are presented and represent a valuable source of validation data for acoustic source modelling.
The relationship between fourth-order and second-order velocity correlations is also examined, based on the assumption of a quasi-Gaussian (nearly normal) p.d.f. for the velocity fluctuations. The present results indicate that this approximation shows reasonable agreement for the measured relative magnitudes of several correlation components; however, areas of discrepancy are identified, indicating the need for work on alternative models such as the shell turbulence concept of Afsar (Eur. J. Mech. (B/Fluids), vol. 31, 2012, pp. 129–139).
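The quasi-Gaussian approximation referenced above is the Isserlis (Wick) factorisation of fourth-order moments of jointly Gaussian variables, which can be checked numerically; the covariance matrix below is an arbitrary stand-in, not two-point statistics from the experiment:

```python
import numpy as np

# Numerical check of the quasi-Gaussian (Isserlis/Wick) factorisation that
# relates fourth-order to second-order correlations of zero-mean Gaussians:
#   <u_i u_j u_k u_l> = <u_i u_j><u_k u_l> + <u_i u_k><u_j u_l> + <u_i u_l><u_j u_k>
rng = np.random.default_rng(1)
cov = np.array([
    [1.0, 0.4, 0.2, 0.1],
    [0.4, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.4],
    [0.1, 0.2, 0.4, 1.0],
])  # stand-in for two-point velocity covariances (positive definite)
u = rng.multivariate_normal(np.zeros(4), cov, size=1_000_000)

# Sampled fourth-order moment <u_0 u_1 u_2 u_3> ...
fourth = (u[:, 0] * u[:, 1] * u[:, 2] * u[:, 3]).mean()

# ... versus the quasi-Gaussian prediction from the exact second-order moments.
prediction = cov[0, 1] * cov[2, 3] + cov[0, 2] * cov[1, 3] + cov[0, 3] * cov[1, 2]
```

For truly Gaussian fluctuations the two values agree to sampling error; the paper's point is precisely that measured turbulence departs from this factorisation in identifiable components.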


2018 · Vol 4 (9) · pp. 107
Author(s):  
Mohib Ullah ◽  
Ahmed Mohammed ◽  
Faouzi Alaya Cheikh

Articulation modeling, feature extraction, and classification are the important components of pedestrian segmentation. Usually, these components are modeled independently of one another and then combined in a sequential way. However, this approach is prone to poor segmentation if any individual component is weakly designed. To cope with this problem, we propose a spatio-temporal convolutional neural network named PedNet, which exploits temporal information for spatial segmentation. The backbone of PedNet consists of an encoder–decoder network for downsampling and upsampling the feature maps, respectively. The input to the network is a set of three frames and the output is a binary mask of the segmented regions in the middle frame. Unlike classical deep models, where the convolution layers are followed by a fully connected layer for classification, PedNet is a Fully Convolutional Network (FCN). It is trained end-to-end, and segmentation is achieved without the need for any pre- or post-processing. The main characteristic of PedNet is its unique design: it performs segmentation on a frame-by-frame basis but uses temporal information from the previous and the future frame to segment the pedestrian in the current frame. Moreover, to combine the low-level features with the high-level semantic information learned by the deeper layers, we use long-skip connections from the encoder to the decoder and concatenate the output of the low-level layers with that of the higher-level layers. This helps to obtain segmentation maps with sharp boundaries. To show the potential benefits of temporal information, we also visualized different layers of the network. The visualization showed that the network learned different information from the consecutive frames and then combined the information optimally to segment the middle frame.
We evaluated our approach on eight challenging datasets where humans are involved in different activities with severe articulation (football, road crossing, surveillance). On the widely used CamVid dataset, segmentation performance is compared against seven state-of-the-art methods, reported as precision/recall, F1, F2, and mIoU. The qualitative and quantitative results show that PedNet achieves promising results against state-of-the-art methods, with substantial improvement in terms of all the performance metrics.
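A shape-level sketch of the three-frame input and the long-skip concatenation may help make the architecture concrete; the pooling/upsampling stand-ins and sizes below are illustrative assumptions, not PedNet's learned layers:

```python
import numpy as np

# Shape-level sketch of PedNet-style frame handling and long-skip connections
# (illustrative only; the real network uses learned convolutions). Three
# consecutive frames are stacked as input channels, downsampled in an
# "encoder", upsampled in a "decoder", and low-level encoder features are
# concatenated onto the decoder features before predicting a binary mask
# for the middle frame.
H, W = 64, 64
frames = np.random.rand(3, H, W)                  # previous, current, next frame
x = frames.reshape(1, 3, H, W)                    # batch of 1, 3 input channels

def pool2(x):
    """2x2 average pooling (stand-in for a strided-conv encoder stage)."""
    b, c, h, w = x.shape
    return x.reshape(b, c, h // 2, 2, w // 2, 2).mean(axis=(3, 5))

def up2(x):
    """Nearest-neighbour 2x upsampling (stand-in for a decoder stage)."""
    return x.repeat(2, axis=2).repeat(2, axis=3)

enc1 = pool2(x)                                   # low-level features, H/2 x W/2
enc2 = pool2(enc1)                                # high-level features, H/4 x W/4
dec1 = up2(enc2)                                  # back to H/2 x W/2
skip = np.concatenate([dec1, enc1], axis=1)       # long-skip concatenation
dec0 = up2(skip)                                  # back to H x W
mask = dec0.mean(axis=1, keepdims=True) > 0.5     # toy binary mask, middle frame
```

The concatenation doubles the channel count at the skip point, which is why such long-skip designs recover sharp boundaries: the decoder sees both the coarse semantics and the fine spatial detail.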

