Issues about combination of temporal information and spatial information

Author(s):  
Young-Seob Jeong ◽  
Bogyum Kim ◽  
Ho-Jin Choi ◽  
Jae Sung Lee


2021 ◽
Vol 10 (3) ◽  
pp. 166
Author(s):  
Hartmut Müller ◽  
Marije Louwsma

The Covid-19 pandemic put a heavy burden on member states in the European Union. To govern the pandemic, having access to reliable geo-information is key for monitoring the spatial distribution of the outbreak over time. This study aims to analyze the role of spatio-temporal information in governing the pandemic in the European Union and its member states. The European Nomenclature of Territorial Units for Statistics (NUTS) system and selected national dashboards from member states were assessed to analyze which spatio-temporal information was used, how the information was visualized and whether this changed over the course of the pandemic. Initially, member states focused on their own jurisdiction by creating national dashboards to monitor the pandemic. Information between member states was not aligned. Producing reliable data and reporting it in a timely manner proved problematic, as did selecting indicators to monitor the spatial distribution and intensity of the outbreak. Over the course of the pandemic, with more knowledge about the virus and its characteristics, interventions of member states to govern the outbreak became better aligned at the European level. However, further integration and alignment of public health data, statistical data and spatio-temporal data could provide even better information for governments and actors involved in managing the outbreak, at both the national and supra-national levels. The Infrastructure for Spatial Information in Europe (INSPIRE) initiative and the NUTS system provide a framework to guide future integration and extension of existing systems.


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Avner Wallach ◽  
Erik Harvey-Girard ◽  
James Jaeyoon Jun ◽  
André Longtin ◽  
Len Maler

Learning the spatial organization of the environment is essential for most animals’ survival. This requires the animal to derive allocentric spatial information from egocentric sensory and motor experience. The neural mechanisms underlying this transformation are mostly unknown. We addressed this problem in electric fish, which can precisely navigate in complete darkness and whose brain circuitry is relatively simple. We conducted the first neural recordings in the preglomerular complex, the thalamic region exclusively connecting the optic tectum with the spatial learning circuits in the dorsolateral pallium. While tectal topographic information was mostly eliminated in preglomerular neurons, the time-intervals between object encounters were precisely encoded. We show that this reliable temporal information, combined with a speed signal, can permit accurate estimation of the distance between encounters, a necessary component of path-integration that enables computing allocentric spatial relations. Our results suggest that similar mechanisms are involved in sequential spatial learning in all vertebrates.
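
The distance estimate described here reduces to multiplying a speed signal by the encoded time interval between object encounters. A minimal sketch of that arithmetic, with hypothetical values; the function name and units are illustrative, not drawn from the paper:

```python
# Minimal sketch of the distance computation described above: if the brain
# encodes the time interval between two object encounters and has access to
# a speed signal, distance follows as speed x interval. Values are hypothetical.

def distance_between_encounters(interval_s: float, speed_m_per_s: float) -> float:
    """Estimate travelled distance from an inter-encounter time interval."""
    return interval_s * speed_m_per_s

# Example: a fish swimming at 0.12 m/s with 2.5 s between encounters
# has covered roughly 0.3 m between the two objects.
print(distance_between_encounters(2.5, 0.12))  # 0.3
```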


2020 ◽  
Vol 39 (3) ◽  
pp. 3769-3781
Author(s):  
Zhisong Han ◽  
Yaling Liang ◽  
Zengqun Chen ◽  
Zhiheng Zhou

Video-based person re-identification aims to match videos of pedestrians captured by non-overlapping cameras. Video provides both spatial information and temporal information. However, most existing methods do not combine these two types of information well and ignore that their relative importance differs from case to case. To address these issues, we propose a two-stream network with a joint distance metric for measuring the similarity of two videos. The proposed two-stream network has several appealing properties. First, the spatial stream focuses on multiple parts of a person and outputs robust local spatial features. Second, a lightweight and effective temporal information extraction block is introduced in video-based person re-identification. In the inference stage, the distance between two videos is measured by the weighted sum of spatial distance and temporal distance. We conduct extensive experiments on four public datasets, i.e., MARS, PRID2011, iLIDS-VID and DukeMTMC-VideoReID, to show that our proposed approach outperforms existing methods in video-based person re-ID.
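
The joint metric amounts to a weighted sum of a spatial and a temporal distance between two videos. A minimal sketch, assuming Euclidean distances over feature vectors and an arbitrary weight of 0.7; neither choice is specified by the abstract:

```python
import numpy as np

def joint_distance(spatial_a, spatial_b, temporal_a, temporal_b, w_spatial=0.7):
    """Weighted sum of spatial and temporal distances between two videos.

    Each argument is a feature vector; Euclidean distance and the 0.7 weight
    are placeholder choices, not values from the paper.
    """
    d_spatial = np.linalg.norm(spatial_a - spatial_b)
    d_temporal = np.linalg.norm(temporal_a - temporal_b)
    return w_spatial * d_spatial + (1.0 - w_spatial) * d_temporal

# Example with random 256-dim features for two videos:
rng = np.random.default_rng(0)
a_s, b_s, a_t, b_t = (rng.standard_normal(256) for _ in range(4))
print(joint_distance(a_s, b_s, a_t, b_t))
```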


2021 ◽  
Author(s):  
Somang Nam

Vibrotactile stimulation can be used as a substitute for audio or visual stimulation for people who are deaf or blind. In order to do this, new tools must be developed and evaluated that support the creation and experience of vibration on the skin. In this paper, a vibrotactile composition tool, the “Beadbox”, is described along with the results of a user study. Beadbox is a vibrotactile notation system and software tool that helps users to create and record a vibrotactile art composition. It allows users to control the four essential vibrotactile technical elements: (1) frequency; (2) intensity; (3) temporal information; and (4) spatial information, i.e., how the vibrotactile signal is distributed across the human body. A user study was conducted to evaluate the usability of Beadbox and its support for creative expression. Results from the user study indicate that Beadbox is easy to use and viable for vibrotactile composition.
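
One way to picture these four elements is as fields of a single composition event. The sketch below is purely illustrative; the class and field names are hypothetical and do not reflect Beadbox's actual notation format:

```python
from dataclasses import dataclass

@dataclass
class Bead:
    frequency_hz: float  # (1) frequency of the vibration
    intensity: float     # (2) intensity, here normalized to [0, 1]
    onset_s: float       # (3) temporal information: when the bead starts...
    duration_s: float    #     ...and how long it lasts
    actuator: int        # (4) spatial information: which body-site actuator plays it

# A two-bead composition: a short high-frequency pulse on actuator 0,
# followed by a longer low-frequency rumble on actuator 3.
composition = [
    Bead(frequency_hz=250.0, intensity=0.8, onset_s=0.0, duration_s=0.5, actuator=0),
    Bead(frequency_hz=80.0, intensity=0.4, onset_s=0.5, duration_s=1.5, actuator=3),
]
```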


2021 ◽  
pp. 1-19
Author(s):  
Wouter Kruijne ◽  
Christian N. L. Olivers ◽  
Hedderik van Rijn

Different theories have been proposed to explain how the human brain derives an accurate sense of time. One specific class of theories, intrinsic clock theories, postulates that temporal information of a stimulus is represented much like other features such as color and location, bound together to form a coherent percept. Here, we explored to what extent this holds for temporal information after it has been perceived and is held in working memory for subsequent comparison. We recorded EEG of participants who were asked to time stimuli at lateral positions of the screen followed by comparison stimuli presented in the center. Using well-established markers of working memory maintenance, we investigated whether the usage of temporal information evoked neural signatures that were indicative of the location where the stimuli had been presented, both during maintenance and during comparison. Behavior and neural measures including the contralateral delay activity, lateralized alpha suppression, and decoding analyses through time all supported the same conclusion: The representation of location was strongly involved during perception of temporal information, but when temporal information was to be used for comparison, it no longer showed a relation to spatial information. These results support a model where the initial perception of a stimulus involves intrinsic computations, but that this information is subsequently translated to a stimulus-independent format to be used to further guide behavior.
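
The time-resolved decoding logic referred to here can be illustrated generically: train a classifier on the scalp pattern at each time point and test whether stimulus location can be read out. A minimal sketch on synthetic data, not the authors' pipeline; array shapes and labels are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic epoched EEG: (trials, channels, time points), with a left/right
# stimulus-location label per trial. Real data would come from preprocessing.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((200, 64, 150))
side = rng.integers(0, 2, size=200)

# Decode location separately at each time point: above-chance accuracy at
# time t means the scalp pattern still carries spatial information then.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=500), eeg[:, :, t], side, cv=5).mean()
    for t in range(eeg.shape[2])
])
print(accuracy.mean())  # ~0.5 here, since the synthetic data carry no signal
```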


2020 ◽  
Vol 10 (15) ◽  
pp. 5319
Author(s):  
Md Anwarul Islam ◽  
Md Azher Uddin ◽  
Young-Koo Lee

In the era of digital devices and the Internet, thousands of videos are taken and shared through the Internet. Similarly, CCTV cameras in the digital city produce a large amount of video data that carry essential information. To handle the increased video data and generate knowledge, there is an increasing demand for distributed video annotation. Therefore, in this paper, we propose a novel distributed video annotation platform that explores both spatial and temporal information and then provides higher-level semantic information. The proposed framework is divided into two parts: spatial annotation and spatiotemporal annotation. To this end, we propose a spatiotemporal descriptor, namely volume local directional ternary pattern-three orthogonal planes (VLDTP–TOP), implemented in a distributed manner using Spark. Moreover, we developed several state-of-the-art appearance-based and spatiotemporal-based feature descriptors on top of Spark. We also provide distributed video annotation services for end-users so that they can easily use the video annotation and APIs to develop new video annotation algorithms. Due to the lack of a spatiotemporal video annotation dataset that provides ground truth for both spatial and temporal information, we introduce a video annotation dataset, namely STAD, which provides ground truth for spatial and temporal information. An extensive experimental analysis was performed to validate the performance and scalability of the proposed feature descriptors, demonstrating the effectiveness of our proposed approach.
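
The distribution pattern described here can be sketched as mapping a descriptor function over video segments in parallel. A minimal PySpark sketch; the descriptor body, feature dimensionality, and segment format are placeholders, not the paper's VLDTP-TOP implementation:

```python
from pyspark import SparkContext

def extract_descriptor(segment):
    """Placeholder for a spatiotemporal descriptor such as VLDTP-TOP."""
    clip_id, frames = segment
    # A real implementation would compute directional ternary patterns over
    # the XY, XT and YT planes of the frame volume here.
    return clip_id, [0.0] * 128  # hypothetical 128-dim feature vector

sc = SparkContext(appName="video-annotation-sketch")
segments = [("clip_%03d" % i, None) for i in range(100)]  # None stands in for frame data
features = dict(sc.parallelize(segments).map(extract_descriptor).collect())
sc.stop()
```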


Author(s):  
Wynne Hsu ◽  
Mong Li Lee ◽  
Junmei Wang

Association rule mining in spatial databases and temporal databases has been studied extensively in data mining research. Most studies have found interesting patterns in either spatial information or temporal information; however, few have handled both efficiently. Meanwhile, developments in spatio-temporal databases and spatio-temporal applications have prompted data analysts to turn their focus to spatio-temporal patterns that exploit both spatial and temporal information.


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 410 ◽  
Author(s):  
Dat Nguyen ◽  
Tuyen Pham ◽  
Min Lee ◽  
Kang Park

Face-based biometric recognition systems that can recognize human faces are widely employed in places such as airports, immigration offices, and companies, as well as in applications such as mobile phones. However, the security of this recognition method can be compromised by attackers (unauthorized persons), who might bypass the recognition system using artificial facial images. In addition, most previous studies on face presentation attack detection have utilized only spatial information. To address this problem, we propose a visible-light camera sensor-based presentation attack detection method that is based on both spatial and temporal information, using the deep features extracted by a stacked convolutional neural network (CNN)-recurrent neural network (RNN) along with handcrafted features. Through experiments on two public datasets, we demonstrate that the temporal information is sufficient for detecting attacks using face images. In addition, it is established that the handcrafted image features efficiently enhance the detection performance of the deep features, and that the proposed method outperforms previous methods.
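
The stacked CNN-RNN idea can be illustrated generically: a small per-frame CNN produces deep features, a recurrent layer aggregates them over time, and a linear head scores real versus attack. A minimal PyTorch sketch; all layer sizes and the GRU choice are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class CnnRnnPad(nn.Module):
    """Sketch of a stacked CNN-RNN for presentation attack detection."""

    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Per-frame CNN producing deep spatial features (sizes are placeholders).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Recurrent layer aggregating frame features over time.
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # real vs. attack

    def forward(self, frames):            # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, h = self.rnn(feats)            # temporal aggregation
        return self.head(h[-1])

logits = CnnRnnPad()(torch.randn(2, 8, 3, 64, 64))  # 2 clips of 8 frames each
```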

