temporal information extraction
Recently Published Documents

TOTAL DOCUMENTS: 39 (five years: 9)
H-INDEX: 7 (five years: 1)

2021, Vol 13 (2), pp. 46
Author(s): Jedsada Phengsuwan, Tejal Shah, Nipun Balan Thekkummal, Zhenyu Wen, Rui Sun, ...

Social media has played a significant role in disaster management, as it enables the general public to contribute to the monitoring of disasters by reporting incidents related to disaster events. However, the vast volume and wide variety of generated social media data create an obstacle in disaster management by limiting the availability of actionable information from social media. Several approaches have therefore been proposed in the literature to cope with the challenges of social media data for disaster management. To the best of our knowledge, there is no published literature on social media data management and analysis that identifies the research problems and provides a research taxonomy for classifying the common research issues. In this paper, we survey how social media data contribute to disaster management and the methodologies for social media data management and analysis in disaster management. This survey covers methodologies for social media data classification and event detection, as well as spatial and temporal information extraction. Furthermore, we propose a taxonomy of the research dimensions of social media data management and analysis for disaster management, which we then apply in a survey of the existing literature and use to discuss the core advantages and disadvantages of the various methodologies.


2020, Vol 39 (3), pp. 3769-3781
Author(s): Zhisong Han, Yaling Liang, Zengqun Chen, Zhiheng Zhou

Video-based person re-identification aims to match videos of pedestrians captured by non-overlapping cameras. Video provides both spatial and temporal information. However, most existing methods do not combine these two types of information well and ignore that they are of different importance in most cases. To address these issues, we propose a two-stream network with a joint distance metric for measuring the similarity of two videos. The proposed two-stream network has several appealing properties. First, the spatial stream focuses on multiple parts of a person and outputs robust local spatial features. Second, a lightweight and effective temporal information extraction block is introduced for video-based person re-identification. In the inference stage, the distance between two videos is measured by the weighted sum of the spatial distance and the temporal distance. We conduct extensive experiments on four public datasets, i.e., MARS, PRID2011, iLIDS-VID and DukeMTMC-VideoReID, to show that our proposed approach outperforms existing methods in video-based person re-ID.
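The joint distance metric described in this abstract can be sketched as follows. The weight `alpha` and the choice of cosine distance are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two feature vectors."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def joint_distance(spatial_a: np.ndarray, spatial_b: np.ndarray,
                   temporal_a: np.ndarray, temporal_b: np.ndarray,
                   alpha: float = 0.7) -> float:
    """Distance between two videos as a weighted sum of the spatial
    (appearance) distance and the temporal (motion) distance."""
    d_spatial = cosine_distance(spatial_a, spatial_b)
    d_temporal = cosine_distance(temporal_a, temporal_b)
    return alpha * d_spatial + (1.0 - alpha) * d_temporal
```

Weighting the two streams differently reflects the paper's observation that spatial and temporal cues are of different importance in most cases.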


Author(s): Artuur Leeuwenberg, Marie-Francine Moens

Time is deeply woven into how people perceive and communicate about the world. Almost unconsciously, we provide our language utterances with temporal cues, like verb tenses, and we can hardly produce sentences without such cues. Extracting temporal cues from text, and constructing a global temporal view about the order of described events, is a major challenge of automatic natural language understanding. Temporal reasoning, the process of combining different temporal cues into a coherent temporal view, plays a central role in temporal information extraction. This article presents a comprehensive survey of the research from the past decades on temporal reasoning for automatic temporal information extraction from text, providing a case study on the integration of symbolic reasoning with machine learning-based information extraction systems.


In this article, we focus on the super-resolution technique in computer vision applications. During the last decade, image super-resolution techniques have been introduced and adopted widely in various applications. However, the increasing demand for high-quality multimedia data has led towards high-resolution data streaming. Conventional techniques based on image super-resolution are not suitable for multi-frame SR. Moreover, motion estimation, motion compensation, and spatial and temporal information extraction are well-known challenging issues in the video super-resolution field. In this work, we address these issues and develop a novel deep learning-based architecture which performs feature alignment, filters the image using deep learning, and estimates the residual of low-resolution frames to generate the high-resolution frame. The proposed approach is named Align-Filter & Learn Video Super-resolution using Deep learning (AFLVSR). We conducted an extensive experimental analysis which shows a significant improvement in performance when compared with state-of-the-art video SR techniques.
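The residual formulation mentioned in this abstract can be illustrated with a toy NumPy sketch: the network predicts only the residual between a cheap upsampling of the low-resolution frame and the true high-resolution frame, so the HR frame is recovered as upsample(LR) + residual. The nearest-neighbour upsampling and scale factor here are illustrative assumptions, not the AFLVSR architecture itself:

```python
import numpy as np

def nearest_upsample(frame: np.ndarray, scale: int = 2) -> np.ndarray:
    """Nearest-neighbour upsampling of a 2-D frame by an integer scale."""
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def reconstruct_hr(lr_frame: np.ndarray, residual: np.ndarray,
                   scale: int = 2) -> np.ndarray:
    """HR frame = upsampled LR frame + (learned) residual.

    In a real system the residual would come from the trained network;
    here it is simply passed in."""
    return nearest_upsample(lr_frame, scale) + residual
```

Learning only the residual is a common design choice in SR because the upsampled input already carries most of the low-frequency content, leaving the network to model high-frequency detail.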


2019, Vol 66, pp. 341-380
Author(s): Artuur Leeuwenberg, Marie-Francine Moens

Time is deeply woven into how people perceive and communicate about the world. Almost unconsciously, we provide our language utterances with temporal cues, like verb tenses, and we can hardly produce sentences without such cues. Extracting temporal cues from text, and constructing a global temporal view about the order of described events, is a major challenge of automatic natural language understanding. Temporal reasoning, the process of combining different temporal cues into a coherent temporal view, plays a central role in temporal information extraction. This article presents a comprehensive survey of the research from the past decades on temporal reasoning for automatic temporal information extraction from text, providing a case study on how combining symbolic reasoning with machine learning-based information extraction systems can improve performance. It gives a clear overview of the methodologies used for temporal reasoning, and explains how temporal reasoning can be, and has been, successfully integrated into temporal information extraction systems. Based on the distillation of existing work, this survey also suggests currently unexplored research areas. We argue that the level of temporal reasoning that current systems use is still incomplete for the full task of temporal information extraction, and that a deeper understanding of how the various types of temporal information can be integrated into temporal reasoning is required to drive future research in this area.
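A minimal sketch of the symbolic temporal reasoning this survey discusses is inferring implicit event orderings by taking the transitive closure of extracted BEFORE relations. Real systems use richer calculi (e.g. Allen's interval algebra with thirteen relations); the event names below are invented for illustration:

```python
def temporal_closure(before_pairs):
    """Return the transitive closure of a set of (a, b) BEFORE relations:
    if a BEFORE b and b BEFORE c, infer a BEFORE c."""
    closure = set(before_pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))  # inferred: a BEFORE d
                    changed = True
    return closure

# Two relations extracted from text imply a third, unstated one.
extracted = {("wake", "breakfast"), ("breakfast", "work")}
inferred = temporal_closure(extracted)
# ("wake", "work") is now in the closure.
```

This kind of closure is exactly where symbolic reasoning complements a learned extractor: the classifier need not predict every pairwise relation if reasoning can derive the missing ones (or flag inconsistent predictions).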


2019, Vol 25 (3), pp. 385-403
Author(s): Chris R. Giannella, Ransom K. Winder, Joseph P. Jubinski

Approaches to building temporal information extraction systems typically rely on large, manually annotated corpora. Thus, porting these systems to new languages requires acquiring large corpora of manually annotated documents in the new languages. Acquiring such corpora is difficult owing to the complexity of temporal information extraction annotation. One strategy for addressing this difficulty is to reduce or eliminate the need for manually annotated corpora through annotation projection. This technique utilizes a temporal information extraction system for a source language (typically English) to automatically annotate the source language side of a parallel corpus. It then uses automatically generated word alignments to project the annotations, thereby creating noisily annotated target language training data. We developed an annotation projection technique for producing target language temporal information extraction systems. We carried out an English (source) to French (target) case study wherein we compared a French temporal information extraction system built using annotation projection with one built using a manually annotated French corpus. While annotation projection has been applied to building other kinds of Natural Language Processing tools (e.g., Named Entity Recognizers), to our knowledge, this is the first paper examining annotation projection as applied to temporal information extraction where no manual corrections of the target language annotations were made. We found that, even when using manually annotated data to build a temporal information extraction system, F-scores were relatively low (<0.35), which suggests that the problem is challenging in itself.
Our annotation projection approach performed well (relative to the system built from manually annotated data) on some aspects of temporal information extraction (e.g., event–document creation time temporal relation prediction), but it performed poorly on other kinds of temporal relation prediction (e.g., event–event and event–time).
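The projection step described above can be sketched in a few lines: token-level temporal labels predicted on the source (English) side of a parallel corpus are carried over to the target (French) side through word-alignment pairs. The sentences, label set, and alignments below are invented for illustration, and real projection must also handle one-to-many and unaligned tokens:

```python
def project_annotations(source_labels, alignments, target_len):
    """Project source token labels to target tokens via (src_idx, tgt_idx)
    word-alignment pairs; unaligned target tokens keep the 'O' label."""
    target_labels = ["O"] * target_len
    for src_idx, tgt_idx in alignments:
        if source_labels[src_idx] != "O":
            target_labels[tgt_idx] = source_labels[src_idx]
    return target_labels

# "He arrived yesterday" -> "Il est arrivé hier"
src_labels = ["O", "EVENT", "TIMEX"]
alignments = [(0, 0), (1, 2), (2, 3)]  # arrived->arrivé, yesterday->hier
print(project_annotations(src_labels, alignments, 4))
# -> ['O', 'O', 'EVENT', 'TIMEX']
```

The output is noisy training data for the target language: alignment errors propagate directly into the projected labels, which is consistent with the mixed results the abstract reports.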

