scene description
Recently Published Documents

TOTAL DOCUMENTS: 126 (FIVE YEARS: 27)
H-INDEX: 12 (FIVE YEARS: 3)

2021
Author(s): Natan Santos Moura, João Medrado Gondim, Daniela Barreiro Claro, Marlo Souza, Roberto de Cerqueira Figueiredo

The employment of video surveillance cameras by public safety agencies enables incident detection in monitored cities by using object detection for scene description, enhancing the protection of the general public. Object detection has its drawbacks, however, such as false positives. Our work aims to enhance object detection and image classification by employing IoU (Intersection over Union) to minimize false positives and to identify weapon holders or fire in a frame, adding more information to the scene description.
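The IoU measure named in the abstract can be sketched as follows. This is a minimal illustration of IoU between two axis-aligned boxes, not the authors' implementation; the (x1, y1, x2, y2) corner convention and the function name are assumptions.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned bounding boxes.

    Boxes use the (x1, y1, x2, y2) corner convention (an assumption;
    the paper does not specify its box format).
    """
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detector output whose IoU with every confirmed detection falls below some threshold (commonly 0.5) could then be treated as a likely false positive.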


2021
Author(s): Sinan Tan, Di Guo, Huaping Liu, Xinyu Zhang, Fuchun Sun

2021
Author(s): Rosiana Natalie, Jolene Loh, Huei Suen Tan, Joshua Tseng, Hernisa Kacorri, ...

2021
Author(s): Margaret Kandel, Colin Phillips

Although reflexive–antecedent agreement shows little susceptibility to number attraction in comprehension, prior production research using the preamble-completion paradigm has demonstrated attraction for both verbs and anaphora. In four production experiments, we compared number attraction effects on subject–verb and reflexive–antecedent agreement using a novel scene-description task in addition to a more traditional preamble elicitation paradigm. While the results from the preamble task align with prior findings, the more naturalistic scene-description task produced the same contrast observed in comprehension, with robust verb attraction but minimal anaphor attraction. In addition to analyzing agreement error distributions, we also analyzed the production time-course of participant responses, finding timing effects that pattern with error distributions, even when no error is present. The results suggest that production agreement processes show similar profiles to comprehension processes. We discuss potential sources of variable susceptibility to agreement attraction, suggesting that differences may arise from the time-course of information processing across tasks and linguistic dependencies.


2021, pp. 192536212110224
Author(s): Melissa C. Mercado, Deborah M. Stone, Caroline W. Kokubun, Aimée-Rika T. Trudeau, Elizabeth Gaylor, ...

Introduction: It is widely accepted that suicides—which account for more than 47 500 deaths per year in the United States—are undercounted by 10% to 30%, partially due to incomplete death scene investigations (DSI) and varying burden-of-proof standards across jurisdictions. This may result in the misclassification of overdose-related suicides as accidents or undetermined intent. Methods: Virtual and in-person meetings were held with suicidologists and DSI experts from five states (Spring-Summer 2017) to explore how features of a hypothetical electronic DSI tool may help address these challenges. Results: Participants envisioned a mobile DSI application for cell phones, tablets, or laptop computers. Features for systematic information collection, scene description, and guiding key informant interviews were perceived as useful for less-experienced investigators. Discussion: Wide adoption may be challenging due to differences in DSI standards, practices, costs, data privacy and security, and system integration needs. However, technological tools that support consistent and complete DSIs could strengthen the information needed to accurately identify overdose suicides.


Sensors, 2021, Vol 21 (8), pp. 2872
Author(s): Miroslav Uhrina, Anna Holesova, Juraj Bienik, Lukas Sevcik

This paper deals with the impact of content on perceived video quality evaluated using the subjective Absolute Category Rating (ACR) method. The assessment was conducted on eight types of video sequences with diverse content obtained from the SJTU dataset. The sequences were encoded at 5 different constant bitrates in two widely used video compression standards, H.264/AVC and H.265/HEVC, at Full HD and Ultra HD resolutions, yielding 160 annotated video sequences. The length of the Group of Pictures (GOP) was set to half the framerate value, as is typical for video intended for transmission over a noisy communication channel. The evaluation was performed in two laboratories: one situated at the University of Zilina, and the second at the VSB-Technical University of Ostrava. The results acquired in both laboratories showed a high correlation. Although the sequences with low Spatial Information (SI) and Temporal Information (TI) values reached a higher Mean Opinion Score (MOS) than the sequences with higher SI and TI values, these two parameters are not sufficient for scene description, and this domain should be the subject of further research. The evaluation results led us to the conclusion that it is unnecessary to use the H.265/HEVC codec for compression of Full HD sequences, and that the compression efficiency of the H.265 codec at Ultra HD resolution matches the compression efficiency of both codecs at Full HD resolution. This paper also includes recommendations for minimum bitrate thresholds at which video sequences at both resolutions retain good and fair subjectively perceived quality.
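The ACR scoring used above reduces each sequence's ratings to a Mean Opinion Score. A minimal sketch of that computation, with an approximate 95% confidence half-interval, is shown below; the function name and the normal-approximation interval are assumptions, not taken from the paper.

```python
import math
import statistics

def mos_with_ci(ratings, z=1.96):
    """Mean Opinion Score and approximate 95% confidence half-interval.

    ratings: ACR scores on the 5-point scale (5 = excellent ... 1 = bad),
    one score per observer for a single test sequence.
    """
    n = len(ratings)
    mos = statistics.mean(ratings)
    # Normal-approximation half-interval; zero when there is no spread.
    half = z * statistics.stdev(ratings) / math.sqrt(n) if n > 1 else 0.0
    return mos, half
```

In a study like this one, a MOS around 4 corresponds to "good" and around 3 to "fair" perceived quality on the 5-point ACR scale.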


2021, Vol 58 (4), pp. 0410012
Author(s): 黄友文 Huang Youwen, 周斌 Zhou Bin, 唐欣 Tang Xin

2020, Vol 45 (12), pp. 10511-10527
Author(s): Haikel Alhichri, Yakoub Bazi, Naif Alajlan

Advances in technology can provide a lot of support for visually impaired (VI) persons. In particular, computer vision and machine learning can provide solutions for object detection and recognition. In this work, we propose a multi-label image classification solution for assisting a VI person in recognizing the presence of multiple objects in a scene. The solution is based on the fusion of two deep CNN models using the induced ordered weighted averaging (OWA) approach. Namely, we fuse the outputs of two pre-trained CNN models, VGG16 and SqueezeNet. To use the induced OWA approach, we need to estimate a confidence measure for the outputs of the two CNN base models. To this end, we propose the residual error between the predicted output and the true output as a measure of confidence. We estimate this residual error using another dedicated CNN model that is trained on the residual errors computed from the main CNN models. The OWA technique then uses these estimated residual errors as confidence measures and fuses the decisions of the two main CNN models. When tested on four image datasets of indoor environments from two separate locations, the proposed method improves the detection accuracy compared to both base CNN models. The results are also significantly better than state-of-the-art methods reported in the literature.
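The induced OWA fusion described above can be sketched in a few lines. This is an illustration under stated assumptions, not the authors' implementation: the residual errors are given as plain numbers (in the paper they are predicted by a dedicated CNN), the OWA weights are illustrative, and all names are hypothetical.

```python
def induced_owa_fuse(probs_a, probs_b, err_a, err_b, weights=(0.7, 0.3)):
    """Fuse two models' class-probability vectors with induced OWA.

    err_a / err_b are each model's estimated residual errors, acting as
    the inducing variable: the lower-error (more confident) model's
    output is ordered first and receives the larger OWA weight.
    """
    # Induce the ordering: most confident (lowest residual error) first.
    ordered = sorted([(err_a, probs_a), (err_b, probs_b)], key=lambda t: t[0])
    # Weighted average of the reordered outputs, per class.
    return [
        sum(w * probs[i] for w, (_, probs) in zip(weights, ordered))
        for i in range(len(probs_a))
    ]
```

The key property of induced OWA, as opposed to a fixed weighted average, is that the weights attach to positions in the confidence ordering rather than to specific models, so whichever model is more confident on a given image dominates the fused decision.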

