object level
Recently Published Documents

TOTAL DOCUMENTS: 320 (five years: 116)
H-INDEX: 25 (five years: 4)

2022
Author(s): Yujia Peng, Joseph M Burling, Greta K Todorova, Catherine Neary, Frank E Pollick, ...

When viewing the actions of others, we not only see patterns of body movements, but we also "see" people's intentions and social relations, enabling us to understand the surrounding social environment. Previous research has shown that experienced forensic examiners, namely Closed-Circuit Television (CCTV) operators, outperform novices at identifying and predicting hostile intentions from surveillance footage. However, it remains largely unknown what visual content CCTV operators actively attend to when viewing surveillance footage, and whether they develop active information-seeking strategies that differ from those of novices. In this study, we conducted computational analyses of gaze-centered stimuli derived from the eye movements of experienced CCTV operators and novices as they viewed the same surveillance footage. These analyses examined how low-level visual features and object-level semantic features contribute to the attentive gaze patterns of the two groups. Low-level image features were extracted by a visual saliency model, whereas object-level semantic features were extracted from gaze-centered regions by a deep convolutional neural network (DCNN), AlexNet. We found that the visual regions attended to by CCTV operators versus by novices can be reliably classified by their patterns of saliency features and DCNN features. Additionally, CCTV operators showed greater inter-subject correlation than novices in attending to saliency features and DCNN features. These results suggest that the looking behavior of CCTV operators differs from that of novices in actively attending to different patterns of saliency and semantic features in both low-level and high-level visual processing. Expertise in selectively attending to informative features at different levels of the visual hierarchy may play an important role in facilitating the efficient detection of social relationships between agents and the prediction of harmful intentions.
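To make the analysis pipeline concrete, here is a minimal sketch of the feature-extraction and group-classification step, assuming gaze-centered patches have already been cropped from the footage. The pretrained-AlexNet feature extractor matches the abstract, but the linear classifier and the variable names (`patches`, `labels`) are illustrative assumptions, not the authors' code.

```python
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained AlexNet as a fixed extractor of object-level semantic features.
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def dcnn_features(patch: Image.Image) -> np.ndarray:
    """Penultimate-layer AlexNet activation for one gaze-centered patch."""
    x = preprocess(patch).unsqueeze(0)
    with torch.no_grad():
        h = alexnet.avgpool(alexnet.features(x)).flatten(1)
        h = alexnet.classifier[:-1](h)  # stop before the 1000-way class scores
    return h.squeeze(0).numpy()

# Hypothetical usage: `patches` is a list of gaze-centered crops and `labels`
# marks operator (1) vs. novice (0) gaze; a linear classifier then tests
# whether the two groups' attended regions are decodable from DCNN features.
#   from sklearn.svm import LinearSVC
#   from sklearn.model_selection import cross_val_score
#   X = np.stack([dcnn_features(p) for p in patches])
#   print(cross_val_score(LinearSVC(), X, labels, cv=5).mean())
```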


2022
Vol 31, pp. 365
Author(s): Alexandros Kalomoiros

This paper is concerned with the way the denotation of the bare singular and the process of Pseudo-Incorporation (PI) interact in Western Armenian (WA). We argue that bare singulars in WA unambiguously denote properties of kinds, thus differing significantly from languages like English and Turkish, where they are ambiguous between object-level and kind-level properties (Dayal 2004, Sağ 2019). Our argument comes from Pseudo-Incorporation. WA allows PI of [Num (CLF) Nsg] elements (covert plurals), which denote object-level properties. At the same time, PI-ed NPs (either bare singulars or covert plurals) accept only kind-level modification. This cannot be accounted for by restricting PI to kind-denoting NPs (as in Turkish; Sağ 2019), since object-level properties (i.e., covert plurals) are also PI-ed. We derive the pattern by building an analysis of PI in which the bare singular is unambiguously kind-denoting.
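As a rough illustration of the type distinction the argument turns on, in the style of Dayal (2004) and Sağ (2019); the notation below is ours, not quoted from the paper:

```latex
% Schematic semantic types (illustrative notation, not the paper's):
% object-level property: \lambda x_o.\,P(x_o)  of type \langle e_o, t \rangle
% kind-level property:   \lambda x_k.\,P(x_k)  of type \langle e_k, t \rangle
\[
  [\![\text{WA bare singular}]\!] \in D_{\langle e_k,\, t\rangle}
  \qquad
  [\![\text{Num (CLF) N}_{\text{sg}}]\!] \in D_{\langle e_o,\, t\rangle}
\]
```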


2021
Vol 13 (2), pp. 162-172
Author(s): A. Abidi, A. Ben Abbes, Y. J. E. Gbodjo, D. Ienco, I. R. Farah

2021
Vol 8 (2), pp. 317-328
Author(s): Meng-Yao Cui, Zhe Zhu, Yulu Yang, Shao-Ping Lu

Existing color editing algorithms enable users to edit the colors in an image according to their own aesthetics. Unlike artists, who have an accurate grasp of color, ordinary users are inexperienced in color selection and matching, and allowing non-professional users to edit colors arbitrarily may lead to unrealistic editing results. To address this issue, we introduce a palette-based approach for realistic object-level image recoloring. Our data-driven approach consists of an offline learning part that learns the color distributions of different objects in the real world, and an online recoloring part that first recognizes the object category and then recommends appropriate, realistic candidate colors learned in the offline step for that category. We also provide an intuitive user interface for efficient color manipulation. After color selection, image matting is performed to ensure smoothness of the object boundary. Comprehensive evaluation on various color editing examples demonstrates that our approach outperforms existing state-of-the-art color editing algorithms.
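A minimal sketch of the offline/online split described above, assuming per-category pixel samples from real-world images are available; the k-means palette learning, the category names, and the choice of `k` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_palette(object_pixels_rgb: np.ndarray, k: int = 5) -> np.ndarray:
    """Offline step: cluster pixels from many real-world instances of one
    object category into a k-color palette (rows are RGB centroids)."""
    km = KMeans(n_clusters=k, n_init=10).fit(object_pixels_rgb.reshape(-1, 3))
    return km.cluster_centers_

def recommend_colors(palettes: dict, category: str) -> np.ndarray:
    """Online step: once the object's category is recognized, return its
    learned palette as realistic candidate colors for the user to pick from."""
    return palettes[category]

# Hypothetical usage, with `car_pixels` / `shirt_pixels` as (N, 3) RGB arrays:
#   palettes = {"car": learn_palette(car_pixels), "shirt": learn_palette(shirt_pixels)}
#   candidates = recommend_colors(palettes, "car")
```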


2021
Vol 11 (21), pp. 10261
Author(s): Joanna Kazzandra Dumagpi, Yong-Jin Jeong

Threat detection in X-ray security images is critical for preserving public safety. Recently, deep learning algorithms have begun to be adopted for threat detection tasks in X-ray security images. However, most prior work in this field has focused on image-level classification and object-level detection approaches. Adopting object separation as a pixel-level approach to analyzing X-ray security images can significantly improve automatic threat detection. In this paper, we investigated the effects of incorporating segmentation deep learning models into the threat detection pipeline of a large-scale imbalanced X-ray dataset. We trained a Faster R-CNN (region-based convolutional neural network) model on a balanced dataset to localize possible threat regions in the X-ray security images and maximize the detection of true positives. Then, we trained a DeepLabV3+ model to verify the preliminary detections by classifying each pixel in the threat regions, which suppressed false positives. The two models were combined in one detection pipeline to produce the final detections. Experimental results demonstrate that the proposed method significantly outperformed previous baseline methods and end-to-end instance segmentation methods, achieving mean average precisions (mAPs) of 94.88%, 91.40%, and 89.42% across increasing scales of imbalance in the practical dataset.
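A minimal sketch of the two-stage pipeline described above: a detector proposes threat regions, and a segmentation model verifies each region at the pixel level. The generic torchvision weights, the `min_threat_fraction` threshold, and the background-vs-threat handling are stand-in assumptions; the paper's models were trained on X-ray data, not these pretrained checkpoints.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.segmentation import deeplabv3_resnet50

# Stage 1 proposes regions; stage 2 verifies them pixel by pixel.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
segmenter = deeplabv3_resnet50(weights="DEFAULT").eval()

@torch.no_grad()
def detect_and_verify(image: torch.Tensor, min_threat_fraction: float = 0.1):
    """image: (3, H, W) float tensor in [0, 1]. Returns (box, score) pairs
    whose regions survive pixel-level verification."""
    dets = detector([image])[0]
    verified = []
    for box, score in zip(dets["boxes"], dets["scores"]):
        x1, y1, x2, y2 = box.int().tolist()
        if x2 - x1 < 2 or y2 - y1 < 2:
            continue  # skip degenerate crops
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)
        logits = segmenter(crop)["out"]       # (1, C, h, w) per-pixel scores
        mask = logits.argmax(1) != 0          # non-background pixels
        # Keep the detection only if enough pixels look threat-like,
        # suppressing box-level false positives.
        if mask.float().mean() >= min_threat_fraction:
            verified.append((box, score))
    return verified
```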


Sensors
2021
Vol 21 (20), pp. 6825
Author(s): Jaime Maldonado, Lino Antoni Giefer

Bottom-up saliency models identify the salient regions of an image based on features such as color, intensity, and orientation. These models are typically used as predictors of human visual behavior and for computer vision tasks. In this paper, we conduct a systematic evaluation of the saliency maps computed by four selected bottom-up models on images of urban and highway traffic scenes. Saliency is investigated both over whole images and at the object level, and is characterized in terms of the energy and the entropy of the saliency maps. We identify significant differences with respect to the amount, size, and shape complexity of the salient areas computed by the different models. Based on these findings, we analyze the likelihood that object instances fall within the salient areas of an image and investigate the agreement between the segments of traffic participants and the saliency maps of the different models. The overall and object-level analyses provide insights into the distinctive features of the salient areas identified by the different models, which can serve as selection criteria for prospective applications in autonomous driving, such as object detection and tracking.
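For concreteness, a minimal sketch of the two summary statistics named above, treating a saliency map as a distribution over pixels; these particular normalizations are assumptions, and the paper's exact definitions may differ.

```python
import numpy as np

def saliency_energy(smap: np.ndarray) -> float:
    """Sum of squared saliency values after max-normalization."""
    s = smap / (smap.max() + 1e-12)
    return float(np.sum(s ** 2))

def saliency_entropy(smap: np.ndarray) -> float:
    """Shannon entropy (bits) of the map normalized to sum to one."""
    p = smap.ravel().astype(np.float64)
    p = p / (p.sum() + 1e-12)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def object_level_stats(smap: np.ndarray, mask: np.ndarray):
    """Object-level variant (hypothetical helper): restrict the saliency
    map to an object's segmentation mask before computing both statistics."""
    region = smap * mask
    return saliency_energy(region), saliency_entropy(region)
```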


2021
Vol 32 (4), pp. 1-13
Author(s): Xia Feng, Zhiyi Hu, Caihua Liu, W. H. Ip, Huiying Chen

In recent years, deep learning has achieved remarkable results in the text-image retrieval task. However, existing methods consider only global image features and ignore vital local information, which leads to poor matching between text and images. Given that object-level image features can help match text and images, this article proposes a text-image retrieval method that fuses salient image feature representations. Fusing salient features at the object level improves the understanding of image semantics and thus the performance of text-image retrieval. The experimental results show that the proposed method is comparable to the latest methods, and its recall on some retrieval results exceeds that of current work.
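A minimal sketch of the kind of fusion described above, assuming global and object-level (salient-region) features have already been extracted; the mean-pooling, concatenation, and cosine-similarity ranking are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def fuse_image_features(global_feat: torch.Tensor,
                        object_feats: torch.Tensor) -> torch.Tensor:
    """global_feat: (D,); object_feats: (N, D) for N salient objects.
    Pools the object-level features and concatenates them with the
    global feature, so local semantics can inform the match."""
    pooled = object_feats.mean(dim=0)                # simple average pooling
    return torch.cat([global_feat, pooled], dim=-1)  # fused embedding, (2D,)

def rank_images(text_emb: torch.Tensor, image_embs: torch.Tensor) -> torch.Tensor:
    """Return image indices sorted by cosine similarity to the text
    embedding; text_emb must be projected to the fused dimension (2D)."""
    sims = F.cosine_similarity(text_emb.unsqueeze(0), image_embs, dim=-1)
    return sims.argsort(descending=True)
```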


2021
Vol 6 (4), pp. 7041-7048
Author(s): Shiqi Lin, Jikai Wang, Meng Xu, Hao Zhao, Zonghai Chen
