object identification
Recently Published Documents

TOTAL DOCUMENTS: 1056 (FIVE YEARS: 267)
H-INDEX: 48 (FIVE YEARS: 6)

2022 ◽  
Vol 13 (1) ◽  
Author(s):  
Yi Yang ◽  
Tian Wang ◽  
Yang Li ◽  
Weifeng Dai ◽  
Guanzhong Yang ◽  
...  

Both surface luminance and edge contrast of an object are essential features for object identification. However, cortical processing of surface luminance remains unclear. In this study, we aim to understand how the primary visual cortex (V1) processes surface luminance information across its different layers. We report that edge-driven responses are stronger than surface-driven responses in V1 input layers, but that luminance information is coded more accurately by surface responses. In V1 output layers, the advantage of edge over surface responses increases eightfold, and luminance information is coded more accurately at edges. Further analysis of neural dynamics shows that these substantial changes in neural responses and luminance coding are mainly due to non-local cortical inhibition in V1's output layers. Our results suggest that non-local cortical inhibition modulates the responses elicited by the surfaces and edges of objects, and that this switch in coding strategy in V1 promotes efficient coding of luminance.


Author(s):  
Ching-Liang Su

In this study, "ring rotation invariant transform" techniques are used to add more salient features to the original images. The ring rotation invariant transform addresses the image-rotation problem: it transfers a ring signal to several signal vectors in the complex domain, from which rotation-invariant magnitudes are generated. Matrix correlation is employed to combine these magnitudes into various discriminators, which are then used to identify objects. To manage the image-shifting problem, each pixel in the sample image is compared with the surrounding pixels of the unknown image. The comparison in this study is performed on a pixel-by-pixel basis.
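The rotation-invariance idea described above can be sketched briefly: pixels sampled on a ring around a candidate centre form a 1-D periodic signal, an in-plane rotation of the image circularly shifts that signal, and a circular shift changes only the phase of its discrete Fourier transform, leaving the magnitudes unchanged. The following is a minimal sketch of that principle; the function names and sampling choices are illustrative assumptions, not the paper's code:

```python
import numpy as np

def ring_signal(img, center, radius, n_samples=64):
    """Sample grey levels on a ring around `center` (nearest-pixel sampling)."""
    cy, cx = center
    angles = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    ys = np.clip(np.round(cy + radius * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + radius * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs].astype(float)

def ring_invariant(signal):
    """DFT magnitudes of the ring signal: an image rotation circularly shifts
    the signal, which changes only the phase of its spectrum, so the
    magnitudes form a rotation-invariant feature vector."""
    return np.abs(np.fft.fft(signal))
```

Because only magnitudes are kept, the same object yields (nearly) the same feature vector at any in-plane rotation, which is what makes the subsequent matrix-correlation matching rotation tolerant.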


2022 ◽  
Author(s):  
Annie Warman ◽  
Stephanie Rossit ◽  
George Law Malcolm ◽  
Allan Clark

It has been repeatedly shown that pictures of graspable objects can facilitate visual processing and motor responses, even in the absence of reach-to-grasp actions, an effect often attributed to the concept of affordances, originally introduced by Gibson (1979). A classic demonstration of this is the handle compatibility effect, characterised by faster reaction times (RTs) when the orientation of a graspable object's handle is compatible with the hand used to respond, even when the handle orientation is task irrelevant. Nevertheless, whether the faster RTs are due to affordances or to spatial compatibility effects has been extensively debated. In the proposed studies, we will use a stimulus-response compatibility paradigm to investigate, first, whether we can replicate the handle compatibility effect while controlling for spatial compatibility. Participants will respond with left- or right-handed keypresses to whether the object is upright or inverted and, in separate blocks, to whether the object is red or green. RTs will be analysed using repeated-measures ANOVAs. In line with an affordance account, we hypothesise larger handle compatibility effects for upright/inverted judgements than for colour judgements, as colour judgements do not require object identification and are not thought to elicit affordances. Second, we will investigate whether the handle compatibility effect shows a lower visual field (VF) advantage, in line with the functional lower-VF advantages observed for hand actions. We expect larger handle compatibility effects for objects viewed in the lower VF than in the upper VF, given that the lower VF is the space where actions most frequently occur.


2022 ◽  
Vol 355 ◽  
pp. 02054
Author(s):  
Sijun Xie ◽  
Yipeng Zhou ◽  
Iker Zhong ◽  
Wenjing Yan ◽  
Qingchuan Zhang

In industrial settings, deep learning models deployed for object detection and tracking are normally too large, and appropriate trade-offs between speed and accuracy are required. In this paper, we present a compressed object identification model called Tailored-YOLO (T-YOLO) and build a lighter deep neural network based on T-YOLO and DeepSort. The model greatly reduces the number of parameters by tailoring the Conv and BottleneckCSP layers. We verify the construction by counting packages during the warehouse input-output process. Theoretical analysis and experimental results show that the mean average precision (mAP) is 99.50%, the recognition accuracy of the model is 95.88%, the counting accuracy is 99.80%, and the recall is 99.15%. Compared with the YOLOv5 model combined with DeepSort, the proposed optimization method maintains the accuracy of package recognition and counting while reducing the model parameters by 11 MB.
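The counting step on top of the tracker can be illustrated independently of the detector: once DeepSort assigns stable IDs to packages, counting reduces to recording each track's first crossing of a virtual counting line. A minimal sketch, where the function name and the track-history format are illustrative assumptions rather than the paper's code:

```python
def count_line_crossings(track_history, line_y):
    """Count tracks whose centroid crosses a horizontal counting line downward.

    track_history: {track_id: [y0, y1, ...]} centroid y-coordinate per frame.
    Each track is counted at most once, at its first crossing."""
    counted = set()
    for tid, ys in track_history.items():
        for prev, cur in zip(ys, ys[1:]):
            if prev < line_y <= cur:   # moved from above the line to on/below it
                counted.add(tid)
                break
    return len(counted)
```

Counting by track ID rather than per-frame detections is what makes the count robust to a package being detected in many consecutive frames.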


2021 ◽  
Vol 18 (4) ◽  
pp. 43-52
Author(s):  
V. I. Santoniy ◽  
Ya. I. Lepikh ◽  
L. M. Budianskaya ◽  
V. I. Yanko

The methods for forming the spatial-energy distribution of the probing radiation power and for processing the received signal in locating laser information-measuring systems (LIMS) are optimized, taking the signal's spatial-temporal structure into account, and the existing processing methods are analysed. Integral criteria for LIMS functioning under interference conditions are assessed. The parameters of the main LIMS links are calculated, accounting for the relationship between the resolution of the optical system and the capabilities of object detection, recognition, and classification. A method was developed for forming the probing radiation density distribution and processing the received signal with regard to its space-time structure, which made it possible to determine the optimal duration of the laser probe pulse. This duration eliminates errors in measuring the parameters of an object's movement under a combination of destabilizing factors and limited signal-processing time, ensuring accurate target detection and recognition.
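The link between probe-pulse duration and measurement accuracy rests on the basic pulsed-ranging relations: range is half the round-trip time of flight times the speed of light, and the pulse length bounds the achievable range resolution. A minimal sketch of these relations (illustrative only, not the paper's model):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_delay(delay_s):
    """Target range from round-trip echo delay: the pulse travels out and back,
    so the one-way distance is half the total path."""
    return C * delay_s / 2.0

def range_resolution(pulse_s):
    """Two echoes merge unless separated by at least the pulse length in time,
    so the pulse duration sets a floor on range resolution."""
    return C * pulse_s / 2.0
```

For example, a 10 ns probe pulse limits range resolution to about 1.5 m, which is one reason shortening the pulse (within energy and detector constraints) improves measurement of a moving object's parameters.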


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Taowen Xiao ◽  
Zijian Cai ◽  
Cong Lin ◽  
Qiong Chen

Image sonar is a widely used wireless communication technology for detecting underwater objects, but limited equipment resolution often makes the detected objects difficult to identify. In view of the remarkable results achieved by artificial intelligence techniques in underwater wireless communication research, we propose an object detection method based on a convolutional neural network (CNN) and shadow information capture, which improves object recognition and localization in underwater sonar images by making full use of the shadow information of the object. We design a Shadow Capture Module (SCM) that captures and exploits the shadow information in the feature map. SCM is compatible with existing CNN models, adds only a small number of parameters, and is readily portable; by referencing shadow features, it effectively alleviates the recognition difficulties caused by limited device resolution. Extensive experiments on the underwater sonar dataset provided by Pengcheng Lab show that the proposed method effectively improves the feature representation of the CNN model and enhances the separation between class features. Under the main evaluation standard of PASCAL VOC 2012, the proposed method improves the mean average precision (mAP) from 69.61% to 75.73% at an IoU threshold of 0.7, exceeding many existing conventional deep learning models, while the lightweight design of the proposed module further aids the deployment of artificial intelligence technology in underwater wireless communication.
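The abstract does not specify the SCM internals, but the underlying idea of using shadow evidence to reweight features can be sketched. In the toy version below, dark pixels (acoustic shadows fall behind objects, so they appear as low-intensity regions) are pooled down to the feature-map grid and used as a soft gate. All names, thresholds, and the gating form are assumptions for illustration, not the paper's module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def shadow_attention(features, image, shadow_thresh=0.2):
    """Toy shadow-capture step: build a mask of dark (shadow-like) pixels,
    average-pool it to the feature-map resolution, and residually reweight
    the features so locations with shadow evidence are emphasized.

    features: (C, h, w) feature map; image: (H, W) sonar intensities in [0, 1]."""
    h, w = features.shape[1:]
    mask = (image < shadow_thresh).astype(float)          # dark pixels as shadow cues
    fh, fw = image.shape[0] // h, image.shape[1] // w
    pooled = mask[:h * fh, :w * fw].reshape(h, fh, w, fw).mean(axis=(1, 3))
    gate = sigmoid(4.0 * (pooled - 0.5))                  # soft gate in (0, 1)
    return features * (1.0 + gate)                        # residual reweighting
```

The residual form (`1 + gate`) means the module can only amplify, never suppress, the original features, which matches the stated goal of being a low-cost add-on to an existing CNN.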


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Jiedong Zhang ◽  
Yong Jiang ◽  
Yunjie Song ◽  
Peng Zhang ◽  
Sheng He

Regions sensitive to specific object categories as well as organized spatial patterns sensitive to different features have been found across the whole ventral temporal cortex (VTC). However, it is unclear that within each object category region, how specific feature representations are organized to support object identification. Would object features, such as object parts, be represented in fine-scale spatial tuning within object category-specific regions? Here, we used high-field 7T fMRI to examine the spatial tuning to different face parts within each face-selective region. Our results show consistent spatial tuning of face parts across individuals that within right posterior fusiform face area (pFFA) and right occipital face area (OFA), the posterior portion of each region was biased to eyes, while the anterior portion was biased to mouth and chin stimuli. Our results demonstrate that within the occipital and fusiform face processing regions, there exist systematic spatial tuning to different face parts that support further computation combining them.


Author(s):  
A. Sathesh ◽  
Yasir Babiker Hamdan

Recently, moving object recognition and tracking have become popular, and difficult, research issues in computer vision and video surveillance applications. When an item is left unattended in a video surveillance scene for an extended period of time, it is considered abandoned. Detecting abandoned or removed objects in complex surveillance recordings is challenging owing to various factors, including occlusion and rapid illumination changes. Background subtraction in conjunction with object tracking is often used in an automated abandoned-item identification system to check for certain pre-set patterns of activity that occur when an item is abandoned. An upgraded form of image processing is used in the preprocessing stage to extract foreground items. Static items that persist across subsequent frames for extended periods are recognized using the contour characteristics of foreground objects. An edge-based object identification approach then classifies the identified static items into human and non-human objects. Depending on the analysis of the stationary object, an alert is activated at a specific distance from the item. There is evidence that the suggested system has a fast reaction time and is useful for real-time monitoring. The aim of this study is to discover abandoned items in public settings in a timely manner.
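The core persistence logic of such a pipeline, a pixel that stays foreground long enough is flagged as a static (potentially abandoned) item, can be sketched in a few lines. This is a minimal per-pixel version under assumed thresholds, not the paper's implementation:

```python
import numpy as np

def update_static_map(bg, frame, static_age, diff_thresh=25, alarm_frames=50):
    """One step of a simple abandoned-object detector.

    Pixels that differ from the background model (foreground) accumulate age;
    pixels that return to background reset to zero. Pixels whose age reaches
    `alarm_frames` are flagged as static items for further classification."""
    foreground = np.abs(frame.astype(int) - bg.astype(int)) > diff_thresh
    static_age = np.where(foreground, static_age + 1, 0)
    alarm = static_age >= alarm_frames
    return static_age, alarm
```

In a full system the flagged regions would then be passed to the contour- and edge-based stages described above to separate, say, a stationary person from a left-behind bag.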


Land ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1316
Author(s):  
Felipe Saad ◽  
Sumalika Biswas ◽  
Qiongyu Huang ◽  
Ana Paula Dalla Corte ◽  
Márcio Coraiola ◽  
...  

The Brazilian Atlantic Forest is a global biodiversity hotspot and has been extensively mapped using satellite remote sensing. However, past mapping focused on overall forest cover without consideration of keystone plant resources such as Araucaria angustifolia. A. angustifolia is a critically endangered coniferous tree that is essential for supporting overall biodiversity in the Atlantic Forest. A. angustifolia’s distribution has declined dramatically because of overexploitation and land-use changes. Accurate detection and rapid assessments of the distribution and abundance of this species are urgently needed. We compared two approaches for mapping Araucaria angustifolia across two scales (stand vs. individual tree) at three study sites in Brazil. The first approach used Worldview-2 images and Random Forest in Google Earth Engine to detect A. angustifolia at the stand level, with an accuracy of >90% across all three study sites. The second approach relied on object identification using UAV-LiDAR and successfully mapped individual trees (producer’s/user’s accuracy = 94%/64%) at one study site. Both approaches can be employed in tandem to map remaining stands and to determine the exact location of A. angustifolia trees. Each approach has its own strengths and weaknesses, and we discuss their adoptability by managers to inform conservation of A. angustifolia.
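The producer's/user's accuracy figures quoted above come from a standard confusion-matrix computation: producer's accuracy is the fraction of reference pixels of a class that were mapped correctly (recall), while user's accuracy is the fraction of pixels mapped to that class that are correct (precision). A minimal sketch with an assumed rows-are-map, columns-are-reference convention:

```python
import numpy as np

def producers_users_accuracy(confusion, cls):
    """Per-class accuracy measures from a confusion matrix.

    confusion[i, j] = number of pixels mapped as class i whose reference
    label is class j. Producer's accuracy divides by the reference (column)
    total; user's accuracy divides by the mapped (row) total."""
    producers = confusion[cls, cls] / confusion[:, cls].sum()
    users = confusion[cls, cls] / confusion[cls, :].sum()
    return producers, users
```

A gap like the 94%/64% reported for individual trees means most real A. angustifolia crowns were found (high producer's accuracy), but about a third of the mapped crowns were false positives (lower user's accuracy).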

