Object Classification: Recently Published Documents

Total documents: 1078 (last five years: 283)
H-index: 38 (last five years: 6)

2022 ◽  
Vol 18 (1) ◽  
pp. 1-27
Author(s):  
Ran Xu ◽  
Rakesh Kumar ◽  
Pengcheng Wang ◽  
Peter Bai ◽  
Ganga Meghanath ◽  
...  

Videos are expensive to transport over the network, so running analytics on live video directly on embedded or mobile devices has become an important system driver. Such devices, e.g., surveillance cameras or AR/VR gadgets, are resource constrained, and although there has been significant work on creating lightweight deep neural networks (DNNs) for such clients, none of these can adapt to changing runtime conditions, e.g., changes in resource availability on the device, the content characteristics, or requirements from the user. In this article, we introduce ApproxNet, a video object classification system for embedded or mobile clients. It enables novel dynamic approximation techniques to achieve the desired inference latency-accuracy trade-off under changing runtime conditions. It does so by exposing two approximation knobs within a single DNN model rather than creating and maintaining an ensemble of models, as in MCDNN [MobiSys-16]. We show that ApproxNet adapts seamlessly at runtime to these changes, provides low and stable latency for image and video frame classification, and improves accuracy and latency over ResNet [CVPR-16], MCDNN [MobiSys-16], MobileNets [Google-17], NestDNN [MobiCom-18], and MSDNet [ICLR-18].
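The abstract does not spell out ApproxNet's scheduler, but the two-knob idea can be sketched as follows: a single DNN exposes an input-resolution knob and an early-exit (depth) knob, and the runtime picks the most accurate profiled configuration that fits the current latency budget. The configurations, latencies, and accuracies below are illustrative assumptions, not numbers from the paper.

```python
# Hypothetical sketch of two-knob approximation selection, in the spirit
# of ApproxNet: profile each (input resolution, exit depth) setting of a
# single DNN offline, then at runtime choose the most accurate setting
# whose latency fits the budget. All profile numbers below are made up.

# (resolution, exit_depth) -> (profiled_latency_ms, profiled_accuracy)
PROFILE = {
    (224, 4): (95.0, 0.76),
    (224, 3): (70.0, 0.72),
    (160, 4): (55.0, 0.70),
    (160, 3): (40.0, 0.66),
    (112, 2): (22.0, 0.58),
}

def pick_config(latency_budget_ms):
    """Return the most accurate (resolution, exit_depth) knob setting
    that meets the latency budget, or the fastest setting if none fits."""
    feasible = [(cfg, lat, acc) for cfg, (lat, acc) in PROFILE.items()
                if lat <= latency_budget_ms]
    if feasible:
        return max(feasible, key=lambda t: t[2])[0]
    # Nothing fits: degrade gracefully to the lowest-latency setting.
    return min(PROFILE.items(), key=lambda kv: kv[1][0])[0]
```

Because both knobs live inside one model, switching settings is a cheap reconfiguration rather than a model swap, which is the contrast the abstract draws with ensemble approaches such as MCDNN.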


2022 ◽  
Author(s):  
Maik Bieleke ◽  
Eve Legrand ◽  
Astrid Mignon ◽  
Peter M Gollwitzer

Forming implementation intentions (i.e., if-then planning) is a powerful self-regulation strategy that enhances goal attainment by facilitating the automatic initiation of goal-directed responses upon encountering critical situations. Yet, little is known about the consequences of forming implementation intentions for goal attainment in situations that were not specified in the if-then plan. In three experiments, we assessed goal attainment in terms of speed and accuracy in an object classification task, focusing on situations that were similar or dissimilar to critical situations and that required planned or different responses. The results of Experiments 1 and 3 provide evidence for a facilitation of planned responses in critical and in sufficiently similar situations, enhancing goal attainment when the planned response was required and impairing it otherwise. In Experiment 3, however, additional unfavorable effects emerged in situations that were dissimilar to the critical one but nonetheless required the planned response. We discuss theoretical implications as well as potential benefits and pitfalls emerging from these non-planned effects of forming implementation intentions.


2021 ◽  
Author(s):  
Jiangbing Qin ◽  
Hongen Wu ◽  
Haiyong Guo ◽  
Shuang Yao ◽  
Xiang Liu ◽  
...  

Author(s):  
Shuhuan Wen ◽  
Xin Liu ◽  
Zhe Wang ◽  
Hong Zhang ◽  
Zhishang Zhang ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7860
Author(s):  
Chulhee Bae ◽  
Yu-Cheol Lee ◽  
Wonpil Yu ◽  
Sejin Lee

Three-dimensional point clouds have been utilized and studied for the classification of objects at the environmental level. While most existing studies, such as those in the field of computer vision, have detected object type from the perspective of sensors, this study developed a specialized strategy for object classification using LiDAR data points on the surface of the object. We propose a method for generating a spherically stratified point projection (sP2) feature image that can be applied to existing image-classification networks by performing pointwise classification based on a 3D point cloud using only LiDAR sensor data. The sP2's main engine performs image generation through spherical stratification, evidence collection, and channel integration. Spherical stratification categorizes neighboring points into three layers according to distance ranges. Evidence collection calculates the occupancy probability based on Bayes' rule to project 3D points onto a two-dimensional surface corresponding to each stratified layer. Channel integration generates sP2 RGB images whose three channels carry evidence values representing short, medium, and long distances. Finally, the sP2 images are used as a trainable source for classifying the points into predefined semantic labels. Experimental results indicated the effectiveness of the proposed sP2 feature images when classified using the LeNet architecture.
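The three-stage pipeline (stratification, evidence collection, channel integration) can be sketched minimally as below. The grid size and distance bands are assumptions, and a normalized occupancy count stands in for the paper's Bayes-rule evidence update; the geometric mapping from 3D points to an (elevation, azimuth) pixel per distance band is the part the sketch illustrates.

```python
import numpy as np

def sp2_image(points, H=32, W=32,
              bands=((0.0, 1.0), (1.0, 2.0), (2.0, 4.0))):
    """Sketch of a spherically stratified point projection (sP2) image.

    Each neighboring point is mapped to an (elevation, azimuth) pixel and
    to one of three distance bands (short/medium/long), which become the
    R/G/B channels. A normalized occupancy count per band stands in for
    the Bayes-rule evidence value used in the paper.
    """
    img = np.zeros((H, W, 3), dtype=np.float32)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)                                     # [-pi, pi]
    el = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1))   # [-pi/2, pi/2]
    # Spherical coordinates -> integer pixel indices.
    u = ((az + np.pi) / (2 * np.pi) * (W - 1)).astype(int)
    v = ((el + np.pi / 2) / np.pi * (H - 1)).astype(int)
    # Stratify by distance band and accumulate occupancy per channel.
    for c, (lo, hi) in enumerate(bands):
        sel = (r >= lo) & (r < hi)
        np.add.at(img[:, :, c], (v[sel], u[sel]), 1.0)
    m = img.max()
    return img / m if m > 0 else img
```

The resulting H x W x 3 array can be fed directly to a small image classifier such as LeNet, which is the role the sP2 images play in the paper.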


2021 ◽  
pp. 121-128
Author(s):  
S. Kanimozhi ◽  
T. Mala ◽  
A. Kaviya ◽  
M. Pavithra ◽  
P. Vishali
