DcsNet: a real-time deep network for crack segmentation

Author(s):  
Jie Pang ◽  
Hua Zhang ◽  
Hao Zhao ◽  
Linjing Li
Author(s):  
Sheshang Degadwala ◽  
Utsho Chakraborty ◽  
Sowrav Saha ◽  
Haimanti Biswas ◽  
Dhairya Vyas

2021 ◽  
Vol 11 (22) ◽  
pp. 10540
Author(s):  
Navjot Rathour ◽  
Zeba Khanam ◽  
Anita Gehlot ◽  
Rajesh Singh ◽  
Mamoon Rashid ◽  
...  

There is significant interest in facial emotion recognition in the fields of human–computer interaction and the social sciences. With the advancements in artificial intelligence (AI), the field of human behavioral prediction and analysis, especially human emotion, has evolved significantly. The most common emotion recognition methods currently run as models deployed on remote servers. We believe that reducing the distance between the input device and the model can improve efficiency and effectiveness in real-life applications. For this purpose, computational methodologies such as edge computing can be beneficial, and they can also enable time-critical applications in sensitive fields. In this study, we propose a Raspberry-Pi-based standalone edge device that can detect facial emotions in real time. Although this edge device can be used in a variety of applications where human facial emotions play an important role, this article is mainly crafted using a dataset of employees working in organizations. The Raspberry-Pi-based standalone edge device has been implemented using the Mini-Xception deep network because of its computational efficiency compared to other networks. The device achieved 100% accuracy for detecting faces in real time and 68% accuracy for classifying emotions, i.e., higher than the accuracy reported in the state-of-the-art on the FER 2013 dataset. Future work will implement the deep network on a Raspberry Pi with an Intel Movidius Neural Compute Stick to reduce the processing time and achieve quick real-time implementation of the facial emotion recognition system.
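
As a rough illustration of the kind of pipeline the abstract describes (not the authors' code), the sketch below detects faces with an OpenCV Haar cascade and classifies each face with a Mini-Xception-style Keras model trained on FER 2013. The weights file name, the 48x48 grayscale input size, and the emotion label order are assumptions made for the example.

```python
# Minimal sketch of real-time facial emotion recognition on an edge device
# such as a Raspberry Pi: Haar-cascade face detection followed by a
# Mini-Xception-style classifier. Model file and input size are assumed.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
emotion_model = load_model("mini_xception_fer2013.h5")  # hypothetical weights file

cap = cv2.VideoCapture(0)  # Pi camera or USB webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        # Crop, resize, and normalize the face region for the classifier.
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = emotion_model.predict(face[np.newaxis, ..., np.newaxis], verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

On a headless device the display calls would be replaced by logging or a network sink; the detection/classification loop itself is unchanged.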


Author(s):  
Nikita Dvornik ◽  
Konstantin Shmelkov ◽  
Julien Mairal ◽  
Cordelia Schmid

2019 ◽  
Vol 2019 (15) ◽  
pp. 33-1-33-8 ◽  
Author(s):  
Raghav Nagpal ◽  
Chaitanya Krishna Paturu ◽  
Vijaya Ragavan ◽  
Navinprashath R R ◽  
Radhesh Bhat ◽  
...  

Author(s):  
Kai Xiong ◽  
Guanghui Zhao ◽  
Yingbin Wang ◽  
Guangming Shi ◽  
Shuxuan Chen

Author(s):  
Zhengeng Yang ◽  
Hongshan Yu ◽  
Qiang Fu ◽  
Wei Sun ◽  
Wenyan Jia ◽  
...  

Author(s):  
Zahi Al Chami ◽  
Chady Abou Jaoude ◽  
Richard Chbeir

In recent years, data providers have been generating and streaming large numbers of images. In particular, processing images that contain faces has received great attention due to its numerous applications, such as entertainment and social media apps. The enormous amount of images shared on these applications presents serious challenges and requires massive computing resources to ensure efficient data processing. However, in real application scenarios images are subject to a wide range of distortions during processing, transmission, sharing, or a combination of these factors. There is therefore a need to guarantee acceptable delivered content, even when the original versions of distorted images are not available. In this paper, we present a framework developed to estimate image quality while processing a large number of images in real time. Our quality evaluation integrates a deep network with random forests. In addition, a face alignment metric is used to assess the facial features. Experiments were conducted on two artificially distorted benchmark datasets, LIVE and TID2013. We show that our proposed approach outperforms state-of-the-art methods, achieving a Pearson Correlation Coefficient (PCC) and a Spearman Rank Order Correlation Coefficient (SROCC) with subjective human scores of almost 0.942 and 0.931, respectively, while reducing the processing time from 4.8 ms to 1.8 ms.
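
The sketch below illustrates, under stated assumptions, the general pattern of coupling a deep feature extractor with a random forest regressor for no-reference quality prediction and scoring it with PCC and SROCC against subjective ratings. The backbone (MobileNetV2), image size, and forest settings are assumptions, and the paper's face alignment metric is not reproduced here.

```python
# Illustrative sketch (not the paper's implementation): deep features feed a
# random forest that regresses subjective quality scores; evaluation uses
# Pearson and Spearman correlation against mean opinion scores (MOS).
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.ensemble import RandomForestRegressor
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

# Pretrained CNN used purely as a fixed feature extractor (global-average pooled).
backbone = MobileNetV2(include_top=False, pooling="avg", input_shape=(224, 224, 3))

def deep_features(images):
    """images: float array of shape (N, 224, 224, 3) with values in [0, 255]."""
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

def fit_quality_model(train_images, train_mos):
    """Fit a random forest on deep features to regress subjective quality scores."""
    forest = RandomForestRegressor(n_estimators=200, random_state=0)
    forest.fit(deep_features(train_images), train_mos)
    return forest

def evaluate(forest, test_images, test_mos):
    """Return (PCC, SROCC) between predicted and subjective scores."""
    pred = forest.predict(deep_features(test_images))
    pcc, _ = pearsonr(pred, test_mos)
    srocc, _ = spearmanr(pred, test_mos)
    return pcc, srocc
```

In practice the distorted images and their MOS labels would come from benchmarks such as LIVE or TID2013, with the usual train/test split by reference image to avoid content overlap.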



