Social Density Monitoring Toward Selective Cleaning by Human Support Robot With 3D Based Perception System

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 41407-41416 ◽  
Author(s):  
Anh Vu Le ◽  
Balakrishnan Ramalingam ◽  
Braulio Felix Gomez ◽  
Rajesh Elara Mohan ◽  
Tran Hoang Quang Minh ◽  
...  
2018 ◽  
Vol 4 (2) ◽  
pp. 155-184
Author(s):  
Katherine M. O'Lone ◽  
Ryan T. McKay

Author(s):  
Mohammed R. Elkobaisi ◽  
Fadi Al Machot

Abstract: The use of IoT-based Emotion Recognition (ER) systems is in increasing demand in many domains such as active and assisted living (AAL), health care and industry. Combining emotion and context in a unified system could broaden the scope of human support, but this is currently challenging due to the lack of a common interface capable of providing such a combination. We therefore propose a novel approach based on a modeling language that can be used even by caregivers or non-experts to model human emotion with respect to context for human support services. The proposed approach is based on a Domain-Specific Modeling Language (DSML), which helps to integrate different IoT data sources in an AAL environment. Consequently, it provides a conceptual support level related to the current emotional state of the observed subject. For the evaluation, we apply the well-validated System Usability Scale (SUS) to show that the proposed modeling language performs well in terms of usability and learnability. Furthermore, we evaluate the runtime performance of model instantiation by measuring execution time with well-known IoT services.
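The abstract does not give the DSML's concrete syntax, so the following Python sketch is only an illustration of the underlying idea: a caregiver-facing model instance that couples emotional states with IoT context sources. All class and field names (EmotionModel, ContextSource, the MQTT endpoint) are hypothetical, not taken from the paper.

from dataclasses import dataclass, field
from typing import List

# Hypothetical model elements illustrating "emotion with respect to
# context"; the paper's actual DSML syntax is not shown in the abstract.

@dataclass
class ContextSource:
    name: str          # e.g. an AAL sensor feed
    endpoint: str      # where the IoT service publishes its data
    kind: str          # "location", "activity", "vital-sign", ...

@dataclass
class EmotionModel:
    subject: str
    emotions: List[str]                        # states the model can report
    contexts: List[ContextSource] = field(default_factory=list)

    def bind(self, source: ContextSource) -> None:
        """Attach an IoT context source to the emotion model."""
        self.contexts.append(source)

# A caregiver-style model instance: track a resident's emotional state
# together with room-level activity context.
model = EmotionModel(subject="resident-01",
                     emotions=["calm", "stressed", "sad"])
model.bind(ContextSource("living-room-motion",
                         "mqtt://home/livingroom/motion", "activity"))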


2021 ◽  
Vol 10 (3) ◽  
pp. 42
Author(s):  
Mohammed Al-Nuaimi ◽  
Sapto Wibowo ◽  
Hongyang Qu ◽  
Jonathan Aitken ◽  
Sandor Veres

The evolution of driving technology has recently progressed from active safety features and ADAS systems to fully sensor-guided autonomous driving. Bringing such a vehicle to market requires not only simulation and testing but formal verification to account for all possible traffic scenarios. A new verification approach, which combines two well-known model checkers, the model checker for multi-agent systems (MCMAS) and the probabilistic model checker PRISM, is presented for this purpose. The overall structure of our autonomous vehicle (AV) system consists of: (1) a perception system of sensors that feeds data into (2) a rational agent (RA) based on a belief–desire–intention (BDI) architecture, which uses a model of the environment to verify its decision-making, and (3) feedback control systems for following a self-planned path. MCMAS is used to check the consistency and stability of the BDI agent logic at design time. PRISM is used to provide the RA with the probability of success as it decides to take an action during run-time operation, allowing the RA to select, from several generated alternatives, the movement with the highest probability of success. This framework has been tested on a new AV software platform built using the robot operating system (ROS) and the virtual reality (VR) Gazebo simulator, including a parking lot scenario that tests the feasibility of the approach in a realistic environment. A practical implementation of the AV system was also carried out on an experimental testbed.
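As a rough illustration of the run-time selection step, the sketch below picks, from a set of candidate manoeuvres, the one whose verified probability of success is highest. The query_success_probability stub merely stands in for an actual PRISM query (e.g. evaluating a PCTL property such as P=? [ F "goal_reached" ] on a manoeuvre model); the interface and the numbers are assumptions, not the paper's API.

from typing import Callable, Dict, List

# Stand-in for a PRISM probability query on a manoeuvre model;
# hypothetical interface with illustrative numbers only.
def query_success_probability(manoeuvre: str) -> float:
    table: Dict[str, float] = {
        "overtake": 0.72,
        "follow": 0.95,
        "pull_into_bay": 0.88,
    }
    return table[manoeuvre]

def select_action(candidates: List[str],
                  prob: Callable[[str], float]) -> str:
    """Rational-agent step: choose the alternative whose verified
    probability of success is highest."""
    return max(candidates, key=prob)

if __name__ == "__main__":
    chosen = select_action(["overtake", "follow", "pull_into_bay"],
                           query_success_probability)
    print(chosen)  # -> "follow"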


2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Guangyi Yang ◽  
Xingyu Ding ◽  
Tian Huang ◽  
Kun Cheng ◽  
Weizheng Jin

Abstract: The communications industry has changed remarkably with the development of fifth-generation cellular networks. Images, as an indispensable component of communication, have attracted wide attention, so finding a suitable approach to assessing image quality is important. We therefore propose a deep learning model for image quality assessment (IQA) based on an explicit-implicit dual-stream network. We use frequency-domain kurtosis features based on the wavelet transform to represent explicit features, and spatial features extracted by a convolutional neural network (CNN) to represent implicit features. From these we construct an explicit-implicit (EI) parallel deep learning model, namely the EI-IQA model. The EI-IQA model is based on VGGNet, which extracts the spatial-domain features; adding the parallel wavelet-kurtosis frequency-domain features allows the number of VGGNet layers to be reduced, so the training parameters and sample requirements decline. We verified, by cross-validation across different databases, that this wavelet-kurtosis feature-fusion method extracts features more completely and generalises better. The method thus better simulates the human visual perception system, and its predictions align more closely with subjective human judgments. The source code for the proposed EI-IQA model is available on GitHub at https://github.com/jacob6/EI-IQA.
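The abstract names wavelet-based kurtosis as the explicit feature stream. Below is a minimal sketch of how such features can be computed, assuming PyWavelets and SciPy; the wavelet choice, decomposition level, and function name are my assumptions, not taken from the released code.

import numpy as np
import pywt
from scipy.stats import kurtosis

def wavelet_kurtosis_features(image: np.ndarray, wavelet: str = "db2",
                              level: int = 3) -> np.ndarray:
    """Explicit feature stream: kurtosis of each wavelet subband.

    Decomposes a grayscale image and returns one kurtosis value per
    subband (the approximation plus the H/V/D details at every level).
    """
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = [kurtosis(coeffs[0], axis=None)]        # approximation band
    for cH, cV, cD in coeffs[1:]:                   # detail bands
        feats.extend(kurtosis(band, axis=None) for band in (cH, cV, cD))
    return np.asarray(feats)

# Example: a 3-level decomposition yields 1 + 3*3 = 10 features.
img = np.random.rand(128, 128)
print(wavelet_kurtosis_features(img).shape)  # (10,)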


Author(s):  
Hadas Erel ◽  
Denis Trayman ◽  
Chen Levy ◽  
Adi Manor ◽  
Mario Mikulincer ◽  
...  

2009 ◽  
Vol 06 (03) ◽  
pp. 435-457 ◽  
Author(s):  
Philipp Michel ◽  
Joel Chestnutt ◽  
Satoshi Kagami ◽  
Koichi Nishiwaki ◽  
James J. Kuffner ◽  
...  

We present an approach to motion planning for humanoid robots that aims to ensure reliable execution by augmenting the planning process to reason about the robot's ability to successfully perceive its environment during operation. By efficiently simulating the robot's perception system during search, our planner utilizes a perceptive capability metric that quantifies the 'sensability' of the environment in each state given the task to be accomplished. We have applied our method to the problem of planning robust autonomous grasping motions and walking sequences as performed by an HRP-2 humanoid. A fast GPU-accelerated 3D tracker is used for perception, with a grasp planner and footstep planner incorporating reasoning about the robot's perceptive capability. Experimental results show that considering information about the predicted perceptive capability ensures that sensing remains operational throughout the grasping or walking sequence and yields higher task success rates than perception-unaware planning.
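The perceptive capability metric itself is not spelled out in this abstract; the sketch below only illustrates, with hypothetical names and weights, how a search-based planner can fold such a metric into its state evaluation so that low-'sensability' states are avoided during grasping or footstep search.

import heapq
from typing import Callable, Dict, List, Optional, Tuple

# Hypothetical perception-aware best-first search: state scores blend
# path cost with a penalty for poor predicted perceptive capability.
def plan(start: str,
         goal: str,
         neighbors: Callable[[str], List[Tuple[str, float]]],
         sensability: Callable[[str], float],   # 0 = blind, 1 = full view
         w_perception: float = 5.0) -> Optional[List[str]]:
    frontier = [(0.0, start, [start])]
    best: Dict[str, float] = {start: 0.0}
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt, step_cost in neighbors(state):
            # Penalise states the tracker is predicted to see poorly.
            new_cost = cost + step_cost + w_perception * (1.0 - sensability(nxt))
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None  # no perception-safe path found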

