Multimodal Sensors
Recently Published Documents


TOTAL DOCUMENTS

48
(FIVE YEARS 23)

H-INDEX

8
(FIVE YEARS 3)

ACS Nano ◽  
2022 ◽  
Author(s):  
Zhuo Wang ◽  
Zhirong Liu ◽  
Gengrui Zhao ◽  
Zichao Zhang ◽  
Xinyang Zhao ◽  
...  

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Kang Liu ◽  
Xin Gao

The use of multimodal sensors for lane line segmentation is a growing trend. To achieve robust multimodal fusion, we introduce a new multimodal fusion method and demonstrate its effectiveness in an improved fusion network. Specifically, a multiscale fusion module extracts effective features from data of different modalities, and a channel attention module adaptively calculates the contribution of each fused feature channel. We verify the effect of multimodal fusion on the KITTI benchmark and the A2D2 dataset, and demonstrate the effectiveness of the proposed method on the enhanced KITTI dataset. Our method achieves robust lane line segmentation, with precision 4.53% higher than direct fusion and the highest F2 score of 79.72%. We believe our method introduces a data-structure-level optimization perspective for multimodal fusion.
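The abstract does not give the module's internals, but channel attention of the kind described usually follows a squeeze-and-excitation pattern: pool each fused channel to a descriptor, gate it, and rescale the channel. A minimal stdlib-only sketch, with all names and the parameter-free sigmoid gate being illustrative assumptions (real modules learn the gate with small fully connected layers):

```python
import math

def channel_attention(feature_maps):
    """Hypothetical squeeze-and-excitation-style channel attention.

    feature_maps: list of channels, each a 2D list (H x W) of floats.
    Returns the channels rescaled by sigmoid weights derived from
    global average pooling (a stand-in for the paper's learned module).
    """
    # Squeeze: global average pooling per channel -> one descriptor each
    descriptors = []
    for ch in feature_maps:
        total = sum(sum(row) for row in ch)
        count = len(ch) * len(ch[0])
        descriptors.append(total / count)

    # Excitation: sigmoid gate per channel (learned FC layers in practice)
    weights = [1.0 / (1.0 + math.exp(-d)) for d in descriptors]

    # Scale: reweight every element of each channel by its attention weight
    return [[[v * w for v in row] for row in ch]
            for ch, w in zip(feature_maps, weights)]
```

A channel whose pooled descriptor is large receives a weight near 1 and passes through almost unchanged, while a weak channel is suppressed, which is how the fused modalities' contributions are adaptively balanced.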


2021 ◽  
Vol 11 (19) ◽  
pp. 8978
Author(s):  
Haiming Huang ◽  
Junhao Lin ◽  
Linyuan Wu ◽  
Zhenkun Wen ◽  
Mingjie Dong

This paper focuses on improving the operation ability of a soft robotic hand (SRH). A trigger-based dexterous operation (TDO) strategy with multimodal sensors is proposed to perform autonomous choice operations. The multimodal sensors include an optical fiber curvature sensor (OFCS), a gas pressure sensor (GPS), a capacitive pressure contact sensor (CPCS), and a resistive pressure contact sensor (RPCS). The OFCS embedded in the soft finger and the GPS connected in series in the gas channel detect the curvature of the finger; the CPCS attached to the fingertip and the RPCS attached to the palm detect the touch force. The TDO framework is divided into sensor detection and action operation: a hardware layer, an information acquisition layer, and a decision layer form the sensor detection module, while an action selection layer, an actuator drive layer, and a hardware layer constitute the action operation module. An autonomous choice decision unit connects the sensor detection module and the action operation module. Experimental results show that the TDO algorithm is effective and feasible: grasping a plastic framework, pinching a rollerball pen and a screwdriver, and handshaking are all executed reliably.
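The decision unit's internals are not given in the abstract, but its role (mapping curvature readings from OFCS/GPS and touch readings from CPCS/RPCS to one of the reported actions) can be sketched as a threshold-triggered rule table. Everything below — function name, thresholds, and the specific rules — is an invented illustration of the trigger-based idea, not the paper's implementation:

```python
def tdo_decide(ofcs_curvature, gps_pressure, cpcs_force, rpcs_force,
               curl_threshold=0.5, touch_threshold=0.2):
    """Hypothetical sketch of an autonomous choice decision unit.

    Inputs are normalized sensor readings in [0, 1]; thresholds are
    invented. Returns one of the actions reported in the paper.
    """
    # Curvature is confirmed by both the fiber sensor and the gas channel
    fingers_curled = (ofcs_curvature > curl_threshold
                      and gps_pressure > curl_threshold)
    fingertip_touch = cpcs_force > touch_threshold   # CPCS on the fingertip
    palm_touch = rpcs_force > touch_threshold        # RPCS on the palm

    if fingertip_touch and not palm_touch:
        return "pinch"       # small object held at the fingertip (pen, screwdriver)
    if palm_touch and fingers_curled:
        return "grasp"       # object enclosed by palm and curled fingers
    if palm_touch and not fingers_curled:
        return "handshake"   # palm contact with open fingers
    return "idle"            # no trigger fired
```

The point of the sketch is the trigger structure: each sensor modality contributes an independent boolean trigger, and the combination selects the action, so adding a sensor only adds a trigger rather than reworking the whole decision path.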


Author(s):  
Qiushi Li ◽  
Tongyu Wu ◽  
Wei Zhao ◽  
Jiawen Ji ◽  
Gong Wang

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Asad Ullah ◽  
Jing Wang ◽  
M. Shahid Anwar ◽  
Taeg Keun Whangbo ◽  
Yaping Zhu

Interest in facial expression recognition (FER) is growing steadily owing to its practical and potential applications, such as diagnosing human physiological interaction and detecting mental diseases. The area has received much attention from the research community in recent years and has achieved remarkable results; however, significant improvement is still required on spatial problems. This work presents a novel framework and proposes an effective and robust solution for FER in unconstrained environments. Face detection is performed under the supervision of facial attributes: Faceness-Net uses deep facial part responses to detect faces under severe unconstrained variations. To improve generalization and avoid the insufficient-data regime, a Deep Convolutional Generative Adversarial Network (DC-GAN) is utilized. Because of the challenging environmental factors encountered in the wild, heavy noise disrupts feature extraction, making it hard to capture the ground truth. We therefore pair the camera with multimodal sensors that aid data acquisition, extracting features more accurately and improving overall FER performance. These intelligent sensors tackle significant challenges such as illumination variance, subject dependence, and head pose. A dual-enhanced capsule network handles the spatial problem: traditional capsule networks cannot sufficiently extract features when the distance between facial features varies greatly, whereas the proposed network performs spatial transformation through an action-unit-aware mechanism and thus forwards the most relevant features for dynamic routing between capsules. A squashing function is used for classification. We demonstrate the effectiveness of our method by validating the results on four popular and versatile databases, on which it outperforms all state-of-the-art methods.
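The squashing function mentioned at the end is the standard capsule-network nonlinearity (Sabour et al.'s formulation), which shrinks a capsule's output vector to length in (0, 1) while preserving its direction, so vector length can act as a class probability. A minimal sketch of that standard formula, not the paper's exact code:

```python
import math

def squash(vector, eps=1e-9):
    """Capsule-network squashing nonlinearity:

        squash(v) = (|v|^2 / (1 + |v|^2)) * v / |v|

    Long vectors map to length ~1, short vectors to length ~0;
    direction is unchanged. eps guards against division by zero.
    """
    norm_sq = sum(x * x for x in vector)     # |v|^2
    norm = math.sqrt(norm_sq)                # |v|
    scale = norm_sq / (1.0 + norm_sq) / (norm + eps)
    return [scale * x for x in vector]
```

For example, a capsule output of length 5 is squashed to length 25/26 ≈ 0.96, which is why the longest capsule vector can be read directly as the predicted expression class.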


2021 ◽  
pp. 89-98
Author(s):  
Aurora Polo-Rodriguez ◽  
Federico Cruciani ◽  
Chris Nugent ◽  
Javier Medina-Quero