Design of a 3-DOF Parallel Hand-Controller

2017 ◽  
Vol 2017 ◽  
pp. 1-12
Author(s):  
Chengcheng Zhu ◽  
Aiguo Song

Hand-controllers, as human-machine interface (HMI) devices, transfer the position of the operator's hands into a virtual environment to control target objects, or control a real robot directly. At the same time, haptic information from the virtual environment, or from sensors on the real robot, can be displayed to the operator, so that feedback forces help the operator perceive haptic information more faithfully. A parallel hand-controller is designed in this paper. It is simplified from the traditional delta haptic device: the swing arms of conventional delta devices are replaced with linear slider-rail modules, and the base consists of two hexagons and several links. Because linear sliding modules replace the swing arms, arc motion becomes linear motion, which simplifies the kinematics and reduces the computational cost of the forward position solution and the inverse force solution. The kinematics, statics, and dynamics are analyzed in this paper. Furthermore, two demonstration applications are developed to verify the performance of the designed hand-controller.
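
For a linear-delta mechanism like the one described, the forward position solution amounts to intersecting three spheres centred at the slider positions. A minimal sketch in Python, assuming a hypothetical geometry (three vertical rails 120° apart on a circle of radius R, links of length L to the platform; all dimensions are illustrative, not taken from the paper):

```python
import numpy as np

# Assumed geometry: rail circle radius and link length in metres.
R, L = 0.10, 0.25

def slider_centers(h):
    """Sphere centres for slider heights h = (h1, h2, h3)."""
    ang = np.deg2rad([90.0, 210.0, 330.0])
    return np.stack([R * np.cos(ang), R * np.sin(ang), np.asarray(h, float)], axis=1)

def forward_kinematics(h):
    """Platform position as the intersection of three spheres of radius L
    centred at the sliders, found by Newton iteration."""
    c = slider_centers(h)
    p = np.array([0.0, 0.0, float(np.mean(h)) - L])  # start below the sliders
    for _ in range(50):
        d = p - c                        # vectors from sphere centres to platform
        f = np.sum(d * d, axis=1) - L * L  # sphere-equation residuals
        if np.max(np.abs(f)) < 1e-12:
            break
        J = 2.0 * d                      # Jacobian of the residuals w.r.t. p
        p = p - np.linalg.solve(J, f)
    return p
```

With equal slider heights the platform sits on the rail axis by symmetry, which gives a convenient sanity check against the closed-form value z = h − sqrt(L² − R²).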

2021 ◽  
Vol 9 (1) ◽  
pp. 32-41
Author(s):  
Alexey Sergeev ◽  
Victor Titov ◽  
Igor Shardyko

This article discusses the control of a robotic arm for a hot cell based on an induced-virtual-reality methodology. A human-machine interface based on virtual reality is presented, comprising a set of interactive features designed to construct trajectories along which the end effector of the arm should move. The prospects of computer vision are then considered as a means of updating the state of the virtual environment. An experiment was carried out to compare two approaches for controlling the robotic arm in the virtual environment.
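
A trajectory built from interactively placed waypoints is typically resampled before being sent to the arm controller. A minimal sketch of such resampling (a hypothetical helper; the paper does not specify its interface), assuming piecewise-linear 3-D waypoints and a fixed spacing:

```python
import math

def interpolate_waypoints(waypoints, step):
    """Resample a piecewise-linear 3-D trajectory at roughly fixed spacing.
    waypoints: list of (x, y, z) tuples placed by the operator; step: metres."""
    out = [waypoints[0]]
    for a, b in zip(waypoints, waypoints[1:]):
        d = math.dist(a, b)
        n = max(1, math.ceil(d / step))   # subdivisions for this segment
        for i in range(1, n + 1):
            t = i / n
            out.append(tuple(a[j] + t * (b[j] - a[j]) for j in range(3)))
    return out
```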


2019 ◽  
Vol 9 (11) ◽  
pp. 2243
Author(s):  
Gianluca Giuffrida ◽  
Gabriele Meoni ◽  
Luca Fanucci

In recent years, the mobility of people with upper-limb disabilities who depend on power wheelchairs has been enhanced by robotic arms. Although modern manipulators offer many functionalities, some users cannot exploit this potential because of reduced manual skills, even when they are capable of driving the wheelchair through a suitable Human-Machine Interface (HMI). This work therefore proposes a low-cost manipulator that performs only simple tasks and is controllable through three different graphical HMIs. The HMIs are supported by a You Only Look Once (YOLO) v2 convolutional neural network that analyzes the video stream from a camera placed on the robotic arm's end effector and recognizes the objects with which the user can interact. Each recognized object is shown to the user in the HMI surrounded by a bounding box. When the user selects one of the recognized objects, the target position information is exploited by an automatic closed-loop feedback algorithm that leads the manipulator to perform the desired task autonomously. A test procedure showed that the accuracy in reaching the desired target is 78%. The HMIs were appreciated by different user categories, obtaining a mean score of 8.13/10.
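
The selection step described above, going from YOLO-style detections to a target for the feedback loop, can be sketched as follows. The detection schema (label/confidence/box dictionaries) is a hypothetical stand-in, since the paper's data format is not given:

```python
def pick_target(detections, selected_label, min_conf=0.5):
    """From YOLO-style detections [{'label', 'conf', 'box': (x, y, w, h)}],
    return the pixel centre of the most confident box matching the user's
    selection, or None if nothing qualifies."""
    matches = [d for d in detections
               if d['label'] == selected_label and d['conf'] >= min_conf]
    if not matches:
        return None
    best = max(matches, key=lambda d: d['conf'])
    x, y, w, h = best['box']
    return (x + w / 2.0, y + h / 2.0)  # target point for the feedback loop
```

In a closed-loop scheme, this centre point would be recomputed every frame and the arm driven to reduce its offset from the image centre.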


1990 ◽  
Author(s):  
B. Bly ◽  
P. J. Price ◽  
S. Park ◽  
S. Tepper ◽  
E. Jackson ◽  
...  

Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 687
Author(s):  
Jinzhen Dou ◽  
Shanguang Chen ◽  
Zhi Tang ◽  
Chang Xu ◽  
Chengqi Xue

With the development and promotion of driverless technology, researchers are focusing on designing various types of external interfaces to induce road users' trust in this new technology. In this paper, we investigated the effectiveness of a multimodal external human-machine interface (eHMI) for driverless vehicles in a virtual environment, focusing on a two-way road scenario. Three phases of the interaction between driverless vehicles and pedestrians were considered: identifying, decelerating, and parking. Twelve eHMIs are proposed, combining three visual features (smile, arrow, and none), three audible features (human voice, warning sound, and none), and two physical features (yielding and not yielding). We conducted a study to obtain a more efficient and safer eHMI for driverless vehicles interacting with pedestrians. Based on the study outcomes, in the case of yielding, the interaction efficiency and pedestrian safety of the multimodal eHMI designs were satisfactory compared with the single-modal system. The visual modality of the eHMI has the greatest impact on pedestrian safety. In addition, the "arrow" was more intuitive to identify than the "smile" among the visual features.


Author(s):  
Saverio Trotta ◽  
Dave Weber ◽  
Reinhard W. Jungmaier ◽  
Ashutosh Baheti ◽  
Jaime Lien ◽  
...  

Procedia CIRP ◽  
2021 ◽  
Vol 100 ◽  
pp. 488-493
Author(s):  
Florian Beuss ◽  
Frederik Schmatz ◽  
Marten Stepputat ◽  
Fabian Nokodian ◽  
Wilko Fluegge ◽  
...  

Nanoscale ◽  
2021 ◽  
Author(s):  
Qiufan Wang ◽  
Jiaheng Liu ◽  
Guofu Tian ◽  
Daohong Zhang

The rapid development of human-machine interfaces and artificial intelligence depends on flexible and wearable soft devices such as sensors and energy storage systems. One of the key factors for...

